Artificial Intelligence - Who Is Erik Brynjolfsson?

 



Erik Brynjolfsson (1962–) directs the Massachusetts Institute of Technology's Initiative on the Digital Economy.

He is also a Research Associate at the National Bureau of Economic Research (NBER) and the Schussel Family Professor at the MIT Sloan School of Management.

Brynjolfsson's research and writing focus on the relationships among information technology, productivity, labor, and innovation.

Brynjolfsson's work has long been at the center of debates about how technology affects economic relationships.

His early research focused on the link between information technology and productivity, particularly the "productivity paradox." Brynjolfsson found "large negative associations between economywide productivity and information worker productivity" (Brynjolfsson 1993, 67).

He proposed that the paradox may be explained by mismeasurement of effects, a lag between initial costs and final benefits, private benefits accruing at the expense of collective benefits, or outright mismanagement.

However, multiple empirical studies by Brynjolfsson and associates demonstrate that investing in information technology has increased productivity significantly—at least since 1991.

Information technology, notably electronic communication networks, enhances multitasking, according to Brynjolfsson.

Multitasking, in turn, boosts productivity, knowledge network growth, and worker performance.

More than a simple causal connection, the relationship between IT and productivity constitutes a "virtuous cycle": as performance improves, users are motivated to embrace knowledge networks that boost productivity and operational performance.

In the era of artificial intelligence, the productivity paradox has resurfaced as a topic of discussion.

The digital economy faces a new set of difficulties as the battle between human and artificial labor heats up.

Brynjolfsson discusses the phenomenon of frictionless commerce, a trait brought about by internet capabilities such as rapid price comparison by smart shopbots.

Retailers like Amazon have redesigned their supply chains and distribution tactics to reflect how online marketplaces function in the age of AI.

This restructuring of internet commerce has changed the way we think about efficiency.

In the brick-and-mortar economy, price and quality comparisons must be carried out by human shoppers visiting stores in person.

This procedure may be time-consuming and expensive.

Consumers (and web-scraping bots) can now move effortlessly from one website to another, driving the cost of obtaining many types of internet information toward zero.

Brynjolfsson and coauthor Andrew McAfee discuss the impact of technology on employment, the economy, and productivity development in their best-selling book Race Against the Machine (2011).

They're particularly interested in the process of creative destruction, which economist Joseph Schumpeter popularized in his book Capitalism, Socialism, and Democracy (1942).

While technology is a beneficial asset for the economy as a whole, Brynjolfsson and McAfee illustrate that it does not always benefit everyone in society.

In reality, the advantages of technical advancements may be uneven, benefiting small groups of innovators and investors who control digital marketplaces.

The key conclusion reached by Brynjolfsson and McAfee is that humans should collaborate with machines rather than compete with them.

When people learn skills to participate in the new age of smart machines, innovation and human capital improve.

Brynjolfsson and McAfee expanded on this topic in The Second Machine Age (2014), evaluating the significance of data in the digital economy and the growing prominence of artificial intelligence.

Data-driven intelligent devices, according to the authors, are a key component of online business.

Artificial intelligence brings us a world of new possibilities in terms of services and features.

They suggest that these changes have an impact on productivity indices as well as our understanding of what it means to participate in capitalist business.

Brynjolfsson and McAfee have much to say about the disruptive effects of a widening gap between internet billionaires and ordinary people.

The authors are particularly concerned about the effects of artificial intelligence and smart robots on employment.

Brynjolfsson and McAfee reaffirm in The Second Machine Age that the goal should not be a race against technology but purposeful cohabitation with it, in order to build a better global economy and society.

Brynjolfsson and McAfee argue in Machine, Platform, Crowd (2017) that the human mind will have to learn to cohabit with clever computers in the future.

The big difficulty is figuring out how society will utilize technology and how to nurture the beneficial features of data-driven innovation and artificial intelligence while weeding out the undesirable aspects.

Brynjolfsson and McAfee envision a future in which labor is not simply displaced by efficient machines and the disruptive effects of platforms. Instead, new matchmaking businesses coordinate intricate economic structures and large, enthusiastic online crowds, while vast pools of human knowledge and expertise strengthen supply chains and economic processes.

Machines, platforms, and the crowd, according to Brynjolfsson and McAfee, may be employed in a variety of ways, either to concentrate power or to disperse decision-making and wealth.

They come to the conclusion that individuals do not have to be passively reliant on previous technological trends; instead, they may modify technology to make it more productive and socially good.

Brynjolfsson's current research interests include productivity, inequality, labor, and welfare, and he continues to work on artificial intelligence and the digital economy.

He graduated from Harvard University with degrees in Applied Mathematics and Decision Sciences.

In 1991, he received his doctorate in Managerial Economics from the MIT Sloan School.

His dissertation was titled "Information Technology and the Reorganization of Work: Theory and Evidence."


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Ford, Martin; Workplace Automation.



Further Reading

Aral, Sinan, Erik Brynjolfsson, and Marshall Van Alstyne. 2012. “Information, Technology, and Information Worker Productivity.” Information Systems Research 23, no. 3, pt. 2 (September): 849–67.

Brynjolfsson, Erik. 1993. “The Productivity Paradox of Information Technology.” Communications of the ACM 36, no. 12 (December): 67–77.

Brynjolfsson, Erik, Yu Hu, and Duncan Simester. 2011. “Goodbye Pareto Principle, Hello Long Tail: The Effect of Search Costs on the Concentration of Product Sales.” Management Science 57, no. 8 (August): 1373–86.

Brynjolfsson, Erik, and Andrew McAfee. 2012. Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Lexington, MA: Digital Frontier Press.

Brynjolfsson, Erik, and Andrew McAfee. 2016. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton.

Brynjolfsson, Erik, and Adam Saunders. 2013. Wired for Innovation: How Information Technology Is Reshaping the Economy. Cambridge, MA: MIT Press.

McAfee, Andrew, and Erik Brynjolfsson. 2017. Machine, Platform, Crowd: Harnessing Our Digital Future. New York: W. W. Norton.


Artificial Intelligence - How Has The Blade Runner (1982) Film Envisioned AI Androids?

 



Do Androids Dream of Electric Sheep? by Philip K. Dick was first published in 1968 and is set in post-industrial San Francisco in the year 2020.

In 1982, the book was adapted for film under the title Blade Runner, set in Los Angeles in the year 2019.

While the texts vary significantly, both recount the narrative of bounty hunter Rick Deckard, who is entrusted with locating (and executing) escaping replicants/androids (six in the novel, four in the film).

Both works are set in a future in which cities have grown overcrowded and polluted.

Natural nonhuman life has virtually vanished (due to radiation sickness) and been replaced by synthetic and artificial life.

Natural life has become a valued commodity in the future.

Replicants are meant to perform a variety of industrial functions in this environment, most notably as labor for off-world colonies.

The replicants are an exploited race that was created to serve human masters.

When they are no longer useful, they are discarded, and when they struggle against their circumstances, they are retired.

Blade runners are specialist law enforcement operatives tasked with apprehending and killing renegade replicants.

Rick Deckard, a former blade runner, comes out of retirement to track down the sophisticated Nexus-6 replicant models.

These replicants have escaped to Earth after rebelling against the slave-like conditions on Mars.

In both texts, the treatment of artificial intelligence serves as an implicit critique of capitalism.

The Rosen Association in the book and the Tyrell Corporation in the film develop replicants to create a more docile labor force, implying that capitalism converts people into robots.

Eldon Rosen (who is called Tyrell in the film) emphasizes these obnoxious commercial imperatives: "We provided what the colonists wanted. . . . Every commercial venture is founded on a time-honored principle. Other corporations would have developed these progressively more human types if our company hadn't."

There are two types of replicants in the movie: those who are designed to be unaware that they are androids and are filled with implanted memories (like Rachael Tyrell), and those who are aware that they are androids and live by that knowledge (the Nexus-6 fugitives).

Rachael in the film is a new Nexus-7 model implanted with the memories of Eldon Tyrell's niece, Lilith. Deckard is sent to kill her but instead falls in love with her, and the two leave the city together at the film's conclusion.

Rachael's character is handled differently in the book.

Deckard makes an effort to recruit Rachael's assistance in locating the runaway androids. Rachael agrees to meet Deckard in a hotel room in the hopes of persuading him to drop the case.

Rachael explains during their encounter that one of the runaway androids (Pris Stratton) is a carbon copy of her (making Rachael a Nexus-6 model in the novel).

Deckard and Rachael sleep together and profess their love for each other.

However, Rachael turns out to have slept with other blade runners as well.

She is designed to do just that in order to keep them from fulfilling their tasks.

Deckard threatens to murder Rachael but decides to leave the hotel rather than carry out his threat.

In both the novel and the film, the replicants are undetectable.

Even under a microscope, they seem to be totally human.

The Voigt-Kampff test, which separates humans from androids based on emotional reactions to a variety of questions, is the sole method to identify them.

The exam is conducted with the use of a machine that monitors blush reaction, heart rate, and eye movement in response to empathy-related questions.

Whether Deckard himself is human or a replicant is left uncertain.

Rachael even asks whether he has taken the Voigt-Kampff test himself.

In the movie, Deckard's position is unclear.

Although the audience is left to decide for themselves, director Ridley Scott has hinted that Deckard is a replicant.

At the conclusion of the book, Deckard takes and passes the test, although he begins to doubt the value of blade running.

More than the movie, the book explores what it means to be human in the face of technological advancements.

The book depicts the fragility of the human experience and how it can be easily harmed by the technology that is supposed to help it.

Individuals with Penfield mood organs, for example, can use them to control their emotions.

All that is required is for a person to look up an emotion in a manual, dial the appropriate number, and then experience whatever emotion they desire.

The device's use and the generation of artificial feelings suggest that people may become robotic, as Deckard's wife Iran points out: "My first response was to express gratitude for the fact that we could afford a Penfield mood organ. But then I understood how harmful it was to sense the lack of vitality everywhere, not only in this building - do you know what I mean? I'm assuming you don't. That, however, was formerly thought to be an indication of mental disease, referred to as 'lack of proper emotion.'"

Dick's argument is that the mood organ keeps humans from feeling the appropriate emotional dimensions of life, which is precisely what the Voigt-Kampff test reveals replicants are incapable of.

Philip K. Dick was known for his hazy and often gloomy vision of artificial intelligence.

His androids and robots are distinctly ambiguous.

They desire to be like humans, yet they lack empathy and emotions.

This uncertainty pervades Do Androids Dream of Electric Sheep? and carries over onscreen in Blade Runner.


~ Jai Krishna Ponnappan




See also: 

Nonhuman Rights and Personhood; Pathetic Fallacy; Turing Test.


Further Reading


Brammer, Rebekah. 2018. “Welcome to the Machine: Artificial Intelligence on Screen.” Screen Education 90 (September): 38–45.

Fitting, Peter. 1987. “Futurecop: The Neutralization of Revolt in Blade Runner.” Science Fiction Studies 14, no. 3: 340–54.

Sammon, Paul S. 2017. Future Noir: The Making of Blade Runner. New York: Dey Street Books.

Wheale, Nigel. 1991. “Recognising a ‘Human-Thing’: Cyborgs, Robots, and Replicants in Philip K. Dick’s Do Androids Dream of Electric Sheep? and Ridley Scott’s Blade Runner.” Critical Survey 3, no. 3: 297–304.




Artificial Intelligence - Who Is Tanya Berger-Wolf? What Is The AI For Wildlife Conservation Software Non-profit, 'Wild Me'?

 


Tanya Berger-Wolf (1972–) is a professor in the Department of Computer Science at the University of Illinois at Chicago (UIC).

Her contributions to computational ecology and biology, data science and network analysis, and artificial intelligence for social benefit have earned her acclaim.

She is a pioneer in the field of computational population biology, which employs artificial intelligence algorithms, computational methods, social science research, and data collection to answer questions about plants, animals, and people.

Berger-Wolf teaches multidisciplinary field courses with engineering students from UIC and biology students from Princeton University at the Mpala Research Centre in Kenya.

She works in Africa because of its vast genetic variety and endangered species, which are markers of the health of life on the planet as a whole.

Her group is interested in learning more about the effects of the environment on social animal behavior, as well as what puts a species at risk.

She is cofounder and director of Wildbook, a nonprofit that develops animal conservation software.

Berger-Wolf's work for Wildbook included a crowdsourced project to photograph as many Grevy's zebras as possible in order to complete a full census of the endangered animals.

After analyzing the photographs with artificial intelligence systems, the group can identify each individual Grevy's zebra by its distinctive stripe pattern, which acts as a natural bar code or fingerprint.

Using convolutional neural networks and matching algorithms, the Wildbook program recognizes animals from hundreds of thousands of images.
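Wildbook's actual pipeline is not reproduced here, but the identification step it describes can be sketched in miniature: suppose a convolutional network has already reduced each photograph to a numeric embedding, so that identifying an animal amounts to a nearest-neighbor search over the enrolled population. The function names, random stand-in embeddings, and similarity threshold below are illustrative assumptions, not Wildbook's API.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_vec, enrolled, threshold=0.8):
    """1:N identification: return the best-matching enrolled ID,
    or None if no enrolled animal is similar enough (a new individual)."""
    best_id, best_score = None, -1.0
    for animal_id, vec in enrolled.items():
        score = cosine_similarity(query_vec, vec)
        if score > best_score:
            best_id, best_score = animal_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy enrolled database: stripe-pattern embeddings (in practice produced
# by a convolutional network; random vectors stand in for them here).
rng = np.random.default_rng(0)
enrolled = {f"zebra_{i}": rng.normal(size=128) for i in range(5)}

# A new sighting of zebra_3: its embedding plus a little sensor noise.
query = enrolled["zebra_3"] + rng.normal(scale=0.05, size=128)
match, score = identify(query, enrolled)
print(match)
```

A real system would use embeddings learned from labeled sightings and a tuned threshold, but the structure of the search is the same.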

The census data is utilized to focus and invest resources in the zebras' preservation and survival.

The Wildbook deep learning software may be used to identify individual members of any striped, spotted, notched, or wrinkled species.

Giraffe Spotter is Wildbook software for giraffe populations.

Wildbook's website, which contains gallery photographs from handheld cameras and camera traps, crowdsources citizen-scientist accounts of giraffe encounters.

An intelligent agent extracts still images of tail flukes from uploaded YouTube videos for Wildbook's individual whale shark catalog.

The whale shark census revealed data that persuaded the International Union for Conservation of Nature to alter the status of the creatures from “vulnerable” to “endangered” on the IUCN Red List of Threatened Species.

The software is also being used by Wildbook to examine videos of hawksbill and green sea turtles.

Berger-Wolf also serves as the director of technology for the conservation organization Wild Me.

Machine vision artificial intelligence systems are used by the charity to recognize individual animals in the wild.

Wild Me keeps track of animals' whereabouts, migration patterns, and social groups.

The goal is to gain a comprehensive understanding of global diversity so that conservation policy can be informed.

Microsoft's AI for Earth initiative has partnered with Wild Me.

Berger-Wolf was born in Vilnius, Lithuania, in 1972.

She went to high school in St. Petersburg, Russia, and graduated from Hebrew University in Jerusalem with a bachelor's degree.

She received her doctorate from the Department of Computer Science at the University of Illinois at Urbana-Champaign and did postdoctoral work at the University of New Mexico and Rutgers University.

She has received the National Science Foundation CAREER Award, the Association for Women in Science Chicago Innovator Award, and the University of Illinois at Chicago Mentor of the Year Award.


~ Jai Krishna Ponnappan



See also: 

Deep Learning.


Further Reading


Berger-Wolf, Tanya Y., Daniel I. Rubenstein, Charles V. Stewart, Jason A. Holmberg, Jason Parham, and Sreejith Menon. 2017. “Wildbook: Crowdsourcing, Computer Vision, and Data Science for Conservation.” Chicago, IL: Bloomberg Data for Good Exchange Conference. https://arxiv.org/pdf/1710.08880.pdf.

Casselman, Anne. 2018. “How Artificial Intelligence Is Changing Wildlife Research.” National Geographic, November. https://www.nationalgeographic.com/animals/2018/11/artificial-intelligence-counts-wild-animals/.

Snow, Jackie. 2018. “The World’s Animals Are Getting Their Very Own Facebook.” Fast Company, June 22, 2018. https://www.fastcompany.com/40585495/the-worlds-animals-are-getting-their-very-own-facebook.



Artificial Intelligence - What Is Biometric Technology?

 


A biometric is a measurement of a human attribute.

It might be physiological, like a fingerprint or facial features, or behavioral, like keystroke dynamics or walking stride length.

Biometric characteristics are defined by the White House National Science and Technology Council's Subcommittee on Biometrics as "measurable biological (anatomical and physiological) and behavioral traits that may be employed for automated recognition" (White House, National Science and Technology Council 2006, 4).

Biometric technologies are "technologies that automatically confirm the identity of people by comparing patterns of physical or behavioral characteristics in real time against enrolled computer records of those patterns," according to the International Biometrics and Identification Association (IBIA) (International Biometrics and Identification Association 2019).
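The 1:1 comparison in the IBIA's definition, matching a live sample against an enrolled record and accepting or rejecting the claimed identity, can be sketched as follows. The vector templates, distance metric, and threshold are illustrative assumptions; production systems use modality-specific features and carefully tuned operating points.

```python
import numpy as np

def verify(claimed_template, live_sample, max_distance=0.5):
    """1:1 verification: accept iff the live sample is close enough
    to the enrolled template for the claimed identity."""
    distance = float(np.linalg.norm(claimed_template - live_sample))
    return distance <= max_distance, distance

rng = np.random.default_rng(1)
enrolled_template = rng.normal(size=16)           # stored at enrollment
genuine_attempt = enrolled_template + rng.normal(scale=0.05, size=16)
impostor_attempt = rng.normal(size=16)            # a different person

print(verify(enrolled_template, genuine_attempt))
print(verify(enrolled_template, impostor_attempt))
```

Raising `max_distance` admits more genuine users but also more impostors; choosing the threshold sets the trade-off between false accepts and false rejects.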

Many different biometric technologies are either in use or being developed.

Fingerprints are now used to access personal smartphones, pay for goods and services, and verify identities for various online accounts and physical facilities.

The most well-known biometric technology is fingerprint recognition.

Ultrasound, thermal, optical, and capacitive sensors may all be used to acquire fingerprint image collections.

In order to find matches, AI software applications often use minutiae-based matching or pattern matching.
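As an illustration of the minutiae-based approach, the sketch below scores two fingerprints by counting minutiae (ridge endings or bifurcations, each recorded as a position and orientation) that agree within tolerances. It assumes the prints are already aligned; real matchers must also solve rotation and translation, and the tolerances here are arbitrary.

```python
import math

# Each minutia: (x, y, angle_degrees) for a ridge ending or bifurcation.
def match_score(minutiae_a, minutiae_b, dist_tol=10.0, angle_tol=15.0):
    """Fraction of minutiae in A with a close counterpart in B
    (greedy matching; assumes the two prints are already aligned)."""
    unused = list(minutiae_b)
    matched = 0
    for (xa, ya, ta) in minutiae_a:
        for m in unused:
            xb, yb, tb = m
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            angle_diff = abs((ta - tb + 180) % 360 - 180)
            if close and angle_diff <= angle_tol:
                matched += 1
                unused.remove(m)
                break
    return matched / max(len(minutiae_a), len(minutiae_b), 1)

probe = [(10, 12, 90), (40, 44, 30), (70, 20, 180)]
enrolled = [(11, 13, 88), (41, 43, 33), (90, 90, 10)]
print(match_score(probe, enrolled))  # 2 of 3 minutiae agree
```

A threshold on this score would then decide whether the two prints come from the same finger.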

Vascular pattern identification is also now feasible: by illuminating the palm, sensors capture images of human veins.

Other common biometrics are based on facial, iris, or voice characteristics.

AI-based facial recognition technology may enable individual identification, verification, detection, and characterization.

Detection and characterization processes rarely involve determining an individual's identity.

Although current systems have great accuracy rates, privacy problems arise since a face might be gathered passively, that is, without the subject's awareness.

Iris identification makes use of near-infrared light to extract the iris's distinct structural characteristics.

Retinal recognition technology uses a strong light to examine the blood vessels of the retina.

The scanned eyeball is compared to the stored picture to evaluate recognition.

Voice recognition is a more advanced technology than voice activation, which merely identifies the content of speech.

A voice recognition system must be able to identify each individual user.

To date, the technology has not been precise enough to allow trustworthy identification in many situations.

For security and law enforcement applications, biometric technology has long been accessible.

However, in the private sector, these systems are increasingly being employed as a verification mechanism for authentication that formerly needed a password.

The introduction of Apple's iPhone fingerprint scanner in 2013 raised public awareness.

The company's newer models have shifted to face recognition access, which further normalizes the notion.

Financial services, transportation, health care, facility access, and voting are just a few of the industries where biometric technology is being used.


~ Jai Krishna Ponnappan



See also: 

Biometric Privacy and Security.


Further Reading

International Biometrics and Identity Association. 2019. “The Technologies.” https://www.ibia.org/biometrics/technologies/.

White House. National Science and Technology Council. 2006. Privacy and Biometrics: Building a Conceptual Foundation. Washington, DC: National Science and Technology Council. Committee on Technology. Committee on Homeland and National Security. Subcommittee on Biometrics.




Artificial Intelligence - What Is The State Of Biometric Security And Privacy?

 


Biometrics is a term derived from the Greek roots bio (life) and metrikos (measurement).

Traditionally, it referred to the use of statistical or mathematical techniques to examine data in the biological sciences.

In recent years, the term has taken on a more precise, high-tech sense, referring to the science of identifying people by biological or behavioral features, as well as to the artificial intelligence technologies employed to do so.

For ages, scientists have measured human physical characteristics or behaviors in order to identify individuals later.

The first documented application of biometrics may be found in the works of Portuguese historian Joao de Barros (1496–1570).

De Barros reported how Chinese merchants stamped and recorded children's hands and footprints with ink.

Biometric methods were first used in criminal justice settings in the late nineteenth century.

Alphonse Bertillon (1853–1914), a police clerk in Paris, started gathering bodily measurements (head circumference, finger length, etc.) of prisoners in jail to keep track of repeat criminals, particularly those who used aliases or altered features of their appearance to prevent detection.

Bertillonage was the name given to his system.

After the 1890s, when it became clear that many people had identical dimensions, it went out of favor.

Edward Richard Henry (1850–1931), of Scotland Yard, created a significantly more successful biometric technique based on fingerprinting in 1901.

On the tips of people's fingers and thumbs, he measured and categorized loops, whorls, and arches, as well as subcategories of these components.

Fingerprinting is still one of the most often utilized biometric identifiers by law enforcement authorities across the globe.

Fingerprinting systems are expanding in tandem with networking technology, using vast national and international databases as well as computer matching.

In the 1960s and 1970s, the Federal Bureau of Investigation collaborated with the National Bureau of Standards to automate fingerprint identification.

This included scanning existing paper fingerprint cards and creating minutiae feature extraction algorithms and automatic classifiers for comparing electronic fingerprint data.

Because of the high expense of electronic storage, the scanned pictures of fingerprints, as well as the categorization data and minutiae, were not kept in digital form.

In 1980, the FBI made the M40 fingerprint matching technology operational.

In 1999, the Integrated Automated Fingerprint Identification System (IAFIS) became live.

In 2014, the FBI's Next Generation Identification system, an outgrowth of IAFIS, was used to record palm print, iris, and face identification.

While biometric technology is often seen as a way to boost security at the price of privacy, it may also be used to help preserve privacy in certain cases.

Many sorts of health-care employees in hospitals need access to a shared database of patient information.

The Health Insurance Portability and Accountability Act (HIPAA) emphasizes the need to prevent unauthorized individuals from accessing this sensitive data.

For example, the Mayo Clinic in Florida was a pioneer in biometric access to medical records.

In 1997, the clinic started utilizing digital fingerprinting to limit access to patient information.

Today, big data and artificial intelligence recognition software can rapidly identify or authenticate individuals based on voice analysis, face or iris recognition, hand geometry, keystroke dynamics, gait, DNA, and even body odor.

The reliability of DNA fingerprinting has evolved to the point that it is widely recognized by courts.

Even in the absence of further evidence, criminals have been convicted based on DNA findings, while falsely incarcerated prisoners have been exonerated.

While biometrics is frequently employed by law enforcement agencies, courts, and other government agencies, it has also come under fire from the public for infringing on individual privacy rights.

Biometric artificial intelligence software research has risen in tandem with actual and perceived criminal and terrorist concerns at universities, government agencies, and commercial enterprises.

National Bank United used technology developed by biometric experts Visionics and Keyware Technologies to install iris recognition identification systems on three ATMs in Texas as an experiment in 1999.

At Super Bowl XXXV in Tampa, Florida, Visage Corporation presented the FaceFINDER System, an automatic face recognition device.

As fans entered the stadium, the technology scanned their faces and matched them to a database of 1,700 known criminals and terrorists.

Officials claimed to have identified a limited number of offenders, but there have been no big arrests or convictions as a result of such identifications.

At the time, the indiscriminate use of automatic face recognition sparked a lot of debate.

The game was even dubbed the "Snooper Bowl."

Following the terrorist events of September 11, 2001, a public policy discussion in the United States focused on the adoption of biometric technology for airport security.

Following 9/11, polls revealed that Americans were prepared to give up significant portions of their privacy in exchange for increased security.

Biometric technologies were already widely used in other nations, such as the Netherlands.

The Privium program for passenger iris scan verification has been in effect at Schiphol Airport since 2001.

In 2015, the Transportation Security Administration (TSA) of the United States started testing biometric techniques for identification verification.

In 2019, Delta Air Lines, in collaboration with US Customs and Border Protection, provided customers at Atlanta's Maynard Jackson International Terminal the option of face recognition boarding.

Passengers can get their boarding cards, self-check baggage bags, and navigate TSA checkpoints and gates without interruption thanks to the technology.

Only 2% of travelers chose to opt out during the initial launch.

Biometric authentication systems are currently being used by financial institutions in routine commercial transactions.

They are already widely used to secure personal smart phone access.

As smart home gadgets linked to the internet need support for safe financial transactions, intelligent security will become increasingly more vital.

Opinions on biometrics often shift in response to changing circumstances and settings.

People who support the use of face recognition technology at airports to make air travel safer may be opposed to digital fingerprinting at their bank.

Some individuals believe that private companies' use of biometric technology dehumanizes them, treating them as goods rather than persons and following them in real time.

Community policing is often recognized as an effective technique to create connections between law enforcement personnel and the communities they police at the local level.

However, other opponents argue that biometric monitoring shifts the emphasis away from community formation and toward governmental socio-technical control.

The importance of context, on the other hand, cannot be overstated.

Biometrics in the workplace may be seen as a leveler, since it subjects white-collar employees to the same level of scrutiny as blue-collar workers.

For usage in cloud security systems, researchers are starting to build video analytics AI software and smart sensors.

In real-time monitoring of workplaces, public spaces, and residences, these systems can detect known persons, items, sounds, and movements.

They may also be programmed to warn users when they are in the presence of strangers.

Artificial intelligence algorithms that were once used to create biometric systems are now being utilized to thwart them.

GANs, for example, are generative adversarial networks that replicate human users of network technology and applications.

GANs have been used to build fictitious people's faces using biometric training data.

GANs typically pair a generator system that creates each new picture with a critic system that iteratively compares the fake face against real photographs.
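The creator/critic dynamic can be seen in a toy one-dimensional example: a "generator" with a single shift parameter tries to make its noise samples look like real data drawn from N(4, 1), while a logistic "critic" tries to tell real from fake. Real GANs use deep networks for both roles; this sketch, with made-up learning rates and step counts, only illustrates the alternating updates.

```python
import numpy as np

rng = np.random.default_rng(42)
real_mean = 4.0              # "real" data: samples from N(4, 1)
theta = 0.0                  # generator parameter: g(z) = z + theta
w, b = 0.0, 0.0              # critic: D(x) = sigmoid(w*x + b)
lr = 0.02

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(4000):
    real = rng.normal(real_mean, 1.0, size=32)
    fake = rng.normal(0.0, 1.0, size=32) + theta

    # Critic update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w -= lr * float((-(1 - d_real) * real + d_fake * fake).mean())
    b -= lr * float((-(1 - d_real) + d_fake).mean())

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w * fake + b)
    theta -= lr * float((-(1 - d_fake) * w).mean())

print(f"generator shift after training: {theta:.2f}")
```

As training proceeds, the generator's shift parameter drifts toward the real data's mean, at which point the critic can no longer separate the two distributions.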

In 2020, the firm Icons8 claimed that it could make a million phony headshots in a single day using just seventy human models.

The firm distributes stock images of the headshots made using its proprietary StyleGAN technology.

A university, a dating app, and a human resources agency have all been clients.

Rosebud AI distributes GAN-generated photographs to online shopping sites and small companies who can't afford to pay pricey models and photographers.

Deepfake technology has been used to perpetrate hoaxes and misrepresentations, make fake news clips, and conduct financial fraud.

It uses machine learning algorithms to create convincing but counterfeit videos.

Facebook profiles with deepfake profile photographs have been used to boost political campaigns on social media.

Deepfake hacking is possible on smartphones with face recognition locks.

Deepfake technology may also be used for good.

Such technology has been utilized in films to make performers seem younger in flashbacks or other similar scenarios.

Digital technology was also employed in films like Rogue One: A Star Wars Story (2016) to incorporate the late Peter Cushing (1913–1994), who portrayed the same role from the original 1977 Star Wars picture.

Face-swapping is available to recreational users via a number of software apps.

Users may submit a selfie and adjust their hair and facial expression with FaceApp.

In addition, the computer may mimic the aging of a person's features.

Zao is a deepfake app that, starting from a single photograph, swaps the user's face onto movie and television stars in hundreds of video clips.

Deepfake algorithms are now also being used to detect deepfake videos.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Biometric Technology.


Further Reading


Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” NIPS ’14: Proceedings of the 27th International Conference on Neural Information Processing Systems 2 (December): 2672–80.

Hopkins, Richard. 1999. “An Introduction to Biometrics and Large-Scale Civilian Identification.” International Review of Law, Computers & Technology 13, no. 3: 337–63.

Jain, Anil K., Ruud Bolle, and Sharath Pankanti. 1999. Biometrics: Personal Identification in Networked Society. Boston: Kluwer Academic Publishers.

Januškevič, Svetlana N., Patrick S.-P. Wang, Marina L. Gavrilova, Sargur N. Srihari, and Mark S. Nixon. 2007. Image Pattern Recognition: Synthesis and Analysis in Biometrics. Singapore: World Scientific.

Nanavati, Samir, Michael Thieme, and Raj Nanavati. 2002. Biometrics: Identity Verification in a Networked World. New York: Wiley.

Reichert, Ramón, Mathias Fuchs, Pablo Abend, Annika Richterich, and Karin Wenz, eds. 2018. Rethinking AI: Neural Networks, Biometrics and the New Artificial Intelligence. Bielefeld, Germany: Transcript-Verlag.

Woodward, John D., Jr., Nicholas M. Orlans, and Peter T. Higgins. 2001. Biometrics: Identity Assurance in the Information Age. New York: McGraw-Hill.




Artificial Intelligence - What Are AI Berserkers?

 


Berserkers are intelligent killing machines first described by science fiction and fantasy novelist Fred Saberhagen (1930–2007) in his 1962 short story "Without a Thought." Berserkers later appeared as frequent antagonists in many more of Saberhagen's novels and novellas.

Berserkers are a sentient, self-replicating race of space-faring robots with the mission of annihilating all life.

They were built as an ultimate doomsday weapon in a long-forgotten interplanetary conflict between two extraterrestrial cultures (i.e., a weapon intended more as a threat or deterrent than for actual use).

The facts of how the Berserkers were released are lost to time, since they seem to have killed off their creators as well as their foes and have been ravaging the Milky Way galaxy ever since.

They come in a variety of sizes, from human-scale units to heavily armored planetoids (cf. the Death Star), and are equipped with a variety of weaponry capable of sterilizing worlds.

Any sentient species that fights back, such as humans, is a priority for the Berserkers.

They construct factories in order to duplicate and better themselves, but their basic objective of removing life remains unchanged.

It's uncertain how far they evolve; some individual units end up questioning or even changing their goals, while others develop strategic brilliance (e.g., Brother Assassin, "Mr. Jester," Rogue Berserker, Shiva in Steel).

While the Berserkers' ultimate purpose of annihilating all life is evident, their tactical activities are uncertain owing to unpredictability in their cores caused by radioactive decay.

Their name is derived from Norse mythology's Berserkers, powerful human warriors who battled in a fury.

Berserkers depict a worst-case scenario for artificial intelligence: murdering robots that think, learn, and reproduce in a wild and emotionless manner.

They demonstrate the deadly arrogance of equipping AI with powerful weapons, a harmful purpose, and unrestrained self-replication, allowing it to escape its creators' comprehension and control.

If Berserkers are ever developed and released, they may represent an inexhaustible danger to living creatures over enormous swaths of space and time.

They're quite hard to get rid of after they've been unbottled.

This is owing to their superior defenses and weaponry, as well as their widespread distribution, ability to repair and multiply, autonomous functioning (i.e., without centralized control), capacity to learn and adapt, and limitless patience to lay in wait.

The discovery of Berserkers is so horrifying in Saberhagen's books that human civilizations are terrified of constructing their own AI for fear that it may turn against its creators.

Some astute humans, on the other hand, find a fascinating Berserker counter-weapon: Qwib-Qwibs, self-replicating robots designed to eliminate all Berserkers rather than all life ("Itself Surprised" by Roger Zelazny).

Humans have also utilized cyborgs as an anti-Berserker technique, pushing the boundaries of what constitutes biological intelligence (Berserker Man, Berserker Prime, Berserker Kill).

Berserkers also exemplify artificial intelligence's potential for inscrutability and strangeness.

Even while Berserkers can communicate with each other, their huge brains are generally unintelligible to sentient organic lifeforms fleeing or battling them, and they are difficult to study owing to their proclivity to self-destruct if caught.

What can be deduced from their reasoning is that they see life as a plague, a material illness that must be eradicated.

In consequence, the Berserkers lack a thorough understanding of biological intellect and have never been able to adequately duplicate organic life, despite several tries.

They do, however, sometimes enlist human defectors (dubbed "goodlife") to aid the Berserkers in their struggle against "badlife" (i.e., any life that resists extermination).

Nonetheless, Berserkers and humans think in almost irreconcilable ways, hindering attempts to reach a common understanding between life and nonlife.

The seeming contrasts between human and machine intellect are at the heart of most of the conflict in the tales (e.g., artistic appreciation, empathy for animals, a sense of humor, a tendency to make mistakes, the use of acronyms for mnemonics, and even fake encyclopedia entries made to detect plagiarism).

Berserkers have been known to be defeated by non-intelligent living forms such as plants and mantis shrimp ("Pressure" and "Smasher").

Berserkers may be seen as a specific example of the von Neumann probe, conceived by mathematician and physicist John von Neumann (1903–1957): self-replicating space-faring robots that might be deployed across the galaxy to explore it efficiently. In the Berserker tales, the Turing Test, developed by mathematician and computer scientist Alan Turing (1912–1954), is both investigated and upended.

In "Inhuman Error," human castaways compete with a Berserker to persuade a rescue crew that they are human, while in "Without a Thought," a Berserker tries to figure out whether its game opponent is human.

The Fermi paradox—the concept that if intelligent extraterrestrial civilizations exist, we should have heard from them by now—is also explained by Berserkers.

It's possible that extraterrestrial civilizations haven't contacted Earth because they were destroyed by Berserker-like robots or are hiding from them.

Berserkers, or anything like them, have featured in a number of science fiction books in addition to Saberhagen's (e.g., works by Greg Bear, Gregory Benford, David Brin, Ann Leckie, and Martha Wells; the Terminator series of movies; and the Mass Effect series of video games).

All of these instances demonstrate how the potential for existential risks posed by AI may be investigated in the lab of fiction.





See also: 

de Garis, Hugo; Superintelligence; The Terminator.


Further Reading


Saberhagen, Fred. 2015a. Berserkers: The Early Tales. Albuquerque: JSS Literary Productions.

Saberhagen, Fred. 2015b. Berserkers: The Later Tales. Albuquerque: JSS Literary Productions.

Saberhagen’s Worlds of SF and Fantasy. http://www.berserker.com.

The TAJ: Official Fan site of Fred Saberhagen’s Berserker® Universe. http://www.berserkerfan.org.




Artificial Intelligence - What Is The Asilomar Conference On Beneficial AI?

 


The Asilomar Conference on Beneficial AI addressed social concerns about artificial intelligence and its danger to people, concerns most prominently portrayed through Isaac Asimov's Three Laws of Robotics.

"A robot may not injure a human being or, through inaction, allow a human being to come to harm; A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; A robot must protect its own existence as long as such protection does not conflict with the First or Second Law" (Asimov 1950, 40).

In subsequent books, Asimov added a fourth law, the Zeroth Law, often quoted as "A robot may not harm humanity, or, by inaction, allow humanity to come to harm," which is detailed in Robots and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).

Asimov's Zeroth Law sparked debate over how to judge whether something is harmful to humanity.

This was the topic of the 2017 Asilomar Conference on Beneficial AI, which went beyond the Three Laws and the Zeroth Law to propose twenty-three principles to protect mankind in the future of AI.

The conference's sponsor, the Future of Life Institute, has posted the principles on its website and has received 3,814 signatures from AI experts and other multidisciplinary supporters.

There are three basic kinds of principles: research questions, ethics and values, and long-term concerns.

These research guidelines are intended to guarantee that the aims of artificial intelligence continue to be helpful to people.

They're meant to help investors decide where to put their money in AI research.

To achieve beneficial AI, Asilomar signatories concur that research agendas should encourage and preserve openness and dialogue between AI researchers, policymakers, and developers.

Researchers interested in the development of artificial intelligence systems should work together to prioritize safety.

Proposed concepts relating to ethics and values are aimed to prevent damage and promote direct human control over artificial intelligence systems.

Parties to the Asilomar principles believe that AI should reflect human values such as individual rights, freedoms, and diversity acceptance.

Artificial intelligences, in particular, should respect human liberty and privacy, and should only be used to empower and enrich humanity.

AI must adhere to human social and civic norms.

The Asilomar signatories believe that AI creators should be held accountable for their work.

One aspect that stands out is the likelihood of an autonomous weapons arms race.

Because of the high stakes, the designers of the Asilomar principles incorporated principles that addressed longer-term challenges.

They advised prudence, meticulous planning, and human supervision.

Superintelligences must be produced for the wider welfare of mankind, and not merely to further the aims of one industry or government.

The Asilomar Conference's twenty-three principles have sparked ongoing discussions about the need for beneficial AI and specific safeguards for the future of AI and humanity.





See also: 

Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.



Further Reading


Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.

Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.





Nanotech - Nano Resolution Color Imaging To Help Create Nano Electronics



Researchers at UC Riverside have developed a method for squeezing tungsten lamp light into a 6-nanometer area at the end of a silver nanowire. 

Rather than having to settle for detecting molecular vibrations, scientists can now achieve color imaging at an "unprecedented" level.

The researchers tweaked an existing "superfocusing" technology (which was previously used to detect vibrations) to detect signals throughout the visible spectrum. 

Light travels along a conical path, similar to that of a flashlight. 

The device records the impact of an object on the form and color of the beam as the nanowire's tip passes over it, with the light then fed through a spectrometer.


The scientists can make color photographs of carbon nanotubes that would otherwise appear gray by using two sections of spectra for every 6 nm pixel.



Scientists have created new materials for next-generation electronics that are so small that they are not only indistinguishable when tightly packed, but they also don't reflect enough light for even the most powerful optical microscopes to reveal minute features like colors.

Carbon nanotubes, for example, appear grey under an optical microscope. 

Because they cannot distinguish small details and variations between individual pieces, scientists find it difficult to investigate nanomaterials' unique features and to find ways to improve them for industrial application.


Researchers from UC Riverside describe a revolutionary imaging technology that compresses lamp light into a nanometer-sized spot in a new paper published in Nature Communications. 


Like a Hogwarts student casting the "Lumos" spell, it holds the light at the end of a silver nanowire and uses it to reveal previously unseen features, including colors.

Scientists will be able to examine nanomaterials in enough detail to make them more useful in electronics and other applications thanks to the breakthrough, which improves color imaging resolution to an unparalleled 6 nanometer level. 

With a superfocusing approach developed by the team, Ming Liu and Ruoxue Yan, associate professors at UC Riverside's Marlan and Rosemary Bourns College of Engineering, created this unique instrument. 


Previous research has utilized the technology to examine molecular bond vibrations at 1-nanometer spatial resolution without the need of a focusing lens. 



Liu and Yan improved the method in the current paper to measure signals covering the whole visible wavelength range, which may be used to produce color and portray the object's electrical band structures rather than just molecular vibrations. 

Light from a tungsten lamp is squeezed into a silver nanowire with near-zero scattering or reflection, where it is conveyed by the oscillation wave of free electrons at the silver surface. 

The condensed light travels in a conical route from the silver nanowire tip, which has a radius of only 5 nanometers, similar to a flashlight's light beam. 

The impact of an item on the beam shape and color is detected and recorded as the tip passes over it. 

"It's like controlling the water spray from a hose with your thumb," Liu said. 

"You know how to change the thumb position to acquire the desired spraying pattern, and similarly, in the experiment, we read the light pattern to extract the specifics of the item obstructing the 5 nm-sized light nozzle." The light is then concentrated into a spectrometer, where it takes the shape of a small ring. 


The researchers can colorize absorption and scattering pictures by scanning the probe across an area and capturing two spectra for each pixel. 


The previously grey carbon nanotubes are photographed in color for the first time, and each carbon nanotube may now display its own distinct hue. 

"The imaging is dependent on the atomically clean sharp-tip silver nanowire and its almost scatterless optical coupling and focusing," Yan stated. 

"Otherwise, there would be a lot of stray light in the backdrop, which would sabotage the entire thing." The researchers believe the new approach will be useful in assisting the semiconductor sector in producing homogenous nanomaterials with consistent characteristics for use in electronic devices. 

The new full-color nano-imaging approach should help researchers learn more about catalysis, quantum optics, and nanoelectronics. 

Xuezhi Ma, who worked on the topic as part of his PhD research at UC Riverside, joined Liu and Yan in the study.


The study is titled "6 nm super-resolution optical transmission and scattering spectroscopic imaging of carbon nanotubes employing a nanometer-scale white light source." 


Although the ability to compress light is impressive in and of itself, the creators believe it will play a significant role in nanotechnology. 

Semiconductor manufacturers may be able to create more consistent nanomaterials for use in chips and other tightly packed devices. 

The constricted light might also help mankind grasp nanoelectronics, quantum optics, and other scientific domains that haven't had this resolution before.


~ Jai Krishna Ponnappan


You May Also Want To Read More About Nano Technology here.

