Artificial Intelligence - How Has The Blade Runner (1982) Film Envisioned AI Androids?

 



Do Androids Dream of Electric Sheep? by Philip K. Dick was first published in 1968 and is set in a post-industrial San Francisco in the year 2020.

In 1982, the novel was adapted for the screen as Blade Runner, set in Los Angeles in the year 2019.

While the two texts differ significantly, both recount the story of bounty hunter Rick Deckard, who is tasked with locating (and executing) escaped replicants/androids (six in the novel, four in the film).

The setting of both works is a future in which cities have grown overcrowded and polluted.

Natural nonhuman life has virtually vanished (due to radiation sickness) and been replaced by synthetic and artificial life.

Natural life has become a valued commodity in the future.

Replicants are meant to perform a variety of industrial functions in this environment, most notably as labor for off-world colonies.

The replicants are an exploited race that was created to serve human masters.

When they are no longer useful, they are discarded, and when they struggle against their circumstances, they are retired.

Blade runners are specialist law enforcement operatives tasked with apprehending and killing renegade replicants.

Rick Deckard, a former blade runner, comes out of retirement to track down the sophisticated Nexus-6 replicant models.

These replicants have escaped to Earth after rebelling against the slave-like conditions on Mars.

In both texts, the treatment of artificial intelligence serves as an implicit critique of capitalism.

The Rosen Association in the novel and the Tyrell Corporation in the film develop replicants to create a more docile labor force, implying that capitalism converts people into robots.

Eldon Rosen (who is called Tyrell in the film) emphasizes these obnoxious commercial imperatives: "We provided what the colonists wanted.... Every commercial venture is founded on a time-honored principle. Other corporations would have developed these progressively more human kinds if our company hadn't."

There are two types of replicants in the movie: those who are designed to be unaware that they are androids and are filled with implanted memories (like Rachael Tyrell), and those who are aware that they are androids and live by that knowledge (the Nexus-6 fugitives).

In the film, Rachael is a new Nexus-7 model implanted with the memories of Eldon Tyrell's niece, Lilith. Deckard is sent to retire her but instead falls in love with her, and the two leave the city together at the film's conclusion.

Rachael's character is handled differently in the book.

Deckard makes an effort to recruit Rachael's assistance in locating the runaway androids. Rachael agrees to meet Deckard in a hotel room in the hopes of persuading him to drop the case.

Rachael explains during their encounter that one of the runaway androids (Pris Stratton) is a carbon copy of her (making Rachael a Nexus-6 model in the novel).

Deckard and Rachael actually have sex and profess their love for each other.

However, Rachael is revealed to have slept with other blade runners; she is designed to do exactly that in order to keep them from fulfilling their tasks.

Deckard threatens to murder Rachael but decides to leave the hotel rather than carry out his threat.

The replicants are undetectable in both the novel and the film.

Even under a microscope, they seem to be totally human.

The Voigt-Kampff test, which separates humans from androids based on emotional reactions to a variety of questions, is the sole method to identify them.

The test is administered with a machine that monitors blush response, heart rate, and eye movement in response to empathy-related questions.

At this point in the novel, whether Deckard himself is human or replicant is unknown; Rachael even asks whether he has taken the Voigt-Kampff test.

In the film, Deckard's status is likewise unclear. The audience is left to decide, although director Ridley Scott has hinted that Deckard is a replicant.

At the conclusion of the novel, Deckard takes and passes the test, although he begins to doubt the effectiveness of blade running.

More than the movie, the book explores what it means to be human in the face of technological advancements.

The book depicts the fragility of the human experience and how it can be easily harmed by the technology that is supposed to help it.

Individuals with Penfield mood organs, for example, can use them to control their emotions.

All that is required is for a person to look up an emotion in a manual, dial the appropriate number, and then experience whatever emotion they desire.

The device's use to generate artificial feelings implies that people may become robotic, as Deckard's wife Iran points out: "My first response was to express gratitude for the fact that we could afford a Penfield mood organ. But then I understood how harmful it was to sense the lack of vitality everywhere, not only in this building - do you know what I mean? I'm assuming you don't. That, however, was formerly thought to be an indication of mental disease, referred to as 'lack of proper emotion.'"

Dick's argument is that the mood organ inhibits humans from feeling the right emotional elements of life, which is precisely what the Voigt-Kampff test reveals replicants are incapable of.

Philip K. Dick was known for his hazy and perhaps gloomy vision of artificial intelligence.

His androids and robots are distinctly ambiguous.

They desire to be like humans, yet they lack empathy and emotions.

This ambiguity pervades Do Androids Dream of Electric Sheep? and carries over onscreen in Blade Runner.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Nonhuman Rights and Personhood; Pathetic Fallacy; Turing Test.


Further Reading


Brammer, Rebekah. 2018. “Welcome to the Machine: Artificial Intelligence on Screen.” Screen Education 90 (September): 38–45.

Fitting, Peter. 1987. “Futurecop: The Neutralization of Revolt in Blade Runner.” Science Fiction Studies 14, no. 3: 340–54.

Sammon, Paul S. 2017. Future Noir: The Making of Blade Runner. New York: Dey Street Books.

Wheale, Nigel. 1991. “Recognising a ‘Human-Thing’: Cyborgs, Robots, and Replicants in Philip K. Dick’s Do Androids Dream of Electric Sheep? and Ridley Scott’s Blade Runner.” Critical Survey 3, no. 3: 297–304.




Artificial Intelligence - Who Is Tanya Berger-Wolf? What Is The AI For Wildlife Conservation Software Non-profit, 'Wild Me'?

 


Tanya Berger-Wolf (1972–) is a professor in the Department of Computer Science at the University of Illinois at Chicago (UIC).

Her contributions to computational ecology and biology, data science and network analysis, and artificial intelligence for social benefit have earned her acclaim.

She is a pioneer in the field of computational population biology, which employs artificial intelligence algorithms, computational methodologies, social science research, and data collection to answer questions about plants, animals, and people.

Berger-Wolf teaches multidisciplinary field courses with engineering students from UIC and biology students from Princeton University at the Mpala Research Centre in Kenya.

She works in Africa because of its vast genetic variety and endangered species, which are markers of the health of life on the planet as a whole.

Her group is interested in learning more about the effects of the environment on social animal behavior, as well as what puts a species at risk.

She is cofounder and director of Wildbook, a nonprofit that develops animal conservation software.

Berger-Wolf's work for Wildbook included a crowdsourced project to photograph as many Grevy's zebras as possible in order to complete a full census of the endangered animals.

The group can identify each individual Grevy's zebra by its distinctive pattern of stripes, which acts as a natural bar code or fingerprint, after analyzing the photographs using artificial intelligence systems.

Using convolutional neural networks and matching algorithms, the Wildbook program recognizes animals from hundreds of thousands of images.
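The core idea of this kind of individual identification can be sketched as nearest-neighbor search over feature vectors. In the illustrative Python sketch below, the animal names and the tiny three-dimensional "embeddings" are hypothetical stand-ins for the high-dimensional feature vectors a trained convolutional network would extract from each photo; Wildbook's actual pipeline is considerably more sophisticated.

```python
import numpy as np

# Hypothetical stand-in for CNN output: in a real system, a trained network
# would map each photograph of an animal to a feature vector (embedding).
def identify(photo_embedding, catalog, threshold=0.8):
    """Return the best-matching known individual by cosine similarity,
    or None if no catalog entry is similar enough (a new animal)."""
    best_name, best_sim = None, threshold
    for name, emb in catalog.items():
        sim = float(np.dot(photo_embedding, emb) /
                    (np.linalg.norm(photo_embedding) * np.linalg.norm(emb)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Catalog of known individuals (made-up names and feature vectors).
catalog = {
    "zebra_001": np.array([0.9, 0.1, 0.2]),
    "zebra_002": np.array([0.1, 0.8, 0.5]),
}
sighting = np.array([0.88, 0.15, 0.22])   # features of a newly uploaded photo
print(identify(sighting, catalog))        # → zebra_001
```

A production system would compare thousands of high-dimensional embeddings and use approximate nearest-neighbor indexing rather than a linear scan.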

The census data is utilized to focus and invest resources in the zebras' preservation and survival.

The Wildbook deep learning program may be used to identify individual members of any striped, spotted, notched, or wrinkled species.

Giraffe Spotter is Wildbook software for giraffe populations.

Wildbook's website, which contains gallery photographs from handheld cameras and camera traps, crowdsources citizen-scientist accounts of giraffe encounters.

An intelligent agent extracts still images of tail flukes from uploaded YouTube videos for Wildbook's individual whale shark catalog.

The whale shark census revealed data that persuaded the International Union for Conservation of Nature to alter the status of the creatures from “vulnerable” to “endangered” on the IUCN Red List of Threatened Species.

The software is also being used by Wildbook to examine videos of hawksbill and green sea turtles.

Berger-Wolf also serves as the director of technology for the conservation organization Wild Me.

Machine vision artificial intelligence systems are used by the charity to recognize individual animals in the wild.

Wild Me keeps track of animals' whereabouts, migration patterns, and social groups.

The goal is to gain a comprehensive understanding of global diversity so that conservation policy can be informed.

Microsoft's AI for Earth initiative has partnered with Wild Me.

Berger-Wolf was born in Vilnius, Lithuania, in 1972.

She went to high school in St. Petersburg, Russia, and graduated from Hebrew University in Jerusalem with a bachelor's degree.

She received her doctorate from the Department of Computer Science at the University of Illinois at Urbana-Champaign and did postdoctoral work at the University of New Mexico and Rutgers University.

She has received the National Science Foundation CAREER Award, the Association for Women in Science Chicago Innovator Award, and the University of Illinois at Chicago Mentor of the Year Award.


~ Jai Krishna Ponnappan



See also: 

Deep Learning.


Further Reading


Berger-Wolf, Tanya Y., Daniel I. Rubenstein, Charles V. Stewart, Jason A. Holmberg, Jason Parham, and Sreejith Menon. 2017. “Wildbook: Crowdsourcing, Computer Vision, and Data Science for Conservation.” Chicago, IL: Bloomberg Data for Good Exchange Conference. https://arxiv.org/pdf/1710.08880.pdf.

Casselman, Anne. 2018. “How Artificial Intelligence Is Changing Wildlife Research.” National Geographic, November. https://www.nationalgeographic.com/animals/2018/11/artificial-intelligence-counts-wild-animals/.

Snow, Jackie. 2018. “The World’s Animals Are Getting Their Very Own Facebook.” Fast Company, June 22, 2018. https://www.fastcompany.com/40585495/the-worlds-animals-are-getting-their-very-own-facebook.



Artificial Intelligence - What Is Biometric Technology?

 


A biometric is a measurement of a human attribute.

It might be physiological, like fingerprint or face identification, or behavioral, like keystroke pattern dynamics or walking stride length.

Biometric characteristics are defined by the White House National Science and Technology Council's Subcommittee on Biometrics as "measurable biological (anatomical and physiological) and behavioral traits that may be employed for automated recognition" (White House, National Science and Technology Council 2006, 4).

Biometric technologies are "technologies that automatically confirm the identity of people by comparing patterns of physical or behavioral characteristics in real time against enrolled computer records of those patterns," according to the International Biometrics and Identification Association (IBIA) (International Biometrics and Identification Association 2019).

Many different biometric technologies are either in use or being developed.

Fingerprints are now used to access personal smartphones, pay for goods and services, and verify identities for various online accounts and physical facilities.

The most well-known biometric technology is fingerprint recognition.

Ultrasound, thermal, optical, and capacitive sensors may all be used to acquire fingerprint image collections.

In order to find matches, AI software applications often use minutia-based matching or pattern matching.
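Minutia-based matching pairs ridge endings and bifurcations between two prints by location, orientation, and type. The Python sketch below (with made-up coordinates and generous tolerances) illustrates the idea; production matchers additionally align the two prints for rotation and translation before pairing points.

```python
import math

# Each minutia: (x, y, angle_degrees, type), e.g. ridge ending or bifurcation.
def match_minutiae(probe, gallery, dist_tol=10.0, angle_tol=15.0):
    """Count probe minutiae that pair with an unused gallery minutia
    of the same type within distance and angle tolerances."""
    used = set()
    matches = 0
    for (x1, y1, a1, t1) in probe:
        for j, (x2, y2, a2, t2) in enumerate(gallery):
            if j in used or t1 != t2:
                continue
            dist = math.hypot(x1 - x2, y1 - y2)
            ang = abs((a1 - a2 + 180) % 360 - 180)  # wrapped angle difference
            if dist <= dist_tol and ang <= angle_tol:
                used.add(j)
                matches += 1
                break
    return matches

probe = [(10, 12, 30, "ending"), (40, 41, 90, "bifurcation"), (70, 15, 200, "ending")]
gallery = [(12, 10, 28, "ending"), (39, 44, 95, "bifurcation"), (200, 200, 10, "ending")]
score = match_minutiae(probe, gallery)  # 2 of the 3 probe minutiae pair up
```

A decision threshold on the match count (or a normalized similarity score) then determines whether the two prints are declared to come from the same finger.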

Vascular pattern identification is now feasible: by illuminating the palm, sensors capture images of human veins.

Other common biometrics are based on facial, iris, or voice characteristics.

Facial recognition AI technology can support individual identification, verification, detection, and characterization.

Detection and characterization processes rarely involve determining an individual's identity.

Although current systems achieve high accuracy rates, privacy problems arise because a face can be captured passively, that is, without the subject's awareness.

Iris identification makes use of near-infrared light to extract the iris's distinct structural characteristics.

The retinal blood vessels are examined using retinal technology, which employs a strong light.

The scanned eyeball is compared to the stored picture to evaluate recognition.

Voice recognition is a more advanced technology than voice activation, which identifies only the content of speech: voice recognition must identify each individual user. To date, the technology has not been precise enough to allow trustworthy identification in many situations.

For security and law enforcement applications, biometric technology has long been accessible.

However, in the private sector, these systems are increasingly being employed as a verification mechanism for authentication that formerly needed a password.

The introduction of Apple's iPhone fingerprint scanner in 2013 raised public awareness.

The company's newer models have shifted to face recognition access, which further normalizes the notion.

Financial services, transportation, health care, facility access, and voting are just a few of the industries where biometric technology is being used.


~ Jai Krishna Ponnappan



See also: 

Biometric Privacy and Security.


Further Reading

International Biometrics and Identity Association. 2019. “The Technologies.” https://www.ibia.org/biometrics/technologies/.

White House. National Science and Technology Council. 2006. Privacy and Biometrics: Building a Conceptual Foundation. Washington, DC: National Science and Technology Council. Committee on Technology. Committee on Homeland and National Security. Subcommittee on Biometrics.




Artificial Intelligence - What Is The State Of Biometric Security And Privacy?

 


Biometrics is a term derived from the Greek roots bio (life) and metrikos (measurement).

It is used to examine data in the biological sciences using statistical or mathematical techniques.

In recent years, the term has been used in a more precise, high-tech sense to refer to the science of identifying people based on biological or behavioral features, as well as the artificial intelligence technologies employed to do so.

For ages, scientists have been measuring human physical characteristics or behaviors in order to identify them afterwards.

The first documented application of biometrics may be found in the works of Portuguese historian Joao de Barros (1496–1570).

De Barros reported how Chinese merchants stamped and recorded children's hands and footprints with ink.

Biometric methods were first used in criminal justice settings in the late nineteenth century.

Alphonse Bertillon (1853–1914), a police clerk in Paris, started gathering bodily measurements (head circumference, finger length, etc.) of prisoners in jail to keep track of repeat criminals, particularly those who used aliases or altered features of their appearance to prevent detection.

Bertillonage was the name given to his system.

It fell out of favor after the 1890s, when it became clear that many people shared identical measurements.

Edward Richard Henry (1850–1931), of Scotland Yard, created a significantly more successful biometric technique based on fingerprinting in 1901.

On the tips of people's fingers and thumbs, he measured and categorized loops, whorls, and arches, as well as subcategories of these components.

Fingerprinting is still one of the most often utilized biometric identifiers by law enforcement authorities across the globe.

Fingerprinting systems are expanding in tandem with networking technology, using vast national and international databases as well as computer matching.

In the 1960s and 1970s, the Federal Bureau of Investigation collaborated with the National Bureau of Standards to automate fingerprint identification.

This included scanning existing paper fingerprint cards and creating minutiae feature extraction algorithms and automatic classifiers for comparing electronic fingerprint data.

Because of the high expense of electronic storage, the scanned pictures of fingerprints, as well as the categorization data and minutiae, were not kept in digital form.

In 1980, the FBI made the M40 fingerprint matching technology operational.

In 1999, the Integrated Automated Fingerprint Identification System (IAFIS) became live.

In 2014, the FBI's Next Generation Identification system, an outgrowth of IAFIS, was used to record palm print, iris, and face identification.

While biometric technology is often seen as a way to boost security at the price of privacy, it can also be used to help preserve privacy in certain cases.

Many sorts of health-care employees in hospitals need access to a shared database of patient information.

The Health Insurance Portability and Accountability Act (HIPAA) emphasizes the need to prevent unauthorized individuals from accessing this sensitive data.

For example, the Mayo Clinic in Florida was a pioneer in biometric access to medical records.

In 1997, the clinic started utilizing digital fingerprinting to limit access to patient information.

Today, voice analysis, face and iris recognition, hand geometry, keystroke dynamics, gait, DNA, and even body odor combine with big data and artificial intelligence recognition software to rapidly identify or authenticate individuals.

The reliability of DNA fingerprinting has evolved to the point that it is widely recognized by courts.

Even in the absence of further evidence, criminals have been convicted based on DNA findings, while falsely incarcerated prisoners have been exonerated.

While biometrics is frequently employed by law enforcement agencies, courts, and other government agencies, it has also come under fire from the public for infringing on individual privacy rights.

Biometric artificial intelligence software research has risen in tandem with actual and perceived criminal and terrorist concerns at universities, government agencies, and commercial enterprises.

National Bank United used technology developed by biometric experts Visionics and Keyware Technologies to install iris recognition identification systems on three ATMs in Texas as an experiment in 1999.

At Super Bowl XXXV in Tampa, Florida, Viisage Technology presented the FaceFINDER system, an automatic face recognition device.

As fans entered the stadium, the technology scanned their faces and matched them to a database of 1,700 known criminals and terrorists.

Officials claimed to have identified a limited number of offenders, but there have been no big arrests or convictions as a result of such identifications.

At the time, the indiscriminate use of automatic face recognition sparked a lot of debate.

The game was even dubbed the "Snooper Bowl."

Following the terrorist events of September 11, 2001, a public policy discussion in the United States focused on the adoption of biometric technology for airport security.

Following 9/11, polls revealed that Americans were prepared to give up significant portions of their privacy in exchange for increased security.

Biometric technologies were already widely used in other nations, such as the Netherlands.

The Privium program for passenger iris scan verification has been in effect at Schiphol Airport since 2001.

In 2015, the Transportation Security Administration (TSA) of the United States started testing biometric techniques for identification verification.

In 2019, Delta Air Lines, in collaboration with US Customs and Border Protection, provided customers at Atlanta's Maynard Jackson International Terminal the option of face recognition boarding.

Passengers can get their boarding cards, self-check baggage bags, and navigate TSA checkpoints and gates without interruption thanks to the technology.

Only 2 percent of travelers chose to opt out during the initial launch.

Biometric authentication systems are currently being used by financial institutions in routine commercial transactions.

They are already widely used to secure personal smart phone access.

As smart home gadgets linked to the internet need support for safe financial transactions, intelligent security will become increasingly more vital.

Opinions on biometrics often shift in response to changing circumstances and settings.

People who support the use of face recognition technology at airports to make air travel safer may be opposed to digital fingerprinting at their bank.

Some individuals believe that private companies' use of biometric technology dehumanizes them, treating them as goods rather than persons and following them in real time.

Community policing is often recognized as an effective technique to create connections between law enforcement personnel and the communities they police at the local level.

However, other opponents argue that biometric monitoring shifts the emphasis away from community formation and toward governmental socio-technical control.

The importance of context, on the other hand, cannot be overstated.

Biometrics in the workplace may be seen as a leveler, since it subjects white-collar employees to the same level of scrutiny as blue-collar workers.

For usage in cloud security systems, researchers are starting to build video analytics AI software and smart sensors.

In real-time monitoring of workplaces, public spaces, and residences, these systems can detect known persons, items, sounds, and movements.

They may also be programmed to warn users when they are in the presence of strangers.

Artificial intelligence algorithms that were once used to create biometric systems are now being utilized to thwart them.

Generative adversarial networks (GANs), for example, can replicate human users of network technologies and applications.

GANs have been used to build fictitious people's faces using biometric training data.

GANs are typically made up of a generator system that creates each new picture and a critic (discriminator) system that iteratively compares the fake face to real photographs.
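The generator-versus-critic loop can be illustrated with a deliberately tiny one-dimensional GAN. Real systems such as StyleGAN use deep convolutional networks over images; here, purely as a sketch, the generator is an affine map of noise, the critic is a logistic model, and both are trained with hand-derived gradients. All numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: scalar samples from a normal distribution N(3.0, 0.5).
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

a, b = 1.0, 0.0   # generator parameters: g(z) = a*z + b
w, c = 0.1, 0.0   # critic parameters:    D(x) = sigmoid(w*x + c)
lr = 0.01

for _ in range(2000):
    z = rng.normal(size=64)
    real, fake = sample_real(64), a * z + b
    # Critic step: ascend log D(real) + log(1 - D(fake)) to tell them apart.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
    # Generator step: ascend log D(fake) to fool the updated critic.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# Samples from the trained generator; b should drift toward the real mean.
fakes = a * rng.normal(size=1000) + b
```

Over training, the generator's output distribution drifts toward the real data while the critic keeps trying to tell the two apart until it no longer can; replacing the affine map and logistic model with deep networks gives the image-generating GANs described above.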

In 2020, the firm Icons8 claimed that it could make a million phony headshots in a single day using just seventy human models.

The firm distributes stock images of the headshots made using its proprietary StyleGAN technology.

A university, a dating app, and a human resources agency have all been clients.

Rosebud AI distributes GAN-generated photographs to online shopping sites and small companies who can't afford to pay pricey models and photographers.

Deepfake technology uses machine learning algorithms to create convincing but counterfeit videos. It has been used to perpetrate hoaxes and misrepresentations, fabricate fake news clips, and conduct financial fraud.

Facebook profiles with deepfake profile photographs have been used to boost political campaigns on social media.

Deepfake hacking is possible on smartphones with face recognition locks.

Deepfake technology may also be used for good.

Such technology has been utilized in films to make performers seem younger in flashbacks or other similar scenarios.

Digital technology was also employed in films like Rogue One: A Star Wars Story (2016) to incorporate the late Peter Cushing (1913–1994), who portrayed the same role from the original 1977 Star Wars picture.

Face-swapping is available to recreational users via a number of software apps.

Users may submit a selfie and adjust their hair and facial expression with FaceApp.

In addition, the computer may mimic the aging of a person's features.

Zao is a deepfake program that takes a single picture and replaces the faces of stars from movies and television shows in hundreds of videos.

Deepfake algorithms are now also being used to detect deepfake videos.


~ Jai Krishna Ponnappan



See also: 

Biometric Technology.


Further Reading


Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” NIPS ’14: Proceedings of the 27th International Conference on Neural Information Processing Systems 2 (December): 2672–80.

Hopkins, Richard. 1999. “An Introduction to Biometrics and Large-Scale Civilian Identification.” International Review of Law, Computers & Technology 13, no. 3: 337–63.

Jain, Anil K., Ruud Bolle, and Sharath Pankanti. 1999. Biometrics: Personal Identification in Networked Society. Boston: Kluwer Academic Publishers.

Januškevič, Svetlana N., Patrick S.-P. Wang, Marina L. Gavrilova, Sargur N. Srihari, and Mark S. Nixon. 2007. Image Pattern Recognition: Synthesis and Analysis in Biometrics. Singapore: World Scientific.

Nanavati, Samir, Michael Thieme, and Raj Nanavati. 2002. Biometrics: Identity Verification in a Networked World. New York: Wiley.

Reichert, Ramón, Mathias Fuchs, Pablo Abend, Annika Richterich, and Karin Wenz, eds. 2018. Rethinking AI: Neural Networks, Biometrics and the New Artificial Intelligence. Bielefeld, Germany: Transcript-Verlag.

Woodward, John D., Jr., Nicholas M. Orlans, and Peter T. Higgins. 2001. Biometrics: Identity Assurance in the Information Age. New York: McGraw-Hill.




Artificial Intelligence - What Are AI Berserkers?

 


Berserkers are intelligent killing machines first described by science fiction and fantasy novelist Fred Saberhagen (1930–2007) in his 1962 short story "Without a Thought." Berserkers later appeared as frequent antagonists in many more of Saberhagen's stories and novels.

Berserkers are a sentient, self-replicating race of space-faring robots with the mission of annihilating all life.

They were built as an ultimate doomsday weapon (i.e., one intended more as a threat or deterrent than for actual use) in a long-forgotten interplanetary conflict between two extraterrestrial cultures.

The facts of how the Berserkers were released are lost to time, since they seem to have killed off their creators as well as their foes and have been ravaging the Milky Way galaxy ever since.

They come in a variety of sizes, from human-scale units to heavily armored planetoids (cf. the Death Star), and are equipped with a variety of weaponry capable of sterilizing worlds.

Any sentient species that fights back, such as humans, is a priority for the Berserkers.

They construct factories in order to duplicate and better themselves, but their basic objective of removing life remains unchanged.

It's uncertain how far they evolve; some individual units end up questioning or even changing their intentions, while others gain strategic brilliance (e.g., Brother Assassin, "Mr. Jester," Rogue Berserker, Shiva in Steel).

While the Berserkers' ultimate purpose of annihilating all life is evident, their tactical activities are uncertain owing to unpredictability in their cores caused by radioactive decay.

Their name is derived from Norse mythology's Berserkers, powerful human warriors who battled in a fury.

Berserkers depict a worst-case scenario for artificial intelligence: murdering robots that think, learn, and reproduce in a wild and emotionless manner.

They demonstrate the deadly arrogance of providing AI with strong weapons, harmful purpose, and unrestrained self-replication in order to transcend its creators' comprehension and control.

If Berserkers are ever developed and released, they may represent an inexhaustible danger to living creatures over enormous swaths of space and time.

They're quite hard to get rid of after they've been unbottled.

This is owing to their superior defenses and weaponry, as well as their widespread distribution, ability to repair and multiply, autonomous functioning (i.e., without centralized control), capacity to learn and adapt, and limitless patience to lay in wait.

The discovery of Berserkers is so horrifying in Saberhagen's books that human civilizations are terrified of constructing their own AI for fear that it may turn against its creators.

Some astute humans, on the other hand, find a fascinating Berserker counter-weapon: Qwib-Qwibs, self-replicating robots designed to eliminate all Berserkers rather than all life ("Itself Surprised" by Roger Zelazny).

Humans have also utilized cyborgs as an anti-Berserker technique, pushing the boundaries of what constitutes biological intelligence (Berserker Man, Berserker Prime, Berserker Kill).

Berserkers also exemplify artificial intelligence's potential for inscrutability and strangeness.

Even while Berserkers can communicate with each other, their huge brains are generally unintelligible to sentient organic lifeforms fleeing or battling them, and they are difficult to study owing to their proclivity to self-destruct if caught.

What can be deduced from their reasoning is that they see life as a plague, a material illness that must be eradicated.

In consequence, the Berserkers lack a thorough understanding of biological intellect and have never been able to adequately duplicate organic life, despite several tries.

They do, however, sometimes enlist human defectors (dubbed "goodlife") to aid the Berserkers in their struggle against "badlife" (i.e., any life that resists extermination).

Nonetheless, Berserkers and humans think in almost irreconcilable ways, hindering attempts to reach a common understanding between life and nonlife.

The seeming contrasts between human and machine intellect are at the heart of much of the conflict in the tales (e.g., artistic appreciation, empathy for animals, a sense of humor, a tendency to make mistakes, the use of acronyms for mnemonics, and even fake encyclopedia entries made to detect plagiarism).

Berserkers have been known to be defeated by non-intelligent living forms such as plants and mantis shrimp ("Pressure" and "Smasher").

Berserkers may be seen as a specific example of the von Neumann probe, conceived by mathematician and physicist John von Neumann (1903–1957): self-replicating space-faring robots that might be dispersed across the galaxy to explore it efficiently. In the Berserker tales, the Turing Test, developed by mathematician and computer scientist Alan Turing (1912–1954), is both investigated and upended.

In "Inhuman Error," human castaways compete with a Berserker to persuade a rescue crew that they are human, while in "Without a Thought," a Berserker tries to figure out whether its game opponent is human.

The Fermi paradox—the concept that if intelligent extraterrestrial civilizations exist, we should have heard from them by now—is also explained by Berserkers.

It's possible that extraterrestrial civilizations haven't contacted Earth because they were destroyed by Berserker-like robots or are hiding from them.

Berserkers, or anything like them, have featured in a number of science fiction books in addition to Saberhagen's (e.g., works by Greg Bear, Gregory Benford, David Brin, Ann Leckie, and Martha Wells; the Terminator series of movies; and the Mass Effect series of video games).

All of these instances demonstrate how the potential for existential risks posed by AI may be investigated in the lab of fiction.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

de Garis, Hugo; Superintelligence; The Terminator.


Further Reading


Saberhagen, Fred. 2015a. Berserkers: The Early Tales. Albuquerque: JSS Literary Productions.

Saberhagen, Fred. 2015b. Berserkers: The Later Tales. Albuquerque: JSS Literary Productions.

Saberhagen’s Worlds of SF and Fantasy. http://www.berserker.com.

The TAJ: Official Fan site of Fred Saberhagen’s Berserker® Universe. http://www.berserkerfan.org.




Artificial Intelligence - What Is The Asilomar Conference On Beneficial AI?

 


Social concerns about artificial intelligence and the danger it poses to people have been portrayed most prominently through Isaac Asimov's Three Laws of Robotics, which form the backdrop to the Asilomar Conference on Beneficial AI.

"A robot may not damage a human being or, by inactivity, enable a human being to come to harm; A robot must follow human instructions unless such orders would contradict with the First Law; A robot must safeguard its own existence unless such protection would clash with the First or Second Law" (Asimov 1950, 40).

In subsequent books, Asimov added a Fourth Law or Zeroth Law, often quoted as "A robot may not harm humanity, or, by inaction, allow humanity to come to harm," which is articulated in Robots and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).

Asimov's zeroth rule sparked debate on how to judge whether or not something is harmful to mankind.

This was the topic of the 2017 Asilomar Conference on Beneficial AI, which went beyond the Three Laws and the Zeroth Law to propose twenty-three principles to protect mankind in the future of AI.

The conference's sponsor, the Future of Life Institute, has posted the principles on its website and has received 3,814 signatures from AI experts and other multidisciplinary supporters.

There are three basic kinds of principles: research questions, ethics and values, and long-term concerns.

These research guidelines are intended to guarantee that the aims of artificial intelligence continue to be helpful to people.

They're meant to help investors decide where to put their money in AI research.

To achieve useful AI, the Asilomar signatories conclude that research agendas should encourage and preserve openness and conversation between AI researchers, policymakers, and developers.

Researchers interested in the development of artificial intelligence systems should work together to prioritize safety.

Proposed concepts relating to ethics and values are aimed to prevent damage and promote direct human control over artificial intelligence systems.

Parties to the Asilomar principles believe that AI should reflect human values such as individual rights, freedoms, and diversity acceptance.

Artificial intelligences, in particular, should respect human liberty and privacy, and should only be used to empower and enrich humanity.

AI must adhere to human social and civic norms.

The Asilomar signatories believe that AI creators should be held accountable for their work.

One aspect that stands out is the likelihood of an autonomous weapons arms race.

Because of the high stakes, the designers of the Asilomar principles incorporated principles that addressed longer-term challenges.

They advised prudence, meticulous planning, and human supervision.

Superintelligences must be produced for the wider welfare of mankind, and not merely to further the aims of one industry or government.

The Asilomar Conference's twenty-three principles have sparked ongoing discussions about the need for beneficial AI and specific safeguards for the future of AI and humanity.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.



Further Reading


Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.

Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.





Nanotech - Nano Resolution Color Imaging To Help Create Nano Electronics



Researchers at UC Riverside have developed a method for squeezing tungsten lamp light into a 6-nanometer area at the end of a silver nanowire. 

Rather than having to settle for detecting molecular vibrations, scientists may now achieve color imaging at an "unprecedented" level. 

The researchers tweaked an existing "superfocusing" technology (which was previously used to detect vibrations) to detect signals throughout the visible spectrum. 

Light travels along a conical path, similar to that of a flashlight. 

The device records the impact of an object on the form and color of the beam as the nanowire's tip passes over it; the light is then directed into a spectrometer. 


By capturing two spectra for every 6 nm pixel, the scientists can make color photographs of carbon nanotubes that would otherwise appear gray. 



Scientists have created new materials for next-generation electronics that are so small that, when tightly packed, they are not only indistinguishable from one another but also fail to reflect enough light for even the most powerful optical microscopes to reveal minute details such as colors. 

Carbon nanotubes, for example, appear grey under an optical microscope. 

Because they can't differentiate minute details and differences between individual pieces, scientists find it difficult to investigate nanomaterials' unique features and to find methods to improve them for industrial application. 


Researchers from UC Riverside describe a revolutionary imaging technology that compresses lamp light into a nanometer-sized spot in a new paper published in Nature Communications. 


Like a Hogwarts student learning the "Lumos" spell, it keeps the light at the end of a silver nanowire and utilizes it to show previously unseen features, including colors. 

Scientists will be able to examine nanomaterials in enough detail to make them more useful in electronics and other applications thanks to the breakthrough, which improves color imaging resolution to an unparalleled 6 nanometer level. 

With a superfocusing approach developed by the team, Ming Liu and Ruoxue Yan, associate professors at UC Riverside's Marlan and Rosemary Bourns College of Engineering, created this unique instrument. 


Previous research utilized the technology to examine molecular bond vibrations at 1-nanometer spatial resolution without the need for a focusing lens. 



Liu and Yan improved the method in the current paper to measure signals covering the whole visible wavelength range, which may be used to produce color and portray the object's electrical band structures rather than just molecular vibrations. 

Light from a tungsten lamp is squeezed into a silver nanowire with near-zero scattering or reflection, where it is conveyed by the oscillation wave of free electrons at the silver surface. 

The condensed light travels in a conical route from the silver nanowire tip, which has a radius of only 5 nanometers, similar to a flashlight's light beam. 

The impact of an item on the beam shape and color is detected and recorded as the tip passes over it. 

"It's like controlling the water spray from a hose with your thumb," Liu said. 

"You know how to change the thumb position to acquire the desired spraying pattern, and similarly, in the experiment, we read the light pattern to extract the specifics of the item obstructing the 5 nm-sized light nozzle." The light is then concentrated into a spectrometer, where it takes the shape of a small ring. 


The researchers can colorize absorption and scattering pictures by scanning the probe across an area and capturing two spectra for each pixel. 
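That per-pixel colorization step can be pictured with a short sketch. The band boundaries, array shapes, and function names below are illustrative assumptions, not the authors' actual colorimetric pipeline:

```python
# Hedged sketch: turning one measured visible-light spectrum per scan pixel
# into an RGB color. Real colorimetry would use CIE matching functions; the
# crude blue/green/red bands here are only for illustration.

import numpy as np

WAVELENGTHS = np.arange(400, 701, 10)  # nm, sampled across the visible range

def spectrum_to_rgb(spectrum: np.ndarray) -> tuple:
    """Integrate intensity over crude blue/green/red bands, then normalize."""
    bands = {"b": (400, 500), "g": (500, 600), "r": (600, 700)}
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (WAVELENGTHS >= lo) & (WAVELENGTHS < hi)
        out[name] = float(spectrum[mask].sum())
    total = max(sum(out.values()), 1e-12)  # avoid division by zero
    return tuple(out[c] / total for c in ("r", "g", "b"))

def colorize(scan: np.ndarray) -> np.ndarray:
    """scan: (rows, cols, n_wavelengths) -> (rows, cols, 3) RGB image."""
    rows, cols, _ = scan.shape
    img = np.zeros((rows, cols, 3))
    for i in range(rows):
        for j in range(cols):
            img[i, j] = spectrum_to_rgb(scan[i, j])
    return img
```

In the actual experiment two such spectra (absorption and scattering) are recorded per pixel, so the real reconstruction carries more information than this single-spectrum sketch.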


The previously grey carbon nanotubes are photographed in color for the first time, and each carbon nanotube may now display its own distinct hue. 

"The imaging is dependent on the atomically clean sharp-tip silver nanowire and its almost scatterless optical coupling and focusing," Yan stated. 

"Otherwise, there would be a lot of stray light in the backdrop, which would sabotage the entire thing." The researchers believe the new approach will be useful in assisting the semiconductor sector in producing homogenous nanomaterials with consistent characteristics for use in electronic devices. 

The new full-color nano-imaging approach should help researchers learn more about catalysis, quantum optics, and nanoelectronics. 

Xuezhi Ma, who worked on the topic as part of his doctoral research at UC Riverside, joined Liu and Yan in the study. 


The study is titled "6 nm super-resolution optical transmission and scattering spectroscopic imaging of carbon nanotubes employing a nanometer-scale white light source." 


Although the ability to compress light is impressive in and of itself, the creators believe it will play a significant role in nanotechnology. 

Semiconductor manufacturers may be able to create more consistent nanomaterials for use in chips and other tightly packed devices. 

The constricted light might also help mankind grasp nanoelectronics, quantum optics, and other scientific domains that haven't had this resolution before.


~ Jai Krishna Ponnappan


You May Also Want To Read More About Nano Technology here.



Quantum Computing - Enter Quantinuum!




Honeywell and quantum software company Cambridge Quantum have finalized a transaction in which Honeywell's Quantum Solutions branch was split off and combined with Cambridge Quantum to become Quantinuum. 


The agreement, which was revealed around six months ago, helped to stoke interest in quantum-related investment, IPOs, and acquisitions. 

We'll have to wait and see what happens once Honeywell sends Quantinuum out into the world with up to $300 million in cash. 

Q-CTRL, a company that focuses on quantum computing control and software solutions, has announced a $25 million fundraising round led by Airbus Ventures and other investors. 


Airbus's involvement is unsurprising, given that the aerospace and military industries provide some of the nearest-term commercial prospects for quantum computing applications. 



Quantum spin liquids, according to a team of Harvard-led researchers, appear to have nothing to do with liquid as we know it: they are a form of magnetic matter in which spinning electrons, when frozen, form a "fluctuating" solid. 

This might lead to more durable qubits. 


This week, Google AI highlighted a recent experiment with time crystals. 

While it sounds like something comic book heroes might use to travel around the multiverse, a time crystal consists of layers of atoms in an oscillating pattern "made in time." 


Google's Sycamore quantum processor was used to prove that these time crystals may be seen. 


According to a blog post, "observing a time crystal reveals how quantum computers might be utilized to examine unique physical phenomena that have perplexed scientists for years." 

"Moving from theory to observation is a fundamental step that forms the basis of each scientific breakthrough. 

This kind of research opens the door to a slew of new experiments, not only in physics, but potentially in a variety of other domains as well..." 

Finland has officially entered the space race for quantum computing. 

According to quantum computing start-up IQM and Finland's VTT Technical Research Centre, the country's first operational 5-qubit quantum computer is up and running. 





Artificial Intelligence - AI And Robotics In The Battlefield.

 



Because of the growth of artificial intelligence (AI) and robotics and their application to military matters, generals on the contemporary battlefield are seeing a possible tactical and strategic revolution.

Unmanned aerial vehicles (UAVs), also known as drones, and other robotic devices played a key role in the wars in Afghanistan (2001–) and Iraq (2003–2011).

It is possible that future conflicts will be waged without the participation of humans.

Without human control or guidance, autonomous robots will fight in war on land, in the air, and beneath the water.

While this vision remains in the realm of science fiction, battlefield AI and robotics raise a slew of practical, ethical, and legal issues that military leaders, technologists, jurists, and philosophers must address.

When many people think about AI and robotics on the battlefield, the first image that springs to mind is "killer robots," armed machines that indiscriminately destroy everything in their path.

There are, however, a variety of applications for battlefield AI that do not include killing.

In recent wars, the most notable application of such technology has been peaceful in character.

UAVs are often employed for surveillance and reconnaissance.

Other robots, such as iRobot's PackBot (made by the same firm as the vacuum-cleaning Roomba), are employed to locate and assess improvised explosive devices (IEDs), making their safe disposal easier.

Robotic devices can navigate treacherous terrain, such as Afghanistan's caves and mountain crags, as well as areas too dangerous for humans, such as under a vehicle suspected of being rigged with an IED.

Unmanned Underwater Vehicles (UUVs) are also used to detect mines underwater.

IEDs and explosives are so common on today's battlefields that these robotic gadgets are priceless.

Another potential life-saving capacity of battlefield robots that has yet to be realized is in the realm of medicine.

Robots can safely collect injured troops on the battlefield in areas that are inaccessible to their human counterparts, without jeopardizing their own lives.

Robots may also transport medical supplies and medications to troops on the battlefield, as well as conduct basic first aid and other emergency medical operations.

AI and robots have the greatest potential to change the battlefield—whether on land, sea, or in the air—in the arena of deadly power.

The Aegis Combat System (ACS) is an example of an autonomous system used by several militaries across the globe aboard destroyers and other naval combat vessels.

Through radar and sonar, the system can detect approaching threats, such as missiles from the surface or air, mines, or torpedoes from the water.

The system is equipped with a powerful computer system and can use its own munitions to eliminate identified threats.

Despite the fact that Aegis is activated and supervised manually, it has the potential to operate autonomously in order to counter threats faster than humans could.
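A system of this kind can be thought of as a sense-decide-act loop: detect a contact, classify it, and recommend or take a response. The sketch below is purely illustrative; the names, thresholds, and rules are invented for the example and bear no relation to the real Aegis software:

```python
# Illustrative sketch of the decision stage of a supervised sense-decide-act
# loop, loosely in the spirit of systems like the Aegis Combat System.
# All fields, kinds, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Contact:
    kind: str        # e.g., "missile", "torpedo", "aircraft"
    range_km: float  # distance from the ship
    closing: bool    # is it approaching?

def classify_threat(contact: Contact) -> str:
    """Return 'engage', 'alert', or 'ignore' for a detected contact."""
    if not contact.closing:
        return "ignore"
    if contact.kind in ("missile", "torpedo") and contact.range_km < 20:
        # In a supervised mode a human confirms this recommendation;
        # in a fully autonomous mode the system would act on it directly.
        return "engage"
    return "alert"
```

The point the text makes sits in that one comment: the same classification logic can feed either a human decision-maker or an autonomous effector, and the difference between the two modes is a policy choice, not a technical one.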

In addition to partly automated systems like the ACS and UAVs, completely autonomous military robots capable of making judgments and acting on their own may be developed in the future.

The most significant feature of AI-powered robotics is the development of lethal autonomous weapons (LAWs), sometimes known as "killer robots." Robot autonomy exists on a sliding scale.

At one extreme of the spectrum are robots that are designed to operate autonomously, but only in reaction to a specific stimulus and in one direction.

This degree of autonomy is shown by a mine that detonates autonomously when stepped on.

Remotely operated machines, which are unmanned yet controlled remotely by a person, are also available at the lowest end of the range.

Semiautonomous systems occupy the midpoint of the spectrum.

These systems may be able to work without the assistance of a person, but only to a limited extent.

A robot commanded to launch, go to a certain area, and then return at a specific time is an example of such a system.

The machine does not make any "decisions" on its own in this situation.

Semiautonomous devices may also be configured to accomplish part of a task before waiting for further inputs before moving on to the next step.

Full autonomy is the last step.

Fully autonomous robots are designed with a purpose and are capable of achieving it entirely on their own.

This might include the capacity to use deadly force without direct human guidance in warfare circumstances.
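The sliding scale described above can be summarized as an ordered set of levels. The following sketch simply encodes the text's four categories; the ordering and the operator flag are illustrative assumptions, not a standard taxonomy:

```python
# A sketch encoding the autonomy spectrum described in the text as an
# ordered enum, from stimulus-bound devices to fully autonomous systems.

from enum import IntEnum

class Autonomy(IntEnum):
    REACTIVE = 1  # fixed response to a specific stimulus (e.g., a mine)
    REMOTE = 2    # unmanned, but steered moment to moment by a human
    SEMI = 3      # executes bounded tasks, then waits for further input
    FULL = 4      # pursues a goal entirely on its own, including lethal force

def requires_operator(level: Autonomy) -> bool:
    """Does a human have to be in the loop moment to moment?"""
    return level == Autonomy.REMOTE
```

Using `IntEnum` makes the levels comparable, mirroring the text's claim that these categories form a spectrum rather than disjoint classes.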

Robotic gadgets that are lethally equipped, AI-enhanced, and totally autonomous have the ability to radically transform the current warfare.

Armies would expand to include ground forces made up of both humans and robots, or entirely of robots with no humans at all.

Small, armed UAVs would not be constrained by the requirement for human operators, and they might be assembled in massive swarms to overwhelm bigger, but less mobile troops.

Such technological advancements will entail equally dramatic shifts in tactics, strategy, and even the notion of combat.

This technology will become less expensive as it becomes more widely accessible.

This might disturb the present military power balance.

Even minor governments, and maybe even non-state organizations like terrorist groups, may be able to develop their own robotic army.

Fully autonomous LAWs bring up a slew of practical, ethical, and legal issues.

One of the most pressing practical considerations is safety.

A completely autonomous robot with deadly armament that malfunctions might represent a major threat to everyone who comes in contact with it.

Fully autonomous missiles might theoretically wander off course and kill innocent people due to a mechanical failure.

Unpredictable technological faults and malfunctions may occur in any kind of apparatus.

Such issues offer a severe safety concern to individuals who deploy deadly robotic gadgets as well as unwitting bystanders.

Even in the absence of mechanical faults, limitations in programming may result in potentially disastrous errors.

Programming robots to discriminate between combatants and noncombatants, for example, is a big challenge, and it's simple to envisage misidentification leading to unintentional fatalities.

The greatest concern, though, is that robotic AI may grow too quickly and become independent of human control.

Sentient robots might turn their armament on humans, as in popular science fiction movies and literature, fulfilling eminent scientist Stephen Hawking's grim forecast that the development of AI could end in humanity's annihilation.

Laws may also lead to major legal issues.

The rules of war apply to human beings.

Robots cannot be held accountable for prospective law crimes, whether criminally, civilly, or in any other manner.

As a result, there's a chance that war crimes or other legal violations may go unpunished.

Here are some serious issues to consider: Can the programmer or engineer of a robot be held liable for the machine's actions? Could a person who gave the robot its "command" be held liable for the robot's unpredictability or blunders on a mission that was otherwise self-directed? Such considerations must be thoroughly considered before any completely autonomous deadly equipment is deployed.

Aside from legal issues of duty, a slew of ethical issues must be addressed.

The conduct of war necessitates split-second moral judgments.

Will self-driving robots be able to tell the difference between a kid and a soldier, or between a wounded and helpless soldier and an active combatant? Will a robotic military force always be seen as a cold, brutal, and merciless army of destruction, or can a robot be designed to behave kindly when the situation demands it? Because combat is riddled with moral dilemmas, LAWs involved in war will always be confronted with them.

Experts wonder whether dangerous autonomous robots can ever be trusted to do the right thing.

Moral action requires not just rationality—which robots may be capable of—but also emotions, empathy, and wisdom.

These latter qualities are much more difficult to implement in code.

Many individuals have called for an absolute ban on research in this field because of the legal, ethical, and practical problems raised by the prospect of ever more powerful AI-powered robotic military hardware.

Others, on the other hand, believe that scientific advancement cannot be halted.

Rather than prohibiting such study, they argue that scientists and society as a whole should seek realistic answers to the difficulties.

Some argue that keeping continual human supervision and control over robotic military units may address many of the ethical and legal issues.

Others argue that direct supervision is unlikely in the long term because human intellect will be unable to match the pace with which computers think and act.

As the side that gives its robotic troops more autonomy gains an overwhelming advantage over those who strive to preserve human control, there will be an inevitable trend toward more and more autonomy.

They warn that fully autonomous forces will always triumph.

Despite the fact that it is still in its early stages, the introduction of more complex AI and robotic equipment to the battlefield has already resulted in significant change.

AI and robotics on the battlefield have the potential to drastically transform the future of warfare.

It remains to be seen if and how this technology's technical, practical, legal, and ethical limits can be addressed.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons 
Systems.


Further Reading

Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics, Law, and Technology 2, no. 1: n.p. https://www.degruyter.com/view/journals/selt/2/1/article-selt.2008.2.1.1036.xml.xml.

Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.” Military Review 98, no. 4 (July–August): 18–29.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton.

Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. London: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.



What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...