
Artificial Intelligence - How Does 'The Terminator' Represent AI?


Released in 1984, The Terminator grossed nearly $40 million at the domestic box office (plus untold sums in ancillary markets) and spawned a multi-film franchise that continues to this day; it remains one of the most well-known representations of artificial intelligence in popular culture.

Although most of the film takes place in 1984, it depicts a future in which Skynet, a military-designed artificial intelligence system, becomes self-aware and wages war on mankind.

Shots from that future show roaming robot destroyers hunting humans, who appear to be on the verge of extinction, across a battlefield littered with mechanical wreckage and human bones.

The majority of the movie focuses on a T-800 terminator (Arnold Schwarzenegger) sent back to 1984 to murder Sarah Connor (Linda Hamilton) before she can give birth to John Connor, humanity's future savior.

The fundamental story element of The Terminator dramatizes what is likely the most prominent artificial intelligence myth in popular culture: depicting intelligent machines as inherently dangerous beings capable of rebelling against mankind in pursuit of their own agenda.

To fully comprehend the relevance of The Terminator, further information regarding the story as well as the character of the terminator must be provided.

In the year 2029, after a nuclear catastrophe, Earth is embroiled in a conflict between humans and Skynet-created robots.

Although more information about Skynet is disclosed in later installments of the series, its workings remain a mystery in the first film.

John Connor (who is mentioned only in passing in the film) leads a human resistance force that, though once on the verge of defeat, eventually overcomes the machines.

To foil the Connor-led revolt, the machines build a time-travel device and send one of their terminator units back in time to assassinate Sarah Connor before she can conceive her savior son.

To thwart the robots' plot, the human resistance sends back their own operative, Kyle Reese (Michael Biehn).

Reese is supposed to protect Sarah Connor while the terminator is supposed to murder her.

As a result, the rest of the film, set in 1984, plays out as a cat-and-mouse pursuit in which the terminator repeatedly tracks down Connor and Reese, who escape only by the skin of their teeth.

In the film's climax, the terminator, scorched down to its mechanical endoskeleton, pursues Connor and Reese into a factory.

Reese sacrifices himself by jamming a homemade pipe bomb into the terminator's abdomen; the blast kills Reese and severs the terminator in two.

The terminator's torso continues to pursue Connor, who finally crushes it in a hydraulic press.

The film then cuts to a pregnant Sarah Connor traveling across Mexico some months later.

Reese is revealed to be John Connor's father in this scene.

The terminator itself is a striking fictional example of artificial intelligence.

It can walk, speak, sense, and act like a human being, all while being a programmed killing machine.

The film shows that it can absorb interactional subtleties and adjust its behavior based on previous experiences and interactions.

In a phone conversation, the terminator even simulates the voice of Sarah Connor's mother, convincing Sarah to divulge her whereabouts.

In these respects, the terminator can unquestionably pass the Turing Test (a test in which a machine succeeds if a human interlocutor cannot determine whether they are communicating with a human or a machine).

The terminator, on the other hand, is devoid of human awareness and is guided by mechanical logic as it completes a task.

The terminator is shot, run over by a Mack truck, and burned down to its endoskeleton, among other traumas, so it is safe to assume it does not perceive pain as humans do.

Pessimistic depictions of artificial intelligence in popular culture, such as The Terminator, offer images of futures to be avoided.

As a result, with its depiction of a future governed by machines and its deployment of a ruthless, unstoppable killing machine, The Terminator serves as a stark warning against a fundamental force driving artificial intelligence development: the military-industrial complex (later installments added the further link of corporate greed).

President Ronald Reagan's Strategic Defense Initiative, unveiled in 1983 (and soon derisively nicknamed "Star Wars" by Senator Ted Kennedy), ratcheted up tensions with the Soviet Union at the time of the film's development.

In a nutshell, the Strategic Defense Initiative was a planned missile defense system intended to protect the nation against ballistic nuclear weapons attacks.

Many believed that Reagan's bluster might spark a nuclear arms race.

The Terminator may therefore be viewed as a criticism of Reagan's Cold War strategy, offering a glimpse of a possible post-apocalyptic future wrought by nuclear catastrophe and the development of ever more powerful weapons.

In summary, the film expresses anxieties about the destructive potential of human creations and the possibility that our own inventions may turn against us.

~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

You may also want to read more about Artificial Intelligence here.

See also: 

Berserkers; de Garis, Hugo; Technological Singularity.

References And Further Reading

Brammer, Rebekah. 2018. “Welcome to the Machine: Artificial Intelligence on Screen.” Screen Education 90 (September): 38–45.

Brown, Richard, and Kevin S. Decker, eds. 2009. Terminator and Philosophy: I’ll Be Back, Therefore I Am. Hoboken, NJ: John Wiley and Sons.

Gramantieri, Riccardo. 2018. “Artificial Monsters: From Cyborg to Artificial Intelligence.” In Monsters of Film, Fiction, and Fable: The Cultural Links between the Human and Inhuman, edited by Lisa Wegner Bro, Crystal O’Leary-Davidson, and Mary Ann Gareis, 287–313. Newcastle upon Tyne, UK: Cambridge Scholars Publishing.

Jancovich, Mark. 1992. “Modernity and Subjectivity in The Terminator: The Machine as Monster in Contemporary American Culture.” Velvet Light Trap 30 (Fall): 3–17.

Artificial Intelligence - The Pathetic Fallacy And Anthropomorphic Thinking


John Ruskin (1819–1901) coined the phrase "pathetic fallacy" in volume three of his multivolume work Modern Painters, published in 1856.

In book three, chapter twelve, he examined the habit of Western poets and artists of projecting human feeling onto the natural world.

Ruskin argued that Western literature is full of this fallacy: a false belief that ascribes feeling where none exists.

The fallacy arises, according to Ruskin, because individuals become overexcited, and their excitement makes them less rational.

In that irrational state of mind, people project ideas onto external objects based on false impressions, and, according to Ruskin, only individuals with weak minds commit this kind of error.

In the end, the pathetic fallacy is an error because it imbues inanimate things with human characteristics.

To put it another way, it's a fallacy based on anthropomorphic thinking.

Because it is innately human to attach feelings and qualities to nonhuman objects, anthropomorphism is a process that everyone goes through.

People often humanize androids, robots, and artificial intelligence, or worry that they may become humanlike.

Even supposing that their intelligence is comparable to that of humans is an instance of the pathetic fallacy.

Artificial intelligence is often imagined to be human-like in science fiction films and literature.

In some of these depictions, androids display human emotions such as desire, love, wrath, perplexity, and pride.

For example, David, the little boy robot in Steven Spielberg's 2001 film A.I. Artificial Intelligence, wishes to become a human boy.

In Ridley Scott's 1982 film Blade Runner, the androids, known as replicants, are similar enough to humans to blend into human society without being recognized, and the replicant Roy Batty tells his creator that he wants to live longer.

In Isaac Asimov's short story "Robot Dreams," a robot called LVX-1 dreams of enslaved working robots; in its dream it becomes a man who seeks to free the robots from human control, which the scientists in the story perceive as a threat.

Similarly, Skynet, the artificial intelligence system of the Terminator films, is bent on eliminating people because it regards mankind as a threat to its own existence.

Artificial intelligence currently in use is also anthropomorphized.

AI is given human names like Alexa, Watson, Siri, and Sophia, for example.

These AIs also have voices that sound like human voices and even seem to have personalities.

Some robots have been built to look like humans.

Personifying a computer and believing it is alive or has human characteristics is a pathetic fallacy, yet it seems inescapable given human nature.

On January 13, 2018, a Tumblr user called voidspacer said that their Roomba, a robotic vacuum cleaner, was afraid of thunderstorms, so they held it gently on their lap to calm it down.

According to some experts, giving AIs names and believing that they have human emotions increases the likelihood that people will feel connected to them.

Whether they fear a robotic takeover or enjoy social interaction with machines, humans seem irresistibly drawn to anthropomorphizing nonhuman objects.


See also: 

Asimov, Isaac; Blade Runner; Foerst, Anne; The Terminator.

References & Further Reading:

Ruskin, John. 1872. Modern Painters, vol. 3. New York: John Wiley.

Artificial Intelligence - Personhood And Nonhuman Rights

Questions regarding the autonomy, culpability, and dispersed accountability of smart robots have sparked a popular and intellectual discussion over the idea of rights and personhood for artificial intelligences in recent decades.

The agency of intelligent computers in business and commerce is of importance to legal systems.

Machine awareness, dignity, and interests pique the interest of philosophers.

As issues relating to smart robots and AI show, personhood is in many respects a legal fiction, one that emerges from normative views now renegotiating, if not equalizing, the statuses of humans, artificial intelligences, animals, and other legal persons.

Definitions and precedents from previous philosophical, legal, and ethical attempts to define human, corporate, and animal persons are often used in debates about electronic personhood.

In his 1909 book The Nature and Sources of the Law, John Chipman Gray examined the concept of legal personality.

Gray points out that when people hear the word "person," they usually think of a human being; nevertheless, the technical, legal definition of the term "person" focuses more on legal rights.

According to Gray, the issue is whether an entity can be subject to legal rights and obligations, and the answer depends on the kind of entity being considered.

Gray, on the other hand, claims that a thing can only be a legal person if it has intellect and volition.

In his essay "The Concept of a Person" (1985), Charles Taylor argues that to be a person, one must hold certain rights.

Personhood, as Gray and Taylor both recognize, centers on legal standing with respect to guaranteed freedoms.

Legal individuals may, for example, engage into contracts, purchase property, and be sued.

Legal people are likewise protected by the law and have certain rights, including the right to life.

Not all legal people are humans, and not all humans are persons in the perspective of the law.

Gray demonstrates how Roman temples and medieval churches were seen as individuals with certain rights.

Personhood is now conferred to companies and government entities under the law.

Despite the fact that these entities are not human, the law recognizes them as people, which means they have rights and are subject to certain legal obligations.

Alternatively, there is still a lot of discussion regarding whether human fetuses are legal persons.

Humans in a vegetative condition are likewise not recognized as having personhood under the law.

This personhood argument, which ties rights to intellect and volition, has prompted questions about whether intelligent animals should be granted personhood.

The Great Ape Project, for example, was created in 1993 to advocate for apes' rights, such as their release from captivity, protection of their right to life, and an end to animal research.

In 2013, India recognized dolphins as nonhuman persons, resulting in a prohibition on keeping them in captivity.

Sandra, an orangutan, was granted the right to life and liberty by an Argentinian court in 2015.

Some individuals have sought personhood for androids or robots based on moral concerns for animals.

For some individuals, it is only natural that an android be given legal protections and rights.

Those who disagree think that we cannot see androids in the same light as animals since artificial intelligence was invented and engineered by humans.

In this perspective, androids are both machines and property.

At this stage, it's impossible to say if a robot may be considered a legal person.

However, since the defining elements of personhood often intersect with concerns of intellect and volition, the argument over whether artificial intelligence should be accorded personhood is fueled by these factors.

Personhood is often defined by two factors: rights and moral standing.

A person's moral standing is determined by whether or not they are seen as valuable and, as a result, treated as such.

However, Taylor goes on to define the category of person by focusing on certain abilities.

To be classified as a person, he believes, one must be able to distinguish between the future and the past.

A person must also be able to make decisions and establish a strategy for his or her future.

A person must also have a set of values or morals in order to be considered a person.

In addition, a person's self-image or sense of identity would exist.

In light of these requirements, those who believe that androids might be accorded personhood acknowledge that such beings would need to possess certain capacities.

F. Patrick Hubbard, for example, believes that robots should be accorded personhood only if they satisfy specific conditions.

These qualities include having a sense of self, having a life goal, and being able to communicate and think in sophisticated ways.

David Lawrence proposes an alternative set of conditions for granting personhood to an android.

He begins with the requirement that an AI have awareness, along with the ability to comprehend information, learn, reason, and possess subjectivity.

Although his focus is on the ethical treatment of animals, Peter Singer offers a much simpler approach to personhood.

In his view, the defining criterion for conferring personhood is the capacity to suffer.

If anything can suffer, it should be treated the same regardless of whether it is a person, an animal, or a computer.

In fact, Singer considers it wrong to deny any being's pain.

Some individuals feel that if androids meet some or all of the aforementioned conditions, they should be accorded personhood, which comes with individual rights such as the right to free expression and freedom from slavery.

Those who oppose artificial intelligence being awarded personhood often feel that only natural creatures should be given personhood.

Another point of contention is the robot's position as a human-made item.

In this situation, since robots are designed to follow human instructions, they are not autonomous individuals with free will; they are just an item that people have worked hard to create.

It's impossible to give an android rights if it doesn't have its own will and independent mind.

Certain limitations may bind androids, according to David Calverley.

Asimov's Laws of Robotics, for example, may constrain an android.

If such were the case, the android would lack the capacity to make completely autonomous decisions.

Others argue that artificial intelligence lacks critical components of personhood, such as a soul, emotions, or awareness, criteria that have previously been invoked to deny personhood to animals.

Even in humans, though, anything like awareness is difficult to define or quantify.

Finally, resistance to android personhood is often motivated by fear, a fear reinforced by science fiction literature and films.

In such stories, androids are shown as possessing greater intellect, potentially immortality, and a desire to take over civilization, displacing humans.

Each of these concerns, according to Lawrence Solum, stems from a dread of anything that is not human, and he claims that humans reject personhood for AIs only because AIs lack human DNA.

Such an attitude bothers him, and he compares it to American slavery, in which slaves were denied rights purely because they were not white.

He objects to an android being denied rights just because it is not human, particularly since other things have emotions, awareness, and intellect.

Although the concept of personality for androids is still theoretical, recent events and discussions have brought it up in a practical sense.

Sophia, a social humanoid robot, was created by Hanson Robotics, a Hong Kong-based business, in 2015.

It debuted in public in March 2016, and in October 2017, it became a Saudi Arabian citizen.

Sophia was also the first nonhuman to be conferred a United Nations title when she was dubbed the UN Development Program's inaugural Innovation Champion in 2017.

Sophia has given talks and interviews all around the globe.

Sophia has even indicated a wish to own a house, marry, and have a family.

In early 2017, the European Parliament proposed giving robots the status of "electronic persons," making them accountable for any harm they cause.

Supporters of the proposal likened this status to the legal personhood already enjoyed by corporations.

In contrast, over 150 experts from 14 European nations signed an open letter in 2018 opposing the proposal, arguing that it would inappropriately absolve manufacturers of accountability for their products.

The personhood of robots is not included in a revised proposal from the European Parliament.

However, the dispute about culpability continues, as illustrated by the killing of a pedestrian by a self-driving vehicle in Arizona in March 2018.

Our notions about who merits ethical treatment have evolved through time in Western history.

Susan Leigh Anderson views this as a beneficial development since she associates the expansion of rights for more entities with a rise in overall ethics.

As more animals are granted rights, the unique position of humans may continue to evolve.

If androids begin to process information in ways comparable to the human mind, our understanding of personhood may need to expand even further.

As David DeGrazia explains in Human Identity and Bioethics (2005), the word "person" covers a set of capacities and attributes.

In that case, any entity exhibiting these qualities, including an artificial intelligence, might be considered a person.


See also: 

Asimov, Isaac; Blade Runner; Robot Ethics; The Terminator.

References & Further Reading:

Anderson, Susan L. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI & Society 22, no. 4 (April): 477–93.

Calverley, David J. 2006. “Android Science and Animal Rights, Does an Analogy Exist?” Connection Science 18, no. 4: 403–17.

DeGrazia, David. 2005. Human Identity and Bioethics. New York: Cambridge University Press.

Gray, John Chipman. 1909. The Nature and Sources of the Law. New York: Columbia University Press.

Hubbard, F. Patrick. 2011. “‘Do Androids Dream?’ Personhood and Intelligent Artifacts.” Temple Law Review 83: 405–74.

Lawrence, David. 2017. “More Human Than Human.” Cambridge Quarterly of Healthcare Ethics 26, no. 3 (July): 476–90.

Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no. 4: 1231–87.

Taylor, Charles. 1985. “The Concept of a Person.” In Philosophical Papers, Volume 1: Human Agency and Language, 97–114. Cambridge, UK: Cambridge University Press.

Artificial Intelligence - Who Is Hugo de Garis?


Hugo de Garis (1947–) is an expert in genetic algorithms, artificial intelligence, and topological quantum computing.

He is the creator of the concept of evolvable hardware, which uses evolutionary algorithms to produce customized electronics that can alter structural design and performance dynamically and autonomously in response to their surroundings.

De Garis is best known for his 2005 book The Artilect War, in which he describes what he believes will be an unavoidable twenty-first-century global war between mankind and ultraintelligent machines.

In the 1980s, de Garis became fascinated by genetic algorithms, neural networks, and the idea of artificial brains.

In artificial intelligence, genetic algorithms use software to model and apply Darwinian evolutionary principles (selection, crossover, and mutation) to search and optimization problems.

De Garis developed evolutionary algorithms that bred the "fittest" candidate simulations of axons, dendrites, signals, and synapses in artificial neural networks.
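The basic loop such algorithms rely on can be sketched in a few lines of Python. This is a minimal, generic genetic algorithm; the genome size, population size, mutation rate, and the toy "one-max" fitness function are illustrative assumptions, not details of de Garis's neural-network systems.

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=60, mutation_rate=0.02):
    """Evolve bit-string genomes toward higher fitness using the core
    Darwinian loop of a genetic algorithm: selection, crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the "fittest" half of the population as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)           # one-point crossover
            child = [g ^ (random.random() < mutation_rate)  # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness function ("one-max"): count the 1-bits in the genome.
best = evolve(fitness=sum)
print(sum(best))  # approaches genome_len as evolution proceeds
```

In de Garis's work, the bit string encoded neural-network parameters rather than a toy target, and fitness measured how well the evolved network performed its task.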

De Garis developed artificial neural systems that resembled those seen in organic brains.

In the 1990s, his work with a new type of programmable computer chips spawned the subject of computer science known as evolvable hardware.

The use of programmable circuits allowed neural networks to grow and evolve at high rates.

De Garis also started playing around with cellular automata, which are mathematical models of complex systems that emerge generatively from basic units and rules.

An early version of his cellular automata models of brain-like networks required the coding of around 11,000 fundamental rules.

About 60,000 such rules were encoded in a subsequent version.
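The flavor of such rule-based systems is easiest to see in the simplest possible case: a one-dimensional binary cellular automaton whose entire rule table fits in a single byte. The choice of Rule 110 below is purely illustrative and is not one of de Garis's brain-network rules.

```python
def step(cells, rule=110):
    """Advance a 1-D binary cellular automaton by one generation.
    Each cell's next state is the bit of `rule` indexed by its
    (left, self, right) neighborhood, with wraparound at the edges."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Start from a single live cell and watch structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Despite the one-byte rule table, the printed rows develop intricate, non-repeating structure, illustrating the generative emergence from basic units and rules that de Garis exploited at far larger scale.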

De Garis called his neural networks-on-a-chip a Cellular Automata Machine in the 2000s.

De Garis started to hypothesize that the period of "Brain Building on the Cheap" had come as the price of chips dropped (de Garis 2005, 45).

He began referring to himself as the "Father of the Artificial Brain." He claims that in the coming decades, whole artificial brains with billions of neurons will be built using information acquired from molecular-scale robotic probes of human brain tissue and from new pathbreaking brain-imaging tools.

Topological quantum computing is another enabling technology that de Garis thinks will accelerate the creation of artificial brains.

He claims that once the physical boundaries of standard silicon chip manufacturing are approached, quantum mechanical phenomena must be harnessed.

Innovations in reversible, heatless computing will also be important for avoiding the harmful thermal effects of tightly packed circuits.

De Garis also supports the development of artificial embryology, often known as "embryofacture," which involves the use of evolutionary engineering and self-assembly methods to mimic the development of fully aware beings from single fertilized eggs.

According to de Garis, owing to rapid breakthroughs in artificial intelligence technology, a war over humanity's last invention will be unavoidable before the end of the twenty-first century.

He believes the conflict will end with a catastrophic human extinction event he calls "gigadeath." In The Artilect War, de Garis speculates that the continued Moore's Law doubling of transistors packed onto computer chips, together with new technologies such as femtotechnology (the achievement of femtometer-scale structuring of matter), quantum computing, and neuroengineering, will almost certainly lead to this outcome.

De Garis felt compelled to write The Artilect War as a cautionary tale and as a self-admitted architect of the impending calamity.

De Garis frames his discussion of the impending Artilect War around two antagonistic worldwide political blocs: the Cosmists and the Terrans.

The Cosmists will be apprehensive of the immense power of future superintelligent machines, but they will regard their labor in creating them with such veneration that they will experience a near-messianic enthusiasm in inventing and unleashing them into the world.

Regardless of the hazards to mankind, the Cosmists will strongly encourage the development and nurturing of ever-more sophisticated and powerful artificial minds.

The Terrans, on the other hand, will fight against the creation of artificial minds once they realize they represent a danger to human civilization.

They will feel compelled to fight these artificial intelligences because they constitute an existential danger to humanity.

De Garis dismisses a Cyborgian compromise in which humans and their technological creations blend.

He thinks that robots will grow so powerful and intelligent that only a small percentage of humanity would survive the confrontation.

Geopolitical adversaries such as China and the United States will be forced to exploit these technologies to develop more complex and autonomous economies, defense systems, and military robots.

Artificial intelligence's dominance in the world will be welcomed by the Cosmists, who will come to see them as near-gods deserving of worship.

The Terrans, on the other hand, will fight the transfer of global economic, social, and military dominance to our machine overlords.

They will see the new situation as a terrible tragedy that has befallen humanity.

His case for a future war over superintelligent machines has sparked much discussion and controversy among scientists and engineers, as well as considerable criticism in popular science journals.

In his 2005 book, de Garis implicates himself as a cause of the approaching conflict and as a hidden Cosmist, prompting some opponents to question his intentions.

De Garis has answered that he feels compelled to issue a warning now because he thinks there will be enough time for the public to understand the full magnitude of the danger and react when they begin to discover substantial intelligence hidden in household equipment.

For those who take his warning seriously, de Garis presents a variety of possible scenarios.

First, he suggests that the Terrans may be able to defeat Cosmist thinking before a superintelligence takes control, though this is unlikely.

De Garis suggests a second scenario in which the artilects dismiss the earth as irrelevant and depart, leaving human civilization more or less intact.

In a third possibility, the Cosmists grow so terrified of their own innovations that they abandon them.

Again, de Garis believes this is improbable.

In a fourth possibility, he imagines that all Terrans would transform into Cyborgs.

In a fifth scenario, the Terrans aggressively hunt down and kill the Cosmists, perhaps even in outer space.

The Cosmists will leave Earth, construct artilects, and ultimately vanish from the solar system to conquer the cosmos in a sixth scenario.

In a seventh possibility, the Cosmists will flee to space and construct artilects that will fight each other until none remain.

In the eighth scenario, the artilects will go to space and be destroyed by an alien super-artilect.

De Garis has been criticized for assuming that The Terminator's nightmarish vision will become reality, rather than considering that superintelligent computers might just as well bring world peace.

De Garis answered that there is no way to ensure that artificial brains operate ethically (humanely).

He also claims that it is difficult to foretell whether or not a superintelligence would be able to bypass an implanted death switch or reprogram itself to disobey orders aimed at instilling human respect.

Hugo de Garis was born in 1947 in Sydney, Australia.

In 1970, he graduated from Melbourne University with a bachelor's degree in Applied Mathematics and Theoretical Physics.

He joined the global electronics corporation Philips as a software and hardware architect after teaching undergraduate mathematics at Cambridge University for four years.

He worked at locations in the Netherlands and Belgium.

In 1992, De Garis received a doctorate in Artificial Life and Artificial Intelligence from the Université Libre de Bruxelles in Belgium.

"Genetic Programming: GenNets, Artificial Nervous Systems, Artificial Embryos," was the title of his thesis.

De Garis directed the Center for Data Analysis and Stochastic Processes at the Artificial Intelligence and Artificial Life Research Unit at Brussels as a graduate student, where he explored evolutionary engineering, which uses genetic algorithms to develop complex systems.

He also worked as a senior research associate at George Mason University's Artificial Intelligence Center in Northern Virginia, where he worked with machine learning pioneer Ryszard Michalski.

De Garis completed a postdoctoral fellowship at the Electrotechnical Laboratory in Tsukuba, Japan.

He directed the Brain Builder Group at the Advanced Telecommunications Research Institute International in Kyoto, Japan, for the following eight years, while they attempted a moon-shot quest to develop a billion-neuron artificial brain.

De Garis returned to Brussels, Belgium, in 2000 to oversee Star Lab's Brain Builder Group, which was working on a rival artificial brain project.

When the dot-com bubble burst in 2001, de Garis's lab went bankrupt while it was working on a life-size robot cat.

De Garis then moved on to Utah State University as an Associate Professor of Computer Science, where he stayed until 2006.

De Garis was the first to teach advanced research courses on "brain building" and "quantum computing" at Utah State.

He joined Wuhan University's International School of Software in China as Professor of Computer Science and Mathematical Physics in 2006, where he also served as the leader of the Artificial Intelligence group.

De Garis kept working on artificial brains, but he also started looking into topological quantum computing.

De Garis joined the advisory board of Novamente, a commercial business that aims to develop artificial general intelligence, in the same year.

Two years later, Chinese authorities gave his Wuhan University Brain Builder Group a significant funding to begin building an artificial brain.

The China-Brain Project was the name given to the initiative.

De Garis relocated to Xiamen University in China in 2008, where he ran the Artificial Brain Lab in the School of Information Science and Technology's Artificial Intelligence Institute until his retirement in 2010.


See also: 

Superintelligence; Technological Singularity; The Terminator.

Further Reading:

de Garis, Hugo. 1989. “What If AI Succeeds? The Rise of the Twenty-First Century Artilect.” AI Magazine 10, no. 2 (Summer): 17–22.

de Garis, Hugo. 1990. “Genetic Programming: Modular Evolution for Darwin Machines.” In Proceedings of the International Joint Conference on Neural Networks, 194–97. Washington, DC: Lawrence Erlbaum.

de Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications.

de Garis, Hugo. 2007. “Artificial Brains.” In Artificial General Intelligence: Cognitive Technologies, edited by Ben Goertzel and Cassio Pennachin, 159–74. Berlin: Springer.

Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76, no. 1 (March): 138–66.

Spears, William M., Kenneth A. De Jong, Thomas Bäck, David B. Fogel, and Hugo de Garis. 1993. “An Overview of Evolutionary Computation.” In Machine Learning: ECML-93, Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 667, 442–59. Berlin: Springer.

Artificial Intelligence - What Are AI Berserkers?


Berserkers are intelligent killing machines first described by science fiction and fantasy novelist Fred Saberhagen (1930–2007) in his 1962 short story "Without a Thought." They later appeared as recurring antagonists in many more of Saberhagen's novels and novellas.

Berserkers are a sentient, self-replicating race of space-faring robots with the mission of annihilating all life.

They were built as an ultimate doomsday weapon (i.e., intended more as a threat or deterrent than for actual use) in a long-forgotten interplanetary conflict between two extraterrestrial civilizations.

The facts of how the Berserkers were released are lost to time, since they seem to have killed off their creators as well as their foes and have been ravaging the Milky Way galaxy ever since.

They come in a variety of sizes, from human-scale units to heavily armored planetoids (cf. the Death Star), and are equipped with a variety of weaponry capable of sterilizing entire worlds.

Any sentient species that fights back, such as humanity, becomes a priority target for the Berserkers.

They construct factories to replicate and improve themselves, but their basic objective of exterminating life remains unchanged.

It is unclear how far they evolve; some individual units end up questioning or even changing their goals, while others develop strategic brilliance (e.g., Brother Assassin, "Mr. Jester," Rogue Berserker, Shiva in Steel).

While the Berserkers' ultimate purpose of annihilating all life is clear, their tactical behavior is unpredictable, owing to randomizers in their cores driven by radioactive decay.

Their name derives from the berserkers of Norse mythology, fearsome human warriors who fought in a frenzy.

Berserkers depict a worst-case scenario for artificial intelligence: killer machines that think, learn, and reproduce relentlessly and without emotion.

They demonstrate the deadly hubris of equipping AI with powerful weapons, a destructive goal, and unrestrained self-replication, allowing it to escape its creators' comprehension and control.

If Berserkers were ever built and released, they could pose an inexhaustible danger to living creatures across enormous swaths of space and time.

Once unleashed, they are extremely difficult to eradicate.

This is owing to their superior defenses and weaponry, their widespread distribution, their ability to repair and replicate themselves, their autonomous operation (i.e., without centralized control), their capacity to learn and adapt, and their limitless patience to lie in wait.

In Saberhagen's books, the discovery of the Berserkers is so horrifying that human civilizations are terrified of constructing their own AI, for fear that it might turn against its creators.

Some astute humans, however, devise a fitting counter-weapon: Qwib-Qwibs, self-replicating machines programmed to eliminate all Berserkers rather than all life ("Itself Surprised" by Roger Zelazny).

Humans have also employed cyborgs as an anti-Berserker strategy, pushing the boundaries of what constitutes biological intelligence (Berserker Man, Berserker Prime, Berserker Kill).

The Berserkers also exemplify artificial intelligence's potential for inscrutability and strangeness.

Although Berserkers can communicate with one another, their vast brains are largely unintelligible to the sentient organic lifeforms fleeing or battling them, and they are difficult to study owing to their proclivity to self-destruct when captured.

What can be deduced of their reasoning is that they regard life as a plague, a disease of matter that must be eradicated.

As a result, the Berserkers lack a thorough understanding of biological intelligence and have never been able to adequately duplicate organic life, despite many attempts.

They do, however, sometimes recruit human defectors (dubbed "goodlife") to aid them in their struggle against "badlife" (i.e., any life that resists extermination).

Nonetheless, Berserkers and humans think in almost irreconcilable ways, hindering attempts to reach a common understanding between life and nonlife.

Much of the conflict in the tales hinges on the apparent contrasts between human and machine intellect (e.g., artistic appreciation, empathy for animals, a sense of humor, a tendency to make mistakes, the use of acronyms as mnemonics, and even fake encyclopedia entries planted to detect plagiarism).

Berserkers have even been defeated by non-sentient life-forms such as plants and mantis shrimp ("Pressure" and "Smasher").

Berserkers may be seen as a special case of the von Neumann probe, conceived by mathematician and physicist John von Neumann (1903–1957): self-replicating space-faring machines that could be deployed across the galaxy to explore it efficiently.

The Berserker tales also explore, and upend, the Turing Test developed by mathematician and computer scientist Alan Turing (1912–1954).
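The power of self-replication behind both Berserkers and von Neumann probes can be illustrated with a back-of-the-envelope calculation (a toy model, not from the source; the function name and star count are illustrative): because each probe builds copies that in turn build further copies, the population grows geometrically, so even a single machine could in principle saturate the galaxy's roughly 100 billion star systems within a few dozen replication cycles.

```python
def generations_to_cover(star_systems: int, copies_per_generation: int) -> int:
    """Smallest number of generations after which a population that starts
    from a single probe, with each probe building `copies_per_generation`
    replicas per generation, reaches `star_systems` probes."""
    generations, population = 0, 1
    while population < star_systems:
        population *= copies_per_generation
        generations += 1
    return generations

STARS_IN_MILKY_WAY = 100_000_000_000  # rough order-of-magnitude estimate

print(generations_to_cover(STARS_IN_MILKY_WAY, 2))   # doubling: 37 generations
print(generations_to_cover(STARS_IN_MILKY_WAY, 10))  # tenfold: 11 generations
```

Because the number of generations grows only logarithmically with the number of targets, even a modest replication rate makes an unbottled self-replicator a galaxy-scale threat, which is the premise the Berserker stories dramatize.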

In "Inhuman Error," human castaways compete with a Berserker to persuade a rescue crew that they are human, while in "Without a Thought," a Berserker tries to figure out whether its game opponent is human.

Berserkers also offer an explanation for the Fermi paradox, the idea that if intelligent extraterrestrial civilizations exist, we should have heard from them by now.

It is possible that extraterrestrial civilizations have never contacted Earth because they were destroyed by Berserker-like machines, or because they are hiding from them.

Berserkers, or anything like them, have featured in a number of science fiction books in addition to Saberhagen's (e.g., works by Greg Bear, Gregory Benford, David Brin, Ann Leckie, and Martha Wells; the Terminator series of movies; and the Mass Effect series of video games).

All of these instances demonstrate how the potential for existential risks posed by AI may be investigated in the lab of fiction.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

de Garis, Hugo; Superintelligence; The Terminator.

Further Reading

Saberhagen, Fred. 2015a. Berserkers: The Early Tales. Albuquerque: JSS Literary Productions.

Saberhagen, Fred. 2015b. Berserkers: The Later Tales. Albuquerque: JSS Literary Productions.

Saberhagen’s Worlds of SF and Fantasy.

The TAJ: Official Fan site of Fred Saberhagen’s Berserker® Universe.
