Showing posts with label Pathetic Fallacy. Show all posts

Artificial Intelligence - Who Is Anne Foerst?



Anne Foerst (1966–) is a Lutheran minister, theologian, author, and computer science professor at St. Bonaventure University in Allegany, New York.

In 1996, Foerst earned a doctorate in theology from Ruhr University Bochum in Germany.

She has worked as a research associate at Harvard Divinity School, a project director at MIT, and a research scientist at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory.

She supervised the God and Computers Project at MIT, which encouraged people to discuss the existential questions raised by scientific research.

Foerst has written several scientific and popular pieces on the need for improved conversation between religion and science, as well as shifting concepts of personhood in the light of robotics research.

God in the Machine, published in 2004, details her work as a theological counselor to the MIT Cog and Kismet robotics teams.

Foerst's study has been influenced by her work as a hospital counselor, her years at MIT collecting ethnographic data, and the writings of German-American Lutheran philosopher and theologian Paul Tillich.

As a medical counselor, she started to rethink what it meant to be a "normal" human being.

Foerst was inspired to investigate the circumstances under which individuals are believed to be persons after seeing variations in physical and mental capabilities in patients.

In her work, Foerst distinguishes between the terms "human" and "person": human refers to members of our biological species, while person refers to a being granted a form of social inclusion that can be revoked.

Foerst uses the Holocaust as an illustration of how personhood must be conferred but may also be revoked.

As a result, personhood is always vulnerable.

Using this schema of personhood, understood as something people bestow on one another, Foerst can explore whether robots might be included as persons.

Tillich's ideas on sin, alienation, and relationality are extended to the connections between humans and robots, as well as robots and other robots, in her work on robots as potential people.

  • People become alienated, according to Tillich, when they ignore opposing polarities in their lives, such as the need for safety versus the need for novelty or freedom.
  • People reject reality, which is fundamentally ambiguous, when they refuse to recognize and interact with these opposing forces, cutting out or neglecting one side in order to concentrate entirely on the other.
  • People are alienated from their lives, from the people around them, and (for Tillich) from God if they do not accept the complicated conflicts of existence.

In AI research, these opposite poles of danger and opportunity take the form of, on one side, the threat of reducing all things to objects or data that can be measured and studied and, on the other, the opportunity to enhance people's capacity to form relationships and confer identity.

Foerst has attempted to establish a dialogue between theology and other structured fields of inquiry, following Tillich's paradigm.

Although Foerst's work has been warmly received in labs and classrooms, it has also met skepticism and pushback from some who worry that she is bringing counterfactual notions into the realm of science.

These concerns are crucial data for Foerst, who argues for a mutualistic approach in which AI researchers and theologians acknowledge each other's strongly held preconceptions about the universe and the human condition in order to have fruitful discussions.

Many valuable discoveries emerge from these dialogues, Foerst finds, as long as the parties have the humility to admit that neither side has a perfect grasp of the universe or of human existence.

Foerst's work on AI is marked by humility, as she claims that researchers are startled by the vast complexity of the human person while seeking to duplicate human cognition, function, and form in the figure of the robot.

The way people are socially rooted, socially conditioned, and socially accountable adds to the complexity of any particular person.

Because human beings' embedded complexity is intrinsically physical, Foerst emphasizes the significance of an embodied approach to AI.

Foerst explored this embodied technique while at MIT, where having a physical body capable of interaction is essential for robotic research and development.

When addressing the evolution of artificial intelligence (AI), Foerst's work emphasizes a clear distinction between robots and computers.

Robots have bodies, and those bodies are an important aspect of their learning and interaction abilities.

Although supercomputers can accomplish amazing analytic jobs and participate in certain forms of communication, they lack the ability to learn through experience and interact with others.

Foerst is dismissive of research that assumes intelligent computers can be created by re-creating the human brain.

Rather, she contends that bodies are an important part of intellect.

Foerst proposes raising robots in a manner similar to human child-rearing, giving robots opportunities to interact with and learn from their environment.

This process is costly and time-consuming, just as it is for human children. Foerst reports that funding for creative, time-intensive AI research has vanished since the terrorist attacks of September 11, 2001, replaced by results-driven and military-focused research that justifies itself through immediate applications.

Foerst's work incorporates a broad variety of sources, including religious texts, popular films and television programs, science fiction, and examples from the disciplines of philosophy and computer science.

Loneliness, according to Foerst, is a fundamental motivator of humans' pursuit of artificial life.

Both fictional imaginings of a constructed mechanical companion species and actual robotics and AI research are driven by feelings of alienation, which Foerst ties to the theological notion of a lost connection with God.

Academic opponents of Foerst argue that she has replicated a paradigm originally proposed by German theologian and scholar Rudolf Otto in his book The Idea of the Holy (1917).

The experience of the divine, according to Otto, is found in a moment of simultaneous attraction and dread, which he calls the numinous.

Critics contend that Foerst used this concept when she claimed that humans sense attraction and dread in the figure of the robot.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Embodiment, AI and; Nonhuman Rights and Personhood; Pathetic Fallacy; Robot Ethics; Spiritual Robots.

Further Reading:

Foerst, Anne. 2005. God in the Machine: What Robots Teach Us About Humanity and God. New York: Plume.

Geraci, Robert M. 2007. “Robots and the Sacred in Science Fiction: Theological Implications of Artificial Intelligence.” Zygon 42, no. 4 (December): 961–80.

Gerhart, Mary, and Allan Melvin Russell. 1998. “Cog Is to Us as We Are to God: A Response to Anne Foerst.” Zygon 33, no. 2: 263–69.

Groks Science Radio Show and Podcast with guest Anne Foerst. Audio and transcript available online.

Reich, Helmut K. 1998. “Cog and God: A Response to Anne Foerst.” Zygon 33, no. 2: 255–62.

Artificial Intelligence - How Has The Film Blade Runner (1982) Envisioned AI Androids?


Do Androids Dream of Electric Sheep? by Philip K. Dick was first published in 1968 and is set in a post-apocalyptic, near-future San Francisco.

In 1982, the novel was adapted into the film Blade Runner, set in Los Angeles in the year 2019.

While the two works differ significantly, both tell the story of bounty hunter Rick Deckard, who is tasked with locating (and executing) escaped replicants/androids (six in the novel, four in the film).

The setting of both works is a future in which cities have grown overcrowded and polluted.

Natural nonhuman life has virtually vanished (due to radioactive fallout) and been replaced by synthetic and artificial substitutes.

Natural life has become a valued commodity in the future.

Replicants are meant to perform a variety of industrial functions in this environment, most notably as labor for off-world colonies.

The replicants are an exploited race that was created to serve human masters.

When they are no longer useful, they are discarded; when they struggle against their circumstances, they are "retired."

Blade runners are specialist law enforcement operatives tasked with apprehending and killing renegade replicants.

Rick Deckard, a former blade runner, comes out of retirement to track down the sophisticated Nexus-6 replicant models.

These replicants have escaped to Earth after rebelling against the slave-like conditions on Mars.

In both texts, the treatment of artificial intelligence serves as an implicit critique of capitalism.

The Rosen Association in the book and the Tyrell Corporation in the film develop replicants to create a more docile labor force, implying that capitalism converts people into machines.

Eldon Rosen (renamed Tyrell in the film) voices these commercial imperatives: "We provided what the colonists wanted.... Every commercial venture is founded on that time-honored principle. If our company hadn't developed these progressively more human kinds, other corporations would have."

There are two types of replicants in the movie: those who are designed to be unaware that they are androids and are filled with implanted memories (like Rachael Tyrell), and those who are aware that they are androids and live by that knowledge (the Nexus-6 fugitives).

Rachael in the film is a new Nexus-7 model that has been implanted with the memories of Eldon Tyrell's niece, Lilith. Deckard is sent to murder her, but instead falls in love with her. The two depart the city together at the conclusion of the film.

Rachael's character is handled differently in the book.

Deckard attempts to recruit Rachael's assistance in locating the fugitive androids. Rachael agrees to meet him in a hotel room, hoping to persuade him to abandon the case.

Rachael explains during their encounter that one of the runaway androids (Pris Stratton) is a carbon copy of her (making Rachael a Nexus-6 model in the novel).

Deckard and Rachael have sex and profess their love for each other.

Rachael, however, is revealed to have slept with other blade runners.

She is designed to do just that in order to keep them from fulfilling their tasks.

Deckard threatens to murder Rachael but decides to leave the hotel rather than carry out his threat.

The replicants are undetectable in both the novel and the film.

Even under a microscope, they seem to be totally human.

The Voigt-Kampff test, which separates humans from androids based on emotional reactions to a variety of questions, is the sole method to identify them.

The test is administered with a machine that monitors blush response, heart rate, and eye movement in response to empathy-related questions.
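The test's logic can be sketched as a toy classifier over involuntary responses; the measurements, weights, and threshold below are invented for illustration (the test itself is fictional):

```python
# A toy sketch of the Voigt-Kampff idea: classify a subject from
# involuntary empathic reactions to a series of questions. All
# measurements and thresholds here are invented assumptions.
from statistics import mean

def classify(responses: list[dict]) -> str:
    """Each response records involuntary reactions to one
    empathy-probing question; thresholds are illustrative."""
    blush = mean(r["blush"] for r in responses)
    pupil = mean(r["pupil_dilation"] for r in responses)
    latency = mean(r["response_delay_s"] for r in responses)
    # Humans show fast, strong involuntary empathic reactions.
    score = blush + pupil - latency
    return "human" if score > 1.0 else "replicant"

subject = [
    {"blush": 0.9, "pupil_dilation": 0.8, "response_delay_s": 0.3},
    {"blush": 0.7, "pupil_dilation": 0.9, "response_delay_s": 0.4},
]
print(classify(subject))  # "human" under these invented thresholds
```

The point the sketch preserves is that the fictional test infers a category from physiological responses rather than from any physical examination.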

Whether Deckard himself is human or a replicant is initially left uncertain.

Rachael even asks whether he has taken the Voigt-Kampff test himself.

In the movie, Deckard's position is unclear.

Although the audience is left to decide, director Ridley Scott has hinted that Deckard is a replicant.

At the end of the book, Deckard takes and passes the test, although he begins to question the work of blade running.

More than the movie, the book explores what it means to be human in the face of technological advancements.

The book depicts the fragility of the human experience and how it can be easily harmed by the technology that is supposed to help it.

Individuals with Penfield mood organs, for example, can use them to control their emotions.

All that is required is for a person to look up an emotion in a manual, dial the appropriate number, and then experience whatever emotion they desire.
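The mood organ's dial-and-feel interface can be sketched as a simple lookup; the codes and emotion labels below are invented placeholders, not settings from the novel:

```python
# A toy sketch of the Penfield mood organ interface as Dick describes
# it: look up an emotion in a manual, dial its number, and feel it.
# The codes and labels are invented, not the novel's actual settings.
MOOD_MANUAL = {
    101: "businesslike professional attitude",
    202: "pleased acknowledgment of the day ahead",
    303: "desire to watch television, no matter what is on",
}

def dial(code: int) -> str:
    """Return the emotion the organ would induce for a dialed code."""
    if code not in MOOD_MANUAL:
        raise ValueError(f"code {code} is not listed in the manual")
    return MOOD_MANUAL[code]

print(dial(303))  # "desire to watch television, no matter what is on"
```

The triviality of the interface is the point: emotion becomes a catalog item, which is exactly the danger Iran articulates in the passage that follows.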

The device's use and its generation of artificial feelings imply that people may become robotic, as Deckard's wife Iran points out: "My first reaction was gratitude that we could afford a Penfield mood organ. But then I realized how unhealthy it was to sense the absence of life, not only in this building but everywhere, and not to react - do you know what I mean? I'm assuming you don't. That, however, used to be considered a sign of mental illness, called 'absence of appropriate affect.'" Dick's point is that the mood organ prevents humans from feeling the appropriate emotional aspects of life, which is precisely what the Voigt-Kampff test reveals replicants to be incapable of.

Philip K. Dick was known for his ambiguous and often pessimistic vision of artificial intelligence.

His androids and robots are distinctly ambiguous.

They desire to be like humans, yet they lack empathy and emotions.

This ambiguity pervades Do Androids Dream of Electric Sheep? and carries over to the screen in Blade Runner.

~ Jai Krishna Ponnappan


See also: 

Nonhuman Rights and Personhood; Pathetic Fallacy; Turing Test.

Further Reading

Brammer, Rebekah. 2018. “Welcome to the Machine: Artificial Intelligence on Screen.” Screen Education 90 (September): 38–45.

Fitting, Peter. 1987. “Futurecop: The Neutralization of Revolt in Blade Runner.” Science Fiction Studies 14, no. 3: 340–54.

Sammon, Paul S. 2017. Future Noir: The Making of Blade Runner. New York: Dey Street Books.

Wheale, Nigel. 1991. “Recognising a ‘Human-Thing’: Cyborgs, Robots, and Replicants in Philip K. Dick’s Do Androids Dream of Electric Sheep? and Ridley Scott’s Blade Runner.” Critical Survey 3, no. 3: 297–304.

Artificial Intelligence - What Has Been Isaac Asimov's Influence On AI?

Isaac Asimov (c. 1920–1992) was a professor of biochemistry at Boston University and a well-known science fiction writer.

Asimov was a prolific writer in a variety of genres, and his corpus of science fiction has had a major impact on not just the genre, but also on ethical concerns surrounding science and technology.

Asimov was born in Russia.

He celebrated his birthday on January 2, 1920, despite not knowing his official birth date.

In 1923, his family moved to New York City.

At the age of sixteen, Asimov applied to Columbia College, the undergraduate school of Columbia University, but was refused admission owing to anti-Semitic restrictions on the number of Jewish students.

He instead enrolled in Seth Low Junior College, an affiliated undergraduate institution.

Asimov switched to Columbia College when Seth Low closed its doors, but obtained a Bachelor of Science rather than a Bachelor of Arts, which he regarded as "a gesture of second-class citizenship" (Asimov 1994, n.p.).

Asimov became interested in science fiction around this time, writing letters to science fiction magazines and eventually attempting his own short stories.

His debut short story, "Marooned off Vesta," was published in Amazing Stories in 1939.

His early works placed him in the company of science fiction pioneers like Robert Heinlein.

After graduation, Asimov attempted, but failed, to enroll in medical school.

Instead, at the age of nineteen, he enrolled in graduate school for chemistry.

World War II halted Asimov's graduate studies, and at Heinlein's recommendation, he completed his military service by working at the Naval Air Experimental Station in Philadelphia.

While stationed there, he wrote short stories that formed the basis of Foundation (1951), one of his best-known works and the first of a multi-volume series he would eventually connect to many of his other works.

He earned his doctorate from Columbia University in 1948.

The pioneering Robot series by Isaac Asimov (1950s–1990s) has served as a foundation for ethical norms intended to allay human worries about technology gone awry.

The Three Laws of Robotics, for example, are often mentioned as guiding principles for artificial intelligence and robotics.

The Three Laws first appeared in the short story "Runaround" (1942), which was later collected in I, Robot (1950):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A "zeroth law" is devised in Robots and Empire (1985) so that robots can prevent a scheme to destroy Earth: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." This law supersedes the original Three Laws.
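The strict priority ordering of the laws can be sketched as a toy action filter; the Action fields and the scenario below are illustrative assumptions, not anything specified in Asimov's fiction:

```python
# Toy sketch of the Three Laws (plus the zeroth law) as a strict
# priority ordering over candidate actions. Fields and scoring are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False
    harms_human: bool = False
    obeys_order: bool = True
    self_destructive: bool = False

def permitted(action: Action) -> bool:
    """An action is permitted only if it violates neither the
    Zeroth nor the First Law."""
    if action.harms_humanity:  # Zeroth Law
        return False
    if action.harms_human:     # First Law
        return False
    return True

def choose(actions: list[Action]) -> Action:
    """Among permitted actions, prefer obeying orders (Second Law)
    over self-preservation (Third Law)."""
    legal = [a for a in actions if permitted(a)]
    if not legal:
        raise ValueError("no lawful action available")
    legal.sort(key=lambda a: (not a.obeys_order, a.self_destructive))
    return legal[0]

fetch = Action("fetch selenium", obeys_order=True, self_destructive=True)
retreat = Action("retreat to safety", obeys_order=False, self_destructive=False)
print(choose([fetch, retreat]).name)  # "fetch selenium": orders outrank self-preservation
```

Under this strict ordering, Speedy's dilemma in "Runaround" (described below) could not arise; the story's drama depends on the laws acting as competing gradients rather than a clean priority stack.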

Characters in Asimov's Robot series of books and short stories are often tasked with solving a mystery in which a robot seems to have broken one of the Three Laws.

In "Runaround," for example, two field experts with U.S. Robots and Mechanical Men, Inc. find themselves in danger of being stranded on Mercury because their robot "Speedy" has not returned with the selenium needed to power a protective shield that screens an abandoned base from the sun.

Speedy has malfunctioned because he is caught in a conflict between the Second and Third Laws: as the robot approaches the selenium, it is compelled to retreat in order to protect itself from a corrosive concentration of carbon monoxide near the pool.

The humans must figure out how to apply the Three Laws to free Speedy from this conflict-induced feedback loop.

More intricate arguments concerning the application of the Three Laws appear in later stories and novels.

The Machines manage the world's economy in "The Evitable Conflict" (1950), and "robopsychologist" Susan Calvin notices that they have changed the First Law into a predecessor of Asimov's zeroth law: "the Machines work not for any one human being, but for all humanity" (Asimov 2004b, 222).

Calvin is concerned that the Machines are guiding mankind toward "the ultimate good of humanity" (Asimov 2004b, 222), even if humanity is unaware of what it is.

Furthermore, Asimov's Foundation series (1940s–1990s) coined the word "psychohistory," which may be read as foreshadowing the algorithms that underpin artificial intelligence today.

In Foundation, the main character Hari Seldon creates psychohistory as a method of making broad predictions about the future behavior of extremely large groups of people, such as the breakdown of civilization (here, the Galactic Empire) and the ensuing Dark Ages.

Seldon, however, claims that applying psychohistory can shorten the coming era of anarchy. Psychohistory, which can predict the fall, can also make pronouncements about the coming dark ages: the Empire has stood for twelve thousand years, and the dark ages to come will last not twelve but thirty thousand years. A Second Empire will rise, but a thousand generations of suffering humanity will lie between it and our civilization. If Seldon's group is permitted to begin work immediately, the period of anarchy can be cut to a single millennium.

Psychohistory produces "a mathematical prediction" (Asimov 2004a, 30–31), much as an artificial intelligence produces a forecast.
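The statistical intuition behind psychohistory, that individual choices are unpredictable while aggregates over very large populations are not, can be sketched with the law of large numbers; the population and probability below are invented:

```python
# A minimal sketch (my own illustration, not Asimov's) of the
# statistical intuition behind "psychohistory": one person's choice
# is effectively random, but the aggregate over an enormous
# population concentrates tightly around its expected value.
import random

random.seed(0)

def individual_choice(p_rebel: float = 0.3) -> bool:
    """One person's action is effectively a coin flip."""
    return random.random() < p_rebel

def aggregate_forecast(population: int, p_rebel: float = 0.3) -> float:
    """The fraction of rebels across a huge population concentrates
    around p_rebel (law of large numbers)."""
    return sum(individual_choice(p_rebel) for _ in range(population)) / population

print(aggregate_forecast(1_000_000))  # very close to 0.3
```

This is also why a single aberrant individual like the Mule breaks the model: the law of large numbers says nothing about one outlier with outsized influence.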

In the Foundation series, Seldon establishes the Foundation, a hidden group of people preserving humanity's collective knowledge and thereby serving as the seed of a hypothetical second Galactic Empire.

In later installments of the series, the Foundation is threatened by the Mule, a mutant and therefore an aberration that psychohistory's predictive models could not foresee.

Although Seldon's thousand-year plan depends on macro-level conceptions ("the future isn't nebulous. Seldon has computed and plotted it"; Asimov 2004a, 100), individual acts may save or destroy the scheme, and the friction between large-scale theories and individual actions is a crucial force driving Foundation.

Asimov's works were frequently prescient, prompting some to label his writing "future history" or "speculative fiction." The ethical challenges he raised are often cited in legal, political, and policy debates years after they were published.

For example, in 2007 the South Korean Ministry of Commerce, Industry, and Energy drafted a Robot Ethics Charter informed by the Three Laws, predicting that by 2020 every Korean household would have a robot.

The British House of Lords' Artificial Intelligence Committee proposed a set of guidelines in 2017 that echo the Three Laws.

Others have questioned the usefulness of the Three Laws.

First, some critics point out that robots are often employed for military purposes and that the Three Laws would prohibit such use, a restriction Asimov, author of anti-war short stories such as "The Gentle Vultures" (1957), would likely have endorsed.

Second, some argue that today's robots and AI applications vary significantly from those shown in the Robot series.

Asimov's fictional robots are powered by a "positronic brain," which remains science fiction and beyond current computing capability.

Third, the Three Laws are explicitly fiction, and Asimov's Robot series depends on their ambiguities and failures to raise ethical questions and generate dramatic tension.

Critics claim that the Three Laws cannot serve as a genuine moral framework for governing AI or robotics, since, like any legislation, they can be misinterpreted.

Finally, some argue that these ethical principles should be applied to all people.

Asimov died in 1992 of complications from AIDS, contracted through a contaminated blood transfusion during heart bypass surgery in 1983.

~ Jai Krishna Ponnappan


See also: Beneficial AI, Asilomar Meeting on; Pathetic Fallacy; Robot Ethics.

Further Reading

Asimov, Isaac. 1994. I, Asimov: A Memoir. New York: Doubleday.

Asimov, Isaac. 2002. It’s Been a Good Life. Amherst: Prometheus Books.

Asimov, Isaac. 2004a. The Foundation Novels. New York: Bantam Dell.

Asimov, Isaac. 2004b. I, Robot. New York: Bantam Dell.
