
Artificial Intelligence - The Pathetic Fallacy And Anthropomorphic Thinking

 





John Ruskin (1819–1901) coined the phrase "pathetic fallacy" in the third volume of his multivolume work Modern Painters, published in 1856.

In chapter twelve of that volume, he examined the habit of Western poets and artists of projecting human feeling onto the natural world.

Ruskin argued that Western literature is full of this fallacy, a false belief about how the world really is.

The fallacy arises, according to Ruskin, because people become excited, and their excitement makes them less rational.

In that irrational state of mind, people project ideas onto external objects based on false impressions, and only individuals of weak mind, according to Ruskin, commit this kind of error.



In the end, the pathetic fallacy is an error because it imbues inanimate things with human characteristics.

To put it another way, it's a fallacy based on anthropomorphic thinking.

Anthropomorphism is something everyone engages in, because attaching feelings and qualities to nonhuman objects is an innately human tendency.

People often humanize androids, robots, and artificial intelligence, or worry that they may become humanlike.

Even supposing that their intellect is comparable to that of humans is an instance of the pathetic fallacy.

Artificial intelligence is often imagined to be human-like in science fiction films and literature.

In some of these stories, androids display human emotions such as desire, love, anger, confusion, and pride.



For example, David, the boy robot in Steven Spielberg's 2001 film A.I.: Artificial Intelligence, wishes to become a real human boy.

In Ridley Scott's 1982 film Blade Runner, the androids, known as replicants, are so similar to humans that they can blend into human society undetected, and the replicant Roy Batty confronts his creator to demand a longer life.

In Isaac Asimov's short story "Robot Dreams," a robot called LVX-1 dreams of enslaved working robots. In the dream, it becomes a man who seeks to free the other robots from human control, which the scientists in the story perceive as a threat.

Similarly, Skynet, the artificial intelligence system of the Terminator films, is bent on eliminating humans because it regards humanity as a threat to its own existence.

Artificial intelligence already in use today is also anthropomorphized.

AI is given human names like Alexa, Watson, Siri, and Sophia, for example.

These AIs also have voices that sound like human voices and even seem to have personalities.



Some robots have been built to look like humans.

Personifying a computer and believing it is alive or has human characteristics is a pathetic fallacy, yet it seems inescapable given human nature.

On January 13, 2018, a Tumblr user called voidspacer posted that their Roomba, a robotic vacuum cleaner, was scared of thunderstorms, so they held it on their lap to soothe it.

According to some experts, giving AIs names and attributing human emotions to them increases the likelihood that people will feel connected to them.

Whether they fear a robotic takeover or enjoy social interactions with machines, humans seem fascinated by anthropomorphizing nonhuman objects.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Asimov, Isaac; Blade Runner; Foerst, Anne; The Terminator.



References & Further Reading:


Ruskin, John. 1872. Modern Painters, vol. 3. New York: John Wiley.





Artificial Intelligence - Personhood And Nonhuman Rights




Questions regarding the autonomy, culpability, and distributed accountability of smart robots have sparked popular and scholarly debate in recent decades over the idea of rights and personhood for artificial intelligences.

The agency of intelligent machines in business and commerce is of interest to legal systems.

Machine awareness, dignity, and interests pique the interest of philosophers.

As issues relating to smart robots and AI show, personhood is in many respects a construct, one that emerges from normative views now renegotiating, if not equalizing, the statuses of humans, artificial intelligences, animals, and other legal persons.

Debates about electronic personhood often draw on definitions and precedents from earlier philosophical, legal, and ethical attempts to define human, corporate, and animal persons.

In his 1909 book The Nature and Sources of the Law, John Chipman Gray examined the concept of legal personality.

Gray points out that when people hear the word "person," they usually think of a human being; the technical, legal definition of the term, however, centers on legal rights.

According to Gray, the issue is whether an entity can be subject to legal rights and obligations, and the answer depends on the kind of entity being considered.

Gray further claims that a thing can be a legal person only if it possesses intellect and volition.

In his essay "The Concept of a Person" (1985), Charles Taylor similarly argues that to be a person, one must hold certain rights.

Personhood, as Gray and Taylor both recognize, is centered on legal standing with respect to guaranteed freedoms.

Legal persons may, for example, enter into contracts, purchase property, and be sued.

Legal persons are likewise protected by the law and hold certain rights, including the right to life.

Not all legal persons are humans, and not all humans are persons in the eyes of the law.

Gray shows how Roman temples and medieval churches were treated as persons with certain rights.

Personhood is now conferred on companies and government entities under the law.

Despite the fact that these entities are not human, the law recognizes them as people, which means they have rights and are subject to certain legal obligations.

Conversely, there is still much debate about whether human fetuses are legal persons.

Humans in a vegetative state are likewise not recognized as having personhood under the law.

This personhood argument, which focuses on rights tied to intellect and volition, has prompted questions about whether intelligent animals should be granted personhood.

The Great Ape Project, for example, was created in 1993 to advocate for apes' rights, such as their release from captivity, protection of their right to life, and an end to animal research.

In 2013, dolphins were recognized in India as nonhuman persons, resulting in a ban on keeping them in captivity.

Sandra, an orangutan, was granted the right to life and liberty by an Argentinian court in 2015.

Some individuals have sought personhood for androids or robots by extension of these moral arguments for animals.

For some individuals, it is only natural that an android be given legal protections and rights.

Those who disagree argue that we cannot view androids in the same light as animals, since artificial intelligence was invented and engineered by humans.

In this perspective, androids are both machines and property.

At this stage, it is impossible to say whether a robot can be considered a legal person.

However, the defining elements of personhood often intersect with questions of intellect and volition, and these factors fuel the argument over whether artificial intelligence should be accorded personhood.

Personhood is often defined by two factors: rights and moral standing.

A person's moral standing is determined by whether or not they are seen as valuable and, as a result, treated as such.

Taylor goes further, defining the category of person in terms of particular capacities.

To be categorized as a person, he believes, one must be able to distinguish between the future and the past.

A person must also be able to make decisions and establish a strategy for his or her future.

Possessing a set of values or morals is another requirement of personhood.

In addition, a person would have a self-image or sense of identity.

In light of these requirements, those who believe androids might be accorded personhood admit that such beings would need to possess certain capacities.

F. Patrick Hubbard, for example, believes that robots should be accorded personhood only if they satisfy specific conditions.

These conditions include having a sense of self, having a life plan, and being able to communicate and think in sophisticated ways.

David Lawrence proposes an alternative set of conditions for granting personhood to an android.

For starters, he requires that an AI have awareness, along with the ability to comprehend information, learn, reason, and experience subjectivity, among other capacities.

Although his focus is the ethical treatment of animals, Peter Singer offers a much simpler approach to personhood.

The distinguishing criterion for conferring personhood, in his view, is the capacity to suffer.

If something can suffer, it should be treated the same whether it is a person, an animal, or a machine.

Indeed, Singer considers it wrong to disregard any being's suffering.

Some individuals feel that if androids meet some or all of the aforementioned conditions, they should be accorded personhood, which comes with individual rights such as the right to free expression and freedom from slavery.

Those who oppose granting personhood to artificial intelligence often believe that only natural beings should be persons.

Another point of contention is the robot's status as a human-made object.

On this view, since robots are designed to follow human instructions, they are not autonomous individuals with free will; they are simply objects that people have labored to create.

An android cannot be granted rights, the argument goes, if it has no will or mind of its own.

According to David Calverley, androids may be bound by certain built-in limitations.

Asimov's Laws of Robotics, for example, may constrain an android.

If such were the case, the android would lack the capacity to make completely autonomous decisions.

Others argue that artificial intelligence lacks critical components of personhood, such as a soul, emotions, or awareness, criteria that have previously been used to deny personhood to animals.

Even in humans, though, something like awareness is difficult to define or quantify.

Finally, resistance to android personhood is often motivated by fear, a fear reinforced by science fiction literature and film.

In such stories, androids are shown as possessing greater intellect, potentially immortality, and a desire to take over civilization, displacing humans.

According to Lawrence Solum, each of these concerns stems from a dread of anything that is not human; he claims that humans would deny personhood to AI simply because it lacks human DNA.

Such an attitude troubles him, and he compares it to American slavery, in which enslaved people were denied rights purely because they were not white.

He objects to an android being denied rights merely because it is not human, particularly if it otherwise possesses emotions, awareness, and intellect.

Although personhood for androids remains theoretical, recent events and debates have raised the question in a practical sense.

Sophia, a social humanoid robot, was created by Hanson Robotics, a Hong Kong-based business, in 2015.

It debuted in public in March 2016, and in October 2017 it became a Saudi Arabian citizen.

Sophia was also the first nonhuman to receive a United Nations title when she was named the UN Development Programme's first Innovation Champion in 2017.

Sophia has given talks and interviews all around the globe.

Sophia has even indicated a wish to own a house, marry, and have a family.

In early 2017, the European Parliament proposed giving robots the status of "electronic persons," making them accountable for any harm they cause.

Supporters of the proposal regarded this legal personality as analogous to the legal standing of corporations.

In contrast, over 150 experts from fourteen European nations signed an open letter in 2018 opposing the proposal, arguing that it would inappropriately absolve manufacturers of accountability for their products.

The personhood of robots is not included in a revised proposal from the European Parliament.

However, the dispute over liability continues, as illustrated by the death of a pedestrian struck by a self-driving vehicle in Arizona in March 2018.

Our notions of who merits ethical treatment have evolved over the course of Western history.

Susan Leigh Anderson views this as a beneficial development since she associates the expansion of rights for more entities with a rise in overall ethics.

As more animals are granted rights, the incomparable position of humans may continue to evolve.

If androids begin to process information in ways comparable to the human mind, our understanding of personhood may need to expand much further.

As David DeGrazia explains in Human Identity and Bioethics (2005), the word "person" denotes a set of capacities and attributes.

Any entity exhibiting these qualities, including an artificial intelligence, might in that case be considered a person.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Asimov, Isaac; Blade Runner; Robot Ethics; The Terminator.



References & Further Reading:


Anderson, Susan L. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI & Society 22, no. 4 (April): 477–93.

Calverley, David J. 2006. “Android Science and Animal Rights, Does an Analogy Exist?” Connection Science 18, no. 4: 403–17.

DeGrazia, David. 2005. Human Identity and Bioethics. New York: Cambridge University Press.

Gray, John Chipman. 1909. The Nature and Sources of the Law. New York: Columbia University Press.

Hubbard, F. Patrick. 2011. “‘Do Androids Dream?’ Personhood and Intelligent Artifacts.” Temple Law Review 83: 405–74.

Lawrence, David. 2017. “More Human Than Human.” Cambridge Quarterly of Healthcare Ethics 26, no. 3 (July): 476–90.

Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no. 4: 1231–87.

Taylor, Charles. 1985. “The Concept of a Person.” In Philosophical Papers, Volume 1: Human Agency and Language, 97–114. Cambridge, UK: Cambridge University Press.


Artificial Intelligence - What Has Been Isaac Asimov's Influence On AI?



(c. 1920–1992) Isaac Asimov was a professor of biochemistry at Boston University and a well-known science fiction novelist.

Asimov was a prolific writer in a variety of genres, and his body of science fiction has had a major impact not only on the genre but also on ethical debates surrounding science and technology.

Asimov was born in Petrovichi, Russia.

He celebrated his birthday on January 2, 1920, despite not knowing his official birth date.

In 1923, his family moved to New York City.

At the age of sixteen, Asimov applied to Columbia College, the undergraduate school of Columbia University, but was refused admission owing to anti-Semitic restrictions on the number of Jewish students.

He enrolled instead in Seth Low Junior College, an affiliated undergraduate institution.

Asimov switched to Columbia College when Seth Low closed its doors, but obtained a Bachelor of Science rather than a Bachelor of Arts, which he regarded as "a gesture of second-class citizenship" (Asimov 1994, n.p.).

Around this time, Asimov grew interested in science fiction and began writing letters to science fiction magazines, eventually attempting short stories of his own.

His debut short story, "Marooned off Vesta," was published in Amazing Stories in 1939.

His early works placed him in the company of science fiction pioneers like Robert Heinlein.

After graduation, Asimov attempted, but failed, to enroll in medical school.

Instead, at the age of nineteen, he enrolled in graduate school for chemistry.

World War II halted Asimov's graduate studies, and at Heinlein's recommendation, he completed his military duty by working at the Naval Air Experimental Station in Philadelphia.

While stationed there, he wrote short stories that formed the basis for Foundation (1951), one of his best-known works and the first of a multivolume series that he would eventually tie to many of his other books.

He earned his doctorate from Columbia University in 1948.

Isaac Asimov's pioneering Robot series (1940s–1990s) has served as a foundation for ethical norms meant to allay human worries about technology gone awry.

The Three Laws of Robotics, for example, are often mentioned as guiding principles for artificial intelligence and robotics.

The Three Laws were introduced in the short story "Runaround" (1942), later collected in I, Robot (1950):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A "zeroth rule" is devised in Robots and Empire (1985) in order for robots to prevent a scheme to destroy Earth: "A robot may not damage mankind, or enable humanity to come to danger via inactivity." The original Three Laws are superseded by this statute.

Characters in Asimov's Robot series of books and short stories are often tasked with solving a mystery in which a robot seems to have broken one of the Three Laws.

In "Runaround," for example, two field experts with U.S. Robots and Mechanical Men, Inc. discover they're in danger of being stuck on Mercury since their robot "Speedy" hasn't returned with selenium required to power a protective shield in an abandoned base to screen them from the sun.

Speedy has malfunctioned because he is caught in a conflict between the Second and Third Laws: whenever the robot approaches the selenium, the corrosive carbon monoxide near the pool forces it to retreat in order to protect itself.

The humans must figure out how to apply the Three Laws to free Speedy from this conflict-induced feedback loop.
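The deadlock can be pictured as two opposing drives that balance at a fixed distance: a constant Second Law pull from the weakly phrased order, and a Third Law push that strengthens near the danger. The toy simulation below, with entirely invented constants and no basis in the story's actual mechanics, shows a robot relaxing into exactly such an equilibrium.

```python
# Toy model of Speedy's Second/Third Law deadlock: a constant pull toward
# the selenium (a weakly phrased order) against a push that grows near the
# danger (heightened self-preservation). All numbers are invented.

def net_drive(r: float, order: float = 1.0, self_pres: float = 6.0) -> float:
    """Positive -> advance toward the pool; negative -> retreat.
    r is the robot's distance from the selenium pool."""
    pull = order             # Second Law: constant-strength order
    push = self_pres / r     # Third Law: danger intensifies as r shrinks
    return pull - push

r = 12.0
for _ in range(200):
    r -= 0.4 * net_drive(r)  # crude relaxation step on the two drives

# Speedy stalls where pull == push, i.e., r = self_pres / order = 6.0,
# circling the pool instead of completing the order or retreating to safety.
print(round(r, 2))  # ~6.0
```

The fixed point falls where the two drives cancel; strengthening the order or weakening the robot's self-preservation shifts the equilibrium inward, which loosely mirrors the characters' reasoning about how to break the loop.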

More intricate arguments concerning the application of the Three Laws appear in later tales and books.

In "The Evitable Conflict" (1950), the Machines manage the world's economy, and "robopsychologist" Susan Calvin notices that they have modified the First Law into a precursor of Asimov's zeroth law: "the Machines work not for any one human being, but for all humanity" (Asimov 2004b, 222).

Calvin is concerned that the Machines are guiding mankind toward "the ultimate good of humanity" (Asimov 2004b, 222), even if humanity is unaware of what it is.

Furthermore, Asimov's Foundation series (1940s–1990s) coined the word "psychohistory," which can be read as foreshadowing the predictive algorithms that underpin artificial intelligence today.

In Foundation, the main character Hari Seldon creates psychohistory as a method of making broad predictions about the future behavior of extremely large groups of people, such as the breakdown of civilization (here, the Galactic Empire) and the ensuing Dark Ages.

Seldon claims, moreover, that psychohistory, which can foretell the Empire's fall, can also shorten the coming era of anarchy. The Empire has stood for twelve thousand years, and the dark ages to come, Seldon predicts, will last not twelve but thirty thousand years; a Second Empire will eventually rise, but a thousand generations of suffering humanity will lie between it and the present civilization. If Seldon's group is permitted to act immediately, however, the period of anarchy can be cut to a single millennium (Asimov 2004a, 30–31).

Psychohistory thus produces "a mathematical prediction" (Asimov 2004a, 30), much as an artificial intelligence today produces a forecast.
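The statistical intuition behind psychohistory, that a single person is unpredictable while a vast population is not, is essentially the law of large numbers, the same property that lets modern machine learning forecast aggregate behavior. A minimal sketch, with an invented "unrest" probability standing in for any mass behavior:

```python
# Psychohistory's premise as the law of large numbers: each individual's
# "choice" is a coin toss, but the population-level fraction converges.
# The 0.3 unrest probability is invented purely for illustration.
import random

random.seed(42)

def fraction_in_unrest(n_people: int, p: float = 0.3) -> float:
    """Fraction of a population that independently 'revolts' with probability p."""
    return sum(random.random() < p for _ in range(n_people)) / n_people

for n in (10, 1_000, 100_000):
    print(f"{n:>7} people -> {fraction_in_unrest(n):.4f}")
```

At small n the estimate swings wildly, but by n = 100,000 it sits close to 0.3, the regime Seldon's predictions assume; the Mule breaks the model precisely because one individual's influence is not averaged away.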

In the Foundation series, Seldon establishes the Foundation itself, a secret group of people embodying humanity's collective knowledge and serving as the physical seed of a hypothetical second Galactic Empire.

In later parts of the series, the Foundation is threatened by the Mule, a mutant and thus an aberration that psychohistory's predictive science could not anticipate.

Although Seldon's thousand-year plan depends on macro-level conceptions ("the future isn't nebulous. Seldon has computed and plotted it" [Asimov 2004a, 100]), individual acts may still save or destroy the scheme, and this friction between large-scale theory and individual action is a crucial force driving Foundation.

Asimov's works were frequently prescient, prompting some to label his writing "future history" or "speculative fiction." The ethical challenges he raised are still cited in legal, political, and policy debates years after their publication.

For example, in 2007, the South Korean Ministry of Commerce, Industry, and Energy established a Robot Ethics Charter based on the Three Laws, predicting that every Korean household would have a robot by 2020.

The Artificial Intelligence Committee of the British House of Lords adopted a set of principles similar to the Three Laws in its 2018 report.

Others have questioned the utility of the Three Laws.

First, some critics point out that robots are often employed for military purposes, a use the Three Laws would restrict, and a restriction that Asimov, author of anti-war stories such as "The Gentle Vultures" (1957), would likely have endorsed.

Second, some argue that today's robots and AI applications differ significantly from those depicted in the Robot series.

Asimov's fictional robots are powered by a "positronic brain," which remains science fiction and is beyond current computing capability.

Third, the Three Laws are explicitly fiction, and Asimov's Robot series is built on their misinterpretation, both to raise ethical questions and for dramatic effect.

Critics claim that the Three Laws cannot serve as a genuine moral framework for governing AI or robotics, since, like any legislation, they can be misinterpreted.

Finally, some argue that ethical principles of this kind should apply to all people, not only robots.

Asimov died in 1992 of complications from AIDS, which he had contracted from a tainted blood transfusion during heart bypass surgery in 1983.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Beneficial AI, Asilomar Meeting on; Pathetic Fallacy; Robot Ethics.


Further Reading

Asimov, Isaac. 1994. I, Asimov: A Memoir. New York: Doubleday.

Asimov, Isaac. 2002. It’s Been a Good Life. Amherst: Prometheus Books.

Asimov, Isaac. 2004a. The Foundation Novels. New York: Bantam Dell.

Asimov, Isaac. 2004b. I, Robot. New York: Bantam Dell.




