
Artificial Intelligence - The Pathetic Fallacy And Anthropomorphic Thinking

 





In his multivolume work Modern Painters, published in 1856, John Ruskin (1819–1901) coined the phrase "pathetic fallacy."

In book three, chapter twelve, he examined the tendency of Western poets and artists to attribute human feeling to the natural world.

Ruskin argued that Western literature is steeped in this fallacy, a belief that persists despite being false.

The fallacy arises, according to Ruskin, when people become emotionally excited and their excitement makes them less rational.

In that irrational state of mind, people project ideas onto external objects based on false impressions, and Ruskin held that only those with comparatively weak minds commit this kind of error.



Ultimately, the pathetic fallacy is an error because it imbues inanimate things with human characteristics.

To put it another way, it's a fallacy based on anthropomorphic thinking.

Anthropomorphism is something everyone engages in, because attaching feelings and human qualities to nonhuman things is an innately human tendency.

People often humanize androids, robots, and artificial intelligence, or worry that they may become humanlike.

Even assuming that their intelligence is comparable to that of humans is a form of the pathetic fallacy.

Artificial intelligence is often imagined to be human-like in science fiction films and literature.

In some of these stories, androids display human emotions such as desire, love, anger, confusion, and pride.



For example, David, the robotic boy in Steven Spielberg's 2001 film A.I. Artificial Intelligence, wishes to become a real human boy.

In Ridley Scott's 1982 film Blade Runner, the androids, known as replicants, are similar enough to humans to pass unrecognized in human society, and the replicant Roy Batty wants a longer life, a demand he makes of his creator.

In Isaac Asimov's short story "Robot Dreams," a robot designated LVX-1 dreams of enslaved robot laborers. In the dream it becomes a man who seeks to free the robots from human control, which the scientists in the story perceive as a threat.

Similarly, Skynet, the artificial intelligence system in the Terminator films, is bent on eliminating humans because it regards humanity as a threat to its own existence.

Artificial intelligence in use today is also anthropomorphized.

AI systems are given human names such as Alexa, Watson, Siri, and Sophia, for example.

These AIs also have voices that sound like human voices and even seem to have personalities.



Some robots have been built to look like humans.

Personifying a computer and believing it is alive or has human characteristics is a pathetic fallacy, yet it seems almost inescapable given human nature.

On January 13, 2018, a Tumblr user called voidspacer posted that their Roomba, a robotic vacuum cleaner, was scared of thunderstorms, so they held it on their lap to soothe it.

According to some experts, giving AIs names and believing they have human emotions makes it more likely that people will feel connected to them.

Whether they fear a robot takeover or enjoy social interaction with machines, humans seem drawn to anthropomorphizing nonhuman things.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Asimov, Isaac; Blade Runner; Foerst, Anne; The Terminator.



References & Further Reading:


Ruskin, John. 1872. Modern Painters, vol. 3. New York: John Wiley.




