
AI - Technological Singularity

 




The emergence of technologies that could fundamentally change humans' role in society, challenge human epistemic agency and ontological status, and trigger unprecedented and unforeseen developments in all aspects of life, whether biological, social, cultural, or technological, is referred to as the Technological Singularity.

The Technological Singularity is most often associated with artificial intelligence, particularly artificial general intelligence (AGI).

As a result, it is frequently depicted as an intelligence explosion that drives advancements in fields like biotechnology, nanotechnology, and information technology, while also generating entirely new innovations.

The Technological Singularity is sometimes referred to simply as the Singularity, though it should not be confused with a mathematical singularity, to which it bears only a passing resemblance.

This singularity, by contrast, is a loosely defined notion that may be interpreted in a variety of ways, each highlighting distinct elements of technological advance.

The notion of the Technological Singularity dates to the second half of the twentieth century and is commonly associated with the thoughts and writings of John von Neumann (1903–1957), Irving John Good (1916–2009), and Vernor Vinge (1944–).

Several universities, as well as governmental and corporate research institutes, have financed current Technological Singularity research in order to better understand the future of technology and society.

Although it is the subject of profound philosophical and technical debate, the Technological Singularity remains a hypothesis and a fairly open conjecture.

While numerous scholars think that the Technological Singularity is unavoidable, the date of its occurrence is continuously pushed back.

Nonetheless, many studies agree that the issue is not whether the Technological Singularity will occur, but rather when and how it will occur.

Ray Kurzweil proposed a more exact timeline, predicting that the Technological Singularity will emerge around the middle of the twenty-first century.

Others have sought to give a date to this event, but there are no well-founded grounds in support of any such proposal.

Furthermore, without applicable measures or signs, humanity would have no way of knowing whether the Technological Singularity had occurred.

The history of artificial intelligence's unmet promises exemplifies the dangers of attempting to predict the future of technology.

The themes of superintelligence, acceleration, and discontinuity are often used to describe the Technological Singularity.

The term "superintelligence" refers to a quantitative jump in artificial systems' cognitive abilities, putting them much beyond the capabilities of typical human cognition (as measured by standard IQ tests).

Superintelligence, on the other hand, may not be restricted to AI and computer technology.

Through genetic engineering, biological computing systems, or hybrid artificial–natural systems, it may manifest in human agents.

Superintelligence, according to some academics, has boundless intellectual capabilities.

Acceleration refers to the steepening, over time, of the curve marking the advent of key technological events.

Stone tools, the potter's wheel, the steam engine, electricity, atomic power, computers, and the internet are all examples of technological advancement portrayed as a curve across time, emphasizing the discovery of major innovations.

Moore's law, more precisely an empirical observation that has come to be treated as a law, describes this increase in computing capacity.

"Every two years, the number of transistors in a dense integrated circuit doubles," it says.

Proponents expect that, in the event of the Technological Singularity, the emergence of key technical advances and of new technological and scientific paradigms will follow a super-exponential curve.

One prediction regarding the Technological Singularity, for example, is that superintelligent systems would be able to self-improve (and self-replicate) in previously unimaginable ways at an unprecedented pace, pushing the technological development curve far beyond what has ever been witnessed.
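To make the contrast concrete, the toy sketch below (an illustration of ours, not a model from the singularity literature) compares plain Moore's-law-style doubling with a super-exponential schedule in which each doubling arrives sooner than the last; the 0.9 shrink factor, the 200-doubling cap, and the function names are all illustrative assumptions.

# Toy comparison (illustrative assumptions throughout): exponential growth
# with a fixed two-year doubling period versus super-exponential growth in
# which each successive doubling period shrinks by a constant factor.

def exponential(years: float, doubling_period: float = 2.0) -> float:
    """Capacity multiplier after `years` with a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

def super_exponential(years: float, initial_period: float = 2.0,
                      shrink: float = 0.9, max_doublings: int = 200) -> float:
    """Capacity multiplier when each doubling period is `shrink` times the
    previous one. The periods sum to a finite horizon (20 years here), a
    toy analogue of a 'singularity' beyond which growth diverges."""
    capacity, t, period = 1.0, 0.0, initial_period
    for _ in range(max_doublings):
        if t + period > years:
            return capacity
        t += period
        capacity *= 2.0
        period *= shrink  # each doubling arrives sooner than the last
    return float("inf")  # past the model's finite-time horizon

for y in (6, 12, 18):
    print(f"{y:>2} years: exponential x{exponential(y):,.0f}, "
          f"super-exponential x{super_exponential(y):,.0f}")

After 18 years the fixed-period curve has doubled nine times while the accelerating curve has doubled twenty-one times; the gap between the two widens without bound as the horizon approaches.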

The discontinuity of the Technological Singularity is often described as an event horizon, by analogy with the physical concept associated with black holes.

This analogy to a physical phenomenon should be drawn with care, however, rather than being used to attribute the physical world's regularity and predictability to the Technological Singularity.

The limit of our knowledge about physical occurrences beyond a specific point in time is defined by an event horizon (also known as a prediction horizon).

It signifies that there is no way of knowing what will happen beyond the event horizon.

The discontinuity or event horizon in the context of the Technological Singularity suggests that the technologies precipitating it would cause disruptive changes in all areas of human life, developments about which experts cannot even conjecture.

The end of humanity and of human civilization is often associated with the Technological Singularity.

According to some research, social order will collapse, people will cease to be major actors, and human epistemic agency and primacy will be lost.

Humans, it seems, will not be required by superintelligent systems.

These systems will be able to self-replicate, evolve, and build their own habitats, and humans will be seen as either obstacles or irrelevant, outdated things, much as humans now regard lesser species.

One such scenario is represented by Nick Bostrom's Paperclip Maximizer thought experiment.

AI is included as a possible danger to humanity's existence in the Global Catastrophic Risks Survey, with a reasonably high likelihood of human extinction, placing it on par with global pandemics, nuclear war, and global nanotech catastrophes.

However, the AI-related apocalyptic scenario is not a foregone conclusion of the Technological Singularity.

In other, more utopian scenarios, the Technological Singularity would usher in a new era of endless bliss by opening up new opportunities for humanity's infinite expansion.

Another element of the Technological Singularity that requires serious consideration is whether the arrival of superintelligence might imply the emergence of superethical capabilities in an all-knowing ethical agent.

Nobody knows, however, what superethical abilities might entail.

The fundamental problem, however, is that superintelligent entities' higher intellectual abilities do not ensure a high degree of ethical probity, or even any level of ethical probity.

As a result, a superintelligent machine with nearly infinite capacities but no ethics seems dangerous, to say the least.

A sizable number of scholars are skeptical about the development of the Technological Singularity, notably of superintelligence.

They rule out the possibility of developing artificial systems with superhuman cognitive abilities, either on philosophical or scientific grounds.

Some contend that, although artificial intelligence is often at the heart of Technological Singularity claims, achieving human-level intelligence in artificial systems is impossible, and hence superintelligence, and with it the Technological Singularity, is a fantasy.

Such barriers, however, do not exclude the development of superhuman brains via the genetic modification of ordinary people, opening the door to transhumans, human-machine hybrids, and superhuman agents.

More scholars question the validity of the notion of the Technological Singularity, pointing out that such forecasts about future civilizations are based on speculation and guesswork.

Others argue that the promises of unrestrained technological advancement and limitless intellectual capacity made by the Technological Singularity narrative are unfounded, since physical and information-processing resources are plainly limited in the cosmos, particularly on Earth.

On this view, any promise of self-replicating, self-improving artificial agents capable of super-exponential technological advancement is empty, since such systems would lack the creativity, will, and motivation to drive their own evolution.

Meanwhile, social opponents point out that superintelligence's boundless technological advancement would not alleviate issues like overpopulation, environmental degradation, poverty, and unparalleled inequality.

Indeed, the widespread unemployment projected as a consequence of AI-assisted mass automation of labor, barring significant segments of the population from contributing to society, would result in unparalleled social upheaval, delaying the development of new technologies.

As a result, rather than speeding up, political or societal pressures will stifle technological advancement.

While the Technological Singularity cannot be ruled out on logical grounds, the technical hurdles it faces, even if limited to those that can presently be identified, are considerable.

Nobody expects the Technological Singularity to arrive with today's computers and other technology, but proponents of the concept regard these obstacles as "technical challenges to be overcome" rather than potential show-stoppers.

However, there is a large list of technological issues to be overcome, and Murray Shanahan's The Technological Singularity (2015) gives a fair overview of some of them.

There are also some significant nontechnical issues, such as the problem of superintelligent system training, the ontology of artificial or machine consciousness and self-aware artificial systems, the embodiment of artificial minds or vicarious embodiment processes, and the rights granted to superintelligent systems, as well as their role in society and any limitations placed on their actions, if this is even possible.

These issues are currently confined to the realms of technological and philosophical discussion.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Bostrom, Nick; de Garis, Hugo; Diamandis, Peter; Digital Immortality; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Post-Scarcity, AI and; Superintelligence.


References And Further Reading


Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.

Chalmers, David. 2010. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17: 7–65.

Eden, Amnon H. 2016. The Singularity Controversy. Sapience Project. Technical Report STR 2016-1. January 2016.

Eden, Amnon H., Eric Steinhart, David Pearce, and James H. Moor. 2012. “Singularity Hypotheses: An Overview.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, 1–12. Heidelberg, Germany: Springer.

Good, I. J. 1966. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers 6: 31–88.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Sandberg, Anders, and Nick Bostrom. 2008. Global Catastrophic Risks Survey. Technical Report #2008/1. Oxford University, Future of Humanity Institute.

Shanahan, Murray. 2015. The Technological Singularity. Cambridge, MA: The MIT Press.

Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.


AI - Symbolic Logic

 





In mathematical and philosophical reasoning, symbolic logic entails the use of symbols to express concepts, relations, and positions.

Symbolic logic differs from (Aristotelian) syllogistic logic in that it employs ideographs or a special notation to "symbolize exactly the item discussed" (Newman 1956, 1852), and it may be manipulated according to precise rules.

Traditional logic investigated the truth and falsehood of assertions, as well as their relationships, using terminology derived from natural language.

Unlike nouns and verbs, symbols do not need interpretation.

Because symbol operations are mechanical, they may be delegated to computers.

Symbolic logic eliminates any ambiguity in logical analysis by codifying it entirely inside a defined notational framework.

Gottfried Wilhelm Leibniz (1646–1716) is widely regarded as the founding father of symbolic logic.

Leibniz proposed the use of ideographic symbols instead of natural language in the seventeenth century as part of his goal to revolutionize scientific thinking.

Leibniz hoped that by combining such concise universal symbols (characteristica universalis) with a set of scientific reasoning rules, he could create an alphabet of human thought that would promote the growth and dissemination of scientific knowledge, as well as a corpus containing all human knowledge.

Symbolic logic may be broken down into subcategories including Boolean logic, the logical foundations of mathematics, and decision problems.

George Boole, Alfred North Whitehead, and Bertrand Russell, as well as Kurt Gödel, wrote important contributions in each of these fields.

In the mid-nineteenth century, George Boole published The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854).




Boole homed in on a calculus of deductive reasoning, which led him to three essential operations in a logical mathematical language now known as Boolean algebra: AND, OR, and NOT.

The use of symbols and operators greatly aided the creation of logical formulations.

Claude Shannon (1916–2001) employed electromechanical relay circuits and switches to reproduce Boolean algebra in the twentieth century, laying crucial foundations in the development of electronic digital computing and computer science in general.
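As a concrete illustration (a sketch of ours, not Boole's or Shannon's own notation), the snippet below expresses Boole's three operations in Python and composes them, in the spirit of Shannon's switching circuits, into a one-bit half adder.

# Boole's three essential operations, expressed as Python functions.
def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

def NOT(a: bool) -> bool:
    return not a

# Composition in the spirit of Shannon's relay circuits:
# exclusive-or built purely from AND, OR, and NOT ...
def XOR(a: bool, b: bool) -> bool:
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# ... and a one-bit half adder, the building block of binary arithmetic.
def half_adder(a: bool, b: bool):
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")

The point of the composition is Shannon's: once the three operations are available as switches, arithmetic falls out of logic with no further ingredients.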

Alfred North Whitehead and Bertrand Russell established their seminal work in the subject of symbolic logic in the early twentieth century.

Their Principia Mathematica (1910, 1912, 1913) demonstrated how all of mathematics may be reduced to symbolic logic.

Whitehead and Russell developed a logical system from a handful of logical concepts and a set of postulates derived from those ideas in the first book of their work.

Whitehead and Russell established all mathematical concepts, including number, zero, successor of, addition, and multiplication, using fundamental logical terminology and operational principles like proposition, negation, and either-or in the second book of the Principia.



In the third and final volume, Whitehead and Russell were able to demonstrate that all of mathematics is built on logical concepts and connections.

The Principia showed how every mathematical postulate might be inferred from previously explained symbolic logical facts.

Only a few decades later, Kurt Gödel's "On Formally Undecidable Propositions of Principia Mathematica and Related Systems" (1931) critically analyzed the Principia's strong and deep claims, demonstrating that Whitehead and Russell's axiomatic system could not be both consistent and complete.

Even so, it required another important book in symbolic logic, Ernst Nagel and James Newman's Gödel's Proof (1958), to spread Gödel's message to a larger audience, including some artificial intelligence practitioners.

Each of these seminal works in symbolic logic had a distinct influence on the development of computing and programming, as well as on our understanding of a computer's capabilities.

Boolean logic has made its way into the design of logic circuits.

Simon and Newell's Logic Theorist program produced logical arguments that matched those found in the Principia Mathematica, and was therefore seen as evidence that a computer could be programmed to perform intelligent tasks via symbol manipulation.

Gödel's incompleteness theorem raises intriguing issues regarding how programmed machine intelligence, particularly strong AI, will be realized in the end.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.


See also: 

Symbol Manipulation.



References And Further Reading


Boole, George. 1854. An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities. London: Walton.

Lewis, Clarence Irving. 1932. Symbolic Logic. New York: The Century Co.

Nagel, Ernst, and James R. Newman. 1958. Gödel’s Proof. New York: New York University Press.

Newman, James R., ed. 1956. The World of Mathematics, vol. 3. New York: Simon and Schuster.

Whitehead, Alfred N., and Bertrand Russell. 1910–1913. Principia Mathematica. Cambridge, UK: Cambridge University Press.



AI - Symbol Manipulation

 



The broad information-processing skills of a digital stored program computer are referred to as symbol manipulation.

From the 1960s through the 1980s, seeing the computer as fundamentally a symbol manipulator became the norm, leading to the scientific study of symbolic artificial intelligence, now known as Good Old-Fashioned AI (GOFAI).

In the 1950s and 1960s, the spread of stored-program computers sparked renewed interest in a computer's programming flexibility.

Symbol manipulation became a comprehensive theory of intelligent behavior as well as a research guideline for AI.

The Logic Theorist, created by Herbert Simon, Allen Newell, and Cliff Shaw in 1956, was one of the first computer programs to mimic intelligent symbol manipulation.

The Logic Theorist was able to prove theorems from Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913).

It was presented in 1956 at Dartmouth's Summer Research Project on Artificial Intelligence (the Dartmouth Conference).


John McCarthy, a Dartmouth mathematics professor who coined the phrase "artificial intelligence," convened this gathering.


The Dartmouth Conference might be dubbed the genesis of AI since it was there that the Logic Theorist first appeared, and many of the participants went on to become pioneering AI researchers.

Only in the early 1960s, after Simon and Newell had built their General Problem Solver (GPS), were the features of symbol manipulation, as a generic process underpinning all types of intelligent problem-solving behavior, thoroughly explicated and made a foundation for most of the early work in AI.

In 1961, Simon and Newell took their knowledge of AI and their work on GPS to a wider audience.


"A computer is not a number-manipulating device; it is a symbol-manipulating device," they wrote in Science, "and the symbols it manipulates may represent numbers, letters, phrases, or even nonnumerical, nonverbal patterns" (Newell and Simon 1961, 2012).





Reading "symbols or patterns presented by appropriate input devices, storing symbols in memory, copying symbols from one memory location to another, erasing symbols, comparing symbols for identity, detecting specific differences between their patterns, and behaving in a manner conditional on the results of its processes," Simon and Newell continued (Newell and Simon 1961, 2012).


The growth of symbol manipulation in the 1960s was also influenced by breakthroughs in cognitive psychology and symbolic logic prior to WWII.


Starting in the 1930s, experimental psychologists like Edwin Boring at Harvard University began to move their discipline away from philosophical and behaviorist methods.





Boring challenged his colleagues to break the mind open and create testable explanations for diverse cognitive mental operations (an approach that was adopted by Kenneth Colby in his work on PARRY in the 1960s).

Simon and Newell also emphasized their debt to pre-World War II developments in formal logic and abstract mathematics in their historical addendum to Human Problem Solving—not because all thought is logical or follows the rules of deductive logic, but because formal logic treated symbols as tangible objects.

"The formalization of logic proved that symbols can be copied, compared, rearranged, and concatenated with just as much definiteness of procedure as [wooden] boards can be sawed, planed, measured, and glued [in a carpenter shop]," Simon and Newell noted (Newell and Simon 1973, 877).



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Newell, Allen; PARRY; Simon, Herbert A.


References & Further Reading:


Boring, Edwin G. 1946. “Mind and Mechanism.” American Journal of Psychology 59, no. 2 (April): 173–92.

Feigenbaum, Edward A., and Julian Feldman. 1963. Computers and Thought. New York: McGraw-Hill.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman and Company

Newell, Allen, and Herbert A. Simon. 1961. “Computer Simulation of Human Thinking.” Science 134, no. 3495 (December 22): 2011–17.

Newell, Allen, and Herbert A. Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.

Schank, Roger, and Kenneth Colby, eds. 1973. Computer Models of Thought and Language. San Francisco: W. H. Freeman and Company.


AI - What Is Superintelligence AI? Is Artificial Superintelligence Possible?

 


 

In its most common use, the phrase "superintelligence" refers to any degree of intelligence that at least equals, and more often far exceeds, human intellect in a broad sense.


Though computer intelligence has long outperformed natural human cognitive capacity in specific tasks—for example, a calculator's ability to swiftly interpret algorithms—these are not often considered examples of superintelligence in the strict sense due to their limited functional range.


In this sense, superintelligence would necessitate, in addition to artificial mastery of specific theoretical tasks, some kind of additional mastery of what has traditionally been referred to as practical intelligence: a generalized sense of how to subsume particulars into universal categories that are in some way worthwhile.


To this day, no such generalized superintelligence has manifested, and hence all discussions of superintelligence remain speculative to some degree.


Whereas traditional theories of superintelligence have been limited to theoretical metaphysics and theology, recent advancements in computer science and biotechnology have opened up the prospect of superintelligence being materialized.

Although the timing of such evolution is hotly discussed, a rising body of evidence implies that material superintelligence is both possible and likely.


If this hypothesis proves correct, superintelligence will almost certainly be the result of advances in one of two major areas of AI research:


  1. Bioengineering 
  2. Computer science





The former involves efforts not only to map out and manipulate the human genome, but also to copy the human brain exactly in electronic form through whole brain emulation, also known as mind uploading.


The first of these bioengineering efforts is not new, with eugenics programs reaching back to the seventeenth century at the very least.

Despite the major ethical and legal issues that always emerge as a result of such efforts, the discovery of DNA in the twentieth century, together with advances in genome mapping, has rekindled interest in eugenics.

Much of this study is aimed at gaining a better understanding of the human brain's genetic composition in order to manipulate DNA code in the direction of superhuman intelligence.



Uploading is a somewhat different, but still biologically based, approach to superintelligence that aims to map out neural networks in order to successfully transfer human intelligence onto computer interfaces.


  • The brains of insects and tiny animals are micro-dissected and then scanned for thorough computer analysis in this relatively new area of study.
  • The underlying premise of whole brain emulation is that if the brain's structure is better known and mapped, it may be able to copy it with or without organic brain tissue.



Despite the fast growth of both genetic mapping and whole brain emulation, both techniques have significant limits, making it less likely that any of these biological approaches will be the first to attain superintelligence.





The genetic alteration of the human genome, for example, is constrained by generational constraints.

Even if it were now feasible to artificially boost cognitive functioning by modifying the DNA of a human embryo (which is still a long way off), it would take an entire generation for the changed embryo to evolve into a fully fledged, superintelligent human person.

This also assumes that there are no legal or moral barriers to manipulating human DNA, which is far from the case.

Even the comparatively minor genetic manipulation of human embryos carried out by a Chinese physician as recently as November 2018 sparked international outrage (Ramzy and Wee 2019).



Whole brain emulation, on the other hand, is still a long way off, owing to biotechnology's limits.


Given the current medical technology, the extreme levels of accuracy necessary at every step of the uploading process are impossible to achieve.

Science and technology currently lack the capacity to dissect and scan human brain tissue with sufficient precision to produce full brain simulation results.

Furthermore, even if such first steps are feasible, researchers would face significant challenges in analyzing and digitally replicating the human brain using cutting-edge computer technology.




Many analysts believe that such constraints will be overcome, although the timeline for such realizations is unknown.



Apart from biotechnology, the area of AI, which is strictly defined as any type of nonorganic (particularly computer-based) intelligence, is the second major path to superintelligence.

Of course, the task of creating a superintelligent AI from the ground up is complicated by a number of factors, some purely logistical: processing speed, hardware and software design, funding, and so on.

In addition to such practical challenges, there is a significant philosophical issue: human programmers are unable to know, and so cannot program, that which is superior to their own intelligence.





Much contemporary research on computer learning and interest in the notion of a seed AI is motivated in part by this worry.


The latter is defined as any machine capable of changing its responses to stimuli based on an analysis of how well it performs relative to a predetermined objective.

Importantly, the concept of a seed AI entails not only the capacity to change its replies by extending its base of content knowledge (stored information), but also the ability to change the structure of its programming to better fit a specific job (Bostrom 2017, 29).

Indeed, it is this latter capability that would give a seed AI what Nick Bostrom refers to as "recursive self-improvement," or the ability to evolve iteratively (Bostrom 2017, 29).

This would eliminate the need for programmers to have an a priori vision of superintelligence, since the seed AI would constantly enhance its own programming, with each more intelligent iteration writing a superior version of itself (beyond the human level).
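The feedback loop beneath the seed-AI idea can be sketched in miniature (a toy of ours, emphatically not a seed AI): a system scores itself against a predetermined objective and keeps any random change to its own parameters that improves the score. Genuine recursive self-improvement would have to rewrite program structure, not merely tune two numbers; the objective function and step size here are arbitrary assumptions.

# Toy self-improvement loop: hill climbing against a fixed objective.
# Illustrates only the evaluate-and-revise cycle, not a seed AI's
# rewriting of its own program structure.
import random

def objective(params):
    # Predetermined goal (arbitrary choice): score peaks at x = 3, y = -1.
    x, y = params
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

params = [0.0, 0.0]                      # the system's current "programming"
for _ in range(5000):
    candidate = [p + random.gauss(0.0, 0.1) for p in params]
    if objective(candidate) > objective(params):
        params = candidate               # keep the revision that performed better

print(f"learned parameters: x = {params[0]:.2f}, y = {params[1]:.2f}")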

Such a machine would undoubtedly cast doubt on the conventional philosophical assumption that robots are incapable of self-awareness.

This perspective's proponents may be traced all the way back to Descartes, but they also include more current thinkers like John Haugeland and John Searle.



Machine intelligence, in this perspective, is defined as the successful correlation of inputs with outputs according to a predefined program.
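On that view, machine "intelligence" is nothing more than a lookup from input to output; the toy below (a caricature of ours, in the spirit of this Cartesian objection rather than any particular author's example) behaves "intelligently" without any understanding.

# A predefined input-output correlation: the machine "answers" without
# any grasp of what the symbols mean.
predefined_program = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
}

def respond(stimulus: str) -> str:
    # No comprehension, only correlation; unknown inputs expose the gap.
    return predefined_program.get(stimulus, "no programmed response")

print(respond("What is 2 + 2?"))        # -> 4
print(respond("Why is the sky blue?"))  # -> no programmed response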




As a result, robots differ from humans in kind, the latter alone being characterized by conscious self-awareness.

Humans are supposed to comprehend the activities they execute, but robots are thought to carry out functions mindlessly—that is, without knowing how they work.

Were a successful seed AI to be constructed, this core idea would be directly challenged.

The seed AI would demonstrate a level of self-awareness and autonomy not readily explained by the Cartesian philosophical paradigm by upgrading its own programming in ways that surprise and defy the forecasts of its human programmers.

Indeed, although it is still speculative (for the time being), the increasingly possible result of superintelligent AI poses a slew of moral and legal dilemmas that have sparked a lot of philosophical discussion in this subject.

The main worries concern the human species' security in the event of what Bostrom refers to as an "intelligence explosion"—that is, the creation of a seed AI followed by a possibly exponential growth in intelligence (Bostrom 2017).



One of the key problems is the inherently unexpected character of such a result.


Humans will not be able to totally foresee how superintelligent AI would act due to the autonomy entailed by superintelligence in a definitional sense.

Even in the few cases of specialized superintelligence that humans have been able to construct and study so far—for example, machines that have surpassed humans in strategic games like chess and Go—human forecasts for AI have proven very unreliable.

For many critics, such unpredictability is a significant indicator that, should more generic types of superintelligent AI emerge, humans would swiftly lose their capacity to manage them (Kissinger 2018).





Of course, such a loss of control does not automatically imply an adversarial relationship between humans and superintelligence.


Indeed, although most of the literature on superintelligence portrays this relationship as adversarial, some new work claims that this perspective reveals a prejudice against machines that is particularly prevalent in Western cultures (Knight 2014).

Nonetheless, there are compelling grounds to believe that superintelligent AI would at the very least consider human goals as incompatible with their own, and may even regard humans as existential dangers.

For example, computer scientist Steve Omohundro has claimed that even a relatively basic kind of superintelligent AI like a chess bot would have motive to want the extinction of humanity as a whole—and may be able to build the tools to do it (Omohundro 2014).

Similarly, Bostrom has claimed that a superintelligence explosion would most certainly result in, if not the extinction of the human race, then at the very least a gloomy future (Bostrom 2017).

Whatever the merits of such theories, the great uncertainty entailed by superintelligence is obvious.

If there is one point of agreement in this large and diverse literature, it is that if AI research is to continue, the global community must take great care to protect its interests.





Hardened determinists who claim that technological advancement is so tightly connected to inflexible market forces that it is simply impossible to change its pace or direction in any major manner may find this statement contentious.


According to this determinist viewpoint, if AI can deliver cost-cutting solutions for industry and commerce (as it has already started to do), its growth will proceed into the realm of superintelligence, regardless of any unexpected negative repercussions.

Many skeptics argue that growing societal awareness of the potential risks of AI, as well as thorough political monitoring of its development, are necessary counterpoints to such viewpoints.


Bostrom highlights various examples of effective worldwide cooperation in science and technology as crucial precedents that challenge the determinist approach, including CERN, the Human Genome Project, and the International Space Station (Bostrom 2017, 253).

To this, one may add examples from the worldwide environmental movement, which began in the 1960s and 1970s and has imposed significant restrictions on pollution committed in the name of uncontrolled capitalism (Feenberg 2006).



Given the speculative nature of superintelligence research, it is hard to predict what the future holds.

However, if superintelligence poses an existential danger to human existence, caution would dictate that a worldwide collaborative strategy rather than a free market approach to AI be used.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Berserkers; Bostrom, Nick; de Garis, Hugo; General and Narrow AI; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Musk, Elon; Technological Singularity; Yudkowsky, Eliezer.



References & Further Reading:


  • Bostrom, Nick. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
  • Feenberg, Andrew. 2006. “Environmentalism and the Politics of Technology.” In Questioning Technology, 45–73. New York: Routledge.
  • Kissinger, Henry. 2018. “How the Enlightenment Ends.” The Atlantic, June 2018. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
  • Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy Through Good Design. Washington, DC: The Project on Civilian Robotics. Brookings Institution.
  • Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 303–15.
  • Ramzy, Austin, and Sui-Lee Wee. 2019. “Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China.” The New York Times, January 21, 2019.



AI - Spiritual Robots

 




In April 2000, Indiana University cognitive scientist Douglas Hofstadter arranged a symposium called "Will Spiritual Robots Replace Humanity by 2100?" at Stanford University.


Frank Drake, astronomer and SETI director, John Holland, creator of genetic algorithms, Bill Joy of Sun Microsystems, computer scientist John Koza, futurist Ray Kurzweil, public key cryptography architect Ralph Merkle, and roboticist Hans Moravec were among the panelists.


Several of the panelists gave their thoughts on the conference's theme based on their own writings.


  • Kurzweil's optimistic futurist account of artificial intelligence, The Age of Spiritual Machines, had just been published (1999).
  • In Robot: Mere Machine to Transcendent Mind, Moravec presented a positive picture of machine superintelligence (1999).
  • Bill Joy had just written a story for Wired magazine called "Why the Future Doesn't Need Us" on the triple technological danger posed by robots, genetic engineering, and nanotechnology (2000).
  • Only Hofstadter believed that Moore's law doublings of transistors on integrated circuits might lead to spiritual robots as a consequence of the tremendous increase in artificial intelligence technologies.



Is it possible for robots to have souls? 


Can they exercise free will and separate themselves from humanity? 


What does it mean to have a soul for an artificial intelligence? 


Questions like these have been asked since the days of golems, Pinocchio, and the Tin Man, but they are becoming more prevalent in modern writing on religion, artificial intelligence, and the Technological Singularity.



Japan's robotics leadership started with puppetry.


Takemoto Gidayu and playwright Chikamatsu Monzaemon founded the Takemoto-za in Osaka's Dotonbori district in 1684 to perform bunraku, a theatrical extravaganza involving half life-size wooden puppets dressed in elaborate costumes, each controlled by three black-clad onstage performers: a principal puppeteer and two assistants.

Bunraku exemplifies Japan's long-standing fascination with bringing inanimate objects to life.

Japan is a world leader in robotics and artificial intelligence today, thanks to a grueling postwar rebuilding effort known as gijutsu rikkoku (nation building via technology).


Television was one of the first technologies to be widely used under technonationalism.

The Japanese government hoped that print and electronic media would encourage people to dream of an electronic lifestyle and, by adopting innovative technology, reconnect with the global economy.

As a result, Japan has become a major cultural rival to the United States.

Manga and anime, which feature intelligent and humanlike robots, mecha, and cyborgs, are two of Japan's most recognizable entertainment exports.


The notion of spiritual machinery is widely accepted in Japan's Buddhist and Shinto worldviews.


Masahiro Mori, a roboticist at Tokyo Institute of Technology, has proposed that a sufficiently powerful artificial intelligence may one day become a Buddha.

Mindar, a robot based on the Goddess of Mercy Kannon Bodhisattva, is a new priest at Kyoto's Kodaiji temple.

Mindar, which cost a million dollars, is capable of delivering a sermon on the popular Heart Sutra ("form is empty, emptiness is form") while moving its arms, head, and torso.

Robot partners are accepted because they are among the things thought to be endowed with kami, the spirit or divinity shared by the gods, nature, objects, and people in the Shinto faith.

In Japan, Shinto priests are still periodically summoned to consecrate or bless new and abandoned electronic equipment.

The Kanda Myojin Shrine, which overlooks Tokyo's Akihabara electronics retail district, provides prayers, rituals, and talismans aimed at purifying or conferring heavenly protection on items like smartphones, computer operating systems, and hard drives.



Americans, on the other hand, are just now starting to grapple with issues of robot identity and spirituality.


This is partly because America's leading faiths are rooted in Christian rites and practices, which have traditionally been wary of science and technology.


However, the histories of Christianity and robotics are intertwined.

In the 1560s, Philip II of Spain, for example, commissioned the first mechanical monk.


Mechanical automata, according to Stanford University historian Jessica Riskin (2010), are uniquely Catholic in origin.


They allowed for mechanized reenactments of biblical tales in churches and cathedrals, as well as artificial equivalents of real humans and celestial entities such as angels for study and contemplation.

They also aided Renaissance and early modern Christian thinkers and theologians in contemplating conceptions of motion, life, and the incorporeal soul.

By the middle of the eighteenth century, "There was no dichotomy between machinery and divinity or vitality in the culture of living machinery that surrounded these machines," Riskin writes.



"On the contrary, the automata symbolized spirit in all of its bodily manifestations, as well as life at its most vibrant" (Riskin 2010, 43).

That spirit is still alive and well today.


SanTO, described as a robot with "divine qualities" and "the first Catholic robot," was unveiled at a conference of the Institute of Electrical and Electronics Engineers in New Delhi in 2019 (Trovato et al. 2019).


In reformist churches, robots are also present.

To commemorate the 500th anniversary of the Reformation, the Protestant churches of Hesse and Nassau unveiled the interactive, multilingual BlessU-2 robot in 2017.

The robot, as its name indicates, selects specific blessings for particular attendees.

The Massachusetts Institute of Technology's God and Computers Project, directed by theologian Anne Foerst, aimed to establish a conversation between academics developing artificial intelligence and religious experts.


Foerst characterized herself as a "theological counselor" to Cog and Kismet, the emotional AI experimental robots of MIT's Humanoid Robotics Group.


Through exercises in machine-human connection, intersubjectivity, and ambiguity, Foerst concluded, embodied AI becomes engaged in the divine image of God, develops human capabilities and emotional sociability, and shares equal dignity as a creature in the universe.

"Victor Frankenstein and his creation may now be pals." 

Frankenstein will be able to accept that his creation, which he saw as a machine and an objective entity, had evolved into a human person" (Foerst 1996, 692).



Deep existential concerns about Christian thinking and conduct are being raised by robots and artificial intelligence.


Since the 1980s, according to theologian Michael DeLashmutt of the Episcopal Church's General Theological Seminary, "proliferating digital technologies have given birth to a cultural mythology that presents a rival theological paradigm to the one presented by kerygmatic Christian theology" (DeLashmutt 2006, i).



DeLashmutt opposes techno-theology for two reasons.


First, technology is not inherently immutable, and as such, it should not be reified or given autonomy, but rather examined.

Second, information technology isn't the most reliable tool for comprehending the world and ourselves.


In the United States, smart robots are often considered as harbingers of economic disruption, AI domination, and even doomsday.

Several times, Pope Francis has brought up the subject of artificial intelligence ethics.

He discussed the matter with Microsoft President Brad Smith in 2019.

The Vatican and Microsoft have teamed together to award a prize for the finest PhD dissertation on AI for social benefit.

In 2014, creationist academics at Southern Evangelical Seminary & Bible College in Matthews, North Carolina, bought an Aldebaran Nao humanoid robot to much fanfare.

The seminarians wanted to learn about self-driving cars and to think about the ethics of new intelligent technology from the perspective of Christian theology.



The Ethics and Religious Liberty Commission of the Southern Baptist Convention produced the study "Artificial Intelligence: An Evangelical Statement of Principles" in 2019, rejecting any AI's intrinsic "identity, value, dignity, or moral agency" (Southern Baptist Convention 2019).



Jim Daly of Focus on the Family, Mark Galli of Christianity Today, and theologians Wayne Grudem and Richard Mouw were among the signatories.

Some evangelicals argue that transhumanist ideas regarding humanity's perfectibility via technology are incompatible with faith in Jesus Christ's perfection.

The Christian Transhumanist Association and the Mormon Transhumanist Association both oppose this viewpoint.

Both organizations acknowledge that science, technology, and Christian fellowship all contribute to affirming and exalting humanity as beings created in the image of God.


Robert Geraci, a religious studies professor at Manhattan College, wonders if people "could really think that robots are aware if none of them exercise any religion" (Geraci 2007).


He observes that in the United States, Christian sentiment favors virtual, immaterial artificial intelligence software over materialist robot bodies.

He compares Christian faith in the immortality of the soul to transhumanists' desire for entire brain emulation or mind uploading into a computer.

Mind, according to neuroscientists, is an emergent property of the networking of the human brain's 86 billion neurons.

Christian longings for transcendence have similarities to this intellectual construct.



Artificial intelligence's eschatology also contains a concept of freedom from death or agony; in this instance, the afterlife is cyberspatial.


New faiths, at least in part inspired by artificial intelligence, are gaining popularity.

The Church of Perpetual Life, based in Hollywood, Florida, is a transhumanist worship institution dedicated to the advancement of life-extension technology.

Cryonics pioneers Saul Kent and Bill Faloon launched the church in 2013.

Artificial intelligence serial entrepreneur Peter Voss and Transhumanist Party presidential candidate Zoltan Istvan are among the professionals in artificial intelligence and transhumanism who have visited the center.

Martine Rothblatt and her wife, Bina, formed the Terasem Movement, a religion associated with cryonics and transhumanism.



"Life is intentional, death is voluntary, god is technical, and love is fundamental," the faith's basic doctrines state (Truths of Terasem 2012).


The realistic Bina48 robot, created by Hanson Robotics and modeled after Bina, is in part a demonstration of Terasem's mindfile-based algorithm, which Terasem believes could one day allow genuine mind uploading into an artificial substrate (and perhaps even bring about everlasting life).

Heaven, according to Gabriel Rothblatt, is similar to a virtual reality simulation.

Anthony Levandowski, an engineer who oversaw the teams that produced Google and Uber's self-driving vehicles, launched The Way of the Future, an AI-based religion.



Levandowski is driven by a desire to build a superintelligent, artificial god with Christian morals.


"If anything becomes much, much smarter in the future," he continues, "there will be a changeover as to who is truly in command." 

"What we want is for the planet's control to pass peacefully and peacefully from people to whoever." 

And to make sure that 'whatever' understands who assisted it in getting along" (Harris 2017).

He is driven to ensure that artificial intelligences have legal rights and are fully integrated into human society.



Spiritual robots have become a popular science fiction motif.


In Isaac Asimov's short story "Reason" (1941), the robot Cutie (QT-1) convinces the other robots that human beings are too mediocre to be their creators.


Instead, Cutie encourages them to worship the power plant on their space station, calling it the Master of both machines and mankind.

The Quest for Saint Aquin (1951), by Anthony Boucher, is a postapocalyptic novelette that pays tribute to Asimov's "Reason."


It follows a priest called Thomas on a quest to find the last resting place of the famed evangelist Saint Aquin (Boucher patterns Saint Aquin after St. Thomas Aquinas, who used Aristotelian logic to prove the existence of God).


Saint Aquin's corpse is said to have never decayed.

The priest rides a robass (robot donkey) with artificial intelligence; the robass is an atheist and tempter who can engage in theological debate with the priest.

When Saint Aquin is finally discovered after many trials, he is revealed to be an incorruptible android theologian.

Thomas is certain of the accomplishment of his quest—he has discovered a robot with a logical brain that, although manufactured by a human, believes in God.


In Stanislaw Lem’s novella “Trurl and the Construction of Happy Worlds” (1965), a box-dwelling robot race created by a robot engineer is persuaded that their habitat is a paradise to which all other creatures should aspire.


The robots form a religion and begin making preparations to drill a hole in the box in order to bring everyone outside the box into their paradise, willingly or unwillingly.

The constructor of the robots is enraged by this idea, and he destroys them.

Clifford D. Simak, a science fiction grandmaster, is also known for his spiritual robots.



Hezekiel is a robot abbot who leads a Christian congregation of other robots in a monastery in A Choice of Gods (1972).


The group has received a communication from The Principle, a god-like creature, although Hezekiel believes that "God must always be a pleasant old (human) gentleman with a long, white, flowing beard" (Simak 1972, 158).

The robot monks in Project Pope (1981) are on the lookout for paradise and the meaning of the cosmos.

John, a mechanical gardener, tells the Pope that he believes he has a soul.

The Pope, on the other hand, is not so sure.

Because humans refuse to admit robots to their churches, the robots establish their own Vatican-17 on a faraway planet.

A massive computer serves as the Pope of the Robots.

Androids idolize their creator Simeon Krug in Robert Silverberg's Hugo-nominated novel Tower of Glass (1970), hoping that he would one day free them from harsh slavery.

They abandon their faith and rebel when they learn Krug is uninterested in their freedom.

Silverberg's Nebula award-winning short story "Good News from the Vatican" (1971) is about an artificially intelligent robot who is elected Pope Sixtus the Seventh as a compromise candidate.


"If he's elected," Rabbi Mueller continues, "he wants an instant time-sharing arrangement with the Dalai Lama, as well as a reciprocal plug-in with the chief programmer of the Greek Orthodox church, just to start" (Silverberg 1976, 269).

Television shows often include spiritual robots.


In the British science fiction comedy Red Dwarf (1988–1999), sentient computers are equipped with belief chips, which convince them of the existence of silicon paradise.


In the animated television series Futurama (1999–2003, 2008–2013), robots worship at the Temple of Robotology, where Reverend Lionel Preacherbot delivers sermons.

The artificial Cylons are monotheists in the popular reboot and reinterpretation of the Battlestar Galactica television series (2003–2009), whereas the humans of the Twelve Colonies are polytheists.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Foerst, Anne; Nonhuman Rights and Personhood; Robot Ethics; Technological Singularity.


References & Further Reading:


DeLashmutt, Michael W. 2006. “Sketches Towards a Theology of Technology: Theological Confession in a Technological Age.” Ph.D. diss., University of Glasgow.

Foerst, Anne. 1996. “Artificial Intelligence: Walking the Boundary.” Zygon 31, no. 4: 681–93.

Geraci, Robert M. 2007. “Religion for the Robots.” Sightings, June 14, 2007. https://web.archive.org/web/20100610170048/http://divinity.uchicago.edu/martycenter/publications/sightings/archive_2007/0614.shtml.

Harris, Mark. 2017. “Inside the First Church of Artificial Intelligence.” Wired, November 15, 2017. https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/.

Riskin, Jessica. 2010. “Machines in the Garden.” Arcade: A Digital Salon 1, no. 2 (April 30): 16–43.

Silverberg, Robert. 1970. Tower of Glass. New York: Charles Scribner’s Sons.

Simak, Clifford D. 1972. A Choice of Gods. New York: Ballantine.

Southern Baptist Convention. Ethics and Religious Liberty Commission. 2019. “Artificial Intelligence: An Evangelical Statement of Principles.” https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles/

Trovato, Gabriele, Franco Pariasca, Renzo Ramirez, Javier Cerna, Vadim Reutskiy, Laureano Rodriguez, and Francisco Cuellar. 2019. “Communicating with SanTO: The First Catholic Robot.” In 28th IEEE International Conference on Robot and Human Interactive Communication, 1–6. New Delhi, India, October 14–18.

Truths of Terasem. 2012. https://terasemfaith.net/beliefs/.

 
