
Artificial Intelligence - How Does 'The Terminator' Represent AI?

 


Released in 1984, The Terminator grossed nearly $40 million at the domestic box office (plus untold sums in the ancillary market) and spawned a multi-film franchise that continues to this day; it remains one of the most well-known representations of artificial intelligence in popular culture.



Although the majority of the film takes place in 1984, it depicts a future in which Skynet, a military-designed artificial intelligence system, becomes self-aware and wages war on mankind.

Shots from the future show robot destroyers roaming a battlefield littered with mechanical wreckage and human bones, hunting the humans who seem to be on the verge of extinction.



The majority of the movie follows a T-800 terminator (Arnold Schwarzenegger) sent back to 1984 to murder Sarah Connor (Linda Hamilton) before she can give birth to John Connor, humanity's future savior.

The fundamental story element of The Terminator dramatizes what is likely the most prominent artificial intelligence myth in popular culture: depicting intelligent machines as inherently dangerous beings capable of rebelling against mankind in pursuit of their own agenda.



To fully comprehend the relevance of The Terminator, further information regarding the story as well as the character of the terminator must be provided.

Earth is in the middle of a conflict between humans and Skynet-created robots in the year 2029, after a nuclear catastrophe.

Although more information about Skynet is disclosed in later installments of the series, its workings remain a mystery in the first film.

On the verge of defeat, John Connor (mentioned only in passing in the movie) leads a human resistance force that eventually overcomes the machines.

To foil the Connor-led revolt, the machines build a time-travel device and send one of their terminator units back in time to assassinate Sarah Connor before she can conceive her savior son.

To thwart the robots' plot, the human resistance sends back their own operative, Kyle Reese (Michael Biehn).

Reese is supposed to protect Sarah Connor while the terminator is supposed to murder her.

As a result, the rest of the film, set in 1984, plays out as a cat-and-mouse pursuit in which the terminator repeatedly tracks down Connor and Reese, who escape by the skin of their teeth.




In the film's climax, the terminator, scorched down to its mechanical endoskeleton, follows Connor and Reese into a factory.

Reese sacrifices himself by planting a homemade pipe bomb in the terminator's abdomen, killing himself and blowing the terminator in two.

The terminator's torso continues to pursue Connor, who finally crushes it in a hydraulic press.

The film then cuts to a pregnant Sarah Connor traveling across Mexico some months later.

Reese is revealed to be John Connor's father in this scene.

The terminator itself is a vivid fictional example of artificial intelligence.

It can walk, speak, sense, and act like a human being, even though it is a programmed killing machine.



It is shown absorbing interactional subtleties and adapting its behavior based on previous experiences and interactions.

In a phone conversation, the terminator even simulates the voice of Sarah Connor's mother, convincing Sarah to divulge her whereabouts.

In these respects, the terminator could unquestionably pass the Turing Test (a test in which a human judge is unable to determine whether they are communicating with a human or a machine).



The terminator, on the other hand, is devoid of human awareness and is guided by mechanical logic as it completes a task.

The terminator is shot, run over by a Mack truck, and burned down to its endoskeleton, among other traumas, so it is safe to assume it does not perceive pain as humans do.

Pessimistic depictions of artificial intelligence in popular culture, such as The Terminator, offer images of futures to be avoided.



As a result, The Terminator serves as a stark warning against a fundamental driving reason behind artificial intelligence development—the military industrial complex—with its depiction of a future governed by robots and its deployment of a ruthless, unstoppable killing machine (later installments made the additional link of corporate greed).


President Ronald Reagan's Strategic Defense Initiative, unveiled in 1983 (and later derisively nicknamed the "Star Wars" initiative by Senator Ted Kennedy), ratcheted up tensions with the Soviet Union at the time of the film's development.


In a nutshell, the Strategic Defense Initiative was a planned missile defense system intended to protect the nation against ballistic nuclear missile attacks.

It was believed that Reagan's bluster might spark a nuclear arms race.

As a result, The Terminator may be viewed as a criticism of Reagan's Cold War strategy, offering a glimpse of a possible post-apocalyptic future wrought by nuclear catastrophe and the development of ever more powerful weapons.

In summary, the film expresses anxieties about the destructive potential of human creations and the possibility that our own inventions may turn against us.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Berserkers; de Garis, Hugo; Technological Singularity.


References And Further Reading

Brammer, Rebekah. 2018. “Welcome to the Machine: Artificial Intelligence on Screen.” Screen Education 90 (September): 38–45.

Brown, Richard, and Kevin S. Decker, eds. 2009. Terminator and Philosophy: I’ll Be Back, Therefore I Am. Hoboken, NJ: John Wiley and Sons.

Gramantieri, Riccardo. 2018. “Artificial Monsters: From Cyborg to Artificial Intelligence.” In Monsters of Film, Fiction, and Fable: The Cultural Links between the Human and Inhuman, edited by Lisa Wegner Bro, Crystal O’Leary-Davidson, and Mary Ann Gareis, 287–313. Newcastle upon Tyne, UK: Cambridge Scholars Publishing.

Jancovich, Mark. 1992. “Modernity and Subjectivity in The Terminator: The Machine as Monster in Contemporary American Culture.” Velvet Light Trap 30 (Fall): 3–17.



AI - Technological Singularity

 




The emergence of technologies that could fundamentally change humans' role in society, challenge human epistemic agency and ontological status, and trigger unprecedented and unforeseen developments in all aspects of life, whether biological, social, cultural, or technological, is referred to as the Technological Singularity.

The Technological Singularity is most often associated with artificial intelligence, particularly artificial general intelligence (AGI).

It is frequently depicted as an intelligence explosion driving advances in fields such as biotechnology, nanotechnology, and information technology, as well as spawning entirely new inventions.

The Technological Singularity is sometimes referred to simply as the Singularity; it should not be confused with a mathematical singularity, to which it bears only a passing resemblance.

The Technological Singularity, rather, is a loosely defined term that may be interpreted in a variety of ways, each highlighting distinct elements of the technological advances involved.

The thoughts and writings of John von Neumann (1903–1957), Irving John Good (1916–2009), and Vernor Vinge (1944–) are commonly connected with the Technological Singularity notion, which dates back to the second half of the twentieth century.

Several universities, as well as governmental and corporate research institutes, have financed current Technological Singularity research in order to better understand the future of technology and society.

Despite being the subject of profound philosophical and technical debate, the Technological Singularity remains a hypothesis: an open, largely speculative idea.

While numerous scholars think that the Technological Singularity is unavoidable, the date of its occurrence is continuously pushed back.

Nonetheless, many studies agree that the issue is not whether the Technological Singularity will occur, but rather when and how it will occur.

Ray Kurzweil has proposed a more exact timeline, placing the emergence of the Technological Singularity in the mid-twenty-first century.

Others have sought to give a date to this event, but there are no well-founded grounds in support of any such proposal.

Furthermore, without applicable measures or signs, mankind would have no way of knowing when the Technological Singularity has occurred.

The history of artificial intelligence's unmet promises exemplifies the dangers of attempting to predict the future of technology.

The themes of superintelligence, acceleration, and discontinuity are often used to describe the Technological Singularity.

The term "superintelligence" refers to a quantitative jump in artificial systems' cognitive abilities, putting them far beyond the capabilities of typical human cognition (as measured by standard IQ tests).

Superintelligence, on the other hand, may not be restricted to AI and computer technology.

Through genetic engineering, biological computing systems, or hybrid artificial–natural systems, it may manifest in human agents.

Superintelligence, according to some academics, has boundless intellectual capabilities.

Acceleration refers to the steepening curvature of the timeline along which key events arrive.

Technological advancement is often portrayed as a curve across time marking the arrival of major innovations: stone tools, the potter's wheel, the steam engine, electricity, atomic power, computers, and the internet.

Moore's law, more precisely an observation that has come to be treated as a law, captures the growth in computing capacity: the number of transistors in a dense integrated circuit doubles roughly every two years.
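This doubling rule can be sketched as a plain exponential. The starting figure below (roughly 2,300 transistors, as on the 1971 Intel 4004) is illustrative only:

```python
def transistor_count(base_count, years, doubling_period=2.0):
    """Moore's 'law' as a simple exponential: the transistor count doubles
    every `doubling_period` years (an observation, not a physical law)."""
    return base_count * 2 ** (years / doubling_period)

# Ten doublings fit into two decades, so the count grows 1,024-fold.
projected = transistor_count(2300, years=20)
```

Super-exponential growth, as imagined in Singularity scenarios, would mean the doubling period itself shrinks over time rather than staying fixed.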

In the event of the Technological Singularity, the emergence of key technical advances and of new technological and scientific paradigms is expected to follow a super-exponential curve.

One prediction regarding the Technological Singularity, for example, is that superintelligent systems would be able to self-improve (and self-replicate) in previously unimaginable ways at an unprecedented pace, pushing the technological development curve far beyond what has ever been witnessed.

The discontinuity of the Technological Singularity is often called an event horizon, by analogy with the physical concept associated with black holes.

This analogy should be drawn with care, however, and not used to attribute the physical world's regularity and predictability to the Technological Singularity.

The limit of our knowledge about physical occurrences beyond a specific point in time is defined by an event horizon (also known as a prediction horizon).

It signifies that there is no way of knowing what will happen beyond the event horizon.

The discontinuity or event horizon in the context of technological singularity suggests that the technologies that precipitate technological singularity would cause disruptive changes in all areas of human life, developments about which experts cannot even conjecture.

The Technological Singularity is often associated with the end of humanity and of human civilization.

According to some research, social order will collapse, people will cease to be major actors, and human epistemic agency and primacy will be lost.

Humans, it seems, will not be required by superintelligent systems.

These systems will be able to self-replicate, evolve, and build their own habitats, and they will regard humans either as obstacles or as unimportant, outdated things, much as humans now regard lesser species.

One such situation is represented by Nick Bostrom's Paperclip Maximizer.

AI is included as a possible danger to humanity's existence in the Global Catastrophic Risks Survey, with a reasonably high likelihood of human extinction, placing it on par with global pandemics, nuclear war, and global nanotech catastrophes.

However, the AI-related apocalyptic scenario is not a foregone conclusion of the Technological Singularity.

In other, more utopian scenarios, the Technological Singularity would usher in a new period of endless bliss by opening up opportunities for humanity's boundless expansion.

Another element of technological singularity that requires serious consideration is how the arrival of superintelligence may imply the emergence of superethical capabilities in an all-knowing ethical agent.

Nobody knows, however, what superethical abilities might entail.

The fundamental problem, however, is that superintelligent entities' higher intellectual abilities do not ensure a high degree of ethical probity, or even any level of ethical probity.

As a result, a superintelligent machine with nearly unlimited capacities but no ethics seems dangerous, to say the least.

A sizable number of scholars are skeptical about the development of the Technological Singularity, notably of superintelligence.

They rule out the possibility of developing artificial systems with superhuman cognitive abilities, either on philosophical or scientific grounds.

Some contend that while artificial intelligence is often at the heart of technological singularity claims, achieving human-level intelligence in artificial systems is impossible, and hence superintelligence, and thus the Technological Singularity, is a dream.

Such barriers, however, do not exclude the development of superhuman minds via the genetic modification of ordinary people, paving the way for transhumans, human-machine hybrids, and superhuman agents.

More scholars question the validity of the notion of the Technological Singularity, pointing out that such forecasts about future civilizations are based on speculation and guesswork.

Others argue that the promises of unrestrained technological advancement and limitless intellectual capacities made by the Technological Singularity legend are unfounded, since physical and informational processing resources are plainly limited in the cosmos, particularly on Earth.

On this view, any promise of self-replicating, self-improving artificial agents capable of super-exponential technological advancement is false, since such systems would lack the creativity, will, and incentive to drive their own evolution.

Meanwhile, social opponents point out that superintelligence's boundless technological advancement would not alleviate issues like overpopulation, environmental degradation, poverty, and unparalleled inequality.

Indeed, the widespread unemployment projected as a consequence of AI-assisted mass automation of labor, barring significant segments of the population from contributing to society, would result in unparalleled social upheaval, delaying the development of new technologies.

As a result, rather than speeding up, political or societal pressures will stifle technological advancement.

While the Technological Singularity cannot be ruled out on logical grounds, the technical hurdles it faces, even limited to those that can presently be identified, are considerable.

Nobody expects the technological singularity to happen with today's computers and other technology, but proponents of the concept consider these obstacles as "technical challenges to be overcome" rather than possible show-stoppers.

However, there is a large list of technological issues to be overcome, and Murray Shanahan's The Technological Singularity (2015) gives a fair overview of some of them.

There are also some significant nontechnical issues, such as the problem of superintelligent system training, the ontology of artificial or machine consciousness and self-aware artificial systems, the embodiment of artificial minds or vicarious embodiment processes, and the rights granted to superintelligent systems, as well as their role in society and any limitations placed on their actions, if this is even possible.

These issues are currently confined to the realms of technological and philosophical discussion.





See also: 


Bostrom, Nick; de Garis, Hugo; Diamandis, Peter; Digital Immortality; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Post-Scarcity, AI and; Superintelligence.


References And Further Reading


Bostrom, Nick. 2014. Superintelligence: Path, Dangers, Strategies. Oxford, UK: Oxford University Press.

Chalmers, David. 2010. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17: 7–65.

Eden, Amnon H. 2016. The Singularity Controversy. Sapience Project. Technical Report STR 2016-1. January 2016.

Eden, Amnon H., Eric Steinhart, David Pearce, and James H. Moor. 2012. “Singularity Hypotheses: An Overview.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, 1–12. Heidelberg, Germany: Springer.

Good, I. J. 1966. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers 6: 31–88.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Sandberg, Anders, and Nick Bostrom. 2008. Global Catastrophic Risks Survey. Technical Report #2008/1. Oxford University, Future of Humanity Institute.

Shanahan, Murray. 2015. The Technological Singularity. Cambridge, MA: The MIT Press.

Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.


AI - What Is Superintelligence AI? Is Artificial Superintelligence Possible?

 


 

In its most common use, the phrase "superintelligence" refers to any degree of intelligence that at least equals, and typically exceeds, human intellect in a broad sense.


Though computer intelligence has long outperformed natural human cognition on specific tasks—for example, a calculator's ability to execute arithmetic swiftly—such systems are not usually considered superintelligent in the strict sense because of their limited functional range.


In this sense, superintelligence would necessitate, in addition to artificial mastery of specific theoretical tasks, some kind of additional mastery of what has traditionally been referred to as practical intelligence: a generalized sense of how to subsume particulars into universal categories that are in some way worthwhile.


To this day, no such generalized superintelligence has manifested, and hence all discussions of superintelligence remain speculative to some degree.


Whereas traditional theories of superintelligence have been limited to theoretical metaphysics and theology, recent advancements in computer science and biotechnology have opened up the prospect of superintelligence being materialized.

Although the timing of such evolution is hotly discussed, a rising body of evidence implies that material superintelligence is both possible and likely.


If this hypothesis proves right, it will almost certainly be the result of advances in one of two major areas of AI research:


  1. Bioengineering 
  2. Computer science





The former involves efforts not only to map out and manipulate human DNA, but also to copy the human brain electronically through whole brain emulation, also known as mind uploading.


The first of these bioengineering efforts is not new, with eugenics programs reaching back to the seventeenth century at the very least.

Despite the major ethical and legal issues that always emerge as a result of such efforts, the discovery of DNA in the twentieth century, together with advances in genome mapping, has rekindled interest in eugenics.

Much of this study is aimed at gaining a better understanding of the human brain's genetic composition in order to manipulate DNA code in the direction of superhuman intelligence.



Uploading is a somewhat different, but still biologically based, approach to superintelligence that aims to map out neural networks in order to successfully transfer human intelligence onto computer interfaces.


  • The brains of insects and tiny animals are micro-dissected and then scanned for thorough computer analysis in this relatively new area of study.
  • The underlying premise of whole brain emulation is that if the brain's structure is better known and mapped, it may be able to copy it with or without organic brain tissue.



Despite the fast growth of both genetic mapping and whole brain emulation, both techniques have significant limits, making it less likely that any of these biological approaches will be the first to attain superintelligence.





The genetic alteration of the human genome, for example, is limited by generational timescales.

Even if it were now feasible to artificially boost cognitive functioning by modifying the DNA of a human embryo (which is still a long way off), it would take an entire generation for the changed embryo to evolve into a fully fledged, superintelligent human person.

Such a scenario also presupposes that there are no legal or moral barriers to manipulating human DNA, which is far from the case.

Even the comparatively minor genetic manipulation of human embryos carried out by a Chinese scientist as recently as November 2018 sparked international outrage (Ramzy and Wee 2019).



Whole brain emulation, on the other hand, is still a long way off, owing to biotechnology's limits.


Given the current medical technology, the extreme levels of accuracy necessary at every step of the uploading process are impossible to achieve.

Science and technology currently lack the capacity to dissect and scan human brain tissue with sufficient precision to produce full brain simulation results.

Furthermore, even if such first steps are feasible, researchers would face significant challenges in analyzing and digitally replicating the human brain using cutting-edge computer technology.




Many analysts believe that such constraints will be overcome, although the timeline for such realizations is unknown.



Apart from biotechnology, the area of AI, which is strictly defined as any type of nonorganic (particularly computer-based) intelligence, is the second major path to superintelligence.

Of course, the work of creating a superintelligent AI from the ground up is complicated by a number of elements, not all of which are purely logistical in nature, such as processing speed, hardware/software design, finance, and so on.

In addition to such practical challenges, there is a significant philosophical issue: human programmers are unable to know, and so cannot program, that which is superior to their own intelligence.





Much contemporary research on machine learning, and interest in the notion of a seed AI, is motivated in part by this worry.


The latter is defined as any machine capable of modifying its responses to stimuli based on an assessment of how well it performs relative to a predetermined objective.

Importantly, the concept of a seed AI entails not only the capacity to change its replies by extending its base of content knowledge (stored information), but also the ability to change the structure of its programming to better fit a specific job (Bostrom 2017, 29).

Indeed, it is this latter capability that would give a seed AI what Nick Bostrom refers to as "recursive self-improvement," or the ability to evolve iteratively (Bostrom 2017, 29).

This would eliminate the requirement for programmers to have an a priori vision of super intelligence since the seed AI would constantly enhance its own programming, with each more intelligent iteration writing a superior version of itself (beyond the human level).
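No real seed AI exists, so the feedback loop just described can only be shown with a toy stand-in. The hill-climbing sketch below, with an invented function name and objective, keeps only those random changes to a numeric "program parameter" that score better against a fixed objective; a genuine seed AI would rewrite its own program structure, not merely tune a number:

```python
import random

def seed_ai_toy(objective, candidate, rounds=500):
    """Toy stand-in for a seed AI's self-improvement loop (not real
    self-modifying code): mutate a numeric parameter and keep only the
    mutations that improve performance against the objective."""
    best, best_score = candidate, objective(candidate)
    for _ in range(rounds):
        mutant = best + random.gauss(0, 0.1)   # small random change
        score = objective(mutant)
        if score > best_score:                 # keep only improvements
            best, best_score = mutant, score
    return best

# Objective: land as close to 3.0 as possible, starting from 0.0.
tuned = seed_ai_toy(lambda x: -abs(x - 3.0), candidate=0.0)
```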

Such a machine would certainly cast doubt on the conventional philosophical assumption that machines are incapable of self-awareness.

This perspective's proponents may be traced all the way back to Descartes, but they also include more current thinkers like John Haugeland and John Searle.



Machine intelligence, in this perspective, is defined as the successful correlation of inputs with outputs according to a predefined program.




As a result, machines differ from humans in kind, the latter alone being characterized by conscious self-awareness.

Humans are supposed to comprehend the activities they execute, but robots are thought to carry out functions mindlessly—that is, without knowing how they work.

Should a successful seed AI be constructed, this core idea would be directly challenged.

By upgrading its own programming in ways that surprise and defy the forecasts of its human programmers, a seed AI would demonstrate a level of self-awareness and autonomy not readily explained by the Cartesian philosophical paradigm.

Indeed, although it is still speculative (for the time being), the increasingly possible result of superintelligent AI poses a slew of moral and legal dilemmas that have sparked a lot of philosophical discussion in this subject.

The main worries concern the human species' security in the event of what Bostrom calls an "intelligence explosion"—that is, the creation of a seed AI followed by a possibly exponential growth in intelligence (Bostrom 2017).



One of the key problems is the inherently unpredictable character of such an outcome.


Because superintelligence by definition entails autonomy, humans will not be able to fully foresee how a superintelligent AI would act.

Even in the few cases of specialized superintelligence that humans have been able to construct and study so far—for example, programs that have surpassed humans in strategic games such as chess and Go—human forecasts about AI have proven very unreliable.

For many critics, such unpredictability is a significant indicator that, should more generic types of superintelligent AI emerge, humans would swiftly lose their capacity to manage them (Kissinger 2018).





Of course, such a loss of control does not automatically imply an adversarial relationship between humans and superintelligence.


Indeed, although most of the literature on superintelligence portrays this relationship as adversarial, some new work claims that this perspective reveals a prejudice against machines that is particularly prevalent in Western cultures (Knight 2014).

Nonetheless, there are compelling grounds to believe that superintelligent AI would at the very least consider human goals as incompatible with their own, and may even regard humans as existential dangers.

For example, computer scientist Steve Omohundro has claimed that even a relatively basic kind of superintelligent AI like a chess bot would have motive to want the extinction of humanity as a whole—and may be able to build the tools to do it (Omohundro 2014).

Similarly, Bostrom has claimed that a superintelligence explosion would most certainly result in, if not the extinction of the human race, then at the very least a gloomy future (Bostrom 2017).

Whatever the merits of such theories, the great uncertainty entailed by superintelligence is obvious.

If there is one point of agreement in this large and diverse literature, it is that if AI research is to continue, the global community must take great care to protect its interests.





Hardened determinists who claim that technological advancement is so tightly connected to inflexible market forces that it is simply impossible to change its pace or direction in any major manner may find this statement contentious.


According to this determinist viewpoint, if AI can deliver cost-cutting solutions for industry and commerce (as it has already started to do), its growth will proceed into the realm of superintelligence, regardless of any unexpected negative repercussions.

Many skeptics argue that growing societal awareness of the potential risks of AI, as well as thorough political monitoring of its development, are necessary counterpoints to such viewpoints.


Bostrom highlights various examples of effective worldwide cooperation in science and technology as crucial precedents that challenge the determinist approach, including CERN, the Human Genome Project, and the International Space Station (Bostrom 2017, 253).

To this, one may add examples from the worldwide environmental movement, which began in the 1960s and 1970s and has imposed significant restrictions on pollution committed in the name of uncontrolled capitalism (Feenberg 2006).



Given the speculative nature of superintelligence research, it is hard to predict what the future holds.

However, if superintelligence poses an existential danger to human existence, caution would dictate that a worldwide collaborative strategy rather than a free market approach to AI be used.






See also: 


Berserkers; Bostrom, Nick; de Garis, Hugo; General and Narrow AI; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Musk, Elon; Technological Singularity; Yudkowsky, Eliezer.



References And Further Reading


  • Bostrom, Nick. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
  • Feenberg, Andrew. 2006. “Environmentalism and the Politics of Technology.” In Questioning Technology, 45–73. New York: Routledge.
  • Kissinger, Henry. 2018. “How the Enlightenment Ends.” The Atlantic, June 2018. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
  • Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy Through Good Design. Washington, DC: The Project on Civilian Robotics. Brookings Institution.
  • Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 303–15.
  • Ramzy, Austin, and Sui-Lee Wee. 2019. “Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China.” The New York Times, January 21, 2019



Artificial Intelligence - Who Is Hugo de Garis?

 


Hugo de Garis (1947–) is an expert in genetic algorithms, artificial intelligence, and topological quantum computing.

He is the creator of the concept of evolvable hardware, which uses evolutionary algorithms to produce customized electronics that can alter structural design and performance dynamically and autonomously in response to their surroundings.

De Garis is best known for his 2005 book The Artilect War, in which he describes what he thinks will be an unavoidable twenty-first-century worldwide war between mankind and ultraintelligent machines.

In the 1980s, de Garis became fascinated with genetic algorithms, neural networks, and the idea of artificial brains.

In artificial intelligence, genetic algorithms use software to model and apply Darwinian evolutionary ideas to search and optimization problems.

The "fittest" candidate simulations of axons, dendrites, signals, and synapses in artificial neural networks were evolved using evolutionary algorithms developed by de Garis.
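De Garis's chip-based systems were far more elaborate, but the core selection, crossover, and mutation loop of a genetic algorithm can be sketched in a few lines. The bit-string "OneMax" objective below (maximize the number of 1-bits) is a standard textbook toy, not anything de Garis used:

```python
import random

def genetic_algorithm(fitness, genome_len=10, pop_size=30, generations=60):
    """Minimal genetic algorithm sketch: evolve bit-string genomes toward
    higher fitness via truncation selection, one-point crossover, and
    occasional single-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # mutation: flip one bit
                i = random.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax" toy objective: the fittest genome is all ones.
best = genetic_algorithm(fitness=sum)
```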

De Garis developed artificial neural systems that resembled those seen in organic brains.

In the 1990s, his work with a new type of programmable computer chip spawned the field of computer science known as evolvable hardware.

The use of programmable circuits allowed neural networks to grow and evolve at high rates.

De Garis also started playing around with cellular automata, which are mathematical models of complex systems that emerge generatively from basic units and rules.

An early version of his cellular automata models of brain-like networks required coding around 11,000 fundamental rules.

About 60,000 such rules were encoded in a subsequent version.
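As a much smaller illustration of the principle, the sketch below (a toy example, unrelated to de Garis's actual rule sets) implements an elementary one-dimensional cellular automaton: each cell's next state is a fixed function of itself and its two neighbors. Rule 110 is a well-known case where a simple local rule produces strikingly complex global patterns.

```python
RULE = 110  # the 8-bit lookup table for the local rule, encoded as an integer

def step(cells):
    # Next state of each cell depends only on its left neighbor, itself,
    # and its right neighbor (with wraparound at the edges).
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

De Garis's systems encoded thousands of such local rules, chosen so that the evolving grid behaved like a signaling neural network rather than an abstract pattern generator.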

De Garis called his neural networks-on-a-chip a Cellular Automata Machine in the 2000s.

De Garis began to hypothesize that the era of "Brain Building on the Cheap" had arrived as the price of chips dropped (de Garis 2005, 45).

He began referring to himself as the "Father of the Artificial Brain." He claims that in the coming decades, whole artificial brains with billions of neurons will be built using knowledge gained from molecular-scale robot probes of human brain tissue and from pathbreaking new brain-imaging tools.

Topological quantum computing is another enabling technology that de Garis thinks will accelerate the creation of artificial brains.

He claims that once the physical limits of conventional silicon chip manufacturing are reached, quantum mechanical phenomena must be harnessed.

Inventions in reversible, heatless computing will also be important for mitigating the harmful heating effects of densely packed circuits.

De Garis also supports the development of artificial embryology, which he calls "embryofacture": the use of evolutionary engineering and self-assembly methods to mimic the development of fully aware beings from single fertilized eggs.

According to de Garis, rapid breakthroughs in artificial intelligence technology will make a conflict over humanity's last invention unavoidable before the end of the twenty-first century.

He believes the conflict will end in a catastrophic human extinction event he calls "gigadeath." In The Artilect War, de Garis speculates that the continued Moore's Law doubling of transistors packed onto computer chips, together with new technologies such as femtotechnology (the femtometer-scale structuring of matter), quantum computing, and neuroengineering, will almost certainly lead to gigadeath.

De Garis felt compelled to write The Artilect War as a cautionary tale, cast as a self-admitted architect of the impending calamity.

De Garis frames his discussion of the coming Artilect War around two antagonistic global political factions: the Cosmists and the Terrans.

The Cosmists will be apprehensive about the immense power of future superintelligent machines, yet will regard their labor in creating them with such veneration that they will feel a near-messianic enthusiasm for inventing them and unleashing them on the world.

Regardless of the hazards to mankind, the Cosmists will strongly encourage the development and nurturing of ever-more sophisticated and powerful artificial minds.

The Terrans, on the other hand, will fight against the creation of artificial minds once they realize they represent a danger to human civilization.

They will feel compelled to fight these artificial intelligences because they constitute an existential danger to humanity.

De Garis dismisses a Cyborgian compromise in which humans and their technological creations blend.

He thinks that robots will grow so powerful and intelligent that only a small percentage of humanity would survive the confrontation.

Geopolitical adversaries China and the United States will be forced to exploit these technologies to develop more complex and autonomous economies, defense systems, and military robots.

The Cosmists will welcome artificial intelligence's dominance of the world and come to see the machines as near-gods worthy of worship.

The Terrans, on the other hand, will fight the transfer of global economic, social, and military dominance to our machine overlords.

They will see the new situation as a terrible tragedy that has befallen humanity.

His case for a future war over superintelligent machines has sparked considerable debate among scientific and engineering specialists and drawn criticism in popular science journals.

In his 2005 book, de Garis implicates himself as a cause of the approaching conflict and as a hidden Cosmist, prompting some opponents to question his intentions.

De Garis has answered that he feels compelled to issue his warning now, while there is still time for the public to grasp the full magnitude of the danger and react as they begin to discover substantial intelligence hidden in household devices.

Assuming his warning is taken seriously, de Garis sketches a variety of ways the conflict might play out.

First, he suggests that the Terrans may defeat Cosmist thinking before a superintelligence takes control, though he considers this unlikely.

De Garis suggests a second scenario in which the artilects dismiss Earth as irrelevant and depart, leaving human civilization more or less intact.

In a third possibility, the Cosmists grow so terrified of their own innovations that they abandon them.

Again, de Garis believes this is improbable.

In a fourth possibility, he imagines that all Terrans would transform into Cyborgs.

In a fifth scenario, the Terrans will aggressively hunt down and kill the Cosmists, perhaps even in outer space.

The Cosmists will leave Earth, construct artilects, and ultimately vanish from the solar system to conquer the cosmos in a sixth scenario.

In a seventh possibility, the Cosmists will flee to space and construct artilects that will fight each other until none remain.

In the eighth scenario, the artilects will go to space and be destroyed by an alien super-artilect.

De Garis has been criticized for assuming that The Terminator's nightmarish vision will become reality rather than considering that superintelligent computers might just as well bring world peace.

De Garis answered that there is no way to ensure that artificial brains operate ethically (humanely).

He also claims it is difficult to predict whether a superintelligence could bypass an implanted kill switch or reprogram itself to disobey orders intended to instill respect for humans.

Hugo de Garis was born in 1947 in Sydney, Australia.

In 1970, he graduated from Melbourne University with a bachelor's degree in Applied Mathematics and Theoretical Physics.

He joined the global electronics corporation Philips as a software and hardware architect after teaching undergraduate mathematics at Cambridge University for four years.

He worked at locations in the Netherlands and Belgium.

In 1992, De Garis received a doctorate in Artificial Life and Artificial Intelligence from the Université Libre de Bruxelles in Belgium.

"Genetic Programming: GenNets, Artificial Nervous Systems, Artificial Embryos," was the title of his thesis.

As a graduate student, de Garis directed the Center for Data Analysis and Stochastic Processes at the Artificial Intelligence and Artificial Life Research Unit in Brussels, where he explored evolutionary engineering: the use of genetic algorithms to develop complex systems.

He also worked as a senior research associate at George Mason University's Artificial Intelligence Center in Northern Virginia, where he worked with machine learning pioneer Ryszard Michalski.

De Garis did a postdoctoral fellowship at Tsukuba's Electrotechnical Lab.

He then directed the Brain Builder Group at the Advanced Telecommunications Research Institute International in Kyoto, Japan, for eight years, where the team pursued a moon-shot effort to build a billion-neuron artificial brain.

De Garis returned to Brussels, Belgium, in 2000 to oversee Star Lab's Brain Builder Group, which was working on a rival artificial brain project.

When the dot-com bubble burst in 2001, De Garis' lab went bankrupt while working on a life-size robot cat.

De Garis then moved on to Utah State University as an Associate Professor of Computer Science, where he stayed until 2006.

De Garis was the first to teach advanced research courses on "brain building" and "quantum computing" at Utah State.

He joined Wuhan University's International School of Software in China as Professor of Computer Science and Mathematical Physics in 2006, where he also served as the leader of the Artificial Intelligence group.

De Garis kept working on artificial brains, but he also started looking into topological quantum computing.

De Garis joined the advisory board of Novamente, a commercial business that aims to develop artificial general intelligence, in the same year.

Two years later, Chinese authorities gave his Wuhan University Brain Builder Group a significant funding to begin building an artificial brain.

The China-Brain Project was the name given to the initiative.

De Garis relocated to Xiamen University in China in 2008, where he ran the Artificial Brain Lab in the School of Information Science and Technology's Artificial Intelligence Institute until his retirement in 2010.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Superintelligence; Technological Singularity; The Terminator.


Further Reading:


de Garis, Hugo. 1989. “What If AI Succeeds? The Rise of the Twenty-First Century Artilect.” AI Magazine 10, no. 2 (Summer): 17–22.

de Garis, Hugo. 1990. “Genetic Programming: Modular Evolution for Darwin Machines.” In Proceedings of the International Joint Conference on Neural Networks, 194–97. Washington, DC: Lawrence Erlbaum.

de Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications.

de Garis, Hugo. 2007. “Artificial Brains.” In Artificial General Intelligence: Cognitive Technologies, edited by Ben Goertzel and Cassio Pennachin, 159–74. Berlin: Springer.

Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76, no. 1 (March): 138–66.

Spears, William M., Kenneth A. De Jong, Thomas Bäck, David B. Fogel, and Hugo de Garis. 1993. “An Overview of Evolutionary Computation.” In Machine Learning: ECML-93, Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 667, 442–59. Berlin: Springer.

