
Artificial Intelligence - Who Is Ray Kurzweil (1948–)?




Ray Kurzweil is a futurist and inventor from the United States.

He spent the first half of his career developing the first CCD flat-bed scanner, the first omni-font optical character recognition device, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed, large-vocabulary speech recognition machine.

He has earned several awards for his contributions to technology, including the Technical Grammy Award in 2015 and the National Medal of Technology.

Kurzweil is the cofounder and chancellor of Singularity University, as well as the director of engineering at Google, where he leads a team that works on artificial intelligence and natural language processing.

Singularity University is a non-accredited graduate school founded on the premise that great challenges such as renewable energy and space travel can be tackled through a deep understanding of the opportunities created by the current acceleration of technological progress.

The university, which is headquartered in Silicon Valley, has evolved to include one hundred chapters in fifty-five countries, delivering seminars, educational programs, and business acceleration programs.

While at Google, Kurzweil published the book How to Create a Mind (2012).

In his Pattern Recognition Theory of Mind, he claims that the neocortex is a hierarchical system of pattern recognizers.

Kurzweil claims that replicating this design in machines might lead to the creation of artificial superintelligence.

He believes that by doing so, he will be able to bring natural language comprehension to Google.
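The hierarchical idea can be sketched in a few lines of code. The toy Python example below is purely illustrative and is not Kurzweil's actual model: hypothetical low-level recognizers fire on stroke patterns, and a higher-level recognizer fires when the expected sequence of lower-level outputs appears.

```python
# Toy hierarchy of pattern recognizers (illustrative only, not Kurzweil's model).
class Recognizer:
    def __init__(self, name, pattern):
        self.name = name
        self.pattern = list(pattern)   # the input sequence this unit expects

    def fires(self, inputs):
        return list(inputs) == self.pattern

# Lower level: recognizers for letter shapes, fed by stroke labels.
letter_a = Recognizer("A", ["/", "\\", "-"])
letter_t = Recognizer("T", ["-", "|"])

# Higher level: a word recognizer fed by the outputs of the letter level.
word_at = Recognizer("AT", ["A", "T"])

strokes = [["/", "\\", "-"], ["-", "|"]]      # raw input: strokes for two letters
letters = [r.name for s in strokes
           for r in (letter_a, letter_t) if r.fires(s)]
print(letters, word_at.fires(letters))        # ['A', 'T'] True
```

Kurzweil's proposal stacks many such layers, with each level recognizing patterns in the outputs of the level below.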

Kurzweil's popularity stems from his work as a futurist.

Futurists are people who specialize in, or take a strong interest in, the near-to-long-term future and its associated topics.

They use well-established methodologies like scenario planning to carefully examine forecasts and construct future possibilities.

Kurzweil is the author of five national best-selling books, including the New York Times best-seller The Singularity Is Near (2005).

He has an extensive list of forecasts.

In his first book, The Age of Intelligent Machines (1990), Kurzweil predicted the enormous growth of international internet use in the second half of the decade.

In his influential second book, The Age of Spiritual Machines (where "spiritual" stands for "aware"), published in 1999, he correctly predicted that computers would soon exceed humans at making the best investment decisions.

In the same book, Kurzweil prophesied that computers would one day "appear to have their own free will" and perhaps even have "spiritual experiences" (Kurzweil 1999, 6).

He also predicted that the barriers between humans and machines would dissolve to the point that people would effectively live forever as combined human-machine hybrids.

Scientists and philosophers have criticized Kurzweil's forecast of a sentient computer, arguing that consciousness cannot be produced by computation alone.

Kurzweil tackles the phenomenon of the Technological Singularity in his third book, The Singularity Is Near.

The famous mathematician John von Neumann was the first to apply the term singularity to technological change.

In a 1950s conversation with his colleague Stanislaw Ulam, von Neumann observed that the ever-accelerating pace of technological progress "appears to be reaching some essential singularity in the history of the race beyond which human activities as we know them could not continue" (Ulam 1958, 5).

To put it another way, technological development would alter the course of human history.

Vernor Vinge, a computer scientist, mathematics professor, and science fiction writer, revived the term in his 1993 article "The Coming Technological Singularity," in which technological progress is defined more specifically as growth in computing power.

Vinge investigates the idea of a self-improving artificial intelligence agent.

According to this theory, the artificial intelligent agent continues to update itself and grow technologically at an unfathomable pace, eventually resulting in the birth of a superintelligence—that is, an artificial intelligence that far exceeds all human intelligence.

In Vinge's apocalyptic vision, robots first become autonomous, then superintelligent, to the point where humans lose control of technology and machines seize control of their own fate.

Because the technology would be more intelligent than humans, machines would rule the planet.

According to Vinge, the Singularity is the end of the human age.

Kurzweil presents an anti-dystopian perspective on the Singularity.

Kurzweil's core premise is that humans can develop something smarter than themselves; in fact, exponential advances in computer power make the creation of an intelligent machine all but inevitable, to the point that the machine will surpass humans in intelligence.

Kurzweil believes that machine intelligence and human intelligence will converge at that point.

The subtitle of The Singularity Is Near is When Humans Transcend Biology, which is no coincidence.

Kurzweil's overarching vision is based on discontinuity: no lesson from the past, or even the present, can aid humans in determining the way to the future.

This also explains why new types of education, such as Singularity University, are required.

Every sentimental look back at history, every attachment to the past, leaves humans more vulnerable to technological change.

With the arrival of a new superintelligent, almost immortal race, history as a human construct will soon come to an end.

These immortals, the next phase in human development, are known as posthumans.

Kurzweil believes that posthumanity will be made up of sentient robots rather than people with mechanical bodies.

He claims that the future should be formed on the assumption that mankind is in the midst of an extraordinary period of technological advancement.

The Singularity, he believes, would elevate humanity beyond its wildest dreams.

While Kurzweil claims that artificial intelligence already outpaces human intelligence on certain tasks, he acknowledges that the moment of superintelligence, also known as the Technological Singularity, has not yet arrived.

He believes that those who embrace the new age of human-machine synthesis, and who dare to push beyond evolution's boundaries, will view humanity's future as a positive one.




Jai Krishna Ponnappan





See also: 


General and Narrow AI; Superintelligence; Technological Singularity.



Further Reading:




Kurzweil, Ray. 1990. The Age of Intelligent Machines. Cambridge, MA: MIT Press.

Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Penguin.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.



 

Artificial Intelligence - Who Is Ben Goertzel (1966–)?


Ben Goertzel is the founder and CEO of SingularityNET, a blockchain AI company; chairman of Novamente LLC; a research professor at Xiamen University's Fujian Key Lab for Brain-Like Intelligent Systems; chief scientist of Mozi Health and of Hanson Robotics in Shenzhen, China; and chair of the OpenCog Foundation, Humanity+, and the Artificial General Intelligence Society conference series.

Goertzel has long wanted to create a good artificial general intelligence and use it in bioinformatics, finance, gaming, and robotics.

He claims that, despite AI's current popularity, it is already superior to human specialists in a number of domains.

Goertzel divides AI advancement into three stages, each of which represents a step toward a global brain (Goertzel 2002, 2):

• the intelligent Internet

• the full-fledged Singularity

In 2019, Goertzel presented a lecture titled "Decentralized AI: The Power and the Necessity" at TEDxBerkeley.

He examines artificial intelligence in its present form as well as its future in this discussion.

"The relevance of decentralized control in leading AI to the next stages, the strength of decentralized AI," he emphasizes (Goertzel 2019a).

In the evolution of artificial intelligence, Goertzel distinguishes three types: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Artificial narrow intelligence refers to machines that can "address extremely specific issues... better than humans" (Goertzel 2019a).

In certain restricted activities, such as chess and Go, this kind of AI has outperformed a human.

Ray Kurzweil, the American futurist and inventor, coined the phrase "narrow AI."

Artificial general intelligence (AGI) refers to intelligent computers that can "generate knowledge" in a variety of fields and have "humanlike autonomy."

By 2029, according to Goertzel, this kind of AI will have reached the same level of intelligence as humans.

Artificial superintelligence (ASI) builds on both narrow and general AI, but it can also reprogram itself.



By 2045, he claims, this kind of AI will be smarter than the finest human brains in terms of "scientific innovation, general knowledge, and social abilities" (Goertzel 2019a).

According to Goertzel, Facebook, Google, and a number of colleges and companies are all actively working on AGI.

He expects the shift from narrow AI to AGI to occur within the next five to thirty years.

Goertzel is also interested in artificial intelligence-assisted life extension.

He thinks that artificial intelligence's exponential advancement will lead to technologies that extend human life span and health indefinitely.

He predicts that by 2045, a singularity featuring a drastic increase in "human health span" will have occurred (Goertzel 2012).

Vernor Vinge popularized the term "singularity" in his 1993 article "The Coming Technological Singularity," and Ray Kurzweil brought it to a mass audience in his 2005 book The Singularity Is Near.

The Technological Singularity, according to both writers, is the merging of machine and human intellect as a result of a fast development in new technologies, particularly robots and AI.

The thought of an impending singularity excites Goertzel.

SingularityNET is his major current initiative, which entails the construction of a worldwide network of artificial intelligence researchers interested in developing, sharing, and monetizing AI technology, software, and services.

By developing a decentralized protocol that enables a full stack AI solution, Goertzel has made a significant contribution to this endeavor.

SingularityNET, as a decentralized marketplace, provides a variety of AI technologies, including text generation, AI Opinion, iAnswer, Emotion Recognition, Market Trends, OpenCog Pattern Miner, and its own cryptocurrency, the AGI token.

SingularityNET is presently cooperating with Domino's Pizza in Malaysia and Singapore (Khan 2019).



Domino's is interested in leveraging SingularityNET technologies to design a marketing plan, with the goal of providing the finest products and services to its consumers via the use of unique algorithms.

Domino's thinks that by incorporating the AGI ecosystem into their operations, they will be able to provide value and service in the food delivery market.

Goertzel has responded to the physicist Stephen Hawking's warning that AI might lead to the end of human civilization.

Given the current situation, an artificial superintelligence's mental state will be shaped by earlier generations of AI, and thus "selling, spying, murdering, and gambling are the key aims and values in the mind of the first superintelligence," according to Goertzel (Goertzel 2019b).

He acknowledges that if humans desire compassionate AI, they must first improve their own treatment of one another.

For four years, Goertzel worked for Hanson Robotics in Hong Kong.

There he collaborated on three well-known robots: Sophia, Einstein, and Han.

"Great platforms for experimenting with AI algorithms, including cognitive architectures like OpenCog that aim at human-level AI," he added of the robots (Goertzel 2018).

Goertzel argues that essential human values may be retained for future generations in Sophia-like robot creatures after the Technological Singularity.

Decentralized networks like SingularityNET and OpenCog, according to Goertzel, provide "AIs with human-like values," reducing AI hazards to humanity (Goertzel 2018).

Because human values are complex in nature, Goertzel believes that encoding them as a list of rules is ineffective.

Goertzel proposes two alternatives: brain-computer interfacing (BCI) and emotional interfacing.

In the first, humans become "cyborgs," their brains physically linked to computational-intelligence modules, and the machine components read the moral-value-evaluation structures of the human mind directly from the biological components (Goertzel 2018).

Goertzel uses Elon Musk's Neuralink as an example.

Because it entails invasive trials with human brains and a lot of unknowns, Goertzel doubts that this strategy will succeed.

"Emotional and spiritual connections between people and AIs, rather than Ethernet cables or Wifi signals, are used to link human and AI brains," according to the second method (Goertzel 2018).

To teach AIs human values, he proposes that they engage in emotional and social connection with humans via facial expression detection and mirroring, eye contact, and voice-based emotion recognition.

To that end, Goertzel collaborated with SingularityNET, Hanson AI, and Lia Inc. on the "Loving AI" research project, which aims to help artificial intelligences converse with and form intimate relationships with humans.

A humorous video of actor Will Smith on a date with Sophia the Robot is currently available on the Loving AI website.

Sophia can already make sixty facial expressions and understand human language and emotions, according to the video of the date.

When linked to a network like SingularityNET, humanoid robots like Sophia obtain "ethical insights and breakthroughs... via language," according to Goertzel (Goertzel 2018).

Then, through a shared internet "mindcloud," robots and AIs may share what they've learnt.

Goertzel also chairs the Artificial General Intelligence Society's conference series on artificial general intelligence, held annually since 2008.

The society publishes the peer-reviewed, open-access Journal of Artificial General Intelligence, and Goertzel edits the conference proceedings series.


Jai Krishna Ponnappan




See also: 

General and Narrow AI; Superintelligence; Technological Singularity.


Further Reading:


Goertzel, Ben. 2002. Creating Internet Intelligence: Wild Computing, Distributed Digital Consciousness, and the Emerging Global Brain. New York: Springer.

Goertzel, Ben. 2012. “Radically Expanding the Human Health Span.” TEDxHKUST. https://www.youtube.com/watch?v=IMUbRPvcB54.

Goertzel, Ben. 2017. “Sophia and SingularityNET: Q&A.” H+ Magazine, November 5, 2017. https://hplusmagazine.com/2017/11/05/sophia-singularitynet-qa/.

Goertzel, Ben. 2018. “Emotionally Savvy Robots: Key to a Human-Friendly Singularity.” https://www.hansonrobotics.com/emotionally-savvy-robots-key-to-a-human-friendly-singularity/.

Goertzel, Ben. 2019a. “Decentralized AI: The Power and the Necessity.” TEDxBerkeley, March 9, 2019. https://www.youtube.com/watch?v=r4manxX5U-0.

Goertzel, Ben. 2019b. “Will Artificial Intelligence Kill Us?” July 31, 2019. https://www.youtube.com/watch?v=TDClKEORtko.

Goertzel, Ben, and Stephan Vladimir Bugaj. 2006. The Path to Posthumanity: 21st Century Technology and Its Radical Implications for Mind, Society, and Reality. Bethesda, MD: Academica Press.

Khan, Arif. 2019. “SingularityNET and Domino’s Pizza Announce a Strategic Partnership.” https://blog.singularitynet.io/singularitynet-and-dominos-pizza-announce-a-strategic-partnership-cbbe21f80fc7.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.





Artificial Intelligence - General and Narrow Categories of AI






There are two types of artificial intelligence: general (also called strong or full) and narrow (also called weak or specialized).

General AI of the kind seen in science fiction does not yet exist in the real world.

Machines with general intelligence would be capable of completing every intellectual task that humans can.

Such a system would think in abstract terms, make connections, and express innovative ideas in the same way people do, displaying the ability to reason and solve unfamiliar problems.



Such a computer would be capable of thinking, planning, and recalling information from the past.

While the aim of general AI has yet to be achieved, there are more and more instances of narrow AI.

These are machines that perform at human (or even superhuman) levels on certain tasks.

Computers that have learnt to play complicated games have abilities, techniques, and behaviors that are comparable to, if not superior to, those of the most skilled human players.

AI systems have also been developed that can translate between languages in real time, interpret and respond to natural speech (both spoken and written), and recognize images (identifying and sorting photos based on their content).

However, the ability to generalize knowledge or skills is still largely a human accomplishment.

Nonetheless, there is a lot of work being done in the field of general AI right now.

It will be difficult to determine when a computer develops human-level intelligence.

Several tests, some serious and some humorous, have been suggested to determine whether a computer has reached the level of general AI.

The Turing Test is arguably the most famous of these.

In the test, a machine and a person each hold a text conversation with a human judge who cannot see them.

The judge must determine which conversation partner is the machine and which is the human.

The machine passes the test if it can fool the judge a prescribed percentage of the time.
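The pass criterion can be made concrete with a minimal sketch. The `judge_spots_machine` stand-in and the 30 percent threshold below are assumptions chosen for illustration, not figures from Turing's paper.

```python
import random

def judge_spots_machine():
    # Stand-in for a human judge's verdict after one conversation.
    # In a real test a person would decide which speaker is the machine;
    # here we simulate a judge who guesses correctly 60% of the time.
    return random.random() < 0.60

def run_turing_test(n_trials=1000, pass_threshold=0.30):
    # The machine "passes" if it fools the judge in at least
    # pass_threshold of the conversations.
    fooled = sum(1 for _ in range(n_trials) if not judge_spots_machine())
    fool_rate = fooled / n_trials
    return fool_rate, fool_rate >= pass_threshold

rate, passed = run_turing_test()
print(f"Fooled the judge {rate:.0%} of the time; passed: {passed}")
```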

The Coffee Test is a more fantastical test in which a machine enters a typical household and brews coffee.



It has to find the coffee machine, locate the coffee, add water, brew the coffee, and pour it into a cup.

Another is the Flat Pack Furniture Test, which involves a machine receiving, unpacking, and assembling a piece of furniture based only on the instructions supplied.

Some scientists, as well as many science fiction writers and fans, believe that once intelligent machines reach a tipping point, they will be able to improve exponentially.

AI-based beings that far exceed human capabilities might be one conceivable result.

The point at which AI takes control of its own self-improvement is called the Singularity, and its product is artificial superintelligence (ASI).

If ASI is achieved, it will have unforeseeable consequences for human society.

Some pundits worry that ASI would jeopardize humanity's safety and dignity.

Whether the Singularity will ever happen, and how dangerous it might be, remain matters of dispute.

Narrow AI applications are becoming more popular across the globe.

Machine learning (ML) is at the heart of most new applications, and most AI examples in the news are connected to this subset of technology.

Traditional or conventional algorithms are not the same as machine learning programs.

In programs that cannot learn, a computer programmer actively adds code to account for every action of an algorithm.

All of the decisions made along the process are governed by the programmer's guidelines.

This necessitates the programmer imagining and coding for every possible circumstance that an algorithm may face.

This kind of program code is bulky and often inadequate, especially as it must be updated frequently to account for new or unanticipated scenarios.

The utility of hard-coded algorithms reaches its limit in cases where the criteria for optimal decisions are unclear or impossible for a human programmer to foresee.
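A hypothetical hand-coded spam filter makes the limitation concrete: every rule must be imagined by the programmer in advance, and each unanticipated trick requires yet another branch. The rules and messages here are invented for illustration.

```python
def is_spam(message):
    # Every condition must be anticipated and written out by hand.
    text = message.lower()
    if "winner" in text and "claim" in text:
        return True
    if text.count("!") > 5:
        return True
    if "free money" in text:
        return True
    # A spammer who writes "fr3e m0ney" slips through until a
    # programmer adds yet another rule to cover that spelling.
    return False

print(is_spam("You are a WINNER! Claim your prize!"))  # True
print(is_spam("Get fr3e m0ney now"))                   # False: uncovered case
```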

Machine learning is the process of training a computer to detect and identify patterns via examples rather than predefined rules.



This is achieved, according to Google engineer Jason Mayes, by reviewing extremely large quantities of training data or engaging in some other kind of programmed learning step.

New patterns are extracted as the training data is processed.

The system can then classify previously unseen data based on the patterns it has already found.

Machine learning allows an algorithm to recognize patterns or rules underlying decision-making processes on its own.

Machine learning also allows a system's output to improve over time as it gains more experience (Mayes 2017).
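A minimal sketch of learning from examples, using a toy perceptron and an invented dataset rather than any particular library: the decision rule is extracted from labeled data instead of being written by hand, and the trained model then classifies a point it has never seen.

```python
# Toy perceptron: learns a classification rule from examples, not hand-coded logic.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # learning step: adjust weights on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled training data: points above the line x2 > x1 are class 1.
data = [((0, 1), 1), ((1, 2), 1), ((2, 3), 1),
        ((1, 0), 0), ((2, 1), 0), ((3, 2), 0)]
w, b = train_perceptron(data)

# Classify a point the system has never seen.
x1, x2 = 0.5, 2.0
print(1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)  # expected: 1
```

Note the division of labor: the programmer chose the learning rule, the representation, and the training examples, but not the final decision rule.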

A human programmer continues to play a vital role in this learning process, influencing results by making choices like developing the exact learning algorithm, selecting the training data, and choosing other design elements and settings.

Machine learning is powerful once it's up and running because it can adapt and enhance its ability to categorize new data without the need for direct human interaction.

In other words, the quality of the output increases as the system gains experience.

Artificial intelligence is a broad word that refers to the science of making computers intelligent.

AI is a computer system that can collect data and utilize it to make judgments or solve issues, according to scientists.

Another popular scientific definition of AI is "a software program paired with hardware that can receive (or sense) inputs from the world around it, evaluate and analyze those inputs, and create outputs and suggestions without the assistance of a person."

When programmers claim an AI system can learn, they are referring to the program's ability to change its own processes in order to produce more accurate outputs or predictions.
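That sense-evaluate-act definition maps onto the classic agent loop found in AI textbooks. The sketch below is a toy thermostat "agent" with invented numbers, not a definition from the literature.

```python
# Minimal sense-evaluate-act loop: a toy thermostat "agent."
def sense(environment):
    return environment["temperature"]          # receive input from the world

def evaluate(temperature, setpoint=20.0):
    return "heat_on" if temperature < setpoint else "heat_off"   # analyze input

def act(environment, decision):
    # The output changes the world; heating nudges the temperature up.
    if decision == "heat_on":
        environment["temperature"] += 0.5
    else:
        environment["temperature"] -= 0.1

world = {"temperature": 17.0}
for step in range(10):
    decision = evaluate(sense(world))
    act(world, decision)
    print(f"step {step}: {decision}, temperature {world['temperature']:.1f}")
```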

AI-based systems are now being developed and used in practically every industry, from agriculture to space exploration, and in applications ranging from law enforcement to online banking.

The methods and techniques used in computer science are always evolving, extending, and improving.

Other terminology linked to machine learning, such as reinforcement learning and neural networks, are important components of cutting-edge artificial intelligence systems.


Jai Krishna Ponnappan





See also: 

Embodiment, AI and; Superintelligence; Turing, Alan; Turing Test.


Further Reading:


Kelnar, David. 2016. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Mayes, Jason. 2017. Machine Learning 101. https://docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/htmlpresent.

Müller, Vincent C., and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 553–71. New York: Springer.

Russell, Stuart, and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

Samuel, Arthur L. 1988. “Some Studies in Machine Learning Using the Game of Checkers I.” In Computer Games I, 335–65. New York: Springer.



Artificial Intelligence - Who Is Hugo de Garis?

 


Hugo de Garis (1947–) is an expert in genetic algorithms, artificial intelligence, and topological quantum computing.

He is the creator of the concept of evolvable hardware, which uses evolutionary algorithms to produce customized electronics that can alter structural design and performance dynamically and autonomously in response to their surroundings.

De Garis is best known for his 2005 book The Artilect War, in which he describes what he believes will be an unavoidable twenty-first-century global war between mankind and ultraintelligent machines.

In the 1980s, de Garis became fascinated by genetic algorithms, neural networks, and the idea of artificial brains.

In artificial intelligence, genetic algorithms use software to model and apply Darwinian evolutionary principles to search and optimization problems.

The "fittest" candidate simulations of axons, dendrites, signals, and synapses in artificial neural networks were evolved using evolutionary algorithms developed by de Garis.

De Garis developed artificial neural systems that resembled those seen in organic brains.

In the 1990s, his work with a new type of programmable computer chips spawned the subject of computer science known as evolvable hardware.

The use of programmable circuits allowed neural networks to grow and evolve at high rates.

De Garis also began experimenting with cellular automata, which are mathematical models of complex systems that emerge generatively from basic units and rules.

An early version of his cellular automata models of brain-like networks required the coding of around 11,000 fundamental rules.

About 60,000 such rules were encoded in a subsequent version.

De Garis called his neural networks-on-a-chip a Cellular Automata Machine in the 2000s.
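The flavor of cellular automata can be conveyed by the simplest case, a one-dimensional "elementary" automaton. This is a standard textbook construction, vastly smaller than the brain-like systems described above.

```python
# Elementary cellular automaton: each cell's next state is determined
# by a rule applied to the cell and its two neighbors.
RULE = 110  # one of 256 possible rules; Rule 110 is famously complex

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                      # a single "on" cell as the seed
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```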

As the price of chips dropped, de Garis began to hypothesize that the era of "Brain Building on the Cheap" had arrived (de Garis 2005, 45).

He started referring to himself as the "Father of Artificial Intelligence." He claims that in the coming decades, whole artificial brains with billions of neurons will be built using information acquired from molecular-scale robot probes of human brain tissue and the advent of pathbreaking new brain-imaging tools.

Topological quantum computing is another enabling technology that de Garis thinks will accelerate the creation of artificial brains.

He claims that once the physical boundaries of standard silicon chip manufacturing are approached, quantum mechanical phenomena must be harnessed.

Innovations in reversible, heatless computing will also be important for mitigating the harmful heating effects of tightly packed circuits.

De Garis also supports the development of artificial embryology, or "embryofacture": the use of evolutionary engineering and self-assembly methods to mimic the development of fully aware beings from single fertilized eggs.

According to de Garis, owing to rapid breakthroughs in artificial intelligence technology, a conflict over our last invention will be unavoidable before the end of the twenty-first century.

He thinks the conflict will end with a catastrophic human extinction event he refers to as "gigadeath." In The Artilect War, de Garis speculates that the continued Moore's Law doubling of transistors packed on computer chips, accompanied by the development of new technologies such as femtotechnology (the achievement of femtometer-scale structuring of matter), quantum computing, and neuroengineering, will almost certainly lead to gigadeath.

De Garis felt compelled to create The Artilect War as a cautionary tale and as a self-admitted architect of the impending calamity.

De Garis frames his discussion of the impending Artilect War around two antagonistic global political factions: the Cosmists and the Terrans.

The Cosmists will be keenly aware of the immense power of future superintelligent machines, but will regard the work of creating them with such veneration that they will feel a near-messianic enthusiasm in inventing and unleashing them into the world.

Regardless of the hazards to mankind, the Cosmists will strongly encourage the development and nurturing of ever-more sophisticated and powerful artificial minds.

The Terrans, on the other hand, will fight against the creation of artificial minds once they realize they represent a danger to human civilization.

They will feel compelled to fight these artificial intelligences because they constitute an existential danger to humanity.

De Garis dismisses a Cyborgian compromise in which humans and their technological creations blend.

He thinks that robots will grow so powerful and intelligent that only a small percentage of humanity would survive the confrontation.

China and the United States, geopolitical adversaries, will be forced to exploit these technology to develop more complex and autonomous economies, defense systems, and military robots.

The Cosmists will welcome artificial intelligence's dominance of the world and will come to regard the machines as near-gods deserving of worship.

The Terrans, on the other hand, will fight the transfer of global economic, social, and military dominance to our machine overlords.

They will see the new situation as a terrible tragedy that has befallen humanity.

His case for a future war over superintelligent machines has sparked discussion and controversy among scientists and engineers, as well as criticism in the popular science press.

In his 2005 book, de Garis implicates himself as a cause of the approaching conflict and as a hidden Cosmist, prompting some opponents to question his intentions.

De Garis has answered that he feels compelled to issue a warning now because he thinks there will be enough time for the public to understand the full magnitude of the danger and react when they begin to discover substantial intelligence hidden in household equipment.

For those who take his warning seriously, de Garis lays out a variety of possible outcomes.

First, he suggests that the Terrans may be able to defeat Cosmist thinking before a superintelligence takes control, though this is unlikely.

De Garis suggests a second scenario in which artilects quit the earth as irrelevant, leaving human civilisation more or less intact.

In a third possibility, the Cosmists grow so terrified of their own innovations that they abandon them.

Again, de Garis believes this is improbable.

In a fourth possibility, he imagines that all Terrans would transform into Cyborgs.

In a fifth scenario, the Terrans will aggressively hunt down and kill the Cosmists, perhaps even in outer space.

The Cosmists will leave Earth, construct artilects, and ultimately vanish from the solar system to conquer the cosmos in a sixth scenario.

In a seventh possibility, the Cosmists will flee to space and construct artilects that will fight each other until none remain.

In the eighth scenario, the artilects will go to space and be destroyed by an alien super-artilect.

De Garis has been criticized for assuming that The Terminator's nightmarish vision will become reality, rather than considering that superintelligent machines might just as well bring world peace.

De Garis answered that there is no way to ensure that artificial brains operate ethically (humanely).

He also claims that it is impossible to predict whether a superintelligence could bypass an implanted kill switch or reprogram itself to disobey orders intended to instill respect for humans.

Hugo de Garis was born in 1947 in Sydney, Australia.

In 1970, he graduated from Melbourne University with a bachelor's degree in Applied Mathematics and Theoretical Physics.

He joined the global electronics corporation Philips as a software and hardware architect after teaching undergraduate mathematics at Cambridge University for four years.

He worked at locations in the Netherlands and Belgium.

In 1992, De Garis received a doctorate in Artificial Life and Artificial Intelligence from the Université Libre de Bruxelles in Belgium.

"Genetic Programming: GenNets, Artificial Nervous Systems, Artificial Embryos," was the title of his thesis.

De Garis directed the Center for Data Analysis and Stochastic Processes at the Artificial Intelligence and Artificial Life Research Unit at Brussels as a graduate student, where he explored evolutionary engineering, which uses genetic algorithms to develop complex systems.

He also worked as a senior research associate at George Mason University's Artificial Intelligence Center in Northern Virginia, where he worked with machine learning pioneer Ryszard Michalski.

De Garis did a postdoctoral fellowship at Tsukuba's Electrotechnical Lab.

For the next eight years, he directed the Brain Builder Group at the Advanced Telecommunications Research Institute International in Kyoto, Japan, where his team pursued a moon-shot effort to develop a billion-neuron artificial brain.

De Garis returned to Brussels, Belgium, in 2000 to oversee Star Lab's Brain Builder Group, which was working on a rival artificial brain project.

When the dot-com bubble burst in 2001, De Garis' lab went bankrupt while working on a life-size robot cat.

De Garis then moved on to Utah State University as an Associate Professor of Computer Science, where he stayed until 2006.

De Garis was the first to teach advanced research courses on "brain building" and "quantum computing" at Utah State.

He joined Wuhan University's International School of Software in China as Professor of Computer Science and Mathematical Physics in 2006, where he also served as the leader of the Artificial Intelligence group.

De Garis kept working on artificial brains, but he also started looking into topological quantum computing.

De Garis joined the advisory board of Novamente, a commercial business that aims to develop artificial general intelligence, in the same year.

Two years later, Chinese authorities gave his Wuhan University Brain Builder Group significant funding to begin building an artificial brain.

The China-Brain Project was the name given to the initiative.

De Garis relocated to Xiamen University in China in 2008, where he ran the Artificial Brain Lab in the School of Information Science and Technology's Artificial Intelligence Institute until his retirement in 2010.



~ Jai Krishna Ponnappan



See also: 


Superintelligence; Technological Singularity; The Terminator.


Further Reading:


de Garis, Hugo. 1989. “What If AI Succeeds? The Rise of the Twenty-First Century Artilect.” AI Magazine 10, no. 2 (Summer): 17–22.

de Garis, Hugo. 1990. “Genetic Programming: Modular Evolution for Darwin Machines.” In Proceedings of the International Joint Conference on Neural Networks, 194–97. Washington, DC: Lawrence Erlbaum.

de Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. Palm Springs, CA: ETC Publications.

de Garis, Hugo. 2007. “Artificial Brains.” In Artificial General Intelligence: Cognitive Technologies, edited by Ben Goertzel and Cassio Pennachin, 159–74. Berlin: Springer.

Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76, no. 1 (March): 138–66.

Spears, William M., Kenneth A. De Jong, Thomas Bäck, David B. Fogel, and Hugo de Garis. 1993. “An Overview of Evolutionary Computation.” In Machine Learning: ECML-93, Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 667, 442–59. Berlin: Springer.


Artificial Intelligence - Who Is Nick Bostrom?

 




Nick Bostrom (1973–) is an Oxford University philosopher with a multidisciplinary academic background in physics and computational neuroscience.

He is a cofounder of the World Transhumanist Association and a founding director of the Future of Humanity Institute.

Anthropic Bias (2002), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), and Global Catastrophic Risks (2008) are among the works he has authored or edited.

Bostrom was born in the Swedish city of Helsingborg in 1973.

Despite his dislike of formal education, he enjoyed studying.

Science, literature, art, and anthropology were among his favorite interests.

Bostrom earned bachelor's degrees in philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg, as well as master's degrees in philosophy and physics from Stockholm University and computational neuroscience from King's College London.

He earned his PhD in philosophy from the London School of Economics.

Bostrom is a regular consultant or contributor to the European Commission, the United States President's Council on Bioethics, the CIA, and Cambridge University's Centre for the Study of Existential Risk.

Bostrom is well-known for his contributions to a variety of subjects, and he has proposed or written extensively on a number of well-known philosophical arguments and conjectures, including the simulation hypothesis, existential risk, the future of machine intelligence, and transhumanism.

Bostrom's interest in the future of technology and his findings on the mathematics of anthropic bias come together in his so-called Simulation Argument, which consists of three propositions.

The first hypothesis is that almost all civilizations that attain human levels of knowledge eventually perish before achieving technological maturity.

The second hypothesis is that civilizations that do reach technological maturity almost never run "ancestor simulations" of sentient beings, or ultimately abandon them.

The "simulation hypothesis" proposes that mankind is now living in a simulation.

He claims that at least one of the three propositions must be true.

If the first hypothesis is false, some proportion of civilizations at the current level of human society will ultimately acquire technological maturity.

If the second premise is incorrect, certain civilizations may be interested in continuing to perform ancestor simulations.

These civilizations' researchers may be performing massive numbers of these simulations.

In that situation, there would be many times more simulated humans living in simulated worlds than genuine people living in real universes.

As a result, mankind is most likely to exist in one of the simulated worlds.
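The counting step behind this conclusion can be made explicit. In a simplified form of the calculation in Bostrom (2003), with $f_p$ the fraction of human-level civilizations that survive to run ancestor simulations and $N$ the average number of simulated populations each such civilization creates, the fraction of all observers who are simulated is

$$f_{\text{sim}} = \frac{f_p N}{f_p N + 1}.$$

Even modest values drive this fraction toward one: with $f_p = 0.1$ and $N = 1000$, more than 99 percent of all observers with human-type experiences would be simulated.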

Thus, if the first two propositions are false, the third must be true.

It's even feasible, according to Bostrom, for a civilization inside a simulation to conduct its own simulations.

Simulations might run within simulated universes, inside their own simulated worlds, in an endless regress.

It is also feasible that all civilizations will vanish, perhaps as a result of the discovery of a new technology posing an existential threat beyond their control.

Bostrom's argument implies that humanity cannot be certain of the truth of the external world, an idea that can be traced back to Plato's conviction in the existence of universals (the "Forms") and the capacity of human senses to perceive only particular instances of universals.

His thesis also assumes that computers' capacity to simulate worlds will continue to grow in power and sophistication.

Computer games and literature, according to Bostrom, are modern instances of natural human fascination with synthetic reality.

The Simulation Argument is sometimes confused with the narrower claim that mankind lives in a simulation, which is only the third proposition.

Humans, according to Bostrom, have a less than 50% probability of living in some kind of artificial matrix.

He also argues that if mankind were living in one, society would be unlikely to notice "glitches" revealing the simulation's existence, since its creators would have total control over the simulation's operation.

The simulation's creators could, however, choose to inform people that they are living in a simulation.

Existential hazards are those that pose a serious threat to humanity's existence.

According to Bostrom, humans pose a bigger existential threat than natural hazards (e.g., asteroids, earthquakes, and epidemic disease).

He argues that artificial hazards like synthetic biology, molecular nanotechnology, and artificial intelligence are considerably more threatening.

Bostrom divides dangers into three categories: local, global, and existential.

Local dangers might include the theft of a valuable item of art or an automobile accident.

A military dictator's downfall or the explosion of a supervolcano are both potential global threats.

Existential hazards differ from these in scope and intensity: they are cross-generational and long-lasting.

Because of the number of lives that might be saved, he believes that reducing existential risk is the most important thing human beings can do; fighting existential risk is also one of humanity's most neglected undertakings.

He also distinguishes between several types of existential peril.

These include human extinction, defined as the dying out of a species before it reaches technological maturity; permanent stagnation, the plateauing of human technological achievement; flawed realization, humanity's failure to use advanced technology for an ultimately worthwhile purpose; and subsequent ruination, in which a society reaches technological maturity but something then goes wrong.

While mankind has not yet harnessed human ingenuity to create a technology that releases existentially destructive power, Bostrom believes it is possible that it may in the future.

Human civilization has yet to produce a technology so horrific in its implications that mankind would want to collectively forget it.

The objective would be to follow a technological path that is safe, globally coordinated, and sustained over the long term.

To argue for the possibility of machine superintelligence, Bostrom points to the change in brain complexity in the evolution of humans from apes, which took just a few hundred thousand generations.

Artificial systems that use machine learning (that is, algorithms that learn) are no longer constrained to a single area.

He also points out that computers process information at a far faster pace than human neurons.

According to Bostrom, humans will eventually depend on superintelligent machines for their ultimate survival, just as chimpanzees, even in the wild, presently depend on humans.

A superintelligent machine established as a powerful optimizing process with a poorly specified goal has the potential to cause devastation, or possibly an extinction-level catastrophe.

A superintelligence might even anticipate human resistance and subordinate humanity to its programmed purpose.

Bostrom recognizes that there are certain algorithmic techniques used by humans that computer scientists do not yet understand.

As they engage in machine learning, he believes it is critical for artificial intelligences to understand human values.

On this point, Bostrom draws inspiration from AI theorist Eliezer Yudkowsky's concept of "coherent extrapolated volition," part of his program for "friendly AI," which is akin to what is currently expressed in human good will, civil society, and institutions.

A superintelligence should seek to provide pleasure and joy to all of humanity, and it may even make difficult choices that benefit the whole community rather than the individual.

In 2015, Bostrom, along with Stephen Hawking, Elon Musk, Max Tegmark, and many other top AI researchers, published "An Open Letter on Artificial Intelligence" on the Future of Life Institute website, calling for artificial intelligence research that maximizes the benefits to humanity while minimizing "potential pitfalls."

Transhumanism is a philosophy or belief in the technological extension and augmentation of the human species' physical, sensory, and cognitive capacities.

In 1998, Bostrom and colleague philosopher David Pearce founded the World Transhumanist Association, now known as Humanity+, to address some of the societal hurdles to the adoption and use of new transhumanist technologies by people of all socioeconomic strata.

Bostrom has said that he is not interested in defending technology, but rather in using modern technologies to address real-world problems and improve people's lives.

Bostrom is particularly interested in the ethical implications of human enhancement and the long-term consequences of major technological changes in human nature.

He notes that transhumanist ideas appear throughout history and across cultures, as shown by ancient quests such as the Epic of Gilgamesh and historical hunts for the Fountain of Youth and the Elixir of Immortality.

The transhumanist idea, then, may be regarded as fairly ancient, with modern expressions in disciplines such as artificial intelligence and gene editing.

As an activist, Bostrom does not take a stand against the emergence of powerful transhumanist tools.

Instead, he hopes that policymakers will act with foresight and manage the sequencing of technological breakthroughs so as to decrease the danger of their applications and the risk of human extinction.

He believes that everyone should have the chance to become transhuman or posthuman (have capacities beyond human nature and intelligence).

For Bostrom, success would require a worldwide commitment to global security and continued technological progress, as well as widespread access to the benefits of technologies (cryonics, mind uploading, anti-aging drugs, life extension regimens), which hold the most promise for transhumanist change in our lifetime.

Bostrom, however cautious, rejects conventional humility, pointing out that humans have a long history of dealing with potentially catastrophic dangers.

In such matters, he is a strong supporter of "individual choice," as well as of "morphological freedom," the ability to transform or reengineer one's body to fulfill specific wishes and needs.


~ Jai Krishna Ponnappan





See also: 

Superintelligence; Technological Singularity.


Further Reading

Bostrom, Nick. 2003. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53, no. 211: 243–55.

Bostrom, Nick. 2005. “A History of Transhumanist Thought.” Journal of Evolution and Technology 14, no. 1: 1–25.

Bostrom, Nick, ed. 2008. Global Catastrophic Risks. Oxford, UK: Oxford University Press.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.

Savulescu, Julian, and Nick Bostrom, eds. 2009. Human Enhancement. Oxford, UK: Oxford University Press.

Artificial Intelligence - What Are AI Berserkers?

 


Berserkers are intelligent killing robots initially described by science fiction and fantasy novelist Fred Saberhagen (1930–2007) in his 1962 short tale "Without a Thought." Berserkers later emerged as frequent antagonists in many more of Saberhagen's books and novellas.

Berserkers are a sentient, self-replicating race of space-faring robots with the mission of annihilating all life.

They were built as an ultimate doomsday weapon (i.e., one intended more as a threat or deterrent than for actual use) in a long-forgotten interplanetary conflict between two extraterrestrial cultures.

The facts of how the Berserkers were released are lost to time, since they seem to have killed off their creators as well as their foes and have been ravaging the Milky Way galaxy ever since.

They come in a variety of sizes, from human-scale units to heavily armored planetoids (cf. the Death Star), and are equipped with a variety of weaponry capable of sterilizing worlds.

Any sentient species that fights back, such as humans, is a priority target for the Berserkers.

They construct factories in order to duplicate and better themselves, but their basic objective of removing life remains unchanged.

It is uncertain how far they evolve; some individual units end up questioning or even changing their purpose, while others develop strategic brilliance (e.g., Brother Assassin, "Mr. Jester," Rogue Berserker, Shiva in Steel).

While the Berserkers' ultimate purpose of annihilating all life is evident, their tactical activities are uncertain owing to unpredictability in their cores caused by radioactive decay.

Their name is derived from the berserkers of Norse mythology, powerful human warriors who fought in a frenzy.

Berserkers depict a worst-case scenario for artificial intelligence: killing machines that think, learn, and reproduce wildly and without emotion.

They demonstrate the deadly hubris of providing AI with powerful weapons, a destructive purpose, and unrestrained self-replication, allowing it to escape its creators' comprehension and control.

If Berserkers are ever developed and released, they may represent an inexhaustible danger to living creatures over enormous swaths of space and time.

Once unbottled, they are exceedingly difficult to eliminate.

This is owing to their superior defenses and weaponry, as well as their widespread distribution, ability to repair and multiply, autonomous functioning (i.e., without centralized control), capacity to learn and adapt, and limitless patience to lie in wait.

The discovery of Berserkers is so horrifying in Saberhagen's books that human civilizations are terrified of constructing their own AI for fear that it may turn against its creators.

Some astute humans, on the other hand, find a fascinating Berserker counter-weapon: Qwib-Qwibs, self-replicating robots designed to eliminate all Berserkers rather than all life ("Itself Surprised" by Roger Zelazny).

Humans have also used cyborgs as an anti-Berserker strategy, pushing the boundaries of what constitutes biological intelligence (Berserker Man, Berserker Prime, Berserker Kill).

Berserkers also exemplify artificial intelligence's potential for inscrutability and strangeness.

Even while Berserkers can communicate with each other, their huge brains are generally unintelligible to sentient organic lifeforms fleeing or battling them, and they are difficult to study owing to their proclivity to self-destruct if caught.

What can be deduced from their reasoning is that they see life as a plague, a material illness that must be eradicated.

In consequence, the Berserkers lack a thorough understanding of biological intellect and have never been able to adequately duplicate organic life, despite several tries.

They do, however, sometimes enlist human defectors (dubbed "goodlife") to aid the Berserkers in their struggle against "badlife" (i.e., any life that resists extermination).

Nonetheless, Berserkers and humans think in almost irreconcilable ways, hindering attempts to reach a common understanding between life and nonlife.

The seeming contrasts between human and machine intellect are at the heart of most of the conflict in the tales (e.g., artistic appreciation, empathy for animals, a sense of humor, a tendency to make mistakes, the use of acronyms for mnemonics, and even fake encyclopedia entries made to detect pla giarism).

Berserkers have been known to be defeated by non-intelligent living forms such as plants and mantis shrimp ("Pressure" and "Smasher").

Berserkers may be seen as a specific example of the von Neumann probe conceived by mathematician and physicist John von Neumann (1903–1957): self-replicating space-faring robots that might be deployed across the galaxy to investigate it efficiently.

In the Berserker tales, the Turing Test, developed by mathematician and computer scientist Alan Turing (1912–1954), is both explored and upended.

In "Inhuman Error," human castaways compete with a Berserker to persuade a rescue crew that they are human, while in "Without a Thought," a Berserker tries to figure out whether its game opponent is human.

Berserkers also offer an explanation for the Fermi paradox, the idea that if intelligent extraterrestrial civilizations exist, we should have heard from them by now.

It's possible that extraterrestrial civilizations haven't contacted Earth because they were destroyed by Berserker-like robots or are hiding from them.

Berserkers, or anything like them, have featured in a number of science fiction books in addition to Saberhagen's (e.g., works by Greg Bear, Gregory Benford, David Brin, Ann Leckie, and Martha Wells; the Terminator series of movies; and the Mass Effect series of video games).

All of these instances demonstrate how the potential for existential risks posed by AI may be investigated in the lab of fiction.


~ Jai Krishna Ponnappan




See also: 

de Garis, Hugo; Superintelligence; The Terminator.


Further Reading


Saberhagen, Fred. 2015a. Berserkers: The Early Tales. Albuquerque: JSS Literary Productions.

Saberhagen, Fred. 2015b. Berserkers: The Later Tales. Albuquerque: JSS Literary Productions.

Saberhagen’s Worlds of SF and Fantasy. http://www.berserker.com.

The TAJ: Official Fan site of Fred Saberhagen’s Berserker® Universe. http://www.berserkerfan.org.



