
What Is Artificial General Intelligence?



Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the AGI system to solve problems when presented with new tasks. 

In other words, it's AI's capacity to learn similarly to humans.



Strong AI, full AI, and general intelligent action are some names for it. 

The phrase "strong AI," however, is used in only a few academic publications to refer to computer systems that are sentient or conscious. 

These definitions may change since specialists from many disciplines see human intelligence from various angles. 

For instance, computer scientists often characterize human intelligence as the capacity to accomplish objectives. 

On the other hand, general intelligence is defined by psychologists in terms of survival or adaptation.

Weak or narrow AI, in contrast to strong AI, consists of programs created to address a single problem and lacks awareness, since it is not meant to have broad cognitive capacities. 

Autonomous cars and IBM's Watson supercomputer are two examples. 

In computer science, by contrast, AGI is defined as an intelligent system with full or comprehensive knowledge and cognitive computing skills.



As of right now, there are no real AGI systems; they are still the stuff of science fiction. 

The long-term objective of these systems is to perform as well as humans do. 

However, due to AGI's superior capacity to acquire and analyze massive amounts of data at a far faster rate than the human mind, it may be possible for AGI to be more intelligent than humans.



Artificial intelligence (AI) is now capable of carrying out a wide range of functions, including providing tailored suggestions based on prior web searches. 

Additionally, it can recognize various items for autonomous cars to avoid, recognize malignant cells during medical inspections, and serve as the brain of home automation. 

It may also be utilized to find possibly habitable planets, act as an intelligent assistant, take charge of security, and more.



Naturally, AGI would go far beyond such capacities, and some scientists are concerned this may result in a dystopian future.

Elon Musk said that sentient AI would be more hazardous than nuclear war, while Stephen Hawking advised against its creation because it would see humanity as a possible threat and act accordingly.


Despite concerns, most scientists agree that genuine AGI is decades or perhaps centuries away from being developed and must first meet a number of requirements (which are always changing) in order to be achieved. 

These include the capacity for logic, tact, puzzle-solving, and making decisions in the face of ambiguity. 



Additionally, it must be able to plan, learn, and communicate in natural language, as well as represent information, including common sense. 

AGI must also have the capacity to perceive (hear, see, etc.) and the ability to act, such as moving objects and changing location to explore. 



How far along are we in the process of developing artificial general intelligence, and who is involved?

According to a 2020 study from the Global Catastrophic Risk Institute (GCRI), academic institutions, businesses, and various governmental agencies are presently working on 72 identified AGI R&D projects. 



According to the survey, today's projects are often smaller, more geographically diversified, less open-source, more focused on humanitarian aims than academic ones, and more centered in private firms than the projects surveyed in 2017. 

The comparison also reveals a decline in projects with academic affiliations, an increase in projects sponsored by corporations, a rise in projects with a humanitarian emphasis, a decline in programs with ties to the military, and a decline in US-based initiatives.


Governments and militaries play only a small role in AGI R&D, and the military projects that do exist focus solely on fundamental research. 

Recent projects, however, are more varied and can be grouped by three criteria: corporate projects that engage with AGI safety and pursue humanitarian end goals; 

small private companies with a variety of objectives; and academic programs concerned less with AGI safety than with the advancement of knowledge.

One of the most well-known organizations working on AGI is Carnegie Mellon University, which has a project called ACT-R that aims to create a generic cognitive architecture based on the basic cognitive and perceptual functions that support the human mind. 

The project may be thought of as a way of describing how the brain is organized so that individual processing modules can give rise to cognition.


Another pioneering organization testing the limits of AGI is Microsoft Research AI, which has carried out a number of research initiatives, including developing a data set to counter prejudice for machine-learning models. 

The company is also investigating ways to advance ethical AI, establish a responsible AI standard, and develop AI strategies and evaluations within a framework that emphasizes the advancement of mankind.


The person behind the well-known video game franchises Commander Keen and Doom has launched yet another intriguing endeavor. 

Keen Technologies, John Carmack's most recent business, is an AGI development company that has already raised $20 million in funding from former GitHub CEO Nat Friedman and Cue founder Daniel Gross. 

Carmack is one of the AGI optimists who believes that it would ultimately help mankind and result in the development of an AI mind that acts like a human, which might be used as a universal remote worker.


So what does AGI's future hold? 

Many specialists doubt that AGI will ever be developed, and some believe that the desire to create artificial intelligence comparable to humans will eventually fade. 

Others are working to develop it so that everyone will benefit.

For now, the creation of AGI remains at the planning stage, and little progress is anticipated in the coming decades. 

Nevertheless, throughout history, scientists have debated whether developing technologies with the potential to change people's lives will benefit society as a whole or endanger it. 

The same question was weighed before the invention of the automobile, during the development of AC electricity, and when the atomic bomb was still only a theory.


~ Jai Krishna Ponnappan



You may also want to read more about Artificial Intelligence here.

Be sure to refer to the complete & active AI Terms Glossary here.


Artificial Intelligence - Who Is Ben Goertzel (1966–)?


Ben Goertzel is the founder and CEO of SingularityNET, a blockchain AI company. He is also chairman of Novamente LLC; a research professor at Xiamen University's Fujian Key Lab for Brain-Like Intelligent Systems; chief scientist of Mozi Health and of Hanson Robotics in Shenzhen, China; and chair of the OpenCog Foundation, Humanity+, and the Artificial General Intelligence Society conference series. 

Goertzel has long wanted to create a good artificial general intelligence and use it in bioinformatics, finance, gaming, and robotics.

He claims that today's AI, for all its popularity, is already superior to human specialists in a number of domains.

Goertzel divides AI advancement into three stages, each a step toward a global brain (Goertzel 2002, 2), among them the intelligent Internet and the full-fledged Singularity. In 2019, Goertzel presented a lecture titled "Decentralized AI: The Power and the Necessity" at TEDxBerkeley.

He examines artificial intelligence in its present form as well as its future in this discussion.

"The relevance of decentralized control in leading AI to the next stages, the strength of decentralized AI," he emphasizes (Goertzel 2019a).

In the evolution of artificial intelligence, Goertzel distinguishes three types: artificial narrow intelligence, artificial broad intelligence, and artificial superintelligence.

Artificial narrow intelligence refers to machines that can "address extremely specific issues... better than humans" (Goertzel 2019a).

In certain restricted activities, such as chess and Go, this kind of AI has outperformed a human.

Ray Kurzweil, an American futurologist and inventor, coined the phrase "narrow AI." Artificial general intelligence (AGI) refers to intelligent computers that can "generate knowledge" in a variety of fields and have "humanlike autonomy." By 2029, according to Goertzel, this kind of AI will have reached the same level of intellect as humans.

Artificial superintelligence (ASI) is based on both narrow and broad AI, but it can also reprogram itself.



By 2045, he claims, this kind of AI will be smarter than the finest human brains in terms of "scientific innovation, general knowledge, and social abilities" (Goertzel 2019a).

According to Goertzel, Facebook, Google, and a number of colleges and companies are all actively working on AGI.

According to Goertzel, the shift from AI to AGI will occur within the next five to thirty years.

Goertzel is also interested in artificial intelligence-assisted life extension.

He thinks that artificial intelligence's exponential advancement will lead to technologies that extend human life span and health indefinitely.

He predicts that by 2045, a singularity featuring a drastic increase in "human health span" will have occurred (Goertzel 2012).

Vernor Vinge popularized the term "singularity" in his 1993 article "The Coming Technological Singularity." Ray Kurzweil popularized it further in his 2005 book The Singularity Is Near.

The Technological Singularity, according to both writers, is the merging of machine and human intellect as a result of a fast development in new technologies, particularly robots and AI.

The thought of an impending singularity excites Goertzel.

SingularityNET is his major current initiative, which entails the construction of a worldwide network of artificial intelligence researchers interested in developing, sharing, and monetizing AI technology, software, and services.

By developing a decentralized protocol that enables a full stack AI solution, Goertzel has made a significant contribution to this endeavor.

SingularityNET, as a decentralized marketplace, provides a variety of AI technologies, including text generation, AI Opinion, iAnswer, Emotion Recognition, Market Trends, and OpenCog Pattern Miner, along with its own cryptocurrency, the AGI token.

SingularityNET is presently cooperating with Domino's Pizza in Malaysia and Singapore (Khan 2019).



Domino's is interested in leveraging SingularityNET technologies to design a marketing plan, with the goal of providing the finest products and services to its consumers via the use of unique algorithms.

Domino's thinks that by incorporating the AGI ecosystem into their operations, they will be able to provide value and service in the food delivery market.

Goertzel has responded to Stephen Hawking's warning that AI might lead to the extinction of human civilization.

Given the current situation, an artificial superintelligence's mental state will be based on past AI generations, and thus "selling, spying, murdering, and gambling are the key aims and values in the mind of the first super intelligence," according to Goertzel (Goertzel 2019b).

He acknowledges that if humans desire compassionate AI, they must first improve their own treatment of one another.

For four years, Goertzel worked for Hanson Robotics in Hong Kong.

He collaborated with Sophia, Einstein, and Han, three well-known robots.

"Great platforms for experimenting with AI algorithms, including cognitive architectures like OpenCog that aim at human-level AI," he added of the robots (Goertzel 2018).

Goertzel argues that essential human values may be retained for future generations in Sophia-like robot creatures after the Technological Singularity.

Decentralized networks like SingularityNET and OpenCog, according to Goertzel, provide "AIs with human-like values," reducing AI hazards to humanity (Goertzel 2018).

Because human values are complicated in nature, Goertzel feels that encoding them as a rule list is wasteful.

Brain-computer interfacing (BCI) and emotional interfacing are two ways Goertzel offers.

Humans will become "cyborgs," with their brains physically linked to computational-intelligence modules, and the machine components of these cyborgs will be able to read the moral-value-evaluation structures of the human mind directly from their biological components (Goertzel 2018).

Goertzel uses Elon Musk's Neuralink as an example.

Because it entails invasive trials with human brains and a lot of unknowns, Goertzel doubts that this strategy will succeed.

"Emotional and spiritual connections between people and AIs, rather than Ethernet cables or Wifi signals, are used to link human and AI brains," according to the second method (Goertzel 2018).

To practice human values, he proposes that AIs participate in emotional and social connection with humans via facial expression detection and mirroring, eye contact, and voice-based emotion recognition.

To that end, Goertzel collaborated with SingularityNET, Hanson AI, and Lia Inc on the "Loving AI" research project, which aims to help artificial intelligences converse and form intimate connections with humans.

A funny video of actor Will Smith on a date with Sophia the Robot is presently available on the Loving AI website.

Sophia can already make sixty facial expressions and understand human language and emotions, according to the video of the date.

When linked to a network like SingularityNET, humanoid robots like Sophia obtain "ethical insights and breakthroughs... via language," according to Goertzel (Goertzel 2018).

Then, through a shared internet "mindcloud," robots and AIs may share what they've learnt.

Goertzel is also the chair of the Artificial General Intelligence Society's Conference Series on Artificial General Intelligence, which has been conducted yearly since 2008.

The Journal of Artificial General Intelligence is a peer-reviewed open-access academic periodical published by the organization. Goertzel is the editor of the conference proceedings series.




See also: 

General and Narrow AI; Superintelligence; Technological Singularity.


Further Reading:


Goertzel, Ben. 2002. Creating Internet Intelligence: Wild Computing, Distributed Digital Consciousness, and the Emerging Global Brain. New York: Springer.

Goertzel, Ben. 2012. “Radically Expanding the Human Health Span.” TEDxHKUST. https://www.youtube.com/watch?v=IMUbRPvcB54.

Goertzel, Ben. 2017. “Sophia and SingularityNET: Q&A.” H+ Magazine, November 5, 2017. https://hplusmagazine.com/2017/11/05/sophia-singularitynet-qa/.

Goertzel, Ben. 2018. “Emotionally Savvy Robots: Key to a Human-Friendly Singularity.” https://www.hansonrobotics.com/emotionally-savvy-robots-key-to-a-human-friendly-singularity/.

Goertzel, Ben. 2019a. “Decentralized AI: The Power and the Necessity.” TEDxBerkeley, March 9, 2019. https://www.youtube.com/watch?v=r4manxX5U-0.

Goertzel, Ben. 2019b. “Will Artificial Intelligence Kill Us?” July 31, 2019. https://www.youtube.com/watch?v=TDClKEORtko.

Goertzel, Ben, and Stephan Vladimir Bugaj. 2006. The Path to Posthumanity: 21st Century Technology and Its Radical Implications for Mind, Society, and Reality. Bethesda, MD: Academica Press.

Khan, Arif. 2019. “SingularityNET and Domino’s Pizza Announce a Strategic Partnership.” https://blog.singularitynet.io/singularitynet-and-dominos-pizza-announce-a-strategic-partnership-cbbe21f80fc7.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. NASA Lewis Research Center.





Artificial Intelligence - Who Is Nick Bostrom?

 




Nick Bostrom (1973–) is an Oxford University philosopher with a multidisciplinary academic background in physics and computational neuroscience.

He is a cofounder of the World Transhumanist Association and a founding director of the Future of Humanity Institute.

Anthropic Bias (2002), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), and Global Catastrophic Risks (2008) are among the works he has authored or edited.

Bostrom was born in the Swedish city of Helsingborg in 1973.

Despite his dislike of formal education, he enjoyed studying.

Science, literature, art, and anthropology were among his favorite interests.

Bostrom earned bachelor's degrees in philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg, as well as master's degrees in philosophy and physics from Stockholm University and computational neuroscience from King's College London.

The London School of Economics gave him a PhD in philosophy.

Bostrom is a regular consultant or contributor to the European Commission, the United States President's Council on Bioethics, the CIA, and Cambridge University's Centre for the Study of Existential Risk.

Bostrom is well-known for his contributions to a variety of subjects, and he has proposed or written extensively on a number of well-known philosophical arguments and conjectures, including the simulation hypothesis, existential risk, the future of machine intelligence, and transhumanism.

Bostrom's interest in the future of technology and his findings on the mathematics of anthropic bias come together in his so-called "Simulation Argument," which consists of three propositions.

The first hypothesis is that almost all civilizations that attain human levels of knowledge eventually perish before achieving technological maturity.

The second proposition is that almost no civilizations that do reach technological maturity are interested in running "ancestor simulations" of sentient beings.

The "simulation hypothesis" proposes that mankind is now living in a simulation.

He claims that at least one of the three propositions must be true.

If the first hypothesis is false, some proportion of civilizations at the current level of human society will ultimately acquire technological maturity.

If the second premise is incorrect, certain civilizations may be interested in continuing to perform ancestor simulations.

These civilizations' researchers may be performing massive numbers of these simulations.

There would be many times as many simulated humans living in simulated worlds as there would be genuine people living in real universes in that situation.

As a result, mankind is most likely to exist in one of the simulated worlds.
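Bostrom's cited 2003 paper makes this reasoning quantitative. As a sketch: writing f_p for the fraction of human-level civilizations that reach a posthuman stage, f_I for the fraction of those interested in running ancestor simulations, and N̄_I for the average number of simulations such a civilization runs, the fraction of human-type observers who live in simulations is roughly:

```latex
f_{\mathrm{sim}} \;=\; \frac{f_p \, f_I \, \bar{N}_I}{f_p \, f_I \, \bar{N}_I + 1}
```

Unless the product f_p · f_I · N̄_I is very small (which is what the first two propositions assert), f_sim is close to 1, which is the third proposition.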

If the first two propositions are false, the third must be true.

It's even feasible, according to Bostrom, for a civilization inside a simulation to conduct its own simulations.

In an endless regress, simulated civilizations may run simulations of their own, nesting simulated worlds within simulated worlds.

It's also feasible that all civilizations would vanish, maybe as a result of the discovery of a new technology, posing an existential threat beyond human control.

Bostrom's argument implies that humanity is not blind to the truth of the external world, an argument that can be traced back to Plato's conviction in the existence of universals (the "Forms") and the capacity of human senses to see only specific examples of universals.

His thesis also implies that computers' ability to simulate worlds will continue to improve in power and sophistication.

Computer games and literature, according to Bostrom, are modern instances of natural human fascination with synthetic reality.

The Simulation Argument is sometimes confused with the narrower claim that mankind lives in a simulation, which is only the third proposition.

Humans, according to Bostrom, have a less than 50% probability of living in some kind of artificial matrix.

He also argues that if mankind did live in one, society would be unlikely to notice "glitches" revealing the simulation's existence, since the simulators would have total control over the simulation's operation.

The simulators could, on the other hand, choose to inform the inhabitants that they are living in a simulation.

Existential hazards are those that pose a serious threat to humanity's existence.

Humans, rather than natural dangers (e.g., asteroids, earthquakes, and epidemic disease), pose the biggest existential threat, according to Bostrom.

He argues that artificial hazards like synthetic biology, molecular nanotechnology, and artificial intelligence are considerably more threatening.

Bostrom divides dangers into three categories: local, global, and existential.

Local dangers might include the theft of a valuable item of art or an automobile accident.

A military dictator's downfall or the explosion of a supervolcano are both potential global threats.

The extent and intensity of existential hazards vary.

They are cross-generational and long-lasting.

Because of the number of lives that might be saved, he believes that reducing existential risk is the most important thing human beings can do; fighting existential risk is also among humanity's most neglected undertakings.

He also distinguishes between several types of existential peril.

These include human extinction, the dying out of a species before it reaches technological maturity; permanent stagnation, the plateauing of human technological achievement; flawed realization, humanity's failure to use advanced technology for an ultimately worthwhile purpose; and subsequent ruination, in which a society reaches technological maturity but something then goes wrong.

While mankind has not yet harnessed human ingenuity to create a technology that releases existentially destructive power, Bostrom believes it is possible that it may in the future.

Human civilization has yet to produce a technology with such horrific implications that mankind could collectively forget about it.

The objective would be to go on a technical path that is safe, includes global collaboration, and is long-term.

To argue for the prospect of machine superintelligence, Bostrom employs the metaphor of altered brain complexity in the development of humans from apes, which took just a few hundred thousand generations.

Artificial systems that use machine learning (that is, algorithms that learn) are no longer constrained to a single area.

He also points out that computers process information at a far faster pace than human neurons.

According to Bostrom, humans will eventually rely on superintelligent machines in the same way that chimpanzees, even in the wild, now depend on humans for their ultimate survival.

By establishing a powerful optimizing process with a poorly stated purpose, super intelligent computers have the potential to cause devastation, or possibly an extinction-level catastrophe.

A superintelligence might even anticipate human countermeasures as it subverts humanity to its programmed purpose.

Bostrom recognizes that there are certain algorithmic techniques used by humans that computer scientists do not yet understand.

As they engage in machine learning, he believes it is critical for artificial intelligences to understand human values.

On this point, Bostrom is drawing inspiration from artificial intelligence theorist Eliezer Yudkowsky's concept of "coherent extrapolated volition"—also known as "friendly AI"—which is akin to what is currently accessible in human good will, civil society, and institutions.

A superintelligence should seek to provide pleasure and joy to all of humanity, and it may even make difficult choices that benefit the whole community rather than the individual.

In 2015, Bostrom, along with Stephen Hawking, Elon Musk, Max Tegmark, and many other top AI researchers, published "An Open Letter on Artificial Intelligence" on the Future of Life Institute website, calling for artificial intelligence research that maximizes the benefits to humanity while minimizing "potential pitfalls." Transhumanism is a philosophy or belief in the technological extension and augmentation of the human species' physical, sensory, and cognitive capacity.

In 1998, Bostrom and colleague philosopher David Pearce founded the World Transhumanist Association, now known as Humanity+, to address some of the societal hurdles to the adoption and use of new transhumanist technologies by people of all socioeconomic strata.

Bostrom has said that he is not interested in defending technology, but rather in using modern technologies to address real-world problems and improve people's lives.

Bostrom is particularly concerned in the ethical implications of human enhancement and the long-term implications of major technological changes in human nature.

He claims that transhumanist ideas may be found throughout history and throughout cultures, as shown by ancient quests such as the Gilgamesh Epic and historical hunts for the Fountain of Youth and the Elixir of Immortality.

The transhumanist idea, then, may be regarded fairly ancient, with modern representations in disciplines like artificial intelligence and gene editing.

As an activist, Bostrom takes a cautious stand on the emergence of powerful transhumanist tools.

He hopes that policymakers will act with foresight and manage the sequencing of technological breakthroughs so as to reduce the danger of harmful future applications and human extinction.

He believes that everyone should have the chance to become transhuman or posthuman (have capacities beyond human nature and intelligence).

For Bostrom, success would require a worldwide commitment to global security and continued technological progress, as well as widespread access to the benefits of technologies (cryonics, mind uploading, anti-aging drugs, life extension regimens), which hold the most promise for transhumanist change in our lifetime.

Bostrom, however cautious, rejects conventional humility, pointing out that humans have a long history of dealing with potentially catastrophic dangers.

In such things, he is a strong supporter of "individual choice," as well as "morphological freedom," or the ability to transform or reengineer one's body to fulfill specific wishes and requirements.






See also: 

Superintelligence; Technological Singularity.


Further Reading

Bostrom, Nick. 2003. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53, no. 211: 243–55.

Bostrom, Nick. 2005. “A History of Transhumanist Thought.” Journal of Evolution and Technology 14, no. 1: 1–25.

Bostrom, Nick, ed. 2008. Global Catastrophic Risks. Oxford, UK: Oxford University Press.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.

Savulescu, Julian, and Nick Bostrom, eds. 2009. Human Enhancement. Oxford, UK: Oxford University Press.

Artificial Intelligence - History And Timeline

     




    1942

    Science fiction author Isaac Asimov's Three Laws of Robotics appear in the short story "Runaround."


    1943


    Emil Post, a mathematician, discusses "production systems," a notion later adapted for the 1957 General Problem Solver.


    1943


    "A Logical Calculus of the Ideas Immanent in Nervous Activity," a study by Warren McCulloch and Walter Pitts on a computational theory of neural networks, is published.


    1944


    The Teleological Society was founded by John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and Howard Aiken to explore, among other things, nervous system communication and control.


    1945


    In his book How to Solve It, George Polya emphasizes the importance of heuristic thinking in problem solving.


    1946


    In New York City, the first of eleven Macy Conferences on Cybernetics gets underway. "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems" is the focus of the inaugural conference.



    1948


    Norbert Wiener, a mathematician, publishes Cybernetics, or Control and Communication in the Animal and the Machine.


    1949


    In his book The Organization of Behavior, psychologist Donald Hebb provides a theory for brain adaptation in human education: "neurons that fire together wire together."


    1949


    Edmund Berkeley's book Giant Brains, or Machines That Think, is published.


    1950


    Alan Turing's "Computing Machinery and Intelligence" describes the Turing Test, which attributes intelligence to any computer capable of demonstrating intelligent behavior comparable to that of a person.


    1950


    Claude Shannon releases "Programming a Computer for Playing Chess," a groundbreaking technical study that shares search methods and strategies.



    1951


    Marvin Minsky, a math student, and Dean Edmonds, a physics student, create an electronic rat that can learn to navigate a labyrinth using Hebbian theory.


    1951


    John von Neumann, a mathematician, releases "General and Logical Theory of Automata," which reduces the human brain and central nervous system to a computer.


    1951


    For the University of Manchester's Ferranti Mark 1 computer, Christopher Strachey produces a checkers program and Dietrich Prinz creates a chess routine.


    1952


    British cyberneticist W. Ross Ashby writes Design for a Brain: The Origin of Adaptive Behavior, a book on the logical underpinnings of adaptive brain function.


    1952


    At Cornell University Medical College, physiologist James Hardy and physician Martin Lipkin begin developing a McBee punched card system for mechanical diagnosis of patients.


    1954


    Science-Fiction Thinking Machines: Robots, Androids, Computers, edited by Groff Conklin, is a theme-based anthology.


    1954


    The Georgetown-IBM project exemplifies the power of text machine translation.


    1955


    Under the direction of economist Herbert Simon and graduate student Allen Newell, artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon University).


    1955


    For Scientific American, mathematician John Kemeny writes "Man as a Machine."


    1955


    In a Rockefeller Foundation proposal for a Dartmouth College workshop, mathematician John McCarthy coins the phrase "artificial intelligence."



    1956


    Allen Newell, Herbert Simon, and Cliff Shaw create Logic Theorist, an artificial intelligence computer program for proving theorems in Alfred North Whitehead and Bertrand Russell's Principia Mathematica.


    1956


    The "Constitutional Convention of AI," a Dartmouth Summer Research Project, brings together specialists in cybernetics, automata, information theory, operations research, and game theory.


    1956


    On television, electrical engineer Arthur Samuel shows off his checkers-playing AI software.


    1957


    Allen Newell and Herbert Simon create the General Problem Solver AI software.


    1957


    The Rockefeller Medical Electronics Center shows how an RCA Bizmac computer application might help doctors distinguish between blood disorders.


    1958


    The Computer and the Brain, an unfinished work by John von Neumann, is published.


    1958


    At the "Mechanisation of Thought Processes" symposium at the UK's Teddington National Physical Laboratory, Firmin Nash delivers the Group Symbol Associator its first public demonstration.


    1958


    For linear data categorization, Frank Rosenblatt develops the single layer perceptron, which includes a neural network and supervised learning algorithm.


    1958


    The high-level programming language LISP is specified by John McCarthy of the Massachusetts Institute of Technology (MIT) for AI research.


    1959


    "The Reasoning Foundations of Medical Diagnosis," written by physicist Robert Ledley and radiologist Lee Lusted, presents Bayesian inference and symbolic logic to medical difficulties.


    1959


    At MIT, John McCarthy and Marvin Minsky create the Artificial Intelligence Laboratory.


    1960


    James L. Adams, an engineering student, builds the Stanford Cart, a remote-controlled vehicle with a television camera.


    1962


    In his short story "Without a Thought," science fiction and fantasy author Fred Saberhagen introduces the Berserkers, sentient killing machines.


    1963


    John McCarthy founds the Stanford Artificial Intelligence Laboratory (SAIL).


    1963


    Under Project MAC, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense begins funding artificial intelligence research at MIT.


    1964


    Joseph Weizenbaum of MIT creates ELIZA, the first program enabling natural language conversation with a computer (a "chatbot").


    1965


    British statistician I. J. Good publishes "Speculations Concerning the First Ultraintelligent Machine," which predicts an impending intelligence explosion.


    1965


    Philosopher Hubert L. Dreyfus and mathematician Stuart E. Dreyfus publish "Alchemy and Artificial Intelligence," a study critical of artificial intelligence.


    1965


    Joshua Lederberg and Edward Feigenbaum found the Stanford Heuristic Programming Project, which aims to model scientific reasoning and build expert systems.


    1965


    Donald Michie becomes head of Edinburgh University's Department of Machine Intelligence and Perception.


    1965


    Georg Nees organizes the first generative art exhibition, Computer Graphic, in Stuttgart, West Germany.


    1965


    With the expert system DENDRAL, computer scientist Edward Feigenbaum starts a ten-year endeavor to automate the chemical analysis of organic molecules.


    1966


    The Automatic Language Processing Advisory Committee (ALPAC) issues a cautious assessment of the state of machine translation.


    1967


    On a DEC PDP-6 at MIT, Richard Greenblatt finishes work on Mac Hack, a program that plays competitive tournament chess.


    1967


    Waseda University's Ichiro Kato begins work on the WABOT project, which culminates in the unveiling of a full-scale humanoid intelligent robot five years later.


    1968


    Stanley Kubrick's adaptation of Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, about the artificially intelligent computer HAL 9000, is one of the most influential and highly praised films of all time.


    1968


    At MIT, Terry Winograd starts work on SHRDLU, a natural language understanding program.


    1969


    Washington, DC hosts the First International Joint Conference on Artificial Intelligence (IJCAI).


    1972


    Artist Harold Cohen develops AARON, an artificial intelligence computer that generates paintings.


    1972


    Ken Colby describes his efforts using the software program PARRY to simulate paranoia.


    1972


    In What Computers Can't Do, Hubert Dreyfus offers his criticism of artificial intelligence's intellectual basis.


    1972


    Ted Shortliffe, a doctoral student at Stanford University, begins work on MYCIN, an expert system designed to identify bacterial infections and recommend treatments.


    1972


    The UK Science Research Council releases the Lighthill Report on Artificial Intelligence, which highlights AI's technological shortcomings and the challenge of combinatorial explosion.


    1972


    The Assault on Privacy: Computers, Data Banks, and Dossiers, by Arthur Miller, is an early study on the societal implications of computers.


    1972


    University of Pittsburgh physician Jack Myers, medical student Randolph Miller, and computer scientist Harry Pople begin developing INTERNIST-I, an internal medicine expert system.


    1974


    Social scientist Paul Werbos completes his dissertation on backpropagation, an algorithm now widely used to train artificial neural networks for supervised learning.
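The core idea of backpropagation is to push error derivatives backward through the layers of a network. The toy example below (an illustrative sketch, not Werbos's original formulation) trains a tiny 2-2-1 sigmoid network on XOR this way:

```python
import math, random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]         # XOR

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    return h, sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
lr = 0.5
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                               # output-layer delta
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden-layer deltas
        for j in range(2):                                       # gradient-descent step
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh[j] * x[0]
            W1[j][1] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy
after = total_error()
```

The backward pass is just the chain rule applied layer by layer, which is what lets multi-layer networks learn functions, like XOR, that a single-layer perceptron cannot.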


    1974


    Marvin Minsky distributes MIT AI Lab memo 306, "A Framework for Representing Knowledge." The memo discusses the notion of a frame, a "remembered framework" that fits reality by "changing detail as appropriate."


    1975


    The phrase "genetic algorithm" is used by John Holland to explain evolutionary strategies in natural and artificial systems.


    1976


    In Computer Power and Human Reason, computer scientist Joseph Weizenbaum expresses his mixed feelings on artificial intelligence research.


    1978


    At Rutgers University, EXPERT, a generic knowledge representation scheme for constructing expert systems, goes live.


    1978


    Joshua Lederberg, Douglas Brutlag, Edward Feigenbaum, and Bruce Buchanan start the MOLGEN project at Stanford to model DNA structures from segmentation data in molecular genetics research.


    1979


    Raj Reddy, a computer scientist at Carnegie Mellon University, founds the Robotics Institute.


    1979


    A factory worker becomes the first human killed by an industrial robot.


    1979


    Hans Moravec rebuilds the Stanford Cart and equips it with a stereoscopic vision system; over almost two decades the cart has evolved into an autonomous rover.


    1980


    The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.


    1980


    In his Chinese Room argument, philosopher John Searle contends that a computer's simulation of behavior does not establish understanding, intentionality, or consciousness.


    1982


    Blade Runner, a science fiction film based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), is released.


    1982


    Physicist John Hopfield popularizes the associative neural network, initially developed by William Little in 1974.
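The associative-memory behavior of such a network is easy to demonstrate. The sketch below (a toy example, not code from Hopfield's paper) stores one pattern with the Hebbian outer-product rule and recovers it from a corrupted cue:

```python
def train(patterns):
    """Build the weight matrix via the Hebbian outer-product rule."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:                   # W[i][j] += p_i * p_j, no self-connections
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Repeatedly update all units by the sign of their input field."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, 1, 1, 1, -1, -1, -1, -1]
W = train([pattern])
noisy = list(pattern)
noisy[0], noisy[4] = -noisy[0], -noisy[4]   # corrupt the cue: flip two bits
restored = recall(W, noisy)
```

Stored patterns act as attractors: a state that starts near one settles onto it, which is what makes the network an associative (content-addressable) memory.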


    1984


    In Fortune Magazine, Tom Alexander writes "Why Computers Can't Outthink the Experts."


    1984


    At the Microelectronics and Computer Technology Corporation (MCC) in Austin, TX, computer scientist Doug Lenat launches the Cyc project, an effort to build a vast commonsense knowledge base and artificial intelligence architecture.


    1984


    Orion Pictures releases The Terminator, the first film in the franchise, featuring robotic assassins from the future and an AI known as Skynet.


    1986


    Honda establishes a research facility to build humanoid robots that can cohabit and interact with humans.


    1986


    Rodney Brooks, an MIT roboticist, describes the subsumption architecture for behavior-based robots.


    1986


    Marvin Minsky publishes The Society of Mind, which depicts the mind as a collection of collaborating agents.


    1989


    The MIT Artificial Intelligence Lab's Rodney Brooks and Anita Flynn publish "Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System," a paper discussing the possibility of sending small robots on interplanetary exploration missions.


    1993


    The Cog interactive robot project is launched at MIT by Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others.


    1995


    The phrase "generative music" was used by musician Brian Eno to describe systems that create ever-changing music by modifying parameters over time.


    1995


    General Atomics' MQ-1 Predator unmanned aerial vehicle enters US military reconnaissance service.


    1997


    Under standard tournament conditions, IBM's Deep Blue supercomputer defeats reigning chess champion Garry Kasparov.


    1997


    In Nagoya, Japan, the inaugural RoboCup, an international tournament featuring over forty teams of robot soccer players, takes place.


    1997


    Dragon Systems releases NaturallySpeaking, its first commercial speech recognition product.


    1999


    Sony introduces AIBO, a robotic dog, to the general public.


    2000


    The Advanced Step in Innovative Mobility humanoid robot, ASIMO, is unveiled by Honda.


    2001


    At Super Bowl XXXV, Viisage Technology unveils the FaceFINDER automatic face recognition system.


    2002


    The iRobot Corporation, founded by Rodney Brooks, Colin Angle, and Helen Greiner, releases the Roomba autonomous household vacuum cleaner.


    2004


    In the Mojave Desert near Primm, NV, DARPA hosts its inaugural autonomous vehicle Grand Challenge, but none of the cars complete the 150-mile route.


    2005


    Under the direction of neuroscientist Henry Markram, the Swiss Blue Brain Project is founded to simulate the human brain.


    2006


    Netflix announces a $1 million prize for the first team to substantially improve its recommender system's predictions of user ratings.


    2007


    DARPA holds the Urban Challenge, an autonomous vehicle competition that tests merging, passing, parking, and navigating traffic and intersections.


    2009


    Under the leadership of Sebastian Thrun, Google launches its self-driving car project (now known as Waymo) in the San Francisco Bay Area.


    2009


    Fei-Fei Li of Stanford University describes her work on ImageNet, a database of millions of hand-annotated photographs used to train AIs to recognize objects in images.


    2010


    Human manipulation of automated trading algorithms triggers a "flash crash" in the US stock market.


    2011


    Demis Hassabis, Shane Legg, and Mustafa Suleyman found DeepMind in the United Kingdom to teach AIs to play and master classic video games.


    2011


    Watson, IBM's natural language question-answering system, defeats Jeopardy! champions Ken Jennings and Brad Rutter.


    2011


    Apple's voice assistant Siri debuts on the iPhone 4S.


    2011


    Computer scientist Andrew Ng and Google colleagues Jeff Dean and Greg Corrado launch Google Brain, an informal deep learning research collaboration.


    2013


    The European Union launches the Human Brain Project, which aims to better understand how the human brain functions and to replicate its computational capabilities.


    2013


    Human Rights Watch launches the Campaign to Stop Killer Robots.


    2013


    Spike Jonze's science fiction drama Her is released; in the film, a man falls in love with Samantha, his AI virtual assistant.


    2014


    Ian Goodfellow and colleagues at the University of Montreal introduce Generative Adversarial Networks (GANs), a deep neural network technique later used to generate realistic fake photographs of human faces.


    2014


    Eugene Goostman, a chatbot posing as a thirteen-year-old boy, is claimed to have passed a Turing-style test.


    2014


    Physicist Stephen Hawking warns that the development of AI could lead to humanity's extinction.


    2015


    Facebook deploys DeepFace, a deep learning face recognition system, on its social media platform.


    2016


    DeepMind's AlphaGo program defeats 9-dan Go player Lee Sedol in a five-game match.


    2016


    Microsoft releases Tay, an AI chatbot, on Twitter, where users quickly teach it to post abusive and inappropriate messages.


    2017


    The Future of Life Institute hosts the Asilomar Conference on Beneficial AI.


    2017


    Anthony Levandowski, an engineer at a self-driving-car start-up, forms the Way of the Future church, devoted to the creation of a superintelligent AI deity.


    2018


    Google announces Duplex, an AI system that uses natural language to schedule appointments over the phone.


    2018


    The European Union's General Data Protection Regulation (GDPR) takes effect, and the EU publishes "Ethics Guidelines for Trustworthy AI."


    2019


    A lung cancer screening AI developed by Google AI and Northwestern Medicine in Chicago, IL, outperforms specialist radiologists.


    2019


    OpenAI, cofounded by Elon Musk, demonstrates an artificial intelligence text-generation system that produces realistic stories and news articles; the organization initially withholds the full model, judging it too dangerous to release because of its potential to spread fake news.


    2020


    Google AI, in partnership with the University of Waterloo, the "moonshot factory" X, and Volkswagen, announces TensorFlow Quantum, an open-source framework for quantum machine learning.




    ~ Jai Krishna Ponnappan












