
Artificial Intelligence - History And Timeline

     




    1942

Science fiction author Isaac Asimov's Three Laws of Robotics appear in the short story "Runaround."


    1943


Emil Post, a mathematician, discusses "production systems," a formalism later adapted for the 1957 General Problem Solver.


    1943


"A Logical Calculus of the Ideas Immanent in Nervous Activity," a paper by Warren McCulloch and Walter Pitts on a computational theory of neural networks, is published.


    1944


The Teleological Society is founded by John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and Howard Aiken to explore, among other things, communication and control in the nervous system.


    1945


In his book How to Solve It, George Polya emphasizes the importance of heuristic thinking in problem solving.


    1946


    In New York City, the first of eleven Macy Conferences on Cybernetics gets underway. "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems" is the focus of the inaugural conference.



    1948


    Norbert Wiener, a mathematician, publishes Cybernetics, or Control and Communication in the Animal and the Machine.


    1949


In his book The Organization of Behavior, psychologist Donald Hebb proposes a theory of neural adaptation in human learning: "neurons that fire together wire together."


    1949


    Edmund Berkeley's book Giant Brains, or Machines That Think, is published.


    1950


    Alan Turing's "Computing Machinery and Intelligence" describes the Turing Test, which attributes intelligence to any computer capable of demonstrating intelligent behavior comparable to that of a person.


    1950


Claude Shannon publishes "Programming a Computer for Playing Chess," a groundbreaking technical paper that introduces search methods and strategies.



    1951


Marvin Minsky, a mathematics student, and Dean Edmonds, a physics student, build an electronic rat that learns to navigate a maze using Hebbian theory.


    1951


John von Neumann, a mathematician, publishes "General and Logical Theory of Automata," which likens the human brain and central nervous system to a computer.


    1951


For the University of Manchester's Ferranti Mark 1 computer, Christopher Strachey writes a checkers program and Dietrich Prinz writes a chess routine.


    1952


British cyberneticist W. Ross Ashby publishes Design for a Brain: The Origin of Adaptive Behavior, a book on the logical underpinnings of human brain function.


    1952


    At Cornell University Medical College, physiologist James Hardy and physician Martin Lipkin begin developing a McBee punched card system for mechanical diagnosis of patients.


    1954


    Science-Fiction Thinking Machines: Robots, Androids, Computers, edited by Groff Conklin, is a theme-based anthology.


    1954


The Georgetown-IBM experiment demonstrates the machine translation of Russian text into English.


    1955


Under the direction of economist Herbert Simon and graduate student Allen Newell, artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon University).


    1955


For Scientific American, mathematician John Kemeny writes "Man as a Machine."


    1955


In a Rockefeller Foundation proposal for a meeting at Dartmouth College, mathematician John McCarthy coins the phrase "artificial intelligence."



    1956


Allen Newell, Herbert Simon, and Cliff Shaw create Logic Theorist, an artificial intelligence program for proving theorems in Alfred North Whitehead and Bertrand Russell's Principia Mathematica.


    1956


The Dartmouth Summer Research Project, the "Constitutional Convention of AI," brings together specialists in cybernetics, automata, information theory, operations research, and game theory.


    1956


Electrical engineer Arthur Samuel demonstrates his checkers-playing AI program on television.


    1957


Allen Newell and Herbert Simon create the General Problem Solver AI program.


    1957


The Rockefeller Medical Electronics Center demonstrates how an RCA Bizmac computer application can help doctors distinguish between blood disorders.


    1958


    The Computer and the Brain, an unfinished work by John von Neumann, is published.


    1958


At the "Mechanisation of Thought Processes" symposium at the UK's Teddington National Physical Laboratory, Firmin Nash gives the Group Symbol Associator its first public demonstration.


    1958


Frank Rosenblatt develops the single-layer perceptron, a neural network with a supervised learning algorithm for linear data classification.


    1958


    The high-level programming language LISP is specified by John McCarthy of the Massachusetts Institute of Technology (MIT) for AI research.


    1959


"The Reasoning Foundations of Medical Diagnosis," written by physicist Robert Ledley and radiologist Lee Lusted, applies Bayesian inference and symbolic logic to problems of medical diagnosis.


    1959


    At MIT, John McCarthy and Marvin Minsky create the Artificial Intelligence Laboratory.


    1960


James L. Adams, an engineering student, builds the Stanford Cart, a remote-controlled vehicle with a television camera.


    1962


In his short story "Without a Thought," science fiction and fantasy author Fred Saberhagen introduces sentient killing machines known as Berserkers.


    1963


John McCarthy founds the Stanford Artificial Intelligence Laboratory (SAIL).


    1963


Under Project MAC, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense begins funding artificial intelligence projects at MIT.


    1964


Joseph Weizenbaum of MIT creates ELIZA, the first program enabling natural language conversation with a computer (a "chatbot").


    1965


British statistician I. J. Good's "Speculations Concerning the First Ultraintelligent Machine," which predicts an impending intelligence explosion, is published.


    1965


Hubert L. Dreyfus and Stuart E. Dreyfus, philosophers and mathematicians, publish "Alchemy and Artificial Intelligence," a study critical of artificial intelligence.


    1965


Joshua Lederberg and Edward Feigenbaum found the Stanford Heuristic Programming Project, which aims to model scientific reasoning and build expert systems.


    1965


Donald Michie becomes head of Edinburgh University's Department of Machine Intelligence and Perception.


    1965


    Georg Nees organizes the first generative art exhibition, Computer Graphic, in Stuttgart, West Germany.


    1965


    With the expert system DENDRAL, computer scientist Edward Feigenbaum starts a ten-year endeavor to automate the chemical analysis of organic molecules.


    1966


The Automatic Language Processing Advisory Committee (ALPAC) issues a cautionary assessment of machine translation's present status.


    1967


On a DEC PDP-6 at MIT, Richard Greenblatt finishes work on Mac Hack, a program that plays competitive tournament chess.


    1967


    Waseda University's Ichiro Kato begins work on the WABOT project, which culminates in the unveiling of a full-scale humanoid intelligent robot five years later.


    1968


    Stanley Kubrick's adaptation of Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, about the artificially intelligent computer HAL 9000, is one of the most influential and highly praised films of all time.


    1968


    At MIT, Terry Winograd starts work on SHRDLU, a natural language understanding program.


    1969


    Washington, DC hosts the First International Joint Conference on Artificial Intelligence (IJCAI).


    1972


    Artist Harold Cohen develops AARON, an artificial intelligence computer that generates paintings.


    1972


    Ken Colby describes his efforts using the software program PARRY to simulate paranoia.


    1972


    In What Computers Can't Do, Hubert Dreyfus offers his criticism of artificial intelligence's intellectual basis.


    1972


Ted Shortliffe, a doctoral student at Stanford University, begins work on the MYCIN expert system, designed to diagnose bacterial infections and recommend treatment options.


    1972


    The UK Science Research Council releases the Lighthill Report on Artificial Intelligence, which highlights AI technological shortcomings and the challenges of combinatorial explosion.


    1972


    The Assault on Privacy: Computers, Data Banks, and Dossiers, by Arthur Miller, is an early study on the societal implications of computers.


    1972


University of Pittsburgh physician Jack Myers, medical student Randolph Miller, and computer scientist Harry Pople begin developing INTERNIST-I, an internal medicine expert system.


    1974


Paul Werbos, a social scientist, completes his dissertation on a backpropagation algorithm now widely used to train artificial neural networks for supervised learning.


    1974


Marvin Minsky distributes MIT AI Lab memo 306, "A Framework for Representing Knowledge." The memo discusses the notion of a frame, a "remembered framework" that fits reality by "changing detail as appropriate."


    1975


John Holland uses the phrase "genetic algorithm" to describe evolutionary search strategies in natural and artificial systems.


    1976


    In Computer Power and Human Reason, computer scientist Joseph Weizenbaum expresses his mixed feelings on artificial intelligence research.


    1978


At Rutgers University, EXPERT, a generic knowledge representation scheme for building expert systems, goes live.


    1978


Joshua Lederberg, Douglas Brutlag, Edward Feigenbaum, and Bruce Buchanan start the MOLGEN project at Stanford to model DNA structures from segmentation data in molecular genetics research.


    1979


Raj Reddy, a computer scientist at Carnegie Mellon University, founds the Robotics Institute.


    1979


The first human is killed while working with an industrial robot.


    1979


Hans Moravec rebuilds the Stanford Cart and equips it with a stereoscopic vision system; over almost two decades the Cart has evolved into an autonomous rover.


    1980


The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.


    1980


In his Chinese Room argument, philosopher John Searle claims that a computer's simulation of behavior does not establish comprehension, intentionality, or awareness.


    1982


Blade Runner, a science fiction film based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), is released.


    1982


The associative neural network, initially developed by William Little in 1974, is popularized by physicist John Hopfield.


    1984


    In Fortune Magazine, Tom Alexander writes "Why Computers Can't Outthink the Experts."


    1984


At the Microelectronics and Computer Technology Corporation (MCC) in Austin, TX, computer scientist Doug Lenat launches the Cyc project, which aims to create a vast commonsense knowledge base and artificial intelligence architecture.


    1984


Orion Pictures releases the first Terminator film, which features robotic assassins from the future and an AI known as Skynet.


    1986


    Honda establishes a research facility to build humanoid robots that can cohabit and interact with humans.


    1986


    Rodney Brooks, an MIT roboticist, describes the subsumption architecture for behavior-based robots.


    1986


Marvin Minsky publishes The Society of Mind, which depicts the mind as a collection of collaborating agents.


    1989


    The MIT Artificial Intelligence Lab's Rodney Brooks and Anita Flynn publish "Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System," a paper discussing the possibility of sending small robots on interplanetary exploration missions.


    1993


    The Cog interactive robot project is launched at MIT by Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others.


    1995


Musician Brian Eno coins the phrase "generative music" to describe systems that create ever-changing music by modifying parameters over time.


    1995


The MQ-1 Predator unmanned aerial vehicle from General Atomics enters US military reconnaissance service.


    1997


Under standard tournament conditions, IBM's Deep Blue supercomputer defeats reigning world chess champion Garry Kasparov.


    1997


    In Nagoya, Japan, the inaugural RoboCup, an international tournament featuring over forty teams of robot soccer players, takes place.


    1997


Dragon Systems releases NaturallySpeaking, its first commercial speech recognition software product.


    1999


    Sony introduces AIBO, a robotic dog, to the general public.


    2000


    The Advanced Step in Innovative Mobility humanoid robot, ASIMO, is unveiled by Honda.


    2001


At Super Bowl XXXV, Viisage Technology unveils the FaceFINDER automatic face-recognition system.


    2002


The iRobot Corporation, founded by Rodney Brooks, Colin Angle, and Helen Greiner, releases the Roomba autonomous household vacuum cleaner.


    2004


    In the Mojave Desert near Primm, NV, DARPA hosts its inaugural autonomous vehicle Grand Challenge, but none of the cars complete the 150-mile route.


    2005


Under the direction of neuroscientist Henry Markram, the Swiss Blue Brain Project is formed to simulate the human brain.


    2006


Netflix offers a $1 million prize to the first programming team to build a substantially better recommender system based on prior user ratings.


    2007


DARPA holds the Urban Challenge, an autonomous vehicle competition that tests merging, passing, parking, and navigating traffic and intersections.


    2009


    Under the leadership of Sebastian Thrun, Google launches its self-driving car project (now known as Waymo) in the San Francisco Bay Area.


    2009


Fei-Fei Li of Stanford University describes her work on ImageNet, a database of millions of hand-annotated images used to train AIs to visually recognize the presence or absence of objects.


    2010


Automated trading algorithms, triggered by human manipulation, cause a "flash crash" in the US stock market.


    2011


Demis Hassabis, Shane Legg, and Mustafa Suleyman found DeepMind in the United Kingdom to teach AIs to play and master classic video games.


    2011


Watson, IBM's natural language computer system, defeats Jeopardy! champions Ken Jennings and Brad Rutter.


    2011


Apple's virtual assistant Siri ships with the iPhone 4S.


    2011


Computer scientist Andrew Ng and Google colleagues Jeff Dean and Greg Corrado launch Google Brain, an informal deep learning research collaboration.


    2013


    The European Union's Human Brain Project aims to better understand how the human brain functions and to duplicate its computing capabilities.


    2013


Human Rights Watch launches the Campaign to Stop Killer Robots.


    2013


Spike Jonze's science fiction drama Her is released; in the film, a man falls in love with his AI virtual assistant, Samantha.


    2014


Ian Goodfellow and colleagues at the University of Montreal introduce Generative Adversarial Networks (GANs), deep neural network architectures useful for generating realistic fake photos of human faces.


    2014


Eugene Goostman, a chatbot posing as a thirteen-year-old boy, is claimed to have passed a Turing-style test.


    2014


    According to physicist Stephen Hawking, the development of AI might lead to humanity's extinction.


    2015


Facebook deploys DeepFace, a deep learning facial recognition system, on its social media platform.


    2016


In a five-game match, DeepMind's AlphaGo program defeats Lee Sedol, a 9-dan Go player.


    2016


Microsoft's AI chatbot Tay is released on Twitter, where users teach it to send abusive and inappropriate posts.


    2017


The Future of Life Institute hosts the Asilomar Conference on Beneficial AI.


    2017


Anthony Levandowski, a self-driving car engineer, forms the Way of the Future church with the goal of creating a superintelligent robot god.


    2018


Google announces Duplex, an AI program that uses natural language to schedule appointments over the phone.


    2018


    The General Data Protection Regulation (GDPR) and "Ethics Guidelines for Trustworthy AI" are published by the European Union.


    2019


    A lung cancer screening AI developed by Google AI and Northwestern Medicine in Chicago, IL, surpasses specialized radiologists.


    2019


OpenAI, cofounded by Elon Musk, releases an AI text generator that produces realistic stories and journalism; because of its potential to spread fake news, the full model had previously been judged "too risky" to release.


    2020


Google AI, in conjunction with the University of Waterloo, the "moonshot factory" X, and Volkswagen, announces TensorFlow Quantum, an open-source framework for quantum machine learning.




    ~ Jai Krishna Ponnappan



    You may also want to read more about Artificial Intelligence here.










Artificial Intelligence - Machine Translation

      



    Machine translation is the process of using computer technology to automatically translate human languages.

The US government saw machine translation as a valuable instrument in diplomatic efforts to contain communism in the USSR and the People's Republic of China from the 1950s through the 1970s.

    Machine translation has lately become a tool for marketing goods and services in countries where they would otherwise be unavailable due to language limitations, as well as a standalone offering.

    Machine translation is also one of the litmus tests for artificial intelligence progress.

Research in machine translation has advanced along three broad paradigms.

Rule-based expert systems and statistical approaches to machine translation are the earliest.

Neural machine translation and example-based machine translation (or translation by analogy) are two more contemporary paradigms.

Within computational linguistics, automated language translation is now regarded as an academic specialization.

    While there are multiple possible roots for the present discipline of machine translation, the notion of automated translation as an academic topic derives from a 1947 communication between crystallographer Andrew D. Booth of Birkbeck College (London) and Warren Weaver of the Rockefeller Foundation.

"I have a manuscript in front of me that is written in Russian, but I am going to assume that it is truly written in English and that it has been coded in some bizarre symbols," Weaver wrote in a preserved 1949 memorandum to colleagues. "To access the information contained in the text, all I have to do is peel away the code" (Warren Weaver, as quoted in Arnold et al. 1994, 13).

    Most commercial machine translation systems have a translation engine at their core.

    The user's sentences are parsed several times by translation engines, each time applying algorithmic rules to transform the source sentence into the desired target language.

There are rules for word-based and phrase-based transformation.

The initial objective of a parser is generally to replace words using a two-language dictionary.

    Additional processing rounds of the phrases use comparative grammatical rules that consider sentence structure, verb form, and suffixes.
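As a rough illustration of these successive passes, the sketch below substitutes words from a tiny invented two-language lexicon and then applies a single invented comparative grammar rule; no real engine's dictionary or rules are represented here.

```python
# Hypothetical English-to-French lexicon; real engines use large bilingual dictionaries.
lexicon = {"the": "le", "black": "noir", "cat": "chat", "sleeps": "dort"}

def word_pass(sentence):
    # Round 1: naive word-for-word substitution, keeping unknown words unchanged.
    return [lexicon.get(w, w) for w in sentence.lower().split()]

def adjective_pass(words):
    # Round 2: a comparative grammatical rule -- French adjectives usually
    # follow the noun, so swap each (adjective, noun) pair.
    adjectives = {"noir"}
    out, i = list(words), 0
    while i < len(out) - 1:
        if out[i] in adjectives:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return out

print(" ".join(adjective_pass(word_pass("The black cat sleeps"))))  # le chat noir dort
```

Each pass leaves the sentence closer to the target language, which is why production systems chain many such rounds.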

Translation engines are evaluated for intelligibility and accuracy.

    Machine translation isn't perfect.

    Poor grammar in the source text, lexical and structural differences between languages, ambiguous usage, multiple meanings of words and idioms, and local variations in usage can all lead to "word salad" translations.

    In 1959–60, MIT philosopher, linguist, and mathematician Yehoshua Bar-Hillel issued the harshest early criticism of machine translation of language.

    In principle, according to Bar-Hillel, near-perfect machine translation is impossible.

He used the following passage to demonstrate the issue: Little John was looking for his toy box.

Finally he found it.

The box was in the pen.

John was very happy.

    The word "pen" poses a problem in this statement since it might refer to a child's playpen or a writing ballpoint pen.

    Knowing the difference necessitates a broad understanding of the world, which a computer lacks.

When the National Academy of Sciences' Automatic Language Processing Advisory Committee (ALPAC) released an extremely damaging report about the poor quality and high cost of machine translation in 1966, the initial rounds of US government funding eroded.

    ALPAC came to the conclusion that the country already had an abundant supply of human translators capable of producing significantly greater translations.

    Many machine translation experts slammed the ALPAC report, pointing to machine efficiency in the preparation of first drafts and the successful rollout of a few machine translation systems.

    In the 1960s and 1970s, there were only a few machine translation research groups.

The TAUM group in Canada, the Mel'cuk and Apresian groups in the Soviet Union, the GETA group in France, and the German Saarbrücken SUSY group were among the biggest.

SYSTRAN (System Translation), a private corporation founded by Hungarian-born linguist and computer scientist Peter Toma and financed by government contracts, was the main supplier of automated translation technology and services in the United States.

    In the 1950s, Toma became interested in machine translation while studying at the California Institute of Technology.

    Around 1960, Toma moved to Georgetown University and started collaborating with other machine translation experts.

    The Georgetown machine translation project, as well as SYSTRAN's initial contract with the United States Air Force in 1969, were both devoted to translating Russian into English.

    That same year, at Wright-Patterson Air Force Base, the company's first machine translation programs were tested.

    SYSTRAN software was used by the National Aeronautics and Space Administration (NASA) as a translation help during the Apollo-Soyuz Test Project in 1974 and 1975.

Shortly after, SYSTRAN was awarded a contract by the Commission of the European Communities to offer automated translation services, and the company has worked closely with the European Commission (EC) ever since.

    By the 1990s, the EC had seventeen different machine translation systems focused on different language pairs in use for internal communications.

    In 1992, SYSTRAN began migrating its mainframe software to personal computers.

    SYSTRAN Professional Premium for Windows was launched in 1995 by the company.

    SYSTRAN continues to be the industry leader in machine translation.

Other notable systems include METEO, in use by the Canadian Meteorological Center in Montreal since 1977 to translate weather bulletins from English to French; ALPS, developed by Brigham Young University for Bible translation; SPANAM, the Pan American Health Organization's Spanish-to-English automatic translation system; and METAL, developed at the University of Texas at Austin.

    In the late 1990s, machine translation became more readily accessible to the general public through web browsers.

Babel Fish, a web-based application created by a group of researchers at Digital Equipment Corporation (DEC) using SYSTRAN machine translation technology, was one of the earliest online language translation services.

The technology supported thirty-six translation pairs among thirteen languages.

    Babel Fish began as an AltaVista web search engine tool before being sold to Yahoo! and then Microsoft.

    The majority of online translation services still use rule-based and statistical machine translation.

    Around 2016, SYSTRAN, Microsoft Translator, and Google Translate made the switch to neural machine translation.

Google Translate supports 103 languages.

Neural machine translation uses predictive deep learning algorithms and artificial neural networks, connectionist systems modeled after biological brains.

    Machine translation based on neural networks is achieved in two steps.

    The translation engine models its interpretation in the first phase based on the context of each source word within the entire sentence.

    The artificial neural network then translates the entire word model into the target language in the second phase.

Simply put, the engine predicts the probability of word sequences and combinations inside whole sentences, resulting in a fully integrated translation model.

    The underlying algorithms use statistical models to learn language rules.
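The two phases above can be caricatured numerically. The sketch below uses invented two-dimensional "embeddings" for a three-word English/French vocabulary; it only illustrates the shape of the computation (contextual encoding, then most-probable-word decoding), nothing like the scale or learning of a real neural system.

```python
# Invented toy word vectors; real systems learn high-dimensional embeddings from data.
src_vecs = {"the": (1.0, 0.0), "cat": (0.0, 1.0), "sleeps": (0.7, -0.7)}
tgt_vecs = {"le": (1.0, 0.0), "chat": (0.0, 1.0), "dort": (0.7, -0.7)}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def encode(sentence):
    # Phase 1: model each source word in the context of the entire sentence
    # by blending its own vector with the sentence average.
    vecs = [src_vecs[w] for w in sentence]
    n = len(vecs)
    context = tuple(sum(v[i] for v in vecs) / n for i in range(2))
    return [tuple(0.8 * v[i] + 0.2 * context[i] for i in range(2)) for v in vecs]

def decode(states):
    # Phase 2: for each contextual state, emit the most probable target word
    # (here, the one whose vector scores highest against the state).
    return [max(tgt_vecs, key=lambda w: dot(tgt_vecs[w], s)) for s in states]

print(decode(encode(["the", "cat", "sleeps"])))  # ['le', 'chat', 'dort']
```

In a real system both phases are learned jointly, and decoding scores whole candidate sequences rather than words one at a time.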

    The Harvard SEAS natural language processing group, in collaboration with SYSTRAN, has launched OpenNMT, an open-source neural machine translation system.








    See also: 


    Cheng, Lili; Natural Language Processing and Speech Understanding.



    Further Reading:


Arnold, Doug J., Lorna Balkan, R. Lee Humphreys, Siety Meijer, and Louisa Sadler. 1994. Machine Translation: An Introductory Guide. Manchester and Oxford: NCC Blackwell.

    Bar-Hillel, Yehoshua. 1960. “The Present Status of Automatic Translation of Languages.” Advances in Computers 1: 91–163.

    Garvin, Paul L. 1967. “Machine Translation: Fact or Fancy?” Datamation 13, no. 4: 29–31.

    Hutchins, W. John, ed. 2000. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. Philadelphia: John Benjamins.

    Locke, William Nash, and Andrew Donald Booth, eds. 1955. Machine Translation of Languages. New York: Wiley.

    Yngve, Victor H. 1964. “Implications of Mechanical Translation Research.” Proceedings of the American Philosophical Society 108 (August): 275–81.



    Artificial Intelligence - What Is The Mac Hack IV Program?

     




Mac Hack IV, a 1967 chess program written by Richard Greenblatt, became famous as the first computer chess program to play in a chess tournament and to play adequately against humans, earning a USCF rating of 1,400 to 1,500.

    Greenblatt's software, written in the macro assembly language MIDAS, operated on a DEC PDP-6 computer with a clock speed of 200 kilohertz.

    While a graduate student at MIT's Artificial Intelligence Laboratory, he built the software as part of Project MAC.

"Chess is the drosophila [fruit fly] of artificial intelligence," according to Russian mathematician Alexander Kronrod; chess was the field's chosen experimental organism (quoted in McCarthy 1990, 227).



Creating a champion chess program has been a cherished goal in artificial intelligence since 1950, when Claude Shannon first described chess play as a task for computer programmers.

Chess, and games in general, involve difficult but well-defined problems with well-defined rules and objectives.

    Chess has long been seen as a prime illustration of human-like intelligence.

    Chess is a well-defined example of human decision-making in which movements must be chosen with a specific purpose in mind, with limited knowledge and uncertainty about the result.

The processing capability of computers in the mid-1960s severely restricted the depth to which a chess move and its alternative replies could be analyzed, since the number of possible configurations rises exponentially with each successive reply.

The best human players have been shown to examine a small number of moves in greater depth rather than a large number of moves at shallower depth.

    Greenblatt aimed to recreate the methods used by good players to locate significant game tree branches.

    He created Mac Hack to reduce the number of nodes analyzed while choosing moves by using a minimax search of the game tree along with alpha-beta pruning and heuristic components.

    In this regard, Mac Hack's style of play was more human-like than that of more current chess computers (such as Deep Thought and Deep Blue), which use the sheer force of high processing rates to study tens of millions of branches of the game tree before making moves.
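The move-selection strategy described above, minimax search with alpha-beta pruning cutting off branches an opponent would never permit, can be sketched as follows. The game tree here is a toy example invented for illustration, not a real chess position or any part of Greenblatt's actual code.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    # Leaves carry a static evaluation score; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:  # prune: this line can no longer affect the result
            break
    return best

# Toy game tree: the maximizing player picks a branch, the minimizing
# opponent replies, and so on down to the leaf evaluations.
tree = [[3, 5], [6, [9, 2]], [1, 8]]
print(alphabeta(tree, maximizing=True))  # 6
```

Pruning returns the same value plain minimax would, while skipping nodes that cannot change the outcome; Mac Hack added heuristics on top to focus the search on promising branches.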

    In a contest hosted by MIT mathematician Seymour Papert in 1967, Mac Hack defeated MIT philosopher Hubert Dreyfus and gained substantial renown among artificial intelligence researchers.

    The RAND Corporation published a mimeographed version of Dreyfus's paper, Alchemy and Artificial Intelligence, in 1965, which criticized artificial intelligence researchers' claims and aspirations.

    Dreyfus claimed that no computer could ever acquire intelligence since human reason and intelligence are not totally rule-bound, and hence a computer's data processing could not imitate or represent human cognition.

    In a part of the paper titled "Signs of Stagnation," Dreyfus highlighted attempts to construct chess-playing computers, among his many critiques of AI.

    Mac Hack's victory against Dreyfus was first seen as vindication by the AI community.








    See also: 


    Alchemy and Artificial Intelligence; Deep Blue.



    Further Reading:



    Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

    Greenblatt, Richard D., Donald E. Eastlake III, and Stephen D. Crocker. 1967. “The Greenblatt Chess Program.” In AFIPS ’67: Proceedings of the November 14–16, 1967, Fall Joint Computer Conference, 801–10. Washington, DC: Thomson Book Company.

    Marsland, T. Anthony. 1990. “A Short History of Computer Chess.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 3–7. New York: Springer-Verlag.

    McCarthy, John. 1990. “Chess as the Drosophila of AI.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 227–37. New York: Springer-Verlag.

    McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.




    Artificial Intelligence - Who Is Ray Kurzweil (1948–)?




    Ray Kurzweil is a futurist and inventor from the United States.

    He spent the first half of his career developing the first CCD flat-bed scanner, the first omni-font optical character recognition device, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed, large-vocabulary speech recognition machine.

    He has earned several awards for his contributions to technology, including the Technical Grammy Award in 2015 and the National Medal of Technology.

    Kurzweil is the cofounder and chancellor of Singularity University, as well as the director of engineering at Google, where he leads a team that works on artificial intelligence and natural language processing.

    Singularity University is a non-accredited graduate school founded on the premise of tackling great issues like renewable energy and space travel by gaining a deep understanding of the opportunities presented by technology progress's current acceleration.

    The university, which is headquartered in Silicon Valley, has grown to include one hundred chapters in fifty-five countries, offering seminars, educational programs, and business accelerators.

    While at Google, Kurzweil published the book How to Create a Mind (2012).

    In his Pattern Recognition Theory of Mind, he argues that the neocortex is a hierarchical system of pattern recognizers.

    Kurzweil claims that replicating this design in machines might lead to the creation of artificial superintelligence.

    He believes that by doing so, he will be able to bring natural language comprehension to Google.

    Kurzweil's popularity stems from his work as a futurist.

    Futurists are people who specialize in, or take a strong interest in, the near- to long-term future and related topics.

    They use well-established methodologies like scenario planning to carefully examine forecasts and construct future possibilities.

    Kurzweil is the author of five national best-selling books, including the New York Times best-seller The Singularity Is Near (2005).

    He has an extensive list of forecasts.

    In his first book, The Age of Intelligent Machines (1990), Kurzweil predicted the explosive growth of worldwide internet use in the second half of the decade.

    In his second major book, The Age of Spiritual Machines (where "spiritual" stands for "aware"), published in 1999, he correctly predicted that computers would soon surpass humans at making the best investment decisions.

    In the same book, Kurzweil predicted that computers would one day "appear to have their own free will" and perhaps even have "spiritual experiences" (Kurzweil 1999, 6).

    He also foresaw the barriers between humans and machines dissolving to the point that humans, as combined human-machine hybrids, would essentially live forever.

    Scientists and philosophers have criticized Kurzweil's forecast of a sentient computer, arguing that awareness cannot arise from mere computation.

    Kurzweil tackles the phenomenon of the Technological Singularity in his third book, The Singularity Is Near.

    The famous mathematician John von Neumann is credited with first applying the term singularity to technological progress.

    In a 1950s conversation with his colleague Stanislaw Ulam, von Neumann observed that the ever-accelerating pace of technological progress "appears to be reaching some essential singularity in the history of the race beyond which human activities as we know them could not continue" (Ulam 1958, 5).

    To put it another way, technological development would alter the course of human history.

    Vernor Vinge, a computer scientist, mathematics professor, and science fiction writer, revived the term in 1993 in his essay "The Coming Technological Singularity." In Vinge's essay, technological progress is defined more concretely as growth in processing power.

    Vinge investigates the idea of a self-improving artificial intelligence agent.

    According to this idea, an artificially intelligent agent keeps upgrading itself and advancing technologically at an unfathomable pace, eventually giving rise to a superintelligence: an artificial intelligence that far exceeds all human intelligence.

    In Vinge's apocalyptic vision, robots first become autonomous, then superintelligent, to the point where humans lose control of technology and machines seize control of their own fate.

    Machines will rule the planet because technology is more intelligent than humans.

    According to Vinge, the Singularity is the end of the human age.

    Kurzweil presents an anti-dystopian perspective on the Singularity.

    Kurzweil's core premise is that humans can develop something smarter than themselves; in fact, exponential advances in computer power make the creation of an intelligent machine all but inevitable, to the point that the machine will surpass humans in intelligence.

    Kurzweil believes that machine intelligence and human intellect will converge at this moment.

    The subtitle of The Singularity Is Near is When Humans Transcend Biology, which is no coincidence.

    Kurzweil's overarching vision is based on discontinuity: no lesson from the past, or even the present, can aid humans in determining the way to the future.

    This also explains why new types of education, such as Singularity University, are required.

    Every sentimental look back to history, every memory of the past, renders humans more susceptible to technological change.

    With the arrival of a new superintelligent, almost immortal race, history as a human construct will soon come to an end.

    These near-immortals, the next phase in human development, are known as posthumans.

    Kurzweil believes that posthumanity will be made up of sentient robots rather than people with mechanical bodies.

    He claims that the future should be formed on the assumption that mankind is in the midst of an extraordinary period of technological advancement.

    The Singularity, he believes, would elevate humanity beyond its wildest dreams.

    While Kurzweil claims that artificial intelligence already outpaces human intellect on certain tasks, he acknowledges that the moment of superintelligence, also known as the Technological Singularity, has not yet arrived.

    He believes that those who embrace the new age of human-machine synthesis and dare to go beyond evolution's boundaries will view humanity's future as positive.




    Jai Krishna Ponnappan


    You may also want to read more about Artificial Intelligence here.



    See also: 


    General and Narrow AI; Superintelligence; Technological Singularity.



    Further Reading:




    Kurzweil, Ray. 1990. The Age of Intelligent Machines. Cambridge, MA: MIT Press.

    Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Penguin.

    Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

    Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

    Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.



     
