
Artificial Intelligence - History And Timeline

     




    1942

    Science fiction author Isaac Asimov introduces the Three Laws of Robotics in the short story "Runaround."


    1943


    Emil Post, a mathematician, describes "production systems," a notion later adopted for the 1957 General Problem Solver.


    1943


    "A Logical Calculus of the Ideas of Immanent in Nervous Activity," a study by Warren McCulloch and Walter Pitts on a computational theory of neural networks, is published.


    1944


    The Teleological Society is founded by John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and Howard Aiken to explore, among other things, communication and control in the nervous system.


    1945


    In his book How to Solve It, George Polya emphasizes the importance of heuristic thinking in problem solving.


    1946


    In New York City, the first of eleven Macy Conferences on Cybernetics gets underway. "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems" is the focus of the inaugural conference.



    1948


    Norbert Wiener, a mathematician, publishes Cybernetics, or Control and Communication in the Animal and the Machine.


    1949


    In his book The Organization of Behavior, psychologist Donald Hebb proposes a theory of brain adaptation during learning: "neurons that fire together wire together."


    1949


    Edmund Berkeley's book Giant Brains, or Machines That Think, is published.


    1950


    Alan Turing's "Computing Machinery and Intelligence" describes the Turing Test, which attributes intelligence to any computer capable of demonstrating intelligent behavior comparable to that of a person.


    1950


    Claude Shannon publishes "Programming a Computer for Playing Chess," a groundbreaking technical study that introduces search methods and strategies.



    1951


    Marvin Minsky, a math student, and Dean Edmonds, a physics student, create an electronic rat that can learn to navigate a labyrinth using Hebbian theory.


    1951


    John von Neumann, a mathematician, publishes "General and Logical Theory of Automata," which likens the human brain and central nervous system to a computing machine.


    1951


    For the University of Manchester's Ferranti Mark 1 computer, Christopher Strachey writes a checkers program and Dietrich Prinz writes a chess program.


    1952


    British cyberneticist W. Ross Ashby publishes Design for a Brain: The Origin of Adaptive Behavior, a book on the logical underpinnings of human brain function.


    1952


    At Cornell University Medical College, physiologist James Hardy and physician Martin Lipkin begin developing a McBee punched card system for mechanical diagnosis of patients.


    1954


    Science-Fiction Thinking Machines: Robots, Androids, Computers, a themed anthology edited by Groff Conklin, is published.


    1954


    The Georgetown-IBM experiment demonstrates machine translation of Russian sentences into English.


    1955


    Under the direction of economist Herbert Simon and graduate student Allen Newell, artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon University).


    1955


    Mathematician John Kemeny writes "Man Viewed as a Machine" for Scientific American.


    1955


    In a Rockefeller Foundation proposal for a Dartmouth College workshop, mathematician John McCarthy coins the phrase "artificial intelligence."



    1956


    Allen Newell, Herbert Simon, and Cliff Shaw create Logic Theorist, an artificial intelligence program that proves theorems in Alfred North Whitehead and Bertrand Russell's Principia Mathematica.


    1956


    The "Constitutional Convention of AI," a Dartmouth Summer Research Project, brings together specialists in cybernetics, automata, information theory, operations research, and game theory.


    1956


    On television, electrical engineer Arthur Samuel demonstrates his checkers-playing AI program.


    1957


    Allen Newell and Herbert Simon create the General Problem Solver AI program.


    1957


    The Rockefeller Medical Electronics Center shows how an RCA Bizmac computer application might help doctors distinguish between blood disorders.


    1958


    The Computer and the Brain, an unfinished work by John von Neumann, is published.


    1958


    At the "Mechanisation of Thought Processes" symposium at the UK's Teddington National Physical Laboratory, Firmin Nash delivers the Group Symbol Associator its first public demonstration.


    1958


    Frank Rosenblatt develops the single-layer perceptron, a neural network with a supervised learning algorithm for classifying linearly separable data.


    1958


    The high-level programming language LISP is specified by John McCarthy of the Massachusetts Institute of Technology (MIT) for AI research.


    1959


    "The Reasoning Foundations of Medical Diagnosis," written by physicist Robert Ledley and radiologist Lee Lusted, presents Bayesian inference and symbolic logic to medical difficulties.


    1959


    At MIT, John McCarthy and Marvin Minsky create the Artificial Intelligence Laboratory.


    1960


    James L. Adams, an engineering student, builds the Stanford Cart, a remote-controlled vehicle with a television camera.


    1962


    In his short story "Without a Thought," science fiction and fantasy author Fred Saberhagen introduces sentient killing machines known as Berserkers.


    1963


    John McCarthy founds the Stanford Artificial Intelligence Laboratory (SAIL).


    1963


    Under Project MAC, the Advanced Research Projects Agency of the United States Department of Defense begins funding artificial intelligence research at MIT.


    1964


    Joseph Weizenbaum of MIT creates ELIZA, the first program allowing natural language conversation with a computer (a "chatbot").


    1965


    British statistician I. J. Good publishes "Speculations Concerning the First Ultraintelligent Machine," which predicts an impending intelligence explosion.


    1965


    Hubert L. Dreyfus and Stuart E. Dreyfus, philosophers and mathematicians, publish "Alchemy and AI," a study critical of artificial intelligence.


    1965


    Joshua Lederberg and Edward Feigenbaum found the Stanford Heuristic Programming Project, which aims to model scientific reasoning and build expert systems.


    1965


    Donald Michie becomes head of Edinburgh University's Department of Machine Intelligence and Perception.


    1965


    Georg Nees organizes the first generative art exhibition, Computer Graphic, in Stuttgart, West Germany.


    1965


    With the expert system DENDRAL, computer scientist Edward Feigenbaum starts a ten-year endeavor to automate the chemical analysis of organic molecules.


    1966


    The Automatic Language Processing Advisory Committee (ALPAC) issues a cautious assessment of the current state of machine translation.


    1967


    On a DEC PDP-6 at MIT, Richard Greenblatt finishes work on Mac Hack, a program that plays competitive tournament chess.


    1967


    Waseda University's Ichiro Kato begins work on the WABOT project, which culminates in the unveiling of a full-scale humanoid intelligent robot five years later.


    1968


    Stanley Kubrick's adaptation of Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, about the artificially intelligent computer HAL 9000, is one of the most influential and highly praised films of all time.


    1968


    At MIT, Terry Winograd starts work on SHRDLU, a natural language understanding program.


    1969


    Washington, DC hosts the First International Joint Conference on Artificial Intelligence (IJCAI).


    1972


    Artist Harold Cohen develops AARON, an artificial intelligence program that generates paintings.


    1972


    Ken Colby describes his efforts using the software program PARRY to simulate paranoia.


    1972


    In What Computers Can't Do, Hubert Dreyfus offers his criticism of artificial intelligence's intellectual basis.


    1972


    Ted Shortliffe, a doctoral student at Stanford University, begins work on the MYCIN expert system, which is designed to identify bacterial infections and recommend treatments.


    1972


    The UK Science Research Council releases the Lighthill Report on Artificial Intelligence, which highlights AI's technological shortcomings and the challenge of combinatorial explosion.


    1972


    The Assault on Privacy: Computers, Data Banks, and Dossiers, by Arthur Miller, is an early study on the societal implications of computers.


    1972


    INTERNIST-I, an internal medicine expert system, begins development under University of Pittsburgh physician Jack Myers, medical student Randolph Miller, and computer scientist Harry Pople.


    1974


    Paul Werbos, a social scientist, completes his dissertation on a backpropagation algorithm now used extensively to train artificial neural networks for supervised learning.


    1974


    Marvin Minsky distributes MIT AI Lab Memo 306, "A Framework for Representing Knowledge." The memo discusses the notion of a frame, a "remembered framework" that fits reality by "changing detail as appropriate."


    1975


    The phrase "genetic algorithm" is used by John Holland to explain evolutionary strategies in natural and artificial systems.


    1976


    In Computer Power and Human Reason, computer scientist Joseph Weizenbaum expresses his mixed feelings on artificial intelligence research.


    1978


    EXPERT, a generic knowledge representation technique for constructing expert systems, goes live at Rutgers University.


    1978


    Joshua Lederberg, Douglas Brutlag, Edward Feigenbaum, and Bruce Buchanan start the MOLGEN project at Stanford to solve DNA structures from segmentation data in molecular genetics research.


    1979


    Raj Reddy, a computer scientist at Carnegie Mellon University, founds the Robotics Institute.


    1979


    Robert Williams, a Ford Motor Company factory worker, becomes the first human killed by an industrial robot.


    1979


    Hans Moravec rebuilds the Stanford Cart and equips it with a stereoscopic vision system; over almost two decades, the Cart has evolved into an autonomous rover.


    1980


    The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.


    1980


    In his Chinese Room argument, philosopher John Searle claims that a computer's simulation of behavior does not establish comprehension, intentionality, or awareness.


    1982


    Blade Runner, a science fiction film based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), is released.


    1982


    Physicist John Hopfield popularizes the associative neural network, first developed by William Little in 1974.


    1984


    In Fortune Magazine, Tom Alexander writes "Why Computers Can't Outthink the Experts."


    1984


    At the Microelectronics and Computer Technology Corporation (MCC) in Austin, TX, computer scientist Doug Lenat launches the Cyc project, which aims to create a vast commonsense knowledge base and artificial intelligence architecture.


    1984


    Orion Pictures releases the first Terminator film, which features robotic assassins from the future and an AI known as Skynet.


    1986


    Honda establishes a research facility to build humanoid robots that can cohabit and interact with humans.


    1986


    Rodney Brooks, an MIT roboticist, describes the subsumption architecture for behavior-based robots.


    1986


    Marvin Minsky publishes The Society of Mind, which depicts the mind as a collection of collaborating agents.


    1989


    The MIT Artificial Intelligence Lab's Rodney Brooks and Anita Flynn publish "Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System," a paper discussing the possibility of sending small robots on interplanetary exploration missions.


    1993


    The Cog interactive robot project is launched at MIT by Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others.


    1995


    The phrase "generative music" was used by musician Brian Eno to describe systems that create ever-changing music by modifying parameters over time.


    1995


    The MQ-1 Predator unmanned aerial vehicle from General Atomics enters US military and reconnaissance service.


    1997


    Under standard tournament conditions, IBM's Deep Blue supercomputer defeats reigning world chess champion Garry Kasparov.


    1997


    In Nagoya, Japan, the inaugural RoboCup, an international tournament featuring over forty teams of robot soccer players, takes place.


    1997


    Dragon Systems releases NaturallySpeaking, its first commercial voice recognition software product.


    1999


    Sony introduces AIBO, a robotic dog, to the general public.


    2000


    The Advanced Step in Innovative Mobility humanoid robot, ASIMO, is unveiled by Honda.


    2001


    At Super Bowl XXXV, Viisage Technology demonstrates the FaceFINDER automatic face-recognition system.


    2002


    The Roomba autonomous household vacuum cleaner is released by iRobot Corporation, founded by Rodney Brooks, Colin Angle, and Helen Greiner.


    2004


    In the Mojave Desert near Primm, NV, DARPA hosts its inaugural autonomous vehicle Grand Challenge, but none of the cars complete the 150-mile route.


    2005


    Under the direction of neuroscientist Henry Markram, the Swiss Blue Brain Project is founded to simulate the human brain.


    2006


    Netflix offers a $1 million prize to the first programming team to substantially improve its recommender system's predictions of user ratings.


    2007


    DARPA holds the Urban Challenge, an autonomous vehicle competition that tests merging, passing, parking, and navigating traffic and intersections.


    2009


    Under the leadership of Sebastian Thrun, Google launches its self-driving car project (now known as Waymo) in the San Francisco Bay Area.


    2009


    Fei-Fei Li of Stanford University describes her work on ImageNet, a library of millions of hand-annotated photographs used to teach AIs to recognize the presence or absence of items visually.


    2010


    Human manipulation of automated trading algorithms causes a "flash crash" in the US stock market.


    2011


    Demis Hassabis, Shane Legg, and Mustafa Suleyman found DeepMind in the United Kingdom to teach AIs how to play, and excel at, classic video games.


    2011


    Watson, IBM's natural language question-answering system, defeats Jeopardy! champions Ken Jennings and Brad Rutter.


    2011


    The iPhone 4S ships with Siri, Apple's virtual assistant.


    2011


    Computer scientist Andrew Ng and Google colleagues Jeff Dean and Greg Corrado launch Google Brain, an informal deep learning research collaboration.


    2013


    The European Union's Human Brain Project aims to better understand how the human brain functions and to duplicate its computing capabilities.


    2013


    Human Rights Watch launches the Campaign to Stop Killer Robots.


    2013


    Spike Jonze's science fiction drama Her is released; in the film, a man falls in love with Samantha, his AI virtual assistant.


    2014


    Ian Goodfellow and colleagues at the University of Montreal introduce Generative Adversarial Networks (GANs), a deep neural network technique useful for generating realistic fake photos of humans.


    2014


    Eugene Goostman, a chatbot posing as a thirteen-year-old boy, is claimed to have passed a Turing-style test.


    2014


    According to physicist Stephen Hawking, the development of AI might lead to humanity's extinction.


    2015


    Facebook deploys DeepFace, a deep learning face recognition system, on its social media platform.


    2016


    In a five-game match, DeepMind's AlphaGo program defeats Lee Sedol, a 9 dan professional Go player.


    2016


    Tay, a Microsoft AI chatbot, is released on Twitter, where users quickly teach it to post abusive and inappropriate messages.


    2017


    The Asilomar Conference on Beneficial AI is hosted by the Future of Life Institute.


    2017


    Anthony Levandowski, an engineer at an AI self-driving start-up, founds the Way of the Future church with the goal of creating a superintelligent robot god.


    2018


    Google announces Duplex, an AI program that uses natural language to schedule appointments over the phone.


    2018


    The European Union's General Data Protection Regulation (GDPR) takes effect, and the EU publishes draft "Ethics Guidelines for Trustworthy AI."


    2019


    A lung cancer screening AI developed by Google AI and Northwestern Medicine in Chicago, IL, surpasses specialized radiologists.


    2019


    OpenAI, cofounded by Elon Musk, unveils GPT-2, an artificial intelligence text generator that produces realistic stories and journalism. Citing its potential to spread fake news, the lab initially withholds the full model as too risky to release.


    2020


    Google AI, in conjunction with the University of Waterloo, the "moonshot factory" X, and Volkswagen, announces TensorFlow Quantum, an open-source framework for quantum machine learning.




    ~ Jai Krishna Ponnappan



    You may also want to read more about Artificial Intelligence here.










    Artificial Intelligence - Who Is Hugo de Garis?

     


    Hugo de Garis (1947–) is an expert in genetic algorithms, artificial intelligence, and topological quantum computing.

    He is the creator of the concept of evolvable hardware, which uses evolutionary algorithms to produce customized electronics that can alter structural design and performance dynamically and autonomously in response to their surroundings.

    De Garis is best known for his 2005 book The Artilect War, in which he describes what he believes will be an unavoidable twenty-first-century global war between mankind and ultraintelligent machines.

    In the 1980s, de Garis became fascinated by genetic algorithms, neural networks, and the idea of artificial brains.

    In artificial intelligence, genetic algorithms use software to model and apply Darwinian evolutionary ideas to search and optimization problems.

    The "fittest" candidate simulations of axons, dendrites, signals, and synapses in artificial neural networks were evolved using evolutionary algorithms developed by de Garis.

    De Garis developed artificial neural systems that resembled those seen in organic brains.

    In the 1990s, his work with a new type of programmable computer chips spawned the subject of computer science known as evolvable hardware.

    The use of programmable circuits allowed neural networks to grow and evolve at high rates.

    De Garis also began experimenting with cellular automata, mathematical models of complex systems that emerge generatively from basic units and rules.

    An early version of his modeling of cellular automata that acted like brain networks required the coding of around 11,000 fundamental rules.

    About 60,000 such rules were encoded in a subsequent version.

    De Garis called his neural networks-on-a-chip a Cellular Automata Machine in the 2000s.
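
    The flavor of such rule-based systems can be suggested with a far smaller example. The sketch below runs Wolfram's elementary Rule 110, a one-dimensional cellular automaton in which global complexity emerges from a handful of local rules; it is a generic illustration, not de Garis's Cellular Automata Machine code.

```python
# One-dimensional cellular automaton (Wolfram's Rule 110): each cell's
# next state is determined by its 3-cell neighborhood. A tiny cousin of
# de Garis's brain-like cellular automata, which used thousands of rules.

RULE = 110  # bit k of RULE gives the next state for neighborhood value k

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31  # a single live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```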

    De Garis started to hypothesize that the period of "Brain Building on the Cheap" had come as the price of chips dropped (de Garis 2005, 45).

    He began referring to himself as the "Father of the Artificial Brain." He claims that in the coming decades, whole artificial brains with billions of neurons will be built using information acquired from molecular-scale robot probes of human brain tissue and the advent of new pathbreaking brain imaging tools.

    Topological quantum computing is another enabling technology that de Garis thinks will accelerate the creation of artificial brains.

    He claims that once the physical boundaries of standard silicon chip manufacturing are approached, quantum mechanical phenomena must be harnessed.

    Inventions in reversible heatless computing will also be significant in dissipating the harmful temperature effects of tightly packed circuits.

    De Garis also supports the development of artificial embryology, often known as "embryofacture," which involves the use of evolutionary engineering and self-assembly methods to mimic the development of fully aware beings from single fertilized eggs.

    According to de Garis, because of rapid breakthroughs in artificial intelligence technology, a conflict over our last innovation will be unavoidable before the end of the twenty-first century.

    He thinks the conflict will end with a catastrophic human extinction event he refers to as "gigadeath." De Garis speculates in his book The Artilect War that the continued Moore's Law doubling of transistors packed on computer chips, accompanied by the development of new technologies such as femtotechnology (the achievement of femtometer-scale structuring of matter), quantum computing, and neuroengineering, will almost certainly lead to gigadeath.

    De Garis felt compelled to create The Artilect War as a cautionary tale and as a self-admitted architect of the impending calamity.

    The Cosmists and the Terrans are two antagonistic worldwide political parties that De Garis uses to frame his discussion of an impending Artilect War.

    The Cosmists will be apprehensive of the immense power of future superintelligent machines, but they will regard their labor in creating them with such veneration that they will experience a near-messianic enthusiasm in inventing and unleashing them into the world.

    Regardless of the hazards to mankind, the Cosmists will strongly encourage the development and nurturing of ever-more sophisticated and powerful artificial minds.

    The Terrans, on the other hand, will fight against the creation of artificial minds once they realize they represent a danger to human civilization.

    They will feel compelled to fight these artificial intelligences because they constitute an existential danger to humanity.

    De Garis dismisses a Cyborgian compromise in which humans and their technological creations blend.

    He thinks that robots will grow so powerful and intelligent that only a small percentage of humanity would survive the confrontation.

    China and the United States, geopolitical adversaries, will be forced to exploit these technology to develop more complex and autonomous economies, defense systems, and military robots.

    The Cosmists will welcome artificial intelligence's dominance in the world and come to see the machines as near-gods deserving of worship.

    The Terrans, on the other hand, will fight the transfer of global economic, social, and military dominance to our machine overlords.

    They will see the new situation as a terrible tragedy that has befallen humanity.

    His case for a future battle over superintelligent robots has sparked discussion and controversy among scientific and engineering specialists, as well as criticism in popular science journals.

    In his 2005 book, de Garis implicates himself as a cause of the approaching conflict and as a hidden Cosmist, prompting some opponents to question his intentions.

    De Garis has answered that he feels compelled to issue a warning now because he thinks there will be enough time for the public to understand the full magnitude of the danger and react when they begin to discover substantial intelligence hidden in household equipment.

    Should his warning be taken seriously, de Garis presents a variety of possible outcomes.

    First, he suggests that the Terrans may be able to defeat Cosmist thinking before a superintelligence takes control, though this is unlikely.

    De Garis suggests a second scenario in which artilects quit the earth as irrelevant, leaving human civilisation more or less intact.

    In a third possibility, the Cosmists grow so terrified of their own innovations that they abandon them.

    Again, de Garis believes this is improbable.

    In a fourth possibility, he imagines that all Terrans would transform into Cyborgs.

    In a fifth scenario, the Terrans will aggressively hunt down and kill the Cosmists, perhaps even in outer space.

    The Cosmists will leave Earth, construct artilects, and ultimately vanish from the solar system to conquer the cosmos in a sixth scenario.

    In a seventh possibility, the Cosmists will flee to space and construct artilects that will fight each other until none remain.

    In the eighth scenario, the artilects will go to space and be destroyed by an alien super-artilect.

    De Garis has been criticized for assuming that The Terminator's nightmarish vision will become reality, rather than contemplating that superintelligent computers might just as well bring world peace.

    De Garis answered that there is no way to ensure that artificial brains operate ethically (humanely).

    He also claims that it is difficult to foretell whether a superintelligence could bypass an implanted death switch or reprogram itself to disobey orders aimed at instilling respect for humans.

    Hugo de Garis was born in 1947 in Sydney, Australia.

    In 1970, he graduated from Melbourne University with a bachelor's degree in Applied Mathematics and Theoretical Physics.

    He joined the global electronics corporation Philips as a software and hardware architect after teaching undergraduate mathematics at Cambridge University for four years.

    He worked at locations in the Netherlands and Belgium.

    In 1992, De Garis received a doctorate in Artificial Life and Artificial Intelligence from the Université Libre de Bruxelles in Belgium.

    "Genetic Programming: GenNets, Artificial Nervous Systems, Artificial Embryos," was the title of his thesis.

    De Garis directed the Center for Data Analysis and Stochastic Processes at the Artificial Intelligence and Artificial Life Research Unit at Brussels as a graduate student, where he explored evolutionary engineering, which uses genetic algorithms to develop complex systems.

    He also worked as a senior research associate at George Mason University's Artificial Intelligence Center in Northern Virginia, where he worked with machine learning pioneer Ryszard Michalski.

    De Garis did a postdoctoral fellowship at Tsukuba's Electrotechnical Lab.

    He directed the Brain Builder Group at the Advanced Telecommunications Research Institute International in Kyoto, Japan, for the following eight years, where his team attempted a moonshot quest to develop a billion-neuron artificial brain.

    De Garis returned to Brussels, Belgium, in 2000 to oversee Star Lab's Brain Builder Group, which was working on a rival artificial brain project.

    When the dot-com bubble burst in 2001, De Garis' lab went bankrupt while working on a life-size robot cat.

    De Garis then moved on to Utah State University as an Associate Professor of Computer Science, where he stayed until 2006.

    De Garis was the first to teach advanced research courses on "brain building" and "quantum computing" at Utah State.

    He joined Wuhan University's International School of Software in China as Professor of Computer Science and Mathematical Physics in 2006, where he also served as the leader of the Artificial Intelligence group.

    De Garis kept working on artificial brains, but he also started looking into topological quantum computing.

    De Garis joined the advisory board of Novamente, a commercial business that aims to develop artificial general intelligence, in the same year.

    Two years later, Chinese authorities gave his Wuhan University Brain Builder Group a significant funding to begin building an artificial brain.

    The China-Brain Project was the name given to the initiative.

    De Garis relocated to Xiamen University in China in 2008, where he ran the Artificial Brain Lab in the School of Information Science and Technology's Artificial Intelligence Institute until his retirement in 2010.



    ~ Jai Krishna Ponnappan

    You may also want to read more about Artificial Intelligence here.


    See also: 


    Superintelligence; Technological Singularity; The Terminator.


    Further Reading:


    de Garis, Hugo. 1989. “What If AI Succeeds? The Rise of the Twenty-First Century Artilect.” AI Magazine 10, no. 2 (Summer): 17–22.

    de Garis, Hugo. 1990. “Genetic Programming: Modular Evolution for Darwin Machines.” In Proceedings of the International Joint Conference on Neural Networks, 194–97. Washington, DC: Lawrence Erlbaum.

    de Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. Palm Springs, CA: ETC Publications.

    de Garis, Hugo. 2007. “Artificial Brains.” In Artificial General Intelligence: Cognitive Technologies, edited by Ben Goertzel and Cassio Pennachin, 159–74. Berlin: Springer.

    Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76, no. 1 (March): 138–66.

    Spears, William M., Kenneth A. De Jong, Thomas Bäck, David B. Fogel, and Hugo de Garis. 1993. “An Overview of Evolutionary Computation.” In Machine Learning: ECML-93, Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 667, 442–59. Berlin: Springer.


    Artificial Intelligence - How Is AI Contributing To Cybernetics?

     





    The study of communication and control in live creatures and machines is known as cybernetics.

    Although the phrase "cybernetic thinking" is no longer generally used in the United States, it pervades computer science, engineering, biology, and the social sciences today.

    Throughout the last half-century, cybernetic connectionist and artificial neural network approaches to information theory and technology have often clashed, and in some cases hybridized, with symbolic AI methods.

    Norbert Wiener (1894–1964), who coined the term "cybernetics" from the Greek word for "steersman," saw the field as a unifying force that brought disparate topics like game theory, operations research, theory of automata, logic, and information theory together and elevated them.

    Wiener argued in Cybernetics, or Control and Communication in the Animal and the Machine (1948) that contemporary science had become too much of a specialist's playground as a consequence of tendencies dating back to the early Enlightenment.

    Wiener envisioned a period when experts might collaborate "not as minions of some great administrative officer, but united by the desire, indeed by the spiritual imperative, to comprehend the area as a whole, and to give one another the power of that knowledge" (Wiener 1948b, 3).

    For Wiener, cybernetics provided researchers with access to many sources of knowledge while maintaining their independence and unbiased detachment.

    Wiener also believed that man and machine should be seen as basically interchangeable epistemologically.

    The biological sciences and medicine, according to Wiener, would remain semi-exact and dependent on observer subjectivity until these common components were discovered.



    In the setting of World War II (1939–1945), Wiener developed his cybernetic theory.

    Operations research and game theory, for example, are mathematics-rich interdisciplinary sciences that had already been used to locate German submarines and to craft the best feasible solutions to complex military decision-making challenges.

    In his role as a military adviser, Wiener threw himself into the job of turning modern cybernetic weapons against the Axis powers.

    To that purpose, Wiener focused on deciphering the feedback processes involved in curvilinear flight prediction and applying these concepts to the development of advanced fire-control systems for shooting down enemy aircraft.
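
    The feedback loop at the heart of such fire-control work can be caricatured in a few lines of code: observe the error between the current aim and the target, apply a correction proportional to that error, and repeat. The gain and target trajectory below are illustrative assumptions, not Wiener's actual equations.

```python
# Minimal closed-loop feedback sketch: an actuator repeatedly corrects
# its aim toward a moving target. Constants are illustrative.

def track(target_positions, gain=0.5):
    aim = 0.0
    for t, target in enumerate(target_positions):
        error = target - aim   # feedback: measure the discrepancy
        aim += gain * error    # correction proportional to the error
        print(f"step {t}: target={target:.2f} aim={aim:.2f} error={error:+.2f}")

# A target moving in a straight line, sampled at unit time intervals.
track([1.0 * t for t in range(10)])
```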

    Claude Shannon, a long-serving Bell Labs researcher, went even further than Wiener in attempting to bring cybernetic ideas to life, most notably in his experiments with Theseus, an electromechanical mouse that used digital relays and a feedback process to learn how to navigate mazes based on previous experience.

    Shannon created a slew of other automata that mimicked the behavior of thinking machines.

    Shannon's mentees, including AI pioneers John McCarthy and Marvin Minsky, followed in his footsteps, treating the human being as a symbolic information processor.

    McCarthy, who is often credited with establishing the field of artificial intelligence, studied the mathematical logic that underpins human thought.



    Minsky opted to research neural network models as a machine imitation of human vision.

    The so-called McCulloch-Pitts neurons were the core components of cybernetic understanding of human cognitive processing.

    Named after Warren McCulloch and Walter Pitts, these neurons were strung together by axons for communication, establishing a cybernated system comprising a crude simulation of the wet science of the brain.

    Pitts admired Wiener's straightforward analogy of cerebral tissue to vacuum tube technology, and saw these switching devices as metallic analogues to organic cognitive components.

    McCulloch-Pitts neurons were believed to be capable of mimicking basic logical processes required for learning and memory.

    Pitts perceived a close binary equivalence between the electrical discharges produced by these devices and the electrochemical nerve impulses generated in the brain in the 1940s.

    In their most basic form, McCulloch-Pitts neurons take inputs of either zero or one and produce an output of zero or one.

    Each input may be categorized as excitatory or inhibitory.
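
    A McCulloch-Pitts unit in this basic form takes only a few lines to express. In the sketch below, any active inhibitory input vetoes firing, and the unit otherwise fires when the count of active excitatory inputs reaches a threshold; the threshold and wiring are illustrative.

```python
# Minimal McCulloch-Pitts neuron: binary inputs, binary output.

def mp_neuron(excitatory, inhibitory, threshold):
    if any(inhibitory):  # one active inhibitory input blocks firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With a threshold of 2, the unit computes logical AND of two inputs.
print(mp_neuron([1, 1], [], threshold=2))   # -> 1
print(mp_neuron([1, 0], [], threshold=2))   # -> 0
print(mp_neuron([1, 1], [1], threshold=2))  # -> 0 (inhibited)
```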

    It was therefore merely a short step from artificial to animal memory for Pitts and Wiener.

    Donald Hebb, a Canadian neuropsychologist, made even more significant contributions to the research of artificial neurons.

    These were detailed in his book The Organization of Behavior, published in 1949.

    Associative learning is explained by Hebbian theory as a process of neural synaptic cells firing and connecting together.
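
    In computational form, Hebb's rule strengthens a connection in proportion to the joint activity of the two cells it links. A minimal sketch, with an invented learning rate:

```python
# Hebbian learning sketch: the weight between two units grows whenever
# they are active together (delta_w = eta * pre * post).

eta = 0.1  # learning rate (illustrative)
w = 0.0    # synaptic weight between a pre- and a postsynaptic unit

activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]  # (pre, post) pairs
for pre, post in activity:
    w += eta * pre * post  # strengthened only when both fire together
    print(f"pre={pre} post={post} -> w={w:.2f}")
```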

    In his study of the artificial "perceptron," a model and algorithm that weighted inputs so that it could be taught to detect particular kinds of patterns, U.S. Navy researcher Frank Rosenblatt expanded the metaphor.

    The eye and cerebral circuitry of the perceptron could approximately discern between pictures of cats and dogs.

    The navy saw the perceptron as "the embryo of an electronic computer that it anticipates to be able to walk, speak, see, write, reproduce itself, and be cognizant of its existence," according to a 1958 interview with Rosenblatt (New York Times, July 8, 1958, 25).
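
    Rosenblatt's learning rule itself is compact: nudge each weight whenever the thresholded output disagrees with the label. The sketch below trains a perceptron on the logical OR function rather than on cat and dog images; the data, learning rate, and epoch count are illustrative, not Rosenblatt's original setup.

```python
# Perceptron sketch: learn a linearly separable rule (logical OR) by
# adjusting weights and bias on every misclassified example.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table
weights, bias, eta = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):
    for x, label in data:
        error = label - predict(weights, bias, x)  # -1, 0, or +1
        weights = [w + eta * error * xi for w, xi in zip(weights, x)]
        bias += eta * error

print(weights, bias, [predict(weights, bias, x) for x, _ in data])
```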

    Wiener, Shannon, McCulloch, Pitts, and other cyberneticists were nourished by the famed Macy Conferences on Cybernetics in the 1940s and 1950s, which attempted to automate human comprehension of the world and the learning process.

    The gatherings also acted as a forum for discussing artificial intelligence issues.

    The divide between the areas developed over time, but it was visible during the 1956 Dartmouth Summer Research Project on Artificial Intelligence.

    Organic cybernetics research was no longer well-defined in American scientific practice by 1970.

    Computing sciences and technology evolved from machine cybernetics.

    Cybernetic theories are now on the periphery of social and hard scientific disciplines such as cognitive science, complex systems, robotics, systems theory, and computer science, but they were critical to the information revolution of the twentieth and twenty-first centuries.

    In recent studies of artificial neural networks and unsupervised machine learning, Hebbian theory has seen a resurgence of attention.

    Cyborgs—beings made up of biological and mechanical pieces that augment normal functions—could be regarded a subset of cybernetics (which was once known as "medical cybernetics" in the 1960s).


    ~ Jai Krishna Ponnappan

    You may also want to read more about Artificial Intelligence here.



    See also: 


    Dartmouth AI Conference; Macy Conferences; Warwick, Kevin.


    Further Reading


    Ashby, W. Ross. 1956. An Introduction to Cybernetics. London: Chapman & Hall.

    Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” Critical Inquiry 21, no. 1 (Autumn): 228–66.

    Kline, Ronald R. 2017. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press.

    Mahoney, Michael S. 1990. “Cybernetics and Information Technology.” In Companion to the History of Modern Science, edited by R. C. Olby, G. N. Cantor, J. R. R. Christie, and M. J. S. Hodge, 537–53. London: Routledge.

    “New Navy Device Learns by Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. New York Times, July 8, 25.

    Wiener, Norbert. 1948a. “Cybernetics.” Scientific American 179, no. 5 (November): 14–19.

    Wiener, Norbert. 1948b. Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.



    Artificial Intelligence - The Human Brain Project

      



    The European Union's major brain research endeavor is the Human Brain Project.

    The project, which encompasses Big Science in terms of the number of participants and its lofty ambitions, is a multidisciplinary coalition of over one hundred partner institutions and includes professionals from the disciplines of computer science, neurology, and robotics.

    The Human Brain Project was launched in 2013 as an EU Future and Emerging Technologies initiative with a budget of over one billion euros.

    The ten-year project aims to make fundamental advancements in neuroscience, medicine, and computer technology.

    Researchers working on the Human Brain Project hope to learn more about how the brain functions and how to imitate its computing skills.

    Human Brain Organization, Systems and Cognitive Neuroscience, Theoretical Neuroscience, and implementations such as the Neuroinformatics Platform, Brain Simulation Platform, Medical Informatics Platform, and Neuromorphic Computing Platform are among the twelve subprojects of the Human Brain Project.

    Six information and communication technology platforms were released by the Human Brain Project in 2016 as the main research infrastructure for ongoing brain research.

    The project's research is focused on the creation of neuromorphic (brain-inspired) computer chips, in addition to infrastructure established for gathering and distributing data from the scientific community.

    BrainScaleS is a subproject that uses analog signals to simulate the neuron and its synapses.

    SpiNNaker (Spiking Neural Network Architecture) is a supercomputer architecture based on numerical models running on special multicore digital devices.

    The Neurorobotic Platform is another ambitious subprogram, where "virtual brain models meet actual or simulated robot bodies" (Fauteux 2019).
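
    The numerical neuron models that platforms like SpiNNaker execute at scale are often variants of the leaky integrate-and-fire equation. The sketch below simulates a single such neuron with invented constants; it illustrates the modeling style, not the project's platform code.

```python
# Leaky integrate-and-fire neuron sketch: membrane potential leaks
# toward rest, integrates input current, and spikes at a threshold.
# All constants are illustrative, not Human Brain Project parameters.

v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
tau, dt = 20.0, 1.0  # membrane time constant and time step (ms)
current = 0.06       # constant input current (arbitrary units)

v = v_rest
for t in range(100):
    v += dt * ((v_rest - v) / tau + current)  # leak toward rest + drive
    if v >= v_thresh:  # threshold crossing -> emit a spike and reset
        print(f"spike at t={t} ms")
        v = v_reset
```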

    The project's modeling of the human brain, which includes 100 billion neurons, each with roughly 7,000 synaptic connections to other neurons, necessitates massive computational resources.

    Computer models of the brain are created on six supercomputers at research sites around Europe.

    These models are currently being used by project researchers to examine illnesses.

    The project has drawn criticism.

    In a 2014 open letter to the European Commission, scientists protested the program's lack of openness and governance, as well as its narrow breadth of study in comparison to its initial goals and objectives.

    The Human Brain Project adopted a new governance structure following a review of its funding procedures, needs, and stated aims.

     



    ~ Jai Krishna Ponnappan


    You may also want to read more about Artificial Intelligence here.



    See also: 


    Blue Brain Project; Cognitive Computing; SyNAPSE.


    Further Reading:


    Amunts, Katrin, Christoph Ebell, Jeff Muller, Martin Telefont, Alois Knoll, and Thomas Lippert. 2016. “The Human Brain Project: Creating a European Research Infrastructure to Decode the Human Brain.” Neuron 92, no. 3 (November): 574–81.

    Fauteux, Christian. 2019. “The Progress and Future of the Human Brain Project.” Scitech Europa, February 15, 2019. https://www.scitecheuropa.eu/human-brain-project/92951/.

    Markram, Henry. 2012. “The Human Brain Project.” Scientific American 306, no. 6 (June): 50–55.

    Markram, Henry, Karlheinz Meier, Thomas Lippert, Sten Grillner, Richard Frackowiak, Stanislas Dehaene, Alois Knoll, Haim Sompolinsky, Kris Verstreken, Javier DeFelipe, Seth Grant, Jean-Pierre Changeux, and Alois Sariam. 2011. “Introducing the Human Brain Project.” Procedia Computer Science 7: 39–42.


