
Artificial Intelligence - How Is AI Contributing To Cybernetics?

 





The study of communication and control in living creatures and machines is known as cybernetics.

Although the phrase "cybernetic thinking" is no longer generally used in the United States, it pervades computer science, engineering, biology, and the social sciences today.

Throughout the last half-century, cybernetic connectionist and artificial neural network approaches to information theory and technology have often clashed, and in some cases hybridized, with symbolic AI methods.

Norbert Wiener (1894–1964), who coined the term "cybernetics" from the Greek word for "steersman," saw the field as a unifying force that brought disparate topics like game theory, operations research, theory of automata, logic, and information theory together and elevated them.

Wiener argued in Cybernetics, or Control and Communication in the Animal and the Machine (1948), that contemporary science had become too much of a specialist's playground as a consequence of tendencies dating back to the early Enlightenment.

Wiener envisioned a period when experts might collaborate "not as minions of some great administrative officer, but united by the desire, indeed by the spiritual imperative, to comprehend the area as a whole, and to give one another the power of that knowledge" (Wiener 1948b, 3).

For Wiener, cybernetics provided researchers with access to many sources of knowledge while maintaining their independence and unbiased detachment.

Wiener also believed that man and machine should be seen as basically interchangeable epistemologically.

The biological sciences and medicine, according to Wiener, would remain semi-exact and dependent on observer subjectivity until these common components were discovered.



In the setting of World War II (1939–1945), Wiener developed his cybernetic theory.

Operations research and game theory, for example, are mathematics-rich interdisciplinary sciences that had already been used to locate German submarines and to craft the best feasible solutions to complex military decision-making challenges.

Wiener threw himself into the task of deploying modern cybernetic weapons against the Axis countries in his role as a military adviser.

To that purpose, Wiener focused on deciphering the feedback processes involved in curvilinear flight prediction and applying these concepts to the development of advanced fire-control systems for shooting down enemy aircraft.

Claude Shannon, a long-serving Bell Labs researcher, went even further than Wiener in attempting to bring cybernetic ideas to life, most notably in his experiments with Theseus, an electromechanical mouse that used digital relays and a feedback process to learn how to navigate mazes based on previous experience.

Shannon created a slew of other automata that mimicked the behavior of thinking machines.

Shannon's mentees, including AI pioneers John McCarthy and Marvin Minsky, followed in his footsteps and came to treat the mind as a symbolic information processor.

McCarthy, who is often credited with establishing the field of artificial intelligence, studied the mathematical logic that underpins human thought.



Minsky opted to research neural network models as a machine imitation of human vision.

The so-called McCulloch-Pitts neurons were the core components of cybernetic understanding of human cognitive processing.

Named after Warren McCulloch and Walter Pitts, these neurons were strung together by axons for communication, forming a networked system that crudely simulated the wet science of the brain.

Pitts admired Wiener's straightforward analogy of cerebral tissue to vacuum tube technology, and saw these switching devices as metallic analogues to organic cognitive components.

McCulloch-Pitts neurons were believed to be capable of mimicking basic logical processes required for learning and memory.

Pitts perceived a close binary equivalence between the electrical discharges produced by these devices and the electrochemical nerve impulses generated in the brain in the 1940s.

In their most basic form, McCulloch-Pitts neurons take inputs that are each either a zero or a one, and their output is likewise a zero or a one.

Each input may be categorized as excitatory or inhibitory.
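The thresholded behavior described above can be sketched in a few lines. The function name and threshold convention here are illustrative, but the logic (any active inhibitory input vetoes firing; otherwise the unit fires when enough excitatory inputs are active) follows the McCulloch-Pitts model:

```python
def mcculloch_pitts(excitatory, inhibitory, threshold):
    """Fire (1) only if no inhibitory input is active and the count of
    active excitatory inputs reaches the threshold."""
    if any(inhibitory):  # any active inhibitory input vetoes firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With a threshold of 2, two excitatory inputs implement logical AND
print(mcculloch_pitts([1, 1], [], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [], threshold=2))  # 0
print(mcculloch_pitts([1, 1], [1], threshold=2))  # 0 (inhibited)
```

Chaining such units is how McCulloch and Pitts argued that networks of simple binary neurons could compute logical functions.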

It was therefore merely a short step from artificial to animal memory for Pitts and Wiener.

Donald Hebb, a Canadian neuropsychologist, made even more significant contributions to the research of artificial neurons.

These were detailed in his book The Organization of Behavior, published in 1949.

Associative learning is explained by Hebbian theory as a process of neural synaptic cells firing and connecting together.
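Hebb's rule is commonly summarized as "cells that fire together wire together." A minimal sketch, assuming the standard modern formulation (weight change proportional to the product of pre- and postsynaptic activity); the variable names and learning rate are illustrative:

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Hebb's rule: the weight change is proportional to the product of
    presynaptic activity x and postsynaptic activity y."""
    return w + lr * y * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])  # inputs 0 and 2 are active
for _ in range(5):
    w = hebbian_update(w, x, y=1.0)  # postsynaptic cell fires each step
print(w)  # weights grow only where pre- and postsynaptic activity coincide
```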

U.S. Navy researcher Frank Rosenblatt expanded the metaphor in his study of the artificial "perceptron," a model and algorithm that weighted inputs so that it could be trained to detect particular kinds of patterns.

The perceptron's eye and cerebral circuitry could roughly distinguish between pictures of cats and dogs.

The navy saw the perceptron as "the embryo of an electronic computer that it anticipates to be able to walk, speak, see, write, reproduce itself, and be cognizant of its existence," according to a 1958 interview with Rosenblatt (New York Times, July 8, 1958, 25).

Wiener, Shannon, McCulloch, Pitts, and other cyberneticists were nourished by the famed Macy Conferences on Cybernetics in the 1940s and 1950s, which attempted to automate human comprehension of the world and the learning process.

The gatherings also acted as a forum for discussing artificial intelligence issues.

The divide between the areas developed over time, but it was visible during the 1956 Dartmouth Summer Research Project on Artificial Intelligence.

Organic cybernetics research was no longer well-defined in American scientific practice by 1970.

Computing sciences and technology evolved from machine cybernetics.

Cybernetic theories are now on the periphery of social and hard scientific disciplines such as cognitive science, complex systems, robotics, systems theory, and computer science, but they were critical to the information revolution of the twentieth and twenty-first centuries.

In recent studies of artificial neural networks and unsupervised machine learning, Hebbian theory has seen a resurgence of attention.

Cyborgs—beings made up of biological and mechanical pieces that augment normal functions—could be regarded as a subset of cybernetics (which was once known as "medical cybernetics" in the 1960s).


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Macy Conferences; Warwick, Kevin.


Further Reading


Ashby, W. Ross. 1956. An Introduction to Cybernetics. London: Chapman & Hall.

Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” Critical Inquiry 21, no. 1 (Autumn): 228–66.

Kline, Ronald R. 2017. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press.

Mahoney, Michael S. 1990. “Cybernetics and Information Technology.” In Companion to the History of Modern Science, edited by R. C. Olby, G. N. Cantor, J. R. R. Christie, and M. J. S. Hodge, 537–53. London: Routledge.

“New Navy Device Learns by Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. New York Times, July 8, 25.

Wiener, Norbert. 1948a. “Cybernetics.” Scientific American 179, no. 5 (November): 14–19.

Wiener, Norbert. 1948b. Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.



Artificial Intelligence - Who Is Heather Knight?




Heather Knight is a robotics and artificial intelligence specialist best recognized for her work in the entertainment industry.

Her Collaborative Humans and Robots: Interaction, Sociability, Machine Learning, and Art (CHARISMA) Research Lab at Oregon State University aims to apply performing arts techniques to robots.

Knight identifies herself as a social roboticist, a person who develops non-anthropomorphic—and sometimes nonverbal—machines that interact with people.

She makes robots that act in ways that are modeled after human interpersonal communication.

These behaviors include speaking styles, greeting movements, open attitudes, and a variety of other context indicators that assist humans in establishing rapport with robots in ordinary life.

Knight examines social and political policies relating to robotics in the CHARISMA Lab, where she works with social robots and so-called charismatic machines.

The Marilyn Monrobot interactive robot theatre company was founded by Knight.

The Robot Film Festival provides a venue for roboticists to demonstrate their latest inventions in a live setting, as well as films that are relevant to the evolving state of the art in robotics and robot-human interaction.

The Marilyn Monrobot firm arose from Knight's involvement with the Syyn Labs creative collective and from her observation of work on performance robots by Guy Hoffman, Director of the MIT Media Innovation Lab.

Knight's production firm specializes in robot comedy.

Knight claims that theatrical spaces are ideal for social robotics research because they not only encourage playfulness—requiring robot actors to express themselves and interact—but also include creative constraints that robots thrive in, such as a fixed stage, trial-and-error learning, and repeat performances (with manipulated variations).

The use of robots in entertainment settings, according to Knight, is beneficial because it enriches human culture, imagination, and creativity.

At the TEDWomen conference in 2010, Knight debuted Data, a stand-up comedy robot.

Data is a Nao robot created by Aldebaran Robotics (now part of SoftBank Group).

Data performs a comedy performance (with roughly 200 pre-programmed jokes) while gathering input from the audience and fine-tuning its act in real time.

The robot was created at Carnegie Mellon University by Scott Satkin and Varun Ramakrisha.

Knight is presently collaborating with Ginger the Robot on a comedic project.

The development of algorithms for artificial social intelligence is also fueled by robot entertainment.

In other words, art is utilized to motivate the development of new technologies.

To evaluate audience responses, Data and Ginger use a microphone and a machine learning system to recognize the sounds audiences make (laughter, chatter, clapping, etc.).

After each joke, the audience is given green and red cards to hold up.

Green cards indicate to the robots that the audience enjoys the joke.

Red cards are given out when jokes fall flat.
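Knight's published descriptions do not specify the algorithm behind the act-tuning, but the green-card/red-card feedback loop resembles a multi-armed bandit problem. Here is one hypothetical sketch of such a loop; the function names, the epsilon-greedy strategy, and the reward definition are all assumptions, not Knight's implementation:

```python
import random

def pick_joke(scores, counts, eps=0.1):
    """Epsilon-greedy selection: usually tell the best-rated joke so far,
    occasionally explore a random one."""
    if random.random() < eps or not any(counts):
        return random.randrange(len(scores))
    return max(range(len(scores)), key=lambda i: scores[i] / max(counts[i], 1))

def record_feedback(scores, counts, joke, green_cards, red_cards):
    """Score each telling by the share of green cards in the audience response."""
    counts[joke] += 1
    scores[joke] += green_cards / max(green_cards + red_cards, 1)

scores, counts = [0.0, 0.0], [0, 0]
record_feedback(scores, counts, 0, green_cards=8, red_cards=2)  # joke 0 lands
record_feedback(scores, counts, 1, green_cards=1, red_cards=9)  # joke 1 bombs
print(pick_joke(scores, counts, eps=0.0))  # the better-rated joke wins
```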

Knight has discovered that excellent robot humor doesn't have to disguise the fact that it's about a robot.

Rather, Data makes people laugh by drawing attention to its machine-specific issues and making self-deprecating remarks about its limits.

In order to create expressive, captivating robots, Knight has found improvisational acting and dancing skills to be quite useful.

In the process, she has revised the original Robotic Paradigm of Sense-Plan-Act, preferring Sensing-Character-Enactment, which is closer to the procedure used in theatrical performance.
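Knight has not published Sensing-Character-Enactment as code; the toy class below is purely illustrative of the idea that a persistent character, rather than a neutral planning step, mediates between sensing and action. All names and behaviors are invented for illustration:

```python
class StageRobot:
    """Toy illustration: a persistent 'character' sits between sensing
    and enactment, in place of a neutral plan step."""
    def __init__(self, character):
        self.character = character

    def sense(self, world):
        return world["audience_mood"]

    def enact(self, mood):
        # The same sensed mood yields different behavior per character.
        if self.character == "shy":
            return "retreat" if mood == "rowdy" else "tentative_wave"
        return "big_gesture"

bot = StageRobot("shy")
print(bot.enact(bot.sense({"audience_mood": "rowdy"})))  # retreat
```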

Knight is presently experimenting with ChairBots, which are hybrid robots made by gluing IKEA wooden chairs to Neato Botvacs (a brand of intelligent robotic vacuum cleaner).

The ChairBots are being tested in public places to see how a basic robot might persuade people to get out of the way using just rudimentary gestures as a mode of communication.

They've also been used to persuade prospective café customers to come in, locate a seat, and settle down.

Knight collaborated on the synthetic organic robot art piece Public Anemone for the SIGGRAPH computer graphics conference while pursuing degrees at the MIT Media Lab with Personal Robots group head Professor Cynthia Breazeal.

The installation consisted of a fiberglass cave filled with glowing creatures that moved and responded to music and people.

The cave's centerpiece robot, also known as "Public Anemone," swayed and interacted with visitors, bathed in a waterfall, watered a plant, and interacted with other cave attractions.

Knight collaborated with animatronics designer Dan Stiehl to create capacitive sensor-equipped artificial tube worms.

The tubeworm's fiberoptic tentacles drew into their tubes and changed color when a human observer reached into the cave, as though prompted by protective impulses.

The team behind Public Anemone described the project as "a step toward fully embodied robot theatrical performance" and "an example of intelligent staging."

Knight also helped with the mechanical design of the Smithsonian/Cooper-Hewitt Design Museum's "Cyberflora" kinetic robot flower garden display in 2003.

Her master's thesis at MIT focused on the Sensate Bear, a huggable robot teddy bear with full-body capacitive touch sensors that she used to investigate real-time algorithms incorporating social touch and nonverbal communication.

In 2016, Knight received her PhD from Carnegie Mellon University.

Her dissertation focused on expressive motion in robots with a reduced degree of freedom.

Humans do not require robots to closely resemble humans in appearance or behavior to be treated as close associates, according to Knight's research.

Humans, on the other hand, are quick to anthropomorphize robots and to attribute autonomy to them.

Indeed, she claims, when robots become more human-like in appearance, people may feel uneasy or anticipate a far higher level of humanlike conduct.

Professor Matt Mason of the School of Computer Science and Robotics Institute advised Knight.

She was formerly a robotic artist in residence at X, the research lab of Google's parent company Alphabet.

Knight has previously worked with Aldebaran Robotics and NASA's Jet Propulsion Laboratory as a research scientist and engineer.

While working as an engineer at Aldebaran Robotics, Knight created the touch sensing panel for the Nao autonomous family companion robot, as well as the infrared detection and emission capabilities in its eyes.

Her work with Syyn Labs on the opening two minutes of the OK Go video "This Too Shall Pass," which features a Rube Goldberg machine, won a UK Music Video Award.

She is now assisting Clearpath Robotics in making its self-driving, mobile-transport robots more socially conscious. 





Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


RoboThespian; Turkle, Sherry.


Further Reading:



Biever, Celeste. 2010. “Wherefore Art Thou, Robot?” New Scientist 208, no. 2792: 50–52.

Breazeal, Cynthia, Andrew Brooks, Jesse Gray, Matt Hancher, Cory Kidd, John McBean, Dan Stiehl, and Joshua Strickon. 2003. “Interactive Robot Theatre.” Communications of the ACM 46, no. 7: 76–84.

Knight, Heather. 2013. “Social Robots: Our Charismatic Friends in an Automated Future.” Wired UK, April 2, 2013. https://www.wired.co.uk/article/the-inventor.

Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy through Good Design. Washington, DC: Brookings Institute, Center for Technology Innovation.



AI - SyNAPSE

 


 

Project SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) is a collaborative cognitive computing effort sponsored by the Defense Advanced Research Projects Agency to develop the architecture for a brain-inspired neurosynaptic computer core.

The project, which began in 2008, is a collaboration between IBM Research, HRL Laboratories, and Hewlett-Packard.

Researchers from a number of universities are also involved in the project.


The acronym SyNAPSE echoes the Ancient Greek word σύναψις (synapsis), which means "conjunction," and refers to the neural junctions across which information travels in the brain.



The project's purpose is to reverse-engineer the functional intelligence of rats, cats, or potentially humans to produce a flexible, ultra-low-power system for use in robots.

The initial DARPA announcement called for a machine that could "scale to biological levels" and break through the "algorithmic-computational paradigm" (DARPA 2008, 4).

In other words, they needed an electronic computer that could analyze real-world complexity, respond to external inputs, and do so in near-real time.

SyNAPSE is a reaction to the need for computer systems that can adapt to changing circumstances and understand the environment while being energy efficient.

Scientists at SyNAPSE are working on neuromorphic electronics systems that are analogous to biological nervous systems and capable of processing data from complex settings.




It is envisaged that such systems will gain a considerable degree of autonomy in the future.

The SyNAPSE project takes an interdisciplinary approach, drawing on concepts from areas as diverse as computational neuroscience, artificial neural networks, materials science, and cognitive science.


Basic science and engineering will need to be expanded in the following areas by SyNAPSE: 


  •  simulation—for the digital replication of systems in order to verify functioning prior to the installation of material neuromorphological systems.





In 2008, IBM Research and HRL Laboratories received the first SyNAPSE grant.

Various aspects of the grant requirements were subcontracted to a variety of vendors and contractors by IBM and HRL.

The project was split into four parts, each of which began following a nine-month feasibility assessment.

The first simulator, C2, was released in 2009 and ran on a BlueGene/P supercomputer, simulating cortical networks with 10⁹ neurons and 10¹³ synapses, comparable in scale to a cat brain.

Following a revelation by the Blue Brain Project leader that the simulation did not meet the complexity claimed, the software was panned.

Each neurosynaptic core is 2 millimeters by 3 millimeters in size, with a design informed by human brain biology.

The relationship between the cores and actual brains is symbolic rather than literal.

Computation stands in for neurons, memory for synapses, and communication for axons and dendrites.

This enables the team to explain a biological system's hardware implementation.





HRL Labs stated in 2012 that it had created the world's first working memristor array layered atop a conventional CMOS circuit.

The term "memristor," which combines the words "memory" and "resistor," was coined in 1971.

Memory and logic functions are integrated in a memristor.

In 2012, project organizers reported the successful large-scale simulation of 530 billion neurons and 100 trillion synapses on the Blue Gene/Q Sequoia machine at Lawrence Livermore National Laboratory in California, which is the world's second fastest supercomputer.





The TrueNorth processor, a 5.4-billion-transistor chip with 4096 neurosynaptic cores coupled through an intrachip network that includes 1 million programmable spiking neurons and 256 million adjustable synapses, was presented by IBM in 2014.
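TrueNorth's programmable spiking neurons are far more configurable than this, but a leaky integrate-and-fire update conveys the basic behavior such a unit implements: membrane potential accumulates input, leaks over time, and emits a spike when it crosses a threshold. The parameters below are arbitrary illustration, not TrueNorth's actual neuron model:

```python
def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One tick of a leaky integrate-and-fire neuron: decay the membrane
    potential, add input, and spike (with reset) at the threshold."""
    v = leak * v + input_current
    if v >= threshold:
        return 0.0, 1  # reset potential, emit a spike
    return v, 0

v, spikes = 0.0, []
for _ in range(10):
    v, s = lif_step(v, input_current=0.3)
    spikes.append(s)
print(spikes)  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The steady input produces a regular spike train, which is the kind of event-driven signaling TrueNorth's 4096 cores route over the intrachip network.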

Finally, in 2016, an end-to-end ecosystem (including scalable systems, software, and apps) that could fully use the TrueNorth CPU was unveiled.

At the time, there were reports on the deployment of applications such as interactive handwritten character recognition and data-parallel text extraction and recognition.

TrueNorth's cognitive computing chips have now been put to the test in simulations like a virtual-reality robot driving and playing the popular videogame Pong.

DARPA has been interested in the construction of brain-inspired computer systems since the 1980s.

Dharmendra Modha, director of IBM Almaden's Cognitive Computing Initiative, and Narayan Srinivasa, head of HRL's Center for Neural and Emergent Systems, lead Project SyNAPSE.


~ Jai Krishna Ponnappan



You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; Computational Neuroscience.


References And Further Reading


Defense Advanced Research Projects Agency (DARPA). 2008. “Systems of Neuromorphic Adaptive Plastic Scalable Electronics.” DARPA-BAA 08-28. Arlington, VA: DARPA, Defense Sciences Office.

Hsu, Jeremy. 2014. “IBM’s New Brain.” IEEE Spectrum 51, no. 10 (October): 17–19.

Merolla, Paul A., et al. 2014. “A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface.” Science 345, no. 6197 (August): 668–73.

Monroe, Don. 2014. “Neuromorphic Computing Gets Ready for the (Really) Big Time.” Communications of the ACM 57, no. 6 (June): 13–15.




Quantum Revolution 2.0 - Technology and Social Change



Increased scientific knowledge has always had a significant effect on technical, social, and economic advances, just as it has always entailed enormous ideological revolutions. 



The natural sciences are, in reality, the primary engine of our contemporary wealth. 


The persistent quest of information leads to scientific advancement, which, when coupled with the dynamism of free-market competition, leads to equally consistent technical advancement. 

The first gives humanity ever-increasing insight into the structure and processes of nature, while the second provides us with almost unlimited opportunities for individual activity, economic growth, and quality-of-life improvements. 



Here are a few instances from the past: 


• During the Renaissance, new technical breakthroughs such as papermaking, printing, mechanical clocks, navigation tools/shipping, building, and so on ushered in unparalleled wealth for Europeans. 

• The fruits of Newtonian physics found a spectacular technical expression in the shape of steam engines and heat machines, based on the new theory of heat, during the Industrial Revolution of the 18th and 19th centuries. 

• Transportation and manufacturing were transformed by railway and industrial equipment. 

• In the late 1800s, Faraday and Maxwell's electromagnetic field theory led immediately to city electricity, modern telecommunications, and electrical devices for a significant portion of the rural population. 

• The technological revolution of the twentieth century roughly corresponds to the first generation of quantum technologies and has brought us lasers, computers, imaging devices, and much more (including, unfortunately, the atomic bomb), resulting in a first wave of political and economic globalization. 



Digitization, with its ever-faster information processing and transmission, industrial integration with information and communication technology, and, of course, the internet, has ushered in a new era of political and economic globalization. 


Something new will emerge from the impending second quantum revolution. 

It will radically transform communication, engagement, and manufacturing once again. 

The Quantum Revolution 2.0, like all other technological revolutions, will usher in yet another significant shift in our way of life and society. 



~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.







How Can Atomic Clocks Help Humans Arrive On Mars On Time?



    Autonomous Navigation - Overcoming Technological Limitations



    NASA navigators are assisting in the development of a future in which spacecraft may safely and independently travel to destinations such as the Moon and Mars.


    • Today, navigators guide a spacecraft by calculating its position from Earth and transmitting the data to space in a two-way relay system that may take minutes to hours to give instructions. 
    • This mode of navigation ensures that our spacecraft remain connected to the earth, waiting for instructions from our planet, no matter how far a mission goes across the solar system.
    • This constraint will obstruct any future crewed voyage to another planet. 


    How can astronauts travel to destinations distant from Earth if they don't have direct control over their path? 


    And how will they be able to land properly on another planet if there is a communication delay that slows down their ability to alter their trajectory into the atmosphere?


    The Deep Space Atomic Clock, a toaster-sized clock developed by NASA, seeks to answer these questions. 



    • It's the first GPS-like device that's small enough to fly on a spacecraft and stable enough to operate there. 
    • The technology demonstration allows the spacecraft to determine its location without relying on data from Earth.
    • The clock will be sent into Earth's orbit for a year in late June on a SpaceX Falcon Heavy rocket, where it will be tested to see whether it can assist spacecraft in locating themselves in space.



    If the Deep Space Atomic Clock's first year in space goes well, it may open the way for one-way navigation in the future, when humans can be led over the Moon's surface by a GPS-like system or safely fly their own missions to Mars and beyond.


    • Navigators on Earth guide every spaceship traveling to the furthest reaches of the universe. 
    • By allowing onboard autonomous navigation, or self-driving spacecraft, the Deep Space Atomic Clock will change that.



    Deep Space Navigation




    Atomic clocks in space are not a novel concept. 


    • Every GPS gadget and smartphone uses atomic clocks on satellites circling Earth to calculate its position. 
    • Satellites transmit signals from space, and the receiver triangulates your location by calculating the time it takes for the signals to reach your GPS.
    • At the moment, spacecraft beyond Earth's orbit do not have a GPS to help them navigate across space. 


    GPS satellites' atomic clocks aren't precise enough to transmit instructions to spacecraft, where even a fraction of a second may mean missing a planet by kilometers.
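The sensitivity to timing is easy to quantify: because radio signals travel at the speed of light, a clock error maps directly into a distance error. A minimal illustration (the function name is ours):

```python
C = 299_792_458.0  # speed of light in m/s

def position_error(clock_error_seconds):
    """A timing error converts to a distance error at the speed of light."""
    return C * clock_error_seconds

print(position_error(1e-6))          # one microsecond of error: ~300 m off
print(position_error(1e-3) / 1000)   # one millisecond: ~300 km off
```

This is why a "fraction of a second" of clock error is enough to miss a planetary atmosphere-entry corridor.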


    • Instead, navigators transmit a signal to the spaceship, which bounces it back to Earth, using massive antennas on Earth.
    • Ground-based clocks keep track of how long it takes the signal to complete this two-way trip. 
    • The length of time informs them how far away and how quickly the spaceship is traveling. 
    • Only then will navigators be able to give the spacecraft instructions, instructing it where to travel.
    • "It's the same idea as an echo," Seubert said. "If I scream in front of a mountain, the longer it takes for the echo to return to me, the farther away the mountain is."


    Two-way navigation implies that a mission must wait for a signal containing instructions to traverse the enormous distances between planets, no matter how far into space it travels. 


    • It's a procedure made famous by Curiosity's arrival on Mars, when the world waited 14 minutes for the rover to transmit word of its safe landing to mission headquarters. 
    • A one-way communication between Earth and Mars may take anything from 4 to 20 minutes to get between the planets, depending on where they are in their orbits.
    • It's a sluggish, arduous method of navigating deep space, one that clogs up NASA's Deep Space Network's massive antennae like a busy phone line. 
    • A spaceship traveling at tens of thousands of kilometers per hour may be at a completely different location by the time it "knows" where it is during this interaction.
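The echo analogy above translates directly into arithmetic: a round-trip light time gives distance once halved, while an accurate onboard clock would let the spacecraft use a one-way time instead. A sketch with illustrative function names:

```python
C = 299_792_458.0  # speed of light in m/s

def two_way_range(round_trip_seconds):
    """Ground-based ranging: one-way distance is half the round-trip light time."""
    return C * round_trip_seconds / 2.0

def one_way_range(one_way_seconds):
    """With an accurate onboard clock, a spacecraft could time the one-way
    trip itself and compute its own distance."""
    return C * one_way_seconds

# Curiosity's 14-minute one-way light time corresponds to roughly 250 million km
print(one_way_range(14 * 60) / 1e9, "million km")
```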



    Atomic Clocks To Compute Precise Locations In Space




    This two-way system may be replaced with an atomic clock small enough to go on a mission but precise enough to provide correct instructions. 


    • A signal would be sent from Earth to a spaceship in the future. 
    • The Deep Space Atomic Clock aboard, like its Earthly counterparts, would measure the time it took for that signal to reach it. 
    • After that, the spacecraft could compute its own location and course, effectively directing itself.


    Having a clock aboard would allow onboard radio navigation, which, when coupled with optical navigation, would provide astronauts with a more precise and safe method to navigate themselves.


    • This one-way navigation technique may be used on Mars and beyond. 
    • By sending a single signal into space, DSN antennas would be able to connect with many missions at the same time. 
    • The new technique has the potential to enhance GPS accuracy on Earth. 
    • Additionally, several spacecraft equipped with Deep Space Atomic Clocks might circle Mars, forming a GPS-like network that would guide robots and people on the surface.


    The Deep Space Atomic Clock will be able to assist in navigation not just on Earth, but also on distant planets. Consider what would happen if we had GPS on other planets.



    • Burt and JPL clock scientists Robert Tjoelker and John Prestage developed a mercury ion clock that, like refrigerator-size atomic clocks on Earth, retains its stability in space. 
    • In lab testing, the Deep Space Atomic Clock proved to be 50 times more stable than GPS clocks, drifting by less than one second every ten million years.
    • The clock's ability to stay steady in orbit will be determined by its demonstration in space. 
    • A Deep Space Atomic Clock may launch on a mission as early as the 2030s if it succeeds. 
    • It would be a first step toward self-driving spacecraft capable of transporting people to distant planets.
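The quoted figure of one second of drift every ten million years can be restated as a fractional frequency stability, a quick back-of-the-envelope check:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

def fractional_stability(drift_seconds, elapsed_years):
    """Express a clock's accumulated drift as a fractional frequency error."""
    return drift_seconds / (elapsed_years * SECONDS_PER_YEAR)

# One second of drift over ten million years
f = fractional_stability(1.0, 10_000_000)
print(f)  # on the order of 3e-15
```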



    General Atomics Electromagnetic Systems of Englewood, Colorado supplied the spacecraft for the Deep Space Atomic Clock. 

    It is supported by NASA's Space Technology Mission Directorate's Technology Demonstration Missions program and NASA's Human Exploration and Operations Mission Directorate's Space Communications and Navigations program. The project is overseen by JPL.


    ~ Jai Krishna Ponnappan


    Courtesy - NASA.gov


    You may also want to read more about Space Missions and Systems here.




    Artificial Intelligence - What Were The Macy Conferences?

     



    The Macy Conferences on Cybernetics, which ran from 1946 to 1960, aimed to provide the framework for developing multidisciplinary disciplines such as cybernetics, cognitive psychology, artificial life, and artificial intelligence.

    Famous twentieth-century scholars, academics, and researchers took part in the Macy Conferences' freewheeling debates, including psychiatrist W. Ross Ashby, anthropologist Gregory Bateson, ecologist G. Evelyn Hutchinson, psychologist Kurt Lewin, philosopher Donald Marquis, neurophysiologist Warren McCulloch, cultural anthropologist Margaret Mead, economist Oskar Morgenstern, statistician Leonard Savage, and physicist Heinz von Foerster.

    McCulloch, a neurophysiologist at the Massachusetts Institute of Technology's Research Laboratory for Electronics, and von Foerster, a professor of signal engineering at the University of Illinois at Urbana-Champaign and coeditor with Mead of the published Macy Conference proceedings, were the two main organizers of the conferences.

    All meetings were sponsored by the Josiah Macy Jr. Foundation, a nonprofit organization.

    The conferences were started by Macy administrators Frank Fremont-Smith and Lawrence K. Frank, who believed that they would spark multidisciplinary discussion.

    The disciplinary isolation of medical research was a major worry for Fremont-Smith and Frank.

    A Macy-sponsored symposium on Cerebral Inhibitions in 1942 preceded the Macy meetings, during which Harvard physiology professor Arturo Rosenblueth presented the first public discussion on cybernetics, titled "Behavior, Purpose, and Teleology." The 10 conferences conducted between 1946 and 1953 focused on biological and social systems' circular causation and feedback processes.

    These sessions in turn gave rise to five transdisciplinary Group Processes Conferences, held between 1954 and 1960.

    To foster direct conversation amongst participants, conference organizers avoided formal papers in favor of informal presentations.

    The significance of control, communication, and feedback systems in the human nervous system was stressed in the early Macy Conferences.

    The contrasts between analog and digital processing, switching circuit design and Boolean logic, game theory, servomechanisms, and communication theory were among the other subjects explored.

    These concerns fall under the umbrella of "first-order cybernetics."

    Several biological topics were also discussed during the conferences, including adrenal cortex function, consciousness, aging, metabolism, nerve impulses, and homeostasis.

    The sessions acted as a forum for discussing long-standing issues in what would eventually be referred to as artificial intelligence.

    (Mathematician John McCarthy coined the phrase "artificial intelligence" in a 1955 proposal for a summer research project at Dartmouth College.) Gregory Bateson, for example, gave a lecture at the inaugural Macy Conference that distinguished between "learning" and "learning to learn," drawing on his anthropological research, and encouraged listeners to consider how a computer might perform either task.

    Attendees at the eighth conference discussed decision theory research in a session led by Leonard Savage.

    W. Ross Ashby introduced the notion of chess-playing automata at the ninth conference.

    The usefulness of automatic computers as logical models for human cognition was discussed more than any other topic at the Macy Conferences.

    In 1964, the Macy Conferences gave rise to the American Society for Cybernetics, a professional organization.

    The Macy Conferences' early arguments on feedback methods were applied to topics as varied as artillery control, project management, and marital therapy.


    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 


    Cybernetics and AI; Dartmouth AI Conference.


    References & Further Reading:


    Dupuy, Jean-Pierre. 2000. The Mechanization of the Mind: On the Origins of Cognitive Science. Princeton, NJ: Princeton University Press.

    Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

    Heims, Steve J. 1988. “Optimism and Faith in Mechanism among Social Scientists at the Macy Conferences on Cybernetics, 1946–1953.” AI & Society 2: 69–78.

    Heims, Steve J. 1991. The Cybernetics Group. Cambridge, MA: MIT Press.

    Pias, Claus, ed. 2016. The Macy Conferences, 1946–1953: The Complete Transactions. Zürich, Switzerland: Diaphanes.




    Artificial Intelligence - History And Timeline

       




      1942

      Science fiction author Isaac Asimov introduces the Three Laws of Robotics in the short story "Runaround."


      1943


      Mathematician Emil Post publishes his work on "production systems," a notion later adopted for the 1957 General Problem Solver.


      1943


      Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity," a paper laying out a computational theory of neural networks.


      1944


      John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and Howard Aiken found the Teleological Society to explore, among other things, communication and control in the nervous system.


      1945


      In his book How to Solve It, George Polya emphasizes the importance of heuristic thinking in problem solving.


      1946


      In New York City, the first of ten Macy Conferences on Cybernetics gets underway. "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems" is the focus of the inaugural conference.



      1948


      Norbert Wiener, a mathematician, publishes Cybernetics, or Control and Communication in the Animal and the Machine.


      1949


      In his book The Organization of Behavior, psychologist Donald Hebb proposes a theory of neural adaptation in learning, often summarized as "neurons that fire together wire together."
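      Hebb's idea is often rendered in modern texts as a simple weight update. The sketch below is purely illustrative, not Hebb's own formulation; the function name and learning rate are hypothetical choices.

```python
# Toy sketch of a Hebbian weight update: the connection between two
# units grows in proportion to their correlated activity.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Return the weight after one Hebbian step: dw = lr * pre * post."""
    return weight + learning_rate * pre * post

# Units that fire together see their connection strengthen ...
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)

# ... while a silent postsynaptic unit leaves the weight unchanged.
w_idle = hebbian_update(0.0, pre=1.0, post=0.0)
```

      Correlated activity ratchets the weight upward on every step, which is why later refinements (normalization, decay) were needed to keep such weights bounded.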


      1949


      Edmund Berkeley's book Giant Brains, or Machines That Think, is published.


      1950


      Alan Turing's "Computing Machinery and Intelligence" describes the Turing Test, which attributes intelligence to any computer capable of demonstrating intelligent behavior comparable to that of a person.


      1950


      Claude Shannon publishes "Programming a Computer for Playing Chess," a groundbreaking technical paper describing chess search techniques and strategies.



      1951


      Marvin Minsky, a mathematics student, and Dean Edmonds, a physics student, build an electronic rat that can learn to navigate a maze using Hebbian theory.


      1951


      John von Neumann, a mathematician, publishes "General and Logical Theory of Automata," which compares the human brain and central nervous system to a computer.


      1951


      For the University of Manchester's Ferranti Mark 1 computer, Christopher Strachey writes a checkers program and Dietrich Prinz writes a chess routine.


      1952


      British cyberneticist W. Ross Ashby publishes Design for a Brain: The Origin of Adaptive Behavior, a book on the logical underpinnings of adaptive brain function.


      1952


      At Cornell University Medical College, physiologist James Hardy and physician Martin Lipkin begin developing a McBee punched card system for mechanical diagnosis of patients.


      1954


      Groff Conklin edits Science-Fiction Thinking Machines: Robots, Androids, Computers, a theme-based anthology.


      1954


      The Georgetown-IBM experiment demonstrates the potential of machine translation of text.


      1955


      Under the direction of economist Herbert Simon and graduate student Allen Newell, artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon University).


      1955


      Mathematician John Kemeny writes "Man as a Machine" for Scientific American.


      1955


      In a Rockefeller Foundation proposal for a summer conference at Dartmouth College, mathematician John McCarthy coins the phrase "artificial intelligence."



      1956


      Allen Newell, Herbert Simon, and Cliff Shaw create the Logic Theorist, an artificial intelligence program that proves theorems from Alfred North Whitehead and Bertrand Russell's Principia Mathematica.


      1956


      The Dartmouth Summer Research Project, the "Constitutional Convention of AI," brings together specialists in cybernetics, automata, information theory, operations research, and game theory.


      1956


      On television, electrical engineer Arthur Samuel demonstrates his checkers-playing AI program.


      1957


      Allen Newell and Herbert Simon create the General Problem Solver AI program.


      1957


      The Rockefeller Medical Electronics Center shows how an RCA Bizmac computer application might help doctors distinguish between blood disorders.


      1958


      The Computer and the Brain, an unfinished work by John von Neumann, is published.


      1958


      At the "Mechanisation of Thought Processes" symposium at the UK's Teddington National Physical Laboratory, Firmin Nash delivers the Group Symbol Associator its first public demonstration.


      1958


      Frank Rosenblatt develops the single-layer perceptron, a neural network with a supervised learning algorithm for linear data classification.
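      Rosenblatt's rule adjusts weights by the prediction error on each labeled example. Here is a minimal sketch under assumptions of my own (the classic error-correction update, hypothetical variable names), trained on the linearly separable AND function:

```python
# Minimal single-layer perceptron with the classic supervised
# error-correction update rule, learning the AND function.

def train_perceptron(samples, epochs=10, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted          # -1, 0, or +1
            w[0] += lr * error * x1             # nudge the weights and
            w[1] += lr * error * x2             # bias toward the correct
            b += lr * error                     # side of the boundary
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

      Because AND is linearly separable, the perceptron convergence theorem guarantees this rule settles on a separating line; XOR, famously, has no such line, a limitation Minsky and Papert later analyzed.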


      1958


      The high-level programming language LISP is specified by John McCarthy of the Massachusetts Institute of Technology (MIT) for AI research.


      1959


      Physicist Robert Ledley and radiologist Lee Lusted publish "Reasoning Foundations of Medical Diagnosis," which applies Bayesian inference and symbolic logic to problems of medical diagnosis.


      1959


      At MIT, John McCarthy and Marvin Minsky create the Artificial Intelligence Laboratory.


      1960


      James L. Adams, an engineering student, builds the Stanford Cart, a remote-controlled vehicle with a television camera.


      1962


      In his short story "Without a Thought," science fiction and fantasy author Fred Saberhagen introduces sentient killing machines known as Berserkers.


      1963


      John McCarthy founds the Stanford Artificial Intelligence Laboratory (SAIL).


      1963


      Under Project MAC, the Advanced Research Projects Agency of the United States Department of Defense begins funding artificial intelligence projects at MIT.


      1964


      Joseph Weizenbaum of MIT creates ELIZA, the first program allowing natural language conversation with a computer (a "chatbot").


      1965


      British statistician I. J. Good publishes "Speculations Concerning the First Ultraintelligent Machine," which predicts an impending intelligence explosion.


      1965


      Hubert L. Dreyfus and Stuart E. Dreyfus, philosophers and mathematicians, publish "Alchemy and AI," a study critical of artificial intelligence.


      1965


      Joshua Lederberg and Edward Feigenbaum found the Stanford Heuristic Programming Project, which aims to model scientific reasoning and build expert systems.


      1965


      Donald Michie becomes head of Edinburgh University's Department of Machine Intelligence and Perception.


      1965


      Georg Nees organizes the first generative art exhibition, Computer Graphic, in Stuttgart, West Germany.


      1965


      With the expert system DENDRAL, computer scientist Edward Feigenbaum starts a ten-year endeavor to automate the chemical analysis of organic molecules.


      1966


      The Automatic Language Processing Advisory Committee (ALPAC) issues a cautious assessment of machine translation's current status.


      1967


      On a DEC PDP-6 at MIT, Richard Greenblatt finishes work on Mac Hack, a program that plays competitive tournament chess.


      1967


      Waseda University's Ichiro Kato begins work on the WABOT project, which culminates in the unveiling of a full-scale humanoid intelligent robot five years later.


      1968


      Stanley Kubrick's adaptation of Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, about the artificially intelligent computer HAL 9000, is one of the most influential and highly praised films of all time.


      1968


      At MIT, Terry Winograd starts work on SHRDLU, a natural language understanding program.


      1969


      Washington, DC hosts the First International Joint Conference on Artificial Intelligence (IJCAI).


      1972


      Artist Harold Cohen develops AARON, an artificial intelligence program that generates paintings.


      1972


      Ken Colby describes his efforts using the software program PARRY to simulate paranoia.


      1972


      In What Computers Can't Do, Hubert Dreyfus offers his criticism of artificial intelligence's intellectual basis.


      1972


      Ted Shortliffe, a doctoral student at Stanford University, begins work on MYCIN, an expert system designed to identify bacterial infections and recommend treatments.


      1972


      The UK Science Research Council releases the Lighthill Report on Artificial Intelligence, which highlights AI technological shortcomings and the challenges of combinatorial explosion.


      1972


      The Assault on Privacy: Computers, Data Banks, and Dossiers, by Arthur Miller, is an early study on the societal implications of computers.


      1972


      University of Pittsburgh physician Jack Myers, medical student Randolph Miller, and computer scientist Harry Pople begin developing INTERNIST-I, an internal medicine expert system.


      1974


      Paul Werbos, a social scientist, completes his dissertation on a backpropagation algorithm now widely used to train artificial neural networks for supervised learning.


      1974


      Marvin Minsky distributes MIT AI Lab Memo 306, "A Framework for Representing Knowledge." The memo introduces the notion of a frame, a "remembered framework" that fits reality by "changing detail as appropriate."


      1975


      John Holland introduces the phrase "genetic algorithm" to describe evolutionary strategies in natural and artificial systems.
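      The evolutionary strategy can be sketched as a toy genetic algorithm. Everything below is an illustrative assumption of mine, not Holland's formulation: a "count the ones" fitness function, truncation selection, one-point crossover, and point mutation, evolving bit strings toward all ones.

```python
import random

# Toy genetic algorithm: evolve bit strings toward the all-ones target
# through selection, crossover, and mutation.

def evolve(length=12, pop_size=20, generations=60, seed=0):
    rng = random.Random(seed)

    def fitness(bits):
        return sum(bits)  # "count the ones" toy fitness

    # Random initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)      # two distinct parents
            cut = rng.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # occasional point mutation
                child[rng.randrange(length)] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

      After enough generations the population is dominated by high-fitness strings; the same loop structure carries over to harder fitness landscapes, which is where the technique earns its keep.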


      1976


      In Computer Power and Human Reason, computer scientist Joseph Weizenbaum expresses his mixed feelings on artificial intelligence research.


      1978


      At Rutgers University, EXPERT, a generic knowledge representation scheme for building expert systems, goes live.


      1978


      Joshua Lederberg, Douglas Brutlag, Edward Feigenbaum, and Bruce Buchanan start the MOLGEN project at Stanford to solve DNA structures generated from segmentation data in molecular genetics research.


      1979


      Raj Reddy, a computer scientist at Carnegie Mellon University, founds the Robotics Institute.


      1979


      Robert Williams, an American factory worker, becomes the first human killed by an industrial robot.


      1979


      Hans Moravec rebuilds the Stanford Cart, which over almost two decades has evolved into an autonomous rover, and equips it with a stereoscopic vision system.


      1980


      The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.


      1980


      In his Chinese Room argument, philosopher John Searle claims that a computer's modeling of action does not establish comprehension, intentionality, or awareness.


      1982


      Blade Runner, a science fiction film based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), is released.


      1982


      Physicist John Hopfield popularizes the associative neural network, first developed by William Little in 1974.


      1984


      In Fortune Magazine, Tom Alexander writes "Why Computers Can't Outthink the Experts."


      1984


      At the Microelectronics and Computer Consortium (MCC) in Austin, TX, computer scientist Doug Lenat launches the Cyc project, which aims to create a vast commonsense knowledge base and artificial intelligence architecture.


      1984


      Orion Pictures releases the first Terminator picture, which features robotic assassins from the future and an AI known as Skynet.


      1986


      Honda establishes a research facility to build humanoid robots that can cohabit and interact with humans.


      1986


      Rodney Brooks, an MIT roboticist, describes the subsumption architecture for behavior-based robots.


      1986


      Marvin Minsky publishes The Society of Mind, which depicts the mind as a collection of cooperating agents.


      1989


      The MIT Artificial Intelligence Lab's Rodney Brooks and Anita Flynn publish "Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System," a paper discussing the possibility of sending small robots on interplanetary exploration missions.


      1993


      The Cog interactive robot project is launched at MIT by Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others.


      1995


      Musician Brian Eno coins the phrase "generative music" to describe systems that create ever-changing music by modifying parameters over time.


      1995


      General Atomics' MQ-1 Predator unmanned aerial vehicle enters US military and reconnaissance service.


      1997


      Under standard tournament conditions, IBM's Deep Blue supercomputer defeats reigning world chess champion Garry Kasparov.


      1997


      In Nagoya, Japan, the inaugural RoboCup, an international tournament featuring over forty teams of robot soccer players, takes place.


      1997


      Dragon Systems releases NaturallySpeaking, its first commercial speech recognition software product.


      1999


      Sony introduces AIBO, a robotic dog, to the general public.


      2000


      The Advanced Step in Innovative Mobility humanoid robot, ASIMO, is unveiled by Honda.


      2001


      At Super Bowl XXXV, Viisage Technology deploys the FaceFINDER automatic face-recognition system.


      2002


      iRobot Corporation, founded by Rodney Brooks, Colin Angle, and Helen Greiner, releases the Roomba autonomous household vacuum cleaner.


      2004


      In the Mojave Desert near Primm, NV, DARPA hosts its inaugural autonomous vehicle Grand Challenge, but none of the cars complete the 150-mile route.


      2005


      The Swiss Blue Brain Project is founded under the direction of neuroscientist Henry Markram to simulate the human brain.


      2006


      Netflix offers a $1 million prize to the first team to build a substantially better recommender system based on prior user ratings.


      2007


      DARPA holds the Urban Challenge, an autonomous vehicle competition that tests merging, passing, parking, and navigating traffic and intersections.


      2009


      Under the leadership of Sebastian Thrun, Google launches its self-driving car project (now known as Waymo) in the San Francisco Bay Area.


      2009


      Fei-Fei Li of Stanford University describes her work on ImageNet, a library of millions of hand-annotated photographs used to teach AIs to recognize the presence or absence of items visually.


      2010


      Human manipulation of automated trading algorithms causes a "flash crash" in the US stock market.


      2011


      Demis Hassabis, Shane Legg, and Mustafa Suleyman found DeepMind in the United Kingdom to teach AIs to play and excel at classic video games.


      2011


      Watson, IBM's natural language computer system, defeats Jeopardy! champions Ken Jennings and Brad Rutter.


      2011


      The iPhone 4S comes with Apple's mobile suggestion assistant Siri.


      2011


      Computer scientist Andrew Ng and Google colleagues Jeff Dean and Greg Corrado launch Google Brain, an informal deep learning research collaboration.


      2013


      The European Union's Human Brain Project aims to better understand how the human brain functions and to duplicate its computing capabilities.


      2013


      Human Rights Watch launches the Campaign to Stop Killer Robots.


      2013


      Spike Jonze's science fiction drama Her is released. In the film, a man falls in love with his AI mobile assistant, Samantha.


      2014


      Ian Goodfellow and colleagues at the University of Montreal introduce Generative Adversarial Networks (GANs), deep neural networks useful for generating realistic fake photographs of people.


      2014


      Eugene Goostman, a chatbot posing as a thirteen-year-old boy, is said to have passed a Turing-like test.


      2014


      According to physicist Stephen Hawking, the development of AI might lead to humanity's extinction.


      2015


      Facebook releases DeepFace, a deep learning face recognition system, on its social media platform.


      2016


      In a five-game match, DeepMind's AlphaGo program defeats Lee Sedol, a 9 dan professional Go player.


      2016


      Microsoft releases Tay, an AI chatbot, on Twitter, where users teach it to post abusive and inappropriate messages.


      2017


      The Future of Life Institute hosts the Asilomar Conference on Beneficial AI.


      2017


      Anthony Levandowski, an engineer at an AI self-driving start-up, forms the Way of the Future church, with the goal of creating a superintelligent robot god.


      2018


      Google announces Duplex, an AI program that uses natural language to schedule appointments over the phone.


      2018


      The European Union's General Data Protection Regulation (GDPR) takes effect, and the EU publishes its draft "Ethics Guidelines for Trustworthy AI."


      2019


      A lung cancer screening AI developed by Google AI and Northwestern Medicine in Chicago, IL, surpasses specialized radiologists.


      2019


      OpenAI, cofounded by Elon Musk, releases an artificial intelligence text generator that produces realistic stories and journalism; the system had earlier been judged "too dangerous" to release because of its ability to spread fake news.


      2020


      Google AI, in partnership with the University of Waterloo, the "moonshot factory" X, and Volkswagen, announces TensorFlow Quantum, an open-source framework for quantum machine learning.




      ~ Jai Krishna Ponnappan

      Find Jai on Twitter | LinkedIn | Instagram


      You may also want to read more about Artificial Intelligence here.









