
Artificial Intelligence - Who Was Allen Newell?

 



Allen Newell (1927–1992) was an American computer scientist and cognitive psychologist.


In the late 1950s and early 1960s, Newell collaborated with Herbert Simon to develop the earliest computational models of human cognition.

The Logic Theory Machine showed how logical rules could be used to construct a proof, the General Problem Solver modeled how basic problem solving could be done, and an early chess program (the Newell-Shaw-Simon chess program) modeled how people play chess.

Newell and Simon demonstrated for the first time in these models how computers can manipulate symbols and how these manipulations can be used to describe, produce, and explain intelligent behavior.

Newell began his career at Stanford University as a physics student.

After a year of graduate study in mathematics at Princeton, he joined the RAND Corporation to work on models of complex systems.

While at RAND he met and was inspired by Oliver Selfridge, who led him to modeling cognition.

He also met Herbert Simon, who would go on to receive the Nobel Prize in Economics for his work on economic decision-making processes, particularly satisficing.

Simon persuaded Newell to attend Carnegie Institute of Technology (now Carnegie Mellon University).

For most of his academic career, Newell worked with Simon.

Newell's main goal was to simulate the human mind's operations using computer models in order to better comprehend it.

Newell earned his PhD at Carnegie Mellon, where he worked with Simon.

He stayed on to spend his academic career there, ultimately as a tenured and chaired professor.

He held his main position in the Department of Computer Science (now the School of Computer Science), of which he was a founding member.

With Simon, Newell examined the mind, especially problem solving, as part of his major line of study.

Their book Human Problem Solving, published in 1972, outlined their idea of intelligence and included examples from arithmetic problems and chess.

To assess what resources are being used in cognition, they relied heavily on verbal talk-aloud protocols, which are more accurate than think-aloud or retrospective protocols.

Ericsson and Simon eventually documented the science of verbal protocol data in more detail.

In his final lecture ("Desires and Diversions"), he stated that if you're going to be distracted, you should make the most of it.

He did this through remarkable achievements in the areas of his diversions, as well as by folding some of them into his final project.

ZOG, one of the early hypertext systems, was among these diversions.

Newell also collaborated with Digital Equipment Corporation (DEC) founder Gordon Bell on a textbook on computer architectures and worked on voice recognition systems with CMU colleague Raj Reddy.

Perhaps the longest-running and most fruitful diversion was his work with Stuart Card and Thomas Moran at Xerox PARC developing theories of how people interact with computers.

These theories are documented in The Psychology of Human-Computer Interaction (1983).

Their study resulted in the Keystroke Level Model and GOMS, two models for representing human behavior, as well as the Model Human Processor, a simplified description of the mechanics of cognition in this domain.

Some of the first work in human-computer interaction (HCI) was done here.

Their strategy advocated first understanding the user and the task, then employing technology to assist the user in completing the task.
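The Keystroke-Level Model mentioned above can be illustrated with a small calculation. The sketch below uses the standard operator times published by Card, Moran, and Newell; the task sequence itself is a hypothetical example for illustration, not one taken from their book.

```python
# A minimal Keystroke-Level Model (KLM) estimate. Operator times are the
# standard values from Card, Moran, and Newell; the task is hypothetical.
KLM_OPERATORS = {
    "K": 0.20,  # keystroke or button press (skilled typist)
    "P": 1.10,  # point with the mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a routine cognitive act
}

def klm_estimate(sequence):
    """Sum operator times for a sequence such as 'HMPKK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: home to mouse, prepare, point at a word, double-click,
# prepare, point at a menu, click, point at the destination, click.
task = "HMPKKMPKPK"
print(f"Predicted time: {klm_estimate(task):.2f} s")
```

Summing per-operator times in this way is the whole model: it trades cognitive detail for a quick, falsifiable prediction of expert task time.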

In his farewell talk, Newell also said that scientists should have a last endeavor that would outlive them.

Newell's last goal was to advocate for unified theories of cognition (UTCs) and to develop Soar, a proposed UTC and example.

His idea imagined what it would be like to have a theory that combined all of psychology's restrictions, facts, and theories into a single unified outcome that could be implemented by a computer program.

Although it is not yet complete, Soar remains a successful ongoing project.

While Soar has yet to fully unify psychology, it has made significant progress in describing problem solving, learning, and their interactions, as well as in creating autonomous, reactive agents in large simulations.

He looked into how learning could be modeled as part of his final project (with Paul Rosenbloom).

Later, this project was merged with Soar.

Learning, according to Newell and Rosenbloom, follows a power law of practice, in which the time to complete a task is proportional to the practice (trial) number raised to a small negative power (e.g., Time ∝ Trial^(−α)).

This holds true across a broad variety of activities.

Their explanation was that tasks were learned hierarchically: what was learned at the lowest level had the greatest impact on reaction time, but what was learned higher up the hierarchy was used less often and saved less time, so learning slowed but did not stop.
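The power law of practice described above can be sketched in a few lines. The constants below are illustrative choices, not values fitted to any real data set.

```python
# A minimal sketch of the power law of practice: time on trial n is
# T(n) = B * n**(-alpha) for a small positive alpha.
# B and alpha here are illustrative, not fitted to real data.
B, alpha = 10.0, 0.4   # time on trial 1, and the learning-rate exponent

def time_on_trial(n):
    return B * n ** (-alpha)

# Improvement is fast early and slows but never stops:
for n in (1, 10, 100, 1000):
    print(n, round(time_on_trial(n), 2))
```

On log-log axes this curve is a straight line, which is how the power law is usually identified in practice data.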

Newell delivered the William James Lectures at Harvard in 1987.

He detailed what it would take to develop a unified theory in psychology in these lectures.

These lectures were taped and are accessible in CMU's library.

He gave them again the following autumn and turned them into a book (1990).

Soar represents cognition as search through problem spaces.

It takes the form of a production system (using IF-THEN rules).

It attempts to apply an operator. If it does not have one, or cannot apply it, Soar treats this as an impasse and recurses to resolve it.

As a result, knowledge is represented as parts of operators and problem spaces, along with knowledge of how to resolve impasses.

The architecture, then, is how these choices and this knowledge are organized.
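The rule-firing and impasse mechanism described above can be sketched as a toy production system. This is an illustrative sketch loosely in the spirit of Soar, not the actual Soar architecture or its API; the rules and state representation are invented for the example.

```python
# A toy production system: each rule proposes a change when its IF condition
# matches working memory. When no rule fires, the system records an impasse,
# sets up a subgoal, and recurses to resolve it (loosely Soar-like).
def run(state, rules, depth=0):
    """Apply rules until the goal is reached or an impasse is unresolvable."""
    while "goal" not in state:
        fired = next((r for r in rules if r["if"](state)), None)
        if fired is None:
            if depth > 2:  # give up on deep impasses in this toy
                raise RuntimeError("unresolvable impasse")
            # impasse: create a subgoal and recurse to resolve it
            state = run(dict(state, subgoal=True), rules, depth + 1)
            continue
        state = fired["then"](state)
    return state

rules = [
    {"if": lambda s: s.get("blocks") == "stacked",
     "then": lambda s: dict(s, goal=True)},
    {"if": lambda s: s.get("subgoal"),
     "then": lambda s: dict(s, blocks="stacked")},
]

final = run({"blocks": "scattered"}, rules)
print(final["goal"])  # the impasse forced a subgoal that stacked the blocks
```

The point of the sketch is the control structure: knowledge lives in the rules, and the architecture is only the loop that matches, fires, and recurses on impasses.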

Soar models have been employed in a range of cognitive science and AI applications, including military simulations, and systems with up to one million rules have been constructed.

Kathleen Carley, a social scientist at CMU, and Newell discussed how to use these cognitive models to simulate social agents.

Work on Soar continues, notably at the University of Michigan under the direction of John Laird, currently with a focus on intelligent agents.

In 1975, the ACM A. M. Turing Award was given to Newell and Simon for their contributions to artificial intelligence, psychology of human cognition, and list processing.

Their work is credited with making significant contributions to computer science as an empirical investigation.

Newell was also elected to the National Academies of Sciences and Engineering.

He was awarded the National Medal of Science in 1992.

Newell was instrumental in establishing a productive and supportive research group, department, and institution.

His son said at his memorial service that he was not only a great scientist, but also a great father.

His "faults" were that he was very smart, that he worked very hard, and that he assumed the same of you.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; General Problem Solver; Simon, Herbert A.


References & Further Reading:


Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Newell, Allen. 1993. Desires and Diversions. Carnegie Mellon University, School of Computer Science. Stanford, CA: University Video Communications.

Simon, Herbert A. 1998. “Allen Newell: 1927–1992.” IEEE Annals of the History of Computing 20, no. 2: 63–76.




Artificial Intelligence - What Were The Macy Conferences?

 



The Macy Conferences on Cybernetics, which ran from 1946 to 1960, aimed to provide the framework for developing multidisciplinary disciplines such as cybernetics, cognitive psychology, artificial life, and artificial intelligence.

Famous twentieth-century scholars, academics, and researchers took part in the Macy Conferences' freewheeling debates, including psychiatrist W. Ross Ashby, anthropologist Gregory Bateson, ecologist G. Evelyn Hutchinson, psychologist Kurt Lewin, philosopher Donald Marquis, neurophysiologist Warren McCulloch, cultural anthropologist Margaret Mead, economist Oskar Morgenstern, statistician Leonard Savage, and physicist Heinz von Foerster.

McCulloch, a neurophysiologist at the Massachusetts Institute of Technology's Research Laboratory of Electronics, and von Foerster, a professor of signal engineering at the University of Illinois at Urbana-Champaign and coeditor with Mead of the published Macy Conference proceedings, were the two main organizers of the conferences.

All meetings were sponsored by the Josiah Macy Jr. Foundation, a nonprofit organization.

The conferences were started by Macy administrators Frank Fremont-Smith and Lawrence K. Frank, who believed that they would spark multidisciplinary discussion.

The disciplinary isolation of medical research was a major worry for Fremont-Smith and Frank.

A Macy-sponsored symposium on cerebral inhibition in 1942 preceded the Macy meetings; there, Harvard physiology professor Arturo Rosenblueth presented the first public discussion of cybernetics, titled "Behavior, Purpose, and Teleology." The ten conferences held between 1946 and 1953 focused on circular causation and feedback processes in biological and social systems.

Between 1954 and 1960, five transdisciplinary Group Processes Conferences were held as a result of these sessions.

To foster direct conversation amongst participants, conference organizers avoided formal papers in favor of informal presentations.

The significance of control, communication, and feedback systems in the human nervous system was stressed in the early Macy Conferences.

The contrasts between analog and digital processing, switching circuit design and Boolean logic, game theory, servomechanisms, and communication theory were among the other subjects explored.

These concerns belong under the umbrella of "first-order cybernetics." Several biological issues were also discussed during the conferences, including adrenal cortex function, consciousness, aging, metabolism, nerve impulses, and homeostasis.

The sessions acted as a forum for discussing long-standing issues in what would eventually be referred to as artificial intelligence.

(Mathematician John McCarthy coined the phrase "artificial intelligence" at Dartmouth College in 1955.) Gregory Bateson, for example, gave a lecture at the inaugural Macy Conference that differentiated between "learning" and "learning to learn," based on his anthropological research, and encouraged listeners to consider how a computer might perform either task.

Attendees at the eighth conference discussed decision theory research, presented by Leonard Savage.

W. Ross Ashby proposed the notion of chess-playing automatons at the ninth conference.

The usefulness of automated computers as logic models for human cognition was discussed more than any other issue during the Macy Conferences.

In 1964, the Macy Conferences gave rise to the American Society for Cybernetics, a professional organization.

The Macy Conferences' early arguments on feedback methods were applied to topics as varied as artillery control, project management, and marital therapy.





See also: 


Cybernetics and AI; Dartmouth AI Conference.


References & Further Reading:


Dupuy, Jean-Pierre. 2000. The Mechanization of the Mind: On the Origins of Cognitive Science. Princeton, NJ: Princeton University Press.

Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Heims, Steve J. 1988. “Optimism and Faith in Mechanism among Social Scientists at the Macy Conferences on Cybernetics, 1946–1953.” AI & Society 2: 69–78.

Heims, Steve J. 1991. The Cybernetics Group. Cambridge, MA: MIT Press.

Pias, Claus, ed. 2016. The Macy Conferences, 1946–1953: The Complete Transactions. Zürich, Switzerland: Diaphanes.




Artificial Intelligence - Who Was John McCarthy?

 


John McCarthy  (1927–2011) was an American computer scientist and mathematician who was best known for helping to develop the subject of artificial intelligence in the late 1950s and pushing the use of formal logic in AI research.

McCarthy was a creative thinker who earned multiple accolades for his contributions to programming languages and operating systems research.

Throughout McCarthy's life, however, artificial intelligence and "formalizing common sense" remained his primary research interest (McCarthy 1990).

As a graduate student, McCarthy first encountered the ideas that would lead him to AI at the Hixon Symposium on "Cerebral Mechanisms in Behavior" in 1948.

The symposium took place at the California Institute of Technology, where McCarthy, having just finished his undergraduate studies, was enrolled in the graduate mathematics program.

In the United States, machine intelligence had become a subject of substantial academic interest under the wide term of cybernetics by 1948, and many renowned cyberneticists, notably Princeton mathematician John von Neumann, were in attendance at the symposium.

McCarthy moved to Princeton's mathematics department a year later, where he discussed some early ideas inspired by the symposium with von Neumann.

McCarthy never published the work, despite von Neumann's urging, because he believed cybernetics could not answer his questions about human knowing.

McCarthy finished a PhD on partial differential equations at Princeton.

He stayed at Princeton as an instructor after graduating in 1951. In the summer of 1952, he had the chance to work at Bell Labs with cyberneticist and information theory inventor Claude Shannon, whom he persuaded to collaborate on an edited collection of writings on machine intelligence.

Automata Studies received contributions from a variety of fields, ranging from pure mathematics to neuroscience.

McCarthy, on the other hand, felt that the published studies did not devote enough attention to the important subject of how to develop intelligent machines.

McCarthy joined the mathematics department at Stanford in 1953 but was let go two years later, perhaps, he speculated, because he spent too much time thinking about intelligent machines and not enough on his mathematical research.

In 1955, he accepted a position at Dartmouth, just as IBM was preparing to establish the New England Computation Center at MIT.

The New England Computation Center gave Dartmouth access to an IBM computer that was installed at MIT and made accessible to a group of New England colleges.

McCarthy met IBM researcher Nathaniel Rochester via the IBM initiative, and he recruited McCarthy to IBM in the summer of 1955 to work with his research group.

McCarthy persuaded Rochester of the need for more research on machine intelligence. With Rochester, Shannon, and Marvin Minsky, then a graduate student at Princeton, he submitted a proposal to the Rockefeller Foundation for a "Summer Research Project on Artificial Intelligence," which contained the first known use of the phrase "artificial intelligence." Although the Dartmouth Project is usually regarded as a watershed moment in the development of AI, the conference did not go as McCarthy had envisioned.

Because the proposal came from a relatively young professor in such a novel field of research, the Rockefeller Foundation funded it at only half the proposed budget, and only because Shannon's reputation carried substantial weight with the Foundation.

Furthermore, since the event took place over many weeks in the summer of 1956, only a handful of the guests were able to attend the whole time.

As a consequence, the Dartmouth conference was a fluid affair with an ever-changing and unpredictable guest list.

Despite its chaotic implementation, the meeting was crucial in establishing AI as a distinct area of research.

McCarthy won a Sloan grant to spend a year at MIT, closer to IBM's New England Computation Center, while still at Dartmouth in 1957.

McCarthy was given a post in the Electrical Engineering department at MIT in 1958, which he accepted.

Later, he was joined by Minsky, who worked in the mathematics department.

McCarthy and Minsky suggested the construction of an official AI laboratory to Jerome Wiesner, head of MIT's Research Laboratory of Electronics, in 1958.

Wiesner agreed, on the condition that McCarthy and Minsky accept six freshly admitted graduate students into the laboratory, and the "artificial intelligence project" began teaching its first generation of students.

McCarthy published his first paper on artificial intelligence the same year.

In the paper, "Programs with Common Sense," he described a computer system he named the Advice Taker that would be capable of accepting and understanding instructions in ordinary natural language from nonexpert users.

McCarthy would later describe the Advice Taker as the start of a research program aimed at "formalizing common sense." He believed that everyday common-sense notions, such as knowing that if you don't have a phone number you will need to look it up before calling, could be written as mathematical formulas and fed to a computer, enabling the machine to reach the same conclusions as humans.

Such formalization of common knowledge, McCarthy felt, was the key to artificial intelligence.

McCarthy's paper, presented at the United Kingdom's National Physical Laboratory "Symposium on Mechanisation of Thought Processes," helped establish the symbolic program of AI research.

McCarthy's research was focused on AI by the late 1950s, although he was also involved in a range of other computing-related topics.

In 1957, he was assigned to a group of the Association for Computing Machinery charged with developing the ALGOL programming language, which would go on to become the de facto language for academic research for the next several decades.

He created the LISP programming language for AI research in 1958, and its successors are widely used in business and academia today.

McCarthy contributed to computer operating system research via the construction of time sharing systems, in addition to his work on programming languages.

Early computers were large and costly, and they could only be operated by one person at a time.

From his first interaction with computers at IBM in 1955, McCarthy recognized the need for many users across a large institution, such as a university or hospital, to be able to use the organization's computers concurrently from terminals in their own offices.

McCarthy pushed for study on similar systems at MIT, serving on a university committee that looked into the issue and ultimately assisting in the development of MIT's Compatible Time-Sharing System (CTSS).

Although McCarthy left MIT before the CTSS work was completed, his advocacy with J.C.R. Licklider, future office head at the Advanced Research Projects Agency (the predecessor of DARPA), while a consultant at Bolt Beranek and Newman in Cambridge, was instrumental in helping MIT secure significant federal support for computing research.

McCarthy was recruited to join what would become the second department of computer science in the United States, after Purdue's, by Stanford Professor George Forsythe in 1962.

McCarthy insisted on coming only as a full professor, which he believed would be more than Forsythe, a young researcher, could arrange.

Forsythe was able to persuade Stanford to grant McCarthy a full chair, and he moved to Stanford in 1965 to establish the Stanford AI laboratory.

Until his retirement in 2000, McCarthy oversaw research at Stanford on AI topics such as robotics, expert systems, and chess.

McCarthy grew up in a family where both parents were ardent members of the Communist Party, and he had a lifelong interest in Russian affairs.

He maintained numerous professional relationships with Soviet cybernetics and AI researchers, traveling and lecturing in the Soviet Union in the mid-1960s, and in 1965 he even arranged a chess match between a Stanford chess program and a Russian counterpart, which the Russian program won.

He developed many foundational concepts in symbolic AI theory while at Stanford, such as circumscription, which expresses the idea that a computer must be allowed to make reasonable assumptions about problems presented to it; otherwise, even simple scenarios would have to be specified in such exacting logical detail that the task would be all but impossible.

McCarthy's accomplishments have been acknowledged with various prizes, including the 1971 Turing Award, the 1988 Kyoto Prize, admission into the National Academy of Sciences in 1989, the 1990 Presidential Medal of Science, and the 2003 Benjamin Franklin Medal.

McCarthy was a brilliant thinker who continually imagined new technologies, such as a space elevator for cheaply lifting material into orbit and a system of carts suspended from wires to improve urban transportation.

In a 2008 interview, McCarthy was asked what he felt were the most significant problems in computing, and he answered without hesitation: "Formalizing common sense," the same endeavor that had inspired him from the start.





See also: 


Cybernetics and AI; Expert Systems; Symbolic Logic.


References & Further Reading:


Hayes, Patrick J., and Leora Morgenstern. 2007. “On John McCarthy’s 80th Birthday, in Honor of His Contributions.” AI Magazine 28, no. 4 (Winter): 93–102.

McCarthy, John. 1990. Formalizing Common Sense: Papers, edited by Vladimir Lifschitz. Norwood, NJ: Albex.

Morgenstern, Leora, and Sheila A. McIlraith. 2011. “John McCarthy’s Legacy.” Artificial Intelligence 175, no. 1 (January): 1–24.

Nilsson, Nils J. 2012. “John McCarthy: A Biographical Memoir.” Biographical Memoirs of the National Academy of Sciences. http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/mccarthy-john.pdf.



Artificial Intelligence - Who Was Marvin Minsky?

 






Marvin Minsky (1927–2016), Donner Professor of Natural Sciences at MIT, was a well-known American cognitive scientist, inventor, and artificial intelligence researcher.

At the Massachusetts Institute of Technology, he cofounded the Artificial Intelligence Laboratory in the 1950s and the Media Lab in the 1980s.

His renown was such that when he served as an adviser on Stanley Kubrick's iconic film 2001: A Space Odyssey in the 1960s, the sleeping astronaut Dr. Victor Kaminski (killed by the HAL 9000 sentient computer) was named after him.

Toward the end of high school in the 1940s, Minsky became interested in intelligence, thinking, and learning machines.

He was interested in neurology, physics, music, and psychology as a Harvard student.



On problem-solving and learning ideas, he collaborated with cognitive psychologist George Miller, and on perception and brain modeling theories with J.C.R. Licklider, professor of psychoacoustics and later father of the internet.

Minsky started thinking about mental ideas while at Harvard.

"I thought the brain was made up of tiny relays called neurons, each of which had a probability linked to it that determined whether the neuron would conduct an electric pulse," he later recalled.

"Technically, this system is now known as a stochastic neural network" (Bernstein 1981).

This hypothesis is comparable to the Hebbian theory Donald Hebb laid out in his book The Organization of Behavior (1949).

In the mathematics department, he finished his undergraduate thesis on topology.

Minsky studied mathematics as a graduate student at Princeton University, but he became increasingly interested in attempting to build artificial neurons out of vacuum tubes like those described in Warren McCulloch and Walter Pitts' famous 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity." He thought that a machine like this might navigate mazes like a rat.



In the summer of 1951, he and fellow Princeton student Dean Edmonds created the system, termed SNARC (Stochastic Neural-Analog Reinforcement Calculator), with money from the Office of Naval Research.

There were 300 tubes in the machine, as well as multiple electric motors and clutches.

The machine used the clutches to adjust its own knobs, making it a learning machine.

The electric rat initially wandered at random, but through reinforcement of its probabilities it learned to make better choices and achieve a desired goal.

Eventually multiple rats occupied the maze and learned from one another.

In his doctoral dissertation, Minsky added a second memory to his hard-wired neural network, which helped the rat recall what stimulus it had received.

When confronted with a new circumstance, this enabled the system to explore its memories and forecast the optimum course of action.

Minsky had believed that by adding enough memory loops to his self-organizing random networks, conscious intelligence would arise spontaneously.

In 1954, Minsky finished his dissertation, "Neural Nets and the Brain Model Problem." After graduating from Princeton, Minsky continued to consider how to create artificial intelligence.



In 1956, he organized and participated in the Dartmouth Summer Research Project on Artificial Intelligence with John McCarthy, Nathaniel Rochester, and Claude Shannon.

The Dartmouth workshop is often referred to as a watershed moment in AI research.

Since no computer was available during the summer workshop, Minsky began simulating on bits of paper the computational process of proving Euclid's geometric theorems.

He realized he could create an imagined computer that would locate proofs without having to tell it precisely what it needed to accomplish.

Minsky showed the results to Nathaniel Rochester, who returned to IBM and asked Herbert Gelernter, a new physics hire, to write a geometry-proving program on a computer.

Gelernter built a program in FORTRAN List Processing Language, a language he invented.

Later, John McCarthy combined Gelernter's language with ideas from mathematician Alonzo Church to develop LISP (List Processing), the most widely used AI language.

Minsky arrived at MIT in 1957.

He started working on pattern recognition problems with Oliver Selfridge at the university's Lincoln Laboratory.

The next year, he was hired as an assistant professor in the mathematics department.

He founded the AI Group with McCarthy, who had transferred to MIT from Dartmouth.

They continued to work on machine learning concepts.

Minsky started working with mathematician Seymour Papert in the 1960s.

Their joint publication, Perceptrons: An Introduction to Computational Geometry (1969), analyzed a kind of artificial neural network described by Cornell Aeronautical Laboratory psychologist Frank Rosenblatt.

The book sparked a decades-long debate in the AI field, which continues to this day in certain aspects.

The mathematical arguments in Minsky and Papert's book pushed the field toward symbolic AI (also known as "Good Old-Fashioned AI" or GOFAI) until the 1980s, when artificial intelligence researchers rediscovered perceptrons and neural networks.
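The central limitation Minsky and Papert analyzed can be demonstrated concretely: a single perceptron computes only linearly separable functions of its inputs, so some weights realize AND, but no weights and threshold realize XOR. The sketch below is an illustration of that argument, not code from their book.

```python
# A single perceptron thresholds a weighted sum, so it can only compute
# linearly separable functions: AND works, XOR provably does not.
from itertools import product

def perceptron(w1, w2, theta):
    return lambda x1, x2: int(w1 * x1 + w2 * x2 > theta)

AND = perceptron(1, 1, 1.5)  # fires only when both inputs are 1
assert [AND(a, b) for a, b in product((0, 1), repeat=2)] == [0, 0, 0, 1]

def realizes_xor(w1, w2, theta):
    f = perceptron(w1, w2, theta)
    return all(f(a, b) == (a ^ b) for a, b in product((0, 1), repeat=2))

# A brute-force search over a grid of weights finds no perceptron for XOR.
grid = [x / 2 for x in range(-8, 9)]
print(any(realizes_xor(w1, w2, t) for w1 in grid for w2 in grid for t in grid))
# prints False: XOR is not linearly separable
```

No grid, however fine, would change the outcome: the four XOR points cannot be split by any single line, which is exactly the geometric argument the book made.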

Time-shared computers were more widely accessible on the MIT campus in the 1960s, and Minsky started working with students on machine intelligence issues.

One of the first efforts was to teach computers how to solve problems in basic calculus using symbolic manipulation techniques such as differentiation and integration.

In 1961, his student James Robert Slagle built a program for symbolic manipulation. The program, named SAINT (Symbolic Automatic INTegrator), ran on an IBM 7090 transistorized mainframe computer.

Other students extended the technique to the broader range of symbol manipulations that their program MACSYMA would handle.
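The kind of symbol manipulation these programs performed can be sketched with a tiny recursive differentiator. The nested-tuple expression format below is an assumption made for this sketch, not SAINT's or MACSYMA's actual representation.

```python
# A tiny recursive symbolic differentiator, illustrating rule-based symbol
# manipulation. Expressions are nested tuples, e.g. ("+", "x", ("*", "x", "x"))
# for x + x*x; this representation is an assumption for the sketch.
def diff(e, var="x"):
    if e == var:
        return 1
    if not isinstance(e, tuple):  # any other atom is treated as a constant
        return 0
    op, a, b = e
    if op == "+":                 # sum rule: d(a+b) = da + db
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                 # product rule: d(a*b) = da*b + a*db
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"unknown operator {op}")

print(diff(("+", "x", ("*", "x", "x"))))
# ('+', 1, ('+', ('*', 1, 'x'), ('*', 'x', 1)))
```

Each rule rewrites one node of the expression tree, so the derivative emerges purely from pattern matching and substitution, with no numeric evaluation at all.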

Minsky's students also tackled the challenge of teaching a computer to reason by analogy.

Minsky's team also worked on issues related to computational linguistics, computer vision, and robotics.

Daniel Bobrow, one of his pupils, taught a computer how to answer word problems, an accomplishment that combined language processing and mathematics.

Henry Ernst, a student, designed the first computer-controlled robot, a mechanical hand with photoelectric touch sensors for grasping nuclear materials.

Minsky collaborated with Papert to develop semi-independent programs that could interact with one another to address increasingly complex challenges in computer vision and manipulation.

Minsky and Papert combined their nonhierarchical management techniques into a natural intelligence hypothesis known as the Society of Mind.

Intelligence, according to this view, is an emergent feature that results from tiny interactions between programs.

After studying various constructions, the MIT AI Group trained a computer-controlled robot to build structures out of children's blocks by 1970.

Throughout the 1970s and 1980s, the blocks-manipulating robot and the Society of Mind hypothesis evolved.

Minsky finally released The Society of Mind (1986), a model for the creation of intelligence through individual mental actors and their interactions, rather than any fundamental principle or universal technique.

He discussed consciousness, the self, free will, memory, genius, language, brainstorming, learning, and many other themes in the book, which is made up of 270 short essays.

Agents, according to Minsky, do not require their own mind, thinking, or feeling abilities.

They are not intelligent.

However, when they work together as a civilization, they develop what we call human intellect.

To put it another way, understanding how to achieve any certain goal requires the collaboration of various agents.

Agents are required by Minsky's robot constructor to see, move, locate, grip, and balance blocks.

"I'd like to believe that this effort provided us insights into what goes on within specific sections of children's brains when they learn to 'play' with basic toys," he wrote (Minsky 1986, 29).

Minsky speculated that there may be over a hundred agents collaborating to create what we call mind.

He expanded on his Society of Mind views in the book The Emotion Machine (2006).

There, he argued that emotions are not a separate kind of reasoning.

Rather, they reflect different ways of thinking about various sorts of challenges that people face in the real world.

According to Minsky, the mind changes between different modes of thought, thinks on several levels, finds various ways to represent things, and constructs numerous models of ourselves.

Minsky remarked on a broad variety of popular and significant subjects linked to artificial intelligence and robotics in his final years via his books and interviews.

The Turing Option (1992), a novel Minsky wrote in partnership with science fiction novelist Harry Harrison, is set in the year 2023 and deals with issues of artificial intelligence.

In a 1994 article for Scientific American headlined "Will Robots Inherit the Earth?" he said, "Yes, but they will be our children" (Minsky 1994, 113).

Minsky once suggested that a superintelligent AI might one day spark a Riemann Hypothesis Catastrophe, in which an agent tasked with solving the hypothesis seizes all of the planet's resources to obtain ever more supercomputing power.

He didn't think this was a plausible scenario.

Minsky believed that humans might one day be able to converse with intelligent alien life forms.

They would think like us because they would be bound by the same "space, time, and material constraints" (Minsky 1987, 117).

Minsky was also a critic of the Loebner Prize, the world's oldest Turing Test-like competition, claiming that it is detrimental to artificial intelligence research.

He offered his own Minsky Loebner Prize Revocation Prize to anyone who could stop Hugh Loebner's annual competition.

Both Minsky and Loebner died in 2016, yet the Loebner Prize competition is still going on.

Minsky also invented the confocal scanning microscope (1957) and one of the first head-mounted displays (1963).

He was awarded the Turing Award in 1969, the Japan Prize in 1990, and the Benjamin Franklin Medal in 2001. Minsky's doctoral students included Daniel Bobrow (operating systems), K. Eric Drexler (molecular nanotechnology), Carl Hewitt (mathematics and philosophy of logic), Danny Hillis (parallel computing), Benjamin Kuipers (qualitative simulation), Ivan Sutherland (computer graphics), and Patrick Winston (artificial intelligence), who succeeded Minsky as director of the MIT AI Lab.


~ Jai Krishna Ponnappan



You may also want to read more about Artificial Intelligence here.




See also: 


AI Winter; Chatbots and Loebner Prize; Dartmouth AI Conference; 2001: A Space Odyssey.



References & Further Reading:


Bernstein, Jeremy. 1981. “Marvin Minsky’s Vision of the Future.” New Yorker, December 7, 1981. https://www.newyorker.com/magazine/1981/12/14/a-i.

Minsky, Marvin. 1986. The Society of Mind. London: Picador.

Minsky, Marvin. 1987. “Why Intelligent Aliens Will Be Intelligible.” In Extraterrestrials: Science and Alien Intelligence, edited by Edward Regis, 117–28. Cambridge, UK: Cambridge University Press.

Minsky, Marvin. 1994. “Will Robots Inherit the Earth?” Scientific American 271, no. 4 (October): 108–13.

Minsky, Marvin. 2006. The Emotion Machine. New York: Simon & Schuster.

Minsky, Marvin, and Seymour Papert. 1969. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: Massachusetts Institute of Technology.

Singh, Push. 2003. “Examining the Society of Mind.” Computing and Informatics 22, no. 6: 521–43.


Artificial Intelligence - What Is The Dartmouth AI Conference?

      



    The Dartmouth Conference on Artificial Intelligence, formally the "Dartmouth Summer Research Project on Artificial Intelligence," was held in the summer of 1956 and is widely regarded as the founding event of artificial intelligence as a field.


    • The multidisciplinary conference, held on the Dartmouth College campus in Hanover, New Hampshire, brought together specialists in cybernetics, automata and information theory, operations research, and game theory.
    • Claude Shannon (the "father of information theory"), Marvin Minsky, John McCarthy, Herbert Simon, Allen Newell ("founding fathers of artificial intelligence"), and Nathaniel Rochester (architect of IBM's first commercial scientific mainframe computer) were among the more than twenty attendees.
    • Participants came from the MIT Lincoln Laboratory, Bell Laboratories, and the RAND Systems Research Laboratory.




    The Rockefeller Foundation provided a substantial portion of the funding for the Dartmouth Conference.



    The organizers envisioned the roughly two-month Dartmouth Conference as a way to make rapid progress on computer models of human cognition.


    • "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it," the organizers wrote as the starting point for their deliberations (McCarthy 1955, 2).



    • In his proposal to the Rockefeller Foundation a year before the summer meeting, mathematician and principal organizer John McCarthy coined the term "artificial intelligence." McCarthy later said the new name was meant to draw a line between his research and the discipline of cybernetics.
    • He was a driving force behind symbol-processing approaches to artificial intelligence, which were then in the minority.
    • In the 1950s, analog cybernetic techniques and neural networks were the most common brain modeling methodologies.




    Issues Covered At The Conference.



    The Dartmouth Conference included a broad variety of issues, from complexity theory and neuron nets to creative thinking and unpredictability.


    • The conference is notable as the site of the first public demonstration of Newell, Simon, and Clifford Shaw's Logic Theorist, a program that could independently prove theorems stated in Bertrand Russell and Alfred North Whitehead's Principia Mathematica.
    • The only program at the conference that tried to imitate the logical features of human intellect was Logic Theorist.
    • Attendees predicted that by 1970, digital computers would have become chess grandmasters, discovered new and important mathematical theorems, produced passable language translations and understood spoken language, and composed classical music.
    • Because the Rockefeller Foundation never received a formal report on the conference, the majority of information on the events comes from memories, handwritten notes, and a few papers authored by participants and published elsewhere.



    Mechanization of Thought Processes


    Following the Dartmouth Conference, the British National Physical Laboratory (NPL) hosted an international conference on "Mechanization of Thought Processes" in 1958.


    • Several Dartmouth Conference attendees, including Minsky and McCarthy, spoke at the NPL conference.
    • At the NPL conference, Minsky noted the Dartmouth Conference's role in the creation of his heuristic program for solving plane geometry problems and in the shift from analog feedback, neural networks, and brain modeling to symbolic AI techniques.
    • Neural networks did not resurface as a research topic until the mid-1980s.



    Dartmouth Summer Research Project 


    The Dartmouth Summer Research Project on Artificial Intelligence was a watershed moment in the development of AI. 

    The Dartmouth Summer Research Project on Artificial Intelligence, which began in 1956, brought together a small group of scientists to kick off this area of study. 

    To mark the project's fiftieth anniversary in 2006, more than 100 researchers and scholars gathered at Dartmouth for AI@50, a conference that celebrated the past, appraised current achievements, and helped seed ideas for future artificial intelligence research. 

    John McCarthy, then a mathematics professor at the College, convened the first gathering. 

    According to his plan, the meeting would "proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." 

    The director of AI@50, Professor of Philosophy James Moor, explains that the researchers who came to Hanover fifty years earlier were thinking about ways to make machines more aware and sought to lay out a framework for better understanding human intelligence.



    Context Of The Dartmouth AI Conference:


    Cybernetics, automata theory, and complex information processing were all terms used in the early 1950s to describe the science of "thinking machines." 


    The wide range of names reflects the wide range of intellectual approaches. 


    In 1955, John McCarthy, then an Assistant Professor of Mathematics at Dartmouth College, decided to form a group to clarify and develop ideas about thinking machines. 



    • For the new field, he chose the name 'Artificial Intelligence.' He picked the term partly to avoid a narrow focus on automata theory and on cybernetics, which centered on analog feedback, and partly to avoid having to accept or argue with the assertive Norbert Wiener as guru. 
    • McCarthy approached the Rockefeller Foundation in early 1955 to seek money for a summer seminar at Dartmouth that would attract roughly 150 people. 
    • In June 1955, he and Claude Shannon, then at Bell Labs, met with Robert Morison, the Foundation's Director of Biological and Medical Research, to discuss the idea and possible funding, though Morison was unsure whether money would be made available for such a bold initiative. 



    McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon formally submitted the proposal in September 1955. The term "artificial intelligence" made its first appearance in this proposal. 


    According to the proposal, 


    • We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. 
    • The research will be based on the hypothesis that any part of learning, or any other characteristic of intelligence, can be characterized exactly enough for a computer to imitate it. 
    • An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. 
    • We believe that if a properly chosen group of scientists worked on one or more of these topics together for a summer, considerable progress might be accomplished. 
    • Computers, natural language processing, neural networks, theory of computation, abstraction, and creativity are all discussed further in the proposal (these areas within the field of artificial intelligence are considered still relevant to its work today). 

    He remarked, "We'll focus on the difficulty of figuring out how to program a calculator to construct notions and generalizations. Of course, this is subject to change once the group meets." 

    According to Stottler Henke Associates, Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Herbert A. Simon, and Allen Newell were among the participants at the meeting. 

    The actual participants arrived at various times, mostly for far shorter stays. 


    • Trenchard More stood in for Rochester for three weeks, and MacKay and Holland were unable to attend, but the project was ready to begin. 
    • Around June 1956, the first participants (perhaps only Ray Solomonoff, possibly with Tom Etter) arrived at Dartmouth College in Hanover, New Hampshire, to join John McCarthy, who had already taken up residence there. 
    • Solomonoff and Minsky stayed in professors' apartments, while most of the other guests stayed at the Hanover Inn.




    List Of Dartmouth AI Conference Attendees:


    1. Ray Solomonoff
    2. Marvin Minsky
    3. John McCarthy
    4. Claude Shannon
    5. Trenchard More
    6. Nat Rochester
    7. Oliver Selfridge
    8. Julian Bigelow
    9. W. Ross Ashby
    10. W.S. McCulloch
    11. Abraham Robinson
    12. Tom Etter
    13. John Nash
    14. David Sayre
    15. Arthur Samuel
    16. Kenneth R. Shoulders
    17. Shoulders' friend
    18. Alex Bernstein
    19. Herbert Simon
    20. Allen Newell


    ~ Jai Krishna Ponnappan

    You may also want to read more about Artificial Intelligence here.


    See also: 

    Cybernetics and AI; Macy Conferences; McCarthy, John; Minsky, Marvin; Newell, Allen; Simon, Herbert A.


    References & Further Reading:


    Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

    Gardner, Howard. 1985. The Mind’s New Science: A History of the Cognitive Revolution. New York: Basic Books.

    Kline, Ronald. 2011. “Cybernetics, Automata Studies, and the Dartmouth Conference on Artificial Intelligence.” IEEE Annals of the History of Computing 33, no. 4 (April): 5–16.

    McCarthy, John. 1955. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” Rockefeller Foundation application, unpublished.

    Moor, James. 2006. “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years.” AI Magazine 27, no. 4 (Winter): 87–91.

    Solomonoff, R. J. 1985. “The Time Scale of Artificial Intelligence: Reflections on Social Effects.” Human Systems Management 5: 149–53.

    McCorduck, Pamela. 2004. Machines Who Think. 2nd ed. Natick, MA: A. K. Peters.

    Nilsson, Nils J. 2010. The Quest for Artificial Intelligence. Cambridge, UK: Cambridge University Press.

    McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. 1955. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” August 1955. http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf

    Solomonoff, R. J. 1956. Dartmouth talk notes. http://raysolomonoff.com/dartmouth/boxbdart/dart56ray622716talk710.pdf

    McCarthy, John. 1956. List of Dartmouth project participants, September 1956. http://raysolomonoff.com/dartmouth/boxbdart/dart56ray812825who.pdf

    Stottler Henke Associates website. Retrieved July 27, 2006.
