
AI - Symbol Manipulation.

 



The broad information-processing capabilities of a digital stored-program computer are referred to as symbol manipulation.

From the 1960s through the 1980s, seeing the computer as fundamentally a symbol manipulator became the norm, leading to the scientific study of symbolic artificial intelligence, now known as Good Old-Fashioned AI (GOFAI).

The emergence of stored-program computers sparked renewed interest in the programming flexibility of these machines.

Symbol manipulation became a comprehensive theory of intelligent behavior as well as a research guideline for AI.

The Logic Theorist, created by Herbert Simon, Allen Newell, and Cliff Shaw in 1956, was one of the first computer programs to mimic intelligent symbol manipulation.

The Logic Theorist was able to prove theorems from Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910–1913).

It was presented at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (the Dartmouth Conference).


John McCarthy, a Dartmouth mathematics professor who coined the phrase "artificial intelligence," convened this symposium.


The Dartmouth Conference might be dubbed the genesis of AI since it was there that the Logic Theorist first appeared, and many of the participants went on to become pioneering AI researchers.

Only in the early 1960s, when Simon and Newell had built their General Problem Solver (GPS), were the features of symbol manipulation, understood as a generic process underlying all types of intelligent problem-solving behavior, thoroughly explicated; that account provided a foundation for most of the early work in AI.

In 1961, Simon and Newell took their knowledge of AI and their work on GPS to a wider audience.


"A computer is not a number-manipulating device; it is a symbol-manipulating device," they wrote in Science, "and the symbols it manipulates may represent numbers, letters, phrases, or even nonnumerical, nonverbal patterns" (Newell and Simon 1961, 2012).





Reading "symbols or patterns presented by appropriate input devices, storing symbols in memory, copying symbols from one memory location to another, erasing symbols, comparing symbols for identity, detecting specific differences between their patterns, and behaving in a manner conditional on the results of its processes," Simon and Newell continued (Newell and Simon 1961, 2012).


The growth of symbol manipulation in the 1960s was also influenced by breakthroughs in cognitive psychology and symbolic logic prior to World War II.


Starting in the 1930s, experimental psychologists like Edwin Boring at Harvard University began to move their discipline away from philosophical and behaviorist methods.





Boring challenged his colleagues to break the mind open and create testable explanations for diverse cognitive mental operations (an approach that was adopted by Kenneth Colby in his work on PARRY in the 1960s).

Simon and Newell also emphasized their debt to pre-World War II developments in formal logic and abstract mathematics in their historical addendum to Human Problem Solving—not because all thought is logical or follows the rules of deductive logic, but because formal logic treated symbols as tangible objects.

"The formalization of logic proved that symbols can be copied, compared, rearranged, and concatenated with just as much definiteness of procedure as [wooden] boards can be sawed, planed, measured, and glued [in a carpenter shop]," Simon and Newell noted (Newell and Simon 1973, 877).



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Newell, Allen; PARRY; Simon, Herbert A.


References & Further Reading:


Boring, Edwin G. 1946. “Mind and Mechanism.” American Journal of Psychology 59, no. 2 (April): 173–92.

Feigenbaum, Edward A., and Julian Feldman. 1963. Computers and Thought. New York: McGraw-Hill.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman and Company.

Newell, Allen, and Herbert A. Simon. 1961. “Computer Simulation of Human Thinking.” Science 134, no. 3495 (December 22): 2011–17.

Newell, Allen, and Herbert A. Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.

Schank, Roger, and Kenneth Colby, eds. 1973. Computer Models of Thought and Language. San Francisco: W. H. Freeman and Company.


Artificial Intelligence - Who Was Herbert A. Simon?

 


Herbert A. Simon (1916–2001) was a multidisciplinary scholar who contributed significantly to artificial intelligence.


He is largely regarded as one of the twentieth century's most prominent social scientists.

His contributions at Carnegie Mellon University lasted five decades.

Early artificial intelligence research was driven by the idea of the computer as a symbol manipulator rather than a number cruncher.

Emil Post, who first described this kind of computational model in 1943, is credited with inventing production systems: sets of rules over symbol strings that specify the conditions that must hold before a rule can be applied and the actions to be taken or conclusions to be drawn when it is.

Simon and his Carnegie Mellon colleague Allen Newell popularized these ideas about symbol manipulation and production systems by championing their potential for general-purpose reading, storing, and copying of symbols, and for comparing and contrasting various symbols and patterns.


Simon, Newell, and Cliff Shaw's Logic Theorist software was the first to employ symbol manipulation to construct "intelligent" behavior.


The Logic Theorist could independently prove theorems presented in Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910).

Perhaps most notably, the Logic Theorist uncovered a shorter, more elegant proof of Theorem 2.85 of the Principia Mathematica, a proof the Journal of Symbolic Logic declined to publish because it was coauthored by a machine.

Although it was theoretically possible to prove the Principia Mathematica's theorems by exhaustive, methodical search, doing so was impractical in practice because of the time required.

Newell and Simon were fascinated by the human rules of thumb for solving difficult problems for which exhaustive search was infeasible because of the massive amount of computation required.

They used the term "heuristics" to describe procedures that may solve problems but do not guarantee success.


A heuristic is a "rule of thumb" used to solve a problem that is too difficult or time consuming to address using an exhaustive search, a formula, or a step-by-step method.


Heuristic approaches are often contrasted with algorithmic methods in computer science, with the guarantee of a result being the key point of difference.

According to this contrast, a heuristic program will provide excellent results in most cases, but not always, while an algorithmic program is a clear technique that guarantees a solution.

This is not, however, a technical difference.

In fact, a heuristic procedure that consistently yields the best result may no longer be deemed "heuristic"—alpha-beta pruning is an example of this.
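
To make the contrast concrete, the sketch below compares the two approaches on a small, invented knapsack-style packing problem: an exhaustive algorithm that is guaranteed to find the best packing, and a greedy heuristic that is fast and usually good but carries no such guarantee. The items, weights, and capacity are hypothetical.

```python
from itertools import combinations

# Hypothetical items: (value, weight). Capacity chosen for illustration.
items = [(60, 10), (100, 20), (120, 30)]
capacity = 50

def exhaustive_best(items, capacity):
    """Algorithmic method: check every subset; guaranteed optimal, but cost grows as 2**n."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

def greedy_heuristic(items, capacity):
    """Heuristic method: take items by value-per-weight; fast and usually good, not guaranteed optimal."""
    total_value, remaining = 0, capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= remaining:
            total_value += value
            remaining -= weight
    return total_value

print(exhaustive_best(items, capacity))   # 220 (optimal: the second and third items)
print(greedy_heuristic(items, capacity))  # 160 (good, but not optimal in this case)
```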

Simon's heuristics are still used by programmers trying to solve problems that demand considerable time and/or memory.

The game of chess is one such example, in which an exhaustive search of all potential board configurations for the proper solution is beyond the human mind's or any computer's capabilities.


Indeed, for artificial intelligence research, Herbert Simon and Allen Newell referred to computer chess as the Drosophila or fruit fly.


Heuristics may also be used for problems that have no exact answer, as in medical diagnosis, where heuristics are applied to a collection of symptoms to determine the most probable diagnosis.

Production rules are derived from a class of cognitive science models that apply heuristic principles to productions (situations).

In practice, these rules reduce to "IF-THEN" statements that specify preconditions or antecedents along with the conclusions or consequences those antecedents justify.

"IF there are two X's in a row, THEN put an O to block," is a frequent example offered for the application of production rules to the tic-tac-toe game.

These IF-THEN statements are incorporated into the inference mechanisms of expert systems, where a rule interpreter applies the production rules to the specific situation lodged in a context data structure, or short-term working memory buffer, holding the information supplied about that situation, and then draws conclusions or makes recommendations.
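
The sketch below illustrates this inference cycle in miniature, using the tic-tac-toe blocking rule mentioned above. The rule format and working memory representation are invented for illustration; real expert system shells were far more elaborate.

```python
# Minimal production-rule interpreter sketch (illustrative only, not an actual expert system shell).
# Working memory holds facts about the current situation; each rule is an
# IF (condition over working memory) -> THEN (action/conclusion) pair.

def two_xs_in_a_row(board):
    """IF there are two X's and a blank in some line, return the blank cell."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),        # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),        # columns
             (0, 4, 8), (2, 4, 6)]                   # diagonals
    for line in lines:
        cells = [board[i] for i in line]
        if cells.count("X") == 2 and cells.count(" ") == 1:
            return line[cells.index(" ")]
    return None

def rule_block(board):
    """...THEN put an O in the empty cell to block."""
    cell = two_xs_in_a_row(board)
    if cell is not None:
        board[cell] = "O"
        return f"blocked at cell {cell}"
    return None

rules = [rule_block]              # a real expert system would have hundreds of such rules

working_memory = ["X", "X", " ",
                  " ", "O", " ",
                  " ", " ", " "]

for rule in rules:                # the rule interpreter: try each rule against the situation
    conclusion = rule(working_memory)
    if conclusion:
        print(conclusion)         # -> blocked at cell 2
        break
```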


Production rules were crucial in the development of artificial intelligence as a discipline.


Joshua Lederberg, Edward Feigenbaum, and other Stanford University colleagues later used this fundamental insight to develop DENDRAL, an expert system for identifying molecular structure, in the 1960s.

DENDRAL's production rules were developed through discussions between the system's developers and mass spectrometry specialists.

Edward Shortliffe, Bruce Buchanan, and Edward Feigenbaum used production rules to create MYCIN in the 1970s.

MYCIN contained over 600 IF-THEN statements, each reflecting domain-specific knowledge about the diagnosis and treatment of microbial infections.

PUFF, EXPERT, PROSPECTOR, R1, and CLAVIER were among the several production rule systems that followed.


Simon, Newell, and Shaw demonstrated how heuristics may overcome the drawbacks of classical algorithms, which guarantee answers but may require exhaustive search or heavy computation to find them.


An algorithm is a procedure for solving a problem in a finite, clearly defined sequence of steps.

Sequential operations, conditional operations, and iterative operations are the three kinds of fundamental instructions required to create computable algorithms.

Sequential operations perform tasks in a step-by-step manner.

The algorithm only moves on to the next job when each step is completed.

Conditional operations are made up of instructions that ask questions and then choose the next step dependent on the response.

One kind of conditional operation is the "IF-THEN" expression.

Iterative operations run "loops" of instructions.

These statements tell the task flow to go back and repeat a previous series of statements in order to solve an issue.

Algorithms are often compared to cookbook recipes, in which a specific sequence of instructions dictates the order and execution of the steps needed to produce a product, in this case a dish.
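
The short sketch below, a made-up example for illustration, shows all three kinds of instructions in a single small algorithm that finds the largest even number in a list.

```python
# Illustrative algorithm: find the largest even number in a list.
numbers = [7, 12, 5, 20, 9, 14]      # sequential: statements execute in order
largest_even = None

for n in numbers:                    # iterative: a loop repeats a block of statements
    if n % 2 == 0:                   # conditional: an IF-THEN choice of the next step
        if largest_even is None or n > largest_even:
            largest_even = n

print(largest_even)                  # -> 20
```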


Newell, Shaw, and Simon created list processing for the Logic Theorist software in 1956.


List processing is a programming technique for allocating dynamic storage.

It's mostly utilized in symbol manipulation computer applications like compiler development, visual or linguistic data processing, and artificial intelligence, among others.

Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with creating the first list processing software, with large, sophisticated, and flexible memory structures that did not depend on contiguous blocks of machine memory.

List processing techniques are used in a number of higher-order languages.

IPL and LISP, two artificial intelligence languages, are the most well-known.
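
The essential idea, a list built from cells that each hold a symbol and a pointer to the next cell so that storage can grow and be rearranged at run time, can be sketched in a few lines of Python. This is an illustration of the general technique, not of IPL's or LISP's actual data structures.

```python
# Minimal sketch of list processing with dynamically allocated cells
# (illustrative; not IPL's or LISP's actual implementation).

class Cell:
    def __init__(self, symbol, rest=None):
        self.symbol = symbol     # the symbol stored in this cell
        self.rest = rest         # pointer to the next cell (or None)

def cons(symbol, lst):
    """Allocate a new cell at run time and link it to an existing list."""
    return Cell(symbol, lst)

def to_python_list(lst):
    out = []
    while lst is not None:
        out.append(lst.symbol)
        lst = lst.rest
    return out

# Lists can grow and be restructured without reserving contiguous storage in advance.
lst = cons("A", cons("B", cons("C", None)))
lst = cons("NEW", lst)                 # prepend by allocating one more cell
print(to_python_list(lst))             # -> ['NEW', 'A', 'B', 'C']
```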


Simon and Newell's General Problem Solver (GPS), introduced in the early 1960s, thoroughly explicated the essential properties of symbol manipulation as a general process underlying all types of intelligent problem-solving behavior.


GPS formed the foundation for decades of early AI research.

The General Problem Solver is a program that employs means-ends analysis and planning to arrive at a solution.

GPS was designed to separate the problem-solving process from knowledge specific to the problem at hand, allowing it to be applied to a wide range of problems.
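
A highly simplified sketch of means-ends analysis follows: the program measures the difference between the current state and the goal and selects an operator that reduces that difference, subgoaling on the operator's preconditions. The states and operators are invented for illustration and are far simpler than GPS's actual problem representations.

```python
# Toy means-ends analysis sketch (an illustration of the idea, not the actual GPS program).
# A state is a set of facts; each operator removes part of the difference between
# the current state and the goal, and may have preconditions of its own.

operators = {
    "walk-to-shop": {"requires": set(),        "adds": {"at-shop"}},
    "buy-milk":     {"requires": {"at-shop"},  "adds": {"have-milk"}},
}

def means_ends(state, goal, depth=0):
    if depth > 10:                                   # guard against runaway recursion
        return None
    difference = goal - state
    if not difference:
        return []                                    # no difference left: the goal is met
    for name, op in operators.items():
        if op["adds"] & difference:                  # this operator reduces the difference
            # Subgoal: satisfy the operator's preconditions plus whatever it does not
            # supply, then apply the operator last.
            plan = means_ends(state, op["requires"] | (goal - op["adds"]), depth + 1)
            if plan is not None:
                return plan + [name]
    return None                                      # impasse: no operator reduces the difference

print(means_ends(set(), {"have-milk"}))              # -> ['walk-to-shop', 'buy-milk']
```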

Simon was also an economist, a political scientist, and a cognitive psychologist.


Simon is known for the notions of limited rationality, satisficing, and power law distributions in complex systems, in addition to his important contributions to organizational theory, decision-making, and problem-solving.


Computer and data scientists are interested in all three themes.

Human reasoning is inherently constrained, according to bounded rationality.

Humans lack the time or knowledge required to make ideal judgments; problems are difficult, and the mind has cognitive limitations.

Satisficing describes a decision-making process that produces a solution that "satisfies" and "suffices," rather than the optimal answer.

Customers use satisficing in market conditions when they choose things that are "good enough," meaning sufficient or acceptable.
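
The contrast can be sketched in a few lines, using invented example data: an optimizer examines every option before choosing, while a satisficer stops at the first option that meets an aspiration level.

```python
# Illustrative contrast between optimizing and satisficing (the example data is invented).
options = [("brand A", 6.5), ("brand B", 7.8), ("brand C", 9.1), ("brand D", 8.0)]  # (name, quality)
aspiration_level = 7.5        # "good enough" threshold

best = max(options, key=lambda opt: opt[1])                               # optimizing: inspect everything
satisficed = next(opt for opt in options if opt[1] >= aspiration_level)   # stop at the first acceptable one

print(best)        # -> ('brand C', 9.1)
print(satisficed)  # -> ('brand B', 7.8): sufficient and acceptable, found with less search
```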


In his study of complex organizations, Simon described how power law distributions arise from preferential attachment mechanisms.


Power laws, also known as scaling laws, describe relationships in which a relative change in one variable produces a proportional relative change in another.

A square is a simple illustration; when the length of a side doubles, the square's area quadruples.

Power laws may be found in biological systems, fractal patterns, and wealth distributions, among other things.

Preferential attachment processes explain why the rich grow richer in income and wealth distributions: new wealth is distributed according to individuals' current wealth, so those with more wealth receive proportionately more income, and hence greater overall wealth, than those with less.

When graphed, such distributions often create so-called long tails.

These long-tailed distributions are being employed to explain crowdsourcing, microfinance, and online marketing, among other things.
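
The sketch below simulates a simple preferential attachment process with invented parameters: each unit of new wealth goes to an individual with probability proportional to the wealth that individual already holds, which yields the skewed, long-tailed distributions described above.

```python
import random

# Toy preferential attachment simulation (parameters are invented for illustration).
random.seed(0)
wealth = [1.0] * 100                      # 100 individuals, equal starting wealth

for _ in range(10_000):                   # distribute 10,000 units of new wealth
    # Each unit goes to person i with probability proportional to wealth[i].
    winner = random.choices(range(len(wealth)), weights=wealth, k=1)[0]
    wealth[winner] += 1.0

wealth.sort(reverse=True)
top_10_share = sum(wealth[:10]) / sum(wealth)
print(f"share held by the richest 10%: {top_10_share:.0%}")   # a disproportionately large share
```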



Simon was born in Milwaukee, Wisconsin, to a Jewish electrical engineer with multiple patents who had emigrated from Germany in the early twentieth century.


His mother was a musical prodigy. Simon grew interested in the social sciences after reading books on psychology and economics written by an uncle.

He has said that two works inspired his early thinking on the subjects: Norman Angell's The Great Illusion (1909) and Henry George's Progress and Poverty (1879).



Simon obtained his doctorate in organizational decision-making from the University of Chicago in 1943.

Rudolf Carnap, Harold Lasswell, Charles E. Merriam, Nicolas Rashevsky, and Henry Schultz were among his instructors.

He started his career as a political science professor at the Illinois Institute of Technology, where he taught and conducted research.

In 1949, he transferred to Carnegie Mellon University, where he stayed until 2001.

He progressed through the ranks of the Department of Industrial Management to become its chair.

He wrote twenty-seven books and published many articles.

In 1959, he was elected a member of the American Academy of Arts and Sciences.

In 1975, Simon was awarded the coveted Turing Award, and in 1978, he was awarded the Nobel Prize in Economics.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell, Allen.


References & Further Reading:


Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern America. Baltimore: Johns Hopkins Press.

Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex Information Processing System. Santa Monica, CA: The RAND Corporation.

Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.

Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.



Artificial Intelligence - Speech Recognition And Natural Language Processing

 


Natural language processing (NLP) is a branch of artificial intelligence that involves processing human text and speech in order to generate responses to human queries in a readable or natural manner.

To decode the ambiguities and opacities of genuine human language, NLP has needed advances in statistics, machine learning, linguistics, and semantics.

Chatbots will employ natural language processing to connect with humans across text-based and voice-based interfaces in the future.

Computer assistants will support interactions between people with varying abilities and needs.

By making search more natural, they will enable natural language searches of huge volumes of information, such as that found on the internet.

They may also incorporate useful ideas or nuggets of information into a variety of circumstances, including meetings, classes, and informal discussions.



They may even be able to "read" and react in real time to the emotions or moods of human speakers (so-called sentiment analysis).

By 2025, the market for NLP hardware, software, and services might be worth $20 billion per year.

Speech recognition, often known as voice recognition, has a long history.

Harvey Fletcher, a physicist who pioneered research showing the link between voice energy, frequency spectrum, and the perception of sound by a listener, initiated research into automated speech recognition and transcription at Bell Labs in the 1930s.

Most voice recognition algorithms nowadays are based on his research.

By 1940, Homer Dudley, another Bell Labs scientist, had received patents for the Voder, a speech synthesizer that imitated human vocalizations, and for a parallel band-pass vocoder that could take sound samples and pass them through narrow band filters to identify their energy levels.

By putting the recorded energy levels through various filters, the latter device could convert them back into crude approximations of the original sounds.

By the 1950s, Bell Labs researchers had worked out how to build a system that could do more than mimic speech.

During that decade, digital technology had progressed to the point that the system could detect individual spoken word portions by comparing their frequencies and energy levels to a digital sound reference library.

In essence, the system made an informed guess about the utterance being spoken.

The pace of change was gradual.

By the mid-1950s, Bell Labs systems could distinguish around ten syllables uttered by a single speaker.

Toward the end of the decade, researchers at MIT, IBM, Kyoto University, and University College London were working on recognition systems that employed statistics to detect words containing multiple phonemes.

Phonemes are sound units that are perceived as separate from one another by listeners.



Additionally, progress was being made on systems that could recognize the voices of multiple speakers.

Allen Newell headed the first professional automated speech recognition group, which was founded in 1971.

The research team split its time among acoustics, parametrics, phonemics, lexical concepts, sentence processing, and semantics, among other levels at which knowledge is represented.

Some of the problems examined by the group were investigated with funding from the Defense Advanced Research Projects Agency (DARPA) in the 1970s.

DARPA was interested in the technology because it might be used to process the massive amounts of spoken data generated by multiple government departments and transform that data into insights and strategic solutions to challenges.

Progress was made with techniques such as dynamic time warping and continuous speech recognition.

Computer technology progressed significantly, and numerous mainframe and minicomputer manufacturers started to perform research in natural language processing and voice recognition.

The Speech Understanding Research (SUR) project at Carnegie Mellon University was one of the DARPA-funded projects.



The SUR project, directed by Raj Reddy, produced numerous groundbreaking speech recognition systems, including Hearsay, Dragon, Harpy, and Sphinx.

Harpy is notable in that it employs the beam search approach, which has been a standard in such systems for decades.

Beam search is a heuristic search technique that examines a network by extending the most promising node among a small number of possibilities.

Beam search is a variant of best-first search that uses less memory.

It's a greedy algorithm in the sense that it uses the problem-solving heuristic of making the locally best decision at each step in the hopes of obtaining a global best choice.
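
A minimal beam search over a small, made-up graph is sketched below; at each step only the most promising partial paths, up to the beam width and ranked by their accumulated scores, are kept.

```python
import heapq

# Minimal beam search sketch over an invented graph (illustrative only).
# graph[node] -> list of (neighbor, step_score); lower scores are more promising.
graph = {
    "start": [("a", 3), ("b", 1), ("c", 4)],
    "a": [("goal", 2)],
    "b": [("d", 5), ("goal", 1)],
    "c": [("goal", 6)],
    "d": [],
    "goal": [],
}

def beam_search(start, goal, beam_width=2):
    beam = [(0, [start])]                       # (accumulated score, path so far)
    while beam:
        candidates = []
        for score, path in beam:
            if path[-1] == goal:
                return path
            for neighbor, step_score in graph[path[-1]]:
                candidates.append((score + step_score, path + [neighbor]))
        # Keep only the beam_width most promising partial paths.
        beam = heapq.nsmallest(beam_width, candidates)
    return None

print(beam_search("start", "goal"))   # -> ['start', 'b', 'goal']
```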

In general, graph search algorithms have served as the foundation for voice recognition research for decades, just as they have in the domains of operations research, game theory, and artificial intelligence.

By the 1980s and 1990s, data processing and algorithms had advanced to the point where researchers could use statistical models to identify whole strings of words, even phrases.

The Pentagon remained the field's leader, but IBM's work had progressed to the point where the corporation was on the verge of manufacturing a computerized voice transcription device for its corporate clients.

Bell Labs had developed sophisticated digital systems for automatic voice dialing of telephone numbers.

Other applications that seemed to be within reach were closed captioned transcription of television broadcasts and personal automatic reservation systems.

The comprehension of spoken language has dramatically improved.

The Air Travel Information System (ATIS) was the first commercial system to emerge from DARPA funding.

New obstacles arose, such as "disfluencies," or natural pauses, corrections, casual speech, interruptions, and verbal fillers like "oh" and "um" that organically formed from conversational speaking.

Beginning in 1995, every copy of the Windows 95 operating system shipped with the Speech Application Programming Interface (SAPI).

SAPI (which comprised subroutine definitions, protocols, and tools) made it easier for programmers and developers to include speech recognition and voice synthesis into Windows programs.

SAPI also allowed other software developers to build and freely distribute their own speech recognition engines.

It gave NLP technology a big boost in terms of increasing interest and generating wider markets.

The Dragon line of voice recognition and dictation software programs is one of the most well-known mass-market NLP solutions.

The popular Dragon NaturallySpeaking program aims to provide automatic real-time, large-vocabulary, continuous-speech dictation with the use of a headset or microphone.

The software took fifteen years to develop and was first released in 1997.

It is still widely regarded as the gold standard for personal computing today.

One hour of digitally recorded speech takes the program roughly 4–8 hours to transcribe, although dictation on screen is virtually instantaneous.

Similar software is packaged with voice dictation functions in smart phones, which converts regular speech into text for usage in text messages and emails.

The large amount of data accessible on the cloud, as well as the development of gigantic archives of voice recordings gathered from smart phones and electronic peripherals, have benefited industry tremendously in the twenty-first century.

Companies have been able to enhance acoustic and linguistic models for voice processing as a result of these massive training data sets.

To match observed and "classified" sounds, traditional speech recognition systems employed statistical learning methods.

Since the 1990s, Markov and hidden Markov models, combined with reinforcement learning and pattern recognition algorithms, have increasingly been used in speech processing.
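
As a rough illustration of the hidden Markov approach, the sketch below runs the standard forward algorithm on a toy model with two invented "phoneme" states and two acoustic observation symbols; real recognizers use far larger models trained on speech data.

```python
# Toy hidden Markov model forward algorithm (all probabilities are invented for illustration).
states = ["ph1", "ph2"]                        # hidden "phoneme" states
start_p = {"ph1": 0.6, "ph2": 0.4}
trans_p = {"ph1": {"ph1": 0.7, "ph2": 0.3},
           "ph2": {"ph1": 0.4, "ph2": 0.6}}
emit_p = {"ph1": {"low": 0.8, "high": 0.2},    # acoustic observation symbols
          "ph2": {"low": 0.3, "high": 0.7}}

def forward(observations):
    """Probability of the observation sequence under the model."""
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit_p[s][obs] * sum(alpha[prev] * trans_p[prev][s] for prev in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["low", "high", "high"]))        # likelihood of this acoustic sequence
```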

Because of the large amounts of data available for matching and the strength of deep learning algorithms, error rates have dropped dramatically in recent years.

Despite the fact that linguists argue that natural languages need flexibility and context to be effectively comprehended, these approximation approaches and probabilistic functions are exceptionally strong in deciphering and responding to human voice inputs.

The n-gram, a continuous sequence of n elements from a given sample of text or voice, is now the foundation of computational linguistics.

Depending on the application, the objects might be phonemes, syllables, letters, words, or base pairs.

N-grams are usually gathered from text or voice.

In terms of proficiency, no other method presently outperforms this one.
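
Extracting n-grams from a token sequence is straightforward, as the short sketch below shows; the sample sentence is invented.

```python
# Minimal n-gram extraction sketch.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cat sat on the mat".split()
print(ngrams(tokens, 2))   # bigrams:  [('the', 'cat'), ('cat', 'sat'), ...]
print(ngrams(tokens, 3))   # trigrams: [('the', 'cat', 'sat'), ('cat', 'sat', 'on'), ...]
```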

For their virtual assistants, Google and Bing have indexed the whole internet and incorporate user query data in their language models for voice search applications.

Today's systems are starting to identify new terms from their datasets on the fly, something akin to what in humans is called "lifelong learning," although this remains a novel technique.

Companies working in natural language processing will desire solutions that are portable (not reliant on distant servers), deliver near-instantaneous response, and provide a seamless user experience in the future.

Richard Socher, a deep learning specialist and the founder and CEO of the artificial intelligence start-up MetaMind, is working on a strong example of next-generation NLP.

Based on massive chunks of natural language information, the company's technology employs a neural networking architecture and reinforcement learning algorithms to provide responses to specific and highly broad inquiries.

Salesforce, the digital marketing powerhouse, subsequently acquired the startup.

Text-to-speech analysis and advanced conversational interfaces in automobiles will be in high demand in the future, as will speech recognition and translation across cultures and languages, automatic speech understanding in noisy environments like construction sites, and specialized voice systems to control office and home automation processes and internet-connected devices.

All of these applications for enhancing human speech will require the collection of massive data sets of natural language to work with.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Natural Language Generation; Newell, Allen; Workplace Automation.


References & Further Reading:


Chowdhury, Gobinda G. 2003. “Natural Language Processing.” Annual Review of Information Science and Technology 37: 51–89.

Jurafsky, Daniel, and James H. Martin. 2014. Speech and Language Processing. Second edition. Upper Saddle River, NJ: Pearson Prentice Hall.

Mahavan, Radhika. n.d. “Natural Language Processing: Current Applications and Future Possibilities.” https://www.techemergence.com/nlp-current-applications-and-future-possibilities/.

Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.

Metz, Cade. 2015. “AI’s Next Frontier: Machines That Understand Language.” Wired, June 24, 2015. https://www.wired.com/2015/06/ais-next-frontier-machines-understand-language/.

Nusca, Andrew. 2011. “Say Command: How Speech Recognition Will Change the World.” ZDNet, November 2, 2011. https://www.zdnet.com/article/say-command-how-speech-recognition-will-change-the-world/.





Artificial Intelligence - Who Was Allen Newell?

 



Allen Newell (1927–1992) was an American computer scientist and cognitive psychologist.


In the late 1950s and early 1960s, Newell collaborated with Herbert Simon to develop the earliest models of human cognition.

The Logic Theory Machine depicted how logical rules might be used in a proof, the General Problem Solver modeled how basic problem solving could be done, and an early chess program (the Newell-Shaw-Simon chess program) modeled how chess could be played.

Newell and Simon demonstrated for the first time in these models how computers can modify symbols and how these manipulations may be used to describe, produce, and explain intelligent behavior.

Newell began his career at Stanford University as a physics student.

After a year of graduate study in mathematics at Princeton, he joined the RAND Corporation to work on models of complex systems.

While at RAND he met and was inspired by Oliver Selfridge, who led him to the modeling of cognition.

He also met Herbert Simon, who would go on to receive the Nobel Prize in Economics for his work on economic decision-making processes, particularly satisficing.

Simon persuaded Newell to attend Carnegie Institute of Technology (now Carnegie Mellon University).

For most of his academic career, Newell worked with Simon.

Newell's main goal was to simulate the human mind's operations using computer models in order to better comprehend it.

Newell earned his PhD at Carnegie Mellon, where he worked with Simon.

He began his academic career as a tenured and chaired professor.

He was a founding member of the Department of Computer Science (now the School of Computer Science), where he held his primary appointment.

With Simon, Newell examined the mind, especially problem solving, as part of his major line of study.

Their book Human Problem Solving, published in 1972, outlined their idea of intelligence and included examples from arithmetic problems and chess.

To assess what resources are being used in cognition, they employed extensive verbal talk-aloud protocols, which are more accurate than think-aloud or retrospective protocols.

Ericsson and Simon eventually documented the science of verbal protocol data in more detail.

In his final lecture ("Desires and Diversions"), he remarked that if you are going to have diversions, you should make the most of them.

He did so himself, producing remarkable results in the areas of his diversions and folding several of them into his final project.

One of the early hypertext systems, ZOG, was one of these diversions.

Newell also collaborated with Digital Equipment Corporation (DEC) founder Gordon Bell on a textbook on computer architectures and worked on voice recognition systems with CMU colleague Raj Reddy.

Perhaps the longest-running and most fruitful diversion was his work with Stuart Card and Thomas Moran at Xerox PARC developing theories of how people interact with computers.

These theories are documented in The Psychology of Human-Computer Interaction (1983).

Their study resulted in the Keystroke Level Model and GOMS, two models for representing human behavior, as well as the Model Human Processor, a simplified description of the mechanics of cognition in this domain.

Some of the first work in human-computer interaction (HCI) was done here.

Their strategy advocated for first knowing the user and the task, then employing technology to assist the user in completing the job.
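
To give a flavor of the Keystroke-Level Model's style of analysis, the sketch below sums per-operator time estimates for a small, hypothetical task. The operator durations used here are illustrative placeholders, not the values published by Card, Moran, and Newell.

```python
# Keystroke-Level Model style estimate (operator times below are illustrative placeholders,
# not the published values; see Card, Moran, and Newell 1983 for the real model).
operator_times = {"K": 0.2,    # press a key
                  "P": 1.1,    # point with a mouse
                  "H": 0.4,    # move hands between keyboard and mouse
                  "M": 1.35}   # mentally prepare

def klm_estimate(sequence):
    """Predicted execution time for a sequence of KLM operators."""
    return sum(operator_times[op] for op in sequence)

# Hypothetical task: think, point at a field, move hands to keyboard, type three letters, press Enter.
task = ["M", "P", "H", "K", "K", "K", "K"]
print(f"{klm_estimate(task):.2f} seconds")   # -> 3.65 seconds
```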

In his farewell talk, Newell also said that scientists should have a last endeavor that would outlive them.

Newell's last goal was to advocate for unified theories of cognition (UTCs) and to develop Soar, a proposed UTC and example.

His vision imagined what it would be like to have a theory that combined all of psychology's constraints, facts, and theories into a single unified result that could be implemented by a computer program.

Soar continues as a successful ongoing project, though it is not yet complete.

While Soar has yet to fully unify psychology, it has made significant progress in describing problem solving, learning, and their interactions, as well as in building autonomous, reactive agents for large simulations.

He looked into how learning could be modeled as part of his final project (with Paul Rosenbloom).

Later, this project was merged with Soar.

Learning, according to Newell and Rosenbloom, follows a power law of practice, in which the time to complete a task is proportional to the practice (trial) number raised to a small negative power (e.g., Time ∝ trial^(−α)).

This holds true across a broad variety of activities.

Their explanation was that tasks are learned hierarchically: what is learned at the lowest level has the greatest impact on reaction time, but as learning moves up the hierarchy it is used less often and saves less time, so learning slows but does not stop.
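
In symbols, the power law of practice says that the time on trial N is roughly a * N^(-b) for fitted constants a and b. The sketch below uses invented constants to show the characteristic speed-up that slows but never quite stops.

```python
# Power law of practice: T(N) = a * N**(-b). Constants a and b are invented for illustration.
a, b = 10.0, 0.4      # a: time on the first trial (seconds); b: small positive learning exponent

for trial in [1, 10, 100, 1000]:
    print(trial, round(a * trial ** (-b), 2))
# -> 1 10.0
# -> 10 3.98
# -> 100 1.58
# -> 1000 0.63
```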

Newell delivered the William James Lectures at Harvard in 1987.

He detailed what it would take to develop a unified theory in psychology in these lectures.

These lectures were taped and are accessible in CMU's library.

He gave them again the following autumn and turned them into a book (1990).

Soar's representation of cognition is based on searching through problem spaces.

It takes the form of a production system (using IF-THEN rules).

It attempts to apply an operator.

If it has no operator, or cannot apply the one it has, Soar reaches an impasse and recurses into a subgoal to resolve it.

As a result, knowledge is represented as operators, problem spaces, and ways of overcoming impasses.

The architecture, then, specifies how this knowledge and these choices are organized.
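
A very loose sketch of that decision cycle appears below: productions propose operators for the current state, an operator is applied, and when no operator can be proposed the system hits an impasse that would, in Soar, spawn a subgoal in a new problem space. The states, operators, and rules are invented, and this is an illustration of the idea only, not Soar's actual implementation.

```python
# Loose sketch of a Soar-style decision cycle (illustrative only; not the real Soar architecture).
# Productions propose operators for the current state; applying an operator changes the state;
# when no operator can be proposed, the system hits an impasse.

def propose_operators(state):
    """IF-THEN productions: propose operators whose conditions match the current state."""
    proposals = []
    if "door-closed" in state:
        proposals.append("open-door")
    if "door-open" in state and "outside" not in state:
        proposals.append("walk-out")
    return proposals

def apply_operator(state, op):
    effects = {"open-door": ({"door-closed"}, {"door-open"}),
               "walk-out":  (set(), {"outside"})}
    removed, added = effects[op]
    return (state - removed) | added

def decide(state, goal):
    while goal not in state:
        proposals = propose_operators(state)
        if not proposals:
            # Impasse: no operator can be selected. In Soar this would create a subgoal
            # in a new problem space to find knowledge that resolves the impasse.
            print("impasse at", state)
            return state
        state = apply_operator(state, proposals[0])   # naive selection: take the first proposal
        print("applied", proposals[0], "->", state)
    return state

decide({"door-closed"}, "outside")
```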

Soar models have been employed in a range of cognitive science and AI applications, including military simulations, and systems with up to one million rules have been constructed.

Kathleen Carley, a social scientist at CMU, and Newell discussed how to use these cognitive models to simulate social agents.

Work on Soar continues, notably at the University of Michigan under the direction of John Laird, with a concentration on intelligent agents presently.

In 1975, the ACM A. M. Turing Award was given to Newell and Simon for their contributions to artificial intelligence, psychology of human cognition, and list processing.

Their work is credited with making significant contributions to computer science as an empirical investigation.

Newell was also elected to the National Academy of Sciences and the National Academy of Engineering.

He was awarded the National Medal of Science in 1992.

Newell was instrumental in establishing a productive and supportive research group, department, and institution.

His son said at his memorial service that he was not only a great scientist, but also a great father.

His only weaknesses were that he was very intelligent, that he worked very hard, and that he assumed the same of you.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; General Problem Solver; Simon, Herbert A.


References & Further Reading:


Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Newell, Allen. 1993. Desires and Diversions. Carnegie Mellon University, School of Computer Science. Stanford, CA: University Video Communications.

Simon, Herbert A. 1998. “Allen Newell: 1927–1992.” IEEE Annals of the History of Computing 20, no. 2: 63–76.




Artificial Intelligence - Who Is J. Doyne Farmer?

 


J. Doyne Farmer (1952–) is an American scientist who is a leading expert in artificial life, artificial evolution, and artificial intelligence.


He is best known for leading a group of young people who used a wearable computer to gain an edge at the roulette wheel in various Nevada casinos.

While in graduate school, Farmer founded Eudaemonic Enterprises with his childhood friend Norman Packard and others in order to beat the game of roulette in Las Vegas.


Farmer felt that by understanding the mechanics of a roulette ball in motion, they could design a computer to anticipate which numbered pocket it would end up in.


The group identified and exploited the fact that roughly ten seconds elapse between the croupier's release of the ball onto the spinning wheel and the close of betting.

The findings of their research were eventually encoded into a small computer hidden in the sole of a shoe.

The shoe's user entered the ball's location and velocity information with his big toe, and a second person placed the bets when the signal was given.

Because of frequent hardware problems, the group never won large amounts of money, and it gave up after roughly a dozen trips to different casinos.


According to the group, they had a 20 percent edge over the house.


Several breakthroughs in chaos theory and complexity systems research are ascribed to the Eudaemonic Enterprises group.

Farmer's metadynamics AI algorithms have been used to model the origin of life and the operation of the human immune system.

While at the Santa Fe Institute, Farmer came to be regarded as a pioneer of complexity economics, or "econophysics." He demonstrated how, much as in a natural food chain, firms and groups of firms form a market ecology of species.


The growth and earnings of individual enterprises, as well as the groups to which they belong, are influenced by this web and the trading methods used by the firms.



Trading businesses, like natural predators, take advantage of these patterns of influence and diversity.


He observed that trading businesses might use both stabilizing and destabilizing techniques to help or hurt the whole market ecology.


  • Farmer cofounded the Prediction Company in order to create advanced statistical financial trading methods and automated quantitative trading in the hopes of outperforming the stock market and making quick money. UBS ultimately bought the firm.
  • He is now working on a book questioning the rational expectations approach of mainstream economics; he proposes that complexity economics, built from the common "rules of thumb" or heuristics uncovered in psychological experiments and sociological studies of humans, is the way ahead. In chess, for example, "a queen is better than a rook" is such a heuristic.



Farmer is presently Oxford University's Baillie Gifford Professor of Mathematics.


  • He earned his bachelor's degree in physics from Stanford University and his master's degree in physics from the University of California, Santa Cruz, where he studied under George Blumenthal.
  • He is a cofounder of the journal Quantitative Finance and an Oppenheimer Fellow.
  • Farmer grew up in Silver City, New Mexico, where he was motivated by his Scoutmaster, scientist Tom Ingerson, who had the lads looking for abandoned Spanish gold mines and plotting a journey to Mars.
  • He credits such early events with instilling in him a lifelong passion for scientific research.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Newell, Allen.


Further Reading:


Bass, Thomas A. 1985. The Eudaemonic Pie. Boston: Houghton Mifflin Harcourt.

Bass, Thomas A. 1998. The Predictors: How a Band of Maverick Physicists Used Chaos Theory to Trade Their Way to a Fortune on Wall Street. New York: Henry Holt.

Brockman, John, ed. 2005. Curious Minds: How a Child Becomes a Scientist. New York: Vintage Books.

Freedman, David H. 1994. Brainmakers: How Scientists Are Moving Beyond Computers to Create a Rival to the Human Brain. New York: Simon & Schuster.

Waldrop, M. Mitchell. 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster.




