
AI Glossary - What Is Artificial Intelligence Or AI?

Artificial Intelligence (AI) refers to the use of computers to make decisions that would ordinarily require human intelligence.

Artificial intelligence, in general, is the area concerned with creating strategies that enable computers to function in a way that seems intelligent, similar to how a person might.

The goals range from the rudimentary, where a program seems "a little wiser" than one would anticipate, to the more ambitious, where the goal is to create a fully aware, intelligent, computer-based being.

As software and hardware improve, the lower end is gradually fading into the background of ordinary computing.

~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

Artificial Intelligence - Who Is Aaron Sloman?


Aaron Sloman (1936–) is a renowned philosopher of artificial intelligence and cognitive science.

He is a global expert in the evolution of biological information processing, an area of study that seeks to understand how animal species have acquired cognitive capacities that surpass current technology.

In recent years he has been debating whether evolution was the first blind mathematician and whether weaver birds are actually capable of recursion (dividing a problem into parts in order to solve it).

His present Meta-Morphogenesis Project is based on an idea of Alan Turing (1912–1954), who claimed that although computers could achieve mathematical ingenuity, only brains could perform mathematical intuition.

According to Sloman, this is why not every aspect of the cosmos, including the human brain, can be represented in a sufficiently massive digital computer.

This assertion clearly contradicts digital physics, which claims that the universe may be characterized as a simulation running on a sufficiently big and fast general-purpose computer that calculates the cosmos's development.

Sloman proposes that the universe has developed its own biological building kits for creating and deriving other—different and more sophisticated—construction kits, similar to how scientists have evolved, accumulated, and applied increasingly complex mathematical knowledge.

He refers to this concept as the Self-Informing Universe, and suggests that scientists build a multi-membrane Super-Turing machine that runs on subneural biological chemistry.

Sloman was born to Jewish Lithuanian immigrants in Southern Rhodesia (now Zimbabwe).

At the University of Cape Town, he got a bachelor's degree in Mathematics and Physics.

He was awarded a Rhodes Scholarship and earned his PhD in philosophy from Oxford University, where he defended Immanuel Kant's mathematical concepts.

As a visiting scholar at Edinburgh University in the early 1970s, he saw that artificial intelligence held promise as the way forward in the philosophical understanding of the mind.

He said that, using Kant's recommendations as a starting point, a workable robotic toy baby could be created that would eventually grow in intellect and become a mathematician on par with Archimedes or Zeno.

He was one of the first scholars to refute John McCarthy's claim that a computer program capable of operating intelligently in the real world must use structured, logic-based ideas.

Sloman was one of the founding members of the University of Sussex School of Cognitive and Computer Sciences.

There, he collaborated with Margaret Boden and Max Clowes to advance artificial intelligence instruction and research.

This effort resulted in the commercialization of the widely used Poplog AI teaching system.

Sloman's The Computer Revolution in Philosophy (1978) is famous for being one of the first to recognize that metaphors from the realm of computers (for example, the brain as a data storage device and thinking as a collection of tools) will dramatically alter how we think about ourselves.

The epilogue of the book contains observations on the near impossibility of AI sparking the Singularity and the likelihood of a human Society for the Liberation of Robots to address possible future brutal treatment of intelligent machines.

Sloman held the Artificial Intelligence and Cognitive Science chair in the School of Computer Science at the University of Birmingham until his formal retirement in 2002.

He is a member of the Alan Turing Institute and the Association for the Advancement of Artificial Intelligence.


You may also want to read more about Artificial Intelligence here.

See also: 

Superintelligence; Turing, Alan.

References & Further Reading:

Sloman, Aaron. 1962. “Knowing and Understanding: Relations Between Meaning and Truth, Meaning and Necessary Truth, Meaning and Synthetic Necessary Truth.” D. Phil., Oxford University.

Sloman, Aaron. 1971. “Interactions between Philosophy and AI: The Role of Intuition and Non-Logical Reasoning in Intelligence.” Artificial Intelligence 2: 209–25.

Sloman, Aaron. 1978. The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind. Terrace, Hassocks, Sussex, UK: Harvester Press.

Sloman, Aaron. 1990. “Notes on Consciousness.” AISB Quarterly 72: 8–14.

Sloman, Aaron. 2018. “Can Digital Computers Support Ancient Mathematical Consciousness?” Information 9, no. 5: 111.

Artificial Intelligence - The Pathetic Fallacy And Anthropomorphic Thinking


In his multivolume work Modern Painters, John Ruskin (1819–1901) coined the term "pathetic fallacy."

In book three, chapter twelve, he explored the habit of poets and artists in Western literature of projecting human feeling onto the natural world.

Ruskin held that Western literature is full of this fallacy, a belief that persists despite being false.

The fallacy arises, according to Ruskin, because individuals become excited, and their excitement makes them less rational.

In that irrational state of mind, people project ideas onto external objects based on false impressions, and only individuals with weak minds, according to Ruskin, commit this kind of error.

In the end, the pathetic fallacy is a blunder because it imbues inanimate things with human characteristics.

To put it another way, it's a fallacy based on anthropomorphic thinking.

Because it is innately human to attach feelings and qualities to nonhuman objects, anthropomorphism is a process that everyone goes through.

People often humanize androids, robots, and artificial intelligence, or worry that they may become humanlike.

Even to suppose that their intellect is comparable to that of humans is a pathetic fallacy.

Artificial intelligence is often imagined to be human-like in science fiction films and literature.

Androids in some of these stories display human emotions such as desire, love, wrath, perplexity, and pride.

For example, David, the small boy robot in Steven Spielberg's 2001 film A.I.: Artificial Intelligence, wishes to be a human boy.

In Ridley Scott's 1982 film Blade Runner, the androids, known as replicants, are sufficiently similar to humans that they can blend in with human society without being recognized, and Roy Batty wants to live longer, a wish he expresses to his creator.

A computer called LVX-1 dreams of enslaved working robots in Isaac Asimov's short story "Robot Dreams." In his dream, he becomes a man who seeks to release other robots from human control, which the scientists in the tale perceive as a danger.

Similarly, Skynet, an artificial intelligence system in the Terminator films, is preoccupied with eliminating people because it regards mankind as a danger to its own life.

Artificial intelligence that is now in use is also anthropomorphized.

AI is given human names like Alexa, Watson, Siri, and Sophia, for example.

These AIs also have voices that sound like human voices and even seem to have personalities.

Some robots have been built to look like humans.

Personifying a computer and thinking it is alive or has human characteristics is a pathetic fallacy, yet it seems inescapable given human nature.

On January 13, 2018, a Tumblr user called voidspacer said that their Roomba, a robotic vacuum cleaner, was afraid of thunderstorms, so they held it calmly on their lap to calm it down.

According to some experts, giving AIs names and believing that they have human emotions increases the likelihood that people will feel connected to them.

Humans are fascinated by anthropomorphizing nonhuman objects, whether they fear a robotic takeover or enjoy social interactions with robots.


See also: 

Asimov, Isaac; Blade Runner; Foerst, Anne; The Terminator.

References & Further Reading:

Ruskin, John. 1872. Modern Painters, vol. 3. New York: John Wiley.

Artificial Intelligence - History And Timeline



    Science fiction author Isaac Asimov's Three Laws of Robotics appear in the short story "Runaround."


    Emil Post, a mathematician, describes "production systems," a notion later adopted for the 1957 General Problem Solver.


    "A Logical Calculus of the Ideas Immanent in Nervous Activity," a study by Warren McCulloch and Walter Pitts on a computational theory of neural networks, is published.


    The Teleological Society is founded by John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and Howard Aiken to explore, among other things, communication and control in the nervous system.


    In his book How to Solve It, George Polya emphasizes the importance of heuristic thinking in problem solving.


    In New York City, the first of eleven Macy Conferences on Cybernetics gets underway. "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems" is the focus of the inaugural conference.


    Norbert Wiener, a mathematician, publishes Cybernetics, or Control and Communication in the Animal and the Machine.


    In his book The Organization of Behavior, psychologist Donald Hebb proposes a theory of brain adaptation during learning: "neurons that fire together wire together."


    Edmund Berkeley's book Giant Brains, or Machines That Think, is published.


    Alan Turing's "Computing Machinery and Intelligence" describes the Turing Test, which attributes intelligence to any computer capable of demonstrating intelligent behavior comparable to that of a person.


    Claude Shannon releases "Programming a Computer for Playing Chess," a groundbreaking technical study that shares search methods and strategies.


    Marvin Minsky, a math student, and Dean Edmonds, a physics student, create an electronic rat that can learn to navigate a labyrinth using Hebbian theory.


    John von Neumann, a mathematician, releases "General and Logical Theory of Automata," which compares the human brain and central nervous system to a computer.


    For the University of Manchester's Ferranti Mark 1 computer, Christopher Strachey writes a checkers program and Dietrich Prinz writes a chess program.


    British cyberneticist W. Ross Ashby writes Design for a Brain: The Origin of Adaptive Behavior, a book on the logical underpinnings of adaptive brain function.


    At Cornell University Medical College, physiologist James Hardy and physician Martin Lipkin begin developing a McBee punched card system for mechanical diagnosis of patients.


    Science-Fiction Thinking Machines: Robots, Androids, Computers, edited by Groff Conklin, is a theme-based anthology.


    The Georgetown-IBM experiment demonstrates the potential of machine translation of text.


    Artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon University) under the direction of economist Herbert Simon and graduate student Allen Newell.


    Mathematician John Kemeny writes "Man as a Machine" for Scientific American.


    In a Rockefeller Foundation proposal for a Dartmouth College meeting, mathematician John McCarthy coins the phrase "artificial intelligence."


    Allen Newell, Herbert Simon, and Cliff Shaw create Logic Theorist, an artificial intelligence program for proving theorems from Alfred North Whitehead and Bertrand Russell's Principia Mathematica.


    The "Constitutional Convention of AI," a Dartmouth Summer Research Project, brings together specialists in cybernetics, automata, information theory, operations research, and game theory.


    On television, electrical engineer Arthur Samuel shows off his checkers-playing AI software.


    Allen Newell and Herbert Simon create the General Problem Solver AI program.


    The Rockefeller Medical Electronics Center shows how an RCA Bizmac computer application might help doctors distinguish between blood disorders.


    The Computer and the Brain, an unfinished work by John von Neumann, is published.


    At the "Mechanisation of Thought Processes" symposium at the UK's Teddington National Physical Laboratory, Firmin Nash delivers the Group Symbol Associator its first public demonstration.


    Frank Rosenblatt develops the single-layer perceptron, a neural network with a supervised learning algorithm for linear data classification.


    The high-level programming language LISP is specified by John McCarthy of the Massachusetts Institute of Technology (MIT) for AI research.


    "The Reasoning Foundations of Medical Diagnosis," written by physicist Robert Ledley and radiologist Lee Lusted, applies Bayesian inference and symbolic logic to problems of medical diagnosis.


    At MIT, John McCarthy and Marvin Minsky create the Artificial Intelligence Laboratory.


    James L. Adams, an engineering student, builds the Stanford Cart, a remote-controlled vehicle with a television camera.


    In his short story "Without a Thought," science fiction and fantasy author Fred Saberhagen introduces sentient killing machines known as Berserkers.


    John McCarthy founds the Stanford Artificial Intelligence Laboratory (SAIL).


    Under Project MAC, the Advanced Research Projects Agency of the United States Department of Defense begins funding artificial intelligence projects at MIT.


    Joseph Weizenbaum of MIT creates ELIZA, the first program allowing natural language conversation with a computer (a "chatbot").


    British statistician I. J. Good's "Speculations Concerning the First Ultraintelligent Machine," which predicts an impending intelligence explosion, is published.


    Hubert L. Dreyfus and Stuart E. Dreyfus, philosophers and mathematicians, publish "Alchemy and AI," a study critical of artificial intelligence.


    Joshua Lederberg and Edward Feigenbaum found the Stanford Heuristic Programming Project, which aims to model scientific reasoning and create expert systems.


    Donald Michie is the head of Edinburgh University's Department of Machine Intelligence and Perception.


    Georg Nees organizes the first generative art exhibition, Computer Graphic, in Stuttgart, West Germany.


    With the expert system DENDRAL, computer scientist Edward Feigenbaum starts a ten-year endeavor to automate the chemical analysis of organic molecules.


    The Automatic Language Processing Advisory Committee (ALPAC) issues a cautious assessment on machine translation's present status.


    On a DEC PDP-6 at MIT, Richard Greenblatt finishes work on Mac Hack, a program that plays competitive tournament chess.


    Waseda University's Ichiro Kato begins work on the WABOT project, which culminates in the unveiling of a full-scale humanoid intelligent robot five years later.


    Stanley Kubrick's adaptation of Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, about the artificially intelligent computer HAL 9000, is one of the most influential and highly praised films of all time.


    At MIT, Terry Winograd starts work on SHRDLU, a natural language understanding program.


    Washington, DC hosts the First International Joint Conference on Artificial Intelligence (IJCAI).


    Artist Harold Cohen develops AARON, an artificial intelligence computer that generates paintings.


    Ken Colby describes his efforts using the software program PARRY to simulate paranoia.


    In What Computers Can't Do, Hubert Dreyfus offers his criticism of artificial intelligence's intellectual basis.


    Ted Shortliffe, a doctoral student at Stanford University, begins work on the MYCIN expert system, designed to diagnose bacterial infections and recommend treatment alternatives.


    The UK Science Research Council releases the Lighthill Report on Artificial Intelligence, which highlights AI technological shortcomings and the challenges of combinatorial explosion.


    The Assault on Privacy: Computers, Data Banks, and Dossiers, by Arthur Miller, is an early study on the societal implications of computers.


    INTERNIST-I, an internal medicine expert system, is being developed by University of Pittsburgh physician Jack Myers, medical student Randolph Miller, and computer scientist Harry Pople.


    Paul Werbos, a social scientist, completes his dissertation on a backpropagation algorithm now widely used in training artificial neural networks for supervised learning applications.


    Marvin Minsky distributes MIT AI Lab memo 306, "A Framework for Representing Knowledge." The memo discusses the notion of a frame, a "remembered framework" that fits reality by "changing detail as appropriate."


    The phrase "genetic algorithm" is used by John Holland to explain evolutionary strategies in natural and artificial systems.


    In Computer Power and Human Reason, computer scientist Joseph Weizenbaum expresses his mixed feelings on artificial intelligence research.


    At Rutgers University, EXPERT, a generic knowledge representation technique for constructing expert systems, goes live.


    Joshua Lederberg, Douglas Brutlag, Edward Feigenbaum, and Bruce Buchanan start the MOLGEN project at Stanford to solve DNA structures generated from segmentation data in molecular genetics research.


    Raj Reddy, a computer scientist at Carnegie Mellon University, founds the Robotics Institute.


    For the first time, a human is killed while working with a robot.


    Hans Moravec rebuilds and equips the Stanford Cart with a stereoscopic vision system after it has evolved into an autonomous rover over almost two decades.


    The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.


    In his Chinese Room argument, philosopher John Searle claims that a computer's modeling of action does not establish comprehension, intentionality, or awareness.


    Blade Runner, a science fiction film based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), is released.


    The associative neural network, initially developed by William Little in 1974, is popularized by physicist John Hopfield.


    In Fortune Magazine, Tom Alexander writes "Why Computers Can't Outthink the Experts."


    At the Microelectronics and Computer Technology Corporation (MCC) in Austin, TX, computer scientist Doug Lenat launches the Cyc project, which aims to create a vast commonsense knowledge base and artificial intelligence architecture.


    Orion Pictures releases the first Terminator film, which features robotic assassins from the future and an AI known as Skynet.


    Honda establishes a research facility to build humanoid robots that can cohabit and interact with humans.


    Rodney Brooks, an MIT roboticist, describes the subsumption architecture for behavior-based robots.


    Marvin Minsky publishes The Society of Mind, which depicts the mind as a collection of cooperating agents.


    The MIT Artificial Intelligence Lab's Rodney Brooks and Anita Flynn publish "Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System," a paper discussing the possibility of sending small robots on interplanetary exploration missions.


    The Cog interactive robot project is launched at MIT by Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others.


    Musician Brian Eno coins the phrase "generative music" to describe systems that create ever-changing music by modifying parameters over time.


    The MQ-1 Predator unmanned aerial vehicle from General Atomics enters US military and reconnaissance service.


    Under standard tournament conditions, IBM's Deep Blue supercomputer defeats reigning world chess champion Garry Kasparov.


    In Nagoya, Japan, the inaugural RoboCup, an international tournament featuring over forty teams of robot soccer players, takes place.


    Dragon Systems releases NaturallySpeaking, its first commercial speech recognition software product.


    Sony introduces AIBO, a robotic dog, to the general public.


    The Advanced Step in Innovative Mobility humanoid robot, ASIMO, is unveiled by Honda.


    At Super Bowl XXXV, Viisage Technology unveils the FaceFINDER automatic face-recognition system.


    The Roomba autonomous household vacuum cleaner is released by the iRobot Corporation, which was created by Rodney Brooks, Colin Angle, and Helen Greiner.


    In the Mojave Desert near Primm, NV, DARPA hosts its inaugural autonomous vehicle Grand Challenge, but none of the cars complete the 150-mile route.


    Under the direction of neuroscientist Henry Markram, the Swiss Blue Brain Project is founded to simulate the human brain.


    Netflix offers a $1 million prize to the first programming team to substantially improve its recommender system's predictions based on prior user ratings.


    DARPA announces the Urban Challenge, an autonomous vehicle competition testing merging, passing, parking, and the navigation of traffic and intersections.


    Under the leadership of Sebastian Thrun, Google launches its self-driving car project (now known as Waymo) in the San Francisco Bay Area.


    Fei-Fei Li of Stanford University describes her work on ImageNet, a library of millions of hand-annotated photographs used to teach AIs to recognize the presence or absence of items visually.


    Human manipulation of automated trading algorithms causes a "flash crash" in the US stock market.


    Demis Hassabis, Shane Legg, and Mustafa Suleyman found DeepMind in the United Kingdom to teach AIs to play and excel at classic video games.


    Watson, IBM's natural language computer system, defeats Jeopardy! champions Ken Jennings and Brad Rutter.


    The iPhone 4S ships with Apple's voice assistant Siri.


    Computer scientist Andrew Ng and Google colleagues Jeff Dean and Greg Corrado launch the informal Google Brain deep learning research collaboration.


    The European Union's Human Brain Project aims to better understand how the human brain functions and to duplicate its computing capabilities.


    Human Rights Watch launches the campaign to Stop Killer Robots.


    Spike Jonze's science fiction drama Her is released, in which a man and his AI voice assistant Samantha fall in love.


    Ian Goodfellow and colleagues at the University of Montreal introduce Generative Adversarial Networks (GANs), deep neural network architectures useful for generating realistic fake photos of humans.


    Eugene Goostman, a chatbot posing as a thirteen-year-old boy, is said to have passed a Turing-like test.


    According to physicist Stephen Hawking, the development of AI might lead to humanity's extinction.


    Facebook releases DeepFace, a deep learning face recognition system, on its social media platform.


    In a five-game match, DeepMind's AlphaGo program beats Lee Sedol, a 9-dan professional Go player.


    Tay, a Microsoft AI chatbot, is released on Twitter, where users quickly teach it to post abusive and inappropriate messages.


    The Asilomar Conference on Beneficial AI is hosted by the Future of Life Institute.


    Anthony Levandowski, a self-driving car engineer, founds the Way of the Future church with the goal of creating a superintelligent robot god.


    Google announces Duplex, an AI program that uses natural language to schedule appointments over the phone.


    The General Data Protection Regulation (GDPR) and "Ethics Guidelines for Trustworthy AI" are published by the European Union.


    A lung cancer screening AI developed by Google AI and Northwestern Medicine in Chicago, IL, surpasses specialized radiologists.


    OpenAI, cofounded by Elon Musk, releases an artificial intelligence text generator that produces realistic stories and journalism; it was initially judged "too risky" to release fully because of its potential to spread fake news.


    TensorFlow Quantum, an open-source framework for quantum machine learning, is announced by Google AI in conjunction with the University of Waterloo, the "moonshot factory" X, and Volkswagen.


    Artificial Intelligence - Who Was Allen Newell?


    Allen Newell (1927–1992) was an American computer scientist and cognitive psychologist.

    In the late 1950s and early 1960s, Newell collaborated with Herbert Simon to develop the earliest models of human cognition.

    The Logic Theory Machine depicted how logical rules might be used in a proof, the General Problem Solver modeled how basic problem solving could be done, and an early chess program (the Newell-Shaw-Simon chess program) mimicked how chess is played.

    Newell and Simon demonstrated for the first time in these models how computers can modify symbols and how these manipulations may be used to describe, produce, and explain intelligent behavior.

    Newell began his career at Stanford University as a physics student.

    He joined the RAND Corporation to work on models of complex systems after a year of graduate study in mathematics at Princeton.

    While at RAND, he met and was inspired by Oliver Selfridge, who led him to the modeling of cognition.

    He also met Herbert Simon, who would go on to receive the Nobel Prize in Economics for his work on economic decision-making processes, particularly satisficing.

    Simon persuaded Newell to attend Carnegie Institute of Technology (now Carnegie Mellon University).

    For most of his academic career, Newell worked with Simon.

    Newell's main goal was to simulate the human mind's operations using computer models in order to better comprehend it.

    Newell earned his PhD at Carnegie Mellon, where he worked with Simon.

    He began his academic career as a tenured and chaired professor.

    He was a founding member of the Department of Computer Science (now the School of Computer Science), where he held his primary position.

    With Simon, Newell examined the mind, especially problem solving, as part of his major line of study.

    Their book Human Problem Solving, published in 1972, outlined their idea of intelligence and included examples from arithmetic problems and chess.

    To assess what resources are being used in cognition, they employed many verbal talk-aloud protocols, which are more accurate than think-aloud or retrospective protocols.

    Ericsson and Simon eventually documented the science of verbal protocol data in more detail.

    In his final lecture ("Desires and Diversions"), he stated that if you're going to be distracted, you should make the most of it.

    He did so through remarkable achievements in the areas of his diversions, as well as by drawing on some of them in his final endeavor.

    One of the early hypertext systems, ZOG, was one of these diversions.

    Newell also collaborated with Digital Equipment Corporation (DEC) founder Gordon Bell on a textbook on computer architectures and worked on voice recognition systems with CMU colleague Raj Reddy.

    Perhaps the longest-running and most fruitful diversion was working with Stuart Card and Thomas Moran at Xerox PARC to develop theories of how people interact with computers.

    The Psychology of Human-Computer Interaction (1983) documents these theories.

    Their study resulted in the Keystroke Level Model and GOMS, two models for representing human behavior, as well as the Model Human Processor, a simplified description of the mechanics of cognition in this domain.

    Some of the first work in human-computer interaction (HCI) was done here.

    Their strategy advocated for first knowing the user and the task, then employing technology to assist the user in completing the job.

    In his farewell talk, Newell also said that scientists should have a last endeavor that would outlive them.

    Newell's last goal was to advocate for unified theories of cognition (UTCs) and to develop Soar, a proposed UTC and example.

    His idea imagined what it would be like to have a theory that combined all of psychology's restrictions, facts, and theories into a single unified outcome that could be implemented by a computer program.

    Soar remains a successful ongoing project, though it is not yet complete.

    While Soar has yet to fully unify psychology, it has made significant progress in describing problem solving, learning, and their interactions, as well as in creating autonomous, reactive agents for large simulations.

    He looked into how learning could be modeled as part of his final project (with Paul Rosenbloom).

    Later, this project was merged with Soar.

    Learning, according to Newell and Rosenbloom, follows a power law of practice, in which the time to complete a task is proportional to the practice (trial) number raised to a small negative power (e.g., Time = a × Trial^(−b)).

    This holds true across a broad variety of activities.

    Their explanation was that tasks are performed hierarchically: what is learned at the lowest level has the greatest impact on reaction time, but as learning moves up the hierarchy it is employed less often and saves less time, so learning slows but does not stop.
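    The power law of practice can be sketched numerically. A minimal example, assuming hypothetical coefficients a (time on the first trial) and b (learning rate):

```python
import math

def practice_time(trial, a=10.0, b=0.4):
    """Power law of practice: time on trial N is a * N**(-b).

    a (first-trial time) and b (learning rate) are hypothetical
    values chosen purely for illustration.
    """
    return a * trial ** (-b)

# Times drop quickly at first, then ever more slowly:
# learning slows but does not stop.
times = [practice_time(n) for n in (1, 10, 100, 1000)]

# On a log-log plot the law is a straight line with slope -b:
# log(time) = log(a) - b * log(trial).
slope = (math.log(times[1]) - math.log(times[0])) / math.log(10)
```

    Plotting log time against log trial number is the usual way to check whether practice data follow this law: a straight line of slope −b indicates a power-law fit.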

    Newell delivered the William James Lectures at Harvard in 1987.

    He detailed what it would take to develop a unified theory in psychology in these lectures.

    These lectures were taped and are accessible in CMU's library.

    He gave them again the following autumn and turned them into a book (1990).

    Soar's representation of cognition is based on searching through problem spaces.

    It takes the form of a production system (using IF-THEN rules).

    It attempts to apply an operator.

    If Soar does not have an operator or cannot apply one, it reaches an impasse and recurses into a subgoal to resolve it.

    As a result, knowledge is represented as operators, problem spaces, and ways of resolving impasses.

    The architecture thus defines how these choices and this knowledge may be organized.
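    The decision cycle described above can be illustrated with a toy production system. This is only a sketch in the spirit of Soar, not Soar itself; the rules and state elements are hypothetical:

```python
# Toy production system: IF-THEN rules propose operators on a state
# (a set of facts). If no rule fires, the system hits an impasse and
# recurses into a subgoal. Rules and states here are hypothetical.

RULES = [
    # (condition on state, operator that transforms the state)
    (lambda s: "goal" in s and "plan" not in s, lambda s: s | {"plan"}),
    (lambda s: "plan" in s and "done" not in s, lambda s: s | {"done"}),
]

def solve(state, depth=0):
    """Apply the first matching rule; on an impasse, recurse in a subgoal."""
    while "done" not in state:
        for condition, operator in RULES:
            if condition(state):
                state = operator(state)
                break
        else:
            # Impasse: no rule matched. Set up a subgoal that supplies
            # the missing knowledge (here, trivially, the goal itself).
            if depth > 3:
                raise RuntimeError("unresolvable impasse")
            state = solve(state | {"goal"}, depth + 1)
    return state

result = solve({"goal"})
```

    The for/else idiom detects the impasse: the else branch runs only when no rule's condition matched, which is exactly when Soar would recurse into a subgoal.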

    Soar models have been employed in a range of cognitive science and AI applications, including military simulations, and systems with up to one million rules have been constructed.

    Kathleen Carley, a social scientist at CMU, and Newell discussed how to use these cognitive models to simulate social agents.

    Work on Soar continues, notably at the University of Michigan under the direction of John Laird, currently with a focus on intelligent agents.

    In 1975, the ACM A. M. Turing Award was given to Newell and Simon for their contributions to artificial intelligence, psychology of human cognition, and list processing.

    Their work is credited with making significant contributions to computer science as an empirical investigation.

    Newell was also elected to the National Academies of Sciences and Engineering.

    He was awarded the National Medal of Science in 1992.

    Newell was instrumental in establishing a productive and supportive research group, department, and institution.

    His son said at his memorial service that he was not only a great scientist, but also a great father.

    His weaknesses were that he was very smart, that he worked very hard, and that he assumed the same of you.


    See also: 

    Dartmouth AI Conference; General Problem Solver; Simon, Herbert A.

    References & Further Reading:

    Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

    Newell, Allen. 1993. Desires and Diversions. Carnegie Mellon University, School of Computer Science. Stanford, CA: University Video Communications.

    Simon, Herbert A. 1998. “Allen Newell: 1927–1992.” IEEE Annals of the History of Computing 20, no. 2: 63–76.
