
What Is Artificial General Intelligence?



Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the AGI system to solve problems when presented with new tasks. 

In other words, it's AI's capacity to learn similarly to humans.



Strong AI, full AI, and general intelligent action are some names for it. 

The phrase "strong AI," however, is only used in a few academic publications to refer to computer systems that are sentient or aware. 

These definitions may change since specialists from many disciplines see human intelligence from various angles. 

For instance, computer scientists often characterize human intelligence as the capacity to accomplish objectives. 

On the other hand, general intelligence is defined by psychologists in terms of survival or adaptation.

Weak or narrow AI, in contrast to strong AI, is made up of programs created to address a single issue and lacks awareness since it is not meant to have broad cognitive capacities. 

Autonomous cars and IBM's Watson supercomputer are two examples. 

By contrast, AGI is defined in computer science as an intelligent system with full or comprehensive knowledge as well as cognitive computing skills.



As of right now, there are no real AGI systems; they are still the stuff of science fiction. 

The long-term objective of these systems is to perform as well as humans do. 

However, because AGI could acquire and analyze massive amounts of data far faster than the human mind, it may ultimately become more intelligent than humans.



Artificial intelligence (AI) is now capable of carrying out a wide range of functions, including providing tailored suggestions based on prior web searches. 

It can also detect objects for autonomous cars to avoid, identify malignant cells during medical examinations, and serve as the brain of home automation systems. 

It may likewise be used to find potentially habitable planets, act as an intelligent assistant, manage security, and more.
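
The first of those capabilities, tailored suggestions from prior searches, usually reduces to ranking items by similarity to a profile built from past behavior. Below is a minimal Python sketch of that idea using cosine similarity over keyword counts; the catalog and search history are invented toy data, not any production system.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse keyword-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical catalog: items described by keyword counts.
catalog = {
    "laptop stand": Counter({"desk": 2, "laptop": 3, "ergonomic": 1}),
    "hiking boots": Counter({"outdoor": 3, "trail": 2, "waterproof": 1}),
    "usb hub":      Counter({"laptop": 2, "usb": 3, "desk": 1}),
}

# A user's prior web searches, flattened into one keyword profile.
history = Counter("laptop desk setup laptop accessories".split())

# Recommend the items most similar to the search profile.
ranked = sorted(catalog, key=lambda item: cosine(history, catalog[item]), reverse=True)
print(ranked)  # items sharing laptop/desk keywords rank first
```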



Naturally, AGI would go far beyond such capacities, and some scientists are concerned that this may result in a dystopian future.

Elon Musk said that sentient AI would be more hazardous than nuclear war, while Stephen Hawking advised against its creation because it would see humanity as a possible threat and act accordingly.


Despite these concerns, most scientists agree that genuine AGI is decades or perhaps centuries away from being developed and must first meet a number of requirements (which continue to evolve) in order to be achieved. 

These include the capacity to reason, use strategy, solve puzzles, and make decisions in the face of ambiguity. 



Additionally, it must be able to plan, learn, and communicate in natural language, as well as represent information, including common sense. 

AGI must also have the capacity to perceive (hear, see, etc.) and to act, for example by moving objects or changing location to explore. 



How far along are we in the process of developing artificial general intelligence, and who is involved?

According to a 2020 survey by the Global Catastrophic Risk Institute (GCRI), academic institutions, businesses, and various governmental agencies are presently working on 72 recognized AGI R&D projects. 



According to the survey, today's projects are smaller, more geographically diversified, less open-source, more focused on humanitarian aims than academic ones, and more concentrated in private firms than the projects surveyed in 2017. 

The comparison also reveals a decline in projects with academic affiliations, an increase in projects sponsored by corporations, a rise in projects with a humanitarian emphasis, a decline in programs with ties to the military, and a decline in US-based initiatives.


Governments and military organizations currently play only minor roles in AGI R&D, and the few military initiatives that remain are focused solely on fundamental research. 

Recent projects are also more varied and fall into three broad groups: corporate projects that engage with AGI safety and pursue humanitarian end goals; small private enterprises with a variety of objectives; and academic programs concerned not with AGI safety but with the advancement of knowledge.

One of the most well-known organizations working on AGI is Carnegie Mellon University, which has a project called ACT-R that aims to create a generic cognitive architecture based on the basic cognitive and perceptual functions that support the human mind. 

The project may be thought of as a method of describing how the brain is structured such that different processing modules can result in cognition.


Another pioneering organization testing the limits of AGI is Microsoft Research AI, which has carried out a number of research initiatives, including developing a data set to counter bias in machine-learning models. 

The company is also investigating ways to advance ethical AI, develop a responsible AI standard, and design AI strategies and evaluations that emphasize the advancement of mankind.


The person behind the well-known video game franchises Commander Keen and Doom has launched yet another intriguing endeavor. 

Keen Technologies, John Carmack's most recent business, is an AGI development company that has already raised $20 million in funding from former GitHub CEO Nat Friedman and Cue founder Daniel Gross. 

Carmack is one of the AGI optimists who believes that it would ultimately help mankind and result in the development of an AI mind that acts like a human, which might be used as a universal remote worker.


So what does AGI's future hold? 

Many specialists doubt that AGI will ever be developed, and some believe that the urge to create artificial intelligence comparable to humans will eventually fade. 

Others are working to develop it so that everyone will benefit.

For now, the creation of AGI remains at an early stage, and only modest progress is anticipated in the coming decades. 

Nevertheless, throughout history, scientists have debated whether developing technologies with the potential to change people's lives will benefit society as a whole or endanger it. 

The same debate took place before the invention of the automobile, during the development of AC electricity, and when the atomic bomb was still only a theory.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.

Be sure to refer to the complete & active AI Terms Glossary here.


Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her study in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is utilized at the expense of building and maintaining meaningful interpersonal connections.



She has examined artificial intelligence in products like children's toys and robotic pets for the elderly to highlight what people lose when they interact with such things.


Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the creator of the MIT Initiative on Technology and the Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans connect to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given place to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


This method comprises setting up time for silent meditation so that participants may think thoroughly about their interactions with their equipment.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased usage of social media as a form of communication, as well as the continuous familiarity and relatability with technology gadgets, which stems from the emergent AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human person across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


She provides the example of Adam in Alone Together, who enjoys the admiration of the AI bots he rules over in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide only a perception of companionship.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one wants to obtain from partnerships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in partnerships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Turkle argues we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans, drawing together these numerous streams of argument.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


In reality, AI-based gadgets are confined to the literal content of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

There is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy for an AI-based device.
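
That representational point is easy to make concrete. In the sketch below (the class and its fields are invented for illustration), both appointments are stored as structurally identical records; nothing in the data distinguishes the routine from the life-altering.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Appointment:
    """What a device actually stores: fields, not meanings."""
    title: str
    when: datetime

a = Appointment("Car maintenance", datetime(2025, 3, 3, 9, 0))
b = Appointment("Chemotherapy", datetime(2025, 3, 4, 9, 0))

# To the program, the two records are interchangeable: same type,
# same fields, same operations. Any difference in significance
# exists only for the user.
print(type(a) is type(b))  # True
```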

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.






Artificial Intelligence - What Is The Turing Test?

 



 

The Turing Test is a method of determining whether a machine can exhibit intelligence equivalent to, or indistinguishable from, human intelligence. 

The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype—what Turing calls "The Imitation Game," is the test's locus classicus.

In this game, a judge is asked to determine which of two rooms is occupied by a computer and which by another human, based on anonymized replies to natural-language questions that the judge poses to each occupant.

While the human respondent must offer accurate answers to the judge's queries, the machine's purpose is to fool the judge into thinking it is human.





According to Turing, the machine may be considered intelligent to the degree that it is successful at this job.
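
The protocol itself is simple enough to sketch in code. Below is a toy Python harness for the imitation game; both respondents and the judge are trivial placeholders, and under Turing's criterion a machine "passes" to the extent that the judge cannot beat chance. This shows only the shape of the test, not a real one.

```python
import random

def human(question: str) -> str:
    # Placeholder for a human answering honestly.
    return "Let me think about that for a moment..."

def machine(question: str) -> str:
    # Placeholder for a program trying to sound human.
    return "Hmm, that's a hard one to put into words."

def imitation_game(judge, questions, trials=1000):
    """Return how often the judge correctly identifies the machine."""
    correct = 0
    for _ in range(trials):
        respondents = [("human", human), ("machine", machine)]
        random.shuffle(respondents)  # anonymize the two rooms
        transcripts = [[respond(q) for q in questions] for _, respond in respondents]
        guess = judge(transcripts)   # index of the suspected machine
        correct += respondents[guess][0] == "machine"
    return correct / trials

# A judge who does no better than a coin flip means the machine passes.
coin_flip_judge = lambda transcripts: random.randrange(2)
print(imitation_game(coin_flip_judge, ["What memory do you cherish?"]))  # ~0.5
```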

The fundamental benefit of this essentially operationalist view of intelligence is that it avoids complex metaphysics and epistemological issues about the nature and inner experience of intelligent activities.

According to Turing's criteria, little more than empirical observation of outward behavior is required to attribute intelligence.

This is in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a prerequisite for intelligence.

Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint—namely, how to be confident of the presence of other intelligent individuals if it is impossible to know their thoughts from a presumably required first-person perspective.



Nonetheless, the Turing Test, at least insofar as it considers intelligence in a strictly formalist manner, is bound up with the spirit of Cartesian epistemology.

The machine in the Imitation Game is a digital computer in the sense of Turing: a set of operations that may theoretically be implemented in any material.


A digital computer consists of three parts: a knowledge store, an executive unit that executes individual orders, and a control that regulates the executive unit.
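
Turing's three-part description maps directly onto a toy stored-program machine. The miniature instruction set below is invented for illustration; only the division into store, executive unit, and control follows Turing's account.

```python
def run(program, store):
    """Control: fetch orders in sequence and hand them to the executive unit."""
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        # Executive unit: carry out one individual order against the store.
        if op == "SET":           # SET addr value
            store[args[0]] = args[1]
        elif op == "ADD":         # ADD dst src
            store[args[0]] += store[args[1]]
        elif op == "JUMP_IF_LT":  # JUMP_IF_LT addr limit target
            if store[args[0]] < args[1]:
                pc = args[2]
    return store

# Store: the machine's memory. The program sums 0 + 1 + ... + 4.
memory = {"acc": 0, "i": 0, "one": 1}
code = [
    ("ADD", "acc", "i"),
    ("ADD", "i", "one"),
    ("JUMP_IF_LT", "i", 5, 0),
]
print(run(code, memory)["acc"])  # -> 10
```

The same program could be realized in relays, vacuum tubes, or transistors; nothing about it depends on the medium.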






However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

What matters is the formal set of rules that make up the computer's very nature.

Turing holds to the core belief that intellect is inherently immaterial.

If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.


Since Turing's work, AI research has been split into two camps: 


  1. those who embrace and 
  2. those who oppose this fundamental premise.


To describe the first camp, John Haugeland created the term "good old-fashioned AI," or GOFAI.

Its ranks included Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.



Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited AI technique.

John Searle's "Minds, Brains, and Programs" (1980), in which Searle builds his now-famous Chinese Room thought experiment, is one of the most renowned criticisms of GOFAI in general, and of the assumptions of the Turing Test in particular.





In the thought experiment, a person with no prior understanding of Chinese is placed in a room and made to correlate the Chinese characters she receives with other Chinese characters she sends out, following a script written in English.


Searle thinks that, given adequate mastery of the script, the person inside the room could pass the Turing Test, fooling a native Chinese speaker into believing she knew Chinese.

If the person in the room can do this without understanding Chinese, then so can a digital computer, and so, according to Searle, Turing-type tests fail to capture the phenomenon of understanding, which he claims entails more than the functionally accurate connection of inputs and outputs.
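
The room's mechanics can be parodied in a few lines of Python: a rule table correlates incoming symbols with outgoing ones, and meaning is represented nowhere. The entries below are invented stand-ins for Searle's English script.

```python
# The rulebook: purely syntactic input-to-output correlations.
rulebook = {
    "你好吗": "我很好，谢谢",   # "How are you?" -> "I'm fine, thanks"
    "你会说中文吗": "当然会",   # "Do you speak Chinese?" -> "Of course"
}

def chinese_room(symbols: str) -> str:
    """Correlate received characters with emitted ones by lookup alone.
    Nothing in this function encodes what any symbol means."""
    return rulebook.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # fluent-looking output, zero understanding
```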

Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

Searle continues his explanation of the Chinese Room thought experiment by arguing that the physical makeup of the human species, particularly its sophisticated nervous system and brain tissue, should not be dismissed as unimportant to conceptions of intelligence.


This viewpoint has influenced connectionism, an altogether new approach to AI that aims to build computer intelligence by replicating the electrical circuitry of human brain tissue.
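
Where a rulebook stores explicit input-output pairs, a connectionist system stores what it "knows" in the strengths of connections between simple units. As a minimal sketch (the architecture and hyperparameters here are arbitrary choices), the NumPy network below learns XOR, a function famously beyond any single-layer perceptron, purely by adjusting its weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output
    # Backpropagate the error and adjust every connection (learning rate 1).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```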


The effectiveness of this strategy has been hotly contested, although it looks to outperform GOFAI in terms of developing generalized kinds of intelligence.

Turing's test, on the other hand, may be criticized not just from the standpoint of materialism but also from that of a fresh formalism.





From this standpoint, one may argue that Turing tests are insufficient as a measure of intelligence because they attempt to reproduce human behavior, which is frequently exceedingly dumb.


According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from real human experience.

This line of criticism has gotten more acute as AI research has shifted its focus to the potential of so-called super-intelligence: forms of generalized machine intelligence that far outperform human intellect.


Should this next level of AI be attained, Turing tests would seem to be outdated.

Furthermore, merely discussing the idea of superintelligence seems to require criteria of intelligence beyond Turing testing itself.

Turing may be defended against such accusations by pointing out that establishing a universal criterion of intellect was never his goal.



Indeed, the purpose, according to Turing, is to replace the metaphysically problematic question "can machines think" with a more empirically verifiable alternative: 

"What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).


Thus the above-mentioned flaw of Turing's test, its failure to establish a priori standards of rationality, is also part of its strength and appeal.

It also explains why the test has had such a lasting effect on AI research in all domains since it was first presented three-quarters of a century ago.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



AI Glossary - What Is ARF?

 


Richard E. Fikes created ARF in the late 1960s as a general problem solver.

It used a combination of constraint-satisfaction and heuristic searches.

Fikes also created REF, a problem-statement language for ARF. 


REF-ARF is a system for solving problems stated as procedures. 


Fikes's paper presents an attempt to create a heuristic problem-solving program that takes problems stated in a nondeterministic programming language and finds solutions using constraint-satisfaction and heuristic-search techniques. 

It examines the use of nondeterministic programming languages for stating problems, as well as REF, the language that the problem solver ARF accepts.

Several extensions to REF are examined. 

The program's basic framework is described in depth, along with several options for expanding it. 

The use of the input language and the behavior of the program are presented and analyzed on sixteen example problems.
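
The central idea, stating a problem as a nondeterministic procedure and letting the solver search for a successful execution, can be sketched generically. The Python toy below simply tries every "execution"; it is a textbook illustration of the paradigm, not Fikes's actual REF language or ARF's constraint-satisfaction machinery.

```python
from itertools import product

def solve(domains, constraint):
    """Enumerate executions of a 'nondeterministic' procedure: each variable
    chooses a value from its domain, and executions violating the constraint
    fail. A real solver such as ARF prunes this space with constraint
    satisfaction and heuristic search instead of brute force."""
    for values in product(*domains.values()):
        binding = dict(zip(domains, values))
        if constraint(binding):
            return binding  # first successful execution
    return None             # every execution failed

# Toy problem: choose x, y, z in 1..5 with x + y == z and x < y.
answer = solve(
    {"x": range(1, 6), "y": range(1, 6), "z": range(1, 6)},
    lambda b: b["x"] + b["y"] == b["z"] and b["x"] < b["y"],
)
print(answer)  # {'x': 1, 'y': 2, 'z': 3}
```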



A later paper discusses Ref2 and POPS, two heuristic problem-solving systems. 

Both systems take problems expressed as programs in a nondeterministic programming language and use heuristic techniques to find successful program executions. 

Ref2 is built on Richard Fikes's REF-ARF system and includes REF-ARF's problem-solving techniques as well as new methods based on a different representation of the problem context. 

Ref2 can also handle problems involving integer programming. POPS is an updated and expanded version of Ref2 that incorporates goal-directed procedures based on ideas from GPS, the General Problem Solver.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram



AI - Symbol Manipulation.

 



The broad information-processing skills of a digital stored program computer are referred to as symbol manipulation.

From the 1960s through the 1980s, seeing the computer as fundamentally a symbol manipulator became the norm, leading to the scientific study of symbolic artificial intelligence, now known as Good Old-Fashioned AI (GOFAI).

The emergence of stored-program computers sparked renewed interest in the computer's programming flexibility.

Symbol manipulation became a comprehensive theory of intelligent behavior as well as a research guideline for AI.

The Logic Theorist, created by Herbert Simon, Allen Newell, and Cliff Shaw in 1956, was one of the first computer programs to mimic intelligent symbol manipulation.

The Logic Theorist was able to prove theorems from Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910–1913).
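
The Logic Theorist's actual heuristics are far beyond a short sketch, but the flavor of mechanical theorem proving can be conveyed by a toy forward-chaining prover in Python that applies modus ponens until the goal appears. The axioms and rules here are made up, and the real program searched backward from the theorem with selective heuristics.

```python
def prove(axioms, rules, goal):
    """Forward-chain with modus ponens: from P and (P -> Q), derive Q.
    Repeats until the goal is derived or nothing new can be added."""
    known = set(axioms)
    changed = True
    while changed and goal not in known:
        changed = False
        for premise, conclusion in rules:  # each rule: premise -> conclusion
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return goal in known

# Made-up axioms and implications.
print(prove(axioms={"P"}, rules=[("P", "Q"), ("Q", "R")], goal="R"))  # True
```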

The Logic Theorist was presented at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (the Dartmouth Conference).


John McCarthy, a Dartmouth mathematics professor who invented the phrase "artificial intelligence," convened this symposium.


The Dartmouth Conference might be dubbed the genesis of AI since it was there that the Logic Theorist first appeared, and many of the participants went on to become pioneering AI researchers.

Only in the early 1960s, after Simon and Newell had built their General Problem Solver (GPS), were the features of symbol manipulation, as a generic process underpinning all types of intelligent problem-solving behavior, thoroughly explicated, providing a foundation for most of the early work in AI.

In 1961, Simon and Newell took their knowledge of AI and their work on GPS to a wider audience.


"A computer is not a number-manipulating device; it is a symbol-manipulating device," they wrote in Science, "and the symbols it manipulates may represent numbers, letters, phrases, or even nonnumerical, nonverbal patterns" (Newell and Simon 1961, 2012).





Reading "symbols or patterns presented by appropriate input devices, storing symbols in memory, copying symbols from one memory location to another, erasing symbols, comparing symbols for identity, detecting specific differences between their patterns, and behaving in a manner conditional on the results of its processes," Simon and Newell continued (Newell and Simon 1961, 2012).


The growth of symbol manipulation in the 1960s was also influenced by breakthroughs in cognitive psychology and symbolic logic prior to WWII.


Starting in the 1930s, experimental psychologists like Edwin Boring at Harvard University began to move their profession away from philosophical and behaviorist methods.





Boring challenged his colleagues to break the mind open and create testable explanations for diverse cognitive mental operations (an approach that was adopted by Kenneth Colby in his work on PARRY in the 1960s).

Simon and Newell also emphasized their debt to pre-World War II developments in formal logic and abstract mathematics in their historical addendum to Human Problem Solving—not because all thought is logical or follows the rules of deductive logic, but because formal logic treated symbols as tangible objects.

“The formalization of logic proved that symbols can be copied, compared, rearranged, and concatenated with just as much definiteness of procedure as [wooden] boards can be sawed, planed, measured, and glued [in a carpenter shop],” Simon and Newell noted (Newell and Simon 1972, 877).



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Newell, Allen; PARRY; Simon, Herbert A.


References & Further Reading:


Boring, Edwin G. 1946. “Mind and Mechanism.” American Journal of Psychology 59, no. 2 (April): 173–92.

Feigenbaum, Edward A., and Julian Feldman. 1963. Computers and Thought. New York: McGraw-Hill.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman and Company.

Newell, Allen, and Herbert A. Simon. 1961. “Computer Simulation of Human Thinking.” Science 134, no. 3495 (December 22): 2011–17.

Newell, Allen, and Herbert A. Simon. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.

Schank, Roger, and Kenneth Colby, eds. 1973. Computer Models of Thought and Language. San Francisco: W. H. Freeman and Company.

