
What Is Artificial General Intelligence?



Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the AGI system to solve problems when presented with new tasks. 

In other words, it is an AI system's capacity to learn the way humans do.



It is also known as strong AI, full AI, or general intelligent action. 

The phrase "strong AI," however, is only used in few academic publications to refer to computer systems that are sentient or aware. 

These definitions may change since specialists from many disciplines see human intelligence from various angles. 

For instance, computer scientists often characterize human intelligence as the capacity to accomplish objectives. 

On the other hand, general intelligence is defined by psychologists in terms of survival or adaptation.

Weak or narrow AI, in contrast to strong AI, consists of programs designed to address a single problem; it lacks consciousness because it is not meant to have broad cognitive capacities. 

Autonomous cars and IBM's Watson supercomputer are two examples. 

In computer science, by contrast, AGI is defined as an intelligent system with comprehensive knowledge and cognitive computing capabilities.



As of right now, there are no real AGI systems; they are still the stuff of science fiction. 

The long-term objective of these systems is to perform as well as humans do. 

However, because AGI could acquire and analyze massive amounts of data far faster than the human mind can, it may ultimately become more intelligent than humans.



Artificial intelligence (AI) is now capable of carrying out a wide range of functions, including providing tailored suggestions based on prior web searches. 

Additionally, it can recognize various items for autonomous cars to avoid, recognize malignant cells during medical inspections, and serve as the brain of home automation. 

It may also be used to find potentially habitable planets, act as an intelligent assistant, take charge of security, and more.



Naturally, AGI would go far beyond such capacities, and some scientists are concerned that this may lead to a dystopian future.

Elon Musk said that sentient AI would be more hazardous than nuclear war, while Stephen Hawking advised against its creation because it would see humanity as a possible threat and act accordingly.


Despite concerns, most scientists agree that genuine AGI is decades or perhaps centuries away from being developed and must first meet a number of requirements (which are always changing) in order to be achieved. 

These include the capacity to reason, use strategy, solve puzzles, and make decisions in the face of ambiguity. 



Additionally, it must be able to plan, learn, and communicate in natural language, as well as represent information, including common sense. 

AGI must also be able to perceive its environment (hear, see, etc.) and to act on it, for example by moving objects or changing location to explore. 



How far along are we in the process of developing artificial general intelligence, and who is involved?

According to a 2020 survey by the Global Catastrophic Risk Institute (GCRI), academic institutions, businesses, and various governmental agencies are currently pursuing 72 identified AGI R&D projects. 



The survey found that, compared with the projects surveyed in 2017, today's projects are generally smaller, more geographically diverse, less open-source, more focused on humanitarian than academic aims, and more concentrated in private firms. 

The comparison also reveals a decline in projects with academic affiliations, an increase in projects sponsored by corporations, a rise in projects with a humanitarian emphasis, a decline in programs with ties to the military, and a decline in US-based initiatives.


Governments and other public organizations currently play only a small role in AGI R&D, with military initiatives in particular focused solely on fundamental research. 

Recent projects, however, appear more varied and can be grouped by three criteria: corporate projects that engage with AGI safety and pursue humanitarian end goals; small private companies with a variety of objectives; and academic programs concerned not with AGI safety but with the advancement of knowledge.

One of the most well-known organizations working on AGI is Carnegie Mellon University, which has a project called ACT-R that aims to create a generic cognitive architecture based on the basic cognitive and perceptual functions that support the human mind. 

The project may be thought of as a method of describing how the brain is structured such that different processing modules can result in cognition.


Another pioneering organization testing the limits of AGI is Microsoft Research AI, which has carried out a number of research initiatives, including developing a data set to counter prejudice for machine-learning models. 

The company is also investigating ways to advance ethical AI, develop a responsible-AI standard, and create AI strategies and evaluations, with the aim of building a framework that emphasizes benefits to humanity.


The person behind the well-known video game franchises Commander Keen and Doom has launched yet another intriguing endeavor. 

Keen Technologies, John Carmack's most recent business, is an AGI development company that has already raised $20 million in funding from former GitHub CEO Nat Friedman and Cue founder Daniel Gross. 

Carmack is one of the AGI optimists who believes that it would ultimately help mankind and result in the development of an AI mind that acts like a human, which might be used as a universal remote worker.


So what does AGI's future hold? 

Many specialists doubt that AGI will ever be developed, and some believe that the desire to build an artificial intelligence comparable to humans will eventually fade. 

Others are working to develop it so that everyone will benefit.

For now, the creation of AGI remains at an early, exploratory stage, and little progress is anticipated in the coming decades. 

Nevertheless, throughout history, scientists have debated whether developing technologies with the potential to change people's lives will benefit society as a whole or endanger it. 

The same debate took place before the invention of the automobile, during the development of AC electricity, and when the atomic bomb was still only a theory.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.

Be sure to refer to the complete & active AI Terms Glossary here.


AI Glossary - What Is ARTMAP?


     


    What Is ARTMAP AI Algorithm?



    ARTMAP is the supervised learning variant of the ART-1 model.

    It learns binary input patterns that are given to it.


    The suffix "MAP" is used in the names of numerous supervised ART algorithms, such as Fuzzy ARTMAP.

    Both the inputs and the targets are clustered in these algorithms, and the two sets of clusters are linked.


    A fundamental flaw of the ARTMAP algorithms is that they have no mechanism to prevent overfitting, so they should not be used with noisy data.
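
    Because ARTMAP is built from ART modules, a compact sketch of ART-1 style binary clustering may help make the description below concrete. This is a minimal illustration in Python, not the published algorithm: the class name, the parameter defaults, and the fast-learning rule are simplifying assumptions.

```python
# Minimal sketch of ART-1 style binary clustering (illustrative, simplified).
import numpy as np

class ART1:
    def __init__(self, rho=0.75, alpha=0.001):
        self.rho = rho        # vigilance: how strict a match must be to resonate
        self.alpha = alpha    # choice parameter: small tie-breaker in category choice
        self.weights = []     # one binary prototype per committed category

    def train(self, pattern):
        """Cluster a binary pattern; return the index of the resonating category."""
        I = np.asarray(pattern, dtype=bool)   # assumes at least one active bit
        # Rank committed categories by the ART choice function |I ^ w| / (alpha + |w|).
        order = sorted(
            range(len(self.weights)),
            key=lambda j: -np.sum(I & self.weights[j])
                          / (self.alpha + np.sum(self.weights[j])),
        )
        for j in order:
            match = np.sum(I & self.weights[j]) / np.sum(I)
            if match >= self.rho:                       # vigilance test: resonance
                self.weights[j] = I & self.weights[j]   # fast learning: prune prototype
                return j
        self.weights.append(I.copy())                   # no resonance: commit new category
        return len(self.weights) - 1
```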


    How Does The ARTMAP Neural Network Work?



    ARTMAP is a novel neural network architecture that autonomously learns recognition categories for arbitrary numbers of arbitrarily ordered vectors, based on predictive success. 

    This supervised learning system is built from a pair of Adaptive Resonance Theory modules (ARTa and ARTb), each of which can self-organize stable recognition categories in response to arbitrary sequences of input patterns. 

    The ARTa module receives a stream {a(p)} of input patterns, and the ARTb module receives a stream {b(p)} of target patterns, where b(p) is the correct prediction given a(p). 

    These ART modules are connected by an internal controller and an associative learning network that together ensure autonomous system operation in real time. 

    During test trials, the remaining patterns a(p) are presented without b(p), and the system's predictions at ARTb are compared with the correct b(p). 



    Tested on a benchmark machine-learning database in both online and offline simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100 percent accuracy after training on less than half of the input patterns in the database. 

    It achieves these properties through an internal controller that, on a trial-by-trial basis, links predictive success to category size, conjointly maximizing predictive generalization and minimizing predictive error using only local operations. 

    This computation raises the vigilance parameter ρa of ARTa by the minimum amount needed to correct a predictive error at ARTb. 

    Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a(p) in order to accept it, rather than seek a better one through an automatically controlled process of hypothesis testing. 

    ρa is compared against the degree of match between a(p) and the top-down learned expectation, or prototype, that is read out upon activation of an ARTa category. 

    If the degree of match is less than ρa, a search for a better category is initiated. 


    The self-organizing expert system known as ARTMAP adjusts the selectivity of its hypotheses depending on the accuracy of its predictions. 

    As a result, rare but important events can be quickly and sharply distinguished, even if they are similar to frequent events with different consequences. 

    Between input trials, ρa relaxes to a baseline vigilance. 

    When the baseline vigilance is high, the system runs in a conservative mode in which it makes predictions only when it is confident of the outcome. 

    Consequently, few false-alarm errors occur at any stage of learning, yet the system still reaches its asymptote quickly. 

    Because ARTMAP learning is self-stabilizing, it can continue learning about one or more databases without degrading its corpus of memories until its full memory capacity is used.
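
    As a hedged sketch of the behavior just described, the class below adds ARTMAP-style supervision on top of the ART1 sketch above: a map field links each ARTa category to an ARTb category, and a predictive error raises ARTa's vigilance by the smallest amount that forces a new search, after which vigilance returns to baseline for the next trial. For brevity this sketch lets ARTa update its prototype before the map-field check, whereas the published algorithm defers learning until the prediction is confirmed.

```python
# Sketch of ARTMAP-style match tracking, building on the ART1 sketch above.
import numpy as np

class SimpleARTMAP:
    def __init__(self, baseline_rho=0.0, target_rho=0.9):
        self.arta = ART1(rho=baseline_rho)   # clusters input patterns a(p)
        self.artb = ART1(rho=target_rho)     # clusters target patterns b(p)
        self.baseline_rho = baseline_rho
        self.map = {}                        # map field: ARTa category -> ARTb category

    def train_pair(self, a, b):
        target = self.artb.train(b)          # cluster the correct prediction b(p)
        self.arta.rho = self.baseline_rho    # vigilance relaxes between input trials
        while True:
            j = self.arta.train(a)           # hypothesis: an ARTa category for a(p)
            if self.map.get(j, target) == target:
                self.map[j] = target         # prediction confirmed (or new): learn link
                return j
            # Predictive error at ARTb: raise vigilance just above the current
            # match so category j fails the vigilance test, triggering search.
            I = np.asarray(a, dtype=bool)
            match = np.sum(I & self.arta.weights[j]) / np.sum(I)
            self.arta.rho = match + 1e-6
```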


    What Is Fuzzy ARTMAP?



    Fuzzy ARTMAP is a neural network architecture for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzily or crisply defined sets of features. 

    The architecture achieves a synthesis of fuzzy logic and Adaptive Resonance Theory (ART) neural networks by exploiting a close formal resemblance between the computations of fuzzy subsethood and those of ART category choice, resonance, and learning. 
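
    The fuzzy computations mentioned above are compact enough to sketch directly. In fuzzy ART, the binary intersection of ART-1 is replaced by the fuzzy AND (the componentwise minimum); the functions below follow the standard fuzzy ART choice and match definitions, with the complement coding conventional in fuzzy ARTMAP. Parameter names and defaults are illustrative.

```python
# Sketch of the core fuzzy ART computations used by fuzzy ARTMAP.
import numpy as np

def complement_code(x):
    """Map x in [0,1]^n to (x, 1-x), normalizing total input magnitude."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, 1.0 - x])

def choice(I, w, alpha=0.001):
    """Fuzzy ART choice function T_j = |I ^ w_j| / (alpha + |w_j|)."""
    return np.minimum(I, w).sum() / (alpha + w.sum())

def match(I, w):
    """Fuzzy ART match function |I ^ w_j| / |I|, tested against vigilance rho."""
    return np.minimum(I, w).sum() / I.sum()

# Example: complement_code([0.2, 0.7]) -> array([0.2, 0.7, 0.8, 0.3])
```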



    Fuzzy ARTMAP performance was demonstrated in comparison with benchmark backpropagation and genetic algorithm systems across four classes of simulations. 



    These simulations include recognizing letters from a letter-image database, learning to distinguish between two spirals, identifying points inside versus outside a circle, and incrementally approximating a piecewise-continuous function. 

    Additionally, the fuzzy ARTMAP system is contrasted with Simpson's FMMC system and Salzberg's NGE systems.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram












    Artificial Intelligence - Who Is Sherry Turkle?

     


     

     

    Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

    While her study in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is utilized at the expense of building and maintaining meaningful interpersonal connections.



    She has studied AI-enabled products such as children's toys and robotic pets for the elderly to highlight what people lose out on when interacting with such things.


    Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the creator of the MIT Initiative on Technology and the Self.

    In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual shift in the understanding of AI that occurred between the 1960s and the 1980s, one that substantially changed the way humans connect to and interact with AI.



    She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


    However, this viewpoint has given place to one that considers intelligence to be emergent.

    This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

    The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

    In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



    The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study and writing on the subject.


    Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


    She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


    This method comprises setting up time for silent meditation so that participants may think thoroughly about their interactions with their equipment.


    Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

    These issues are connected to the increased usage of social media as a form of communication, as well as the continuous degree of familiarity and relatability with technology gadgets, which stems from the emerging AI paradigm that has become practically omnipresent.

    She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being across a telegraph line in his book God & Golem, Inc. (1964).

    Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



    In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


    Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

    In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

    She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

    The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

    Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

    In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

    This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



    Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


    In Alone Together, she offers the example of Adam, who enjoys the appreciation of the AI bots he commands in the game Civilization.

    Adam appreciates the fact that he is able to create something fresh when playing.

    Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

    In Reclaiming Conversation, she expands on this point, suggesting that these artificial social partners merely provide a perception of camaraderie.

    This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

    Turkle believes that this transition is critical.


    She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.


    A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


    Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


    • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one wants to obtain from partnerships have been reduced.
    • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in partnerships.
    • By engaging with gadgets, one may form a relationship with them.
    • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

    In other words, the most that can be anticipated is engagement.



    Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


    Turkle argues we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans, drawing together these numerous streams of argument.

    For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

    Turkle has a problem with this because these devices can only respond as if they understand what is being said.


    In reality, AI-based gadgets are confined to processing the literal content of the data stored on the device.

    They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

    To an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.

    A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


    While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

    Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


    Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


    • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
    • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 

    Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.





    Artificial Intelligence - What Is The Turing Test?

     



     

    The Turing Test is a method of determining whether a machine can exhibit intelligence that mimics, or is equivalent to, human intelligence. 

    The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

    Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype—what Turing calls "The Imitation Game," is the test's locus classicus.

    In this game, a judge is asked to determine which of two rooms is occupied by a computer and which by another human, based on anonymized replies to natural-language questions that the judge poses to each occupant. 

    While the human respondent must answer the judge's questions truthfully, the machine's goal is to fool the judge into believing that it is human.
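
    As a concrete rendering of the protocol, the sketch below wires the game together in Python. All names here are hypothetical scaffolding: any callable that maps a question string to an answer string can play a respondent, and the judge is a callable that names the room it believes holds the human.

```python
# Hypothetical harness for the Imitation Game as described above.
import random

def imitation_game(judge, human_respondent, machine_respondent, questions):
    """Return True if the judge mistakes the machine for the human."""
    # Anonymize the rooms: randomly decide which label the machine receives.
    rooms = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        rooms = {"A": machine_respondent, "B": human_respondent}
    # Pose every question to each occupant and record the replies.
    transcript = {label: [(q, respond(q)) for q in questions]
                  for label, respond in rooms.items()}
    verdict = judge(transcript)   # the judge names the room it takes to be human
    return rooms[verdict] is machine_respondent
```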





    According to Turing, the machine may be considered intelligent to the degree that it is successful at this job.

    The fundamental benefit of this essentially operationalist view of intelligence is that it avoids complex metaphysics and epistemological issues about the nature and inner experience of intelligent activities.

    According to Turing's criteria, little more than empirical observation of outward behavior is required to predicate intelligence of an object. 

    This is in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a necessary condition of intelligence.

    Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint—namely, how to be confident of the presence of other intelligent individuals if it is impossible to know their thoughts from a presumably required first-person perspective.



    Nonetheless, the Turing Test, at least insofar as it considers intelligence in a strictly formalist manner, remains bound up with the spirit of Cartesian epistemology.

    The machine in the Imitation Game is a digital computer in Turing's sense: a set of operations that may in principle be implemented in any material.


    A digital computer consists of three parts: a store of information, an executive unit that carries out individual orders, and a control that regulates the executive unit.






    However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

    What matters is the formal set of rules that make up the computer's very nature.
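
    Turing's three-part scheme can be made concrete with a toy interpreter: the store holds information, the executive unit carries out individual orders, and the control decides which order runs next. The instruction set below is invented purely for illustration; the point is that the same formal rules could equally be realized in relays, gears, or silicon.

```python
# Toy rendering of Turing's store / executive unit / control decomposition.
def run(program, store):
    control = 0                                # control: index of the next order
    while control < len(program):
        op, *args = program[control]           # control hands an order to ...
        if op == "SET":                        # ... the executive unit, which acts
            store[args[0]] = args[1]
        elif op == "ADD":
            store[args[0]] += store[args[1]]
        elif op == "JUMP_IF_ZERO":             # conditional transfer of control
            if store[args[0]] == 0:
                control = args[1]
                continue
        control += 1
    return store

print(run([("SET", "x", 2), ("SET", "y", 3), ("ADD", "x", "y")], {}))
# -> {'x': 5, 'y': 3}
```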

    Turing holds to the core belief that intellect is inherently immaterial.

    If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.


    Since Turing's work, AI research has been split into two camps: those who embrace this fundamental premise and those who oppose it.


    To describe the first camp, John Haugeland coined the term "good old-fashioned AI," or GOFAI. 

    Its adherents include Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.



    Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited AI technique.

    John Searle's "Minds, Brains, and Programs" (1980), in which Searle presents his now-famous Chinese Room thought experiment, is one of the most renowned criticisms of GOFAI in general, and of the assumptions of the Turing Test in particular.





    In the latter, a person with no prior understanding of Chinese is placed in a room and instructed to correlate the Chinese characters she receives with other Chinese characters she sends out, according to a script written in English. 

    Searle argues that, given adequate mastery of the script, the person in the room could pass the Turing Test, fooling a native Chinese speaker into believing she knew Chinese. 

    Yet because the person in the room functions exactly as a digital computer does, Turing-type tests, according to Searle, fail to capture the phenomenon of understanding, which he claims involves more than the functionally accurate correlation of inputs and outputs.

    Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

    Continuing his explanation of the Chinese Room thought experiment, Searle adds that the physical makeup of human beings, particularly their sophisticated nervous systems and brain tissue, should not be dismissed as irrelevant to conceptions of intelligence.
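
    A toy version of the thought experiment makes Searle's point vivid: a lookup table can produce contextually appropriate replies while grasping nothing. The rule book below contains two invented placeholder entries; the operator matches symbol shapes, never meanings.

```python
# Toy Chinese Room: correct input-output behavior with no understanding.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I am fine, thanks."
    "今天天气好吗?": "今天天气很好.",   # "Is the weather nice today?" -> "Very nice."
}

def chinese_room(incoming):
    # Follow the script: look up the incoming symbols, emit the paired reply.
    return RULE_BOOK.get(incoming, "请再说一遍.")   # fallback: "Please say that again."
```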


    This viewpoint has influenced connectionism, an altogether new approach to AI that aims to build computer intelligence by replicating the electrical circuitry of human brain tissue.


    The effectiveness of this strategy has been hotly contested, although it looks to outperform GOFAI in terms of developing generalized kinds of intelligence.

    Turing's test, however, may be criticized not just from the standpoint of materialism but also from that of a renewed formalism.





    As a result, one may argue that Turing tests are insufficient as a measure of intelligence since they attempt to reproduce human behavior, which is frequently exceedingly dumb.


    According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.

    This line of criticism has gotten more acute as AI research has shifted its focus to the potential of so-called super-intelligence: forms of generalized machine intelligence that far outperform human intellect.


    Should this next level of AI be attained, Turing tests would seem to be outdated.

    Furthermore, merely discussing the idea of superintelligence would seem to require criteria of intelligence beyond strict Turing testing.

    Turing may be defended against such accusations by pointing out that establishing a universal criterion of intelligence was never his goal.



    Indeed, according to Turing, the purpose is to replace the metaphysically problematic question "can machines think" with a more empirically verifiable alternative: "What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).


    Thus the above-mentioned flaw of Turing's test, its failure to establish a priori standards of rationality, is also part of its strength and motivation.

    It also explains why the test has had such a lasting influence on AI research in all domains since it was first presented three-quarters of a century ago.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 

    Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


    References And Further Reading

    Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

    Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

    Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.


