
Artificial Intelligence - Who Is Ben Goertzel (1966–)?


Ben Goertzel is the founder and CEO of SingularityNET, a blockchain AI company, as well as the chairman of Novamente LLC, a research professor at Xiamen University's Fujian Key Lab for Brain-Like Intelligent Systems, the chief scientist of Mozi Health and Hanson Robotics in Shenzhen, China, and the chair of the OpenCog Foundation, Humanity+, and Artificial General Intelligence Society conference series. 

Goertzel has long wanted to create a good artificial general intelligence and use it in bioinformatics, finance, gaming, and robotics.

He claims that, despite its current limitations, AI is already superior to human specialists in a number of domains.

Goertzel divides AI advancement into three stages, each representing a step toward a global brain (Goertzel 2002, 2), among them the intelligent Internet and the full-fledged Singularity.

In 2019, Goertzel presented a lecture titled "Decentralized AI: The Power and the Necessity" at TEDxBerkeley.

He examines artificial intelligence in its present form as well as its future in this discussion.

"The relevance of decentralized control in leading AI to the next stages, the strength of decentralized AI," he emphasizes (Goertzel 2019a).

In the evolution of artificial intelligence, Goertzel distinguishes three types: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Artificial narrow intelligence refers to machines that can "address extremely specific issues... better than humans" (Goertzel 2019a).

In certain restricted activities, such as chess and Go, this kind of AI has outperformed a human.

Ray Kurzweil, an American futurologist and inventor, coined the phrase "narrow AI."

Artificial general intelligence (AGI) refers to intelligent computers that can "generate knowledge" in a variety of fields and have "humanlike autonomy." By 2029, according to Goertzel, this kind of AI will have reached the same level of intellect as humans.

Artificial superintelligence (ASI) builds on both narrow and general AI, but it can also reprogram itself.



By 2045, he claims, this kind of AI will be smarter than the finest human brains in terms of "scientific innovation, general knowledge, and social abilities" (Goertzel 2019a).

According to Goertzel, Facebook, Google, and a number of colleges and companies are all actively working on AGI.

According to Goertzel, the shift from AI to AGI will occur within the next five to thirty years.

Goertzel is also interested in artificial intelligence-assisted life extension.

He believes that artificial intelligence's exponential advancement will lead to technologies that extend human life span and health indefinitely.

He predicts that by 2045, a singularity featuring a drastic increase in "human health span" will have occurred (Goertzel 2012).

Vernor Vinge popularized the term "singularity" in his 1993 article "The Coming Technological Singularity," and Ray Kurzweil brought the phrase to a wider audience in his 2005 book The Singularity Is Near.

The Technological Singularity, according to both writers, is the merging of machine and human intellect as a result of a fast development in new technologies, particularly robots and AI.

The thought of an impending singularity excites Goertzel.

SingularityNET is his major current initiative, which entails the construction of a worldwide network of artificial intelligence researchers interested in developing, sharing, and monetizing AI technology, software, and services.

By developing a decentralized protocol that enables a full stack AI solution, Goertzel has made a significant contribution to this endeavor.

SingularityNET, as a decentralized marketplace, provides a variety of AI technologies, including text generation, AI Opinion, iAnswer, Emotion Recognition, Market Trends, OpenCog Pattern Miner, and its own cryptocurrency, AGI token.

SingularityNET is presently cooperating with Domino's Pizza in Malaysia and Singapore (Khan 2019).



Domino's is interested in leveraging SingularityNET technologies to design a marketing plan, with the goal of providing the finest products and services to its consumers via the use of unique algorithms.

Domino's thinks that by incorporating the AGI ecosystem into their operations, they will be able to provide value and service in the food delivery market.

Goertzel has responded to the warning from scientist Stephen Hawking that AI might lead to the extinction of human civilization.

Given the current situation, artificial superintelligence's mental state will be based on past AI generations, thus "selling, spying, murdering, and gambling are the key aims and values in the mind of the first super intelligence," according to Goertzel (Goertzel 2019b).

He acknowledges that if humans desire compassionate AI, they must first improve their own treatment of one another.

For four years, Goertzel worked for Hanson Robotics in Hong Kong.

He collaborated with Sophia, Einstein, and Han, three well-known robots.

"Great platforms for experimenting with AI algorithms, including cognitive architectures like OpenCog that aim at human-level AI," he said of the robots (Goertzel 2018).

Goertzel argues that essential human values may be retained for future generations in Sophia-like robot creatures after the Technological Singularity.

Decentralized networks like SingularityNET and OpenCog, according to Goertzel, provide "AIs with human-like values," reducing AI hazards to humanity (Goertzel 2018).

Because human values are complicated in nature, Goertzel believes that encoding them as a list of rules is ineffective.

Brain-computer interfacing (BCI) and emotional interfacing are two ways Goertzel offers.

Humans will become "cyborgs," with their brains physically linked to computational-intelligence modules, and the machine components of these cyborgs will be able to read the moral-value-evaluation structures of the human mind directly from their biological components (Goertzel 2018).

Goertzel uses Elon Musk's Neuralink as an example.

Because it entails invasive trials with human brains and a lot of unknowns, Goertzel doubts that this strategy will succeed.

"Emotional and spiritual connections between people and AIs, rather than Ethernet cables or Wifi signals, are used to link human and AI brains," according to the second method (Goertzel 2018).

To practice human values, he proposes that AIs participate in emotional and social connection with humans via facial expression detection and mirroring, eye contact, and voice-based emotion recognition.

To that end, Goertzel collaborated with SingularityNET, Hanson AI, and Lia Inc on the "Loving AI" research project, which aims to help artificial intelligences converse and form intimate connections with humans.

A funny video of actor Will Smith on a date with Sophia the Robot is presently available on the Loving AI website.

Sophia can already make sixty facial expressions and understand human language and emotions, according to the video of the date.

When linked to a network like SingularityNET, humanoid robots like Sophia obtain "ethical insights and breakthroughs... via language," according to Goertzel (Goertzel 2018).

Then, through a shared internet "mindcloud," robots and AIs may share what they've learnt.

Goertzel is also the chair of the Artificial General Intelligence Society's Conference Series on Artificial General Intelligence, which has been conducted yearly since 2008.

The Journal of Artificial General Intelligence is a peer-reviewed open-access academic periodical published by the organization. Goertzel is the editor of the conference proceedings series.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 

General and Narrow AI; Superintelligence; Technological Singularity.


Further Reading:


Goertzel, Ben. 2002. Creating Internet Intelligence: Wild Computing, Distributed Digital Consciousness, and the Emerging Global Brain. New York: Springer.

Goertzel, Ben. 2012. “Radically Expanding the Human Health Span.” TEDxHKUST. https://www.youtube.com/watch?v=IMUbRPvcB54.

Goertzel, Ben. 2017. “Sophia and SingularityNET: Q&A.” H+ Magazine, November 5, 2017. https://hplusmagazine.com/2017/11/05/sophia-singularitynet-qa/.

Goertzel, Ben. 2018. “Emotionally Savvy Robots: Key to a Human-Friendly Singularity.” https://www.hansonrobotics.com/emotionally-savvy-robots-key-to-a-human-friendly-singularity/.

Goertzel, Ben. 2019a. “Decentralized AI: The Power and the Necessity.” TEDxBerkeley, March 9, 2019. https://www.youtube.com/watch?v=r4manxX5U-0.

Goertzel, Ben. 2019b. “Will Artificial Intelligence Kill Us?” July 31, 2019. https://www.youtube.com/watch?v=TDClKEORtko.

Goertzel, Ben, and Stephan Vladimir Bugaj. 2006. The Path to Posthumanity: 21st Century Technology and Its Radical Implications for Mind, Society, and Reality. Bethesda, MD: Academica Press.

Khan, Arif. 2019. “SingularityNET and Domino’s Pizza Announce a Strategic Partnership.” https://blog.singularitynet.io/singularitynet-and-dominos-pizza-announce-a-strategic-partnership-cbbe21f80fc7.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. NASA Lewis Research Center.





Artificial Intelligence - General and Narrow Categories Of AI.






There are two types of artificial intelligence: general (also called strong or full) and narrow (also called weak or specialized).

In the real world, general AI, such as that seen in science fiction, does not yet exist.

Machines with general intelligence would be capable of completing every intellectual task that humans can.

This sort of system would also seem to think in abstract terms, establish connections, and communicate innovative ideas in the same manner that people do, displaying the ability to reason and solve problems.



Such a computer would be capable of thinking, planning, and recalling information from the past.

While the aim of general AI has yet to be achieved, there are more and more instances of narrow AI.

These are machines that perform at human (or even superhuman) levels on certain tasks.

Computers that have learnt to play complicated games have abilities, techniques, and behaviors that are comparable to, if not superior to, those of the most skilled human players.

AI systems have also been developed that can translate between languages in real time, interpret and respond to natural speech (both spoken and written), and recognize images (identifying and sorting photos based on their content).

However, the ability to generalize knowledge or skills is still largely a human accomplishment.

Nonetheless, there is a lot of work being done in the field of general AI right now.

It will be difficult to determine when a computer develops human-level intelligence.

Several serious and humorous tests have been suggested to determine whether a computer has reached the level of general AI.

The Turing Test is arguably the most famous of these.

In this test, a machine converses with a human while another human listens in, unable to see the participants.

The human evaluator must figure out which speaker is the machine and which is the human.

The machine passes the test if it can fool the evaluator a prescribed percentage of the time.
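The pass criterion can be sketched as a toy function; the function name and the 30 percent default threshold are illustrative assumptions, not part of Turing's original formulation.

```python
# Toy sketch (hypothetical): the machine "passes" when it fools the
# human evaluator at or above some prescribed rate across sessions.

def passes_turing_test(judgments, threshold=0.3):
    """judgments: booleans, True where the evaluator mistook machine for human."""
    fooled_rate = sum(judgments) / len(judgments)
    return fooled_rate >= threshold

# Fooled in 3 of 5 sessions: a rate of 0.6, above the assumed threshold.
print(passes_turing_test([True, False, True, False, True]))
```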

The Coffee Test is a more fanciful test in which a machine enters a typical household and brews coffee.



It has to find the coffee machine, look for the coffee, add water, boil the coffee, and pour it into a cup.

Another is the Flat Pack Furniture Test, which involves a machine receiving, unpacking, and assembling a piece of furniture based only on the instructions supplied.

Some scientists, as well as many science fiction writers and fans, believe that once intelligent machines reach a tipping point, they will be able to improve exponentially.

AI-based beings that far exceed human capabilities might be one conceivable result.

The point at which AI takes control of its own self-improvement is known as the Singularity, or artificial superintelligence (ASI).

If ASI is achieved, it will have unforeseeable consequences for human society.

Some pundits worry that ASI would jeopardize humanity's safety and dignity.

Whether the Singularity will ever happen, and how dangerous it might be, remains a matter of dispute.

Narrow AI applications are becoming more popular across the globe.

Machine learning (ML) is at the heart of most new applications, and most AI examples in the news are connected to this subset of technology.

Traditional or conventional algorithms are not the same as machine learning programs.

In programs that cannot learn, a computer programmer actively adds code to account for every action of an algorithm.

All of the decisions made along the process are governed by the programmer's guidelines.

This necessitates the programmer imagining and coding for every possible circumstance that an algorithm may face.

This kind of program code is bulky and often inadequate, especially since it must be updated frequently to account for new or unanticipated scenarios.

The utility of hard-coded algorithms approaches its limit in cases where the criteria for optimum judgments are unclear or impossible for a human programmer to foresee.

Machine learning is the process of training a computer to detect and identify patterns via examples rather than predefined rules.



This is achieved, according to Google engineer Jason Mayes, by reviewing extremely large quantities of training data or participating in some other kind of programmed learning step.

New patterns may be extracted by processing the test data.

The system may then classify previously unseen data based on the patterns it has already found.

Machine learning allows an algorithm to recognize patterns or rules underlying decision-making processes on its own.

Machine learning also allows a system's output to improve over time as it gains more experience (Mayes 2017).

A human programmer continues to play a vital role in this learning process, influencing results by making choices like developing the exact learning algorithm, selecting the training data, and choosing other design elements and settings.

Machine learning is powerful once it's up and running because it can adapt and enhance its ability to categorize new data without the need for direct human interaction.

In other words, output quality improves as the system gains experience.
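The contrast between hard-coded rules and learned rules can be sketched in a few lines. This is a hypothetical toy, not a real ML library: instead of the programmer writing the decision rule, the program derives a classification boundary from labeled examples.

```python
# Hypothetical sketch: a "rule" (here, a threshold) is learned from
# labeled examples rather than hand-coded by the programmer.

def learn_threshold(examples):
    """Learn a boundary between two classes from (value, label) pairs."""
    small = [v for v, label in examples if label == "small"]
    large = [v for v, label in examples if label == "large"]
    # Place the threshold midway between the two classes seen in training.
    return (max(small) + min(large)) / 2

def classify(value, threshold):
    return "large" if value >= threshold else "small"

# The training data stands in for Mayes's "programmed learning step."
training = [(1, "small"), (2, "small"), (3, "small"),
            (8, "large"), (9, "large"), (10, "large")]
t = learn_threshold(training)  # t == 5.5, derived from data, not hand-coded
print(classify(4, t))          # small
print(classify(7, t))          # large
```

Nothing in `classify` was written for the specific values 4 or 7; the boundary came from the examples, which is the essence of the ML approach described above.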

Artificial intelligence is a broad term that refers to the science of making computers intelligent.

According to scientists, AI is a computer system that can collect data and use it to make decisions or solve problems.

Another popular scientific definition of AI is "a software program paired with hardware that can receive (or sense) inputs from the world around it, evaluate and analyze those inputs, and create outputs and suggestions without the assistance of a person."

When programmers claim an AI system can learn, they're referring to the program's ability to change its own processes in order to provide more accurate outputs or predictions.

AI-based systems are now being developed and used in practically every industry, from agriculture to space exploration, and in applications ranging from law enforcement to online banking.

The methods and techniques used in computer science are always evolving, extending, and improving.

Other terminology linked to machine learning, such as reinforcement learning and neural networks, are important components of cutting-edge artificial intelligence systems.


Jai Krishna Ponnappan





See also: 

Embodiment, AI and; Superintelligence; Turing, Alan; Turing Test.


Further Reading:


Kelnar, David. 2016. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Mayes, Jason. 2017. Machine Learning 101. https://docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/htmlpresent.

Müller, Vincent C., and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 553–71. New York: Springer.

Russell, Stuart, and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

Samuel, Arthur L. 1988. “Some Studies in Machine Learning Using the Game of Checkers I.” In Computer Games I, 335–65. New York: Springer.



Artificial Intelligence - What Are Non-Player Characters And Emergent Gameplay?

 


Emergent gameplay occurs when a player in a video game encounters complex scenarios as a result of their interactions within the game world.


Players may fully immerse themselves in an intricate and realistic game environment and feel the consequences of their choices in today's video games.

Players may personalize and build their character and tale.

In the Deus Ex series (2000), for example, one of the first emergent gameplay systems, players take on the role of a cyborg in a dystopian metropolis.

They may change the physical appearance of their character as well as their skill sets, missions, and affiliations.

Players may choose between militarized adaptations that allow for more aggressive play and stealthier options.

The plot and experience are altered by the choices made on how to customize and play, resulting in unique challenges and results for each player.


When players interact with other characters or items, emergent gameplay ensures that the game environment reacts.



Because of the many options, the tale unfolds in surprising ways as the game world changes.

Specific outcomes are not predetermined by the designer, and emergent gameplay can even take advantage of game flaws to generate actions in the game world, which some consider to be a form of emergence.

Artificial intelligence has become more popular among game creators in order to have the game environment respond to player actions in a timely manner.

Artificial intelligence aids the behavior of video game characters and their interactions via algorithms, simple rule-based procedures that help generate the game environment in sophisticated ways.

"Game AI" refers to the usage of artificial intelligence in games.

The most common use of AI algorithms is to drive non-player characters (NPCs), the characters in the game world with whom the player interacts but whom the player does not control.


In its most basic form, AI will use pre-scripted actions for the characters, who will then concentrate on reacting to certain events.


Pre-scripted character behaviors performed by AI are fairly rudimentary, and NPCs are meant to respond to certain "case" events.

The NPC will evaluate its current situation before responding in a range determined by the AI algorithm.

Pac-Man (1980) is a good early and basic illustration of this.

Pac-Man is controlled by the player through a labyrinth while being pursued by a variety of ghosts, who are the game's non-player characters.


Players could only interact with ghosts (NPCs) by moving about; ghosts had limited replies and their own AI-programmed pre-scripted movement.




The AI's scripted reaction would be triggered when a ghost ran into a wall.

It would then roll an AI-created die to determine whether or not the NPC would turn toward or away from the direction of the player.

If the NPC decided to go after the player, the AI pre-scripted program would then detect the player's location and turn the ghost toward them.

If the NPC decided not to go after the player, it would turn in an opposite or a random direction.

This NPC interaction is very simple and limited; however, this was an early step in AI providing emergent gameplay.
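The ghost's decision procedure described above might be sketched as follows; this is a hypothetical reconstruction for illustration, not Pac-Man's actual code.

```python
import random

# Hypothetical sketch of the pre-scripted ghost logic: on hitting a wall,
# the ghost "rolls a die" to decide whether to chase the player or wander.

def ghost_turn(ghost_pos, player_pos, chase_probability=0.5, rng=random):
    """Return the ghost's new direction after hitting a wall."""
    if rng.random() < chase_probability:
        # Chase: turn toward the player's current location.
        dx = player_pos[0] - ghost_pos[0]
        dy = player_pos[1] - ghost_pos[1]
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"
    # Otherwise wander: pick a random direction.
    return rng.choice(["up", "down", "left", "right"])
```

Every branch is fixed in advance by the programmer; the only variation comes from the die roll and the player's position, which is what makes this pre-scripted rather than adaptive AI.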



Contemporary games provide a variety of options available and a much larger set of possible interactions for the player.


Players in contemporary role-playing games (RPGs) are given an incredibly high number of potential options, as exemplified by Fallout 3 (2008) and its sequels.

Fallout is a role-playing game, where the player takes on the role of a survivor in a post-apocalyptic America.

The story narrative gives the player a goal with no direction; as a result, the player is given the freedom to play as they see fit.

The player can punch every NPC, or they can talk to them instead.

In addition to this variety of actions by the player, there are also a variety of NPCs controlled through AI.

Some of the NPCs are key NPCs, which means they have their own unique scripted dialogue and responses.

This provides them with a personality and provides a complexity through the use of AI that makes the game environment feel more real.


When talking to key NPCs, the player is given options for what to say, and the key NPCs will have their own unique responses.


This differs from the background character NPCs, as the key NPCs are supposed to respond in such a way that it would emulate interaction with a real personality.

These are still pre-scripted responses to the player, but the NPC responses are emergent based on the possible combination of the interaction.

As the player makes decisions, the NPC will examine this decision and decide how to respond in accordance to its script.

The NPCs that the players help or hurt and the resulting interactions shape the game world.
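A key NPC's scripted dialogue can be thought of as a lookup from player choices to pre-written responses. The choices and lines below are invented for illustration; real RPGs use much larger branching trees.

```python
# Hypothetical sketch: each player choice maps to a pre-written reply,
# so responses feel personal while remaining fully pre-scripted.

DIALOGUE = {
    "greet": "Good to see a friendly face out here.",
    "threaten": "Back off, or this conversation is over.",
    "ask_quest": "There's trouble at the old depot. Interested?",
}

def npc_respond(player_choice):
    # Any unscripted choice falls through to a generic fallback line.
    return DIALOGUE.get(player_choice, "The stranger stares at you silently.")
```

The emergence comes not from any single reply but from the combinations: which NPCs the player greets, threatens, or helps determines which scripted paths the game world takes.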

Game AI can emulate personalities and present emergent gameplay in a narrative setting; however, AI is also involved in challenging the player in difficulty settings.


A variety of pre-scripted AI can still be used to create difficulty.

Pre-scripted AI is often made to have enemy NPCs make suboptimal decisions in games where players fight.

This helps make the game easier and also makes the NPCs seem more human.

Suboptimal pre-scripted decisions make the enemy NPCs easier to handle.

Optimal decisions however make the opponents far more difficult to handle.

This can be seen in contemporary games like Tom Clancy’s The Division (2016), where players fight multiple NPCs.

The enemy NPCs range from angry rioters to fully trained paramilitary units.

The rioter NPCs offer an easier challenge as they are not trained in combat and make suboptimal decisions while fighting the player.

The military trained NPCs are designed to have more optimal decision-making AI capabilities in order to increase the difficulty for the player fighting them.
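One plausible way to implement this difference in decision quality is to give each NPC type a "skill" probability of choosing the best-scoring action; this is a hypothetical sketch, not The Division's actual AI.

```python
import random

# Hypothetical sketch: difficulty via decision quality. A trained NPC
# (skill near 1.0) almost always picks the best-scoring action; a rioter
# NPC (low skill) often picks a deliberately worse one.

def pick_action(actions, scores, skill, rng=random):
    """skill in [0, 1]: probability of choosing the optimal action."""
    ranked = sorted(actions, key=lambda a: scores[a], reverse=True)
    if rng.random() < skill:
        return ranked[0]           # optimal choice
    return rng.choice(ranked[1:])  # deliberately suboptimal choice

actions = ["take_cover", "flank", "charge"]
scores = {"take_cover": 3, "flank": 2, "charge": 1}
```

Tuning a single `skill` parameter per NPC type gives designers a simple dial between "easy and human-seeming" and "hard and ruthless."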



Emergent gameplay has evolved to its full potential through use of adaptive AI.


As with pre-scripted AI, the character examines a variety of variables and plans an action.

However, unlike pre-scripted AI, which follows direct decisions, the adaptive AI character will make its own decisions.

This can be done through computer-controlled learning.


AI-created NPCs follow rules of interactions with the players.


As players go through the game, the player interactions are analyzed, and some AI judgments become more weighted than others.

This is done in order to provide distinct player experiences.

Various player behaviors are actively examined, and modifications are made by the AI when designing future challenges.

The purpose of the adaptive AI is to challenge the players to a degree that the game is fun while not being too easy or too challenging.

Difficulty may still be changed if players seek a different challenge.

This may be observed in the Left 4 Dead (2008) series' AI Director.

Players navigate through a level, killing zombies and gathering resources in order to live.


The AI Director chooses which zombies to spawn, where they will spawn, and what supplies will be spawned.

The choice to spawn them is not made at random; rather, it is based on how well the players performed throughout the level.

The AI Director makes its own decisions about how to respond; as a result, the AI Director adapts to the level's player success.

The AI Director provides fewer resources and spawns more adversaries as the difficulty level rises.
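The AI Director's behavior can be sketched as a function of recent player performance; this is a hypothetical reconstruction, and Valve's actual system tracks many more signals.

```python
# Hypothetical sketch in the spirit of Left 4 Dead's AI Director:
# spawn decisions scale with how well the team has been doing.

def director_plan(team_health, recent_deaths, base_enemies=10, base_supplies=5):
    """Map a performance score in [0, 1] to enemy and supply counts."""
    # High health and few deaths -> high performance -> harder encounters.
    performance = max(0.0, min(1.0, team_health / 100 - 0.2 * recent_deaths))
    enemies = round(base_enemies * (0.5 + performance))
    supplies = round(base_supplies * (1.5 - performance))
    return {"enemies": enemies, "supplies": supplies}

strong = director_plan(team_health=95, recent_deaths=0)  # thriving team
weak = director_plan(team_health=40, recent_deaths=2)    # struggling team
print(strong, weak)  # more enemies for the strong team, more supplies for the weak
```

Because the plan is recomputed from live play data rather than fixed in a script, the same level plays differently on every run, which is the adaptive quality described above.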


Changes in emergent gameplay are influenced by advancements in simulation and game world design.


As virtual reality technology develops, new technologies will continue to help in this progress.

Virtual reality games provide an even more immersive gaming experience.

Players may use their own hands and eyes to interact with the environment.

Computers are growing more powerful, allowing for more realistic pictures and animations to be rendered.


Adaptive AI demonstrates the capability of truly autonomous decision-making, resulting in a genuinely participatory gaming experience.


Game makers are continuing to build more immersive environments as AI improves to provide more lifelike behavior.

These cutting-edge technology and new AI will elevate emergent gameplay to new heights.

Artificial intelligence has emerged as a crucial part of the video game industry, essential for developing realistic and engrossing gameplay.



Jai Krishna Ponnappan





See also: 


Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.







What Is Artificial General Intelligence?



Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the AGI system to solve problems when presented with new tasks. 

In other words, it's AI's capacity to learn similarly to humans.



Strong AI, full AI, and general intelligent action are some names for it. 

The phrase "strong AI," however, is used in only a few academic publications to refer to computer systems that are sentient or aware. 

These definitions may change since specialists from many disciplines see human intelligence from various angles. 

For instance, computer scientists often characterize human intelligence as the capacity to accomplish objectives. 

On the other hand, general intelligence is defined by psychologists in terms of survival or adaptation.

Weak or narrow AI, in contrast to strong AI, is made up of programs created to address a single issue and lacks awareness since it is not meant to have broad cognitive capacities. 

Autonomous cars and IBM's Watson supercomputer are two examples. 

Nevertheless, AGI is defined in computer science as an intelligent system having full or comprehensive knowledge as well as cognitive computing skills.



As of right now, there are no real AGI systems; they are still the stuff of science fiction. 

The long-term objective of these systems is to perform as well as humans do. 

However, due to AGI's superior capacity to acquire and analyze massive amounts of data at a far faster rate than the human mind, it may be possible for AGI to be more intelligent than humans.



Artificial intelligence (AI) is now capable of carrying out a wide range of functions, including providing tailored suggestions based on prior web searches. 

Additionally, it can recognize various items for autonomous cars to avoid, recognize malignant cells during medical inspections, and serve as the brain of home automation. 

Additionally, it may be utilized to find possibly habitable planets, act as intelligent assistants, be in charge of security, and more.



Naturally, AGI seems to go far beyond such capacities, and some scientists are concerned that this may result in a dystopian future.

Elon Musk said that sentient AI would be more hazardous than nuclear war, while Stephen Hawking advised against its creation because it would see humanity as a possible threat and act accordingly.


Despite concerns, most scientists agree that genuine AGI is decades or perhaps centuries away from being developed and must first meet a number of requirements (which are always changing) in order to be achieved. 

These include the capacity for reasoning, strategy, puzzle-solving, and making decisions in the face of ambiguity. 



Additionally, it must be able to plan, learn, and communicate in natural language, as well as represent information, including common sense. 

AGI must also have the capacity to perceive (hear, see, etc.) and the ability to act, such as moving objects and traveling between places to explore. 



How far along are we in the process of developing artificial general intelligence, and who is involved?

According to a 2020 study from the Global Catastrophic Risk Institute (GCRI), academic institutions, businesses, and various governmental agencies are presently working on 72 recognized AGI R&D projects. 



According to the poll, projects nowadays are often smaller, more geographically diversified, less open-source, more focused on humanitarian aims than academic ones, and more centered in private firms than projects in 2017. 

The comparison also reveals a decline in projects with academic affiliations, an increase in projects sponsored by corporations, a rise in projects with a humanitarian emphasis, a decline in programs with ties to the military, and a decline in US-based initiatives.


In AGI R&D, particularly military initiatives that are solely focused on fundamental research, governments and organizations have very little role to play. 

However, recent programs seem to be more varied and are classified using three criteria, including business projects that are engaged in AGI safety and have humanistic end objectives. 

Additionally, it covers tiny private enterprises with a variety of objectives including academic programs that do not concern themselves with AGI safety but rather the progress of knowledge.

One of the most well-known organizations working on AGI is Carnegie Mellon University, which has a project called ACT-R that aims to create a generic cognitive architecture based on the basic cognitive and perceptual functions that support the human mind. 

The project may be thought of as a method of describing how the brain is structured such that different processing modules can result in cognition.


Another pioneering organization testing the limits of AGI is Microsoft Research AI, which has carried out a number of research initiatives, including developing a data set to counter prejudice for machine-learning models. 

The business is also investigating ways to advance moral AI, create a responsible AI standard, and create AI strategies and evaluations to create a framework that emphasizes the advancement of mankind.


The person behind the well-known video game franchises Commander Keen and Doom has launched yet another intriguing endeavor. 

Keen Technologies, John Carmack's most recent business, is an AGI development company that has already raised $20 million in funding from former GitHub CEO Nat Friedman and Cue founder Daniel Gross. 

Carmack is one of the AGI optimists who believes that it would ultimately help mankind and result in the development of an AI mind that acts like a human, which might be used as a universal remote worker.


So what does AGI's future hold? 

Many specialists doubt that AGI will ever be developed, and some believe the urge to build artificial intelligence comparable to humans will eventually fade. 

Others are working to develop it so that everyone will benefit.

For now, the creation of AGI remains in its earliest stages, and little progress is anticipated in the coming decades. 

Nevertheless, throughout history, scientists have debated whether developing technologies with the potential to change people's lives will benefit society as a whole or endanger it. 

This debate arose before the invention of the automobile, during the development of AC electricity, and when the atomic bomb was still only a theory.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.

Be sure to refer to the complete & active AI Terms Glossary here.


Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her study in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is utilized at the expense of building and maintaining meaningful interpersonal connections.



She has examined AI-based products such as children's toys and robotic pets for the elderly to highlight what people lose when they interact with such things.


Turkle has been at the vanguard of AI developments as a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual shift in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans relate to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's own study and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


This method involves setting aside time for quiet reflection so that participants can think deeply about their interactions with their devices.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased usage of social media as a form of communication, as well as the continuous degree of familiarity and relatability with technology gadgets, which stems from the emerging AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


She provides the example of Adam in Alone Together, who enjoys the admiration of the AI bots he commands in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide only a perception of companionship.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the benefits that one hopes to gain from relationships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in relationships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Drawing together these streams of argument, Turkle contends that we have entered a robotic moment in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


In reality, AI-based gadgets are confined to the literal content of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

To an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.
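The point can be made concrete with a short, purely illustrative Python sketch (not any real assistant's API): to scheduling code, two emotionally very different appointments are structurally identical data, and nothing in that data encodes what either event means to the user.

```python
# Illustrative sketch: a device's reminder logic treats all events identically.
from datetime import datetime, timedelta

def reminder(event):
    """Generate a reminder string one hour before the event."""
    remind_at = event["start"] - timedelta(hours=1)
    return f"{remind_at:%H:%M}: upcoming event '{event['title']}'"

# Two appointments that differ enormously in human terms...
car_service = {"title": "Car maintenance", "start": datetime(2024, 5, 1, 9, 0)}
chemo       = {"title": "Chemotherapy",    "start": datetime(2024, 5, 1, 9, 0)}

# ...receive exactly the same processing; the titles are opaque strings.
print(reminder(car_service))  # 08:00: upcoming event 'Car maintenance'
print(reminder(chemo))        # 08:00: upcoming event 'Chemotherapy'
```

The titles pass through the function unexamined, which is precisely Turkle's point: the device manipulates the data, but the meaning resides entirely with the user.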

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.





See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

  • Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.
  • Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.
  • Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.


