
Artificial Intelligence - Speech Recognition And Natural Language Processing

 


Natural language processing (NLP) is a branch of artificial intelligence that analyzes human text and speech in order to understand and respond to human inquiries in a natural, readable manner.

To decode the ambiguities and opacities of genuine human language, NLP has required advances in statistics, machine learning, linguistics, and semantics.

Chatbots will employ natural language processing to connect with humans across text-based and voice-based interfaces in the future.

Computer assistants will support interactions between people with varying abilities and needs.

By making search more natural, they will enable natural language searches of huge volumes of information, such as that found on the internet.

They may also incorporate useful ideas or nuggets of information into a variety of circumstances, including meetings, classes, and informal discussions.



They may even be able to "read" and react in real time to the emotions or moods of human speakers (so-called "sentiment analysis").

By 2025, the market for NLP hardware, software, and services might be worth $20 billion per year.

Speech recognition, often known as voice recognition, has a long history.

Harvey Fletcher, a physicist who pioneered research showing the link between voice energy, frequency spectrum, and the perception of sound by a listener, initiated research into automated speech recognition and transcription at Bell Labs in the 1930s.

Most voice recognition algorithms nowadays are based on his research.

Homer Dudley, another Bell Labs scientist, received patents by 1940 for the Voder voice synthesizer, which imitated human vocalizations, and for a parallel band-pass vocoder that could take sound samples and put them through narrow-band filters to identify their energy levels.

By passing the recorded energy levels through various filters, the latter device could convert them back into crude approximations of the original sounds.

Bell Labs researchers had found out how to make a system that could do more than mimic speech by the 1950s.

During that decade, digital technology had progressed to the point that the system could detect individual spoken word portions by comparing their frequencies and energy levels to a digital sound reference library.

In essence, the system made an informed guess about the word being spoken.

The pace of change was gradual.

By the mid-1950s, Bell Labs systems could distinguish around ten syllables uttered by a single speaker.

Toward the end of the decade, researchers at MIT, IBM, Kyoto University, and University College London were working on recognition systems that employed statistics to detect words containing multiple phonemes.

Phonemes are sound units that are perceived as separate from one another by listeners.



Additionally, progress was being made on systems that could recognize the voice of many speakers.

Allen Newell headed the first professional automated speech recognition group, which was founded in 1971.

The research team divided its work among several levels of knowledge generation, including acoustics, parametrics, phonemics, lexical concepts, sentence processing, and semantics.

Some of the issues examined by the group were investigated with funding from the Defense Advanced Research Projects Agency (DARPA) in the 1970s.

DARPA was interested in the technology because it might be used to handle the massive amounts of spoken data generated by multiple government departments and to transform that data into insights and strategic solutions to challenges.

Progress was made on techniques such as dynamic time warping and continuous speech recognition.
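Dynamic time warping aligns two utterances that differ in speaking rate. The sketch below is a minimal illustration; the integer sequences are invented stand-ins for the acoustic feature vectors a real recognizer would compare.

```python
# Minimal dynamic time warping (DTW) sketch. The sequences below
# are invented stand-ins for acoustic feature values.

def dtw_distance(a, b):
    """Minimal cost of aligning sequences a and b, allowing
    stretches and compressions along the time axis."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

# The same "word" spoken at two different speeds aligns at zero cost:
slow = [1, 1, 2, 3, 3, 4]
fast = [1, 2, 3, 4]
print(dtw_distance(slow, fast))  # 0.0
```

Because the second sequence is just a faster rendition of the first, the alignment cost is zero; a genuinely different utterance would score higher.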

Computer technology progressed significantly, and numerous mainframe and minicomputer manufacturers started to perform research in natural language processing and voice recognition.

The Speech Understanding Research (SUR) project at Carnegie Mellon University was one of the DARPA-funded projects.



The SUR project, directed by Raj Reddy, produced numerous groundbreaking speech recognition systems, including Hearsay, Dragon, Harpy, and Sphinx.

Harpy is notable in that it employs the beam search approach, which has been a standard in such systems for decades.

Beam search is a heuristic search technique that examines a network by extending the most promising node among a small number of possibilities.

Beam search is an improved version of best-first search that uses less memory.

It's a greedy algorithm in the sense that it uses the problem-solving heuristic of making the locally best decision at each step in the hopes of obtaining a global best choice.
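The beam search procedure described above can be sketched in a few lines of Python. The toy vocabulary and bigram scores below are invented for illustration; a real recognizer would combine acoustic and language-model probabilities.

```python
# Minimal beam search sketch. The vocabulary and bigram scores
# are invented for illustration only.

def beam_search(start, expand, score, steps, beam_width=2):
    """Keep only the beam_width highest-scoring partial hypotheses
    at each step, rather than all of them or only the single best."""
    beam = [start]
    for _ in range(steps):
        candidates = [seq + [tok] for seq in beam for tok in expand(seq)]
        # prune: retain only the most promising hypotheses
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]

# Toy bigram scores; unseen pairs get a small default.
BIGRAM = {("<s>", "recognize"): 0.6, ("<s>", "wreck"): 0.4,
          ("recognize", "speech"): 0.9, ("recognize", "a"): 0.1,
          ("wreck", "a"): 0.9, ("wreck", "speech"): 0.1,
          ("a", "nice"): 0.8, ("a", "speech"): 0.2,
          ("speech", "a"): 0.3, ("speech", "nice"): 0.1}
VOCAB = ["recognize", "wreck", "speech", "a", "nice"]

def expand(seq):
    return VOCAB

def score(seq):
    s = 1.0
    for prev, nxt in zip(seq, seq[1:]):
        s *= BIGRAM.get((prev, nxt), 0.01)
    return s

best = beam_search(["<s>"], expand, score, steps=3, beam_width=2)
print(best)  # ['<s>', 'wreck', 'a', 'nice']
```

With a beam width of 2, the search keeps the "wreck a ..." hypothesis alive long enough to win, whereas a purely greedy search (beam width 1) commits to "recognize" at the first step and never recovers.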

In general, graph search algorithms have served as the foundation for voice recognition research for decades, just as they have in the domains of operations research, game theory, and artificial intelligence.

By the 1980s and 1990s, data processing and algorithms had advanced to the point where researchers could use statistical models to identify whole strings of words, even phrases.

The Pentagon remained the field's leader, but IBM's work had progressed to the point where the corporation was on the verge of manufacturing a computerized voice transcription device for its corporate clients.

Bell Labs had developed sophisticated digital systems for automatic voice dialing of telephone numbers.

Other applications that seemed to be within reach were closed captioned transcription of television broadcasts and personal automatic reservation systems.

The comprehension of spoken language has dramatically improved.

The Air Travel Information System (ATIS) was the first commercial system to emerge from DARPA funding.

New obstacles arose, such as "disfluencies": the natural pauses, corrections, casual speech, interruptions, and verbal fillers like "oh" and "um" that arise organically in conversational speech.

In 1995, every copy of the Windows 95 operating system shipped with the Speech Application Programming Interface (SAPI).

SAPI (which comprised subroutine definitions, protocols, and tools) made it easier for programmers and developers to include speech recognition and voice synthesis into Windows programs.

In particular, SAPI gave third-party software developers the option to build and freely share their own speech recognition engines.

It gave NLP technology a big boost in terms of increasing interest and generating wider markets.

The Dragon line of voice recognition and dictation software programs is one of the most well-known mass-market NLP solutions.

The popular Dragon NaturallySpeaking program aims to provide automatic real-time, large-vocabulary, continuous-speech dictation with the use of a headset or microphone.

The software took fifteen years to develop and was first released in 1997.

It is still widely regarded as the gold standard for personal computing today.

One hour of digitally recorded speech takes the program roughly 4–8 hours to transcribe, although dictation on screen is virtually instantaneous.

Similar software is packaged with voice dictation functions in smart phones, which converts regular speech into text for usage in text messages and emails.

The large amount of data accessible on the cloud, as well as the development of gigantic archives of voice recordings gathered from smart phones and electronic peripherals, have benefited industry tremendously in the twenty-first century.

Companies have been able to enhance acoustic and linguistic models for voice processing as a result of these massive training data sets.

To match observed and "classified" sounds, traditional speech recognition systems employed statistical learning methods.

Since the 1990s, Markov and hidden Markov models, together with reinforcement learning and pattern recognition algorithms, have increasingly been used in speech processing.

Because of the large amounts of data available for matching and the strength of deep learning algorithms, error rates have dropped dramatically in recent years.

Despite the fact that linguists argue that natural languages need flexibility and context to be effectively comprehended, these approximation approaches and probabilistic functions are exceptionally strong in deciphering and responding to human voice inputs.

The n-gram, a continuous sequence of n elements from a given sample of text or voice, is now the foundation of computational linguistics.

Depending on the application, the items might be phonemes, syllables, letters, words, or base pairs.

N-grams are usually gathered from text or voice.

In terms of proficiency, no other method presently outperforms this one.
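The n-gram idea can be shown in a few lines. The sample corpus below is invented for illustration; real language models estimate probabilities from counts over corpora of billions of tokens, with smoothing for unseen n-grams.

```python
# Extracting n-grams from a token sequence and counting them,
# the basic operation behind n-gram language models. The corpus
# is a toy example.
from collections import Counter

def ngrams(tokens, n):
    """All contiguous length-n slices of the token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

corpus = "the cat sat on the mat the cat slept".split()
bigrams = Counter(ngrams(corpus, 2))

# "the cat" occurs twice, so a bigram model would rate "cat"
# a likely continuation of "the".
print(bigrams[("the", "cat")])  # 2
```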

For their virtual assistants, Google and Bing have indexed the whole internet and incorporate user query data in their language models for voice search applications.

Today's systems are starting to identify new terms from their datasets on the fly, which is referred to as "lifelong learning" by humans, although this is still a novel technique.

Companies working in natural language processing will desire solutions that are portable (not reliant on distant servers), deliver near-instantaneous response, and provide a seamless user experience in the future.

Richard Socher, a deep learning specialist and the founder and CEO of the artificial intelligence start-up MetaMind, is working on a strong example of next-generation NLP.

Based on massive chunks of natural language information, the company's technology employs a neural network architecture and reinforcement learning algorithms to provide responses to both specific and highly general inquiries.

Salesforce, the digital marketing powerhouse, has since acquired the startup.

Text-to-speech analysis and advanced conversational interfaces in automobiles will be in high demand in the future, as will speech recognition and translation across cultures and languages, automatic speech understanding in noisy environments like construction sites, and specialized voice systems to control office and home automation processes and internet-connected devices.

Any of these applications for enhancing human speech will require the collection of massive natural language data sets.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Natural Language Generation; Newell, Allen; Workplace Automation.


References & Further Reading:


Chowdhury, Gobinda G. 2003. “Natural Language Processing.” Annual Review of Information Science and Technology 37: 51–89.

Jurafsky, Daniel, and James H. Martin. 2014. Speech and Language Processing. Second edition. Upper Saddle River, NJ: Pearson Prentice Hall.

Mahavan, Radhika. n.d. “Natural Language Processing: Current Applications and Future Possibilities.” https://www.techemergence.com/nlp-current-applications-and-future-possibilities/.

Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.

Metz, Cade. 2015. “AI’s Next Frontier: Machines That Understand Language.” Wired, June 24, 2015. https://www.wired.com/2015/06/ais-next-frontier-machines-understand-language/.

Nusca, Andrew. 2011. “Say Command: How Speech Recognition Will Change the World.” ZDNet, November 2, 2011. https://www.zdnet.com/article/say-command-how-speech-recognition-will-change-the-world/.





Artificial Intelligence - Who Is Ray Kurzweil (1948–)?




Ray Kurzweil is a futurist and inventor from the United States.

He spent the first half of his career developing the first CCD flat-bed scanner, the first omni-font optical character recognition device, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed, large-vocabulary speech recognition machine.

He has earned several awards for his contributions to technology, including the Technical Grammy Award in 2015 and the National Medal of Technology.

Kurzweil is the cofounder and chancellor of Singularity University, as well as the director of engineering at Google, where he leads a team that works on artificial intelligence and natural language processing.

Singularity University is a non-accredited graduate school founded on the premise of tackling grand challenges like renewable energy and space travel by gaining a deep understanding of the opportunities presented by the current acceleration of technological progress.

The university, which is headquartered in Silicon Valley, has evolved to include one hundred chapters in fifty-five countries, delivering seminars, educational programs, and business acceleration programs.

While at Google, Kurzweil published the book How to Create a Mind (2012).

In his Pattern Recognition Theory of Mind, he claims that the neocortex is a hierarchical structure of pattern recognizers.

Kurzweil claims that replicating this design in machines might lead to the creation of artificial superintelligence.

He believes that by doing so, he will be able to bring natural language comprehension to Google.

Kurzweil's popularity stems from his work as a futurist.

Futurists are those who specialize in or are interested in the near-to-long-term future and associated topics.

They use well-established methodologies like scenario planning to carefully examine forecasts and construct future possibilities.

Kurzweil is the author of five national best-selling books, including The Singularity Is Near (2005), which was a New York Times best-seller.

He has an extensive list of forecasts.

In his first book, The Age of Intelligent Machines (1990), Kurzweil predicted the enormous growth of international internet usage in the second half of the 1990s.

In his second highly influential book, The Age of Spiritual Machines (where "spiritual" stands for "aware"), published in 1999, he correctly predicted that computers would soon exceed humans in making the best investment decisions.

Kurzweil prophesied in the same book that computers would one day "appear to have their own free will" and perhaps have "spiritual experiences" (Kurzweil 1999, 6).

Human-machine barriers will dissolve to the point that humans will essentially live forever as combined human-machine hybrids.

Scientists and philosophers have slammed Kurzweil's forecast of a sentient computer, claiming that awareness cannot be created by calculations.

Kurzweil tackles the phenomenon of the Technological Singularity in his third book, The Singularity Is Near.

The famous mathematician John von Neumann coined the term singularity.

In a 1950s chat with his colleague Stanislaw Ulam, von Neumann proposed that the ever-accelerating speed of technological progress "appears to be reaching some essential singularity in the history of the race beyond which human activities as we know them could not continue" (Ulam 1958, 5).

To put it another way, technological development would alter the course of human history.

Vernor Vinge, a computer scientist, math professor, and science fiction writer, rediscovered the word in 1993 and utilized it in his article "The Coming Technological Singularity." In Vinge's article, technological progress is more accurately defined as an increase in processing power.

Vinge investigates the idea of a self-improving artificial intelligence agent.

According to this theory, the artificial intelligent agent continues to update itself and grow technologically at an unfathomable pace, eventually resulting in the birth of a superintelligence—that is, an artificial intelligence that far exceeds all human intelligence.

In Vinge's apocalyptic vision, robots first become autonomous, then superintelligent, to the point where humans lose control of technology and machines seize control of their own fate.

Machines will rule the planet because technology is more intelligent than humans.

According to Vinge, the Singularity is the end of the human age.

Kurzweil presents an anti-dystopic Singularity perspective.

Kurzweil's core premise is that humans can develop something smarter than themselves; in fact, exponential advances in computer power make the creation of an intelligent machine all but inevitable, to the point that the machine will surpass humans in intelligence.

Kurzweil believes that machine intelligence and human intellect will converge at this moment.

The subtitle of The Singularity Is Near is When Humans Transcend Biology, which is no coincidence.

Kurzweil's overarching vision is based on discontinuity: no lesson from the past, or even the present, can aid humans in determining the way to the future.

This also explains why new types of education, such as Singularity University, are required.

Every sentimental look back to history, every memory of the past, renders humans more susceptible to technological change.

With the arrival of a new superintelligent, almost immortal race, history as a human construct will soon come to an end.

These immortals, known as posthumans, represent the next phase in human development.

Kurzweil believes that posthumanity will be made up of sentient robots rather than people with mechanical bodies.

He claims that the future should be formed on the assumption that mankind is in the midst of an extraordinary period of technological advancement.

The Singularity, he believes, would elevate humanity beyond its wildest dreams.

While Kurzweil claims that artificial intelligence is now outpacing human intellect on certain activities, he also acknowledges that the moment of superintelligence, often known as the Technological Singularity, has not yet arrived.

He believes that individuals who embrace the new age of human-machine synthesis and are daring to go beyond evolution's boundaries would view humanity's future as positive. 




Jai Krishna Ponnappan





See also: 


General and Narrow AI; Superintelligence; Technological Singularity.



Further Reading:




Kurzweil, Ray. 1990. The Age of Intelligent Machines. Cambridge, MA: MIT Press.

Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Penguin.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.



 

Artificial Intelligence - Who Is Ben Goertzel (1966–)?


Ben Goertzel is the founder and CEO of SingularityNET, a blockchain AI company, as well as the chairman of Novamente LLC, a research professor at Xiamen University's Fujian Key Lab for Brain-Like Intelligent Systems, the chief scientist of Mozi Health and Hanson Robotics in Shenzhen, China, and the chair of the OpenCog Foundation, Humanity+, and Artificial General Intelligence Society conference series. 

Goertzel has long wanted to create a good artificial general intelligence and use it in bioinformatics, finance, gaming, and robotics.

He claims that, despite AI's current popularity, it is already superior to human specialists in a number of domains.

Goertzel divides AI advancement into three stages, each of which represents a step toward a global brain, among them the intelligent Internet and the full-fledged Singularity (Goertzel 2002, 2). In 2019, Goertzel presented a lecture titled "Decentralized AI: The Power and the Necessity" at TEDxBerkeley.

He examines artificial intelligence in its present form as well as its future in this discussion.

He emphasizes "the relevance of decentralized control in leading AI to the next stages, the strength of decentralized AI" (Goertzel 2019a).

In the evolution of artificial intelligence, Goertzel distinguishes three types: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Artificial narrow intelligence refers to machines that can "address extremely specific issues... better than humans" (Goertzel 2019a).

In certain restricted activities, such as chess and Go, this kind of AI has outperformed a human.

Ray Kurzweil, an American futurologist and inventor, coined the phrase "narrow AI." Artificial general intelligence (AGI) refers to intelligent computers that can "generate knowledge" in a variety of fields and have "humanlike autonomy." By 2029, according to Goertzel, this kind of AI will have reached the same level of intellect as humans.

Artificial superintelligence (ASI) builds on both narrow and general AI but can also reprogram itself.



By 2045, he claims, this kind of AI will be smarter than the finest human brains in terms of "scientific innovation, general knowledge, and social abilities" (Goertzel 2019a).

According to Goertzel, Facebook, Google, and a number of colleges and companies are all actively working on AGI.

According to Goertzel, the shift from AI to AGI will occur within the next five to thirty years.

Goertzel is also interested in artificial intelligence-assisted life extension.

He thinks that artificial intelligence's exponential advancement will lead to technologies that will increase human life span and health eternally.

He predicts that by 2045, a singularity featuring a drastic increase in "human health span" will have occurred (Goertzel 2012).

Vernor Vinge popularized the term "singularity" in his 1993 article "The Coming Technological Singularity," and Ray Kurzweil popularized it further in his 2005 book The Singularity Is Near.

The Technological Singularity, according to both writers, is the merging of machine and human intellect as a result of a fast development in new technologies, particularly robots and AI.

The thought of an impending singularity excites Goertzel.

SingularityNET is his major current initiative, which entails the construction of a worldwide network of artificial intelligence researchers interested in developing, sharing, and monetizing AI technology, software, and services.

By developing a decentralized protocol that enables a full stack AI solution, Goertzel has made a significant contribution to this endeavor.

SingularityNET, as a decentralized marketplace, provides a variety of AI technologies, including text generation, AI Opinion, iAnswer, Emotion Recognition, Market Trends, OpenCog Pattern Miner, and its own cryptocurrency, AGI token.

SingularityNET is presently cooperating with Domino's Pizza in Malaysia and Singapore (Khan 2019).



Domino's is interested in leveraging SingularityNET technologies to design a marketing plan, with the goal of providing the finest products and services to its consumers via the use of unique algorithms.

Domino's thinks that by incorporating the AGI ecosystem into their operations, they will be able to provide value and service in the food delivery market.

Goertzel has reacted to scientist Stephen Hawking's challenge, which claimed that AI might lead to the extinction of human civilization.

Given the current situation, artificial super intelligence's mental state will be based on past AI generations, thus "selling, spying, murdering, and gambling are the key aims and values in the mind of the first super intelligence," according to Goertzel (Goertzel 2019b).

He acknowledges that if humans desire compassionate AI, they must first improve their own treatment of one another.

For four years, Goertzel worked for Hanson Robotics in Hong Kong.

He collaborated with Sophia, Einstein, and Han, three well-known robots.

"Great platforms for experimenting with AI algorithms, including cognitive architectures like OpenCog that aim at human-level AI," he added of the robots (Goertzel 2018).

Goertzel argues that essential human values may be retained for future generations in Sophia-like robot creatures after the Technological Singularity.

Decentralized networks like SingularityNET and OpenCog, according to Goertzel, provide "AIs with human-like values," reducing AI hazards to humanity (Goertzel 2018).

Because human values are complicated in nature, Goertzel feels that encoding them as a rule list is wasteful.

Brain-computer interfacing (BCI) and emotional interfacing are the two approaches Goertzel proposes.

Humans will become "cyborgs," with their brains physically linked to computational-intelligence modules, and the machine components will be able to read the moral-value-evaluation structures of the human mind directly from its biological components (Goertzel 2018).

Goertzel uses Elon Musk's Neuralink as an example.

Because it entails invasive trials with human brains and a lot of unknowns, Goertzel doubts that this strategy will succeed.

"Emotional and spiritual connections between people and AIs, rather than Ethernet cables or Wifi signals, are used to link human and AI brains," according to the second method (Goertzel 2018).

To practice human values, he proposes that AIs participate in emotional and social connection with humans via facial expression detection and mirroring, eye contact, and voice-based emotion recognition.

To that end, Goertzel collaborated with SingularityNET, Hanson AI, and Lia Inc on the "Loving AI" research project, which aims to assist artificial intelligences speak and form intimate connections with humans.

A funny video of actor Will Smith on a date with Sophia the Robot is presently available on the Loving AI website.

Sophia can already make sixty facial expressions and understand human language and emotions, according to the video of the date.

When linked to a network like SingularityNET, humanoid robots like Sophia obtain "ethical insights and breakthroughs ... via language," according to Goertzel (Goertzel 2018).

Then, through a shared internet "mindcloud," robots and AIs may share what they've learnt.

Goertzel is also the chair of the Artificial General Intelligence Society's Conference Series on Artificial General Intelligence, which has been conducted yearly since 2008.

The Journal of Artificial General Intelligence is a peer-reviewed open-access academic periodical published by the organization. Goertzel is the editor of the conference proceedings series.


Jai Krishna Ponnappan




See also: 

General and Narrow AI; Superintelligence; Technological Singularity.


Further Reading:


Goertzel, Ben. 2002. Creating Internet Intelligence: Wild Computing, Distributed Digital Consciousness, and the Emerging Global Brain. New York: Springer.

Goertzel, Ben. 2012. “Radically Expanding the Human Health Span.” TEDxHKUST. https://www.youtube.com/watch?v=IMUbRPvcB54.

Goertzel, Ben. 2017. “Sophia and SingularityNET: Q&A.” H+ Magazine, November 5, 2017. https://hplusmagazine.com/2017/11/05/sophia-singularitynet-qa/.

Goertzel, Ben. 2018. “Emotionally Savvy Robots: Key to a Human-Friendly Singularity.” https://www.hansonrobotics.com/emotionally-savvy-robots-key-to-a-human-friendly-singularity/.

Goertzel, Ben. 2019a. “Decentralized AI: The Power and the Necessity.” TEDxBerkeley, March 9, 2019. https://www.youtube.com/watch?v=r4manxX5U-0.

Goertzel, Ben. 2019b. “Will Artificial Intelligence Kill Us?” July 31, 2019. https://www.youtube.com/watch?v=TDClKEORtko.

Goertzel, Ben, and Stephan Vladimir Bugaj. 2006. The Path to Posthumanity: 21st Century Technology and Its Radical Implications for Mind, Society, and Reality. Bethesda, MD: Academica Press.

Khan, Arif. 2019. “SingularityNET and Domino’s Pizza Announce a Strategic Partnership.” https://blog.singularitynet.io/singularitynet-and-dominos-pizza-announce-a-strategic-partnership-cbbe21f80fc7.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.





Artificial Intelligence - General and Narrow Categories Of AI.






There are two types of artificial intelligence: general (also called strong or full) and narrow (also called weak or specialized).

In the actual world, general AI, such as that seen in science fiction, does not yet exist.

Machines with general intelligence would be capable of completing every intellectual endeavor that humans are capable of.

Such a system would also seem to think in abstract terms, establish connections, and communicate innovative ideas in the same manner that people do, displaying the ability to reason and solve problems.



Such a computer would be capable of thinking, planning, and recalling information from the past.

While the aim of general AI has yet to be achieved, there are more and more instances of narrow AI.

These are machines that perform at human (or even superhuman) levels on certain tasks.

Computers that have learnt to play complicated games have abilities, techniques, and behaviors that are comparable to, if not superior to, those of the most skilled human players.

AI systems have also been developed that can translate between languages in real time, interpret and respond to natural speech (both spoken and written), and recognize images (identifying and sorting photos based on their content).

However, the ability to generalize knowledge or skills is still largely a human accomplishment.

Nonetheless, there is a lot of work being done in the field of general AI right now.

It will be difficult to determine when a computer develops human-level intelligence.

Several tests, some serious and some humorous, have been suggested to determine whether a computer has reached the level of general AI.

The Turing Test is arguably the most renowned of these examinations.

A machine and a person carry on a conversation while another human listens in.

The human eavesdropper must figure out which speaker is a machine and which is a human.

The machine passes the test if it can fool the human evaluator a prescribed percentage of the time.

The Coffee Test is a more fantastical test in which a machine enters a typical household and brews coffee.



It has to find the coffee machine, look for the coffee, add water, boil the coffee, and pour it into a cup.

Another is the Flat Pack Furniture Test, which involves a machine receiving, unpacking, and assembling a piece of furniture based only on the instructions supplied.

Some scientists, as well as many science fiction writers and fans, believe that once intelligent machines reach a tipping point, they will be able to improve exponentially.

AI-based beings that far exceed human capabilities might be one conceivable result.

The Singularity, or artificial superintelligence (ASI), is the point at which AI assumes control of its own self-improvement.

If ASI is achieved, it will have unforeseeable consequences for human society.

Some pundits worry that ASI would jeopardize humanity's safety and dignity.

It's up for dispute whether the Singularity will ever happen, and how dangerous it may be.

Narrow AI applications are becoming more popular across the globe.

Machine learning (ML) is at the heart of most new applications, and most AI examples in the news are connected to this subset of technology.

Traditional or conventional algorithms are not the same as machine learning programs.

In programs that cannot learn, a computer programmer actively adds code to account for every action of an algorithm.

All of the decisions made along the process are governed by the programmer's guidelines.

This necessitates the programmer imagining and coding for every possible circumstance that an algorithm may face.

This kind of program code is bulky and often inadequate, especially when it must be updated frequently to account for new or unanticipated scenarios.

The utility of hard-coded algorithms approaches its limit in cases where the criteria for optimum judgments are unclear or impossible for a human programmer to foresee.

Machine learning is the process of training a computer to detect and identify patterns via examples rather than predefined rules.



This is achieved, according to Google engineer Jason Mayes, by processing extremely large quantities of training data or by participating in some other kind of programmed learning step.

New patterns are extracted as that training data is processed.

The system can then classify previously unseen data based on the patterns it has already found.

Machine learning allows an algorithm to recognize patterns or rules underlying decision-making processes on its own.

Machine learning also allows a system's output to improve over time as it gains more experience (Mayes 2017).

A human programmer continues to play a vital role in this learning process, influencing results by making choices like developing the exact learning algorithm, selecting the training data, and choosing other design elements and settings.

Machine learning is powerful once it's up and running because it can adapt and enhance its ability to categorize new data without the need for direct human interaction.

In other words, output quality improves as the system gains experience.
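The contrast between hand-coded rules and learning from examples can be sketched in a few lines of Python. The toy nearest-centroid classifier below (with entirely hypothetical data) infers one "pattern" per class, the class mean, from labeled examples, then labels previously unseen points by proximity; it is a minimal illustration, not a production machine learning method.

```python
# Toy illustration of learning from examples rather than explicit rules:
# a nearest-centroid classifier infers one "pattern" (the mean) per class
# from labeled training data, then labels unseen points by proximity.

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Label a previously unseen point by its nearest class centroid."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Hypothetical training data: (height_cm, weight_kg) -> species
training_data = [
    ([20.0, 4.0], "cat"), ([22.0, 5.0], "cat"),
    ([60.0, 30.0], "dog"), ([65.0, 35.0], "dog"),
]
model = train(training_data)
print(classify(model, [21.0, 4.5]))   # -> cat
print(classify(model, [63.0, 32.0]))  # -> dog
```

No rule saying "cats are small" was ever written; the boundary between the classes was learned from the examples themselves.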

Artificial intelligence is a broad word that refers to the science of making computers intelligent.

AI is a computer system that can collect data and utilize it to make judgments or solve issues, according to scientists.

Another popular scientific definition of AI is "a software program paired with hardware that can receive (or sense) inputs from the world around it, evaluate and analyze those inputs, and create outputs and suggestions without the assistance of a person."

When programmers claim an AI system can learn, they are referring to the program's ability to change its own processes in order to produce more accurate outputs or predictions.

AI-based systems are now being developed and used in practically every industry, from agriculture to space exploration, and in applications ranging from law enforcement to online banking.

The methods and techniques used in computer science are always evolving, extending, and improving.

Other techniques linked to machine learning, such as reinforcement learning and neural networks, are important components of cutting-edge artificial intelligence systems.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Superintelligence; Turing, Alan; Turing Test.


Further Reading:


Kelnar, David. 2016. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Mayes, Jason. 2017. Machine Learning 101. https://docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/htmlpresent.

Müller, Vincent C., and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 553–71. New York: Springer.

Russell, Stuart, and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

Samuel, Arthur L. 1988. “Some Studies in Machine Learning Using the Game of Checkers I.” In Computer Games I, 335–65. New York: Springer.



Artificial Intelligence - What Are Non-Player Characters And Emergent Gameplay?

 


Emergent gameplay occurs when a player in a video game encounters complex situations that arise from their interactions with the game world and its characters.


Players may fully immerse themselves in an intricate and realistic game environment and feel the consequences of their choices in today's video games.

Players can customize their character and shape their own story.

In the Deus Ex series (2000), for example, one of the first emergent gameplay systems, players take on the role of a cyborg in a dystopian metropolis.

They may change the physical appearance of their character as well as their skill sets, missions, and affiliations.

Players may choose between militarized adaptations that allow for more aggressive play and stealthier options.

The plot and experience are altered by the choices made on how to customize and play, resulting in unique challenges and results for each player.


When players interact with other characters or items, emergent gameplay guarantees that the game environment reacts.



Because there are so many options, the story unfolds in surprising ways as the game world changes.

Specific outcomes are not predetermined by the designer, and emergent gameplay can even exploit game flaws to generate actions in the game world, which some consider a form of emergence.

Artificial intelligence has become more popular among game creators in order to have the game environment respond to player actions in a timely manner.

Artificial intelligence drives the behavior of game characters and their interactions through algorithms: basic rule-based procedures that help generate the game environment in sophisticated ways.

"Game AI" refers to the usage of artificial intelligence in games.

The most common use of AI algorithms is to construct non-player characters (NPCs): characters in the game world with whom the player interacts but whom the player does not control.


In its most basic form, AI will use pre-scripted actions for the characters, who will then concentrate on reacting to certain events.


Pre-scripted character behaviors performed by AI are fairly rudimentary, and NPCs are meant to respond to certain "case" events.

The NPC will evaluate its current situation before responding in a range determined by the AI algorithm.

Pac-Man (1980) is a good early and simple illustration of this.

The player guides Pac-Man through a labyrinth while being pursued by a variety of ghosts, the game's non-player characters.


Players could only interact with ghosts (NPCs) by moving about; ghosts had limited replies and their own AI-programmed pre-scripted movement.




When a ghost ran into a wall, its pre-scripted AI reaction was triggered.

The program would then roll a virtual die to determine whether the NPC would turn toward or away from the player.

If the NPC decided to go after the player, the pre-scripted program would detect the player's location and turn the ghost toward it.

If the NPC decided not to pursue, it would turn in the opposite or a random direction.

This NPC interaction is very simple and limited; however, this was an early step in AI providing emergent gameplay.
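The ghost logic described above can be sketched roughly as follows. The direction names, the 50/50 die roll, and the coordinate scheme are assumptions made for illustration; this is not the actual Pac-Man code.

```python
import random

# Sketch of a pre-scripted ghost reaction: on hitting a wall, roll a die
# to decide whether to chase the player or turn in a random direction.
DIRECTIONS = ["up", "down", "left", "right"]

def toward(ghost_pos, player_pos):
    """Turn toward the player's location along the dominant axis (simplified)."""
    gx, gy = ghost_pos
    px, py = player_pos
    if abs(px - gx) >= abs(py - gy):
        return "right" if px > gx else "left"
    return "down" if py > gy else "up"

def ghost_reaction(ghost_pos, player_pos, hit_wall):
    """Pre-scripted 'case' response: only a wall collision triggers a decision."""
    if not hit_wall:
        return None                      # keep current course
    if random.random() < 0.5:            # die roll: chase the player
        return toward(ghost_pos, player_pos)
    return random.choice(DIRECTIONS)     # die roll: wander instead

next_turn = ghost_reaction((5, 5), (9, 5), hit_wall=True)  # "right" or a random turn
```

Every possible behavior here was written out in advance by the programmer; nothing is learned, which is exactly what distinguishes this early approach from adaptive AI.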



Contemporary games provide a much wider variety of options and possible interactions for the player.


Players in contemporary role-playing games (RPGs) are given an enormous number of potential choices, as exemplified by Fallout 3 (2008) and its sequels.

Fallout is a role-playing game in which the player takes on the role of a survivor in a post-apocalyptic America.

The narrative gives the player a goal but no prescribed path; as a result, the player is free to play as they see fit.

The player can punch every NPC, or they can talk to them instead.

In addition to this variety of actions by the player, there are also a variety of NPCs controlled through AI.

Some of the NPCs are key NPCs, which means they have their own unique scripted dialogue and responses.

This provides them with a personality and provides a complexity through the use of AI that makes the game environment feel more real.


When talking to key NPCs, the player is given options for what to say, and the key NPCs have their own unique responses.


This differs from the background character NPCs, as the key NPCs are supposed to respond in such a way that it would emulate interaction with a real personality.

These are still pre-scripted responses to the player, but the NPC responses are emergent based on the possible combination of the interaction.

As the player makes decisions, the NPC examines each decision and decides how to respond in accordance with its script.

The NPCs that the players help or hurt and the resulting interactions shape the game world.
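A key NPC's scripted-but-branching dialogue can be sketched as a small state table. All dialogue lines and option names below are hypothetical and not drawn from any actual game.

```python
# Minimal sketch of a key NPC's scripted dialogue tree: every response is
# pre-written, but which one the player sees emerges from their choices.
DIALOGUE = {
    "greet": {
        "npc_line": "Stranger, we don't get many visitors here.",
        "options": {"ask_about_town": "town", "threaten": "hostile"},
    },
    "town": {
        "npc_line": "The town's been quiet since the raiders left.",
        "options": {},
    },
    "hostile": {
        "npc_line": "Guards! Keep an eye on this one.",
        "options": {},
    },
}

def respond(state, player_choice):
    """The NPC examines the player's decision and responds per its script."""
    next_state = DIALOGUE[state]["options"].get(player_choice)
    if next_state is None:
        return state, "..."  # no scripted branch for that choice
    return next_state, DIALOGUE[next_state]["npc_line"]

state, line = respond("greet", "ask_about_town")
print(line)  # -> The town's been quiet since the raiders left.
```

A real game's tree is vastly larger and may also track reputation or past deeds, but the principle is the same: pre-scripted responses whose sequence emerges from the player's combination of choices.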

Game AI can emulate personalities and present emergent gameplay in a narrative setting; however, AI is also involved in challenging the player in difficulty settings.


A variety of pre-scripted AI can still be used to create difficulty.

Pre-scripted AI is often made to make suboptimal decisions for enemy NPCs in games where players fight.

This helps make the game easier and also makes the NPCs seem more human.

Suboptimal pre-scripted decisions make the enemy NPCs easier to handle.

Optimal decisions, however, make the opponents far more difficult to handle.

This can be seen in contemporary games like Tom Clancy’s The Division (2016), where players fight multiple NPCs.

The enemy NPCs range from angry rioters to fully trained paramilitary units.

The rioter NPCs offer an easier challenge as they are not trained in combat and make suboptimal decisions while fighting the player.

The military trained NPCs are designed to have more optimal decision-making AI capabilities in order to increase the difficulty for the player fighting them.
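One common way to implement this gap is to let an enemy NPC score its possible actions but pick suboptimal ones more often at lower skill levels. The sketch below is illustrative only; the action names and scores are invented, not taken from any particular game.

```python
import random

# Sketch: an enemy NPC scores its possible actions, but at a lower skill
# setting it deliberately picks suboptimal ones, making it easier to beat.
ACTIONS = {"take_cover": 0.9, "flank": 0.7, "charge": 0.3, "stand_still": 0.1}

def choose_action(skill):
    """skill in [0, 1]: probability of picking the optimal action;
    otherwise the NPC picks at random, like an untrained rioter."""
    if random.random() < skill:
        return max(ACTIONS, key=ACTIONS.get)   # optimal: trained soldier
    return random.choice(list(ACTIONS))        # suboptimal: easier foe

rioter_move = choose_action(skill=0.2)    # mostly random, easy to fight
soldier_move = choose_action(skill=0.95)  # nearly always optimal, dangerous
```

The same decision table thus serves both enemy types; only the probability of playing optimally changes.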



Emergent gameplay has evolved to its full potential through use of adaptive AI.


As with pre-scripted AI, the character examines a variety of variables and plans an action.

However, unlike pre-scripted AI, which follows fixed decision rules, an adaptive AI character makes its own decisions.

This can be done through computer-controlled learning.


AI-created NPCs follow rules of interactions with the players.


As players go through the game, the player interactions are analyzed, and some AI judgments become more weighted than others.

This is done in order to provide distinct player experiences.

Various player behaviors are actively examined, and modifications are made by the AI when designing future challenges.

The purpose of the adaptive AI is to challenge the players to a degree that the game is fun while not being too easy or too challenging.

Difficulty may still be changed if players seek a different challenge.

This may be observed in the Left 4 Dead (2008) series' AI Director.

Players navigate through a level, killing zombies and gathering resources in order to live.


The AI Director chooses which zombies to spawn, where they will spawn, and what supplies will be spawned.

The choice to spawn them is not made at random; rather, it is based on how well the players performed throughout the level.

The AI Director makes its own decisions about how to respond; as a result, the AI Director adapts to the level's player success.

The AI Director provides fewer resources and spawns more adversaries as the difficulty level rises.
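In the same spirit, an adaptive director can be sketched as a function that weighs recent player performance and adjusts the next encounter. The performance formula, thresholds, and spawn counts below are invented for illustration and are not the actual Left 4 Dead system.

```python
# Rough sketch of an adaptive "AI Director": it weighs how well the players
# just performed and scales the next encounter up or down accordingly.
def director_plan(player_health, deaths_this_level, supplies_used):
    """Combine performance signals into one score, then pick an encounter."""
    performance = (player_health / 100.0
                   - 0.2 * deaths_this_level
                   - 0.05 * supplies_used)
    if performance > 0.7:      # players are cruising: push back harder
        return {"zombies": 30, "supplies": 2}
    if performance > 0.3:      # players are coping: keep the pressure on
        return {"zombies": 18, "supplies": 4}
    return {"zombies": 8, "supplies": 8}  # players struggling: ease off

plan = director_plan(player_health=95, deaths_this_level=0, supplies_used=1)
print(plan)  # -> {'zombies': 30, 'supplies': 2}
```

Because the plan is recomputed from live play rather than looked up from a fixed script, two groups of players face different levels even on the same map.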


Changes in emergent gameplay are influenced by advancements in simulation and game world design.


As virtual reality technology develops, new technologies will continue to help in this progress.

Virtual reality games provide an even more immersive gaming experience.

Players may use their own hands and eyes to interact with the environment.

Computers are growing more powerful, allowing for more realistic pictures and animations to be rendered.


Adaptive AI demonstrates the capability for truly autonomous decision-making, resulting in a genuinely interactive gaming experience.


Game makers are continuing to build more immersive environments as AI improves to provide more lifelike behavior.

These cutting-edge technology and new AI will elevate emergent gameplay to new heights.

Artificial intelligence has emerged as a crucial part of the video game industry, essential for developing realistic and engrossing gameplay.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.



Further Reading:



Brooks, Rodney. 1986. “A Robust Layered Control System for a Mobile Robot.” IEEE Journal of Robotics and Automation 2, no. 1 (March): 14–23.

Brooks, Rodney. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

Brooks, Rodney. 1991. “Intelligence Without Representation.” Artificial Intelligence Journal 47: 139–60.

Dennett, Daniel C. 1997. “Cog as a Thought Experiment.” Robotics and Autonomous Systems 20: 251–56.

Gallagher, Shaun. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.

Pfeifer, Rolf, and Josh Bongard. 2007. How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.



