
Artificial Intelligence - What Is The Turing Test?

 



 

The Turing Test is a method of determining whether a machine can exhibit intelligence that mimics, or is equivalent to, human intelligence.

The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype—what Turing calls "The Imitation Game," is the test's locus classicus.

In this game, a human judge poses natural language questions to the occupants of two rooms and, on the basis of their anonymized replies, must determine which room is occupied by a computer and which by another human.

While the human respondent must offer accurate answers to the judge's queries, the machine's purpose is to fool the judge into thinking it is human.





According to Turing, the machine may be considered intelligent to the degree that it succeeds at this task.

The fundamental benefit of this essentially operationalist view of intelligence is that it avoids complex metaphysical and epistemological issues about the nature and inner experience of intelligent activity.

According to Turing's criterion, little more than empirical observation of outward behavior is required to attribute intelligence to an object.

This is in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a necessary condition of intelligence.

Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint: how to be confident of the presence of other intelligent individuals if their thoughts cannot be known from the presumably required first-person perspective.



Nonetheless, the Turing Test, at least insofar as it treats intelligence in a strictly formalist manner, remains bound up with the spirit of Cartesian epistemology.

The machine in the Imitation Game is a digital computer in the sense of Turing: a set of operations that may theoretically be implemented in any material.


A digital computer consists of three parts: a knowledge store, an executive unit that executes individual orders, and a control that regulates the executive unit.






However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

What matters is the formal set of rules that make up the computer's very nature.

Turing holds to the core belief that intellect is inherently immaterial.

If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.
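To make this formal, substrate-independent picture concrete, here is a minimal sketch (in Python, with an invented three-instruction order code) of a machine organized into the three parts described above: a store of information, an executive unit that carries out individual orders, and a control that decides which order comes next. It is an illustration of the idea, not Turing's own construction.

```python
# A minimal sketch (not Turing's own construction) of a machine organized into
# the three parts described above: a store, an executive unit, and a control.
# The three-instruction order code ("INC", "DEC", "JUMP_IF_ZERO") is invented.

def execute(order, store, pc):
    """Executive unit: carries out one individual order against the store."""
    op, *args = order
    if op == "INC":                                   # add 1 to a cell of the store
        store[args[0]] += 1
    elif op == "DEC":                                 # subtract 1 from a cell of the store
        store[args[0]] -= 1
    elif op == "JUMP_IF_ZERO" and store[args[0]] == 0:
        return args[1]                                # tell the control where to go next
    return pc + 1

def run(program, store):
    """Control: regulates the executive unit, deciding which order comes next."""
    pc = 0
    while pc < len(program) and program[pc][0] != "HALT":
        pc = execute(program[pc], store, pc)
    return store

# Example program: add the contents of cell "a" into cell "b".
program = [
    ("JUMP_IF_ZERO", "a", 4),        # done when "a" reaches zero
    ("DEC", "a"),
    ("INC", "b"),
    ("JUMP_IF_ZERO", "zero", 0),     # unconditional jump back (cell "zero" is always 0)
    ("HALT",),
]
print(run(program, {"a": 3, "b": 4, "zero": 0}))     # {'a': 0, 'b': 7, 'zero': 0}
```

Whether the store, executive unit, and control are realized in valves, relays, or silicon makes no difference to what such a program computes, which is precisely the point of the formalist view sketched above.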


Since Turing's work, AI research has been split into two camps: 


  1. those who embrace and 
  2. those who oppose this fundamental premise.


To describe the first camp, John Haugeland created the term "good old-fashioned AI," or GOFAI.

Its adherents include Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.



Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited approach to AI.

John Searle's "Minds, Brains, and Programs" (1980), in which Searle develops his now-famous Chinese Room thought experiment, is one of the most renowned criticisms of GOFAI in general and of the assumptions of the Turing Test in particular.





In this thought experiment, a person with no prior understanding of Chinese is placed in a room and made to correlate the Chinese characters she receives with other Chinese characters she sends out, following a set of rules scripted in English.


Searle grants that, with adequate mastery of the rules, the person in the room could pass a Turing Test, fooling a native Chinese speaker into believing she understands Chinese.

Yet because the person in the room operates just as a digital computer does, Turing-type tests, according to Searle, fail to capture the phenomenon of understanding, which he claims entails more than the functionally accurate mapping of inputs to outputs.

Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

In his own elaboration of the Chinese Room thought experiment, Searle argues that the physical makeup of the human species, particularly our sophisticated nervous systems and brain tissue, should not be dismissed as irrelevant to conceptions of intelligence.


This viewpoint has influenced connectionism, an altogether new approach to AI that aims to build computer intelligence by replicating the electrical circuitry of human brain tissue.


The effectiveness of this strategy has been hotly contested, although it looks to outperform GOFAI in terms of developing generalized kinds of intelligence.

Turing's test, however, may be criticized not only from the standpoint of materialism but also from that of a renewed formalism.





One may thus argue that Turing tests are insufficient as a measure of intelligence, since they attempt to reproduce human behavior, which is frequently far from intelligent.


According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.

This line of criticism has gotten more acute as AI research has shifted its focus to the potential of so-called super-intelligence: forms of generalized machine intelligence that far outperform human intellect.


Should this next level of AI be attained, Turing tests would seem to be outdated.

Furthermore, merely discussing the idea of superintelligence would seem to require criteria of intelligence beyond strict Turing testing.

Turing may be defended against such accusations by pointing out that establishing a universal criterion of intelligence was never his goal.



Indeed, according to Turing (1997, 29–30), the purpose is to replace the metaphysically problematic question "Can machines think?" with a more empirically verifiable alternative:

"What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).


Thus, the above-mentioned flaw of Turing's test, its failure to establish a priori standards of rationality, is also part of its strength and motivation.

It also explains why the test has had such a lasting influence on AI research in all domains since it was first presented three-quarters of a century ago.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



Artificial Intelligence - Who Was Alan Turing?

 


 

Alan Mathison Turing OBE FRS (1912–1954) was a logician and mathematician from the United Kingdom.

He is known as the "Father of Artificial Intelligence" and "The Father of Computer Science." 

Turing earned a first-class honors degree in mathematics from King's College, Cambridge, in 1934.

After a fellowship at King's College, Turing received his PhD from Princeton University, where he studied under the American mathematician Alonzo Church.

Turing wrote numerous important publications during his studies, including "On Computable Numbers, with an Application to the Entscheidungsproblem," which proved that the so-called "decision problem" had no solution.

The decision problem asks whether there is an effective method for determining the truth of any assertion within a mathematical system.

This paper also introduced the hypothetical Turing machine, an abstract computing device that, given an appropriate algorithm, could carry out any mathematical computation.
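As a rough illustration of the idea, the sketch below simulates a very small Turing machine: a finite table of rules that reads and writes symbols on a tape and moves a head left or right. The particular rule table, which adds one to a binary number, is an invented example rather than anything from Turing's paper.

```python
# A minimal sketch of a Turing machine: a finite rule table, a tape, and a head.
# The rule table below, which adds one to a binary number, is an invented example.

def run_turing_machine(rules, tape, state="right", blank="_"):
    cells = dict(enumerate(tape))      # unbounded tape as a sparse dict: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Rules: (state, symbol read) -> (symbol to write, head move, next state).
# "right": scan to the end of the number; "carry": add one, propagating carries leftward.
increment = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine(increment, "1011"))   # 1100
```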


Turing is best known for his codebreaking work at Bletchley Park's Government Code and Cypher School (GC&CS) during World War II (1939–1945).

Turing's work at GC&CS included heading Hut 8, which was tasked with cracking the German Enigma and other very difficult naval encryption.

Turing's work is credited with shortening the war by years and saving millions of lives, though its impact is hard to measure with precision.

Turing wrote "The Applications of Probability to Cryptography" and "Paper on Statistics of Repetitions" during his tenure at GC&CS, both of which were held secret for seventy years by the Government Communications Headquarters (GCHQ) until being given to the UK National Archives in 2012.



Following WWII, Turing moved to the Victoria University of Manchester, where he pursued mathematical biology while continuing his work on mathematics, stored-program digital computers, and artificial intelligence.

Turing's 1950 paper "Computing Machinery and Intelligence" looked into artificial intelligence and introduced the concept of the Imitation Game (also known as the Turing Test), in which a human judge uses a set of written questions and responses to try to distinguish between a computer program and a human.

If the computer program imitates a person to the point that the human judge cannot discern the difference between the computer program's and the human's replies, the program has passed the test, indicating that it is capable of intelligent reasoning.


Turochamp, a chess program written by Turing and his colleague D.G. Champernowne, was meant to be executed by a computer, but no machine with adequate capacity existed to test the program.

Turing instead ran the algorithm by hand to test the program.

Turing was well-recognized during his lifetime, despite the fact that most of his work remained secret until after his death.


Turing was appointed an Officer of the Order of the British Empire (OBE) in 1946 and elected a Fellow of the Royal Society (FRS) in 1951.

The Turing Award, named after him, is given annually by the Association for Computing Machinery for contributions to the area of computing.

The Turing Award, which comes with a $1 million reward, is commonly recognized as the Nobel Prize of Computing.


Turing was outspoken about his sexuality at a time when homosexuality was still illegal in the United Kingdom.

In 1952, Turing was charged with "gross indecency" under Section 11 of the Criminal Law Amendment Act 1885.

Turing was found guilty, granted probation, and sentenced to a year of "chemical castration," during which he was injected with synthetic estrogen.


Turing's conviction had an influence on his career as well.


His security clearance was withdrawn, and he was compelled to stop working for the GCHQ as a cryptographer.

Following successful campaigning for an apology and pardon, the British government enacted the so-called Alan Turing law (as part of the Policing and Crime Act 2017), which retroactively pardoned thousands of men convicted under Section 11 and other historical laws.


In 1954, Turing died of cyanide poisoning.

Although his death was officially ruled a suicide, it may have been caused by accidental inhalation of cyanide vapors.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Chatbots and Loebner Prize; General and Narrow AI; Moral Turing Test; Turing Test.


References And Further Reading

Hodges, Andrew. 2004. “Turing, Alan Mathison (1912–1954).” In Oxford Dictionary of National Biography. https://www.oxforddnb.com/view/10.1093/ref:odnb/9780198614128.001.0001/odnb-9780198614128-e-36578.

Lavington, Simon. 2012. Alan Turing and His Contemporaries: Building the World’s First Computers. Swindon, UK: BCS, The Chartered Institute for IT.

Sharkey, Noel. 2012. “Alan Turing: The Experiment that Shaped Artificial Intelligence.” BBC News, June 21, 2012. https://www.bbc.com/news/technology-18475646.



Artificial Intelligence - What Is The PARRY Computer Program?




PARRY (short for paranoia), created by Stanford University psychiatrist Kenneth Colby, was the first computer program to imitate a mental patient.

The psychiatrist-user communicates with PARRY in plain English.

PARRY's responses are intended to mirror the cognitive (mal)functioning of a paranoid patient.

In the late 1960s and early 1970s, Colby experimented with mental patient chatbots, which led to the development of PARRY.

Colby sought to illustrate that cognition is fundamentally a symbol manipulation process and that computer simulations may help psychiatric research.

Many technical aspects of PARRY were shared with Joseph Weizenbaum's ELIZA.

Both of these applications were conversational in nature, allowing the user to submit remarks in plain English.

PARRY's underlying algorithms, like ELIZA's, examined inputted phrases for essential terms to create plausible answers.





PARRY, however, was given a backstory in order to imitate the appropriate paranoid behaviors.

In this fictitious backstory, Parry was a gambler who had gotten into a dispute with a bookie.

Parry was paranoid enough to assume that the bookie would send the Mafia after him.

As a result, PARRY freely shared its delusional ideas about the Mafia, as though hoping to enlist the user's assistance.

PARRY was also programmed to be "sensitive to his parents, religion, and sex" (Colby 1975, 36).

On most other topics of conversation, the program was neutral.

If PARRY could not find a match in its database, it might respond with "I don't know," "Why do you ask that?" or by returning to an earlier subject (Colby 1975, 77).
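A toy sketch of this keyword-and-fallback approach, with invented keywords and canned replies (Colby's actual program tracked internal affect variables and was far more elaborate), might look like the following:

```python
import random
import re

# A toy keyword-matching chatbot in the spirit of PARRY (not Colby's actual rules).
# Keywords tied to the invented backstory trigger canned, suspicion-laden replies;
# unmatched input falls back to the kinds of noncommittal responses described above.

KEYWORD_REPLIES = {
    "bookie": "The bookie never paid me what he owed.",
    "mafia": "The Mafia has people watching me, you know.",
    "horses": "I went to the track once too often, that's all.",
    "afraid": "Wouldn't you be afraid if they were after you?",
}

FALLBACKS = ["I don't know.", "Why do you ask that?", "Let's get back to the bookie."]

def reply(user_input):
    words = re.findall(r"[a-z']+", user_input.lower())   # scan the input for essential terms
    for keyword, canned in KEYWORD_REPLIES.items():
        if keyword in words:
            return canned
    return random.choice(FALLBACKS)                       # no match found in the "database"

print(reply("Tell me about the bookie."))   # keyword hit
print(reply("How was your weekend?"))       # fallback response
```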

Whereas ELIZA's achievements made Weizenbaum a skeptic of AI, PARRY's findings bolstered Colby's support for computer simulations in psychiatry.

Colby chose paranoia as the mental state to simulate because paranoid behavior is comparatively rigid and hence the easiest to observe and model.

Colby felt that human cognition was a process of symbol manipulation, as did artificial intelligence pioneers Herbert Simon and Allen Newell.

PARRY's cognitive functioning resembled that of a paranoid human being as a result of this.

Colby emphasized that a psychiatrist conversing with PARRY had learnt something about human paranoia.

He saw PARRY as a tool to help novice psychiatrists get started in their careers.

PARRY's reactions might also be used to determine the most effective lines of therapeutic dialogue.

Colby hoped that systems like PARRY would assist confirm or refute psychiatric hypotheses while also bolstering the field's scientific credibility.

Colby also used PARRY to test his shame-humiliation theory of paranoia.

In the 1970s, Colby performed a series of studies to see how effectively PARRY could simulate true paranoia.

Two of these examinations resembled the Turing Test.

To begin, practicing psychiatrists were asked to interview patients using a teletype terminal, an electromechanical typewriter used to send and receive typed messages over telecommunication lines.

The doctors were unaware that PARRY was one of the patients who took part in the interviews.

The transcripts of these interviews were then distributed to a group of 100 psychiatrists.

These psychiatrists were tasked with determining which version was created by a computer.

Of the psychiatrists who responded, twenty correctly identified PARRY while another twenty did not.

A total of 100 computer scientists received transcripts.

Of the 67 computer scientists who responded, 32 identified PARRY correctly while 35 did not.

According to Colby, the findings "are akin to tossing a coin" statistically, and PARRY was not exposed (Colby 1975, 92).
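Taking the counts reported above at face value, a simple two-sided binomial test bears out the coin-toss characterization; the calculation below is a back-of-the-envelope check, not Colby's own analysis.

```python
from math import comb

# Back-of-the-envelope check of the "coin toss" claim, using the counts reported above.

def two_sided_binomial_p(k, n, p=0.5):
    """P-value: probability of a result at least as extreme as k correct out of n, under chance."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(pr for pr in probs if pr <= probs[k] + 1e-12)

# Psychiatrists: 20 correct identifications out of 40 responses.
print(round(two_sided_binomial_p(20, 40), 2))   # 1.0 -- exactly what guessing predicts
# Computer scientists: 32 correct out of 67 responses.
print(round(two_sided_binomial_p(32, 67), 2))   # about 0.8 -- indistinguishable from chance
```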



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Chatbots and Loebner Prize; ELIZA; Expert Systems; Natural Language Processing and Speech Understanding; Turing Test.


References & Further Reading:


Cerf, Vincent. 1973. “Parry Encounters the Doctor: Conversation between a Simulated Paranoid and a Simulated Psychiatrist.” Datamation 19, no. 7 (July): 62–65.

Colby, Kenneth M. 1975. Artificial Paranoia: A Computer Simulation of Paranoid Processes. New York: Pergamon Press.

Colby, Kenneth M., James B. Watt, and John P. Gilbert. 1966. “A Computer Method of Psychotherapy: Preliminary Communication.” Journal of Nervous and Mental Disease 142, no. 2 (February): 148–52.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H. Freeman and Company.

Warren, Jim. 1976. Artificial Paranoia: An NIMH Program Report. Rockville, MD: US. Department of Health, Education, and Welfare, Public Health Service, Alcohol, Drug Abuse, and Mental Health Administration, National Institute of Mental Health, Division of Scientific and Public Information, Mental Health Studies and Reports Branch.






Artificial Intelligence - Natural Language Generation Or NLG.

 




Natural Language Generation, or NLG, is the computer process by which information that cannot be easily comprehended by humans is converted into a message that is optimized for human comprehension, as well as the name of the AI area dedicated to its research and development.



In computer science and AI, the phrase "natural language" refers to what most people simply refer to as language, the mechanism by which humans interact with one another and, increasingly, with computers and robots.



Natural language is the polar opposite of "machine language," or programming language, which was created for the purpose of programming and controlling computers.

The input to NLG technology is structured data, such as scores and statistics from a sporting event, and the message generated from this data may take different forms (text or voice), such as a news report on the game.
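A toy, hypothetical illustration of this data-to-text step is sketched below; the team names and statistics are invented, and real NLG systems involve content selection, document planning, and surface realization stages far beyond a single template.

```python
# A toy data-to-text sketch: structured game statistics in, a readable sentence out.
# Team names and numbers are invented; production NLG pipelines are far more elaborate.

def game_report(stats):
    home, away = stats["home_score"], stats["away_score"]
    if home > away:
        winner, loser = stats["home_team"], stats["away_team"]
    else:
        winner, loser = stats["away_team"], stats["home_team"]
    verb = "edged" if abs(home - away) <= 3 else "beat"    # simple content-selection rule
    return (f"{winner} {verb} {loser} {max(home, away)}-{min(home, away)}, "
            f"led by {stats['top_scorer']} with {stats['top_points']} points.")

print(game_report({
    "home_team": "Rivertown", "away_team": "Lakeside",
    "home_score": 98, "away_score": 95,
    "top_scorer": "J. Doe", "top_points": 31,
}))
# Rivertown edged Lakeside 98-95, led by J. Doe with 31 points.
```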

The origins of NLG may be traced back to the mid-twentieth century, when computers were first introduced.

Entering data into early computers and then deciphering the results was complex and time-consuming and required highly specialized skills.

These difficulties with machine input and output were seen by researchers and developers as communication issues.



Communication is also essential for gaining knowledge and information, as well as exhibiting intelligence.

The answer suggested by researchers was to work toward adapting human-machine communication to the most "natural" form of communication, that is, people's own languages.

Natural Language Processing is concerned with how robots can understand human language, while Natural Language Generation is concerned with the creation of communications customized to people.

Some researchers in this field, like those working in artificial intelligence, are interested in developing systems that generate messages from data, while others are interested in studying the human process of language and message formation.

NLG is a subfield of Computational Linguistics, as well as being a branch of artificial intelligence.

The rapid expansion of NLG technologies has been facilitated by the proliferation of technology for producing, collecting, and linking enormous swaths of data, as well as advancements in processing power.



NLG has a wide range of applications in a variety of sectors, including journalism and media.

Large international and national news organizations throughout the globe have begun to use automated news-writing tools based on NLG technology into their news production.

Journalists utilize the program in this context to create informative reports from diverse datasets, such as lists of local crimes, corporate earnings reports, and synopses of athletic events.

Companies and organizations may also utilize NLG systems to create automated summaries of their own or external data.

Two related areas of study are computational narrative and the development of automated narrative-generation systems, which focus on producing fictional stories and characters for use in media and entertainment, such as video games, as well as in education and learning.



NLG is likely to improve further in the future, allowing future systems to create more sophisticated and nuanced messages across a wider range of textual conventions.

NLG's development and use are still in their early stages, thus it's unclear what the entire influence of NLG-based technologies will be on people, organizations, industries, and society.

Current concerns include whether NLG technologies will have a beneficial or detrimental impact on the workforce in the sectors where they are being implemented, as well as the legal and ethical ramifications of having computers rather than people generate both factual and fictional content.

There are also bigger philosophical questions around the connection between communication, language usage, and how humans have defined what it means to be human socially and culturally.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 



Natural Language Processing and Speech Understanding; Turing Test; Workplace Automation.


References & Further Reading:


Guzman, Andrea L. 2018. “What Is Human-Machine Communication, Anyway?” In Human-Machine Communication: Rethinking Communication, Technology, and Ourselves, edited by Andrea L. Guzman, 1–28. New York: Peter Lang.

Lewis, Seth C., Andrea L. Guzman, and Thomas R. Schmidt. 2019. “Automation, Journalism, and Human-Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News.” Digital Journalism 7, no. 4: 409–27.

Licklider, J. C. R. 1968. “The Computer as Communication Device.” In In Memoriam: J. C. R. Licklider, 1915–1990, edited by Robert W. Taylor, 21–41. Palo Alto, CA: Systems Research Center.

Marconi, Francesco, Alex Siegman, and Machine Journalist. 2017. The Future of Augmented Journalism: A Guide for Newsrooms in the Age of Smart Machines. New York: Associated Press. https://insights.ap.org/uploads/images/the-future-of-augmented-journalism_ap-report.pdf.

Paris, Cecile L., William R. Swartout, and William C. Mann, eds. 1991. Natural Language Generation in Artificial Intelligence and Computational Linguistics. Norwell, MA: Kluwer Academic Publishers.

Riedl, Mark. 2017. “Computational Narrative Intelligence: Past, Present, and Future.” Medium, October 25, 2017. https://medium.com/@mark_riedl/computational-narrative-intelligence-past-present-and-future-99e58cf25ffa.





Artificial Intelligence - What Is The The Moral Turing Test, Or Ethical Turing Test?



 


The Moral Turing Test (MTT), also known as the Ethical Turing Test, is a variant of the Turing Test, which was devised by the mathematician and computer scientist Alan Turing (1912–1954).

In the original Turing Test, a human judge uses a series of written questions and replies to try to tell the difference between a computer program and a person.

If the computer program imitates a person to the point that the human judge cannot discern the difference between the computer program's and the human's replies, the program has passed the test, indicating that it is capable of intelligent reasoning.

The Moral Turing Test is a more precise version of the Turing Test that is used to assess a machine's ethical decision-making.

The machine is initially taught broad ethical standards and how to obey them.

When faced with an ethical problem, the computer should be able to make judgments based on those ethical standards.

The choices of the computer are then compared with those of a human control, usually an ethicist.

The Moral Turing Test is usually only used in certain settings that are relevant to a specific area of study.

If the machine is presented with an ethical problem about health care, for example, its choice will be compared to that of a human health-care professional rather than a generic human control.



The Moral Turing Test has been regarded as a flawed method of determining a machine's capacity to exercise moral agency.

The Turing Test uses imitation to determine whether a computer can think, but detractors of the Moral Turing Test argue that, when an ethical issue is at stake, imitation may be achieved through deceptive replies rather than genuine moral reasoning.

Some also argue that morality cannot be determined solely on the basis of verbal responses.

Rather, assessing moral agency would require the judge to observe what goes on in the background: the reasoning, the weighing of alternatives, the decision-making, and the actual action, all of which a classic Turing Test conceals from view.

The comparative Moral Turing Test (cMTT), the Total Turing Test, and verification are all alternatives and modifications to the Moral Turing Test.



In a comparative Moral Turing Test, the judge compares the machine's narrated acts to a human control rather than its spoken replies.

In a Total Turing Test, the judge may see the machine's real activities and interactions in comparison to the human control.

Verification takes a different approach than testing, concentrating on the method behind the machine's reaction rather than the result.

Verification is assessing the design and performance of the machine to determine how it makes decisions.

Verification proponents argue that focusing on the process rather than the outcome acknowledges that moral questions rarely have a single correct answer, and that the process by which the machine arrived at an outcome reveals more about the machine's ability to make ethical decisions than the decision itself.




~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Turing, Alan; Turing Test


References & Further Reading:


Arnold, Thomas, and Matthias Scheutz. 2016. “Against the Moral Turing Test: Accountable Design and the Moral Reasoning of Autonomous Systems.” Ethics and Information Technology 18:103–15.

Gerdes, Anne, and Peter Øhrstrøm. 2015. “Issues in Robot Ethics Seen through the Lens of a Moral Turing Test.” Journal of Information, Communication, and Ethics in Society 13, no. 2: 98–109.

Luxton, David D., Susan Leigh Anderson, and Michael Anderson. 2016. “Ethical Issues and Artificial Intelligence Technologies in Behavioral and Mental Health Care.” In Artificial Intelligence in Behavioral and Mental Health Care, edited by David D. Luxton, 255–76. Amsterdam: Elsevier Academic Press.




Artificial Intelligence - General and Narrow Categories Of AI.






There are two categories of artificial intelligence: general (also called strong or full) AI and narrow (also called weak, limited, or specialized) AI.

General AI of the kind seen in science fiction does not yet exist in the real world.

Machines with general intelligence would be capable of completing every intellectual task that humans can.

Such a system would think in abstract terms, establish connections, and communicate innovative ideas in the same manner that people do, displaying the ability to reason abstractly and solve problems.



Such a computer would be capable of thinking, planning, and recalling information from the past.

While the aim of general AI has yet to be achieved, there are more and more instances of narrow AI.

These are machines that perform at human (or even superhuman) levels on certain tasks.

Computers that have learnt to play complicated games have abilities, techniques, and behaviors that are comparable to, if not superior to, those of the most skilled human players.

AI systems have also been developed that can translate between languages in real time, interpret and respond to natural language (both spoken and written), and recognize, identify, and sort images based on their content.

However, the ability to generalize knowledge or skills is still largely a human accomplishment.

Nonetheless, there is a lot of work being done in the field of general AI right now.

It will be difficult to determine when a computer develops human-level intelligence.

Several serious and humorous tests have been proposed to determine whether a computer has reached the level of general AI.

The Turing Test is arguably the most renowned of these examinations.

In this test, a machine and a person each converse with a human judge who cannot see them.

The judge must figure out which respondent is the machine and which is the human.

The machine passes the test if it can fool the judge a prescribed percentage of the time.

The Coffee Test is a more fantastical test in which a machine enters a typical household and brews coffee.



It has to find the coffee machine, look for the coffee, add water, brew the coffee, and pour it into a cup.

Another is the Flat Pack Furniture Test, which involves a machine receiving, unpacking, and assembling a piece of furniture based only on the instructions supplied.

Some scientists, as well as many science fiction writers and fans, believe that once intelligent machines reach a tipping point, they will be able to improve exponentially.

AI-based beings that far exceed human capabilities might be one conceivable result.

The Singularity is the point at which AI assumes control of its own self-improvement, producing artificial superintelligence (ASI).

If ASI is achieved, it will have unforeseeable consequences for human society.

Some pundits worry that ASI would jeopardize humanity's safety and dignity.

It's up for dispute whether the Singularity will ever happen, and how dangerous it may be.

Narrow AI applications are becoming more popular across the globe.

Machine learning (ML) is at the heart of most new applications, and most AI examples in the news are connected to this subset of technology.

Traditional or conventional algorithms are not the same as machine learning programs.

In programs that cannot learn, a computer programmer actively adds code to account for every action of an algorithm.

All of the decisions made along the process are governed by the programmer's guidelines.

This necessitates the programmer imagining and coding for every possible circumstance that an algorithm may face.

Such program code is bulky and often inadequate, especially when it must be updated frequently to account for new or unanticipated scenarios.

The utility of hard-coded algorithms approaches its limit in cases where the criteria for optimum judgments are unclear or impossible for a human programmer to foresee.

Machine learning is the process of training a computer to detect and identify patterns via examples rather than predefined rules.



This is achieved, according to Google engineer Jason Mayes, by reviewing extremely large quantities of training data or by some other kind of programmed learning step.

New patterns are extracted by processing this training data.

The system may then classify newly unknown data based on the patterns it has already found.

Machine learning allows an algorithm to recognize patterns or rules underlying decision-making processes on its own.

Machine learning also allows a system's output to improve over time as it gains more experience (Mayes 2017).

A human programmer continues to play a vital role in this learning process, influencing results by making choices like developing the exact learning algorithm, selecting the training data, and choosing other design elements and settings.

Machine learning is powerful once it's up and running because it can adapt and enhance its ability to categorize new data without the need for direct human interaction.

In other words, the quality of the output increases as the system gains experience.
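The contrast can be made concrete with a small sketch: below, a hand-coded rule and a classifier that learns its own rule from labeled examples (a tiny perceptron trained on invented two-feature data) perform the same task, but only the second derives its decision boundary from the data.

```python
# Contrast: a hand-coded rule vs. a rule learned from labeled examples (a tiny perceptron).
# The two-feature data points and labels below are invented for illustration.

def hand_coded(point):
    # Traditional approach: the programmer states the decision rule explicitly.
    return 1 if point[0] + point[1] > 1.0 else 0

def train_perceptron(examples, epochs=20, lr=0.1):
    # Machine learning approach: weights are adjusted from examples, not written by hand.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - pred                      # learn only from mistakes
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return lambda p: 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else 0

training_data = [((0.1, 0.2), 0), ((0.3, 0.4), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]
learned = train_perceptron(training_data)

for point in [(0.2, 0.1), (0.9, 0.8)]:
    print(point, hand_coded(point), learned(point))   # both classifiers agree on new data
```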

Artificial intelligence is a broad word that refers to the science of making computers intelligent.

AI is a computer system that can collect data and utilize it to make judgments or solve issues, according to scientists.

Another popular scientific definition of AI is "a software program paired with hardware that can receive (or sense) inputs from the world around it, evaluate and analyze those inputs, and create outputs and suggestions without the assistance of a person." When programmers claim an AI system can learn, they're referring to the program's ability to change its own processes in order to provide more accurate outputs or predictions.

AI-based systems are now being developed and used in practically every industry, from agriculture to space exploration, and in applications ranging from law enforcement to online banking.

The methods and techniques used in computer science are always evolving, extending, and improving.

Other terminology linked to machine learning, such as reinforcement learning and neural networks, are important components of cutting-edge artificial intelligence systems.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Superintelligence; Turing, Alan; Turing Test.


Further Reading:


Kelnar, David. 2016. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Mayes, Jason. 2017. Machine Learning 101. https://docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/htmlpresent.

Müller, Vincent C., and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 553–71. New York: Springer.

Russell, Stuart, and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

Samuel, Arthur L. 1988. “Some Studies in Machine Learning Using the Game of Checkers I.” In Computer Games I, 335–65. New York: Springer.



What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...