
Artificial Intelligence - Who Was John McCarthy?

 


John McCarthy (1927–2011) was an American computer scientist and mathematician best known for helping to found the field of artificial intelligence in the late 1950s and for championing the use of formal logic in AI research.

McCarthy was a creative thinker who earned multiple accolades for his contributions to programming languages and operating systems research.

Throughout McCarthy's life, however, artificial intelligence and "formalizing common sense" remained his primary research interest (McCarthy 1990).

As a graduate student, McCarthy first encountered the ideas that would lead him to AI at the Hixon Symposium on "Cerebral Mechanisms in Behavior" in 1948.

The symposium was held at the California Institute of Technology, where McCarthy had just finished his undergraduate studies and was newly enrolled in the graduate mathematics program.

In the United States, machine intelligence had by 1948 become a subject of substantial academic interest under the broad banner of cybernetics, and many renowned cyberneticists, notably Princeton mathematician John von Neumann, attended the symposium.

McCarthy moved to Princeton's mathematics department a year later, where he discussed some early ideas inspired by the symposium with von Neumann.

McCarthy never published the work, despite von Neumann's urging, because he came to believe that cybernetics could not answer his questions about human knowledge.

McCarthy finished a PhD on partial differential equations at Princeton.

He stayed on at Princeton as an instructor after completing the degree in 1951, and in the summer of 1952 he had the chance to work at Bell Labs with Claude Shannon, the cyberneticist and inventor of information theory, whom he persuaded to collaborate on an edited collection of writings on machine intelligence.

The resulting volume, Automata Studies, drew contributions from fields ranging from pure mathematics to neuroscience.

McCarthy, however, felt that the published papers paid too little attention to the central question of how to build intelligent machines.

McCarthy joined the mathematics department at Stanford in 1953 but was let go two years later; he later speculated that he had spent too much time thinking about intelligent machines and not enough on his mathematical research.

In 1955, he accepted a position at Dartmouth, just as IBM was preparing to establish the New England Computation Center at MIT.

The New England Computation Center gave Dartmouth access to an IBM computer that was installed at MIT and made accessible to a group of New England colleges.

Through the IBM initiative, McCarthy met IBM researcher Nathaniel Rochester, who recruited McCarthy to IBM in the summer of 1955 to work with his research group.

McCarthy persuaded Rochester of the need for more research on machine intelligence, and with Rochester, Shannon, and Marvin Minsky, then a graduate student at Princeton, he submitted a proposal to the Rockefeller Foundation for a "Summer Research Project on Artificial Intelligence," which contained the first known use of the phrase "artificial intelligence." Although the Dartmouth project is usually regarded as a watershed moment in the development of AI, the gathering did not go as McCarthy had envisioned.

The Rockefeller Foundation funded the proposal at only half the requested budget, since it came from a relatively junior professor in such a novel field of research; Shannon's reputation, however, carried substantial weight with the Foundation.

Furthermore, because the event stretched over many weeks in the summer of 1956, only a handful of the invitees were able to attend for the entire period.

As a consequence, the Dartmouth gathering was a fluid affair with an ever-changing and unpredictable guest list.

Despite its chaotic execution, the meeting was crucial in establishing AI as a distinct field of research.

While still at Dartmouth in 1957, McCarthy won a Sloan fellowship to spend a year at MIT, closer to IBM's New England Computation Center.

In 1958, MIT offered McCarthy a position in its Electrical Engineering department, which he accepted.

Later, he was joined by Minsky, who worked in the mathematics department.

In 1958, McCarthy and Minsky proposed the creation of an official AI laboratory to Jerome Wiesner, head of MIT's Research Laboratory of Electronics.

Wiesner agreed, on the condition that McCarthy and Minsky take on six freshly admitted graduate students, and the "artificial intelligence project" began training its first generation of students.

That same year, McCarthy published his first paper on artificial intelligence.

In the paper, "Programs with Common Sense," he described a computer system he called the Advice Taker that would be capable of accepting and acting on instructions given in ordinary natural language by nonexpert users.

McCarthy would later describe the Advice Taker as the start of a research program aimed at "formalizing common sense." He believed that everyday commonsense notions, such as understanding that if you don't know a phone number, you'll need to look it up before calling, could be written as sentences of formal logic and fed into a computer, enabling the machine to reach the same conclusions as humans.

Such formalization of commonsense knowledge, McCarthy believed, was the key to artificial intelligence.
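To make the idea concrete, the phone-number example might be rendered in first-order logic roughly as follows; this is an illustrative formula in the spirit of the Advice Taker, with invented predicate names, not an axiom from McCarthy's paper:

\[
\forall p \, \bigl( \mathit{want}(I, \mathit{call}(p)) \land \neg \mathit{knows}(I, \mathit{number}(p)) \rightarrow \mathit{shouldDo}(I, \mathit{lookup}(\mathit{number}(p))) \bigr)
\]

Read aloud: for any person p, if I want to call p and do not know p's number, then the sensible next step is to look the number up. Given enough such axioms, McCarthy argued, a machine could deduce reasonable courses of action the way people do.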

McCarthy's paper, presented at the United Kingdom National Physical Laboratory's "Symposium on Mechanisation of Thought Processes," helped establish the symbolic program of AI research.

By the late 1950s, McCarthy's research centered on AI, although he was also involved in a range of other computing topics.

In 1957, he was appointed to an Association for Computing Machinery group charged with developing the ALGOL programming language, which went on to become the de facto language for describing algorithms in academic publications for decades.

He created the LISP programming language for AI research in 1958, and its descendants are still widely used in industry and academia today.

In addition to his work on programming languages, McCarthy contributed to operating systems research through the development of time-sharing systems.

Early computers were large and costly, and they could only be operated by one person at a time.

From his first encounter with computers at IBM in 1955, McCarthy saw the need for many users across a large institution, such as a university or hospital, to be able to use its computer systems concurrently from terminals in their own offices.

McCarthy advocated for research on such systems at MIT, serving on a university committee that studied the issue and ultimately helping to develop MIT's Compatible Time-Sharing System (CTSS).

Although McCarthy left MIT before the CTSS work was completed, his advocacy with J.C.R. Licklider, a future office head at the Advanced Research Projects Agency (the predecessor of DARPA), while a consultant at Bolt Beranek and Newman in Cambridge, was instrumental in helping MIT secure significant federal support for computing research.

In 1962, Stanford professor George Forsythe recruited McCarthy to join what would become the second computer science department in the United States, after Purdue's.

McCarthy insisted on coming only as a full professor, a demand he expected would be more than Forsythe could secure for so young a researcher.

Forsythe nonetheless persuaded Stanford to grant McCarthy a full professorship, and McCarthy moved to Stanford, where in 1965 he established the Stanford AI Laboratory.

Until his retirement in 2000, McCarthy oversaw research at Stanford on AI topics such as robotics, expert systems, and chess.

McCarthy grew up in a family in which both parents were ardent members of the Communist Party, and he had a lifelong interest in Russian affairs.

He maintained many professional ties with Soviet cybernetics and AI researchers, traveling and lecturing in the Soviet Union in the mid-1960s, and even arranged a 1965 chess match between a Stanford chess program and a Soviet counterpart, which the Soviet program won.

He developed many foundational concepts in symbolic AI theory while at Stanford, such as circumscription, which expresses the idea that a computer must be allowed to make reasonable assumptions about problems presented to it; otherwise, even simple scenarios would have to be specified in such exacting logical detail that the task would be all but impossible.
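In its standard second-order form, which is a textbook rendering rather than a quotation from McCarthy's papers, circumscribing a predicate P in a theory A(P) yields

\[
\mathrm{Circ}[A; P] \;=\; A(P) \land \neg \exists p \, \bigl( A(p) \land p < P \bigr)
\]

where p < P abbreviates that p holds of a strict subset of the objects P holds of; the formula asserts that P is true of as few objects as the theory allows. In the classic example, circumscribing an "abnormal" predicate over the axioms "birds fly unless abnormal" and "Tweety is a bird" licenses the default conclusion that Tweety flies, while leaving that conclusion retractable if Tweety later turns out to be a penguin.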

McCarthy's accomplishments were recognized with numerous honors, including the 1971 Turing Award, the 1988 Kyoto Prize, election to the National Academy of Sciences in 1989, the 1990 National Medal of Science, and the 2003 Benjamin Franklin Medal.

McCarthy was a brilliant thinker who continually imagined new technologies, such as a space elevator for cheaply lifting material into orbit and a system of carts suspended from wires to improve urban transportation.

In a 2008 interview, McCarthy was asked what he considered the most important problems in computing, and he answered without hesitation: "Formalizing common sense," the same goal that had inspired him from the start.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cybernetics and AI; Expert Systems; Symbolic Logic.


References & Further Reading:


Hayes, Patrick J., and Leora Morgenstern. 2007. “On John McCarthy’s 80th Birthday, in Honor of His Contributions.” AI Magazine 28, no. 4 (Winter): 93–102.

McCarthy, John. 1990. Formalizing Common Sense: Papers, edited by Vladimir Lifschitz. Norwood, NJ: Ablex.

Morgenstern, Leora, and Sheila A. McIlraith. 2011. “John McCarthy’s Legacy.” Artificial Intelligence 175, no. 1 (January): 1–24.

Nilsson, Nils J. 2012. “John McCarthy: A Biographical Memoir.” Biographical Memoirs of the National Academy of Sciences. http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/mccarthy-john.pdf.



Artificial Intelligence - What Is The Deep Blue Computer?





Since the 1950s, artificial intelligence has been used to play chess.

Chess has been studied for a variety of reasons.

First, the game is easy to represent in a computer: a limited number of pieces occupy distinct squares on the board.

Second, the game is genuinely challenging to play.

There are an enormous number of possible states (piece configurations), and exceptional chess players evaluate both their own moves and their opponents', which means they must predict what could happen many turns into the future.

Finally, chess is a competitive sport.

When a human competes against a computer, the game becomes a comparison of their intelligence.

In 1997, Deep Blue, the first computer to beat a reigning world chess champion, demonstrated that machine intelligence was catching up with humans.





Deep Blue's story begins in 1985.

Feng-Hsiung Hsu, Thomas Anantharaman, and Murray Campbell created ChipTest, a chess-playing computer, while at Carnegie Mellon University.

The computer relied on brute force, generating and comparing sequences of moves with the alpha-beta search algorithm to determine the best one.

Each generated position was scored by an evaluation function, allowing different positions to be compared.

Furthermore, the search was adversarial, anticipating the opponent's moves in order to find a way to defeat them.
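The following Python sketch shows the core of this adversarial alpha-beta scheme under simplifying assumptions: a hand-built toy game tree stands in for ChipTest's move generator and special-purpose hardware, and numeric leaf values stand in for its evaluation function. It illustrates the technique, not ChipTest's actual code.

```python
def alpha_beta(node, alpha, beta, maximizing):
    """Score a game-tree node: leaves are static evaluation scores,
    interior nodes are lists of child positions."""
    if not isinstance(node, list):
        return node  # leaf: return the evaluation function's score
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent has a better option elsewhere: prune
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alpha_beta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:  # we have a better option elsewhere: prune
            break
    return value

# A depth-2 toy tree: the maximizer picks the branch whose worst case is
# best, max(min(3,5), min(6,9), min(1,2)) = 6, pruning inside the last branch.
tree = [[3, 5], [6, 9], [1, 2]]
print(alpha_beta(tree, float("-inf"), float("inf"), True))  # -> 6
```

In a real engine the tree is never built explicitly: positions are generated on the fly, and the search depth is capped by the tournament clock, which is exactly the time and memory restriction described next.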

If a computer has enough time and memory to execute the calculations, it can theoretically produce and evaluate an unlimited number of moves.

When used in tournament play, however, the machine is limited in both respects.

Thanks to a single special-purpose chip, ChipTest could generate and assess 50,000 moves per second.

In 1988, the search process was enhanced with singular extensions, which can quickly identify a move that is superior to all alternatives.

By settling on superior moves quickly, ChipTest could build longer sequences and see farther ahead in the game, testing the limits of human players' foresight.

Mike Browne and Andreas Nowatzyk joined the team as ChipTest developed into Deep Thought.

Thanks to two upgraded move-generator chips, Deep Thought could process about 700,000 chess moves per second.

Deep Thought defeated Bent Larsen in 1988, becoming the first computer to defeat a chess grandmaster.

After IBM recruited the majority of the development team, work on Deep Thought continued.

The team now set its sights on defeating the world's finest chess player.





At the time, Garry Kasparov was the finest chess player in the world, and among the best of any generation.

Kasparov, who was born in Baku, Azerbaijan, in 1963, won the Soviet Junior Championship when he was twelve years old.

He was the youngest player to qualify for the Soviet Chess Championship at the age of fifteen.

He won the under-twenty world championship when he was seventeen years old.

Kasparov also became the youngest world chess champion ever, winning the title at twenty-two in 1985.

He held the championship until 1993, when he was forced to relinquish it after quitting the International Chess Federation.

He immediately won the Classical World Championship, a title he held from 1993 to 2000.

For most of the period from 1986 until his retirement in 2005, Kasparov was the top-ranked chess player in the world.

Deep Thought faced off against Kasparov in a two-game match in 1989.

Kasparov won both games, defeating Deep Thought with ease.

Deep Thought evolved into Deep Blue, which played only two public matches, both against Kasparov.

Going into the matches, Kasparov faced an unusual disadvantage.

Like many chess players, he would scout his opponents before matches, watching them play or studying records of their tournament games to gain insight into their style and methods.

Deep Blue, by contrast, had no public match history, having played only private games against its developers before facing Kasparov.

As a result, Kasparov was unable to scout Deep Blue.

The developers, on the other hand, had access to Kasparov's match history, allowing them to tailor Deep Blue to his playing style.

Despite this, Kasparov remained confident, claiming that no machine would ever be able to defeat him.

On February 10, 1996, Deep Blue and Kasparov played their first six-game match in Philadelphia.

Deep Blue won the opening game, becoming the first machine to defeat a reigning world champion in a single game.

Kasparov went on to win the match with three victories and two draws.

The contest drew international attention, and a rematch was planned.

After a series of improvements, Deep Blue and Kasparov faced off in another six-game match in May 1997 at the Equitable Center in New York City.

The match was played before a live audience and was broadcast.

At this point, Deep Blue was composed of 400 special-purpose chips capable of searching through 200,000,000 chess moves per second.

Kasparov won the first game, while Deep Blue won the second.

The following three games were draws.

The final game would determine the match.

In this final game, Deep Blue capitalized on a mistake by Kasparov, causing the champion to concede after nineteen moves.

Deep Blue became the first machine ever to defeat a reigning world champion in a match.

Kasparov believed that a human had interfered with the match, providing Deep Blue with winning moves.

The claim was based on a move in the second game, in which Deep Blue made a sacrifice that (to many observers) suggested a different strategy than the machine had used in earlier games.

The move had a significant impact on Kasparov, unsettling him for the remainder of the match and affecting his play.

Two factors may have combined to generate the move.

First, Deep Blue underwent modifications between the first and second games to correct strategic flaws, which changed how it played.

Second, designer Murray Campbell mentioned in an interview that when the machine could not decide which move to make, it would select one at random, so there was always a chance of a surprising move.

Kasparov requested a rematch but was denied.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Demis Hassabis.



Further Reading:


Campbell, Murray, A. Joseph Hoane Jr., and Feng-Hsiung Hsu. 2002. “Deep Blue.” Artificial Intelligence 134, no. 1–2 (January): 57–83.

Hsu, Feng-Hsiung. 2004. Behind Deep Blue: Building the Computer That Defeated the World Chess Champion. Princeton, NJ: Princeton University Press.

Kasparov, Garry. 2018. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. London: John Murray.

Levy, Steven. 2017. “What Deep Blue Tells Us about AI in 2017.” Wired, May 23, 2017. https://www.wired.com/2017/05/what-deep-blue-tells-us-about-ai-in-2017/.


