Artificial Intelligence - How Is AI Contributing To Cybernetics?

 





The study of communication and control in live creatures and machines is known as cybernetics.

Although the phrase "cybernetic thinking" is no longer generally used in the United States, it pervades computer science, engineering, biology, and the social sciences today.

Throughout the last half-century, cybernetic connectionist and artificial neural network approaches to information theory and technology have often clashed, and in some cases hybridized, with symbolic AI methods.

Norbert Wiener (1894–1964), who coined the term "cybernetics" from the Greek word for "steersman," saw the field as a unifying force that brought disparate topics like game theory, operations research, theory of automata, logic, and information theory together and elevated them.

Wiener argued in Cybernetics, or Control and Communication in the Animal and the Machine (1948), that contemporary science had become too much of a specialist's playground as a consequence of tendencies dating back to the early Enlightenment.

Wiener envisioned a period when experts might collaborate "not as minions of some great administrative officer, but united by the desire, indeed by the spiritual imperative, to comprehend the area as a whole, and to give one another the power of that knowledge" (Wiener 1948b, 3).

For Wiener, cybernetics provided researchers with access to many sources of knowledge while maintaining their independence and unbiased detachment.

Wiener also believed that man and machine should be seen as basically interchangeable epistemologically.

The biological sciences and medicine, according to Wiener, would remain semi-exact and dependent on observer subjectivity until these common components were discovered.



In the setting of World War II (1939–1945), Wiener developed his cybernetic theory.

Operations research and game theory, for example, are mathematically rich interdisciplinary sciences that had already been used to locate German submarines and to devise the best feasible solutions to complex military decision-making challenges.

In his role as a military adviser, Wiener committed himself to the task of deploying modern cybernetic weapons against the Axis powers.

To that purpose, Wiener focused on deciphering the feedback processes involved in curvilinear flight prediction and applying these concepts to the development of advanced fire-control systems for shooting down enemy aircraft.

Claude Shannon, a long-serving Bell Labs researcher, went even further than Wiener in attempting to bring cybernetic ideas to life, most notably in his experiments with Theseus, an electromechanical mouse that used digital relays and a feedback process to learn how to navigate mazes based on previous experience.

Shannon created a slew of other automata that mimicked the behavior of thinking machines.

Shannon's mentees, including AI pioneers John McCarthy and Marvin Minsky, followed in his footsteps in treating the mind as a symbolic information processor.

McCarthy, who is often credited with establishing the field of artificial intelligence, studied the mathematical logic that underpins human thought.



Minsky opted to research neural network models as a machine imitation of human vision.

The so-called McCulloch-Pitts neurons were the core components of cybernetic understanding of human cognitive processing.

These neurons, named after Warren McCulloch and Walter Pitts, were strung together by axons for communication, forming a cybernetic system that crudely simulated the wet science of the brain.

Pitts admired Wiener's straightforward analogy of cerebral tissue to vacuum tube technology, and saw these switching devices as metallic analogues to organic cognitive components.

McCulloch-Pitts neurons were believed to be capable of mimicking basic logical processes required for learning and memory.

In the 1940s, Pitts perceived a close binary equivalence between the electrical discharges produced by these devices and the electrochemical nerve impulses generated in the brain.

In their most basic form, McCulloch-Pitts neurons take inputs that are either zero or one and produce an output that is likewise zero or one.

Each input may be categorized as excitatory or inhibitory.
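
To make the idea concrete, here is a minimal, modern sketch (not historical code) of a McCulloch-Pitts-style threshold unit; the weight values, thresholds, and example gates are illustrative assumptions.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: inputs and output are 0 or 1.
    Excitatory inputs carry weight +1, inhibitory inputs weight -1."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both excitatory inputs must fire to reach the threshold of 2.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0

# Logical NOT via a single inhibitory input and a threshold of 0.
print(mcculloch_pitts([1], [-1], threshold=0))  # 0
print(mcculloch_pitts([0], [-1], threshold=0))  # 1
```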

It was therefore merely a short step from artificial to animal memory for Pitts and Wiener.

Donald Hebb, a Canadian neuropsychologist, made even more significant contributions to the research of artificial neurons.

These were detailed in his book The Organization of Behavior, published in 1949.

Hebbian theory explains associative learning as a process in which neurons that repeatedly fire together become more strongly connected.
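
A minimal sketch of the Hebbian update rule, often summarized as "cells that fire together wire together"; the learning rate, activity values, and variable names are illustrative assumptions.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebbian rule: grow each weight in proportion to the product of
    presynaptic activity x and postsynaptic activity y (delta_w = eta * x * y)."""
    return w + eta * x * y

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])   # presynaptic activity
y = 1.0                          # postsynaptic activity, co-active with x
for _ in range(5):
    w = hebbian_update(w, x, y)
print(w)  # weights grow only where pre- and post-synaptic units co-fire: [0.5, 0., 0.5]
```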

In his study of the artificial "perceptron," a model and algorithm that weighted inputs so that it could be taught to detect particular kinds of patterns, U.S. Navy researcher Frank Rosenblatt expanded the metaphor.

The eye and cerebral circuitry of the perceptron could approximately discern between pictures of cats and dogs.
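
Below is a hedged sketch of the perceptron learning rule Rosenblatt introduced, applied to a toy, linearly separable dataset; the features and labels are hypothetical stand-ins, not Rosenblatt's actual image data.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Perceptron rule: nudge weights toward inputs that were misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred          # 0 if correct, +1 or -1 if wrong
            w += lr * err * xi
            b += lr * err
    return w, b

# Toy, linearly separable stand-in for the "cats vs. dogs" idea (hypothetical features).
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -1.5]])
y = np.array([1, 1, 0, 0])
w, b = train_perceptron(X, y)
print([(1 if xi @ w + b > 0 else 0) for xi in X])  # [1, 1, 0, 0]
```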

The Navy saw the perceptron as "the embryo of an electronic computer that it anticipates to be able to walk, speak, see, write, reproduce itself, and be cognizant of its existence," according to a 1958 interview with Rosenblatt (New York Times, July 8, 1958, 25).

Wiener, Shannon, McCulloch, Pitts, and other cyberneticists were nourished by the famed Macy Conferences on Cybernetics in the 1940s and 1950s, which attempted to automate human comprehension of the world and the learning process.

The gatherings also acted as a forum for discussing artificial intelligence issues.

The divide between the fields widened over time, but it was already visible at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.

Organic cybernetics research was no longer well-defined in American scientific practice by 1970.

Computing sciences and technology evolved from machine cybernetics.

Cybernetic theories are now on the periphery of social and hard scientific disciplines such as cognitive science, complex systems, robotics, systems theory, and computer science, but they were critical to the information revolution of the twentieth and twenty-first centuries.

In recent studies of artificial neural networks and unsupervised machine learning, Hebbian theory has seen a resurgence of attention.

Cyborgs—beings made up of biological and mechanical parts that augment normal functions—could be regarded as a subset of cybernetics (an area once known as "medical cybernetics" in the 1960s).


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Macy Conferences; Warwick, Kevin.


Further Reading


Ashby, W. Ross. 1956. An Introduction to Cybernetics. London: Chapman & Hall.

Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” Critical Inquiry 21, no. 1 (Autumn): 228–66.

Kline, Ronald R. 2017. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press.

Mahoney, Michael S. 1990. “Cybernetics and Information Technology.” In Companion to the History of Modern Science, edited by R. C. Olby, G. N. Cantor, J. R. R. Christie, and M. J. S. Hodge, 537–53. London: Routledge.

“New Navy Device Learns by Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. New York Times, July 8, 25.

Wiener, Norbert. 1948a. “Cybernetics.” Scientific American 179, no. 5 (November): 14–19.

Wiener, Norbert. 1948b. Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.



Artificial Intelligence - What Was The DENDRAL Expert System?

 



DENDRAL was an early expert system, developed by Nobel laureate Joshua Lederberg and computer scientist Edward Feigenbaum, aimed at analyzing and identifying complex chemical compounds.

DENDRAL (from the Greek word for tree) was created by Feigenbaum and Lederberg at Stanford University's Artificial Intelligence Laboratory in the 1960s.

There was considerable expectation at the time that computers capable of analyzing extraterrestrial compounds for evidence of life might aid NASA's 1975 Viking Mission to Mars.

DENDRAL relocated to Stanford's Chemistry Department in the 1970s, where it was directed by Carl Djerassi, a well-known scientist in the area of mass spectrometry, until 1983.

Because there was no overarching theory of mass spectrometry, molecular chemists used rules of thumb to analyze the raw data obtained by a mass spectrometer to identify organic chemicals.

Computers, according to Lederberg, might make organic chemistry more methodical and predictable.

He began by creating an exhaustive search algorithm.

The provision of heuristic search criteria was Feigenbaum's first contribution to the project.

These guidelines codified what scientists already knew about mass spectrometry.

As a consequence, a groundbreaking AI system was created that provided the most likely responses rather than all potential ones.

According to Timothy Lenoir, a historian of science, DENDRAL "would analyze the data, generate a list of candidate structures, predict the mass spectra of those structures using mass spectrometry theory, and select as a hypothesis the structure whose spectrum most closely matched the data" (Lenoir 1998, 31).

Feigenbaum said that he coined the phrase "expert system" around 1968. DENDRAL is called an expert system because it incorporates scientific expertise.

Computer scientists took the information that human chemists had retained in their working minds and made it explicit in DENDRAL's IF-THEN search criteria.

An expert system, in technical terms, is a computer system that has a clear separation between the knowledge base and the inference engine.

This, in theory, enables human specialists to examine the rules of a software like DENDRAL, comprehend its structure, and provide suggestions on how to improve it.
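
As a rough illustration of that separation, here is a minimal forward-chaining sketch; the rules and fact names are invented placeholders, not DENDRAL's actual chemistry knowledge.

```python
# The knowledge base (IF-THEN rules) is kept separate from the generic
# inference engine that applies them. Rule facts below are hypothetical.
RULES = [
    ({"peak_at_mass_44"}, "possible_co2_loss"),
    ({"possible_co2_loss", "acidic_proton"}, "candidate_carboxylic_acid"),
]

def forward_chain(facts, rules):
    """Generic inference engine: keep firing rules whose conditions hold."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"peak_at_mass_44", "acidic_proton"}, RULES))
```

A human expert can inspect or extend RULES without touching forward_chain, which is the structural property the paragraph above describes.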

Starting in the mid-1970s, the favorable findings of DENDRAL led to a steady quadrupling of Feigenbaum's Defense Advanced Research Projects Agency funding for artificial intelligence research.

DENDRAL grew at the same rate as the field of mass spectrometry.

After outgrowing Lederberg's expertise, the system started to absorb Djerassi's and others' information from his lab.

As a result, both chemists and computer scientists gained a better understanding of the underlying structure of organic chemistry and mass spectrometry, enabling the area to take a significant stride toward theory development.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; MYCIN; MOLGEN.


Further Reading:


Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Feigenbaum, Edward. October 12, 2000. Oral History. Minneapolis, MN: Charles Babbage Institute.

Lenoir, Timothy. 1998. “Shaping Biomedicine as an Information Science.” In Proceedings of the 1998 Conference on the History and Heritage of Science Information Systems, edited by Mary Ellen Bowden, Trudi Bellardo Hahn, and Robert V. Williams, 27–45. Medford, NJ: Information Today.



Artificial Intelligence - What Is Deep Learning?

 



Deep learning is a subset of methods, tools, and techniques in artificial intelligence or machine learning.

Learning in this case involves the ability to derive meaningful information from various layers or representations of any given data set in order to complete tasks without human instruction.

Deep refers to the depth of a learning algorithm, which usually involves many layers.

Machine learning networks involving many layers are often considered to be deep, while those with only a few layers are considered shallow.

The recent rise of deep learning over the 2010s is largely due to computer hardware advances that permit the use of computationally expensive algorithms and allow storage of immense datasets.

Deep learning has produced exciting results in the fields of computer vision, natural language, and speech recognition.

Notable examples of its application can be found in personal assistants such as Apple’s Siri or Amazon Alexa and search, video, and product recommendations.

Deep learning has been used to beat human champions at popular games such as Go and Chess.

Artificial neural networks are the most common form of deep learning.

Neural networks extract information through multiple stacked layers commonly known as hidden layers.





These layers contain artificial neurons, which are connected independently via weights to neurons in other layers.

Neural networks often involve dense or fully connected layers, meaning that each neuron in any given layer will connect to every neuron of its preceding layer.

This allows the network to learn increasingly intricate details or be trained by the data passing through each subsequent layer.
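
A minimal sketch of such a stack of dense layers, using the tf.keras API; the layer sizes and the 784-feature input are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Each Dense layer is fully connected: every neuron links, via a learned
# weight, to every neuron in the preceding layer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),           # e.g., a flattened 28x28 image (hypothetical)
    layers.Dense(128, activation="relu"),   # first hidden layer
    layers.Dense(64, activation="relu"),    # second hidden layer
    layers.Dense(10, activation="softmax"), # output layer: 10 class probabilities
])
model.summary()
```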

Part of what separates deep learning from other forms of machine learning is its ability to work with unstructured data.

There are no pre-arranged labels or characteristics in unstructured data.

Deep learning algorithms can learn to link their own features with unstructured inputs using several stacked layers.

This is done through a hierarchical approach in which each successive layer of a deep, multi-layered learning algorithm extracts more detailed information, enabling the network to break a very complicated problem down into a succession of smaller ones.

This enables the network to learn more complex information or to be taught by data provided via successive layers.

The following steps are used to train a network: First, small batches of labeled data are passed through the network.

The loss of the network is determined by comparing predictions to real labels.

Backpropagation is then used to compute the error and propagate corrections back to the weights.

Weights are tweaked gradually in order to keep losses to a minimum throughout each round of predictions.

The process is repeated until the network minimizes its loss and reaches a high level of prediction accuracy.
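
The loop below sketches those four steps explicitly with a single-layer logistic model in NumPy; the synthetic data, batch size, and learning rate are illustrative assumptions, and real deep networks delegate the gradient computation to a framework.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy labeled data: 200 samples, 4 features, binary labels (hypothetical).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(4), 0.0, 0.1

for epoch in range(50):
    for i in range(0, len(X), 32):                   # 1. small batches of labeled data
        xb, yb = X[i:i + 32], y[i:i + 32]
        p = 1.0 / (1.0 + np.exp(-(xb @ w + b)))      # forward pass: predictions
        loss = -np.mean(yb * np.log(p + 1e-9)
                        + (1 - yb) * np.log(1 - p + 1e-9))  # 2. compare to real labels
        grad_w = xb.T @ (p - yb) / len(xb)           # 3. backpropagate the error
        grad_b = np.mean(p - yb)
        w -= lr * grad_w                             # 4. nudge weights to reduce loss
        b -= lr * grad_b

print("final loss:", loss)
```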

Deep learning has an advantage over many machine learning approaches and shallow learning networks since it can self-optimize its layers.

Traditional machine learning or shallow learning methods, which have only a few layers at most, need human participation in preparing unstructured data for input, a process known as feature engineering.





This can be a lengthy procedure that takes too much time to be profitable, particularly if the dataset is enormous.

For these reasons, traditional machine learning algorithms can seem outdated by comparison.

Deep learning algorithms, on the other hand, come at a price.

Learning their own features requires a large quantity of data, which is not always available.

Furthermore, as data volumes get larger, so do the processing power and training time requirements, since the network will be dealing with a lot more data.

Depending on the number and kinds of layers utilized, training time will also rise.

Fortunately, cloud computing, which lets anyone rent powerful machines for a fee, makes it possible to run some of the most demanding deep learning networks.

Convolutional neural networks use hidden layers that are not part of the standard neural network design.

Deep learning of this kind is most often connected with computer vision projects, and it is now the most extensively used approach in that sector.

In order to obtain information from an image, basic convnet networks would typically utilize three kinds of layers: convolutional layers, pooling layers, and dense layers.

Convolutional layers gather information from low-level features such as edges and curves by sliding a window, or convolutional kernel, over the picture.

Subsequent stacked convolutional layers will repeat this procedure over the freshly generated layers of low-level features, looking for increasingly higher-level characteristics until the picture is fully understood.

Different hyperparameters, such as the size of the kernel or the stride with which it slides over the picture, may be adjusted to detect different sorts of features.

Pooling layers enable a network to learn higher-level elements of an image progressively by downsampling the picture along the way.

Without pooling layers placed between convolutional layers, the network may become too computationally costly, as each successive layer examines more detailed data.

In addition, the pooling layer reduces the size of an image while preserving important details.

These features become translation invariant, which means that a feature seen in one portion of an image may be identified in an entirely different region of the same picture.

The ability of a convolutional neural network to retain positional information is critical for image classification.

The ability of deep learning to automatically parse through unstructured data to find local features that it deems important while retaining positional information about how these features interact with one another demonstrates the power of convolutional neural networks.
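
A minimal convnet sketch along those lines in tf.keras; the input size, filter counts, and two-class output (for example, cat vs. dog) are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Convolutional layers slide small kernels over the image to pick up low-level
# features; pooling layers downsample so later layers see higher-level features.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                     # hypothetical RGB input size
    layers.Conv2D(32, kernel_size=3, activation="relu"),   # low-level edges and curves
    layers.MaxPooling2D(pool_size=2),                      # downsample
    layers.Conv2D(64, kernel_size=3, activation="relu"),   # higher-level features
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                   # dense layer
    layers.Dense(2, activation="softmax"),                 # e.g., cat vs. dog
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```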

Recurrent neural networks excel at sequence-based tasks like sentence completion and stock price prediction.

The essential idea is that, unlike earlier networks in which neurons simply pass information forward, neurons in recurrent neural networks feed information forward while also looping their output back to themselves at each time step.

Recurrent neural networks may therefore be regarded as having a rudimentary type of memory, since each time step includes recurrent information from all previous time steps.

This is often utilized in natural language processing projects because recurrent neural networks can handle text in a way that is more human-like.

Instead of seeing a phrase as a collection of isolated words, a recurrent neural network may begin to analyze the sentiment of a sentence or even generate the following sentence autonomously based on what has already been said.
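
A minimal recurrent-network sketch in tf.keras for a sentiment-style task; the vocabulary size, sequence length, and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# The recurrent layer carries its state from one time step to the next,
# giving the network a simple memory of the tokens it has already seen.
vocab_size, seq_len = 10_000, 80           # hypothetical vocabulary and sequence length
model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 32),      # map word indices to dense vectors
    layers.SimpleRNN(32),                  # loops its output back across time steps
    layers.Dense(1, activation="sigmoid"), # e.g., sentiment of the sentence
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```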

Deep learning can provide powerful techniques for evaluating unstructured data, in many respects akin to human abilities.

Unlike humans, deep learning networks never get tired.

Deep learning may substantially outperform standard machine learning techniques when given enough training data and powerful computers, particularly given its autonomous feature engineering capabilities.

Image classification, voice recognition, and self-driving vehicles are just a few of the fields that have benefited tremendously from deep learning research over the previous decade.

Many new exciting deep learning applications will emerge if current enthusiasm and computer hardware upgrades continue to grow.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Automatic Film Editing; Berger-Wolf, Tanya; Cheng, Lili; Clinical Decision Support Systems; Hassabis, Demis; Tambe, Milind.


Further Reading:


Chollet, François. 2018. Deep Learning with Python. Shelter Island, NY: Manning Publications.

Géron, Aurélien. 2019. Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Second edition. Sebastopol, CA: O’Reilly Media.

Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2017. Deep Learning. Cambridge, MA: MIT Press.

Artificial Intelligence - Who Is Hugo de Garis?

 


Hugo de Garis (1947–) is an expert in genetic algorithms, artificial intelligence, and topological quantum computing.

He is the creator of the concept of evolvable hardware, which uses evolutionary algorithms to produce customized electronics that can alter structural design and performance dynamically and autonomously in response to their surroundings.

De Garis is best known for his 2005 book The Artilect War, in which he describes what he thinks will be an unavoidable twenty-first-century worldwide war between mankind and ultraintelligent machines.

In the 1980s, de Garis became fascinated by genetic algorithms, neural networks, and the idea of artificial brains.

In artificial intelligence, genetic algorithms use software to model and apply Darwinian evolutionary ideas to search and optimization problems.
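
The toy sketch below shows the basic loop of selection, crossover, and mutation on bit strings; the fitness function ("one-max"), population size, and mutation rate are illustrative assumptions, not de Garis's evolutionary engineering code.

```python
import random

def genetic_search(fitness, length=20, pop_size=50, generations=100):
    """Tiny genetic algorithm: selection, crossover, and mutation over bit strings."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)           # "fittest" candidates first
        parents = pop[: pop_size // 2]                # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                 # crossover
            if random.random() < 0.1:                 # mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1s in the string ("one-max").
best = genetic_search(fitness=sum)
print(best, sum(best))
```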

The "fittest" candidate simulations of axons, dendrites, signals, and synapses in artificial neural networks were evolved using evolutionary algorithms developed by de Garis.

De Garis developed artificial neural systems that resembled those seen in organic brains.

In the 1990s, his work with a new type of programmable computer chips spawned the subject of computer science known as evolvable hardware.

The use of programmable circuits allowed neural networks to grow and evolve at high rates.

De Garis also started playing around with cellular automata, which are mathematical models of complex systems that emerge generatively from basic units and rules.

An early version of his cellular automata models of brain-like networks required the coding of around 11,000 fundamental rules.

About 60,000 such rules were encoded in a subsequent version.
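
For readers unfamiliar with cellular automata, here is a minimal one-dimensional example; the rule number, grid width, and text rendering are illustrative choices with no connection to de Garis's 11,000-rule models.

```python
def elementary_ca(rule=110, width=64, steps=32):
    """One-dimensional cellular automaton: each cell's next state depends only on
    itself and its two neighbors, according to a fixed 8-entry rule table."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1                      # single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = [table[(row[(i - 1) % width] << 2)
                     | (row[i] << 1)
                     | row[(i + 1) % width]] for i in range(width)]

elementary_ca()
```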

De Garis called his neural networks-on-a-chip a Cellular Automata Machine in the 2000s.

De Garis began to hypothesize that the era of "Brain Building on the Cheap" had arrived as the price of chips dropped (de Garis 2005, 45).

He began referring to himself as the "Father of the Artificial Brain." He claims that in the coming decades, whole artificial brains with billions of neurons will be built using information acquired from molecular-scale robot probes of human brain tissue and from new pathbreaking brain-imaging tools.

Topological quantum computing is another enabling technology that de Garis thinks will accelerate the creation of artificial brains.

He claims that once the physical boundaries of standard silicon chip manufacturing are approached, quantum mechanical phenomena must be harnessed.

Inventions in reversible heatless computing will also be significant in dissipating the harmful temperature effects of tightly packed circuits.

De Garis also supports the development of artificial embryology, often known as "embryofacture," which involves the use of evolutionary engineering and self-assembly methods to mimic the development of fully aware beings from single fertilized eggs.

According to de Garis, because of rapid breakthroughs in artificial intelligence technology, a conflict over our last invention will be unavoidable before the end of the twenty-first century.

He thinks the conflict will end with a catastrophic human extinction event he refers to as "gigadeath." De Garis speculates in his book The Artilect War that the continued Moore's Law doubling of transistors packed on computer chips, accompanied by the development of new technologies such as femtotechnology (the achievement of femtometer-scale structuring of matter), quantum computing, and neuroengineering, will almost certainly lead to gigadeath.

De Garis felt compelled to create The Artilect War as a cautionary tale and as a self-admitted architect of the impending calamity.

The Cosmists and the Terrans are two antagonistic worldwide political parties that De Garis uses to frame his discussion of an impending Artilect War.

The Cosmists will be apprehensive of the immense power of future superintelligent machines, but they will regard their labor in creating them with such veneration that they will experience a near-messianic enthusiasm in inventing and unleashing them into the world.

Regardless of the hazards to mankind, the Cosmists will strongly encourage the development and nurturing of ever-more sophisticated and powerful artificial minds.

The Terrans, on the other hand, will fight against the creation of artificial minds once they realize they represent a danger to human civilization.

They will feel compelled to fight these artificial intelligences because they constitute an existential danger to humanity.

De Garis dismisses a Cyborgian compromise in which humans and their technological creations blend.

He thinks that robots will grow so powerful and intelligent that only a small percentage of humanity would survive the confrontation.

China and the United States, geopolitical adversaries, will be forced to exploit these technology to develop more complex and autonomous economies, defense systems, and military robots.

Artificial intelligence's dominance in the world will be welcomed by the Cosmists, who will come to see them as near-gods deserving of worship.

The Terrans, on the other hand, will fight the transfer of global economic, social, and military dominance to our machine overlords.

They will see the new situation as a terrible tragedy that has befallen humanity.

His case for a future battle over superintelligent robots has sparked a lot of discussion and controversy among scientific and engineering specialists, as well as a lot of criticism in popular science journals.

In his 2005 book, de Garis implicates himself as a cause of the approaching conflict and as a hidden Cosmist, prompting some opponents to question his intentions.

De Garis has answered that he feels compelled to issue a warning now because he thinks there will be enough time for the public to understand the full magnitude of the danger and react when they begin to discover substantial intelligence hidden in household equipment.

Should his warning be taken seriously, de Garis presents a variety of possible outcomes.

First, he suggests that the Terrans may be able to defeat Cosmist thinking before a superintelligence takes control, though this is unlikely.

De Garis suggests a second scenario in which artilects quit the earth as irrelevant, leaving human civilisation more or less intact.

In a third possibility, the Cosmists grow so terrified of their own innovations that they abandon them.

Again, de Garis believes this is improbable.

In a fourth possibility, he imagines that all Terrans would transform into Cyborgs.

In a fifth scenario, the Terrans will aggressively seek down and murder the Cosmists, maybe even in outer space.

The Cosmists will leave Earth, construct artilects, and ultimately vanish from the solar system to conquer the cosmos in a sixth scenario.

In a seventh possibility, the Cosmists will flee to space and construct artilects that will fight each other until none remain.

In the eighth scenario, the artilects will go to space and be destroyed by an alien super-artilect.

De Garis has been criticized for believing that The Terminator's nightmarish vision will become a reality, rather than contemplating that superintelligent computers might just as well bring world peace.

De Garis answered that there is no way to ensure that artificial brains operate ethically (humanely).

He also claims that it is difficult to foretell whether or not a superintelligence would be able to bypass an implanted death switch or reprogram itself to disobey orders aimed at instilling human respect.

Hugo de Garis was born in 1947 in Sydney, Australia.

In 1970, he graduated from Melbourne University with a bachelor's degree in Applied Mathematics and Theoretical Physics.

He joined the global electronics corporation Philips as a software and hardware architect after teaching undergraduate mathematics at Cambridge University for four years.

He worked at locations in the Netherlands and Belgium.

In 1992, De Garis received a doctorate in Artificial Life and Artificial Intelligence from the Université Libre de Bruxelles in Belgium.

"Genetic Programming: GenNets, Artificial Nervous Systems, Artificial Embryos," was the title of his thesis.

De Garis directed the Center for Data Analysis and Stochastic Processes at the Artificial Intelligence and Artificial Life Research Unit at Brussels as a graduate student, where he explored evolutionary engineering, which uses genetic algorithms to develop complex systems.

He also worked as a senior research associate at George Mason University's Artificial Intelligence Center in Northern Virginia, where he worked with machine learning pioneer Ryszard Michalski.

De Garis did a postdoctoral fellowship at Tsukuba's Electrotechnical Lab.

He directed the Brain Builder Group at the Advanced Telecommunications Research Institute International in Kyoto, Japan, for the following eight years, while they attempted a moon-shot quest to develop a billion-neuron artificial brain.

De Garis returned to Brussels, Belgium, in 2000 to oversee Star Lab's Brain Builder Group, which was working on a rival artificial brain project.

When the dot-com bubble burst in 2001, De Garis' lab went bankrupt while working on a life-size robot cat.

De Garis then moved on to Utah State University as an Associate Professor of Computer Science, where he stayed until 2006.

De Garis was the first to teach advanced research courses on "brain building" and "quantum computing" at Utah State.

He joined Wuhan University's International School of Software in China as Professor of Computer Science and Mathematical Physics in 2006, where he also served as the leader of the Artificial Intelligence group.

De Garis kept working on artificial brains, but he also started looking into topological quantum computing.

De Garis joined the advisory board of Novamente, a commercial business that aims to develop artificial general intelligence, in the same year.

Two years later, Chinese authorities gave his Wuhan University Brain Builder Group a significant funding to begin building an artificial brain.

The China-Brain Project was the name given to the initiative.

De Garis relocated to Xiamen University in China in 2008, where he ran the Artificial Brain Lab in the School of Information Science and Technology's Artificial Intelligence Institute until his retirement in 2010.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Superintelligence; Technological Singularity; The Terminator.


Further Reading:


de Garis, Hugo. 1989. “What If AI Succeeds? The Rise of the Twenty-First Century Artilect.” AI Magazine 10, no. 2 (Summer): 17–22.

de Garis, Hugo. 1990. “Genetic Programming: Modular Evolution for Darwin Machines.” In Proceedings of the International Joint Conference on Neural Networks, 194–97. Washington, DC: Lawrence Erlbaum.

de Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications.

de Garis, Hugo. 2007. “Artificial Brains.” In Artificial General Intelligence: Cognitive Technologies, edited by Ben Goertzel and Cassio Pennachin, 159–74. Berlin: Springer.

Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76, no. 1 (March): 138–66.

Spears, William M., Kenneth A. De Jong, Thomas Bäck, David B. Fogel, and Hugo de Garis. 1993. “An Overview of Evolutionary Computation.” In Machine Learning: ECML-93, Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 667, 442–59. Berlin: Springer.


Artificial Intelligence - What Is The Deep Blue Computer?





Since the 1950s, artificial intelligence has been used to play chess.

Chess has been studied for a variety of reasons.

First, since there are a limited number of pieces that may occupy distinct spots on the board, the game is simple to represent in computers.

Second, the game is quite challenging to play.

There are a tremendous number of alternative states (piece configurations), and exceptional chess players evaluate both their own and their opponents' actions, which means they must predict what could happen many turns in the future.

Finally, chess is a competitive sport.

When a human competes against a computer, they are comparing intellect.

Deep Blue, the first computer to beat a reigning chess world champion, demonstrated that machine intelligence was catching up to humans in 1997.





Deep Blue's development began in 1985.

Feng-Hsiung Hsu, Thomas Anantharaman, and Murray Campbell created ChipTest, a chess-playing computer, while at Carnegie Mellon University.

The computer used brute force, generating and comparing move sequences using the alpha-beta search technique in order to determine the best one.

The generated positions would be scored by an evaluation function, enabling different positions to be compared.

Furthermore, the algorithm was adversarial, anticipating the opponent's movements in order to discover a means to defeat them.
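
A generic sketch of minimax search with alpha-beta pruning, the technique described above; the moves, apply_move, and evaluate callables are placeholders a caller would supply, and this is not Deep Blue's actual implementation.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Explore move sequences, score leaf positions with an evaluation function,
    and prune branches that cannot affect the final choice."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)                  # score the position
    if maximizing:
        best = -math.inf
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:                   # opponent would never allow this line
                break
        return best
    else:
        best = math.inf
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Tiny demo on an abstract game tree (not chess): states are dicts of child states,
# leaves are numeric scores from the maximizing player's point of view.
tree = {"a": {"a1": 3, "a2": 5}, "b": {"b1": 2, "b2": 9}}
moves = lambda s: list(s) if isinstance(s, dict) else []
apply_move = lambda s, m: s[m]
evaluate = lambda s: s if isinstance(s, (int, float)) else 0
print(alphabeta(tree, 2, -math.inf, math.inf, True, moves, apply_move, evaluate))  # 3
```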

If a computer has enough time and memory to execute the calculations, it can theoretically produce and evaluate an unlimited number of moves.

When employed in tournament play, however, the machine is restricted in both directions.

ChipTest was able to generate and assess 50,000 moves per second thanks to the use of a single special-purpose chip.

The search process was enhanced in 1988 to add singular extensions, which can rapidly identify a move that is superior to all other options.

ChipTest could construct bigger sequences and see farther ahead in the game by swiftly deciding superior actions, testing human players' foresight.

Mike Browne and Andreas Nowatzyk joined the team as ChipTest developed into Deep Thought.

Deep Thought was able to process about 700,000 chess moves per second thanks to two upgraded move-generator chips.

Deep Thought defeated Bent Larsen in 1988, becoming the first computer to defeat a chess grandmaster.

After IBM recruited the majority of the development team, work on Deep Thought continued.

The team then set its sights on defeating the world's finest chess player.





Garry Kasparov was the finest chess player in the world at the time, as well as one of the best in his generation.

Kasparov, who was born in Baku, Azerbaijan, in 1963, won the Soviet Junior Championship when he was twelve years old.

He was the youngest player to qualify for the Soviet Chess Championship at the age of fifteen.

He won the under-twenty world championship when he was seventeen years old.

Kasparov was also the world's youngest chess champion, having won the championship at the age of twenty-two in 1985.

He held the championship until 1993, when he was forced to relinquish it after quitting the International Chess Federation.

He promptly won the Classical World Championship, a title he held from 1993 to 2000.

Kasparov was the best chess player in the world for the majority of 1986 to 2005 (when he retired).

Deep Thought faced off against Kasparov in a two-game match in 1989.

Kasparov easily overcame Deep Thought by winning both games.

Deep Thought evolved into Deep Blue, which only appeared in two bouts, both of which were versus Kasparov.

Going into the matches, Kasparov was at a disadvantage in one important respect: preparation.

He would scout his opponents before matches, as do many chess players, by watching them play or reading records of tournament matches to obtain insight into their play style and methods.

Deep Blue, on the other hand, had no prior match experience, having played only private games against its developers before facing Kasparov.

As a result, Kasparov was unable to scout Deep Blue.

The developers, on the other hand, had access to Kasparov's match history, allowing them to tailor Deep Blue to his playing style.

Despite this, Kasparov remained confident, claiming that no machine would ever be able to defeat him.

Beginning on February 10, 1996, Deep Blue and Kasparov played their first six-game match in Philadelphia.

Deep Blue was the first machine to defeat a reigning world champion in a single game, winning the opening game.

Kasparov went on to win the match with three victories and two draws.

The contest drew international notice, and a rematch was planned.

After a series of improvements to the machine, Deep Blue and Kasparov faced off in another six-game contest, concluding on May 11, 1997, at the Equitable Center in New York City.

The match was played before a live audience and was broadcast.

At this point, Deep Blue was composed of 400 special-purpose chips capable of searching through 200,000,000 chess moves per second.

Kasparov won the first game, while Deep Blue won the second.

The following three games were draws.

The final game would determine the match.

In this final game, Deep Blue capitalized on a mistake by Kasparov, causing the champion to concede after nineteen moves.

Deep Blue became the first machine ever to defeat a reigning world champion in a match.

Kasparov believed that a human had interfered with the match, providing Deep Blue with winning moves.

The claim was based on a move made in the second game, where Deep Blue made a sacrifice that (to many) hinted at a different strategy than the machine had used in prior games.

The move made a significant impact on Kasparov, upsetting him for the remainder of the match and affecting his play.

Two factors may have combined to generate the move.

First, Deep Blue underwent modifications between the first and second game to correct strategic flaws, thereby influencing its strategy.

Second, designer Murray Campbell mentioned in an interview that if the machine could not decide which move to make, it would select one at random; thus there was a chance that surprising moves would be made.

Kasparov requested a rematch and was denied.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Demis Hassabis.



Further Reading:


Campbell, Murray, A. Joseph Hoane Jr., and Feng-Hsiung Hsu. 2002. “Deep Blue.” Artificial Intelligence 134, no. 1–2 (January): 57–83.

Hsu, Feng-Hsiung. 2004. Behind Deep Blue: Building the Computer That Defeated the World Chess Champion. Princeton, NJ: Princeton University Press.

Kasparov, Garry. 2018. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. London: John Murray.

Levy, Steven. 2017. “What Deep Blue Tells Us about AI in 2017.” Wired, May 23, 2017. https://www.wired.com/2017/05/what-deep-blue-tells-us-about-ai-in-2017/.



State Of An Emerging Quantum Computing Technology Ecosystem And Areas Of Business Applications.






    Quantum Computing Hardware.


    The ecosystem's hardware is a major barrier. The problem is both technical and structural in nature. 


    • The first issue is growing the number of qubits in a quantum computer while maintaining a high degree of qubit quality. 
    • Hardware has a high barrier to entry because it requires a rare mix of cash, experimental and theoretical quantum physics competence, and deep knowledge—particularly domain knowledge of the necessary implementation possibilities. 

    Several quantum-computing hardware platforms are presently in the works. 



    The realization of completely error-corrected, fault-tolerant quantum computing will be the most significant milestone, since a quantum computer cannot give precise, mathematically accurate outputs without it. 



    • Experts disagree over whether quantum computers can provide substantial commercial value before they are fully fault tolerant. 
    • Many argue, however, that a lack of fault tolerance does not render quantum-computing systems unworkable. 



    When will fault tolerance arrive, that is, when will viable fault-tolerant quantum computing systems be produced? 


    Most hardware companies are cautious about publishing their development plans, although a handful have done so openly. 

    By 2030, five manufacturers have said that they will have fault-tolerant quantum computing hardware. 

    If this timeframe holds true, the industry will most likely have established a distinct quantum advantage for many applications by then. 




    Quantum Computing Software.


    The number of software-focused startups is growing at a higher rate than any other part of the quantum-computing value chain. 


    • Sector players in the software industry today provide bespoke services and want to provide turnkey services as the industry matures. 
    • Organizations will be able to update their software tools and ultimately adopt completely quantum tools as quantum-computing software develops. 
    • Quantum computing, in the meantime, necessitates a new programming paradigm, as well as a new software stack (a minimal circuit sketch follows this list). 
    • The bigger industry players often distribute their software-development kits for free in order to foster developer communities around their goods. 
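
    As a taste of that programming model, here is a minimal circuit sketch using the freely available open-source Qiskit SDK; it assumes the qiskit and qiskit-aer packages are installed, and exact import paths can vary between SDK versions.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator   # local simulator backend (separate package)

# Build a two-qubit Bell-state circuit: gates, not classical statements,
# are the basic unit of the program.
qc = QuantumCircuit(2, 2)
qc.h(0)                # put qubit 0 into superposition
qc.cx(0, 1)            # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)          # roughly half '00' and half '11'
```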



    Quantum Computing Cloud-Based Services. 


    In the end, cloud-based quantum-computing services may become the most important aspect of the ecosystem, and those who manage them may reap enormous riches. 


    • Most cloud computing service providers now give access to quantum computers on their platforms, allowing prospective customers to try out the technology. 
    • Due to the impossibility of personal or mobile quantum computing this decade, early users may have to rely on the cloud to get a taste of the technology before the wider ecosystem grows. 



    Ecosystem of Quantum Computing.




    The foundations for a quantum-computing business have started to take shape. 

    According to our analysis, the value at stake for quantum-computing businesses is close to $80 billion (not to be confused with the value that quantum-computing use cases could generate). 



    Private And Public Funding For Quantum Computing




    Because quantum computing is still a relatively new topic, the bulk of funding for fundamental research is currently provided by the government. 

    Private financing, on the other hand, is fast expanding. 


    Investments in quantum computing start-ups have topped $1.7 billion in 2021 alone, more than double the amount raised in 2020. 

    • As quantum-computing commercialization gathers steam, I anticipate that private financing will increase dramatically. 
    • If leaders prepare now, a blossoming quantum-computing ecosystem and developing commercial use cases promise to produce enormous value for sectors. 



    Quantum computing's fast advancements serve as potent reminders that the technology is soon approaching commercial viability. 


    • For example, a Japanese research institute recently revealed a breakthrough in entangling qubits (the fundamental unit of quantum information, the counterpart of bits in conventional computers) that might enhance error correction in quantum systems and pave the way for large-scale quantum computers. 
    • In addition, an Australian business has created software that has been demonstrated to boost the performance of any quantum-computing hardware in trials. 
    • Investment funds are flowing in, and quantum-computing start-ups are proliferating as advancements accelerate. 
    • Quantum computing is still being developed by major technological firms, with Alibaba, Amazon, IBM, Google, and Microsoft having already introduced commercial quantum-computing cloud services. 


    Of course, all of this effort does not always equate to commercial success. 



    While quantum computing has the potential to help organizations tackle challenges that are beyond the reach and speed of traditional high-performance computers, application cases are still mostly experimental and conceptual. 


    • Indeed, academics are still disputing the field's most fundamental concerns (for more on these unresolved questions, see the sidebar "Quantum Computing Debates"). 
    • Nonetheless, the behavior shows that CIOs and other executives who have been keeping an eye on quantum-computing developments may no longer be considered spectators. 
    • Leaders should begin to plan their quantum-computing strategy, particularly in businesses like pharmaceuticals that might profit from commercial quantum computing early on. 
    • Change might arrive as early as 2030, according to some firms, who anticipate that practical quantum technologies will be available by then. 


    I conducted extensive research and interviewed experts from around the world about quantum hardware, software, and applications; the emerging quantum-computing ecosystem; possible business use cases; and the most important drivers of the quantum-computing market to help leaders get started planning. 


    ~ Jai Krishna Ponnappan


    Further Reading:



    You may also want to read more about Quantum Computing here.






    Quantum Computing's Future Outlook.

     



    Corporate executives from all sectors should plan for quantum computing's development. 


    I predict that quantum-computing use cases will have a hybrid operating model that is a mix of quantum and traditional high-performance computing until about 2030. 


    • Quantum-inspired algorithms, for example, may improve traditional high-performance computers. 
    • In order to develop quantum hardware and allow greater—and more complex—use cases beyond 2030, intensive continuous research by private enterprises and governmental organizations will be required. 
    • The route to commercialization of the technology will be determined by six important factors: finance, accessibility, standards, industry consortia, talent, and digital infrastructure. 


    Outsiders to the quantum-computing business should take five tangible measures to prepare for quantum computing's maturation: 


    • With an in-house team of quantum-computing specialists or by partnering with industry organizations and joining a quantum-computing consortium, keep up with industry advances and actively screen quantum-computing application cases. 
    • Recognize the most important risks, disruptions, and opportunities in their respective businesses. 
    • Consider partnering with or investing in quantum-computing players (mainly software) to make knowledge and expertise more accessible. 
    • Consider hiring quantum-computing experts in-house. Even a small team of up to three specialists may be sufficient to assist a company in exploring prospective use cases and screening potential quantum computing strategic investments. 
    • Build a digital infrastructure that can handle the fundamental operational needs of quantum computing, store important data in digital databases, and configure traditional computing processes to be quantum-ready whenever more powerful quantum hardware becomes available. 



    Every industry's leaders have a once-in-a-lifetime chance to keep on top of a generation-defining technology. 

    The reward might be strategic insights and increased company value.



    ~ Jai Krishna Ponnappan


    You may also want to read more about Quantum Computing here.





    What Is Artificial General Intelligence?

    Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...