
Artificial Intelligence - What Is The Turing Test?

 



 

The Turing Test is a method of determining whether a machine can exhibit intelligence that mimics, or is equivalent to, human intelligence.

The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype—what Turing calls "The Imitation Game," is the test's locus classicus.

In this game, a judge must determine which of two rooms is occupied by a computer and which by another human, based solely on anonymized replies to the natural-language questions the judge poses to each occupant.

While the human respondent must answer the judge's questions truthfully, the machine's goal is to fool the judge into believing it is human.





According to Turing, the machine may be considered intelligent to the degree that it succeeds at this task.
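The protocol can be pictured as a simple loop. The sketch below is a minimal, hypothetical rendering of the Imitation Game in Python; the four callables (ask_question, human_reply, machine_reply, guess_human) are invented stand-ins, not anything Turing specified.

```python
import random

def imitation_game(ask_question, human_reply, machine_reply, guess_human, n_rounds=5):
    """Minimal sketch of the Imitation Game protocol. The four callables are
    hypothetical stand-ins: ask_question(transcript) poses the judge's next
    query, the two *_reply functions answer it, and guess_human(transcript)
    returns the room label ('A' or 'B') the judge believes holds the person."""
    rooms = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                      # hide which room holds the machine
        rooms = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(n_rounds):
        question = ask_question(transcript)
        answers = {label: reply(question) for label, reply in rooms.items()}
        transcript.append((question, answers))     # the judge sees only labels and text

    machine_room = "A" if rooms["A"] is machine_reply else "B"
    return guess_human(transcript) == machine_room  # True: the machine fooled the judge
```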

The fundamental benefit of this essentially operationalist view of intelligence is that it sidesteps complex metaphysical and epistemological issues about the nature and inner experience of intelligent activity.

According to Turing's criterion, little more than empirical observation of outward behavior is required for ascribing intelligence to an object.

This is in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some form of internal self-awareness is a requirement for intelligence.

Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint—namely, how to be confident of the presence of other intelligent individuals if it is impossible to know their thoughts from a presumably required first-person perspective.



Nonetheless, the Turing Test, at least insofar as it considers intelligence in a strictly formalist manner, is bound up with the spirit of Cartesian epistemology.

The machine in the Imitation Game is a digital computer in the sense of Turing: a set of operations that may theoretically be implemented in any material.


A digital computer, in this account, consists of three parts: a store that holds information, an executive unit that carries out individual operations, and a control that regulates the executive unit.






However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

What matters is the formal set of rules that make up the computer's very nature.
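Turing's three-part picture maps cleanly onto a fetch-execute loop. The sketch below is a toy illustration in Python, not Turing's own formalism: the dictionary plays the store, the loop body the executive unit, and the program counter the control. The instruction set is invented.

```python
def run(program, store):
    """Toy digital computer: `store` holds data, the loop body (executive unit)
    carries out one instruction at a time, and the program counter `pc` acts as
    the control that decides which instruction comes next."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":                       # store a constant
            store[args[0]] = args[1]
        elif op == "ADD":                     # add one cell into another
            store[args[0]] += store[args[1]]
        elif op == "JUMP_IF_ZERO":            # conditional transfer of control
            if store[args[0]] == 0:
                pc = args[1]
                continue
        pc += 1
    return store

# Example: compute 2 + 3 in the store.
print(run([("SET", "x", 2), ("SET", "y", 3), ("ADD", "x", "y")], {}))  # {'x': 5, 'y': 3}
```

Whether such a loop runs on valves, relays, or silicon is irrelevant to the rules it implements, which is exactly Turing's point.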

Turing thus holds the core belief that intelligence is essentially independent of its material substrate.

If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.


Since Turing's work, AI research has been split into two camps: 


  1. those who embrace and 
  2. those who oppose this fundamental premise.


To describe the first camp, John Haugeland created the term "good old-fashioned AI," or GOFAI.

Its leading figures include Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.



Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited AI technique.

John Searle's "Minds, Brains, and Programs" (1980), in which Searle presents his now-famous Chinese Room thought experiment, is one of the most renowned criticisms of GOFAI in general, and of the assumptions of the Turing Test in particular.





In this thought experiment, a person with no prior understanding of Chinese is placed in a room and instructed to correlate the Chinese characters she receives with other Chinese characters she sends out, following a rulebook written in English.


Searle grants that, given adequate mastery of the rulebook, the person in the room could pass the Turing Test, fooling a native Chinese speaker into thinking she knows Chinese.

But because the person in the room is operating exactly as a digital computer does, Searle argues, Turing-type tests fail to capture the phenomenon of understanding, which he claims involves more than the functionally accurate pairing of inputs with outputs.
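The structure of the argument can be made concrete with a toy lookup table. The sketch below is an invented illustration, not Searle's own formulation: the replies are produced by pure symbol matching, with nothing in the program that could count as understanding Chinese.

```python
# The "rulebook" pairs incoming Chinese strings with outgoing Chinese strings.
# The entries are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "当然会。",        # "Do you speak Chinese?" -> "Of course."
}

def room_occupant(chinese_input: str) -> str:
    """Return a fluent-looking reply by pure symbol matching, with no grasp of
    what either string means."""
    return RULEBOOK.get(chinese_input, "请再说一遍。")   # "Please say that again."

print(room_occupant("你好吗？"))
```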

Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

Searle goes on to argue, in his own gloss on the Chinese Room thought experiment, that the physical makeup of human beings, particularly their sophisticated nervous systems and brain tissue, should not be dismissed as irrelevant to conceptions of intelligence.


This viewpoint has influenced connectionism, an altogether new approach to AI that aims to build computer intelligence by replicating the electrical circuitry of human brain tissue.


The effectiveness of this strategy has been hotly contested, although it appears to outperform GOFAI at developing generalized kinds of intelligence.

Turing's test may also be criticized not just from the standpoint of materialism but from that of a renewed formalism.





From this standpoint, one may argue that Turing tests are insufficient as a measure of intelligence because they attempt to reproduce human behavior, which is frequently far from intelligent.


According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.

This line of criticism has grown more acute as AI research has shifted its focus to the possibility of so-called superintelligence: forms of generalized machine intelligence that far outperform human intellect.


Should this next level of AI be attained, Turing tests would seem to be outdated.

Furthermore, simply discussing the idea of superintelligence would seem to require criteria of intelligence beyond strict Turing testing.

Turing may be defended against such accusations by pointing out that establishing a universal criterion of intelligence was never his goal.



Indeed, Turing's stated purpose is to replace the metaphysically problematic question "Can machines think?" with a more empirically tractable alternative:

"What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).


Thus the above-mentioned flaw of Turing's test, that it fails to establish a priori standards of rationality, is also part of its strength and appeal.

It also explains why, since it was first presented three-quarters of a century ago, the test has had such a lasting effect on AI research across all domains.



~ Jai Krishna Ponnappan






See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

Haugeland, John. 1997. “What Is Mind Design?” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

Searle, John R. 1997. “Minds, Brains, and Programs.” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

Turing, A. M. 1997. “Computing Machinery and Intelligence.” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



Artificial Intelligence - Who Is Daniel Dennett?

 



At Tufts University, Daniel Dennett (1942–) is the Austin B. Fletcher Professor of Philosophy and Co-Director of the Center for Cognitive Studies.

Philosophy of mind, free will, evolutionary biology, cognitive neuroscience, and artificial intelligence are his main areas of study and publishing.

He has written over a dozen books and hundreds of articles.

Much of this research has focused on the origins and nature of consciousness and how it may be described naturalistically.

Dennett is also an ardent atheist, and one of the New Atheism's "Four Horsemen." Richard Dawkins, Sam Harris, and Christopher Hitchens are the others.

Dennett's worldview is naturalistic and materialistic throughout.

He opposes Cartesian dualism, which holds that the mind and body are two distinct kinds of thing that somehow interact.

Instead, he contends that the brain is a form of computer that has developed through time due to natural selection.

Dennett also opposes the homunculus theory of the mind, which holds that the brain has a central controller or "little man" who performs all of the thinking and emotion.

Dennett instead argues for a viewpoint he calls the multiple drafts model.

According to his theory, which he lays out in his 1991 book Consciousness Explained, the brain is constantly sifting through, interpreting, and editing sensations and inputs, forming overlapping drafts of experience.

Dennett later used the metaphor of "fame in the brain" to describe how various aspects of ongoing neural processes are periodically emphasized at different times and under different circumstances.

Consciousness is a story made up of these varied interpretations of human events.

Dennett dismisses the assumption that these ideas coalesce or are organized in some central region of the brain, which he mockingly calls the "Cartesian theater." The brain's story is instead made up of a never-ending, decentralized flow of bottom-up awareness distributed across time and place.

Dennett denies the existence of qualia, which are subjective individual experiences such as how colors seem to the human eye or how food feels.

He does not deny that colors and tastes exist; rather, he claims that the sensation of color and taste does not exist as a separate thing in the human mind.

He claims that there is no difference between human and computer "sensation experiences." According to Dennett, just as some robots can discern between colors without people deciding that they have qualia, so can the human brain.

For Dennett, the color red is just the quality that brains sense and which is referred to as red in the English language.

It has no extra, indescribable quality.

This is a crucial consideration for artificial intelligence, because the ability to experience qualia is frequently seen as a barrier to the development of Strong AI (AI that is functionally equivalent to human intelligence) and as something that will invariably distinguish human from machine intelligence.

However, if qualia do not exist, as Dennett contends, they cannot constitute a stumbling block to the creation of machine intelligence comparable to that of humans.

Dennett compares our brains to termite colonies in another metaphor.

Termites do not join together and plot to form a mound, but their individual activities cause it to happen.

The mound is the consequence of natural selection producing uncomprehending expertise in cooperative mound-building rather than intellectual design by the termites.

To create a mound, termites don't need to comprehend what they're doing.

Likewise, comprehension is an emergent attribute of such abilities.

Brains, according to Dennett, are control centers that have evolved to respond swiftly and effectively to threats and opportunities in the environment.

As the demands of responding to the environment grow more complicated, understanding emerges as a tool for dealing with them.

On a sliding scale, comprehension is a question of degree.

Dennett, for example, considers bacteria's quasi-comprehension in response to diverse stimuli and computers' quasi-comprehension in response to coded instructions to be on the low end of the range.

On the other end of the spectrum, he placed Jane Austen's comprehension of human social processes and Albert Einstein's understanding of relativity.

However, they are just changes in degree, not in type.

Natural selection has shaped both extremes of the spectrum.

Comprehension is not a separate mental process arising from the brain's varied abilities.

Rather, understanding is a collection of these skills.

Consciousness is an illusion to the extent that we recognize it as an additional element of the mind in the shape of either qualia or cognition.

In general, Dennett advises mankind to avoid positing understanding when basic competence would suffice.

Humans, on the other hand, often adopt what Dennett calls an "intentional stance" toward other humans and, in some cases, animals.

When individuals perceive acts as the outcome of mind-directed thoughts, emotions, desires, or other mental states, they are adopting the intentional stance.

This is in contrast to what he calls the "physical stance" and the "design stance."

In the physical stance, something is seen as the outcome of purely physical forces or natural laws.

Gravity causes a stone to fall when it is dropped, not any conscious purpose to return to the ground.

In the design stance, an action is seen as the mindless outcome of a preprogrammed, or designed-in, purpose.

An alarm clock, for example, beeps at a certain time because it was built to do so, not because it chose to do so on its own.

In contrast to both the physical and design stances, the intentional stance considers behaviors and acts as though they are the consequence of the agent's deliberate decision.

It can be difficult to decide whether to apply the intentional or the design stance to computers.

A chess-playing computer has been created with the goal of winning.

However, its movements are often indistinguishable from those of a human chess player who wants or intends to win.

In fact, adopting the intentional stance toward the computer's behavior, rather than the design stance, improves a human's ability to interpret and respond to it.

Dennett claims that the intentional stance is the best strategy to adopt toward both humans and computers, since it works best at describing both human and computer behavior.

Furthermore, there is no need to differentiate them in any way.

Though the intentional stance treats behavior as agent-driven, it does not require taking a position on what is actually going on in the human's or machine's internal workings.

This stance provides a neutral starting point from which to investigate cognitive competence without presupposing a particular explanation of what is going on behind the scenes.

Dennett sees no reason why AI should be impossible in theory since human mental abilities have developed organically.

Furthermore, by abandoning the concept of qualia and adopting an intentional stance that relieves people of the burden of speculating about what is going on in the background of cognition, two major impediments to solving the hard problem of consciousness have been removed.

Dennett argues that since the human brain and computers are both machines, there is no good theoretical reason why humans should be capable of acquiring competence-driven understanding while AI should be intrinsically unable.

Consciousness in the traditional sense is illusory, hence it is not a need for Strong AI.

Dennett does not believe that Strong AI is theoretically impossible.

He feels that society's technical sophistication is still at least fifty years away from producing it.

Strong AI development, according to Dennett, is not desirable.

Humans should strive to build AI tools, but Dennett believes that attempting to make computer pals or colleagues would be a mistake.

Such robots, he claims, would lack human moral intuitions and understanding, and hence would not be able to integrate into human society.

Humans do not need robots to provide friendship since they have each other.

Robots, even AI-enhanced machines, should be seen as tools to be utilized by humans alone.


 


~ Jai Krishna Ponnappan




See also: 


Cognitive Computing; General and Narrow AI.


Further Reading:


Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1993. Consciousness Explained. London: Penguin.

Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA: MIT Press.

Dennett, Daniel C. 2008. Kinds of Minds: Toward an Understanding of Consciousness. New York: Basic Books.

Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton.

Dennett, Daniel C. 2019. “What Can We Do?” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, 41–53. London: Penguin Press.

Artificial Intelligence - Who Is Steve Omohundro?

 




In the field of artificial intelligence, Steve Omohundro (1959–) is a well-known scientist, author, and entrepreneur.

He is the founder of Self-Aware Systems, chief scientist of AIBrain, and an adviser to the Machine Intelligence Research Institute (MIRI).

Omohundro is well-known for his insightful, speculative studies on the societal ramifications of AI and the safety of smarter-than-human computers.

Omohundro believes that a fully predictive artificial intelligence science is required.

He argues that if goal-driven artificial general intelligences are not carefully designed, they are likely to engage in harmful activities, cause conflicts, or even lead to the extinction of humanity.

Indeed, Omohundro argues that AIs with inadequate programming might act psychopathically.

He claims that programmers often create flaky software and programs that "manipulate bits" without knowing why.

Omohundro wants AGIs to be able to monitor and comprehend their own operations, spot flaws, and rewrite themselves to improve performance.

This is what genuine machine learning looks like.

The risk is that AIs may evolve into something that humans will be unable to comprehend, make incomprehensible judgments, or have unexpected repercussions.

As a result, Omohundro contends, artificial intelligence must evolve into a discipline that is more predictive and anticipatory.

Omohundro also suggests in "The Nature of Self-Improving Artificial Intelligence," one of his widely available online papers, that a future self-aware system that will most likely access the internet will be influenced by the scientific papers it reads, which recursively justifies writing the paper in the first place.

AGI agents must be programmed with value sets that drive them to pick objectives that benefit mankind as they evolve.

Self-improving systems like the ones Omohundro is working on don't exist yet.

Inventive minds, according to Omohundro, have so far produced only inert systems (chairs and coffee mugs), reactive systems (mousetraps and thermostats), adaptive systems (advanced speech recognition systems and intelligent virtual assistants), and deliberative systems (the Deep Blue chess-playing computer).

Self-improving systems, as described by Omohundro, would have to actively think and make judgments in the face of uncertainty regarding the effects of self-modification.

The essential natures of self-improving AIs, according to Omohundro, may be understood as rational agents, a notion he draws from microeconomic theory.

Because humans are only imperfectly rational, the discipline of behavioral economics has exploded in popularity in recent decades.

AI agents, by contrast, will eventually establish consistent objectives and preferences ("utility functions") that sharpen their beliefs about their surroundings, a consequence of their self-improving cognitive architectures.

These beliefs will then assist them in forming new aims and preferences.

Omohundro draws on mathematician John von Neumann and economist Oskar Morgenstern's contributions to expected utility theory.

Completeness, transitivity, continuity, and independence are the axioms of rational behavior proposed by von Neumann and Morgenstern.
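To make the rational-agent framing concrete, here is a minimal sketch of an expected-utility maximizer; the action names, outcome probabilities, and utilities are invented for illustration and are not drawn from Omohundro's work.

```python
def expected_utility(action, beliefs, utility):
    """Expected utility of an action given P(outcome | action) and a utility function."""
    return sum(p * utility(outcome) for outcome, p in beliefs[action].items())

def choose(actions, beliefs, utility):
    """A von Neumann-Morgenstern rational agent: pick the action with maximal expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Toy example with invented numbers: two actions, two possible outcomes.
beliefs = {
    "explore":  {"find_resources": 0.4, "waste_energy": 0.6},
    "conserve": {"find_resources": 0.1, "waste_energy": 0.9},
}
utility = {"find_resources": 10.0, "waste_energy": -1.0}.get
print(choose(list(beliefs), beliefs, utility))   # "explore": expected utility 3.4 vs. 0.1
```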

For artificial intelligences, Omohundro proposes four "fundamental drives": efficiency, self-preservation, resource acquisition, and creativity.

These motivations are expressed as "behaviors" by future AGIs with self-improving, rational agency.

Both physical and computational operations are included in the efficiency drive.

Artificial intelligences will strive to make effective use of limited resources such as space, mass, energy, processing time, and computer power.

The self-preservation drive will lead powerful artificial intelligences to avoid losing resources to other agents and to protect their ability to fulfill their goals.

A passively behaving artificial intelligence is unlikely to survive.

The acquisition drive is the process of locating new sources of resources, trading for them, cooperating with other agents, or even stealing what is required to reach the end objective.

The creative drive encompasses all of the innovative ways in which an AGI may boost expected utility in order to achieve its many objectives.

This motivation might include the development of innovative methods for obtaining and exploiting resources.

Signaling, according to Omohundro, is a singularly human source of creative energy, variation, and divergence.

Humans utilize signaling to express their intentions regarding other helpful tasks they are doing.

If A is more likely to be true when B is true than when B is false, then A signals B.

Employers, for example, are more likely to hire potential workers who are enrolled in a class that looks to offer benefits that the company desires, even if this is not the case.

The fact that the potential employee is enrolled in class indicates to the company that he or she is more likely to learn useful skills than the candidate who is not.
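In probabilistic terms, A signals B when P(A | B) > P(A | not-B). The toy numbers below (entirely invented) apply this to the hiring example via Bayes' rule.

```python
# B = "candidate will learn useful skills", A = "candidate is enrolled in the class".
p_enrolled_given_skilled   = 0.70   # P(A | B)
p_enrolled_given_unskilled = 0.20   # P(A | not B)
p_skilled                  = 0.30   # employer's prior P(B)

# A signals B exactly when the first probability exceeds the second.
print(p_enrolled_given_skilled > p_enrolled_given_unskilled)          # True

# Bayes' rule: how much does seeing the signal raise the employer's belief?
p_enrolled = (p_enrolled_given_skilled * p_skilled
              + p_enrolled_given_unskilled * (1 - p_skilled))
p_skilled_given_enrolled = p_enrolled_given_skilled * p_skilled / p_enrolled
print(round(p_skilled_given_enrolled, 2))                             # 0.6, up from the 0.3 prior
```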

Similarly, a billionaire does not need to gift another billionaire a billion dollars to indicate that they are among the super-wealthy.

A huge bag containing several million dollars could suffice.

Omohundro's notion of fundamental AI drives was included into Oxford philosopher Nick Bostrom's instrumental convergence thesis, which claims that a few instrumental values are sought in order to accomplish an ultimate objective, often referred to as a terminal value.

Self-preservation, goal-content integrity (retention of preferences over time), cognitive enhancement, technological perfection, and resource acquisition are among Bostrom's instrumental values (he prefers not to call them drives).

Future AIs might have a reward function or a terminal value of optimizing some utility function.

Omohundro wants designers to construct artificial general intelligence with kindness toward people as its ultimate objective.

Military conflicts and economic concerns, on the other hand, he believes, make the development of destructive artificial general intelligence more plausible.

Drones are increasingly being used by military forces to deliver explosives and conduct surveillance.

He also claims that future battles will almost certainly be informational in nature.

In a future where cyberwar is a possibility, a cyberwar infrastructure will be required.

Energy encryption, a unique wireless power transmission method that scrambles energy so that it stays safe and cannot be exploited by rogue devices, is one way to counter the issue.

Another area where information conflict is producing instability is the employment of artificial intelligence in fragile financial markets.

Digital cryptocurrencies and crowdsourcing marketplaces such as Mechanical Turk are ushering in a new era of autonomous capitalism, according to Omohundro, and we are not yet prepared to deal with the repercussions.

As president of the company Possibility Research, advocate of a new cryptocurrency called Pebble, and advisory board member of the Institute for Blockchain Studies, Omohundro has spoken about the need for complete digital provenance in economic and cultural recordkeeping to keep AI deception, fakery, and fraud from overtaking human society.

In order to build a verifiable "blockchain civilization based on truth," he suggests that digital provenance methods and sophisticated cryptography techniques monitor autonomous technology and better check the history and structure of any alterations being performed.

Possibility Research focuses on smart technologies that enhance computer programming, decision-making systems, simulations, contracts, robotics, and governance.

Omohundro has advocated for the creation of so-called Safe AI scaffolding solutions to counter dangers in recent years.

The objective is to create self-contained systems that already have temporary scaffolding or staging in place.

The scaffolding assists programmers who are assisting in the development of a new artificial general intelligence.

The virtual scaffolding may be removed after the AI has been completed and evaluated for stability.

The initial generation of restricted safe systems created in this manner might be used to develop and test less constrained AI agents in the future.

Utility functions aligned with agreed-upon human philosophical imperatives, human values, and democratic principles would be included in advanced scaffolding.

Self-improving AIs may eventually have the Universal Declaration of Human Rights or a universal constitution inscribed into their fundamental fabric, guiding their growth, development, choices, and contributions to humanity.

Omohundro graduated from Stanford University with degrees in mathematics and physics, as well as a PhD in physics from the University of California, Berkeley.

In 1985, he co-created StarLisp, a high-level programming language for the Thinking Machines Corporation's Connection Machine, a massively parallel supercomputer then under construction.

He wrote Geometric Perturbation Theory in Physics (1986), a book on differential and symplectic geometry.

He was an associate professor of computer science at the University of Illinois in Urbana-Champaign from 1986 to 1988.

He cofounded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard.

He also oversaw the university's Vision and Learning Group.

He created the 3D graphics system for Mathematica, a symbolic mathematical computation application.

In 1990, he led an international team at the University of California, Berkeley's International Computer Science Institute (ICSI) to develop Sather, an object-oriented, functional programming language.

Automated lip-reading, machine vision, machine learning algorithms, and other digital technologies have all benefited from his work.



~ Jai Krishna Ponnappan






See also: 


General and Narrow AI; Superintelligence.



References & Further Reading:



Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2: 71–85.

Omohundro, Stephen M. 2008a. “The Basic AI Drives.” In Proceedings of the 2008 Conference on Artificial General Intelligence, 483–92. Amsterdam: IOS Press.

Omohundro, Stephen M. 2008b. “The Nature of Self-Improving Artificial Intelligence.” https://pdfs.semanticscholar.org/4618/cbdfd7dada7f61b706e4397d4e5952b5c9a0.pdf.

Omohundro, Stephen M. 2012. “The Future of Computing: Meaning and Values.” https://selfawaresystems.com/2012/01/29/the-future-of-computing-meaning-and-values.

Omohundro, Stephen M. 2013. “Rational Artificial Intelligence for the Greater Good.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart, 161–79. Berlin: Springer.

Omohundro, Stephen M. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental and Theoretical Artificial Intelligence 26, no. 3: 303–15.

Shulman, Carl. 2010. Omohundro’s ‘Basic AI Drives’ and Catastrophic Risks. Berkeley, CA: Machine Intelligence Research Institute.




Artificial Intelligence - Quantum AI

 



Artificial intelligence and quantum computing, according to Johannes Otterbach, a physicist at Rigetti Computing in Berkeley, California, are natural allies, since both technologies are essentially statistical.

Airbus, Atos, Baidu, b|eit, Cambridge Quantum Computing, Elyah, Hewlett-Packard (HP), IBM, Microsoft Research QuArC, QC Ware, Quantum Benchmark Inc., R QUANTECH, Rahko, and Zapata Computing are among the organizations that have moved into this area.

Bits are used to encode and modify data in traditional general-purpose computer systems.

Bits may only be in one of two states: 0 or 1.

Quantum computers use the actions of subatomic particles like electrons and photons to process data.

Superposition—particles residing in all conceivable states at the same time—and entanglement—the pairing and connection of particles such that they cannot be characterized independently of the state of others, even at long distances—are two of the most essential phenomena used by quantum computers.

Such entanglement was famously dubbed "spooky action at a distance" by Albert Einstein.

Quantum computers use quantum registers, which are made up of a number of quantum bits or qubits, to store data.

While a clear explanation is elusive, qubits might be understood to reside in a weighted combination of two states at the same time to yield many states.

Each qubit added to a system doubles its processing capability.

More than one quadrillion classical bits might be processed by a quantum computer with just fifty entangled qubits.

Sixty qubits could, in principle, hold all of the data humanity produces in a single year.

Three hundred qubits might compactly encapsulate a quantity of data comparable to the observable universe's classical information content.
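The doubling shows up directly in the size of the state vector a classical machine would need to track; a quick back-of-the-envelope check in Python:

```python
# Number of complex amplitudes needed to describe n qubits classically.
for n in (1, 2, 10, 50, 60, 300):
    print(n, "qubits ->", 2 ** n, "amplitudes")

# 2**50 is roughly 1.1 quadrillion, matching the figure quoted above, and
# 2**300 is about 2e90, the scale behind the observable-universe comparison.
```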

Quantum computers can operate in parallel on large quantities of distinct computations, collections of data, or operations.

True autonomous transportation would be possible if a working artificially intelligent quantum computer could monitor and manage all of a city's traffic in real time.

By comparing all of the photographs to the reference photo at the same time, quantum artificial intelligence may rapidly match a single face to a library of billions of photos.

Our understanding of processing, programming, and complexity has radically changed with the development of quantum computing.

A series of quantum state transformations is followed by a measurement in most quantum algorithms.

The notion of quantum computing goes back to the 1980s, when physicists such as Yuri Manin, Richard Feynman, and David Deutsch realized that by using so-called quantum gates, a concept taken from linear algebra, researchers would be able to manipulate information.

They hypothesized that, by combining many kinds of quantum gates into circuits, qubits could be steered through superpositions and entanglements into quantum algorithms whose outcomes could then be measured.
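A small state-vector calculation shows the idea: gates are unitary matrices, and composing a Hadamard with a controlled-NOT turns two independent qubits into an entangled pair. This is a textbook construction sketched with numpy, not code from any of the researchers named above.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # controlled-NOT: creates entanglement
I = np.eye(2)

state = np.array([1.0, 0, 0, 0])                # two qubits, both starting in |0>
state = np.kron(H, I) @ state                   # apply Hadamard to the first qubit
state = CNOT @ state                            # entangle the pair
print(state)                                    # [0.707 0 0 0.707]: the Bell state (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)                       # measurement probabilities: half 00, half 11
```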

Some quantum mechanical processes could not be efficiently replicated on conventional computers, which presented a problem to these early researchers.

They thought that quantum technology (perhaps included in a universal quantum Turing computer) would enable quantum simulations.

In 1993, Umesh Vazirani and Ethan Bernstein of the University of California, Berkeley, hypothesized that quantum computers would one day be able to solve certain problems faster than traditional digital computers, in violation of the extended Church-Turing thesis.

In computational complexity theory, Vazirani and Bernstein defined a special class of bounded-error quantum polynomial time (BQP) decision problems.

These are decision problems that a quantum computer can solve in polynomial time with an error probability of at most one-third on every instance.

The frequently proposed threshold for Quantum Supremacy is fifty qubits, the point at which quantum computers would be able to tackle problems that would be impossible to solve on conventional machines.

Although no one believes quantum computing will be capable of solving all NP-hard problems, quantum AI researchers think the machines will be capable of solving certain NP-intermediate problems.

Creating quantum machine algorithms that do valuable work has proved to be a tough task.

In 1994, AT&T Laboratories' Peter Shor devised a polynomial time quantum algorithm that beat conventional methods in factoring big numbers, possibly allowing for the speedy breakage of current kinds of public key encryption.

Since then, intelligence services have been stockpiling encrypted material passed across networks in the hopes that quantum computers would be able to decipher it.

Another technique devised by Shor's AT&T Labs colleague Lov Grover allows for quick searches of unsorted datasets.

Quantum neural networks are similar to conventional neural networks in that they label input, identify patterns, and learn from experience using layers of millions or billions of linked neurons.

Large matrices and vectors produced by neural networks can be processed exponentially quicker by quantum computers than by classical computers.

Aram Harrow of MIT and Avinatan Hassidim provided the critical algorithmic insight for rapid classification and quantum matrix inversion in 2008.

Michael Hartmann, a visiting researcher at Google AI Quantum and Associate Professor of Photonics and Quantum Sciences at Heriot-Watt University, is working on a quantum neural network computer.

Hartmann's Neuromorphic Quantum Computing (Quromorphic) Project employs superconducting electrical circuits as hardware.

Hartmann's artificial neural network computers are inspired by the brain's neuronal organization.

Such networks are usually implemented in software, with each artificial neuron being programmed and connected to a larger network of neurons.

Hardware that incorporates artificial neural networks is also possible.

Hartmann estimates that a workable quantum computing artificial intelligence might take 10 years to develop.

D-Wave, situated in Vancouver, British Columbia, was the first business to mass-produce quantum computers in commercial numbers.

In 2011, D-Wave started producing annealing quantum computers.

Annealing processors are special-purpose products used for a restricted set of problems with multiple local minima in a discrete search space, such as combinatorial optimization issues.
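The problem class annealers target can be illustrated classically. The sketch below runs plain simulated annealing over a tiny QUBO (quadratic unconstrained binary optimization) instance; it is an invented toy, not D-Wave's hardware or API.

```python
import math
import random

def qubo_energy(x, Q):
    """Energy of a binary assignment x under a QUBO matrix Q: sum_ij Q[i][j]*x_i*x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(Q, n_steps=5000, t_start=2.0, t_end=0.01):
    """Classical simulated annealing over the same kind of discrete energy
    landscape a quantum annealer explores."""
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(x, Q)
    for step in range(n_steps):
        t = t_start * (t_end / t_start) ** (step / n_steps)   # geometric cooling schedule
        i = random.randrange(n)
        x[i] ^= 1                                              # propose flipping one bit
        new_energy = qubo_energy(x, Q)
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / t):
            energy = new_energy                                # accept the move
        else:
            x[i] ^= 1                                          # revert it
    return x, energy

# Toy instance whose minimum is x = [1, 1, 0] with energy -4.
Q = [[-1, -1, 0],
     [-1, -1, 0],
     [ 0,  0, 2]]
print(simulated_annealing(Q))
```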

The D-Wave computer is not polynomially equivalent to a universal quantum computer and hence cannot run Shor's algorithm.

Lockheed Martin, the University of Southern California, Google, NASA, and the Los Alamos National Laboratory are among the company's clients.

Universal quantum computers are being pursued by Google, Intel, Rigetti, and IBM.

Each has developed quantum processors with tens of qubits.

In 2018, the Google AI Quantum lab, led by Hartmut Neven, announced the introduction of their newest 72-qubit Bristlecone processor.

Intel also debuted its 49-qubit Tangle Lake processor that same year.

The Aspen-1 processor from Rigetti Computing has sixteen qubits.

The IBM Q Experience quantum computing facility is situated at the Thomas J. Watson Research Center in Yorktown Heights, New York.

To create quantum commercial applications, IBM is collaborating with a number of corporations, including Honda, JPMorgan Chase, and Samsung.

The public is also welcome to submit experiments to be processed on the company's quantum computers.

Quantum AI research is also highly funded by government organizations and universities.

The NASA Quantum Artificial Intelligence Laboratory (QuAIL) has a D-Wave 2000Q quantum computer with 2,048 qubits that it wants to use to tackle NP-hard problems in data processing, anomaly detection and decision-making, air traffic management, and mission planning and coordination.

The NASA team has chosen to concentrate on the most difficult machine learning challenges, such as generative models in unsupervised learning, in order to illustrate the technology's full potential.

In order to maximize the value of D-Wave resources and skills, NASA researchers have opted to focus on hybrid quantum-classical techniques.

Many laboratories across the globe are investigating completely quantum machine learning.

Quantum Learning Theory proposes that quantum algorithms might be utilized to address machine learning problems, hence improving traditional machine learning techniques.

Classical binary data sets are supplied into a quantum computer for processing in quantum learning theory.

The NIST Joint Quantum Institute and the University of Maryland's Joint Center for Quantum Information and Computer Science are also bridging the gap between machine learning and quantum computing.

Workshops bringing together professionals in mathematics, computer science, and physics to use artificial intelligence algorithms in quantum system control are hosted by the NIST-UMD.

Engineers are also encouraged to employ quantum computing to boost the performance of machine learning algorithms as part of the alliance.

The Quantum Algorithm Zoo, a collection of all known quantum algorithms, is likewise housed at NIST.

Scott Aaronson is the director of the University of Texas at Austin's Quantum Information Center.

The department of computer science, the department of electrical and computer engineering, the department of physics, and the Advanced Research Laboratories have collaborated to create the center.

The University of Toronto has a quantum machine learning start-up incubator.

Peter Wittek is the head of the Quantum Machine Learning Program of the Creative Destruction Lab, which houses the QML incubator.

Materials discovery, optimization, and logistics, reinforcement and unsupervised machine learning, chemical engineering, genomics and drug discovery, systems design, finance, and security are all areas where the University of Toronto incubator is fostering innovation.

In December 2018, President Donald Trump signed the National Quantum Initiative Act into law.

The legislation establishes a partnership of the National Institute of Standards and Technology (NIST), the National Science Foundation (NSF), and the Department of Energy (DOE) for quantum information science research, commercial development, and education.

The statute anticipates the NSF and DOE establishing many competitively awarded research centers as a result of the endeavor.

Due to the difficulty of running quantum processing units (QPUs), which must be maintained in a vacuum at temperatures near absolute zero, no quantum computer has yet outperformed a state-of-the-art classical computer on a challenging task.

Because quantum computing is susceptible to external environmental impacts, such isolation is required.

Qubits are delicate; a typical quantum bit can only exhibit coherence for ninety microseconds before degrading and becoming unreliable.

In an isolated quantum processor with high thermal noise, communicating inputs and outputs and collecting measurements is a severe technical difficulty that has yet to be fully handled.

The findings are not totally dependable in a classical sense since the measurement is quantum and hence probabilistic.

Only one of the quantum parallel threads may be randomly accessed for results.

During the measuring procedure, all other threads are deleted.

It is believed that coupling quantum processors with error-correcting artificial intelligence algorithms could lower these computers' error rates.

Many machine intelligence applications, such as deep learning and probabilistic programming, rely on sampling from high-dimensional probability distributions.

Quantum sampling methods have the potential to make calculations on otherwise intractable issues quicker and more efficient.

Shor's method employs an artificial intelligence approach that alters the quantum state in such a manner that common properties of the output values, such as the periodicity of a function, can be measured.

Grover's search method manipulates the quantum state using an amplitude-amplification technique to increase the probability that the desired output will be read off.
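The amplitude-amplification idea can be simulated classically for small sizes. The sketch below is a toy numpy state-vector simulation of Grover's search, not an implementation on quantum hardware:

```python
import numpy as np

def grover_probabilities(n_qubits, marked_index):
    """Simulate Grover's search over N = 2**n_qubits items and return the
    measurement probabilities after the standard ~(pi/4)*sqrt(N) iterations."""
    N = 2 ** n_qubits
    n_iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition over all items
    for _ in range(n_iterations):
        state[marked_index] *= -1               # oracle: flip the marked item's phase
        state = 2 * state.mean() - state        # diffusion: reflect about the mean amplitude
    return np.abs(state) ** 2

probs = grover_probabilities(n_qubits=4, marked_index=11)
print(round(probs[11], 3))   # about 0.96: the marked item dominates after only 3 iterations
```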

Quantum computers would also be able to execute many AI algorithms at the same time.

Quantum computing simulations have recently been used by scientists to examine the beginnings of biological life.

Unai Alvarez-Rodriguez of the University of the Basque Country in Spain built so-called artificial quantum living forms using IBM's QX superconducting quantum computer.


~ Jai Krishna Ponnappan






See also: 


General and Narrow AI.


References & Further Reading:


Aaronson, Scott. 2013. Quantum Computing Since Democritus. Cambridge, UK: Cambridge University Press.

Biamonte, Jacob, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. 2018. “Quantum Machine Learning.” https://arxiv.org/pdf/1611.09347.pdf.

Perdomo-Ortiz, Alejandro, Marcello Benedetti, John Realpe-Gómez, and Rupak Biswas. 2018. “Opportunities and Challenges for Quantum-Assisted Machine Learning in Near-Term Quantum Computers.” Quantum Science and Technology 3: 1–13.

Schuld, Maria, Ilya Sinayskiy, and Francesco Petruccione. 2015. “An Introduction to Quantum Machine Learning.” Contemporary Physics 56, no. 2: 172–85.

Wittek, Peter. 2014. Quantum Machine Learning: What Quantum Computing Means to Data Mining. Cambridge, MA: Academic Press.




Artificial Intelligence - What Is The Blue Brain Project (BBP)?

 



The brain, with its 100 billion neurons, is one of the most complicated physical systems known.

It is an organ that takes constant effort to comprehend and interpret.

Similarly, digital reconstruction models of the brain and its activity need huge and long-term processing resources.

The Blue Brain Project, a Swiss brain research program supported by the École Polytechnique Fédérale de Lausanne (EPFL), was founded in 2005. Henry Markram is the Blue Brain Project's founder and director.



The purpose of the Blue Brain Project is to simulate numerous mammalian brains in order to "ultimately, explore the stages involved in the formation of biological intelligence" (Markram 2006, 153).


These simulations were originally powered by IBM's BlueGene/L, the world's most powerful supercomputer system from November 2004 to November 2007.




In 2009, the BlueGene/L was superseded by the BlueGene/P.

BlueGene/P was superseded by BlueGene/Q in 2014 due to a need for even greater processing capability.

The BBP picked Hewlett-Packard to build a supercomputer (named Blue Brain 5) devoted only to neuroscience simulation in 2018.

The use of supercomputer-based simulations has pushed neuroscience research away from the physical lab and into the virtual realm.

The Blue Brain Project's digital brain reconstructions enable studies to be carried out in an "in silico" environment (a pseudo-Latin term for the modeling of biological systems on computing hardware) using a controlled research workflow and methodology.

The possibility for supercomputers to turn the analog brain into a digital replica suggests a paradigm change in brain research.

One fundamental assumption is that the digital or synthetic duplicate will act similarly to a real or analog brain.

Michael Hines, John W. Moore, and Ted Carnevale created the software that runs on Blue Gene hardware, a simulation environment called NEURON that mimics neurons.
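The Blue Brain Project's own models are detailed, conductance-based reconstructions run in NEURON; the sketch below instead uses a deliberately simple leaky integrate-and-fire neuron, with invented parameters, purely to illustrate what an in silico simulation loop looks like.

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron: a toy stand-in for the far more detailed
    multi-compartment models the Blue Brain Project simulates."""
    v = v_rest
    trace, spike_times = [], []
    for step, i_ext in enumerate(i_input):
        # Membrane potential leaks toward rest and is driven by the input current.
        v += dt / tau * (-(v - v_rest) + i_ext)
        if v >= v_thresh:                       # threshold crossing: record a spike
            spike_times.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spike_times

trace, spikes = simulate_lif(np.full(1000, 20.0))   # 100 ms of constant input drive
print(len(spikes), "spikes in 100 ms")
```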


Because of its expanding budgets, expensive equipment, and the many interdisciplinary scientists involved, the Blue Brain Project may be regarded as a typical example of what was dubbed Big Science after World War II (1939–1945).


 


Furthermore, the scientific approach to the brain via simulation and digital imaging processes creates issues such as data management.

Blue Brain joined the Human Brain Project (HBP) consortium as an initial member and submitted a proposal to the European Commission's Future & Emerging Technologies (FET) Flagship Program.

The European Union approved the Blue Brain Project's proposal in 2013, and the Blue Brain Project is now a partner in a larger effort to investigate and undertake brain simulation.


~ Jai Krishna Ponnappan




See also: 

General and Narrow AI; Human Brain Project; SyNAPSE.


Further Reading

Djurfeldt, Mikael, Mikael Lundqvist, Christopher Johansson, Martin Rehn, Örjan Ekeberg, Anders Lansner. 2008. “Brain-Scale Simulation of the Neocortex on the IBM Blue Gene/L Supercomputer.” IBM Journal of Research and Development 52, no. 1–2: 31–41.

Markram, Henry. 2006. “The Blue Brain Project.” Nature Reviews Neuroscience 7, no. 2: 153–60.

Markram, Henry, et al. 2015. “Reconstruction and Simulation of Neocortical Microcircuitry.” Cell 63, no. 2: 456–92.



What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...