
Artificial Intelligence - Who Is Raj Reddy (Dabbala Rajagopal "Raj" Reddy)?

 


 


Dabbala Rajagopal "Raj" Reddy (1937–) is an Indian-American computer scientist who has made important contributions to artificial intelligence and has won the Turing Award.

He is the Moza Bint Nasser University Professor of Computer Science and Robotics at Carnegie Mellon University's School of Computer Science.

He has served on the faculties of Stanford and Carnegie Mellon, two of the world's leading universities for artificial intelligence research.

In the United States and in India, he has received honors for his contributions to artificial intelligence.

In 2001, the Indian government bestowed upon him the Padma Bhushan Award (the third highest civilian honor).

In 1984, he was also given the Legion of Honor, France's highest honor, which was created in 1802 by Napoleon Bonaparte himself.

In 1958, Reddy obtained his bachelor's degree from the University of Madras' Guindy Engineering College, and in 1960, he received his master's degree from the University of New South Wales in Australia.

In 1966, he came to the United States to get his doctorate in computer science at Stanford University.

He was the first member of his family to earn a university degree, as was typical of many rural Indian households at the time.

After working in industry as an Applied Science Representative at IBM Australia from 1960 to 1963, he entered academia in 1966, joining the faculty of Stanford University as an Assistant Professor of Computer Science and remaining there until 1969.

He joined Carnegie Mellon as an Associate Professor of Computer Science in 1969 and, as of 2020, remains on its faculty.

He rose up the ranks at Carnegie Mellon, eventually becoming a full professor in 1973 and a university professor in 1984.

In 1991, he was appointed as the head of the School of Computer Science, a post he held until 1999.

Many schools and institutions were founded as a result of Reddy's efforts.

In 1979, he launched the Robotics Institute and served as its first director, a position he held until 1999.

He was a driving force behind the establishment of the Language Technologies Institute, the Human Computer Interaction Institute, the Center for Automated Learning and Discovery (now the Machine Learning Department), and the Institute for Software Research at CMU during his stint as dean.

From 1999 to 2001, Reddy was a cochair of the President's Information Technology Advisory Committee (PITAC).

The President's Council of Advisors on Science and Technology (PCAST) absorbed PITAC's functions in 2005.

Reddy was the president of the American Association for Artificial Intelligence (AAAI) from 1987 to 1989.

The AAAI has been renamed the Association for the Advancement of Artificial Intelligence, recognizing the worldwide character of the research community, which began with pioneers like Reddy.

The former logo, acronym (AAAI), and purpose have been retained.

Artificial intelligence, or the study of giving intelligence to computers, was the subject of Reddy's research.

He worked on voice control for robots, speaker-independent speech recognition, and unrestricted-vocabulary continuous-speech dictation.

Reddy and his collaborators have made significant contributions to computer analysis of natural scenes, task-oriented computer architectures, universal access to information (a project supported by UNESCO), and autonomous robotic systems.

Reddy and his coworkers developed the Hearsay-II, Dragon, Harpy, and Sphinx I/II speech recognition systems.

The blackboard model, one of the fundamental ideas to emerge from this work, lets independent knowledge sources cooperate by posting and refining partial hypotheses on a shared data structure; it has been widely applied across many fields of AI.
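
A toy sketch of the blackboard idea follows: independent knowledge sources each watch the shared blackboard and contribute when they can, and a simple control loop keeps firing them until no source has anything to add. The names and rules are invented for illustration; this is not the historical Hearsay-II code.

```python
# Toy blackboard system: independent "knowledge sources" cooperate by
# reading and posting hypotheses on a shared blackboard, as in Hearsay-II.
# All names and rules here are illustrative, not the historical system.

blackboard = {"signal": "h-e-l-o", "syllables": None, "word": None}

def segmenter(bb):          # knowledge source 1: raw signal -> syllables
    if bb["syllables"] is None:
        bb["syllables"] = bb["signal"].split("-")
        return True
    return False

def lexicon(bb):            # knowledge source 2: syllables -> word hypothesis
    if bb["syllables"] is not None and bb["word"] is None:
        bb["word"] = "".join(bb["syllables"]).replace("helo", "hello")
        return True
    return False

# Control loop: keep firing whichever knowledge source can contribute.
sources = [segmenter, lexicon]
while any(ks(blackboard) for ks in sources):
    pass

print(blackboard["word"])   # -> "hello"
```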

Reddy was also interested in employing technology for the sake of society, and he worked as the Chief Scientist at the Centre Mondial Informatique et Ressources Humaines in France.

He aided the Indian government in the establishment of the Rajiv Gandhi University of Knowledge Technologies, which focuses on low-income rural youth.

He serves on the governing council of the International Institute of Information Technology (IIIT), Hyderabad.

IIIT is a non-profit public-private partnership (N-PPP) focused on applied technology research.

He was on the board of directors of the Emergency Management and Research Institute, a nonprofit public-private partnership that offers public emergency medical services.

EMRI has also aided emergency management in India's neighboring nation of Sri Lanka.

In addition, he was a member of the Health Care Management Research Institute (HMRI).

HMRI provides non-emergency health-care consultation to rural populations, particularly in Andhra Pradesh, India.

In 1994, Reddy and Edward A. Feigenbaum shared the Turing Award, the top honor in artificial intelligence, and Reddy became the first person of Indian/Asian descent to receive the award.

He received the IBM Research Ralph Gomory Fellow Award in 1991, the Okawa Foundation's Okawa Prize in 2004, the Honda Foundation's Honda Prize in 2005, and the Vannevar Bush Award from the United States National Science Board in 2006.

Reddy has received fellowships from the Institute of Electrical and Electronics Engineers (IEEE), the Acoustical Society of America, and the American Association for Artificial Intelligence, among other prestigious organizations.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Autonomous and Semiautonomous Systems; Natural Language Processing and Speech Understanding.


References & Further Reading:


Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.” AI Magazine 9, no. 4 (Winter): 9–21.

Reddy, Raj. 1996. “To Dream the Possible Dream.” Communications of the ACM 39, no. 5 (May): 105–12.






Artificial Intelligence - Who Is Rudy Rucker?

 




Rudolf von Bitter Rucker (1946–) is an American novelist, mathematician, and computer scientist, and the great-great-great-grandson of philosopher Georg Wilhelm Friedrich Hegel (1770–1831).

Rucker is best known for his sarcastic, mathematics-heavy science fiction, though he has written in a variety of fictional and nonfiction genres.

His Ware tetralogy (1982–2000) is regarded as one of the cyberpunk literary movement's fundamental works.

Rucker graduated from Rutgers University with a Ph.D. in mathematics in 1973.

He shifted from teaching mathematics in colleges in the US and Germany to teaching computer science at San José State University, where he ultimately became a professor before retiring in 2004.

Rucker has forty publications to his credit, including science fiction novels, short story collections, and nonfiction works.

His nonfiction works span the disciplines of mathematics, cognitive science, philosophy, and computer science, with topics such as the fourth dimension and the meaning of computation among them.

The popular mathematics book Infinity and the Mind: The Science and Philosophy of the Infinite (1982), which he wrote, is still in print at Princeton University Press.

Rucker established himself in the cyberpunk genre with the Ware series (Software, 1982; Wetware, 1988; Freeware, 1997; and Realware, 2000).

Software received the inaugural Philip K. Dick Award, a famous American science fiction prize that has been presented annually since Dick's death in 1982.

Wetware was also awarded this prize in 1988, in a tie with Paul J. McAuley's Four Hundred Billion Stars.

The Ware Tetralogy, which Rucker has made accessible for free online as an e-book under a Creative Commons license, was reprinted in 2010 as a single volume.

Cobb Anderson, a retired roboticist who has fallen from favor for creating sentient robots with free agency, known as boppers, is the protagonist of the Ware series.

The boppers want to reward him by giving him immortality via mind uploading; unfortunately, this procedure requires the full annihilation of Cobb's brain, which the boppers do not consider necessary hardware.

In Wetware, a bopper named Berenice wants to impregnate Cobb's niece in order to produce a human-machine hybrid.

Humanity retaliates by unleashing a mold that kills boppers, but this chipmold thrives on the cladding that covers the boppers' exteriors, ultimately resulting in an organic-machine hybrid.

Freeware is based on these lifeforms, which are now known as moldies and are generally detested by biological people.

This story also includes extraterrestrial intelligences, who in Realware provide superior technology and the power to change reality to different types of human and artificial entities.

The book Postsingular, published in 2007, was the first of Rucker's works to be distributed under a Creative Commons license.

The book, set in San Francisco, addresses the emergence of nanotechnology, first in a dystopian and later in a utopian scenario.

In the first section, a rogue engineer creates nants, which convert Earth into a virtual replica of itself, destroying the planet in the process, until a youngster is able to reverse their programming.

The narrative then goes on to depict orphids, a new kind of nanotechnology that allows people to become cognitively enhanced, hyperintelligent creatures.

Although the Ware tetralogy and Postsingular have been classified as cyberpunk books, Rucker's literature has been seen as difficult to label, since it combines hard science with humor, graphic sex, and constant drug use.

"Happily, Rucker himself has established a phrase to capture his unusual mix of commonplace reality and outraeous fantasy: transrealism," writes science fiction historian Rob Latham (Latham 2005, 4).

"Transrealism is not so much a form of SF as it is a sort of avant-garde literature," Rucker writes in "A Transrealist Manifesto," published in 1983.  (Rucker 1983, 7).


"This means writing SF about yourself, your friends, and your local surroundings, transmuted in some science-fictional fashion," he noted in a 2002 interview. Using actual life as a basis lends your writing a literary quality and keeps you from using clichés" (Brunsdale 2002, 48).


Rucker worked on the short story collection Transreal Cyberpunk with cyberpunk author Bruce Sterling, which was released in 2016.

Rucker chose to publish his book Nested Scrolls after suffering a brain hemorrhage in 2008.

It won the Emperor Norton Award, given for "extraordinary invention and creativity unhindered by the constraints of paltry reason," when it was published in 2011.

Million Mile Road Trip (2019), a science fiction book about a group of human and nonhuman characters on an intergalactic road trip, is his most recent work.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Digital Immortality; Nonhuman Rights and Personhood; Robot Ethics.


References & Further Reading:


Brunsdale, Mitzi. 2002. “PW talks with Rudy Rucker.” Publishers Weekly 249, no. 17 (April 29): 48. https://archive.publishersweekly.com/?a=d&d=BG20020429.1.82&srpos=1&e=-------en-20--1--txt-txIN%7ctxRV-%22PW+talks+with+Rudy+Rucker%22---------1.

Latham, Rob. 2005. “Long Live Gonzo: An Introduction to Rudy Rucker.” Journal of the Fantastic in the Arts 16, no. 1 (Spring): 3–5.

Rucker, Rudy. 1983. “A Transrealist Manifesto.” The Bulletin of the Science Fiction Writers of America 82 (Winter): 7–8.

Rucker, Rudy. 2007. “Postsingular.” https://manybooks.net/titles/ruckerrother07postsingular.html.

Rucker, Rudy. 2010. The Ware Tetralogy. Gaithersburg, MD: Prime Books.




Artificial Intelligence - Who Is Tanya Berger-Wolf? What Is The AI For Wildlife Conservation Software Non-profit, 'Wild Me'?

 


Tanya Berger-Wolf (1972–) is a professor in the Department of Computer Science at the University of Illinois at Chicago (UIC).

Her contributions to computational ecology and biology, data science and network analysis, and artificial intelligence for social benefit have earned her acclaim.

She is a pioneer in the subject of computational population biology, which employs artificial intelligence algorithms, computational methodologies, social science research, and data collecting to answer questions about plants, animals, and people.

Berger-Wolf teaches multidisciplinary field courses with engineering students from UIC and biology students from Princeton University at the Mpala Research Centre in Kenya.

She works in Africa because of its vast genetic variety and endangered species, which are markers of the health of life on the planet as a whole.

Her group is interested in learning more about how the environment shapes social animal behavior, as well as what puts a species at risk.

She is cofounder and director of Wildbook, a charity that develops animal conservation software.

Berger-Wolf's work for Wildbook included a crowd-sourced project to photograph as many Grevy's zebras as possible in order to complete a full census of the endangered animals.

The group can identify each individual Grevy's zebra by its distinctive pattern of stripes, which acts as a natural bar code or fingerprint, after analyzing the photographs using artificial intelligence systems.

Using convolutional neural networks and matching algorithms, the Wildbook program recognizes animals from hundreds of thousands of images.
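
In outline, the technique resembles the sketch below: a pretrained convolutional network turns each photograph into an embedding vector, and a new sighting is matched to the catalog by nearest-neighbor similarity. This is a generic illustration, not Wildbook's actual pipeline, and the random tensors stand in for preprocessed photos.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

# Sketch of CNN-based individual re-identification: embed each photo with a
# pretrained network, then match by cosine similarity. Illustrative only;
# random tensors stand in for preprocessed zebra photos.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()      # keep the 512-d embedding, drop the classifier
model.eval()

def embed(images):                  # images: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        e = model(images)
    return torch.nn.functional.normalize(e, dim=1)

catalog = embed(torch.randn(100, 3, 224, 224))   # embeddings of known individuals
query = embed(torch.randn(1, 3, 224, 224))       # embedding of a new sighting
scores = query @ catalog.T                       # cosine similarities
print("best match: individual", scores.argmax().item())
```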

The census data is utilized to focus and invest resources in the zebras' preservation and survival.

The Wildbook deep learning program may be used to identify individual members of any striped, spotted, notched, or wrinkled species.

GiraffeSpotter is Wildbook's software for giraffe populations.

Wildbook's website, which contains gallery photographs from handheld cameras and camera traps, crowdsources citizen-scientist accounts of giraffe encounters.

An intelligent agent extracts still images of the sharks' distinctive spot patterns from uploaded YouTube videos for Wildbook's individual whale shark catalog.

The whale shark census revealed data that persuaded the International Union for Conservation of Nature to alter the status of the creatures from “vulnerable” to “endangered” on the IUCN Red List of Threatened Species.

The software is also being used by Wildbook to examine videos of hawksbill and green sea turtles.

Berger-Wolf also serves as the director of technology for the conservation organization Wild Me.

Machine vision artificial intelligence systems are used by the charity to recognize individual animals in the wild.

Wild Me keeps track of animals' whereabouts, migration patterns, and social groups.

The goal is to gain a comprehensive understanding of global diversity so that conservation policy can be informed.

Microsoft's AI for Earth initiative has partnered with Wild Me.

Berger-Wolf was born in Vilnius, Lithuania, in 1972.

She went to high school in St. Petersburg, Russia, and earned her bachelor's degree from the Hebrew University of Jerusalem.

She received her doctorate from the Department of Computer Science at the University of Illinois at Urbana-Champaign and did postdoctoral work at the University of New Mexico and Rutgers University.

She has received the National Science Foundation CAREER Award, the Association for Women in Science Chicago Innovator Award, and the University of Illinois at Chicago Mentor of the Year Award.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Deep Learning.


Further Reading


Berger-Wolf, Tanya Y., Daniel I. Rubenstein, Charles V. Stewart, Jason A. Holmberg, Jason Parham, and Sreejith Menon. 2017. “Wildbook: Crowdsourcing, Computer Vision, and Data Science for Conservation.” Chicago, IL: Bloomberg Data for Good Exchange Conference. https://arxiv.org/pdf/1710.08880.pdf.

Casselman, Anne. 2018. “How Artificial Intelligence Is Changing Wildlife Research.” National Geographic, November. https://www.nationalgeographic.com/animals/2018/11/artificial-intelligence-counts-wild-animals/.

Snow, Jackie. 2018. “The World’s Animals Are Getting Their Very Own Facebook.” Fast Company, June 22, 2018. https://www.fastcompany.com/40585495/the-worlds-animals-are-getting-their-very-own-facebook.



Artificial Intelligence - Who Was Allen Newell?

 



Allen Newell (1927–1992) was an American computer scientist and cognitive psychologist.


In the late 1950s and early 1960s, Newell collaborated with Herbert Simon to develop the earliest models of human cognition.

The Logic Theory Machine showed how logical rules might be used in a proof, the General Problem Solver modeled how basic problem solving could be done, and an early chess program (the Newell-Shaw-Simon chess program) mimicked human chess play.

Newell and Simon demonstrated for the first time in these models how computers can modify symbols and how these manipulations may be used to describe, produce, and explain intelligent behavior.

Newell began his career at Stanford University as a physics student.

He moved to the RAND Corporation to work on complex system models after a year of graduate study in mathematics at Princeton.

While at RAND, he met and was inspired by Oliver Selfridge, who led him to modeling cognition.

He also met Herbert Simon, who would go on to receive the Nobel Prize in Economics for his work on economic decision-making processes, particularly satisficing.

Simon persuaded Newell to attend Carnegie Institute of Technology (now Carnegie Mellon University).

For most of his academic career, Newell worked with Simon.

Newell's main goal was to simulate the human mind's operations using computer models in order to better comprehend it.

Newell earned his PhD at Carnegie Mellon, where he worked with Simon.

He spent his academic career there, becoming a tenured and chaired professor.

He was a founding member of the Department of Computer Science (today the School of Computer Science), where he held his primary appointment.

With Simon, Newell examined the mind, especially problem solving, as part of his major line of study.

Their book Human Problem Solving, published in 1972, outlined their idea of intelligence and included examples from arithmetic problems and chess.

To assess what resources are being used in cognition, they employed verbal talk-aloud protocols, which are more accurate than think-aloud or retrospective protocols.

Ericsson and Simon eventually documented the science of verbal protocol data in more detail.

In his final lecture ("Desires and Diversions"), he stated that if you're going to be distracted, you should make the most of it.

He did so through remarkable achievements in the areas of his diversions, as well as by folding some of them into his final project.

One of the early hypertext systems, ZOG, was one of these diversions.

Newell also collaborated with Digital Equipment Corporation (DEC) founder Gordon Bell on a textbook on computer architectures and worked on voice recognition systems with CMU colleague Raj Reddy.

Perhaps the longest-running and most fruitful diversion was his work with Stuart Card and Thomas Moran at Xerox PARC developing theories of how people interact with computers.

These theories are documented in The Psychology of Human-Computer Interaction (1983).

Their study resulted in the Keystroke-Level Model and GOMS, two models for representing human behavior, as well as the Model Human Processor, a simplified description of the mechanics of cognition in this domain.
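
As a concrete illustration, the Keystroke-Level Model predicts an expert's task time by summing standard operator times. The sketch below uses the approximate published operator estimates; the task encoding itself is invented for the example.

```python
# Keystroke-Level Model sketch: predict expert task time by summing standard
# operator times (in seconds). Values approximate the estimates published by
# Card, Moran, and Newell; the task encoding is a made-up illustration.
OPERATORS = {
    "K": 0.20,   # keystroke (skilled typist)
    "P": 1.10,   # point at a target with the mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_estimate(encoding):
    return sum(OPERATORS[op] for op in encoding)

# e.g., home to mouse, think, point at a field, home back, type five characters
task = "HMPH" + "K" * 5
print(f"predicted time: {klm_estimate(task):.2f} s")   # -> 4.25 s
```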

Some of the first work in human-computer interaction (HCI) was done here.

Their strategy advocated first understanding the user and the task, then employing technology to assist the user in completing the task.

In his farewell talk, Newell also said that scientists should have a last endeavor that would outlive them.

Newell's last goal was to advocate for unified theories of cognition (UTCs) and to develop Soar, a proposed UTC and example.

His idea imagined what it would be like to have a theory that combined all of psychology's constraints, facts, and theories into a single unified result that could be implemented as a computer program.

Soar remains a successful, ongoing project, though it is not yet complete.

While Soar has yet to fully unify psychology, it has made significant progress in describing problem solving, learning, and their interactions, as well as in creating autonomous, reactive entities for large simulations.

He looked into how learning could be modeled as part of his final project (with Paul Rosenbloom).

Later, this project was merged with Soar.

Learning, according to Newell and Rosenbloom, follows a power law of practice, in which the time to complete a task is proportional to the practice (trial) number raised to a small negative power: T = a·N^(−b).

This holds true across a broad variety of activities.

Their explanation was that tasks are learned hierarchically: what is learned at the lowest level has the greatest impact on reaction time, but as learning moves up the hierarchy it is used less often and saves less time, so learning slows but does not stop.
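
A short numerical illustration of the power law follows, with made-up constants: time per trial falls quickly at first and ever more slowly afterward, and because the law is a straight line in log-log space, the exponent can be recovered by linear regression.

```python
import numpy as np

# Power law of practice: T(N) = a * N**(-b). The constants a = 10 s and
# b = 0.4 are illustrative, not fitted to any real data.
a, b = 10.0, 0.4
trials = np.array([1, 2, 4, 8, 16, 32, 64])
times = a * trials.astype(float) ** (-b)
for n, t in zip(trials, times):
    print(f"trial {n:2d}: {t:.2f} s")

# Straight line in log-log space, so b is recoverable by linear regression:
slope, intercept = np.polyfit(np.log(trials), np.log(times), 1)
print("fitted exponent b =", round(-slope, 3))   # -> 0.4
```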

Newell delivered the William James Lectures at Harvard in 1987.

He detailed what it would take to develop a unified theory in psychology in these lectures.

These lectures were taped and are accessible in CMU's library.

He gave them again the following autumn and turned them into a book (1990).

Soar represents cognition as search through problem spaces.

It takes the form of a production system (using IF-THEN rules).

It attempts to select and apply an operator.

If it has no applicable operator, or cannot apply the one it has chosen, Soar declares an impasse and recurses into a subgoal to resolve it.

Knowledge is thus represented as operators and problem spaces, along with knowledge about how to overcome impasses.

The architecture, in turn, defines how these choices and this knowledge are organized.
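
The sketch below is a toy production system in this spirit: IF-THEN rules fire against a working memory, and when no rule applies the system reports an impasse, which is where a real Soar agent would subgoal. It is illustrative only, not the Soar architecture itself.

```python
# Toy production system: each rule is an IF (condition) / THEN (action) pair
# over working memory. When no rule can fire, Soar would reach an impasse
# and recurse into a subgoal; this sketch simply reports it.
working_memory = {"goal": "make-tea", "kettle": "empty"}

rules = [
    (lambda wm: wm["kettle"] == "empty",
     lambda wm: wm.update(kettle="full")),      # fill the kettle
    (lambda wm: wm["kettle"] == "full",
     lambda wm: wm.update(kettle="boiled")),    # boil the water
    (lambda wm: wm["kettle"] == "boiled",
     lambda wm: wm.update(goal="done")),        # make the tea
]

while working_memory["goal"] != "done":
    for condition, action in rules:
        if condition(working_memory):
            action(working_memory)
            break
    else:
        print("impasse: no rule applies")       # Soar would subgoal here
        break

print(working_memory)   # -> {'goal': 'done', 'kettle': 'boiled'}
```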

Soar models have been employed in a range of cognitive science and AI applications, including military simulations, and systems with up to one million rules have been constructed.

Kathleen Carley, a social scientist at CMU, and Newell discussed how to use these cognitive models to simulate social agents.

Work on Soar continues, notably at the University of Michigan under the direction of John Laird, with a present concentration on intelligent agents.

In 1975, the ACM A. M. Turing Award was given to Newell and Simon for their contributions to artificial intelligence, psychology of human cognition, and list processing.

Their work is credited with making significant contributions to computer science as an empirical investigation.

Newell has also been inducted into the National Academies of Sciences and Engineering.

He was awarded the National Medal of Science in 1992.

Newell was instrumental in establishing a productive and supportive research group, department, and institution.

His son said at his memorial service that he was not only a great scientist, but also a great father.

His faults were that he was very smart, that he worked very hard, and that he thought the same of you.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; General Problem Solver; Simon, Herbert A.


References & Further Reading:


Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Newell, Allen. 1993. Desires and Diversions. Carnegie Mellon University, School of Computer Science. Stanford, CA: University Video Communications.

Simon, Herbert A. 1998. “Allen Newell: 1927–1992.” IEEE Annals of the History of Computing 20, no. 2: 63–76.




Artificial Intelligence - Quantum AI

 



Artificial intelligence and quantum computing, according to Johannes Otterbach, a physicist at Rigetti Computing in Berkeley, California, are natural friends since both technologies are essentially statistical.

Airbus, Atos, Baidu, b|eit, Cambridge Quantum Computing, Elyah, Hewlett-Packard (HP), IBM, Microsoft Research QuArC, QC Ware, Quantum Benchmark Inc., R QUANTECH, Rahko, and Zapata Computing are among the organizations that have moved into this area.

Bits are used to encode and modify data in traditional general-purpose computer systems.

Bits may only be in one of two states: 0 or 1.

Quantum computers use the actions of subatomic particles like electrons and photons to process data.

Superposition—particles residing in all conceivable states at the same time—and entanglement—the pairing and connection of particles such that they cannot be characterized independently of the state of others, even at long distances—are two of the most essential phenomena used by quantum computers.

Such entanglement was famously dubbed "spooky action at a distance" by Albert Einstein.

Quantum computers use quantum registers, which are made up of a number of quantum bits or qubits, to store data.

While a simple explanation is elusive, a qubit may be understood as residing in a weighted combination of two states at the same time, so a register of qubits can represent many states at once.

Each qubit added to the system doubles the size of the state space the machine can represent.

More than one quadrillion classical bits might be processed by a quantum computer with just fifty entangled qubits.

In a single year, sixty qubits could carry all of humanity's data.

Three hundred qubits might compactly encapsulate a quantity of data comparable to the observable universe's classical information content.
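
These scaling claims all follow from the fact that n qubits are described by 2^n complex amplitudes. A small classical state-vector simulation, a sketch rather than real quantum hardware, makes both the doubling and entanglement concrete:

```python
import numpy as np

# Classical state-vector simulation: n qubits need 2**n complex amplitudes,
# so every added qubit doubles the storage a classical machine needs.
zero = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero                   # one qubit in equal superposition
for _ in range(19):                # grow the register to 20 qubits
    state = np.kron(state, H @ zero)
print(len(state))                  # 1,048,576 amplitudes for just 20 qubits

# Entanglement: Hadamard on qubit 0 then CNOT yields the Bell state
# (|00> + |11>)/sqrt(2); neither qubit can now be described on its own.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(H @ zero, zero)
print(np.round(bell.real, 3))      # [0.707 0.    0.    0.707]
```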

Quantum computers can operate in parallel on large quantities of distinct computations, collections of data, or operations.

True autonomous transportation would be possible if a working artificially intelligent quantum computer could monitor and manage all of a city's traffic in real time.

By comparing all of the photographs to the reference photo at the same time, quantum artificial intelligence may rapidly match a single face to a library of billions of photos.

Our understanding of processing, programming, and complexity has radically changed with the development of quantum computing.

Most quantum algorithms apply a series of quantum state transformations followed by a measurement.

The notion of quantum computing goes back to the 1980s, when physicists such as Yuri Manin, Richard Feynman, and David Deutsch realized that by using so-called quantum gates, a concept taken from linear algebra, researchers would be able to manipulate information.

They hypothesized that by combining many kinds of quantum gates into circuits, qubits could be steered through superpositions and entanglements to carry out quantum algorithms whose outcomes could be measured.

Some quantum mechanical processes could not be efficiently replicated on conventional computers, which presented a problem to these early researchers.

They thought that quantum technology (perhaps included in a universal quantum Turing computer) would enable quantum simulations.

In 1993, Umesh Vazirani and Ethan Bernstein of the University of California, Berkeley, argued that quantum computing would one day be able to solve certain problems faster than any traditional digital computer, in violation of the extended Church-Turing thesis.

In computational complexity theory, Vazirani and Bernstein's work defines the class of bounded-error quantum polynomial time (BQP) decision problems.

These are decision problems that a quantum computer can solve in polynomial time with an error probability of at most one-third.

The frequently proposed threshold for Quantum Supremacy is fifty qubits, the point at which quantum computers would be able to tackle problems that would be impossible to solve on conventional machines.

Although no one believes quantum computing will be capable of solving all NP-hard problems, quantum AI researchers think the machines will be able to solve certain NP-intermediate problems.

Creating quantum machine algorithms that do valuable work has proved to be a tough task.

In 1994, AT&T Laboratories' Peter Shor devised a polynomial time quantum algorithm that beat conventional methods in factoring big numbers, possibly allowing for the speedy breakage of current kinds of public key encryption.

Since then, intelligence services have been stockpiling encrypted material passed across networks in the hopes that quantum computers would be able to decipher it.

Another technique devised by Shor's AT&T Labs colleague Lov Grover allows for quick searches of unsorted datasets.
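
Grover's amplitude-amplification idea can be simulated classically in a few lines; the sketch below is illustrative only. Repeated oracle-plus-diffusion steps concentrate probability on the marked item in roughly the square root of N iterations.

```python
import numpy as np

# Classical simulation of Grover's search over N = 2**n items. The oracle
# flips the marked item's amplitude; the diffuser inverts all amplitudes
# about their mean, amplifying the marked one each round.
n, marked = 3, 5
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))                    # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1
diffuser = 2 * np.full((N, N), 1 / N) - np.eye(N)

for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):   # ~sqrt(N) iterations
    state = diffuser @ (oracle @ state)

probs = np.abs(state) ** 2
print(probs.argmax(), round(probs.max(), 3))          # -> 5 0.945
```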

Quantum neural networks are similar to conventional neural networks in that they label input, identify patterns, and learn from experience using layers of millions or billions of linked neurons.

Large matrices and vectors produced by neural networks can be processed exponentially quicker by quantum computers than by classical computers.

Aram Harrow of MIT and Avinatan Hassidim gave the critical algorithmic insight for rapid classification and quantum matrix inversion in 2008.

Michael Hartmann, a visiting researcher at Google AI Quantum and Associate Professor of Photonics and Quantum Sciences at Heriot-Watt University, is working on a quantum neural network computer.

Hartmann's Neuromorphic Quantum Computing (Quromorphic) Project employs superconducting electrical circuits as hardware.

Hartmann's artificial neural network computers are inspired by the brain's neuronal organization.

Such networks are usually implemented in software, with each artificial neuron programmed and connected into a larger network of neurons.

Hardware that incorporates artificial neural networks is also possible.

Hartmann estimates that a workable quantum computing artificial intelligence might take 10 years to develop.

D-Wave, situated in Vancouver, British Columbia, was the first business to mass-produce quantum computers in commercial numbers.

In 2011, D-Wave started producing annealing quantum computers.

Annealing processors are special-purpose products used for a restricted set of problems with multiple local minima in a discrete search space, such as combinatorial optimization issues.

The D-Wave computer isn't polynomially equivalent to a universal quantum computer, so it can't run Shor's algorithm.

Lockheed Martin, the University of Southern California, Google, NASA, and the Los Alamos National Laboratory are among the company's clients.

Universal quantum computers are being pursued by Google, Intel, Rigetti, and IBM.

Each has built quantum processors with on the order of fifty qubits.

In 2018, the Google AI Quantum lab, led by Hartmut Neven, announced the introduction of their newest 72-qubit Bristlecone processor.

Intel debuted its 49-qubit Tangle Lake processor in 2018.

The Aspen-1 processor from Rigetti Computing has sixteen qubits.

The IBM Q Experience quantum computing facility is situated in Yorktown Heights, New York, inside the Thomas J. Watson Research Center.

To create quantum commercial applications, IBM is collaborating with a number of corporations, including Honda, JPMorgan Chase, and Samsung.

The public is also welcome to submit experiments to be processed on the company's quantum computers.

Quantum AI research is also highly funded by government organizations and universities.

The NASA Quantum Artificial Intelligence Laboratory (QuAIL) has a D-Wave 2000Q quantum computer with 2,048 qubits that it wants to use to tackle NP-hard problems in data processing, anomaly detection and decision-making, air traffic management, and mission planning and coordination.

The NASA team has chosen to concentrate on the most difficult machine learning challenges, such as generative models in unsupervised learning, in order to illustrate the technology's full potential.

In order to maximize the value of D-Wave resources and skills, NASA researchers have opted to focus on hybrid quantum-classical techniques.

Many laboratories across the globe are investigating completely quantum machine learning.

Quantum Learning Theory proposes that quantum algorithms might be utilized to address machine learning problems, hence improving traditional machine learning techniques.

In quantum learning theory, classical binary data sets are fed into a quantum computer for processing.

The NIST Joint Quantum Institute and the University of Maryland's Joint Center for Quantum Information and Computer Science are also bridging the gap between machine learning and quantum computing.

Workshops bringing together professionals in mathematics, computer science, and physics to use artificial intelligence algorithms in quantum system control are hosted by the NIST-UMD.

Engineers are also encouraged to employ quantum computing to boost the performance of machine learning algorithms as part of the alliance.

The Quantum Algorithm Zoo, a collection of all known quantum algorithms, is likewise housed at NIST.

Scott Aaronson is the director of the University of Texas at Austin's Quantum Information Center.

The department of computer science, the department of electrical and computer engineering, the department of physics, and the Advanced Research Laboratories have collaborated to create the center.

The University of Toronto has a quantum machine learning start-up incubator.

Peter Wittek is the head of the Quantum Machine Learning Program of the Creative Destruction Lab, which houses the QML incubator.

Materials discovery, optimization, and logistics, reinforcement and unsupervised machine learning, chemical engineering, genomics and drug discovery, systems design, finance, and security are all areas where the University of Toronto incubator is fostering innovation.

In December 2018, President Donald Trump signed the National Quantum Initiative Act into law.

The legislation establishes a partnership of the National Institute of Standards and Technology (NIST), the National Science Foundation (NSF), and the Department of Energy (DOE) for quantum information science research, commercial development, and education.

The statute anticipates the NSF and DOE establishing many competitively awarded research centers as a result of the endeavor.

Due to the difficulties of running quantum processing units (QPUs), which must be maintained in a vacuum at temperatures near to absolute zero, no quantum computer has yet outperformed a state-of-the-art classical computer on a challenging job.

Because quantum computing is susceptible to external environmental impacts, such isolation is required.

Qubits are delicate; a typical quantum bit can only exhibit coherence for ninety microseconds before degrading and becoming unreliable.

In an isolated quantum processor with high thermal noise, communicating inputs and outputs and collecting measurements is a severe technical difficulty that has yet to be fully handled.

The findings are not totally dependable in a classical sense since the measurement is quantum and hence probabilistic.

Only one of the quantum parallel threads may be randomly accessed for results.

During the measuring procedure, all other threads are deleted.

It is believed that coupling quantum processors to error-correcting artificial intelligence algorithms will lower these computers' error rates.

Many machine intelligence applications, such as deep learning and probabilistic programming, rely on sampling from high-dimensional probability distributions.

Quantum sampling methods have the potential to make calculations on otherwise intractable issues quicker and more efficient.

Shor's algorithm alters the quantum state in such a way that common properties of the output values, such as the period of a function, can be measured.

Grover's search method manipulates the quantum state with an amplification technique that increases the probability that the desired output will be read off.

Quantum computers would also be able to execute many AI algorithms at the same time.

Quantum computing simulations have recently been used by scientists to examine the beginnings of biological life.

Unai Alvarez-Rodriguez of the University of the Basque Country in Spain built so-called artificial quantum living forms using IBM's QX superconducting quantum computer.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


General and Narrow AI.


References & Further Reading:


Aaronson, Scott. 2013. Quantum Computing Since Democritus. Cambridge, UK: Cambridge University Press.

Biamonte, Jacob, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. 2018. “Quantum Machine Learning.” https://arxiv.org/pdf/1611.09347.pdf.

Perdomo-Ortiz, Alejandro, Marcello Benedetti, John Realpe-Gómez, and Rupak Biswas. 2018. “Opportunities and Challenges for Quantum-Assisted Machine Learning in Near-Term Quantum Computers.” Quantum Science and Technology 3: 1–13.

Schuld, Maria, Ilya Sinayskiy, and Francesco Petruccione. 2015. “An Introduction to Quantum Machine Learning.” Contemporary Physics 56, no. 2: 172–85.

Wittek, Peter. 2014. Quantum Machine Learning: What Quantum Computing Means to Data Mining. Cambridge, MA: Academic Press.




Artificial Intelligence - Who Is Aaron Sloman?

 




Aaron Sloman (1936–) is a renowned artificial intelligence and cognitive science philosopher.

He is a global expert in the evolution of biological information processing, an area of study that seeks to understand how animal species have acquired cognitive levels that surpass technology.

In recent years, he has been asking whether evolution was the first blind mathematician and whether weaver birds are actually capable of recursion (dividing a problem into parts to conquer it).

His present Meta-Morphogenesis Project is based on an idea of Alan Turing (1912–1954), who claimed that although computers could display mathematical ingenuity, only brains could exercise mathematical intuition.

Because of this, Sloman holds, not every aspect of the cosmos, including the human brain, can be modeled in a sufficiently massive digital computer.

This assertion clearly contradicts digital physics, which claims that the universe may be characterized as a simulation running on a sufficiently big and fast general-purpose computer that calculates the cosmos's development.

Sloman proposes that the universe has developed its own biological building kits for creating and deriving other—different and more sophisticated—construction kits, similar to how scientists have evolved, accumulated, and applied increasingly complex mathematical knowledge via mathematics.

He refers to this concept as the Self-Informing Universe, and suggests that scientists build a multi-membrane Super-Turing machine that runs on subneural biological chemistry.

Sloman was born to Jewish Lithuanian immigrants in Southern Rhodesia (now Zimbabwe).

At the University of Cape Town, he got a bachelor's degree in Mathematics and Physics.

He was awarded a Rhodes Scholarship and earned his PhD in philosophy from Oxford University, where he defended Immanuel Kant's mathematical concepts.

He saw that artificial intelligence had promise as the way forward in philosophical understanding of the mind as a visiting scholar at Edinburgh University in the early 1970s.

He said that using Kant's recommendations as a starting point, a workable robotic toy baby could be created, which would eventually develop in intellect and become a mathematician on par with Archimedes or Zeno.

He was one of the first scholars to refute John McCarthy's claim that a computer program capable of operating intelligently in the real world must use structured, logic-based ideas.

Sloman was one of the founding members of the University of Sussex School of Cognitive and Computer Sciences.

There, he collaborated with Margaret Boden and Max Clowes to advance artificial intelligence instruction and research.

This effort resulted in the commercialization of the widely used Poplog AI teaching system.

Sloman's The Computer Revolution in Philosophy (1978) is famous for being one of the first to recognize that metaphors from the realm of computers (for example, the brain as a data storage device and thinking as a collection of tools) will dramatically alter how we think about ourselves.

The epilogue of the book contains observations on the near impossibility of AI sparking the Singularity and the likelihood of a human Society for the Liberation of Robots to address possible future brutal treatment of intelligent machines.

Sloman held the Artificial Intelligence and Cognitive Science chair in the School of Computer Science at the University of Birmingham until his formal retirement in 2002.

He is a member of the Alan Turing Institute and the Association for the Advancement of Artificial Intelligence.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Superintelligence; Turing, Alan.


References & Further Reading:


Sloman, Aaron. 1962. “Knowing and Understanding: Relations Between Meaning and Truth, Meaning and Necessary Truth, Meaning and Synthetic Necessary Truth.” D. Phil., Oxford University.

Sloman, Aaron. 1971. “Interactions between Philosophy and AI: The Role of Intuition and Non-Logical Reasoning in Intelligence.” Artificial Intelligence 2: 209–25.

Sloman, Aaron. 1978. The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind. Terrace, Hassocks, Sussex, UK: Harvester Press.

Sloman, Aaron. 1990. “Notes on Consciousness.” AISB Quarterly 72: 8–14.

Sloman, Aaron. 2018. “Can Digital Computers Support Ancient Mathematical Consciousness?” Information 9, no. 5: 111.


