Artificial Intelligence - Quantum AI.

 



According to Johannes Otterbach, a physicist at Rigetti Computing in Berkeley, California, artificial intelligence and quantum computing are natural partners, since both technologies are fundamentally statistical.

Airbus, Atos, Baidu, b|eit, Cambridge Quantum Computing, Elyah, Hewlett-Packard (HP), IBM, Microsoft Research QuArC, QC Ware, Quantum Benchmark Inc., R QUANTECH, Rahko, and Zapata Computing are among the organizations that have moved into this area of research.

Bits are used to encode and modify data in traditional general-purpose computer systems.

Bits may only be in one of two states: 0 or 1.

Quantum computers use the actions of subatomic particles like electrons and photons to process data.

Two of the most essential phenomena exploited by quantum computers are superposition, in which particles reside in all conceivable states at the same time, and entanglement, the pairing and linking of particles such that they cannot be characterized independently of one another, even across long distances.

Such entanglement was dubbed "spooky action at a distance" by Albert Einstein.

Quantum computers use quantum registers, which are made up of a number of quantum bits or qubits, to store data.

While an intuitive explanation is elusive, a qubit can be understood as residing in a weighted combination of two states at the same time, yielding many possible states.
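In standard notation, a single qubit's state is such a weighted combination of the two basis states, with the weights fixing the measurement probabilities:

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,$$

so that a measurement returns 0 with probability $|\alpha|^{2}$ and 1 with probability $|\beta|^{2}$.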

Each qubit added to a system doubles its processing capability.

More than one quadrillion classical bits might be processed by a quantum computer with just fifty entangled qubits.

Sixty qubits could hold all of the data humanity generates in a single year.

Three hundred qubits might compactly encapsulate a quantity of data comparable to the observable universe's classical information content.
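The arithmetic behind these claims is the exponential growth of the state space: describing $n$ entangled qubits classically requires tracking $2^{n}$ complex amplitudes, so

$$2^{50} \approx 1.1 \times 10^{15} \;(\text{over a quadrillion}), \qquad 2^{300} \approx 2 \times 10^{90}.$$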

Quantum computers can operate in parallel on large quantities of distinct computations, collections of data, or operations.

True autonomous transportation would be possible if a working artificially intelligent quantum computer could monitor and manage all of a city's traffic in real time.

By comparing all of the photographs to the reference photo at the same time, quantum artificial intelligence may rapidly match a single face to a library of billions of photos.

Our understanding of processing, programming, and complexity has radically changed with the development of quantum computing.

Most quantum algorithms consist of a series of quantum state transformations followed by a measurement.

The notion of quantum computing goes back to the 1980s, when physicists such as Yuri Manin, Richard Feynman, and David Deutsch realized that by using so-called quantum gates, a concept taken from linear algebra, researchers would be able to manipulate information.

They hypothesized that, by combining many kinds of quantum gates into circuits, qubits could be steered through different superpositions and entanglements into quantum algorithms whose outcomes could then be measured.

Some quantum mechanical processes could not be efficiently replicated on conventional computers, which presented a problem to these early researchers.

They thought that quantum technology (perhaps embodied in a universal quantum Turing machine) would enable such quantum simulations.

In 1993, Umesh Vazirani and Ethan Bernstein of the University of California, Berkeley, hypothesized that quantum computers would one day be able to solve certain problems efficiently and faster than traditional digital computers, in violation of the extended Church-Turing thesis.

In computational complexity theory, Vazirani and Bernstein defined a special class of decision problems: bounded-error quantum polynomial time (BQP).

These are decision problems that a quantum computer can solve in polynomial time with an error probability of at most one-third on every instance.
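Stated slightly more formally, a decision problem (language) $L$ belongs to BQP if some polynomial-time quantum algorithm $A$ satisfies

$$x \in L \;\Rightarrow\; \Pr[A(x)\ \text{accepts}] \ge \tfrac{2}{3}, \qquad x \notin L \;\Rightarrow\; \Pr[A(x)\ \text{accepts}] \le \tfrac{1}{3},$$

and repeating the algorithm and taking a majority vote drives the error probability down exponentially.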

The frequently proposed threshold for Quantum Supremacy is fifty qubits, the point at which quantum computers would be able to tackle problems that would be impossible to solve on conventional machines.

Although no one believes quantum computers will be capable of solving all NP-hard problems, quantum AI researchers think the machines will be capable of solving certain kinds of NP-intermediate problems.

Creating quantum machine algorithms that do valuable work has proved to be a tough task.

In 1994, AT&T researcher Peter Shor devised a polynomial-time quantum algorithm for factoring large numbers that outperforms any known conventional method, potentially allowing current forms of public-key encryption to be broken quickly.
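As a hedged illustration of the classical number theory behind Shor's result, the sketch below (my own toy code; the function names and the brute-force period finder are illustrative assumptions) shows how factoring N reduces to finding the period r of f(x) = a^x mod N. Shor's quantum contribution is finding that period efficiently with the quantum Fourier transform; here it is found by brute force, which is exponentially slower.

```python
from math import gcd

def find_period(a, N):
    # Brute-force stand-in for quantum period finding: smallest r with a^r = 1 (mod N).
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_style_factor(N, a=2):
    # Classical reduction used by Shor's algorithm: a period r of a^x mod N
    # yields a nontrivial factor gcd(a^(r/2) - 1, N) when r is even and
    # a^(r/2) is not congruent to -1 mod N.
    g = gcd(a, N)
    if g != 1:
        return g                       # lucky: a already shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                    # unlucky base a; a real run retries with another a
    return gcd(pow(a, r // 2) - 1, N)

print(shor_style_factor(15))           # 3, since 2^x mod 15 has period r = 4
```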

Since then, intelligence services have been stockpiling encrypted material passed across networks in the hopes that quantum computers would be able to decipher it.

Another technique devised by Shor's AT&T Labs colleague Lov Grover allows for quick searches of unsorted datasets.
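The advantage here is quadratic rather than exponential: searching an unsorted collection of $N$ items takes on the order of $N$ queries classically, while Grover's algorithm needs only on the order of $\sqrt{N}$, using roughly

$$\left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor$$

amplification iterations before the marked item can be read out with high probability.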

Quantum neural networks are similar to conventional neural networks in that they label input, identify patterns, and learn from experience using layers of millions or billions of linked neurons.

Large matrices and vectors produced by neural networks can be processed exponentially quicker by quantum computers than by classical computers.

Aram Harrow of MIT, Avinatan Hassidim, and Seth Lloyd provided the critical algorithmic insight for rapid classification and quantum matrix inversion in 2008.
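The algorithm alluded to here is commonly known as HHL (Harrow-Hassidim-Lloyd). As usually stated, and under its standard assumptions (a sparse, well-conditioned matrix and quantum access to the input data), it solves an $N \times N$ linear system $A\mathbf{x} = \mathbf{b}$ in time roughly

$$\tilde{O}\!\left(\frac{\log(N)\, s^{2}\kappa^{2}}{\epsilon}\right),$$

where $s$ is the sparsity, $\kappa$ the condition number, and $\epsilon$ the precision, compared with time at least linear in $N$ for classical solvers; this is the source of the exponential speedup mentioned above.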

Michael Hartmann, a visiting researcher at Google AI Quantum and Associate Professor of Photonics and Quantum Sciences at Heriot-Watt University, is working on a quantum neural network computer.

Hartmann's Neuromorphic Quantum Computing (Quromorphic) Project employs superconducting electrical circuits as hardware.

Hartmann's artificial neural network computers are inspired by the brain's neuronal organization.

They are usually stored in software, with each artificial neuron being programmed and connected to a larger network of neurons.

Hardware that incorporates artificial neural networks is also possible.

Hartmann estimates that a workable quantum computing artificial intelligence might take 10 years to develop.

D-Wave, situated in Vancouver, British Columbia, was the first business to produce and sell quantum computers commercially.

In 2011, D-Wave started producing annealing quantum computers.

Annealing processors are special-purpose products used for a restricted set of problems with multiple local minima in a discrete search space, such as combinatorial optimization issues.
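As a purely classical toy sketch (my own illustration, not D-Wave's hardware or programming interface), the following code shows the kind of problem an annealer targets: a quadratic unconstrained binary optimization (QUBO) objective minimized by simulated annealing, which escapes local minima by occasionally accepting uphill moves.

```python
import math
import random
import numpy as np

def simulated_annealing(Q, steps=5000, temp=2.0, cooling=0.999):
    # Minimize the QUBO objective x^T Q x over binary vectors x by flipping
    # one bit at a time and sometimes accepting worse solutions while "hot".
    n = Q.shape[0]
    x = np.random.randint(0, 2, n)
    energy = float(x @ Q @ x)
    for _ in range(steps):
        y = x.copy()
        y[random.randrange(n)] ^= 1            # flip a random bit
        delta = float(y @ Q @ y) - energy
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x, energy = y, energy + delta      # accept improvements, or uphill moves with small probability
        temp *= cooling                        # gradually cool toward greedy behavior
    return x, energy

Q = np.array([[-1.0, 2.0],                     # hypothetical two-variable instance:
              [2.0, -1.0]])                    # optimum is x = (1,0) or (0,1) with energy -1
print(simulated_annealing(Q))
```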

The D-Wave computer isn't polynomially equivalent to a universal quantum computer, hence it can't run Shor's algorithm.

Lockheed Martin, the University of Southern California, Google, NASA, and the Los Alamos National Laboratory are among the company's clients.

Universal quantum computers are being pursued by Google, Intel, Rigetti, and IBM.

Each has built quantum processors with on the order of fifty qubits.

In 2018, the Google AI Quantum lab, led by Hartmut Neven, announced the introduction of their newest 72-qubit Bristlecone processor.

Intel debuted its 49-qubit Tangle Lake processor in 2018.

The Aspen-1 processor from Rigetti Computing has sixteen qubits.

The IBM Q Experience quantum computing facility is situated inside the Thomas J. Watson Research Center in Yorktown Heights, New York.

To create quantum commercial applications, IBM is collaborating with a number of corporations, including Honda, JPMorgan Chase, and Samsung.

The public is also welcome to submit experiments to be processed on the company's quantum computers.

Quantum AI research is also highly funded by government organizations and universities.

The NASA Quantum Artificial Intelligence Laboratory (QuAIL) has a D-Wave 2000Q quantum computer with 2,048 qubits that it wants to use to tackle NP-hard problems in data processing, anomaly detection and decision-making, air traffic management, and mission planning and coordination.

The NASA team has chosen to concentrate on the most difficult machine learning challenges, such as generative models in unsupervised learning, in order to illustrate the technology's full potential.

In order to maximize the value of D-Wave resources and skills, NASA researchers have opted to focus on hybrid quantum-classical techniques.

Many laboratories across the globe are investigating completely quantum machine learning.

Quantum Learning Theory proposes that quantum algorithms might be utilized to address machine learning problems, hence improving traditional machine learning techniques.

In quantum learning theory, classical binary data sets are fed into a quantum computer for processing.

The NIST Joint Quantum Institute and the University of Maryland's Joint Center for Quantum Information and Computer Science are also bridging the gap between machine learning and quantum computing.

The NIST-UMD partnership hosts workshops that bring together professionals in mathematics, computer science, and physics to apply artificial intelligence algorithms to the control of quantum systems.

Engineers are also encouraged to employ quantum computing to boost the performance of machine learning algorithms as part of the alliance.

The Quantum Algorithm Zoo, a collection of all known quantum algorithms, is likewise housed at NIST.

Scott Aaronson is the director of the University of Texas at Austin's Quantum Information Center.

The department of computer science, the department of electrical and computer engineering, the department of physics, and the Advanced Research Laboratories have collaborated to create the center.

The University of Toronto has a quantum machine learning start-up incubator.

Peter Wittek is the head of the Quantum Machine Learning Program of the Creative Destruction Lab, which houses the QML incubator.

Materials discovery, optimization, and logistics, reinforcement and unsupervised machine learning, chemical engineering, genomics and drug discovery, systems design, finance, and security are all areas where the University of Toronto incubator is fostering innovation.

In December 2018, President Donald Trump signed the National Quantum Initiative Act into law.

The legislation establishes a partnership of the National Institute of Standards and Technology (NIST), the National Science Foundation (NSF), and the Department of Energy (DOE) for quantum information science research, commercial development, and education.

The statute anticipates the NSF and DOE establishing many competitively awarded research centers as a result of the endeavor.

Due to the difficulty of operating quantum processing units (QPUs), which must be maintained in a vacuum at temperatures near absolute zero, no quantum computer has yet outperformed a state-of-the-art classical computer on a challenging task.

Such isolation is required because quantum computation is highly susceptible to interference from the external environment.

Qubits are delicate; a typical quantum bit can only exhibit coherence for ninety microseconds before degrading and becoming unreliable.

Communicating inputs and outputs and collecting measurements from such an isolated quantum processor, without introducing thermal noise, is a severe technical difficulty that has yet to be fully solved.

The findings are not totally dependable in a classical sense since the measurement is quantum and hence probabilistic.

Only one of the quantum parallel threads may be randomly accessed for results.

During the measurement procedure, all other threads are destroyed.

It is believed that coupling quantum processors with error-correcting artificial intelligence algorithms would lower the error rate of these computers.

Many machine intelligence applications, such as deep learning and probabilistic programming, rely on sampling from high-dimensional probability distributions.

Quantum sampling methods have the potential to make calculations on otherwise intractable issues quicker and more efficient.

Shor's method employs an artificial intelligence approach that alters the quantum state in such a manner that common properties of output values, such as the periodicity of a function, can be measured.

Grover's search method manipulates the quantum state using an amplification technique to increase the probability that the desired output will be read out.
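The amplification step can be illustrated with a small state-vector simulation (a minimal NumPy sketch of my own, not any vendor's API): each iteration flips the phase of the marked entry and then inverts every amplitude about the mean, steadily concentrating probability on the desired index.

```python
import numpy as np

def grover_search(n_qubits, marked):
    # Simulate Grover's algorithm on the 2^n amplitudes of an n-qubit register.
    N = 2 ** n_qubits
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # near-optimal iteration count
    state = np.full(N, 1.0 / np.sqrt(N))                  # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1.0                              # oracle: phase-flip the marked item
        state = 2.0 * state.mean() - state                 # diffusion: inversion about the mean
    probabilities = state ** 2
    return int(np.argmax(probabilities)), float(probabilities.max())

index, prob = grover_search(n_qubits=10, marked=423)
print(index, round(prob, 3))    # expect 423 with probability close to 1
```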

Quantum computers would also be able to execute many AI algorithms at the same time.

Quantum computing simulations have recently been used by scientists to examine the beginnings of biological life.

Unai Alvarez-Rodriguez of the University of the Basque Country in Spain built so-called artificial quantum living forms using IBM's QX superconducting quantum computer.


~ Jai Krishna Ponnappan




See also: 


General and Narrow AI.


References & Further Reading:


Aaronson, Scott. 2013. Quantum Computing Since Democritus. Cambridge, UK: Cambridge University Press.

Biamonte, Jacob, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. 2018. “Quantum Machine Learning.” https://arxiv.org/pdf/1611.09347.pdf.

Perdomo-Ortiz, Alejandro, Marcello Benedetti, John Realpe-Gómez, and Rupak Biswas. 2018. “Opportunities and Challenges for Quantum-Assisted Machine Learning in Near-Term Quantum Computers.” Quantum Science and Technology 3: 1–13.

Schuld, Maria, Ilya Sinayskiy, and Francesco Petruccione. 2015. “An Introduction to Quantum Machine Learning.” Contemporary Physics 56, no. 2: 172–85.

Wittek, Peter. 2014. Quantum Machine Learning: What Quantum Computing Means to Data Mining. Cambridge, MA: Academic Press.




Artificial Intelligence - What Is RoboThespian?

 




RoboThespian is an interactive robot created by Engineered Arts in England.

It is described as a humanoid, which means it was meant to look like a person.

The initial version of the robot was released in 2005, with improvements following in 2007, 2010, and 2014.

The robot is human-sized, with a plastic face, metal arms, and legs that can move in a variety of directions.

The robot's video-camera eyes can track a person's movements and infer his or her age and mood, and it speaks with a digital voice.

All RoboThespians, according to Engineered Arts' website, come with a touchscreen that enables users to personalize and manage their experience with the robot, including the ability to animate it and modify its language.

Users may also operate it remotely via a tablet; however, since the robot can be preprogrammed, no live operator is necessary.

RoboThespian was created to engage with people in public places including colleges, museums, hotels, trade events, and exhibits.

The robot is utilized as a tour guide in venues like science museums.

It can scan QR codes, identify facial expressions, react to gestures, and communicate with people through a touchscreen kiosk.

RoboThespian may also amuse in addition to these practical uses.

It's jam-packed with songs, gestures, welcomes, and first impressions.

RoboThespian has also performed in front of an audience.

It has the ability to sing, dance, perform, read from a script, and communicate with emotion.

It can respond to audiences and forecast their emotions since it is equipped with cameras and face recognition.

According to Engineered Arts, it may have a "vast variety of facial expression" as an actor and "can be precisely displayed with the delicate subtlety, generally only achieved by human performers" (Engineered Arts 2017).

During the Edinburgh Festival Fringe in 2015, the drama Spillikin had its world debut at the Pleasance Theatre.

In a love tale about a husband who constructs a robot for his wife to keep her company after he dies, RoboThespian appeared with four human performers.

The play toured the United Kingdom from 2016 to 2017, receiving critical praise.

Companies who purchase a RoboThespian may tailor the robot's content to meet their specific requirements.

The appearance of the robot's face and other design elements may be changed.

It can feature a projected face, grippable hands, and moveable legs.

RoboThespians are now placed at NASA Kennedy Center in the United States, the National Science and Technology Museum in Spain, and the Copernicus Science Centre in Poland, among others.

University of Central Florida, University of North Carolina at Chapel Hill, University College London, and University of Barcelona are among the academic institutions where the robot may be found.



~ Jai Krishna Ponnappan




See also: 


Autonomous and Semiautonomous Systems; Ishiguro, Hiroshi.


References & Further Reading:


Engineered Arts. 2017. “RoboThespian.” Engineered Arts Limited. www.engineeredarts.co.uk.

Hickey, Shane. 2014. “RoboThespian: The First Commercial Robot That Behaves Like a Person.” The Guardian, August 17, 2014. www.theguardian.com/technology/2014/aug/17/robothespian-engineered-arts-robot-human-behaviour.





Artificial Intelligence - Who Is Rudy Rucker?

 




Rudolf von Bitter Rucker (1946–) is an American novelist, mathematician, and computer scientist who is the great-great-great-grandson of philosopher Georg Wilhelm Friedrich Hegel (1770–1831).

Rucker is best known for his sarcastic, mathematics-heavy science fiction, though he has written in a variety of fictional and nonfictional genres.

His Ware tetralogy (1982–2000) is regarded as one of the cyberpunk literary movement's fundamental works.

Rucker graduated from Rutgers University with a Ph.D. in mathematics in 1973.

He shifted from teaching mathematics in colleges in the US and Germany to teaching computer science at San José State University, where he ultimately became a professor before retiring in 2004.

Rucker has forty publications to his credit, including science fiction novels, short story collections, and nonfiction works.

His nonfiction works span the disciplines of mathematics, cognitive science, philosophy, and computer science, with topics such as the fourth dimension and the meaning of computation among them.

The popular mathematics book Infinity and the Mind: The Science and Philosophy of the Infinite (1982), which he wrote, is still in print at Princeton University Press.

Rucker established himself in the cyberpunk genre with the Ware series (Software 1982, Wetware 1988, Freeware 1997, and Realware 2000).

Software received the inaugural Philip K. Dick Award; the famous American science fiction prize has been handed out every year since Dick's death in 1982.

Wetware was also awarded this prize in 1988, in a tie with Paul J. McAuley's Four Hundred Billion Stars.

The Ware Tetralogy, which Rucker has made accessible for free online as an e-book under a Creative Commons license, was reprinted in 2010 as a single volume.

Cobb Anderson, a retired roboticist who has fallen from favor for creating sentient robots with free agency, known as boppers, is the protagonist of the Ware series.

The boppers want to reward him by giving him immortality via mind uploading; unfortunately, this procedure requires the full annihilation of Cobb's brain, which the boppers do not consider necessary hardware.

In Wetware, a bopper named Berenice wants to impregnate Cobb's niece in order to produce a human-machine hybrid.

Humanity retaliates by unleashing a mold that kills boppers, but this chipmould thrives on the cladding that covers the boppers' exteriors, resulting in the creation of an organic-machine hybrid in the end.

Freeware is based on these lifeforms, which are now known as moldies and are generally detested by biological people.

This story also includes extraterrestrial intelligences, who in Realware provide superior technology and the power to change reality to different types of human and artificial entities.

The book Postsingular, published in 2007, was the first of Rucker's works to be distributed under a Creative Commons license.

The book, set in San Francisco, addresses the emergence of nanotechnology, first in a dystopian and later in a utopian scenario.

In the first section, a rogue engineer creates nants, which convert Earth into a virtual replica of itself, destroying the planet in the process, until a youngster is able to reverse their programming.

The narrative then goes on to depict orphids, a new kind of nanotechnology that allows people to become cognitively enhanced, hyperintelligent creatures.

Although the Ware tetralogy and Postsingular have been classified as cyberpunk books, Rucker's literature has been seen as difficult to label, since it combines hard science with humor, graphic sex, and constant drug use.

"Happily, Rucker himself has established a phrase to capture his unusual mix of commonplace reality and outraeous fantasy: transrealism," writes science fiction historian Rob Latham (Latham 2005, 4).

"Transrealism is not so much a form of SF as it is a sort of avant-garde literature," Rucker writes in "A Transrealist Manifesto," published in 1983.  (Rucker 1983, 7).


"This means writing SF about yourself, your friends, and your local surroundings, transmuted in some science-fictional fashion," he noted in a 2002 interview. Using actual life as a basis lends your writing a literary quality and keeps you from using clichés" (Brunsdale 2002, 48).


Rucker worked on the short story collection Transreal Cyberpunk with cyberpunk author Bruce Sterling, which was released in 2016.

Rucker chose to publish his book Nested Scrolls after suffering a brain hemorrhage in 2008.

It won the Emperor Norton Award for "amazing innovation and originality unconstrained by the constraints of petty reason" when it was published in 2011.

Million Mile Road Trip (2019), a science fiction book about a group of human and nonhuman characters on an intergalactic road trip, is his most recent work.


~ Jai Krishna Ponnappan




See also: 


Digital Immortality; Nonhuman Rights and Personhood; Robot Ethic.


References & Further Reading:


Brunsdale, Mitzi. 2002. “PW talks with Rudy Rucker.” Publishers Weekly 249, no. 17 (April 29): 48. https://archive.publishersweekly.com/?a=d&d=BG20020429.1.82&srpos=1&e=-------en-20--1--txt-txIN%7ctxRV-%22PW+talks+with+Rudy+Rucker%22---------1.

Latham, Rob. 2005. “Long Live Gonzo: An Introduction to Rudy Rucker.” Journal of the Fantastic in the Arts 16, no. 1 (Spring): 3–5.

Rucker, Rudy. 1983. “A Transrealist Manifesto.” The Bulletin of the Science Fiction Writers of America 82 (Winter): 7–8.

Rucker, Rudy. 2007. “Postsingular.” https://manybooks.net/titles/ruckerrother07postsingular.html.

Rucker, Rudy. 2010. The Ware Tetralogy. Gaithersburg, MD: Prime Books.




Artificial Intelligence - Who Was Raj Reddy Or Dabbala Rajagopal "Raj" Reddy?

 


 


Dabbala Rajagopal "Raj" Reddy (1937–) is an Indian American who has made important contributions to artificial intelligence and has won the Turing Award.

He holds the Moza Bint Nasser Chair as University Professor of Computer Science and Robotics at Carnegie Mellon University's School of Computer Science.

He worked on the faculties of Stanford and Carnegie Mellon universities, two of the world's leading colleges for artificial intelligence research.

In the United States and in India, he has received honors for his contributions to artificial intelligence.

In 2001, the Indian government bestowed upon him the Padma Bhushan Award (the third highest civilian honor).

In 1984, he was also given the Legion of Honor, France's highest honor, which was created in 1802 by Napoleon Bonaparte himself.

In 1958, Reddy obtained his bachelor's degree from the University of Madras' Guindy Engineering College, and in 1960, he received his master's degree from the University of New South Wales in Australia.

He then moved to the United States and earned his doctorate in computer science at Stanford University in 1966.

He was the first in his family to get a university degree, which is typical of many rural Indian households.

After working in industry as an Applied Science Representative at IBM Australia from 1960 to 1963, he entered academia in 1966, joining the faculty of Stanford University as an Assistant Professor of Computer Science, where he stayed until 1969.

He began working at Carnegie Mellon as an Associate Professor of Computer Science in 1969 and remains on its faculty as of 2020.

He rose up the ranks at Carnegie Mellon, eventually becoming a full professor in 1973 and a university professor in 1984.

In 1991, he was appointed as the head of the School of Computer Science, a post he held until 1999.

Many schools and institutions were founded as a result of Reddy's efforts.

In 1979, he launched the Robotics Institute and served as its first director, a position he held until 1999.

He was a driving force behind the establishment of the Language Technologies Institute, the Human Computer Interaction Institute, the Center for Automated Learning and Discovery (now the Machine Learning Department), and the Institute for Software Research at CMU during his stint as dean.

From 1999 to 2001, Reddy was a cochair of the President's Information Technology Advisory Committee (PITAC).

The President's Council of Advisors on Science and Technology (PCAST) took over PITAC in 2005.

Reddy was the president of the American Association for Artificial Intelligence (AAAI) from 1987 to 1989.

The AAAI has been renamed the Association for the Advancement of Artificial Intelligence, recognizing the worldwide character of the research community, which began with pioneers like Reddy.

The former logo, acronym (AAAI), and purpose have been retained.

Artificial intelligence, or the study of giving intelligence to computers, was the subject of Reddy's research.

He worked on voice control for robots, speech recognition without relying on the speaker, and unlimited vocabulary dictation, which allowed for continuous speech dictation.

Reddy and his collaborators have made significant contributions to computer analysis of natural sceneries, job oriented computer architectures, universal access to information (a project supported by UNESCO), and autonomous robotic systems.

Reddy collaborated with his coworkers on the Hearsay II, Dragon, Harpy, and Sphinx I/II systems.

The blackboard model, one of the fundamental concepts that sprang from this study, has been extensively implemented in many fields of AI.

Reddy was also interested in employing technology for the sake of society, and he worked as the Chief Scientist at the Centre Mondial Informatique et Ressource Humaine in France.

He aided the Indian government in the establishment of the Rajiv Gandhi University of Knowledge Technologies, which focuses on low-income rural youth.

He is a member of the governing council of the International Institute of Information Technology (IIIT), Hyderabad.

IIIT is a non-profit public-private partnership (N-PPP) that focuses on technological research and applied research.

He was on the board of directors of the Emergency Management and Research Institute, a nonprofit public-private partnership that offers public emergency medical services.

EMRI has also aided in the emergency management of its neighboring nation, Sri Lanka.

In addition, he was a member of the Health Care Management Research Institute (HMRI).

HMRI provides non-emergency health-care consultation to rural populations, particularly in Andhra Pradesh, India.

In 1994, Reddy and Edward A. Feigenbaum shared the Turing Award, the top honor in computer science, and Reddy became the first person of Indian/Asian descent to receive the award.

He received the IBM Research Ralph Gomory Fellow Award in 1991, the Okawa Foundation's Okawa Prize in 2004, the Honda Foundation's Honda Prize in 2005, and the Vannevar Bush Award from the United States National Science Board in 2006.

Reddy has received fellowships from the Institute of Electronic and Electrical Engineers (IEEE), the Acoustical Society of America, and the American Association for Artificial Intelligence, among other prestigious organizations.


~ Jai Krishna Ponnappan




See also: 


Autonomous and Semiautonomous Systems; Natural Language Processing and Speech Understanding.


References & Further Reading:


Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.” AI Magazine 9, no. 4 (Winter): 9–21.

Reddy, Raj. 1996. “To Dream the Possible Dream.” Communications of the ACM 39, no. 5 (May): 105–12.






Artificial Intelligence - AI Product Liability.

 



Product liability is a legal framework that holds the seller, manufacturer, distributor, and others in the distribution chain liable for damage caused by their goods to customers.

Victims are entitled to financial compensation from the accountable corporation.

The basic purpose of product liability legislation is to promote societal safety by discouraging wrongdoers from developing and distributing unsafe items to the general public.

Users and third-party bystanders may also sue if certain conditions are satisfied, such as foreseeability of the harm.

Because product liability is governed by state law rather than federal law in the United States, the applicable legislation in each case may change depending on the location of the harm.

In the past, victims had to establish that the firm responsible was negligent, which meant that its acts did not reach the acceptable level of care, in order to prevail in court and be reimbursed for their injuries.



Four components must be shown in order to establish negligence.


  • First, the corporation must owe the customer a legal duty of care.
  • Second, that duty was breached, meaning the producer failed to meet the required standard of care.
  • Third, the breach of duty caused the injury, meaning the manufacturer's actions led to the damage.
  • Finally, the victims must have suffered actual damages.



One approach to get compensated for product injury is to show that the corporation was negligent.



Product liability lawsuits may also be established by demonstrating that the corporation failed to uphold its guarantees to customers about the product's quality and dependability.


Express warranties may specify how long the product is covered by the warranty, as well as which components of the product are covered and which are not.

Implied guarantees that apply to all items include promises that the product will function as advertised and for the purpose for which the customer acquired it.

In the great majority of product liability cases, the courts will apply strict liability, which means that the corporation will be held accountable regardless of fault if the standards are satisfied.

This is because the courts have determined that customers would have a tough time proving the firm was negligent, since the company has greater expertise and resources.

Instead of proving that a duty was breached, consumers must show that the product contained an unreasonably dangerous defect; the defect caused the injury while the product was being used for its intended purpose; and the product was not substantially altered from the condition in which it was sold to consumers.


Design flaws, manufacturing flaws, and marketing flaws, sometimes known as failure to warn, are the three categories of defects that may be claimed for product responsibility.


When there are defects in the design of the product itself at the planning stage, this is referred to as a design defect.

If there was a foreseeable danger that the product might cause harm when used by customers when it was being created, the corporation would be liable.


When there are issues throughout the production process, such as the use of low-quality materials or shoddy craftsmanship, it is referred to as a manufacturing fault.


The final product falls short of the design's otherwise acceptable quality.

Failure-to-warn defects occur when a product involves an inherent hazard, regardless of how well it was designed or made, yet the corporation failed to provide customers with warnings that the product may be harmful.

While product liability law was created to cope with the advent of more complicated technologies that may cause consumer damage, it's unclear if the present legislation can apply to AI or whether it has to be updated to completely safeguard consumers.




When it comes to AI, there are various areas where the law will need to be clarified or changed.


Product liability requires the presence of a product, and it is not always apparent whether software or an algorithm is a product or a service.


Product liability law would apply if they were classed as such.

When it comes to services, consumers must depend on typical negligence claims.

Consumers' capacity to sue a manufacturer under product liability will be determined by the specific AI technology that caused the injury and what the court concludes in each case.

When AI technology is able to learn and behave independently of its initial programming, new problems arise.

Because the AI's behaviors may not have been predictable in certain situations, it's unclear if a damage can still be linked to the product's design or production.

Furthermore, since AI depends on probability-based predictions and will, at some time, make a decision that causes harm even if it is the optimal course of action, it may not be fair for the maker to bear the risk when the AI is likely to produce harm by design.



In response to these difficult concerns, some commentators have recommended that AI be held to a different legal standard than conventional goods, such as strict responsibility.


They propose, for example, that medical AI technology be regarded as if it were a reasonable human doctor or medical student, and that autonomous automobiles be treated as if they were a reasonable human driver.

Artificial intelligence products would still be liable for customer harm, but the threshold they would have to reach would be that of a reasonable person in the identical circumstance.

The AI would be held accountable for the injuries only if a human in the identical scenario would have been able to avoid inflicting the damage.

This raises the issue of whether the designers or manufacturers would be held vicariously accountable since they had the right, capacity, and obligation to govern the AI, or if the AI would be considered a legal person responsible for paying the victims on its own.



As AI technology advances, it will become more difficult to distinguish between traditional and more sophisticated products.

However, because there are currently no alternatives in the law, product liability will continue to be the legal framework for determining who is responsible and under what circumstances consumers must be financially compensated when AI causes injuries.



~ Jai Krishna Ponnappan




See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems; Calo, Ryan; Driverless Vehicles and Liability; Trolley Problem.



References & Further Reading:



Kaye, Timothy S. 2015. ABA Fundamentals: Products Liability Law. Chicago: American Bar Association.

Owen, David. 2014. Products Liability in a Nutshell. St. Paul, MN: West Academic Publishing.

Turner, Jacob. 2018. Robot Rules: Regulating Artificial Intelligence. Cham, Switzerland: Palgrave Macmillan.

Weaver, John Frank. 2013. Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws. Santa Barbara, CA: Praeger.






Artificial Intelligence - Predictive Policing.

 





Predictive policing is a term that refers to proactive police techniques that are based on software program projections, particularly on high-risk areas and periods.

Since the late 2000s, these tactics have been progressively used in the United States and in a number of other nations throughout the globe.

Predictive policing has sparked heated debates about its legality and effectiveness.

Deterrence work in policing has always depended on some type of prediction.





Furthermore, from its inception in the late 1800s, criminology has included the study of trends in criminal behavior and the prediction of at-risk persons.

As early as the late 1920s, predictions were used in the criminal justice system.

Since the 1970s, an increased focus on geographical components of crime research, particularly spatial and environmental characteristics (such as street lighting and weather), has helped to establish crime mapping as a useful police tool.





Since the 1980s, proactive policing techniques have progressively used "hot-spot policing," which focuses police resources (particularly patrols) in regions where crime is most prevalent.

Predictive policing is sometimes misunderstood to mean that it prevents crime before it happens, as in the science fiction film Minority Report (2002).

Unlike conventional crime analysis approaches, these tactics depend on predictive modeling algorithms powered by software programs that statistically analyze police data and/or apply machine-learning algorithms.





Perry et al. (2013) identified three sorts of projections that they can make: 

(1) locations and times when crime is more likely to occur; 

(2) persons who are more likely to conduct crimes; and 

(3) the names of offenders and victims of crimes.


"Predictive policing," on the other hand, generally relates mainly to the first and second categories of predictions.






Two forms of modeling are available in predictive policing software tools.

The geospatial ones show when and where crimes are likely to occur (in which area or even block), and they lead to the mapping of crime "hot spots." Individual-based modeling is the second form of modeling.

Programs that provide this sort of modeling use variables such as age, criminal history, and gang involvement to estimate the chance of a person being engaged in criminal activity, particularly violent activity.

These forecasts are often made in conjunction with the adoption of proactive police measures (Ridgeway 2013).

Police patrols and restrictions in crime "hot areas" are naturally included in geospatial modeling.

Individuals having a high risk of becoming involved in criminal behavior are placed under observation or reported to the authorities in the case of individual-based modeling.
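As a toy illustration of the geospatial variant only (a sketch of my own with made-up coordinates, not PredPol's or any other vendor's actual model), historical incident locations can be smoothed with a kernel density estimate and the highest-density grid cells flagged as "hot spots":

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical historical incident coordinates (x, y) on a 10 x 10 city grid.
incidents = np.array([[2.1, 3.0], [2.3, 2.8], [2.0, 3.2],
                      [7.5, 7.9], [7.7, 8.1]]).T           # shape (2, n_points)

kde = gaussian_kde(incidents)                               # smooth the point pattern
xs, ys = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
cells = np.vstack([xs.ravel(), ys.ravel()])
risk = kde(cells)                                           # one density score per grid cell

for i in np.argsort(risk)[-3:]:                             # three highest-scoring cells
    print(f"flag patrol near ({cells[0, i]:.1f}, {cells[1, i]:.1f})")
```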

Since the late 2000s, police agencies have been progressively using software tools from technology businesses that assist them create projections and implement predictive policing methods.

With the deployment of PredPol in 2011, the Santa Cruz Police Department became the first in the United States to employ such a strategy.





This software tool, which was inspired by earthquake aftershock prediction techniques, offers daily (and occasionally hourly) maps of "hot zones." It was first restricted to property offenses, but it was subsequently expanded to encompass violent crimes.

More than sixty police agencies throughout the United States already employ PredPol.

In 2012, the New Orleans Police Department was one of the first to employ Palantir to perform predictive policing.

Since then, many more software programs have been created, including CrimeScan, which analyzes seasonal and weekday trends in addition to crime statistics, and Hunchlab, which employs machine learning techniques and adds weather patterns.

Some police agencies utilize software tools that enable individual-based modeling in addition to geographic modeling.

The Chicago Police Department, for example, has relied on the Strategic Subject List (SSL) since 2013, which is generated by an algorithm that assesses the likelihood of persons being engaged in a shooting as either perpetrators or victims.

Individuals with the highest risk ratings are referred to the police for preventative action.




Predictive policing has been used in countries other than the United States.


PredPol was originally used in the United Kingdom in the early 2010s, and the Crime Anticipation System, which was first utilized in Amsterdam, was made accessible to all Dutch police departments in May 2017.

Several concerns have been raised about the accuracy of predictions produced by software algorithms employed in predictive policing.

Some argue that software systems are more objective than human crime data analysts and can anticipate where crime will occur more accurately.

Predictive policing, from this viewpoint, may lead to a more efficient allocation of police resources (particularly police patrols) and is cost-effective, especially when software is used instead of paying human crime data analysts.

On the contrary, opponents argue that software program forecasts embed systemic biases since they depend on police data that is itself heavily skewed due to two sorts of faults.

To begin with, crime records reflect law enforcement activity rather than criminal activity itself.

Arrests for marijuana possession, for example, provide information on the communities and people targeted by police in their anti-drug efforts.

Second, not all victims report crimes to the police, and not all crimes are documented in the same way.

Sexual crimes, child abuse, and domestic violence, for example, are generally underreported, and U.S. citizens are more likely than non-U.S. citizens to report a crime.

For all of these reasons, some argue that predictions produced by predictive police software algorithms may merely tend to repeat prior policing behaviors, resulting in a feedback loop: In areas where the programs foresee greater criminal activity, policing may be more active, resulting in more arrests.

To put it another way, predictive police software tools may be better at predicting future policing than future criminal activity.

Furthermore, others argue that predictive police forecasts are racially prejudiced, given how historical policing has been far from colorblind.

Furthermore, since race and location of residency in the United States are intimately linked, the use of predictive policing may increase racial prejudices against nonwhite communities.

However, evaluating the effectiveness of predictive policing is difficult since it creates a number of methodological difficulties.

In fact, there is no statistical proof that it has a more beneficial impact on public safety than previous or other police approaches.

Finally, others argue that predictive policing is unsuccessful at decreasing crime since police patrols just dispense with criminal activity.

Predictive policing has sparked several debates.

The constitutionality of predictive policing's implicit preemptive action, for example, has been questioned, since the hot-spot policing that commonly comes with it may include stop-and-frisks or unjustified stopping, searching, and questioning of persons.

Predictive policing raises ethical concerns about how it may infringe on civil freedoms, particularly the legal notion of presumption of innocence.

In reality, those on lists like the SSL should be allowed to protest their inclusion.

Furthermore, police agencies' lack of openness about how they use their data has been attacked, as has software firms' lack of transparency surrounding their algorithms and predictive models.

Because of this lack of openness, individuals are oblivious to why they are on lists like the SSL or why their area is often monitored.

Members of civil rights groups are becoming more concerned about the use of predictive policing technologies.

Predictive Policing Today: A Shared Statement of Civil Rights Concerns was published in 2016 by a coalition of seventeen organizations, highlighting the technology's racial biases, lack of transparency, and other serious flaws that lead to injustice, particularly for people of color and nonwhite neighborhoods.

In June 2017, four journalists sued the Chicago Police Department under the Freedom of Information Act, demanding that the department release all information on the algorithm used to create the SSL.

While police departments are increasingly implementing software programs that predict crime, their use may decline in the future due to their mixed results in terms of public safety.

In 2018, two police agencies, in Kent in the United Kingdom and in New Orleans, Louisiana, terminated their contracts with predictive policing software companies.



~ Jai Krishna Ponnappan











