
Artificial Intelligence - What Is The Liability Of Self-Driving Vehicles?

 



Driverless cars may function completely or partly without the assistance of a human driver.

Driverless automobiles, like other AI products, confront difficulties with liability, responsibility, data protection, and customer privacy.

Driverless cars have the potential to eliminate human carelessness while also providing safe transportation for passengers.

Despite this potential, they have been involved in accidents.

In a well-publicized 2016 accident, the Autopilot software on a Tesla Model S may have failed to detect a semi-trailer crossing the highway.

Tesla's Autopilot may also have been involved in the death of a 49-year-old woman in 2018.

A class action lawsuit was filed against Tesla as a result of these incidents, which the company settled out of court.

Additional worries about autonomous cars have arisen as a result of bias and racial prejudice in machine vision and face recognition.

Current driverless cars may be better at spotting people with lighter skin, according to Georgia Institute of Technology researchers.

Product liability provides some much-needed solutions to such problems.

The Consumer Protection Act 1987 (CPA) governs product liability claims in the United Kingdom.

This act implements the European Union (EU) Product Liability Directive 85/374/EEC, which holds manufacturers liable for defective products, i.e., products that are not as safe as they should be when purchased.

This contrasts with U.S. law addressing product liability, which is fragmented and largely controlled by common law and a succession of state acts.

The Uniform Commercial Code (UCC) offers remedies where a product fails to conform to express warranties, is not merchantable, or is unfit for its particular purpose.

In general, manufacturers are held accountable for injuries caused by their faulty goods, and this responsibility may be handled in terms of negligence or strict liability.

A defect in this situation could be a manufacturing defect, where the driverless vehicle does not satisfy the manufacturer’s specifications and standards; a design defect, where an alternative design would have prevented an accident; or a warning defect, where there is a failure to provide adequate warning about a driverless car’s operation.

To evaluate product liability, the six levels of automation specified by SAE International (the Society of Automotive Engineers) should be taken into account: Level 0, full control of the vehicle by a human driver; Level 1, a human driver assisted by an automated system; Level 2, an automated system partially conducting the driving while a human driver monitors the environment and performs most of the driving; Level 3, an automated system doing the driving and monitoring the environment, with the human driver taking back control when signaled; Level 4, a driverless vehicle that conducts the driving and monitors the environment but is restricted to certain environments; and Level 5, a driverless vehicle that, without any restrictions, does everything a human driver would.
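For readers who prefer a compact summary, the SAE taxonomy and the liability emphasis discussed in this entry can be sketched in code. This is an illustrative sketch only: the enum, its member names, and the liability_emphasis helper are hypothetical constructs paraphrasing the text above, not part of any standard or library.

from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased from the description above."""
    NO_AUTOMATION = 0           # human driver performs all driving tasks
    DRIVER_ASSISTANCE = 1       # automated system assists the human driver
    PARTIAL_AUTOMATION = 2      # system drives in part; human monitors and does most driving
    CONDITIONAL_AUTOMATION = 3  # system drives and monitors; human takes over when signaled
    HIGH_AUTOMATION = 4         # system drives and monitors, but only in restricted environments
    FULL_AUTOMATION = 5         # system does everything a human driver would, without restriction

def liability_emphasis(level: SAELevel) -> str:
    # Rough mapping of automation level to the liability emphasis discussed in this entry.
    if level <= SAELevel.CONDITIONAL_AUTONOMY if False else level <= SAELevel.CONDITIONAL_AUTOMATION:
        return "shared: product liability plus the driver's or controller's duty of care"
    return "manufacturer: defective-product liability applies in full"

print(liability_emphasis(SAELevel.PARTIAL_AUTOMATION))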

At Levels 1–3, which involve human-machine interaction, the manufacturer will be liable under product liability if it is shown that the driverless vehicle failed to communicate or send a signal to the human driver or that the autopilot software did not work.

At Levels 4 and 5, liability for defective products applies in full.

Manufacturers have a duty of care to ensure that any driverless vehicle they manufacture is safe when used in any foreseeable manner.

Failure to exercise this duty will make them liable for negligence.

In some other cases, even when manufacturers have exercised all reasonable care, they will still be liable for unintended defects as per the strict liability principle.

The liability for the driver, especially in Levels 1–3, could be based on tort principles, too.

The requirement of Article 8 of the 1949 Geneva Convention on Road Traffic, which states that “[e]very vehicle or combination of vehicles proceeding as a unit shall have a driver,” may not be fulfilled in cases where a vehicle is fully automated.

In some U.S. states, namely Nevada and Florida, the word driver has been replaced with controller, meaning any person who causes the autonomous technology to engage; the person need not be present in the vehicle.

A driver or controller becomes liable if it is proved that they failed to exercise reasonable care or were negligent in observing this duty.

In certain other cases, victims will only be reimbursed by their own insurance companies under no-fault liability.

Victims may also base their claims for damages on the strict liability principle without having to present proof of the driver’s fault.

In this situation, the driver or controller may demand that the manufacturer be joined in a lawsuit for damages if they believe the accident was the consequence of a product defect.

In any case, proof of the driver's or controller's negligence will reduce the manufacturer's liability.

Third parties may sue manufacturers directly for injuries caused by faulty items under product liability.

In MacPherson v. Buick Motor Co. (1916), the court found that an automobile manufacturer's liability for a defective product extends beyond the initial purchaser, even where there is no privity of contract between the victim and the maker.

The question of product liability for self-driving vehicles is complex.

The transition from manual to smart automated control transfers responsibility from the driver to the manufacturer.

The complexity of driving modes, as well as the interaction between the human operator and the artificial agent, is one of the primary challenges concerning accident responsibility.

In the United States, the law of motor vehicle product liability relating to flaws in self-driving cars is still in its infancy.

While the Department of Transportation and, especially, the National Highway Traffic Safety Administration provide some basic recommendations on automation in driverless vehicles, Congress has yet to enact self-driving car legislation.

In the United Kingdom, the Automated and Electric Vehicles Act 2018 makes insurers liable by default for accidents involving automated vehicles that result in death, bodily injury, or property damage, provided the vehicles were in self-driving mode and insured at the time of the accident.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Accidents and Risk Assessment; Product Liability and AI; Trolley Problem.


Further Reading:


Geistfeld, Mark A. 2017. “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation.” California Law Review 105: 1611–94.

Hevelke, Alexander, and Julian Nida-Rümelin. 2015. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21, no. 3 (June): 619–30.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “Towards a Legal Definition of Machine Intelligence: The Argument for Artificial Personhood in the Age of Deep Learning.” In ICAIL ’17: Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, edited by Jeroen Keppens and Guido Governatori, 119–28. New York: Association for Computing Machinery.

Luetge, Christoph. 2017. “The German Ethics Code for Automated and Connected Driving.” Philosophy & Technology 30 (September): 547–58.

Rabin, Robert L., and Kenneth S. Abraham. 2019. “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era.” Virginia Law Review 105, no. 1 (March): 127–71.

Wilson, Benjamin, Judy Hoffman, and Jamie Morgenstern. 2019. “Predictive Inequity in Object Detection.” https://arxiv.org/abs/1902.11097.




Artificial Intelligence - How Do Autonomous Vehicles Leverage AI?




Using a virtual driver system, driverless automobiles and trucks, also known as self-driving or autonomous vehicles, are capable of moving through settings with little or no human control.

A virtual driver system is a set of characteristics and capabilities that augment or replicate the actions of an absent driver to the point that, at the maximum degree of autonomy, the driver may not even be present.

Diverse technology applications, limiting circumstances, and categorization methods make it difficult to reach agreement on what defines a driverless car.

A semiautonomous system, in general, is one in which the human performs certain driving functions (such as lane keeping) while others are performed autonomously (such as acceleration and deceleration).

In a conditionally autonomous system, all driving activities are autonomous only under certain circumstances.

All driving duties are automated in a fully autonomous system.

Automobile manufacturers, technology businesses, automotive suppliers, and universities are all testing and developing applications.

Each builder's car or system, as well as the technical road that led to it, demonstrates a diverse range of technological answers to the challenge of developing a virtual driving system.

Ambiguities exist at the level of defining circumstances, so that the same technological system may be characterized in a variety of ways depending on factors such as location, speed, weather, traffic density, human attention, and infrastructure.

More complexity is generated when individual driving duties are operationalized for feature development and when context (such as connected vehicles, smart cities, and the regulatory environment) plays a role in developing solutions.

Because of this complication, producing driverless cars often necessitates collaboration across several roles and disciplines of study, such as hardware and software engineering, ergonomics, user experience, legal and regulatory, city planning, and ethics.

The development of self-driving automobiles is both a technical and a socio-cultural enterprise.

The distribution of mobility tasks across an array of equipment to collectively perform a variety of activities, such as assessing driver intent, sensing the environment, distinguishing objects, mapping and wayfinding, and safety management, is among the technical problems of engineering a virtual driver system.

LIDAR, radar, computer vision, global positioning, odometry, and sonar are among the hardware and software components of a virtual driving system.

There are many approaches to solving discrete autonomous movement problems.

With cameras, maps, and sensors, sensing and processing can be centralized in the vehicle, or it can be distributed throughout the environment and across other vehicles, as with intelligent infrastructure and V2X (vehicle to everything) capability.
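As a toy illustration of the centralized approach described above, the sketch below fuses obstacle-distance estimates from on-board sensors with a reading supplied over a hypothetical V2X feed by acting on the most conservative value. The class, field, and function names are invented for the example and do not correspond to any real vehicle API.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class SensorReading:
    source: str                 # e.g., "lidar", "radar", "camera", or "v2x"
    obstacle_distance_m: float  # estimated distance to the nearest obstacle

def fused_obstacle_distance(readings: Iterable[SensorReading]) -> float:
    # Centralized fusion: act on the most conservative (closest) estimate.
    return min(r.obstacle_distance_m for r in readings)

readings = [
    SensorReading("lidar", 42.0),
    SensorReading("radar", 40.5),
    SensorReading("v2x", 35.0),  # a roadside unit reports a closer obstacle
]
print(fused_obstacle_distance(readings))  # -> 35.0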

The burden and scope of this processing, and the scale of the problems to be solved, are closely related to the expected level of human attention and intervention. As a result, the most frequently referenced classification of driverless capability, published by the Society of Automotive Engineers, is formally structured along the lines of human attentional demands and intervention requirements and has been widely adopted.

This classification defines six levels of driving automation, ranging from Level 0 to Level 5.

Level 0 refers to no automation, which means the human driver is solely responsible for longitudinal control (acceleration and deceleration) and lateral control (steering).

On Level 0, the human driver is in charge of keeping an eye on the environment and reacting to any unexpected safety hazards.

Automated systems that take over either longitudinal or lateral control are classified as Level 1, or driver assistance.

The driver is in charge of observation and intervention.

Level 2 denotes partial automation, in which the virtual driver system is in charge of both lateral and longitudinal control.

The human driver is deemed to be in the loop, which means that they are in charge of monitoring the environment and acting in the event of a safety-related emergency.

Commercially available systems have not yet exceeded Level 2 capability.

The monitoring capability of the virtual driving system distinguishes Level 3, conditional autonomy, from Level 2.

At this stage, the human driver may disengage from monitoring the surroundings and rely on the autonomous system to keep track of them.

The person is required to react to calls for assistance in a range of situations, such as during severe weather or in construction zones.

A navigation system (e.g., GPS) is not required at this level.

To operate at Level 2 or Level 3, a vehicle does not need a map or a specific destination.

A human driver is not needed to react to a request for intervention at Level 4, often known as high automation.

The virtual driving system is in charge of navigation, locomotion, and monitoring.

When a specific condition cannot be satisfied, such as when a navigation destination is obstructed, it may request that a driver intervene.

If the human driver does not choose to intervene, the system may safely stop or redirect, depending on the engineering approach.

The classification of this situation is based on standards of safe driving, which are established not only by technical competence and environmental circumstances, but also by legal and regulatory agreements and lawsuit tolerance.

Level 5 autonomy, often known as complete automation, refers to a vehicle that is capable of doing all driving activities in any situation that a human driver could handle.

Although Level 4 and Level 5 systems do not need the presence of a person, they still necessitate substantial technological and social cooperation.
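The division of duties across the levels just described can be restated as a small lookup, which makes the monitoring and intervention boundaries explicit. This is a paraphrase of the text above, not an SAE artifact; the function and the dictionary keys are hypothetical.

def human_duties(level: int) -> dict:
    # Who monitors the environment and who handles fallback, per the level descriptions above.
    if not 0 <= level <= 5:
        raise ValueError("SAE levels run from 0 to 5")
    return {
        "human_monitors_environment": level <= 2,       # Levels 0-2: human watches the road
        "human_must_take_over_on_request": level == 3,  # Level 3: human responds when signaled
        "system_handles_fallback": level >= 4,          # Levels 4-5: system can stop or reroute itself
    }

print(human_duties(3))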

Modern efforts to construct autonomous vehicles date back to the 1920s, although Leonardo da Vinci is credited with the earliest concept of a self-propelled cart.

In his 1939 New York World's Fair Futurama display, Norman Bel Geddes envisaged a smart metropolis of the future inhabited by self-driving automobiles.

Automobiles, according to Bel Geddes, would be outfitted with "technology that would rectify the mistakes of human drivers" by 1960.

General Motors popularized the concept of smart infrastructure in the 1950s by building an "automated highway" with steering-assist circuits.

In 1960, the company tested a working prototype car, but owing to the high cost of infrastructure, it quickly shifted its focus from smart cities to smart autos.

A team led by Sadayuki Tsugawa of Tsukuba Mechanical Engineering Laboratory in Japan created an early prototype of an autonomous car.

Their 1977 vehicle operated under predefined environmental conditions dictated by lateral guiding rails.

The vehicle used cameras to track the rails, and most of the processing equipment was on board.

In the 1980s, the pan-European research initiative EUREKA pooled money and expertise to advance the state of the art in cameras and processing for autonomous cars.

Simultaneously, Carnegie Mellon University in Pittsburgh, Pennsylvania, pooled its resources for research on autonomous navigation utilizing GPS data.

Since then, automakers including General Motors, Tesla, and Ford Motor Company, as well as technology firms like ARGO AI and Waymo, have been working on autonomous cars or critical components.

The technology is becoming less dependent on very limited circumstances and more adaptable to real-world scenarios.

Manufacturers are currently producing Level 4 autonomous test cars, and tests are being undertaken in real-world traffic and weather conditions.

Commercially accessible Level 4 self-driving cars are still a long way off.

There are supporters and opponents of autonomous driving.

Supporters point to a number of benefits that address social problems, environmental concerns, efficiency, and safety.

The provision of mobility services and a degree of autonomy to those who do not already have access, such as those with disabilities (e.g., blindness or motor function impairment) or those who are unable to drive, such as the elderly and children, is one such social benefit.

The capacity to improve fuel economy by managing acceleration and braking has environmental benefits.

Because networked cars may go bumper to bumper and are routed according to traffic optimization algorithms, congestion is expected to be reduced.

Finally, self-driving vehicles have the potential to be safer.

They may be able to handle complicated information more quickly and thoroughly than human drivers, resulting in fewer collisions.

Self-driving cars may also have negative repercussions in each of these areas.

In terms of society, driverless cars may limit access to mobility and municipal services.

Autonomous mobility may be heavily regulated, costly, or limited to places that are inaccessible to low-income commuters.

Non-networked or manually operated cars might be kept out of intelligent geo-fenced municipal infrastructure.

Furthermore, if no adult or responsible human party is present during transportation, autonomous automobiles may pose a safety concern for some susceptible passengers, such as children.

Greater convenience may have environmental consequences.

Drivers may sleep or work while driving autonomously, which may have the unintended consequence of extending commutes and worsening traffic congestion.

Another security issue is widespread vehicle hacking, which could bring individual automobiles and trucks, or even a whole city, to a halt. 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems; Autonomy and Complacency; Intelligent Transportation; Trolley Problem.


Further Reading:


Antsaklis, Panos J., Kevin M. Passino, and Shyh J. Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems Magazine 11, no. 4: 5–13.

Bel Geddes, Norman. 1940. Magic Motorways. New York: Random House.

Bimbraw, Keshav. 2015. “Autonomous Cars: Past, Present, and Future—A Review of the Developments in the Last Century, the Present Scenario, and the Expected Future of Autonomous Vehicle Technology.” In ICINCO: 2015—12th International Conference on Informatics in Control, Automation and Robotics, vol. 1, 191–98. Piscataway, NJ: IEEE.

Kröger, Fabian. 2016. “Automated Driving in Its Social, Historical and Cultural Contexts.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 41–68. Berlin: Springer.

Lin, Patrick. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 69–85. Berlin: Springer.

Weber, Marc. 2014. “Where To? A History of Autonomous Vehicles.” Computer History Museum. https://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/.


Artificial Intelligence - Who Is Ryan Calo?

 



Michael Ryan Calo (1977–) is a thought leader in the area of artificial intelligence and robotics' legal and policy ramifications.

Calo was instrumental in establishing a network of legal experts dedicated to robots and AI; he foresaw the harm AI may pose to consumer privacy and autonomy, and he produced an early and widely distributed primer on AI law and policy.

In addition to these and other contributions, Calo has forged methodological and practice innovations for early-stage tech policy work, demonstrating the importance and efficacy of legal scholars working side by side with technologists and designers to anticipate futures and craft meaningful policy responses.

Calo was born and raised in the cities of Syracuse, New York, and Florence, Italy.

His parents got him a great remote-controlled base coupled to an inflatable robot when he was a child, and it was his first interaction with robots.

Calo studied philosophy as an undergraduate at Dartmouth College, where he studied under computer ethics pioneer James Moor, among others.

Calo graduated from the University of Michigan with a law degree in 2005.

After law school, a federal appellate clerkship, and two years in private practice, he became a fellow and subsequently research director at Stanford's Center for Internet and Society (CIS).

Calo was a pioneer in bringing robotics law and policy into the mainstream at Stanford, co-founding the Legal Aspects of Autonomous Driving effort with Sven Beiker at Stanford's Center for Automotive Research (CARS).

Along the way, Calo met Ian Kerr, a Canadian law professor and philosopher of technology, and Michael Froomkin, a cyberlaw pioneer.

The We Robot conference was created by Froomkin, Kerr, and Calo in 2012.

Calo praises Kerr for inspiring him to explore robotics and artificial intelligence as a field of study.

Calo now codirects the University of Washington's Tech Policy Lab, an interdisciplinary research unit that spans computer science, information science, and law.

He and his codirectors Batya Friedman and Tadayoshi Kohno determine the Lab's research and practice agenda in this capacity.

Calo also cofounded the University of Washington Center for an Informed Public, which is dedicated to researching and combating digital and analog disinformation.

Calo has published several articles on the legal and policy implications of robots and artificial intelligence.

Updating the behavioral economic theory of market manipulation in light of artificial intelligence and digital media, advocating for a social systems approach to studying AI's effects, anticipating the privacy harms of robotics and AI, and rigorously examining how the affordances of robotics and AI challenge the American legal system are among his key contributions.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Accidents and Risk Assessment; Product Liability and AI.


Further Reading

Calo, Ryan. 2011. “Peeping Hals.” Artificial Intelligence 175, no. 5–6 (April): 940–41.

Calo, Ryan. 2014. “Digital Market Manipulation.” George Washington Law Review 82, no. 4 (August): 995–1051.

Calo, Ryan. 2015. “Robotics and the Lessons of Cyberlaw.” California Law Review 103, no. 3: 513–63.

Calo, Ryan. 2017. “Artificial Intelligence Policy: A Primer and Roadmap.” University of California, Davis Law Review 51: 399–435.

Crawford, Kate, and Ryan Calo. 2016. “There Is a Blind Spot in AI Research.” Nature 538 (October): 311–13.


Artificial Intelligence - What Is The Asilomar Conference On Beneficial AI?

 


Social concerns about artificial intelligence and the danger it poses to people have most prominently been portrayed through Isaac Asimov's Three Laws of Robotics, which form the backdrop for the Asilomar Conference on Beneficial AI.

"A robot may not damage a human being or, by inactivity, enable a human being to come to harm; A robot must follow human instructions unless such orders would contradict with the First Law; A robot must safeguard its own existence unless such protection would clash with the First or Second Law" (Asimov 1950, 40).

In subsequent books, Asimov added a Fourth Law, or Zeroth Law, often quoted as "A robot may not harm humanity, or, by inaction, allow humanity to come to harm," which is elaborated in Robots and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).

Asimov's Zeroth Law sparked debate on how to judge whether or not something is harmful to mankind.

This was the topic of the 2017 Asilomar Conference on Beneficial AI, which went beyond the Three Laws and the Zeroth Law to propose twenty-three principles to protect mankind in the future of AI.

The conference's sponsor, the Future of Life Institute, has posted the principles on its website and has received 3,814 signatures from AI experts and other multidisciplinary supporters.

There are three basic kinds of principles: research questions, ethics and values, and long-term concerns.

These research guidelines are intended to guarantee that the aims of artificial intelligence continue to be helpful to people.

They're meant to help investors decide where to put their money in AI research.

To achieve useful AI, Asilomar signatories agree that research agendas should encourage and preserve openness and dialogue between AI researchers, policymakers, and developers.

Researchers interested in the development of artificial intelligence systems should work together to prioritize safety.

Proposed principles relating to ethics and values aim to prevent harm and promote direct human control over artificial intelligence systems.

Parties to the Asilomar principles believe that AI should reflect human values such as individual rights, freedoms, and diversity acceptance.

Artificial intelligences, in particular, should respect human liberty and privacy, and should only be used to empower and enrich humanity.

Human social and civic norms must be adhered to by AI.

The Asilomar signatories believe that AI creators should be held accountable for their work.

One aspect that stands out is the likelihood of an autonomous weapons arms race.

Because of the high stakes, the designers of the Asilomar principles incorporated principles that addressed longer-term challenges.

They advised prudence, meticulous planning, and human supervision.

Superintelligences must be produced for the wider welfare of mankind, and not merely to further the aims of one industry or government.

The Asilomar Conference's twenty-three principles have sparked ongoing discussions about the need for beneficial AI and specific safeguards for the future of AI and humanity.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.



Further Reading


Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.

Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.





Artificial Intelligence - Autonomy And Complacency In AI Systems.

 




The concepts of machine autonomy and human autonomy and complacency are intertwined.

Artificial intelligences are undoubtedly getting more independent as they are trained to learn from their own experiences and data intake.

As machines gain more skills, humans tend to become increasingly dependent on them to make judgments and react correctly to unexpected events.

This dependence on AI systems' decision-making processes might lead to a loss of human agency and to complacency.

This complacency may result in a failure to respond to major faults in the AI system or its decision-making processes.

Autonomous machines are ones that can function in unsupervised settings, adapt to new situations and experiences, learn from previous errors, and decide the best potential outcomes in each case without the need for fresh programming input.

To put it another way, these robots learn from their experiences and are capable of going beyond their original programming in certain respects.

The concept is that programmers won't be able to foresee every circumstance that an AI-enabled machine could experience based on its activities, thus it must be able to adapt.

This view is not universally accepted, since some argue that these systems' adaptability is inherent in their programming, as their programs are designed to be adaptable.

The disagreement over whether any agent, including humans, can express free will and act autonomously exacerbates these debates.

With the advancement of technology, the autonomy of AI programs is not the only element of autonomy that is being explored.

Worries have also been raised concerning the effect on human autonomy, as well as concerns about human complacency toward machines.

People's own choices may become irrelevant, since they no longer have to make decisions as AI systems grow increasingly attuned to anticipating their wishes and preferences.

The interaction of human employees and automated systems has gotten a lot of attention.

According to studies, humans are more prone to overlook flaws in these procedures, particularly when they get routinized, which leads to a positive expectation of success rather than a negative expectation of failure.

This expectation of success causes the operators or supervisors of automated processes to place their confidence in inaccurate readouts or machine judgments, which may lead to mistakes and accidents.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems.



Further Reading


André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Frank Douglas, William Goldstein, Joel Huber, Leaf Van Boven, Bernd Weber, and Haiyang Yang. 2018. “Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data.” Customer Needs and Solutions 5, no. 1–2: 28–37.

Bahner, J. Elin, Anke-Dorothea Hüper, and Dietrich Manzey. 2008. “Misuse of Automated Decision Aids: Complacency, Automation Bias, and the Impact of Training Experience.” International Journal of Human-Computer Studies 66, no. 9: 688–99.

Lawless, W. F., Ranjeev Mittu, Donald Sofge, and Stephen Russell, eds. 2017. Autonomy and Intelligence: A Threat or Savior? Cham, Switzerland: Springer.

Parasuraman, Raja, and Dietrich H. Manzey. 2010. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Human Factors 52, no. 3: 381–410.





Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



The most significant feature of many computer-based systems is their reliability.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be designed purposefully hazardous to people, as with Trojan horses, viruses, and spyware, or they can be dangerous due to human programming or operation errors.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first death of a person working with a robot occurred in 1979.

A one-ton parts-retrieval robot built by Litton Industries hit Ford Motor Company engineer Robert Williams in the head.

Two years later, Japanese engineer Kenji Urada was killed after failing to completely switch off a malfunctioning robot on the production floor at Kawasaki Heavy Industries.

Urada was shoved into a grinding machine by the robot's arm.

Accidents do not always result in deaths.

A 300-pound Knightscope K5 security robot on patrol at a retail business center in Northern California, for example, knocked down a kid and ran over his foot in 2016.

The child sustained only a few cuts and some swelling.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) misled the Strategic Air Command into believing that the Soviet Union had fired over 2,000 nuclear missiles towards the US.

An examination revealed that a training scenario had been uploaded to an active defense computer by mistake.

In 1983, a Soviet military early warning system identified a single US intercontinental ballistic missile launching a nuclear assault.

Stanislav Petrov, the missile defense system's operator, correctly discounted the signal as a false alarm.

The cause of this and subsequent false alarms was ultimately discovered to be sunlight reflecting off high-altitude clouds.

Petrov was eventually punished for humiliating his superiors by disclosing faults, despite preventing global thermonuclear Armageddon.

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the U.S. Department of Justice for allegedly manipulating an automated system to issue and then cancel huge numbers of sell orders, allowing his firm to acquire equities at temporarily reduced prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "Thinking about you"), a Microsoft Corporation artificial intelligence social media chatterbot, went tragically wrong in 2016.

Tay was created by Microsoft engineers to imitate a nineteen-year-old American girl and to learn from Twitter discussions.

Instead, Tay was trained to use harsh and aggressive language by internet trolls, which it then repeated in tweets.

After barely sixteen hours, Microsoft deleted Tay's account.

More AI-related accidents in motor vehicle operation may occur in the future.

In 2016, the first fatal collision involving a self-driving car happened when a Tesla Model S in autopilot mode collided with a semi-trailer crossing the highway.

The motorist may have been viewing a Harry Potter movie on a portable DVD player when the accident happened, according to witnesses.

Tesla's software does not yet allow for completely autonomous driving, hence a human operator is required.

Despite these dangers, one management consulting company claims that autonomous automobiles might avert up to 90% of road accidents.

Artificial intelligence security is rapidly growing as a topic of cybersecurity study.

Militaries all around the globe are working on prototypes of dangerous autonomous weapons systems.

Automatic weapons, such as drones, that now rely on a human operator to make deadly force judgments against targets, might be replaced with automated systems that make life and death decisions.

Robotic decision-makers on the battlefield may one day outperform humans in extracting patterns from the fog of war and reacting quickly and logically to novel or challenging circumstances.

High technology is becoming more and more important in modern civilization, yet it is also becoming more fragile and prone to failure.

An inquisitive squirrel caused the NASDAQ's primary computer to crash in 1987, bringing one of the world's major stock exchanges to its knees.

In another example, the ozone hole above Antarctica was not discovered for years because exceptionally low levels reported in data-processed satellite images were assumed to be mistakes.

It's likely that the complexity of autonomous systems, as well as society's reliance on them under quickly changing circumstances, will make completely tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of sophisticated artificial intelligence is that it is based on mathematical approaches and deep learning algorithms that are so complicated that even its creators are baffled as to how it makes accurate conclusions.

Autonomous cars, for example, rely on instructions written exclusively by computers as they observe people driving in real-world situations.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to decrease apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones?

Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions, as well as examine newly developed cognitive computing systems on our behalf.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.



What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...