
Artificial Intelligence - What Is The Liability Of Self-Driving Vehicles?

 



Driverless cars may function completely or partly without the assistance of a human driver.

Driverless automobiles, like other AI products, confront difficulties with liability, responsibility, data protection, and customer privacy.

Driverless cars have the potential to eliminate human carelessness while also providing safe transportation for passengers.

Despite this potential, however, they have been involved in accidents.

In a well-publicized 2016 accident, the Autopilot software on a Tesla may have failed to detect a large truck crossing the highway.

Tesla's Autopilot may also have been involved in the death of a 49-year-old woman in 2018.

These incidents led to a class action lawsuit against Tesla, which the company settled out of court.

Bias and racial prejudice in machine vision and facial recognition have raised additional concerns about autonomous vehicles.

Researchers at the Georgia Institute of Technology have reported that current driverless cars may be better at detecting pedestrians with lighter skin.

Product liability provides some much-needed solutions to such problems.

In the United Kingdom, product liability claims are governed by the Consumer Protection Act 1987 (CPA).

The act implements the European Union's (EU) Product Liability Directive 85/374/EEC, which holds manufacturers liable for defective products, that is, products that are not as safe as they should be when purchased.

This contrasts with U.S. product liability law, which is fragmented and largely governed by common law and a patchwork of state statutes.

The Uniform Commercial Code (UCC) offers remedies where a product fails to conform to express representations, is not merchantable, or is unfit for its particular purpose.

In general, manufacturers are held accountable for injuries caused by their defective products, and this liability may be framed in terms of negligence or strict liability.

A defect in this context may be a manufacturing defect, where the driverless vehicle does not satisfy the manufacturer’s specifications and standards; a design defect, where an alternative design would have prevented an accident; or a warning defect, where there is a failure to provide adequate warning regarding a driverless car’s operation.

To evaluate product liability, the six levels of automation specified by the Society of Automotive Engineers (SAE) International should be taken into account: Level 0, full control of the vehicle by a human driver; Level 1, a human driver assisted by an automated system; Level 2, an automated system conducting part of the driving while a human driver monitors the environment and performs the rest; Level 3, an automated system doing the driving and monitoring the environment, with the human driver taking back control when signaled; Level 4, a driverless vehicle conducting the driving and monitoring the environment but restricted to certain environments; and Level 5, a driverless vehicle that, without any restrictions, does everything a human driver would.
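As a purely illustrative aid, the six levels can be captured in a small Python sketch; the enum and its member names are a paraphrase of the taxonomy above, not an official SAE artifact:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, paraphrased from the taxonomy above."""
    NO_AUTOMATION = 0           # Level 0: human driver in full control
    DRIVER_ASSISTANCE = 1       # Level 1: human driver assisted by an automated system
    PARTIAL_AUTOMATION = 2      # Level 2: system drives in part; human monitors and does the rest
    CONDITIONAL_AUTOMATION = 3  # Level 3: system drives and monitors; human takes over when signaled
    HIGH_AUTOMATION = 4         # Level 4: system drives and monitors, in restricted environments
    FULL_AUTOMATION = 5         # Level 5: system does everything a human driver would, unrestricted
```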

In Levels 1–3, which involve human-machine interaction, the manufacturer will be liable under product liability where it is found that the driverless vehicle failed to communicate or send a signal to the human driver or that the autopilot software did not work.

At Levels 4 and 5, liability for defective products will apply in full.
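A minimal sketch of this allocation rule, reusing the SAELevel enum above, might look as follows; the function name, parameters, and returned labels are hypothetical illustrations of the reasoning in the text, not a statement of the law:

```python
def presumptive_liability(level: SAELevel,
                          signal_failed: bool = False,
                          autopilot_failed: bool = False) -> str:
    """Illustrative allocation of presumptive liability by automation level,
    following the rules sketched in the text (hypothetical, not legal advice)."""
    if level >= SAELevel.HIGH_AUTOMATION:
        # Levels 4-5: product liability applies in full.
        return "manufacturer"
    if level >= SAELevel.DRIVER_ASSISTANCE and (signal_failed or autopilot_failed):
        # Levels 1-3: manufacturer liable where the vehicle failed to signal
        # the human driver or the autopilot software did not work.
        return "manufacturer"
    # Otherwise, liability is assessed under ordinary tort principles.
    return "driver_or_controller"

# Example: a Level 3 vehicle that failed to signal the human driver.
print(presumptive_liability(SAELevel.CONDITIONAL_AUTOMATION, signal_failed=True))
# -> manufacturer
```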

Manufacturers have a duty of care to ensure that any driverless vehicle they manufacture is safe when used in any foreseeable manner.

Failure to exercise this duty will make them liable for negligence.

In other cases, even where manufacturers have exercised all reasonable care, they will still be liable for unintended defects under the principle of strict liability.

Driver liability, especially in Levels 1–3, may also be based on tort principles.

The requirement of article 8 of the 1949 Geneva Convention on Road Traffic, which states that “[e]very vehicle or combination of vehicles proceeding as a unit shall have a driver,” may not be fulfilled where a vehicle is fully automated.

In some U.S. states, namely Nevada and Florida, the word driver has been changed to controller, meaning any person who causes the autonomous technology to engage; that person need not be present in the vehicle.

A driver or controller becomes liable if it is proved that they failed to exercise reasonable care or were negligent in observing this duty.

In certain other cases, victims will be compensated only by their own insurance companies under no-fault liability.

Victims may also base their claims for damages on strict liability, without having to present proof of the driver’s fault.

In this situation, the driver or controller may demand that the manufacturer be joined in the suit for damages if they believe the accident was the consequence of a product defect.

In any case, proof of the driver's or controller's negligence will reduce the manufacturer's liability.

Under product liability, third parties may sue manufacturers directly for injuries caused by defective products.

Under MacPherson v. Buick Motor Co. (1916), in which the court found that an automobile manufacturer's duty for a defective product extends beyond the initial purchaser, no privity of contract between the victim and the manufacturer is required.

The question of product liability for self-driving vehicles is complex.

The transition from manual to smart automated control transfers responsibility from the driver to the manufacturer.

The complexity of driving modes, as well as the interaction between the human operator and the artificial agent, is one of the primary challenges concerning accident responsibility.

In the United States, the law of motor vehicle product liability relating to flaws in self-driving cars is still in its infancy.

While the Department of Transportation and, especially, the National Highway Traffic Safety Administration provide some basic guidance on automation in driverless vehicles, Congress has yet to enact self-driving car legislation.

In the United Kingdom, the Automated and Electric Vehicles Act 2018 makes insurers liable by default for accidents involving automated vehicles that result in death, bodily injury, or property damage, provided the vehicles were in self-driving mode and insured at the time of the accident.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Accidents and Risk Assessment; Product Liability and AI; Trolley Problem.


Further Reading:


Geistfeld, Mark A. 2017. “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation.” California Law Review 105: 1611–94.

Hevelke, Alexander, and Julian Nida-Rümelin. 2015. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21, no. 3 (June): 619–30.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “Towards a Legal Definition of Machine Intelligence: The Argument for Artificial Personhood in the Age of Deep Learning.” In ICAIL ’17: Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, edited by Jeroen Keppens and Guido Governatori, 119–28. New York: Association for Computing Machinery.

Luetge, Christoph. 2017. “The German Ethics Code for Automated and Connected Driving.” Philosophy & Technology 30 (September): 547–58.

Rabin, Robert L., and Kenneth S. Abraham. 2019. “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era.” Virginia Law Review 105, no. 1 (March): 127–71.

Wilson, Benjamin, Judy Hoffman, and Jamie Morgenstern. 2019. “Predictive Inequity in Object Detection.” https://arxiv.org/abs/1902.11097.




Artificial Intelligence - Who Is Ryan Calo?

 



Michael Ryan Calo (1977–) is a thought leader on the legal and policy ramifications of artificial intelligence and robotics.

Calo was instrumental in establishing a network of legal experts dedicated to robots and AI; he foresaw the harms AI may pose to consumer privacy and autonomy; and he produced an early and widely circulated primer on AI law and policy.

In addition to these and other contributions, Calo has forged methodological and practice innovations for early-stage tech policy work, demonstrating the importance and efficacy of legal scholars working side by side with technologists and designers to anticipate futures and craft meaningful policy responses.

Calo was born and raised in the cities of Syracuse, New York, and Florence, Italy.

His first encounter with robots came as a child, when his parents gave him a remote-controlled base coupled to an inflatable robot.

Calo studied philosophy at Dartmouth College, where he studied computer ethics under James Moor, among others.

Calo graduated from the University of Michigan with a law degree in 2005.

After law school, a federal appellate clerkship, and two years in private practice, he became a fellow and subsequently research director at Stanford's Center for Internet and Society (CIS).

At Stanford, Calo was a pioneer in bringing robotics law and policy into the mainstream, co-founding the Legal Aspects of Autonomous Driving initiative with Sven Beiker at the Center for Automotive Research at Stanford (CARS).

Along the way, Calo met Ian Kerr, a Canadian law professor and philosopher of technology, and Michael Froomkin, a cyberlaw pioneer.

The We Robot conference was created by Froomkin, Kerr, and Calo in 2012.

Calo credits Kerr with inspiring him to explore robotics and artificial intelligence as a field of study.

Calo now codirects the University of Washington's Tech Policy Lab, an interdisciplinary research unit that spans computer science, information science, and law.

He and his codirectors Batya Friedman and Tadayoshi Kohno determine the Lab's research and practice agenda in this capacity.

Calo also cofounded the University of Washington Center for an Informed Public, which is dedicated to researching and combating digital and analog disinformation.

Calo has published several articles on the legal and policy implications of robots and artificial intelligence.

His key contributions include updating the behavioral economic theory of market manipulation in light of artificial intelligence and digital media, advocating a social systems approach to studying AI's effects, anticipating the privacy harms of robotics and AI, and rigorously examining how the affordances of robotics and AI challenge the American legal system.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Accidents and Risk Assessment; Product Liability and AI.


Further Reading:

Calo, Ryan. 2011. “Peeping Hals.” Artificial Intelligence 175, no. 5–6 (April): 940–41.

Calo, Ryan. 2014. “Digital Market Manipulation.” George Washington Law Review 82, no. 4 (August): 995–1051.

Calo, Ryan. 2015. “Robotics and the Lessons of Cyberlaw.” California Law Review 103, no. 3: 513–63.

Calo, Ryan. 2017. “Artificial Intelligence Policy: A Primer and Roadmap.” University of California, Davis Law Review 51: 399–435.

Crawford, Kate, and Ryan Calo. 2016. “There Is a Blind Spot in AI Research.” Nature 538 (October): 311–13.

