
Artificial Intelligence - What Is The Trolley Problem?

 



The "trolley problem" is an ethical dilemma first described by Philippa Foot in 1967.

Artificial intelligence advancements in different domains have sparked ethical debates regarding how these systems' decision-making processes should be designed.




Of course, there is widespread worry about AI's capacity to assess ethical challenges and respect societal values.

In this classic philosophical thought experiment, an operator stands near a trolley track, next to a lever that determines whether the trolley will continue on its current path or be diverted onto a different one.

Five people are standing on the track ahead of the trolley, unable to get out of its path and certain to be killed if it continues on its current course.

On the opposite track, there is another person who will be killed if the operator pulls the lever.





The operator can pull the lever, killing one person but saving the five, or do nothing and allow the five to die.

This is a classic conflict between utilitarianism (actions should maximize the well-being of those affected) and deontology (actions are right or wrong according to rules, regardless of their consequences).
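To make the contrast concrete, consider a minimal sketch, purely hypothetical and not drawn from any real autonomous-vehicle system, that encodes the two frameworks as different decision rules over the same dilemma:

```python
# Hypothetical sketch contrasting a utilitarian rule with a deontological rule
# for a trolley-style dilemma. Illustrative only; not a real decision system.
from dataclasses import dataclass

@dataclass
class Dilemma:
    deaths_if_no_action: int   # people killed if the operator does nothing
    deaths_if_action: int      # people killed if the operator pulls the lever

def utilitarian_choice(d: Dilemma) -> str:
    # Minimize total deaths, regardless of whether the harm is caused actively.
    return "pull_lever" if d.deaths_if_action < d.deaths_if_no_action else "do_nothing"

def deontological_choice(d: Dilemma) -> str:
    # Follow the rule "do not actively kill," regardless of the outcome.
    return "do_nothing"

classic = Dilemma(deaths_if_no_action=5, deaths_if_action=1)
print(utilitarian_choice(classic))    # -> pull_lever (1 death < 5 deaths)
print(deontological_choice(classic))  # -> do_nothing (never actively harm)
```

The same situation yields opposite answers depending only on which framework is programmed in, which is precisely the design question AI raises.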

With the development of artificial intelligence, the question has arisen of how we should program machines to behave in scenarios that are perceived as unavoidable, such as the Trolley Problem.

The Trolley Problem has been investigated in relation to artificial intelligence in fields such as primary health care, the operating room, security, self-driving automobiles, and weapons technology.

The subject has been studied most thoroughly in the context of self-driving automobiles, where regulations, guidelines, and norms have already been suggested or developed.

Because autonomous vehicles have already driven millions of kilometers in the United States, they already confront this dilemma in practice.

The problem is made more urgent by the fact that a few people have already died while using self-driving technology.

Accidents have sparked even greater public discussion over the proper use of this technology.





Moral Machine is an online platform established by a team at the Massachusetts Institute of Technology to crowdsource responses to issues regarding how self-driving automobiles should prioritize lives.

The makers of the Moral Machine ask visitors to the website to judge which choice a self-driving automobile should make in a variety of Trolley Problem-style dilemmas.

Respondents must decide whose lives to prioritize: car passengers or pedestrians, humans or animals, people crossing legally or illegally, and people of various fitness levels and socioeconomic statuses, among other variables.

When respondents imagine themselves as passengers in the car, they almost always indicate that they would act to save their own lives.

Crowd-sourced solutions, however, may not be the best way to resolve Trolley Problem-style dilemmas.

Trading a pedestrian's life for a vehicle passenger's life, for example, may be seen as arbitrary and unjust.

The aggregated solutions currently do not seem to represent simple utilitarian calculations that maximize lives saved or favor one sort of life over another.
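One way to see why aggregation alone does not settle the question is to tally survey responses; the counts below are invented for illustration and are not Moral Machine data:

```python
# Illustrative tally of crowd-sourced dilemma responses (invented data,
# not taken from the Moral Machine platform).
from collections import Counter

# Each response records which group the respondent chose to spare.
responses = [
    "passengers", "pedestrians", "passengers", "passengers",
    "pedestrians", "passengers", "pedestrians", "passengers",
]

tally = Counter(responses)
majority, votes = tally.most_common(1)[0]
print(tally)  # Counter({'passengers': 5, 'pedestrians': 3})
print(f"Majority spares: {majority} ({votes}/{len(responses)})")
# A majority preference is not the same as a principled or fair rule:
# the split tells us what respondents prefer, not what is just.
```

A tally like this describes popular opinion, but it does not explain why one life should be traded for another, which is the ethical question at issue.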

It's unclear who will get to select how AI will be programmed and who will be held responsible if AI systems fail.





This obligation might be assigned to policymakers, the corporation that develops the technology, or the people who end up utilizing it.

Each of these options has its own set of ramifications that must be addressed.

The Trolley Problem's usefulness in resolving AI quandaries is not widely accepted.

Some artificial intelligence and ethics scholars dismiss the Trolley Problem as an unhelpful thought experiment.

Their arguments usually turn on the notion of trade-offs between different lives.

They claim that the Trolley Problem lends credence to the idea that these trade-offs (as well as autonomous vehicle disasters) are unavoidable.

Rather than concentrating on how best to react to a dilemma like the trolley problem, policymakers and programmers should concentrate on how best to prevent such situations from arising in the first place.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Air Traffic Control, AI and; Algorithmic Bias and Error; Autonomous Weapons Systems, Ethics of; Driverless Cars and Trucks; Moral Turing Test; Robot Ethics.


References And Further Reading

Cadigan, Pat. 2018. AI and the Trolley Problem. New York: Tor.

Etzioni, Amitai, and Oren Etzioni. 2017. “Incorporating Ethics into Artificial Intelligence.” Journal of Ethics 21: 403–18.

Goodall, Noah. 2014. “Ethical Decision Making during Automated Vehicle Crashes.” Transportation Research Record: Journal of the Transportation Research Board 2424: 58–65.

Moolayil, Amar Kumar. 2018. “The Modern Trolley Problem: Ethical and Economically Sound Liability Schemes for Autonomous Vehicles.” Case Western Reserve Journal of Law, Technology & the Internet 9, no. 1: 1–32.



Artificial Intelligence - What Is The Stop Killer Robots Campaign?

 



The Campaign to Stop Killer Robots is a non-profit organization devoted to mobilizing the public and campaigning against the development and deployment of lethal autonomous weapon systems (LAWS).

The campaign's central concern is that armed robots making life-or-death decisions undercut legal and ethical restraints on violence in human conflicts.

Advocates for LAWS argue that these technologies are compatible with existing weapons and regulations, such as cruise missiles, which are programmed and fired by humans to seek out and destroy a specific target.

Advocates also say that robots are completely reliant on people, that they are bound by their design and must carry out the behaviors assigned to them, and that, with appropriate oversight, they may save lives by substituting for humans in hazardous situations.


The Campaign to Stop Killer Robots dismisses responsible usage as a viable option, citing fears that the development of LAWS could result in a new arms race.


The campaign underlines the danger of losing human control over the use of lethal force in situations in which armed robots identify and eliminate a threat before human intervention is feasible.

Human Rights Watch, an international nongovernmental organization (NGO) that promotes fundamental human rights and investigates violations of those rights, organized and managed the campaign, which was officially launched on April 22, 2013, in London, England.


Many member groups make up the Campaign to Stop Killer Robots, including the International Committee for Robot Arms Control and Amnesty International.


A steering group and a worldwide coordinator are in charge of the campaign's leadership.

As of 2018, the steering committee consists of eleven non-governmental organizations.

Mary Wareham, who formerly headed international efforts to ban land mines and cluster bombs, is the campaign's worldwide coordinator.

Efforts to ban armed robots, like those to ban land mines and cluster bombs, concentrate on their potential to inflict needless suffering and indiscriminate damage to humans.


The United Nations Convention on Certain Conventional Weapons (CCW), which originally went into force in 1983, coordinates the worldwide ban of weapons.




Because the CCW has yet to agree on a ban on armed robots, and because the CCW lacks any mechanism for enforcing agreed-upon restrictions, the Campaign to Stop Killer Robots calls for the inclusion of LAWS in the CCW.

The Campaign to Stop Killer Robots also promotes the adoption of new international treaties to implement more preemptive restrictions.

The Campaign to Stop Killer Robots offers tools for educating and mobilizing the public, including multimedia databases, campaign reports, and a mailing list, in addition to lobbying governing authorities for treaty and convention prohibitions.

The Campaign also seeks the participation of technology businesses, requesting that they voluntarily refuse to participate in the creation of LAWS.

The @BanKillerRobots account on Twitter is where the Campaign keeps track of and broadcasts the names of companies that have pledged not to engage in the creation or marketing of intelligent weapons.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics; Lethal Autonomous Weapons Systems.


Further Reading


Baum, Seth. 2015. “Stopping Killer Robots and Other Future Threats.” Bulletin of the Atomic Scientists, February 22, 2015. https://thebulletin.org/2015/02/stopping-killer-robots-and-other-future-threats/.

Campaign to Stop Killer Robots. 2020. https://www.stopkillerrobots.org/.

Carpenter, Charli. 2016. “Rethinking the Political / -Science- / Fiction Nexus: Global Policy Making and the Campaign to Stop Killer Robots.” Perspectives on Politics 14, no. 1 (March): 53–69.

Docherty, Bonnie. 2012. Losing Humanity: The Case Against Killer Robots. New York: Human Rights Watch.

Garcia, Denise. 2015. “Killer Robots: Why the US Should Lead the Ban.” Global Policy 6, no. 1 (February): 57–63.


Artificial Intelligence - What Is The Asilomar Conference On Beneficial AI?

 


Social concerns about artificial intelligence and the danger it poses to people have been portrayed most prominently through Isaac Asimov's Three Laws of Robotics.

"A robot may not damage a human being or, by inactivity, enable a human being to come to harm; A robot must follow human instructions unless such orders would contradict with the First Law; A robot must safeguard its own existence unless such protection would clash with the First or Second Law" (Asimov 1950, 40).

In subsequent books, Asimov added a Fourth or Zeroth Law, often quoted as "A robot may not harm humanity, or, by inaction, allow humanity to come to harm," which is elaborated in Robots and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).
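The laws form a strict priority ordering: each law yields to the ones above it. A minimal sketch of that ordering, with hypothetical action attributes invented for illustration, might look like this:

```python
# Hypothetical sketch of Asimov's laws as a lexicographic priority ordering
# over candidate actions. The attributes are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool   # Zeroth Law concern
    harms_human: bool      # First Law concern
    disobeys_order: bool   # Second Law concern
    endangers_robot: bool  # Third Law concern

def violations(a: Action) -> tuple:
    # Earlier tuple positions correspond to higher-priority laws, so comparing
    # tuples compares violations lexicographically: a higher law always wins.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_robot)

def choose(candidates):
    # Pick the action whose highest-priority violation is least severe.
    return min(candidates, key=violations)

options = [
    Action("obey order, injuring a bystander", False, True, False, False),
    Action("refuse order, robot is damaged", False, False, True, True),
]
print(choose(options).name)  # -> "refuse order, robot is damaged"
```

The sketch shows why the ordering matters: harming a human outweighs both disobedience and self-preservation, exactly as the First Law demands.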

Asimov's Zeroth Law sparked debate over how to judge whether something is harmful to humanity.

This was the topic of the 2017 Asilomar Conference on Beneficial AI, which went beyond the Three Laws and the Zeroth Law to propose twenty-three principles to protect mankind in the future of AI.

The conference's sponsor, the Future of Life Institute, has posted the principles on its website and has received 3,814 signatures from AI experts and other multidisciplinary supporters.

There are three basic kinds of principles: research questions, ethics and values, and long-term concerns.

These research guidelines are intended to guarantee that the aims of artificial intelligence continue to be helpful to people.

They're meant to help investors decide where to put their money in AI research.

To achieve beneficial AI, the Asilomar signatories agree that research agendas should encourage and preserve openness and dialogue between AI researchers, policymakers, and developers.

Researchers interested in the development of artificial intelligence systems should work together to prioritize safety.

The proposed principles relating to ethics and values aim to prevent harm and to promote direct human control over artificial intelligence systems.

Parties to the Asilomar principles believe that AI should reflect human values such as individual rights, freedoms, and diversity acceptance.

Artificial intelligences, in particular, should respect human liberty and privacy, and should only be used to empower and enrich humanity.

AI must adhere to human social and civic norms.

The Asilomar signatories believe that AI creators should be held accountable for their work.

One aspect that stands out is the likelihood of an autonomous weapons arms race.

Because of the high stakes, the designers of the Asilomar principles incorporated principles that addressed longer-term challenges.

They advised prudence, meticulous planning, and human supervision.

Superintelligences must be produced for the wider welfare of mankind, and not merely to further the aims of one industry or government.

The Asilomar Conference's twenty-three principles have sparked ongoing discussions about the need for beneficial AI and specific safeguards for the future of AI and humanity.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.



Further Reading


Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.

Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.





Artificial Intelligence - AI And Robotics On The Battlefield

 



Because of the growth of artificial intelligence (AI) and robotics and their application to military matters, generals on the contemporary battlefield anticipate a possible tactical and strategic revolution.

Unmanned aerial vehicles (UAVs), also known as drones, and other robotic devices played a key role in the wars in Afghanistan (2001–) and Iraq (2003–2011).

It is possible that future conflicts will be waged without the participation of humans.

Without human control or guidance, autonomous robots will fight in war on land, in the air, and beneath the water.

While this vision remains in the realm of science fiction, battlefield AI and robotics raise a slew of practical, ethical, and legal issues that military leaders, technologists, jurists, and philosophers must address.

When many people think about AI and robotics on the battlefield, the first image that springs to mind is "killer robots," armed machines that indiscriminately destroy everything in their path.

There are, however, a variety of applications for battlefield AI that do not include killing.

In recent wars, the most notable applications of such technology have been nonlethal in character.

UAVs are often employed for surveillance and reconnaissance.

Other robots, such as iRobot's PackBot (made by the same firm that makes the vacuum-cleaning Roomba), are employed to locate and assess improvised explosive devices (IEDs), making their safe disposal easier.

Robotic devices can navigate treacherous terrain, such as Afghanistan's caves and mountain crags, as well as areas too dangerous for humans, such as under a vehicle suspected of being rigged with an IED.

Unmanned Underwater Vehicles (UUVs) are also used to detect mines underwater.

IEDs and explosives are so common on today's battlefields that these robotic devices are invaluable.

Another potential life-saving capacity of battlefield robots that has yet to be realized is in the realm of medicine.

Robots can retrieve injured troops from areas of the battlefield that are inaccessible to human rescuers, without putting additional lives at risk.

Robots may also transport medical supplies and medications to troops on the battlefield, as well as conduct basic first aid and other emergency medical operations.

It is in the arena of lethal force, however, that AI and robotics have the greatest potential to change the battlefield, whether on land, at sea, or in the air.

The Aegis Combat System (ACS) is an example of an autonomous system used by several militaries across the globe aboard destroyers and other naval combat vessels.

Through radar and sonar, the system can detect approaching threats, such as missiles from the surface or air, mines, or torpedoes from the water.

The system is equipped with a powerful computer system and can use its own munitions to eliminate identified threats.

Despite the fact that Aegis is activated and supervised manually, it has the potential to operate autonomously in order to counter threats faster than humans could.
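The difference between supervised and fully autonomous engagement can be sketched as a simple control loop; the function names, threat labels, and logic below are hypothetical and do not describe the actual Aegis software:

```python
# Hypothetical sketch of a human-in-the-loop vs. autonomous engagement loop.
# Names and logic are illustrative; they do not describe the real Aegis system.

def operator_approves(threat: str) -> bool:
    # Stand-in for a human decision; a real console would prompt an operator.
    return threat == "incoming_missile"

def engage(threat: str) -> None:
    print(f"engaging {threat}")

def handle_threat(threat: str, autonomous_mode: bool) -> None:
    if autonomous_mode:
        # Machine decides and acts on its own: faster, but with no human check.
        engage(threat)
    elif operator_approves(threat):
        # Human-in-the-loop: the system acts only after explicit approval.
        engage(threat)
    else:
        print(f"holding fire on {threat}, awaiting operator decision")

handle_threat("incoming_missile", autonomous_mode=False)      # engages after approval
handle_threat("unidentified_contact", autonomous_mode=False)  # holds fire
handle_threat("unidentified_contact", autonomous_mode=True)   # engages immediately
```

The sketch illustrates the trade-off the paragraph describes: removing the approval step buys reaction speed at the cost of human oversight.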

In addition to partly automated systems like the ACS and UAVs, completely autonomous military robots capable of making judgments and acting on their own may be developed in the future.

The most significant feature of AI-powered robotics is the development of lethal autonomous weapons (LAWs), sometimes known as "killer robots." Robot autonomy exists on a sliding scale.

At one extreme of the spectrum are robots designed to operate autonomously, but only in reaction to a specific stimulus and only in one fixed way.

This degree of autonomy is shown by a mine that detonates autonomously when stepped on.

Remotely operated machines, which are unmanned but controlled from a distance by a human, also occupy the low end of the spectrum.

Semiautonomous systems occupy the midpoint of the spectrum.

These systems may be able to work without the assistance of a person, but only to a limited extent.

A robot commanded to launch, go to a certain area, and then return at a specific time is an example of such a system.

The machine does not make any "decisions" on its own in this situation.

Semiautonomous devices may also be configured to accomplish part of a task before waiting for further inputs before moving on to the next step.

Full autonomy is the last step.

Fully autonomous robots are designed with a purpose and are capable of achieving it entirely on their own.

This might include the capacity to use deadly force without direct human guidance in warfare circumstances.
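The sliding scale described above can be summarized as a small enumeration; the category labels and examples follow this article's description and are not a formal military taxonomy:

```python
# Illustrative enumeration of the autonomy spectrum discussed above.
# Categories and examples follow the article's description, not a formal standard.
from enum import Enum

class Autonomy(Enum):
    REACTIVE = 1          # fixed response to a single stimulus (e.g., a land mine)
    REMOTE_OPERATED = 2   # unmanned but steered by a human (e.g., most current UAVs)
    SEMI_AUTONOMOUS = 3   # completes preset steps, then waits for further human input
    FULLY_AUTONOMOUS = 4  # pursues a goal, potentially including lethal force, unaided

for level in Autonomy:
    print(level.value, level.name)
```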

Robotic devices that are lethally armed, AI-enhanced, and fully autonomous have the potential to radically transform modern warfare.

Armies could be augmented by ground forces made up of both humans and robots, or composed entirely of robots with no humans at all.

Small, armed UAVs would not be constrained by the requirement for human operators, and they might be assembled in massive swarms to overwhelm bigger, but less mobile troops.

Such technological advancements will entail equally dramatic shifts in tactics, strategy, and even the notion of combat.

This technology will become less expensive as it becomes more widely accessible.

This might disturb the present military power balance.

Even smaller states, and perhaps even non-state organizations such as terrorist groups, may be able to develop their own robotic armies.

Fully autonomous LAWs bring up a slew of practical, ethical, and legal issues.

One of the most pressing practical considerations is safety.

A completely autonomous robot with deadly armament that malfunctions might represent a major threat to everyone who comes in contact with it.

Fully autonomous missiles might theoretically wander off course and kill innocent people due to a mechanical failure.

Unpredictable technological faults and malfunctions may occur in any kind of apparatus.

Such issues offer a severe safety concern to individuals who deploy deadly robotic gadgets as well as unwitting bystanders.

Even in the absence of mechanical faults, limitations in programming may result in potentially disastrous errors.

Programming robots to discriminate between combatants and noncombatants, for example, is a big challenge, and it's simple to envisage misidentification leading to unintentional fatalities.
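At bottom, the combatant/noncombatant problem is classification under uncertainty. The toy numbers below are invented purely to illustrate how any confidence threshold trades missed threats against misidentified civilians:

```python
# Toy illustration (invented numbers) of how a classification threshold trades
# missed combatants against misidentified noncombatants.
detections = [
    {"truth": "combatant",    "score": 0.95},
    {"truth": "combatant",    "score": 0.55},
    {"truth": "noncombatant", "score": 0.60},  # civilian who "looks" like a threat
    {"truth": "noncombatant", "score": 0.10},
]

for threshold in (0.5, 0.7, 0.9):
    flagged = [d for d in detections if d["score"] >= threshold]
    wrongly_flagged = sum(1 for d in flagged if d["truth"] == "noncombatant")
    missed = sum(1 for d in detections
                 if d["truth"] == "combatant" and d["score"] < threshold)
    print(f"threshold={threshold}: misidentified civilians={wrongly_flagged}, "
          f"missed combatants={missed}")
# No threshold eliminates both error types at once, which is exactly the
# misidentification risk described above.
```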

The greatest concern, though, is that robotic AI may grow too quickly and become independent of human control.

Sentient robots might turn their weapons on humans, as in popular science fiction films and literature, fulfilling eminent scientist Stephen Hawking's grim forecast that the development of AI could end in humanity's annihilation.

LAWs may also raise major legal issues.

The rules of war apply to human beings.

Robots cannot be held accountable for potential violations of those laws, whether criminally, civilly, or in any other way.

As a result, there's a chance that war crimes or other legal violations may go unpunished.

Here are some serious issues to consider: Can the programmer or engineer of a robot be held liable for the machine's actions? Could a person who gave the robot its "command" be held liable for the robot's unpredictable behavior or blunders on an otherwise self-directed mission? Such questions must be thoroughly addressed before any fully autonomous lethal equipment is deployed.

Aside from legal issues of duty, a slew of ethical issues must be addressed.

The conduct of war necessitates split-second moral judgments.

Will autonomous robots be able to tell the difference between a child and a soldier, or between a wounded, helpless soldier and an active combatant? Will a robotic military force always be a cold, brutal, and merciless instrument of destruction, or can a robot be designed to act with restraint when the situation demands it? Because combat is riddled with moral dilemmas, LAWs engaged in war will constantly be confronted with them.

Experts question whether lethal autonomous robots can ever be trusted to do the right thing.

Moral action requires not just rationality—which robots may be capable of—but also emotions, empathy, and wisdom.

These latter qualities are much more difficult to implement in code.

Because of the legal, ethical, and practical problems raised by the prospect of ever more powerful AI-powered robotic military hardware, many individuals have called for an outright ban on research in this field.

Others, on the other hand, believe that scientific advancement cannot be halted.

Rather than prohibiting such study, they argue that scientists and society as a whole should seek realistic answers to the difficulties.

Some argue that keeping continual human supervision and control over robotic military units may address many of the ethical and legal issues.

Others argue that direct supervision is unlikely in the long term because human intellect will be unable to match the pace with which computers think and act.

As the side that gives its robotic troops more autonomy gains an overwhelming advantage over those who strive to preserve human control, there will be an inevitable trend toward more and more autonomy.

They warn that fully autonomous forces will always triumph.

Despite the fact that it is still in its early stages, the introduction of more complex AI and robotic equipment to the battlefield has already resulted in significant change.

AI and robotics on the battlefield have the potential to drastically transform the future of warfare.

It remains to be seen if and how this technology's technical, practical, legal, and ethical limits can be addressed.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons Systems.


Further Reading

Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics, Law, and Technology 2, no. 1: n.p. https://www.degruyter.com/view/journals/selt/2/1/article-selt.2008.2.1.1036.xml.xml.

Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.” Military Review 98, no. 4 (July–August): 18–29.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton.

Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. London: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.


