
Artificial Intelligence - What Is The Trolley Problem?

 



Philippa Foot introduced the scenario now known as the "trolley problem" in 1967 to describe an ethical dilemma.

Artificial intelligence advancements in different domains have sparked ethical debates regarding how these systems' decision-making processes should be designed.




Of course, there is widespread worry about AI's capacity to assess ethical challenges and respect societal values.

In this classic philosophical thought experiment, an operator stands near a trolley track, next to a lever that determines whether the trolley will continue on its current path or be diverted onto another track.

Five people are standing on the track along which the trolley is running; they cannot get out of its path and are certain to be killed if the trolley continues on its current course.

On the opposite track, there is another person who will be killed if the operator pulls the lever.





The operator has the option of pulling the lever, killing one person while saving the five, or doing nothing and allowing the five to perish.

This is a classic conflict between utilitarianism (actions should maximize the well-being of affected individuals) and deontology (actions are right or wrong according to rules, rather than according to their consequences).
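To make the contrast concrete, the following toy sketch (purely illustrative and not from the original entry; the function names and numbers are assumptions) encodes the two stances as decision rules for the lever case:

```python
# Toy sketch (illustrative only): two decision rules for the classic trolley case,
# assuming 5 people on the main track and 1 on the side track.

def utilitarian_choice(on_main: int, on_side: int) -> str:
    # Minimize total deaths, whether that requires acting or abstaining.
    return "pull_lever" if on_side < on_main else "do_nothing"

def deontological_choice(on_main: int, on_side: int) -> str:
    # A simple rule-based stance: never actively redirect harm onto a person.
    return "do_nothing"

print(utilitarian_choice(5, 1))    # -> "pull_lever" (one death instead of five)
print(deontological_choice(5, 1))  # -> "do_nothing" (refuses to kill as a means)
```

Real systems face far messier inputs, but the sketch shows why the two frameworks can prescribe opposite actions from the same facts.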

With the development of artificial intelligence, the question has arisen of how we should program machines to behave in scenarios that are perceived as unavoidable, such as the Trolley Problem.

The Trolley Problem has been investigated with relation to artificial intelligence in fields such as primary health care, the operating room, security, self-driving automobiles, and weapons technology.

The subject has been studied most thoroughly in the context of self-driving automobiles, where regulations, guidelines, and norms have already been suggested or developed.

Because autonomous vehicles have already driven millions of kilometers in the United States, they face this difficulty in practice.

The problem is made more urgent by the fact that a few self-driving car users have already died while using the technology.

Accidents have sparked even greater public discussion over the proper use of this technology.





Moral Machine is an online platform established by a team at the Massachusetts Institute of Technology to crowdsource responses to issues regarding how self-driving automobiles should prioritize lives.

The makers of the Moral Machine ask visitors to the website to decide what choice a self-driving automobile should make in a variety of Trolley Problem-style scenarios.

Respondents must prioritize the lives of car passengers, pedestrians, humans and animals, people walking legally or illegally, and people of various fitness levels and socioeconomic status, among other variables.

When respondents imagine themselves as the car's passengers, they almost always indicate that they would act to save their own lives.

It is possible that crowd-sourced answers are not the best way to resolve Trolley Problem-style dilemmas.

Trading a pedestrian life for a vehicle passenger's life, for example, may be seen as arbitrary and unjust.

The aggregated solutions currently do not seem to represent simple utilitarian calculations that maximize lives saved or favor one sort of life over another.
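As a purely hypothetical illustration (the scenario, options, and vote counts below are invented, not Moral Machine data), a majority vote over respondents can diverge from a bare lives-saved calculation:

```python
# Hypothetical sketch: majority vote over respondents vs. a simple utilitarian rule.
# All numbers are invented for illustration.

# Swerving kills the single passenger; staying on course kills five pedestrians.
scenario = {"swerve_into_wall": {"deaths": 1}, "stay_on_course": {"deaths": 5}}

# Invented survey responses: most respondents, imagining themselves as passengers,
# prefer the car to protect them even at greater cost to pedestrians.
votes = {"swerve_into_wall": 40, "stay_on_course": 60}

crowd_choice = max(votes, key=votes.get)                                 # majority preference
lives_saved_choice = min(scenario, key=lambda a: scenario[a]["deaths"])  # fewest total deaths

print(crowd_choice)        # -> "stay_on_course"
print(lives_saved_choice)  # -> "swerve_into_wall"
```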

It's unclear who will get to select how AI will be programmed and who will be held responsible if AI systems fail.





This obligation might be assigned to policymakers, the corporation that develops the technology, or the people who end up utilizing it.

Each of these options carries its own ramifications that must be addressed.

The Trolley Problem's usefulness in resolving AI quandaries is not widely accepted.

Some artificial intelligence and ethics scholars reject the Trolley Problem as a helpful thought experiment.

Their arguments usually center on the framing of trade-offs between different lives.

They claim that the Trolley Problem lends credence to the idea that these trade-offs (as well as autonomous vehicle disasters) are unavoidable.

Rather than concentrating on the best ways to avoid a dilemma like the trolley problem, they argue, policymakers and programmers should concentrate on the best ways to respond to the various circumstances that actually arise.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Air Traffic Control, AI and; Algorithmic Bias and Error; Autonomous Weapons Systems, Ethics of; Driverless Cars and Trucks; Moral Turing Test; Robot Ethics.


References And Further Reading

Cadigan, Pat. 2018. AI and the Trolley Problem. New York: Tor.

Etzioni, Amitai, and Oren Etzioni. 2017. “Incorporating Ethics into Artificial Intelligence.” Journal of Ethics 21: 403–18.

Goodall, Noah. 2014. “Ethical Decision Making during Automated Vehicle Crashes.” Transportation Research Record: Journal of the Transportation Research Board 2424: 58–65.

Moolayil, Amar Kumar. 2018. “The Modern Trolley Problem: Ethical and Economically Sound Liability Schemes for Autonomous Vehicles.” Case Western Reserve Journal of Law, Technology & the Internet 9, no. 1: 1–32.



Artificial Intelligence - Iterative AI Ethics In Complex Socio-Technical Systems

 



Title: The Need For Iterative And Evolving AI Ethics Processes And Frameworks To Ensure Relevant, Fair, And Ethical Scalable Complex Socio-Technical Systems.

Author: Jai Krishna Ponnappan




Ethics has strong fangs, but they are seldom bared in AI ethics today, so it is no surprise that AI ethics is criticized for lacking efficacy. 


This essay claims that the 'ethics' of present AI ethics is largely ineffective, trapped in an 'ethical principles' approach and hence particularly vulnerable to manipulation, especially by industrial players. 

Using ethics as a replacement for the law puts it at risk of being abused and misapplied. 

This severely restricts what ethics can accomplish, and it is a big setback for the AI field and its implications for people and society. 

This paper examines these dangers before focusing on the efficacy of ethics and the critical contribution it can, and should, make to AI ethics right now. 



Ethics is a potent weapon. 


Unfortunately, we seldom use it in AI ethics, so it is no surprise that AI ethics is dubbed "ineffective." 

This paper examines the different ethical procedures that have arisen in recent years in response to the widespread deployment and usage of AI in society, as well as the hazards that come with it. 

Lists of principles, ethical codes, suggestions, and guidelines are examples of these procedures. 


However, as many have shown, although these ethical innovations are promising, they are also problematic: their usefulness has yet to be proven, and they are particularly susceptible to manipulation, notably by industry. 


This is a setback for AI, as it severely restricts what ethics may do for society and people. 

However, as this paper demonstrates, the problem isn't that ethics is meaningless (or ineffective) in the face of current AI deployment; rather, ethics is being utilized (or manipulated) in such a manner that it is made ineffectual for AI ethics. 

The paper starts by describing the current state of AI ethics: AI ethics is essentially principled, that is, it adheres to a 'law' view of ethics. 

It then demonstrates how this ethical approach fails to accomplish what it claims to do. 

The second section of this paper focuses on the true worth of ethics, its 'efficacy,' which we describe as the capacity to notice the new as it continuously emerges. 



We explain how, in today's AI ethics, the ability to resist cognitive and perceptual inertia, which makes us inactive in the face of new advancements, is crucial. 


Finally, although we acknowledge that the legalistic approach to ethics is not entirely incorrect, we argue that it is the end of ethics, not its beginning, and that it ignores the most valuable and crucial components of ethics. 

There are many ongoing conversations and activities on AI ethics across stakeholder quarters (policy, academia, industry, and even the media). This is something we can all be happy about. 


Policymakers (e.g., the European Commission and the European Parliament) and business, in particular, are concerned about doing things right in order to promote ethical and responsible AI research and deployment in society. 


It is now widely acknowledged that if AI is adopted without adequate attention and thought for its potentially detrimental effects on people, particular groups, and society as a whole, things might go horribly wrong (including, for example, bias and discrimination, injustice, privacy infringements, increase in surveillance, loss of autonomy, overdependency on technology, etc.). 

The focus then shifts to ethics, with the goal of ensuring that AI is implemented in a way that respects deeply held social values and norms, placing them at the center of responsible technology development and deployment (Hagendorff, 2020; Jobin et al., 2019). 

The 'Ethics guidelines for trustworthy AI,' developed by the European Commission's High-Level Expert Group on AI, are one example of contemporary ethics efforts (High-Level Expert Group on Artificial Intelligence, 2019). 

However, the present use of the term "ethics" in the subject of AI ethics is questionable. 

Today's AI ethics is dominated by what British philosopher G.E.M. Anscombe refers to as a 'law conception of ethics,' i.e., a perspective on ethics that treats it as if it were a kind of law (Anscombe, 1958). 

It's customary to think of ethics as a "softer" version of the law (Jobin et al., 2019: 389). 


However, this is simply one approach to ethics, and it is problematic, as Anscombe has shown. It is problematic in at least two respects in terms of AI ethics. 

For starters, it's troublesome since it has the potential to be misapplied as a substitute for regulation (whether through law, policies or standards). 

Over the previous several years, many authors have made this point (Article 19, 2019; Greene et al., 2019; Hagendorff, 2020; Jobin et al., 2019; Klöver and Fanta, 2019; Mittelstadt, 2019; Wagner, 2018). Wagner cites the case of a member of the Google DeepMind ethics team repeatedly asserting 'how ethically Google DeepMind was working,' while simultaneously dodging any accountability for the data security crisis at Google DeepMind, at the Conference on World Affairs 2018 (Wagner, 2018). 

'Ethical AI' discourse, according to Ochigame (2019), was "aligned strategically with a Silicon Valley campaign attempting to circumvent legally enforceable prohibitions of problematic technology." Ethics falls short in this regard because it lacks the instruments to enforce conformity. 


Ethics, according to Hagendorff, "lacks means to support its own normative assertions" (2020: 99). 


If ethics is about enforcing rules, then it is true that ethics is ineffective. 

Although ethical programs "bring forward great intentions," according to the human rights organization Article 19, "their general lack of accountability and enforcement measures" renders them ineffectual (Article 19, 2019: 18). 

Finally, and predictably, ethics is attacked for being ineffective. 

However, it's important to note that the problem isn't that ethics is being asked to perform something for which it is too weak or soft. 

It's more like it's being asked to do something it wasn't supposed to accomplish. 


Criticizing ethics for lacking the power to enforce compliance with whatever it requires is like blaming a fork for not cutting meat properly: that is not what it is meant to do. 


The goal of ethics is not to prescribe certain behaviors and then guarantee that they are followed. 

The issue occurs when it is utilized in this manner. 

This is especially true in the field of AI ethics, where ethical principles, norms, or criteria are required to control AI and guarantee that it does not damage people or society as a whole (e.g. AI HLEG). 

Some suggest that this ethical lapse is deliberate, motivated by a desire to ensure that AI is not governed by legislation, i.e., that greater flexibility remains available and that no firm boundaries are drawn constraining the industrial and economic interests associated with this technology (Klöver and Fanta, 2019). 

For example, this criticism has been directed against the AI HLEG guidelines. 

Industry was extensively represented during debates at the European High-Level Expert Group on Artificial Intelligence (EU-HLEG), while academia and civil society did not have the same luxury, according to Article 19. 


While several non-negotiable ethical standards were initially specified in the text, owing to corporate pressure, they were eliminated from the final version. 


(Article 19, 2019: 18). Using ethics to hinder the enactment of vital legal regulation is a significant and concerning abuse and misuse of ethics. 

The result is ethics washing, as well as its cousins: ethics shopping, ethics shirking, and so on (Floridi, 2019; Greene et al., 2019; Wagner, 2018). 

Second, since the AI ethics field is dominated by this 'law conception of ethics,' it fails to make full use of what ethics has to offer, namely its proper efficacy, despite the critical need for it. 

What exactly is this efficacy of ethics, and what value might it bring to the field? The true fangs of ethics lie in a never-failing capacity to perceive the new (Laugier, 2013). 


Ethics is basically a state of mind, a constantly renewed and nimble response to reality as it changes. 


The ethics of care has emphasized attention as a critical component of ethics (Tronto, 1993: 127). 

In this way, ethics is a strong instrument against cognitive and perceptual inertia, which prevents us from seeing what is different from before or in new settings, cultures, or circumstances, and hence necessitates a change in behavior (regulation included). 

This is particularly important for AI, given the significant changes and implications it has had and continues to have on society, as well as our basic ways of being and functioning. 

This ability to observe the environment is what keeps us from being boiled alive like the proverbial frog: it allows us to detect subtle changes as they happen. 

An extension and deepening of monitoring by governments and commercial enterprises, a rising reliance on technology, and the deployment of biased systems that lead to discrimination against women and minorities are all contributing to the increasingly hot water in AI. 


The changes these systems bring to society must be carefully examined, and opposed when their negative consequences exceed their advantages. 


In this way, ethics has a close relationship with the social sciences: as an attempt to perceive what we do not otherwise notice, it helps us look concretely at how the world evolves. 

It aids in the cleaning of the lens through which we see the world so that we may be more aware of its changes (and AI does bring many of these). 

It is critical that ethics back us up in this respect. 

It enables us to be less passive in the face of these changes, allowing us to better direct them in ways that benefit people and society while also improving our quality of life. 


Hagendorff makes a similar point in his essay on the 'Ethics of AI Ethics,' disputing the prevalent deontological approach to ethics in AI ethics (what we've referred to as a legalistic approach to ethics in this article), whose primary goal is to 'limit, control, or direct' (2020: 112). 


He emphasizes the necessity for AI to adopt virtue ethics, which strives to 'broaden the scope of action, disclose blind spots, promote autonomy and freedom, and cultivate self-responsibility' (Hagendorff, 2020: 112). 

Other ethical theory frameworks that might be useful in today's AI ethics discussion include the Spinozist approach, which focuses on the growth or loss of agency and action capability. 

So, are we just misinterpreting AI ethics, which, as we've seen, is now dominated by a 'law-concept of ethics'? Is today's legalistic approach to ethics entirely incorrect? No, not at all. 



The problem is that principles, norms, and values, the legal conception of ethics so prevalent in AI ethics today, are an end of ethics rather than ethics itself. 


The word "end" has two meanings in this context. 

First, it is an end of ethics in the sense that it is the last destination of ethics, i.e., moulding laws, choices, behaviors, and acts in ways that are consistent with society's ideals. 

Ethics may be defined as the creation of principles (as in the AI HLEG criteria) or the application of ethical principles, values, or standards to particular situations. 

This process of operationalizing ethical standards may be observed, for example, in the ethics review procedure of the European Commission's research funding programme, or in ethics impact assessments, which examine how a new technique or technology could affect ethical norms and values. 

These are unquestionably worthwhile endeavors that have a beneficial influence on society and people. 


Ethics, as the development of principles, is also useful in shaping policies and regulatory frameworks. 


The AI HLEG guidelines are closely tied to current policy and legislative developments at the EU level, such as the European Commission's "White Paper on Artificial Intelligence" (February 2020) and the European Parliament's proposed "Framework of ethical aspects of artificial intelligence, robotics, and related technologies" (April 2020). 

Ethics clearly lays forth the rights and wrongs, as well as what should be done and what should be avoided. 

It is important to recall, however, that ethics as ethical principles is also an end of ethics in another sense: the point where it comes to a halt, where thinking is paused, and where this never-ending attention stops. 

As a result, when ethics is reduced to a collection of principles, norms, or criteria, it has reached its end. 

There is no need for ethics if we have attained a sufficient degree of certainty and confidence in what are the correct judgments and acts. 



Ethics is about navigating muddy and dangerous seas while being vigilant. 


In the realm of AI, for example, ethical standards do not, by themselves, assist in the practical exploration of difficult topics such as fairness in extremely complex socio-technical systems. 


These must be thoroughly studied to ensure that we are not putting in place systems that violate deeply held norms and beliefs. 
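As a minimal, hypothetical sketch (not from the essay; all data below are invented), even one narrow aspect of fairness can be formalized in competing ways, and a system can satisfy one formalization while violating another, which is part of why principles alone cannot settle the question:

```python
# Hypothetical sketch: two common formalizations of group fairness can disagree.
# Each entry is (true_label, prediction) for an individual; all data are invented.

group_a = [(1, 1), (1, 1), (1, 1), (0, 0), (0, 0), (0, 0)]
group_b = [(1, 1), (1, 0), (1, 0), (0, 1), (0, 1), (0, 0)]

def positive_rate(pairs):
    # Share of individuals receiving the favorable prediction (demographic parity).
    return sum(pred for _, pred in pairs) / len(pairs)

def false_negative_rate(pairs):
    # Share of truly positive individuals wrongly denied (one facet of equalized odds).
    positives = [(y, p) for y, p in pairs if y == 1]
    return sum(1 for y, p in positives if p == 0) / len(positives)

print(positive_rate(group_a), positive_rate(group_b))              # 0.5 vs 0.5 -> parity holds
print(false_negative_rate(group_a), false_negative_rate(group_b))  # 0.0 vs ~0.67 -> odds violated
```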

Ethics is made worthless without a continual process of challenging what is or may be clear, of probing behind what seems to be resolved, and of keeping this inquiry alive. 

Settling ethics into established norms and principles therefore brings the ethical process itself to an end. 

In light of AI's profound, massive, and broad influence on society, it is vital to keep ethics nimble and alive. 

The ongoing renewal process of examining the world, and the glasses through which we experience it, intentionally, consistently, and iteratively, is critical to AI ethics.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


AI ethics, law of AI, regulation of AI, ethics washing, EU HLEG on AI, ethical principles







Further Reading:



  • Anscombe, GEM (1958) Modern moral philosophy. Philosophy 33(124): 1–19.
  • European Committee for Standardization (2017) CEN Workshop Agreement: Ethics assessment for research and innovation – Part 2: Ethical impact assessment framework (by the SATORI project). Available at: https://satoriproject.eu/media/CWA17145-23d2017 .
  • Boddington, P (2017) Towards a Code of Ethics for Artificial Intelligence. Cham: Springer.
  • European Parliament JURI (April 2020) Framework of ethical aspects of artificial intelligence, robotics and related technologies, draft report (2020/2012(INL)). Available at: https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?lang=&reference=2020/2012 .
  • Gilligan, C (1982) In a Different Voice: Psychological Theory and Women’s Development. Cambridge: Harvard University Press.
  • Floridi, L (2019) Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology 32: 185–193.
  • Greene, D, Hoffmann, A, Stark, L (2019) Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii international conference on system sciences, Maui, Hawaii, 2019, pp.2122–2131.
  • Hagendorff, T (2020) The ethics of AI ethics. An evaluation of guidelines. Minds and Machines 30: 99–120.
  • Jansen P, Brey P, Fox A, Maas J, Hillas B, Wagner N, Smith P, Oluoch I, Lamers L, van Gein H, Resseguier A, Rodrigues R, Wright D, Douglas D (2019) Ethical analysis of AI and robotics technologies. August, SIENNA D4.4, https://www.sienna-project.eu/digitalAssets/801/c_801912-l_1-k_d4.4_ethical-analysis–ai-and-r–with-acknowledgements.pdf
  • High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Jobin, A, Ienca, M, Vayena, E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389–399.
  • Laugier, S (2013) The will to see: Ethics and moral perception of sense. Graduate Faculty Philosophy Journal 34(2): 263–281.
  • Klöver, C, Fanta, A (2019) No red lines: Industry defuses ethics guidelines for artificial intelligence. Available at: https://algorithmwatch.org/en/industry-defuses-ethics-guidelines-for-artificial-intelligence/
  • López, JJ, Lunau, J (2012) ELSIfication in Canada: Legal modes of reasoning. Science as Culture 21(1): 77–99.
  • Rodrigues, R, Rességuier, A (2019) The underdog in the AI ethical and legal debate: Human autonomy. In: Ethics Dialogues. Available at: https://www.ethicsdialogues.eu/2019/06/12/the-underdog-in-the-ai-ethical-and-legal-debate-human-autonomy/
  • Ochigame, R (2019) The invention of “Ethical AI” how big tech manipulates academia to avoid regulation. The Intercept. Available at: https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/?comments=1
  • Mittelstadt, B (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1: 501–507.
  • Tronto, J (1993) Moral Boundaries: A Political Argument for an Ethic of Care. New York: Routledge.
  • Wagner, B (2018) Ethics as an escape from regulation: From ethics-washing to ethics-shopping. In: Bayamlioglu, E, Baraliuc, I, Janssens, L, et al. (eds) Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen. Amsterdam: Amsterdam University Press, pp. 84–89.



Artificial Intelligence - What Is The Stop Killer Robots Campaign?

 



The Campaign to Stop Killer Robots is a non-profit coalition devoted to mobilizing and campaigning against the development and deployment of lethal autonomous weapon systems (LAWS).

The campaign's main concern is that armed robots making life-or-death decisions undermine legal and ethical restraints on violence in human conflicts.

Advocates for LAWS argue that these technologies are consistent with existing weapons and regulations, such as cruise missiles that are programmed and fired by humans to seek out and destroy a specific target.

Advocates also say that robots are completely reliant on people, that they are bound by their design and must perform the behaviors assigned to them, and that with appropriate monitoring, they may save lives by substituting for humans in hazardous situations.


The Campaign to Stop Killer Robots dismisses responsible usage as a viable option, stating fears that the development of LAWS could result in a new arms race.


The campaign underlines the danger of losing human control over the use of lethal force in situations where armed robots identify and eliminate a threat before human intervention is feasible.

Human Rights Watch, an international nongovernmental organization (NGO) that promotes fundamental human rights and investigates violations of those rights, organized and managed the campaign, which was officially launched on April 22, 2013, in London, England.


Many member groups make up the Campaign to Stop Killer Robots, including the International Committee for Robot Arms Control and Amnesty International.


A steering group and a worldwide coordinator are in charge of the campaign's leadership.

As of 2018, the steering committee consists of eleven non-governmental organizations.

Mary Wareham, who formerly headed international efforts to ban land mines and cluster bombs, is the campaign's worldwide coordinator.

Efforts to ban armed robots, like those to ban land mines and cluster bombs, concentrate on their potential to inflict needless suffering and indiscriminate damage to humans.


International prohibitions of such weapons are coordinated through the United Nations Convention on Certain Conventional Weapons (CCW), which originally went into force in 1983.




Because the CCW has yet to agree on a ban on armed robots, and because the CCW lacks any mechanism for enforcing agreed-upon restrictions, the Campaign to Stop Killer Robots calls for the inclusion of LAWS in the CCW.

The Campaign to Stop Killer Robots also promotes the adoption of new international treaties to implement more preemptive restrictions.

The Campaign to Stop Killer Robots offers tools for educating and mobilizing the public, including multimedia databases, campaign reports, and a mailing list, in addition to lobbying governing authorities for treaty and convention prohibitions.

The Campaign also seeks the participation of technology businesses, asking them to voluntarily refuse to participate in the development of LAWS.

The @BanKillerRobots account on Twitter is where the Campaign keeps track of and broadcasts the names of companies that have pledged not to engage in the creation or marketing of intelligent weapons.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics; Lethal Autonomous Weapons Systems.


Further Reading


Baum, Seth. 2015. “Stopping Killer Robots and Other Future Threats.” Bulletin of the Atomic Scientists, February 22, 2015. https://thebulletin.org/2015/02/stopping-killer-robots-and-other-future-threats/.

Campaign to Stop Killer Robots. 2020. https://www.stopkillerrobots.org/.

Carpenter, Charli. 2016. “Rethinking the Political / -Science- / Fiction Nexus: Global Policy Making and the Campaign to Stop Killer Robots.” Perspectives on Politics 14, no. 1 (March): 53–69.

Docherty, Bonnie. 2012. Losing Humanity: The Case Against Killer Robots. New York: Human Rights Watch.

Garcia, Denise. 2015. “Killer Robots: Why the US Should Lead the Ban.” Global Policy 6, no. 1 (February): 57–63.


Artificial Intelligence - Ethics Of Autonomous Weapons Systems.

 



Autonomous weapons systems (AWS) are armaments that are designed to make judgments without the constant input of their programmers.

Navigation, target selection, and when to attack opposing fighters are just a few of the decisions that must be made.

Because of the imminence of this technology, numerous ethical questions and arguments have arisen regarding whether it should be developed and how it should be utilized.

The technology's seeming inevitability prompted Human Rights Watch to launch the Campaign to Stop Killer Robots in 2013, which pushes for universal bans on their usage.

This movement continues today.

Other academics and military strategists point to AWS' strategic and resource advantages as reasons for continuing to develop and use them.

A discussion of whether it is desirable or feasible to construct an international agreement on their development and/or usage is central to this argument.

Those who advocate for further technological advancement in these areas focus on the advantages that a military power can gain from using AWS.

These technologies have the potential to reduce collateral damage and battle casualties, minimize needless risk, make military operations more efficient, lessen the psychological harm of war to troops, and allow armies to operate with fewer personnel.

In other words, they concentrate on the advantages of the weapon to the military that will use it.

The essential assumption in these discussions is that the military's aims are morally worthwhile in and of themselves.

AWS may result in fewer civilian deaths because the systems can make judgments faster than humans; however, this is not guaranteed, as the decision-making procedures of AWS could instead produce higher civilian casualties.

However, if they can avoid civilian fatalities and property damage more effectively than conventional fighting, they are more efficient and hence preferable.

In times of conflict, they might also improve efficiency by minimizing resource waste.

Transportation of people and the resources required to keep them alive is a time-consuming and challenging part of battle.

AWS provides a solution to complex logistical issues.

Drones and other autonomous systems don't need rain gear, food, drink, or medical attention, making them less cumbersome and perhaps more successful in completing their objectives.

AWS are considered as eliminating waste and offering the best possible outcome in a combat situation in these and other ways.

The employment of AWS in military operations is inextricably linked to Just War Theory.

Just War Theory examines whether it is morally acceptable or essential for a military force to engage in war, as well as what activities are ethically justifiable during wartime.

If an autonomous system may be used in a military strike, it can only be done if the attack is justifiable in the first place.

According to this viewpoint, the manner in which one is killed is less important than the justification for one's death.

Those who believe AWS is unethical concentrate on the hazards that such technology entails.

These include scenarios in which enemy combatants obtain the weaponry and use it against the military power that deployed it, as well as scenarios involving increased (and uncontrollable) collateral damage, reduced capability to retaliate against enemy aggressors, and loss of human dignity.

One key concern is whether being killed by a computer, without a human as the final decision-maker, is consistent with human dignity.

There appears to be something demeaning about being killed by an AWS with minimal human involvement.

Another key worry is risk: if the AWS is brought down (whether by malfunction or enemy attack), it may be seized and used against its owner.

Those who oppose the use of AWS are likewise concerned about the concept of just war.

The targeting of civilians by military agents is expressly prohibited under Just War Theory; the only lawful military targets are other military bases or personnel.

However, the introduction of autonomous weapons may imply that a state, particularly one without access to AWS, may be unable to react to military attacks launched by AWS.

In a scenario where one side has access to AWS but the other does not, the side without the weapons will inevitably be without a legal military target, forcing them to either target nonmilitary (civilian) targets or not react at all.

Neither alternative is feasible in terms of ethics or practicality.

Because automated weaponry is widely assumed to be on the horizon, another ethical consideration is how to regulate its use.

Because of the United States' extensive use of remote control drones in the Middle East, this debate has gotten a lot of attention.

Some advocate for a worldwide ban on the technology; although this is often seen as unrealistic and hence impractical, these advocates frequently point to the UN's prohibition on blinding lasers, which has been ratified by 108 countries.

Others want to create an international convention that controls the proper use of these technologies, with consequences and punishments for nations that break these standards, rather than a full prohibition.

There is currently no such agreement, and each state must decide how to govern the usage of these technologies on its own.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Battlefield AI and Robotics; Campaign to Stop Killer Robots; Lethal Autonomous Weapons Systems; Robot Ethics.



Further Reading

Arkin, Ronald C. 2010. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4: 332–41.

Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, UK: Cambridge University Press.

Killmister, Suzy. 2008. “Remote Weaponry: The Ethical Implications.” Journal of Applied Philosophy 25, no. 2: 121–33.

Leveringhaus, Alex. 2015. “Just Say ‘No!’ to Lethal Autonomous Robotic Weapons.” Journal of Information, Communication, and Ethics in Society 13, no. 3–4: 299–313.

Sparrow, Robert. 2016. “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.” Ethics & International Affairs 30, no. 1: 93–116.




