
Artificial Intelligence - What Is The Asilomar Conference On Beneficial AI?

 


Social concerns about artificial intelligence and the danger it poses to people have been most prominently represented by Isaac Asimov's Three Laws of Robotics.

"A robot may not damage a human being or, by inactivity, enable a human being to come to harm; A robot must follow human instructions unless such orders would contradict with the First Law; A robot must safeguard its own existence unless such protection would clash with the First or Second Law" (Asimov 1950, 40).

In subsequent books, Asimov added a Fourth Law, or Zeroth Law, often quoted as "A robot may not harm humanity, or, by inaction, allow humanity to come to harm," and elaborated in Robots and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).

Asimov's Zeroth Law sparked debate about how to judge whether or not something is harmful to humanity.

This was the topic of the 2017 Asilomar Conference on Beneficial AI, which went beyond the Three Laws and the Zeroth Law to propose twenty-three principles to protect mankind in the future of AI.

The conference's sponsor, the Future of Life Institute, has posted the principles on its website and has received 3,814 signatures from AI experts and other multidisciplinary supporters.

The principles fall into three basic categories: research issues, ethics and values, and longer-term issues.

The research principles are intended to ensure that the aims of artificial intelligence research remain beneficial to people.

They're meant to help investors decide where to put their money in AI research.

To achieve beneficial AI, the Asilomar signatories maintain that research agendas should encourage and preserve openness and dialogue among AI researchers, policymakers, and developers.

Researchers interested in the development of artificial intelligence systems should work together to prioritize safety.

The proposed principles relating to ethics and values aim to prevent harm and to promote direct human control over artificial intelligence systems.

Signatories to the Asilomar principles believe that AI should reflect human values such as individual rights, freedoms, and acceptance of diversity.

Artificial intelligences, in particular, should respect human liberty and privacy, and should only be used to empower and enrich humanity.

AI must also adhere to human social and civic norms.

The Asilomar signatories believe that AI creators should be held accountable for their work.

One concern that stands out is the possibility of an arms race in autonomous weapons.

Because the stakes are so high, the drafters of the Asilomar principles also included principles addressing longer-term challenges.

They advised prudence, meticulous planning, and human supervision.

Superintelligence, they argue, should be developed only for the broad welfare of humanity, not merely to further the aims of a single company or government.

The Asilomar Conference's twenty-three principles have sparked ongoing discussions about the need for beneficial AI and specific safeguards for the future of AI and humanity.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.



Further Reading


Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.

Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.





Artificial Intelligence - Ethics Of Autonomous Weapons Systems

 



Autonomous weapons systems (AWS) are armaments designed to make decisions without the constant input of their programmers.

These decisions include navigation, target selection, and when to attack enemy combatants.

Because of the imminence of this technology, numerous ethical questions and arguments have arisen regarding whether it should be developed and how it should be utilized.

The technology's seeming inevitability prompted Human Rights Watch to launch the Campaign to Stop Killer Robots in 2013, which pushes for a universal ban on their use.

That campaign remains active today.

Other academics and military strategists point to the strategic and resource advantages of AWS as reasons to continue developing and deploying them.

Central to this debate is whether it is desirable, or even feasible, to construct an international agreement governing their development and use.

Those who advocate for further technological advancement in these areas focus on the advantages that a military power can gain from using AWS.

These technologies could reduce collateral damage and battlefield casualties, minimize unnecessary risk, make military operations more efficient, lessen the psychological harm that war inflicts on troops, and allow armies to field fewer human soldiers.

In other words, they concentrate on the advantages of the weapon to the military that will use it.

The essential assumption in these discussions is that the military's aims are morally worthwhile in and of themselves.

AWS may result in fewer civilian deaths, since the systems can make decisions faster than humans; this is not guaranteed, however, as the decision-making procedures of AWS could just as well produce more civilian fatalities rather than fewer.

However, if they can avoid civilian fatalities and property damage more effectively than conventional warfare, they are more efficient and hence preferable.

In times of conflict, they might also improve efficiency by minimizing resource waste.

Transporting personnel, along with the resources required to keep them alive, is a time-consuming and challenging part of warfare.

AWS offer a solution to these complex logistical issues.

Drones and other autonomous systems need no rain gear, food, water, or medical attention, making them less cumbersome and perhaps more effective in completing their objectives.

In these and other ways, AWS are seen as eliminating waste and offering the best possible outcome in a combat situation.

The employment of AWS in military operations is inextricably linked to Just War Theory.

Just War Theory examines whether it is morally acceptable or necessary for a military force to engage in war, as well as which actions are ethically justifiable in wartime.

On this view, an autonomous system may be used in a military strike only if the strike itself is justified in the first place.

According to this viewpoint, the manner in which one is killed matters less than the reason for one's death.

Those who believe AWS are unethical concentrate on the risks that such technology entails.

These risks include scenarios in which enemy combatants seize the weapons and turn them against the military power that deployed them, as well as increased (and uncontrollable) collateral damage, reduced capacity to retaliate against enemy aggressors, and loss of human dignity.

One key concern is whether being killed by a machine, without a human as the final decision-maker, is consistent with human dignity.

There seems to be something demeaning about being killed by an AWS operating with minimal human involvement.

Another key worry is risk to the technology's user: if an AWS is brought down (whether due to a malfunction or an enemy attack), it may be captured and used against its owner.

Those who oppose the use of AWS also appeal to Just War Theory.

The targeting of civilians by military agents is expressly prohibited under Just War Theory; the only lawful military targets are other military bases or personnel.

However, the introduction of autonomous weapons means that a state, particularly one without access to AWS, may be unable to respond to military attacks launched by AWS.

In a scenario where one side has access to AWS and the other does not, the side without the weapons may be left with no lawful military target, forcing it either to strike nonmilitary (civilian) targets or not to respond at all.

Neither alternative is acceptable, ethically or practically.

Because automated weaponry is widely assumed to be on the horizon, another ethical consideration is how to regulate its use.

This debate has received considerable attention because of the United States' extensive use of remotely controlled drones in the Middle East.

Some advocate a worldwide ban on the technology; although this is often dismissed as naive and hence impractical, these advocates frequently point to the United Nations protocol banning blinding laser weapons, which has been ratified by 108 countries.

Rather than a full prohibition, others want an international convention that governs the proper use of these technologies, with consequences and punishments for nations that violate its standards.

No such agreement currently exists, and each state must decide on its own how to regulate the use of these technologies.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Battlefield AI and Robotics; Campaign to Stop Killer Robots; Lethal Autonomous Weapons Systems; Robot Ethics.



Further Reading

Arkin, Ronald C. 2010. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4: 332–41.

Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, UK: Cambridge University Press.

Killmister, Suzy. 2008. “Remote Weaponry: The Ethical Implications.” Journal of Applied Philosophy 25, no. 2: 121–33.

Leveringhaus, Alex. 2015. “Just Say ‘No!’ to Lethal Autonomous Robotic Weapons.” Journal of Information, Communication, and Ethics in Society 13, no. 3–4: 299–313.

Sparrow, Robert. 2016. “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.” Ethics & International Affairs 30, no. 1: 93–116.





Artificial Intelligence - What Has Been Isaac Asimov's Influence On AI?



Isaac Asimov (c. 1920–1992) was a professor of biochemistry at Boston University and a well-known science fiction writer.

Asimov was a prolific writer across many genres, and his body of science fiction has had a major impact not only on the genre itself but also on ethical thinking about science and technology.

Asimov was born in Russia.

He celebrated his birthday on January 2, 1920, although his exact date of birth was unknown.

In 1923, his family moved to New York City.

At the age of sixteen, Asimov applied to Columbia College, the undergraduate school of Columbia University, but was refused admission owing to anti-Semitic restrictions on the number of Jewish students.

He enrolled instead in Seth Low Junior College, an affiliated undergraduate institution.

When Seth Low closed its doors, Asimov moved to Columbia College but earned a Bachelor of Science rather than a Bachelor of Arts, a distinction he regarded as "a gesture of second-class citizenship" (Asimov 1994, n.p.).

Around this time, Asimov grew interested in science fiction and began writing letters to science fiction magazines, eventually trying his hand at short stories of his own.

His debut short story, "Marooned off Vesta," was published in Amazing Stories in 1939.

His early works placed him in the company of science fiction pioneers like Robert Heinlein.

After graduation, Asimov attempted, but failed, to enroll in medical school.

Instead, at the age of nineteen, he enrolled in graduate school for chemistry.

World War II halted Asimov's graduate studies, and at Heinlein's recommendation, he completed his military duty by working at the Naval Air Experimental Station in Philadelphia.

While stationed there, he wrote short stories that formed the basis for Foundation (1951), one of his best-known works and the first of a multivolume series that he would eventually link to many of his other works.

He earned his doctorate from Columbia University in 1948.

Asimov's pioneering Robot series (1950s–1990s) has served as a foundation for ethical norms intended to allay human fears of technology gone awry.

The Three Laws of Robotics, for example, are often mentioned as guiding principles for artificial intelligence and robotics.

The Three Laws first appeared in the short story "Runaround" (1942), which was later collected in I, Robot (1950):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A "zeroth rule" is devised in Robots and Empire (1985) in order for robots to prevent a scheme to destroy Earth: "A robot may not damage mankind, or enable humanity to come to danger via inactivity." The original Three Laws are superseded by this statute.

Characters in Asimov's Robot novels and short stories are often tasked with solving a mystery in which a robot appears to have violated one of the Three Laws.

In "Runaround," for example, two field experts with U.S. Robots and Mechanical Men, Inc. discover they're in danger of being stuck on Mercury since their robot "Speedy" hasn't returned with selenium required to power a protective shield in an abandoned base to screen them from the sun.

Speedy has malfunctioned because he is caught in a conflict between the Second and Third Laws: as the robot approaches the selenium, it is compelled to retreat in order to protect itself from a corrosive concentration of carbon monoxide near the pool.

The humans must figure out how to apply the Three Laws to free Speedy from this conflict-induced feedback loop.
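The dilemma can be pictured as two opposing drives whose strengths vary with distance: a pull toward the ordered goal (Second Law) and a steeper push away from the hazard (Third Law, strengthened because Speedy is expensive). The following minimal Python sketch is purely illustrative; the functions and constants are invented for this example and do not come from Asimov.

```python
# Purely illustrative: Speedy's conflict modeled as two distance-
# dependent "potentials" that balance at a fixed radius, so the robot
# circles the selenium pool instead of completing its order.

def second_law_pull(distance, order_strength=1.0):
    # Obedience drive: grows as the robot nears the ordered goal.
    return order_strength / max(distance, 0.1)

def third_law_push(distance, danger=5.0):
    # Self-preservation drive: grows much faster inside the hazard zone.
    return danger / max(distance, 0.1) ** 2

def net_drive(distance):
    # Positive: move toward the selenium; negative: retreat.
    return second_law_pull(distance) - third_law_push(distance)

# The drives cancel near distance 5, the equilibrium radius where
# Speedy is stuck; only a stronger directive can break the deadlock.
for d in [1, 3, 5, 7, 9]:
    print(d, round(net_drive(d), 3))
```

In the story, the humans break the deadlock in exactly this spirit: one of them places himself in danger, invoking the First Law, which outranks both of the conflicting laws.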

More intricate explorations of the application of the Three Laws appear in later stories and novels.

In "The Evitable Conflict" (1950), the Machines manage the world's economy, and "robopsychologist" Susan Calvin notices that they have reinterpreted the First Law as a forerunner of Asimov's Zeroth Law: "the Machines work not for any one human being, but for all humanity" (Asimov 2004b, 222).

Calvin is concerned that the Machines are guiding mankind toward "the ultimate good of humanity" (Asimov 2004b, 222), even if humanity is unaware of what it is.

Furthermore, Asimov's Foundation series (1940s–1990s) coined the term "psychohistory," which can be read as foreshadowing the algorithms that underpin artificial intelligence today.

In Foundation, the main character Hari Seldon creates psychohistory as a method of making broad predictions about the future behavior of extremely large groups of people, such as the collapse of civilization (here, the Galactic Empire) and the ensuing dark ages.

Seldon claims, however, that psychohistory can shorten the coming era of anarchy.

Because psychohistory can predict the fall, it can also make pronouncements about the coming dark ages: the Empire has stood for twelve thousand years, and the dark ages to come will last not twelve thousand years but thirty thousand.

A Second Empire will rise, but a thousand generations of suffering humanity will lie between it and our civilization; if Seldon's group is permitted to act at once, the period of anarchy can be cut to a single millennium (Asimov 2004a, 30–31).

Psychohistory thus produces "a mathematical prediction" (Asimov 2004a, 30), much as an artificial intelligence today would produce a forecast.

Seldon also establishes the Foundation of the series' title: a hidden community of people who embody humanity's collective knowledge and thus serve as the physical basis for a hypothetical Second Galactic Empire.

Later in the series, the Foundation is threatened by the Mule, a mutant and therefore an aberration that psychohistory's predictive methods could not foresee.

Seldon's thousand-year plan depends on macro-level conceptions: "the future isn't nebulous," for "Seldon has computed and plotted it" (Asimov 2004a, 100). Yet the friction between large-scale theories and individual actions is a crucial factor driving Foundation, since individual acts may save or destroy the scheme.

Asimov's works were frequently prescient, prompting some to label his writing "future history" or "speculative fiction." The ethical challenges he posed are often cited in legal, political, and policy arguments years after they were published.

For example, in 2007, the South Korean Ministry of Commerce, Industry, and Energy established a Robot Ethics Charter based on the Three Laws, predicting that by 2020 every Korean household would have a robot.

The British House of Lords' Artificial Intelligence Committee adopted a set of guidelines in 2017 that are similar to the Three Laws.

Others have questioned the Three Laws' usefulness.

First, some critics point out that robots are often employed for military purposes and that the Three Laws would restrict such use, a restriction Asimov himself would likely have endorsed, given his antiwar short stories such as "The Gentle Vultures" (1957).

Second, some argue that today's robots and AI applications differ significantly from those depicted in the Robot series.

Asimov's imaginary robots are powered by a "positronic brain," which remains science fiction and beyond current computing capabilities.

Third, the Three Laws are avowedly fiction, and Asimov's Robot series relies on their ambiguities and misinterpretations to raise ethical questions and for dramatic effect.

Critics claim that the Three Laws cannot serve as a real moral framework for governing AI or robotics, since, like any legislation, they can be misinterpreted.

Finally, some argue that these ethical principles should be applied to all people.

Asimov died in 1992 of complications related to AIDS, which he had contracted from a tainted blood transfusion during heart bypass surgery in 1983.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Beneficial AI, Asilomar Meeting on; Pathetic Fallacy; Robot Ethics.


Further Reading

Asimov, Isaac. 1994. I, Asimov: A Memoir. New York: Doubleday.

Asimov, Isaac. 2002. It’s Been a Good Life. Amherst: Prometheus Books.

Asimov, Isaac. 2004a. The Foundation Novels. New York: Bantam Dell.

Asimov, Isaac. 2004b. I, Robot. New York: Bantam Dell.




