
Artificial Intelligence - Who Is Hans Moravec?

 




Hans Moravec (1948–) is well known in the computer science community as the long-time head of Carnegie Mellon University's Robotics Institute and an unashamed technological optimist.

For the last twenty-five years, he has studied and produced artificially intelligent robots at the CMU lab, where he is still an adjunct faculty member.

Moravec spent almost 10 years as a research assistant at Stanford University's groundbreaking Artificial Intelligence Lab before coming to Carnegie Mellon.

Moravec is also noted for his paradox, which states that, contrary to popular belief, it is relatively simple to program high-level reasoning skills into machines (as with chess or Jeopardy!) but difficult to give them sensorimotor agility.

Human sensory and motor abilities have developed over millions of years and seem to be easy, despite their complexity.

Higher-order cognitive abilities, on the other hand, are the result of more recent cultural development.

Geometry, stock market research, and petroleum engineering are examples of disciplines that are difficult for people to learn but easier for machines to learn.

"The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard," writes Steven Pinker of Moravec's scientific career.

Moravec built his first toy robot out of scrap metal when he was eleven years old, and his light-following electronic turtle and a robot operated by punched paper tape earned him two high school science fair honors.

He proposed a Ship of Theseus-like analogy for the viability of artificial brains while still in high school.

Consider replacing a person's human neurons one by one with precisely manufactured equivalents, he said.

At what point would human awareness vanish? Would anyone notice? Could it be established that the person was no longer human? Later in his career, Moravec would suggest that human knowledge and training might be broken down in the same manner, into subtasks that machine intelligences could take over.

Moravec's master's thesis focused on the development of a computer language for artificial intelligence, while his PhD research focused on the development of a robot that could navigate obstacle courses utilizing spatial representation methods.

These robot vision systems identified regions of interest (ROIs) in a scene.

Moravec's early computer vision robots were extremely sluggish by today's standards, taking around five hours to go from one half of the facility to the other.

To measure distance and develop an internal picture of physical impediments in the room, a remote computer carefully analyzed continuous video-camera images recorded by the robot from various angles.

Moravec finally developed 3D occupancy grid technology, which allowed a robot to create an awareness of a cluttered area in a matter of seconds.
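The central idea of an occupancy grid can be shown in a short sketch. The code below is an illustrative toy, not Moravec's original formulation: the grid size, update increments, and the helper functions update_cell and integrate_beam are all assumptions made for the example.

```python
import numpy as np

# Toy occupancy grid: each cell stores the log-odds that it is occupied.
GRID_SIZE = (50, 50)
log_odds = np.zeros(GRID_SIZE)
L_OCC, L_FREE = 0.9, -0.4          # illustrative evidence increments

def update_cell(cell, occupied):
    """Accumulate evidence for one cell (Bayesian log-odds update)."""
    r, c = cell
    log_odds[r, c] += L_OCC if occupied else L_FREE

def integrate_beam(free_cells, hit_cell):
    """Fold a single range measurement into the grid."""
    for cell in free_cells:                # cells the beam passed through
        update_cell(cell, occupied=False)
    update_cell(hit_cell, occupied=True)   # cell where the beam stopped

# One beam passes through (10,10)..(10,13) and hits an obstacle at (10,14).
integrate_beam([(10, 10), (10, 11), (10, 12), (10, 13)], (10, 14))
prob_occupied = 1.0 - 1.0 / (1.0 + np.exp(log_odds))
print(round(prob_occupied[10, 14], 2))     # about 0.71 after one hit
```

Each new sensor reading nudges the affected cells toward occupied or free, which is why a cluttered room can be mapped incrementally and quickly as measurements accumulate.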

Moravec's lab took on a new challenge by converting a Pontiac Trans Sport minivan into one of the world's first road-ready autonomous cars.

The self-driving minivan reached speeds of up to 60 miles per hour.

DANTE II, a robot capable of going inside the crater of an active volcano on Mount Spurr in Alaska, was also constructed by the CMU Robotics Institute.

While DANTE II's immediate aim was to sample harmful fumarole gases, a job too perilous for humans, it was also planned to demonstrate technologies for robotic expeditions to distant worlds.

The volcanic explorer robot used artificial intelligence to navigate the perilous, boulder-strewn terrain on its own.

Because such rovers produced so much visual and other sensory data that had to be analyzed and managed, Moravec believes that experience with mobile robots spurred the development of powerful artificial intelligence and computer vision methods.

For the National Aeronautics and Space Administration (NASA), Moravec's team built fractal branching ultra-dexterous robots ("Bush robots") in the 1990s.

These robots, which were proposed but never produced due to the lack of necessary manufacturing technologies, consisted of a branching hierarchy of dynamic articulated limbs, starting with a main trunk and splitting into successively smaller branches.

As a result, the Bush robot would have "hands" at all scales, from macroscopic to tiny.

The tiniest fingers would be nanoscale in size, allowing them to grip very tiny objects.

Because of the intricacy of manipulating millions of fingers in real time, Moravec said, the robot would need autonomy and would depend on artificial intelligence agents distributed throughout its limbs and branches.

He believed that the robots might be made entirely of carbon nanotube material using rapid prototyping technology such as 3D printing.

Moravec believes that artificial intelligence will have a significant influence on human civilization.

To stress the role of AI in this change, he coined the concept of the "landscape of human capability," which physicist Max Tegmark later converted into a graphic depiction.

Moravec's picture depicts a three-dimensional landscape in which higher elevations represent tasks that are more difficult for humans.

The point where the rising waters meet the shore marks the tasks at which robots and humans currently perform at about the same level.

Art, science, and literature are still beyond the grasp of AI, but the sea has already submerged mathematics, chess, and the game of Go.

Language translation, autonomous driving, and financial investment are all on the horizon.

More controversially, in two popular books, Mind Children (1988) and Robot: Mere Machine to Transcendent Mind (1999), Moravec engaged in speculation about the future based on his understanding of developments in artificial intelligence research.

He predicted that machine intelligence would surpass human intellect by 2040 and that the human species would eventually go extinct.

Moravec arrived at this figure by estimating a functional equivalence between 50,000 million instructions per second (50,000 MIPS) of computing power and a gram of brain tissue.

He calculated that home computers in the early 2000s equaled only an insect's nervous system, but that if processing power doubled every eighteen months, 350 million years of human intellect development could be reduced to just 35 years of artificial intelligence advancement.

He estimated that a hundred million MIPS would be required to create human-like universal robots.
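The arithmetic behind this extrapolation is easy to reproduce. In the sketch below the starting value of 10 MIPS is an illustrative assumption chosen so that the numbers line up with the thirty-five-year figure quoted above; it is not a value taken directly from Moravec's publications.

```python
import math

# How many 18-month doublings separate an assumed insect-scale machine (10 MIPS)
# from Moravec's estimate of 100 million MIPS for a human-like universal robot?
start_mips = 10
target_mips = 100_000_000
doublings = math.log2(target_mips / start_mips)
years = doublings * 1.5            # one doubling every eighteen months
print(f"{doublings:.1f} doublings, about {years:.0f} years")
```

Roughly twenty-three doublings are needed, which at one doubling every eighteen months works out to about thirty-five years.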

Moravec refers to these sophisticated robots, which he expects by the year 2040, as our "mind children."

Humans, he claims, will devise techniques to delay biological civilization's final demise.

Moravec, for example, was among the first to anticipate what is now known as universal basic income, delivered in his scenario by benign artificial superintelligences.

In a completely automated society, a basic income system would provide monthly cash payments to all individuals without any type of employment requirement.

Moravec is more concerned about the idea of a renegade automated corporation breaking its programming and refusing to pay taxes into the human cradle-to-grave social security system than he is about technological unemployment.

Nonetheless, he predicts that these "wild" intelligences will eventually control the universe.

Moravec has said that his books Mind Children and Robot may have had a direct impact on the last third of Stanley Kubrick's original screenplay for A.I. Artificial Intelligence (later filmed by Steven Spielberg).

In Dan Simmons's science fiction novels Ilium and Olympos, self-replicating robotic beings named in his honor are called "moravecs."

Throughout his life, Moravec has defended the same physicalist position he first expressed in his high school musings.

He contends in his most transhumanist publications that the only way for humans to keep up with machine intelligences is to merge with them by replacing sluggish human cerebral tissue with artificial neural networks governed by super-fast algorithms.

In his publications, Moravec has blended the ideas of artificial intelligence with virtual reality simulation.


He's come up with four scenarios for the development of consciousness.

(1) human brains in the physical world, 

(2) a programmed AI implanted in a physical robot, 

(3) a human brain immersed in a virtual reality simulation, and 

(4) an AI functioning inside the boundaries of a virtual reality simulation. All of these, he argues, are equally credible depictions of reality, and each is as "real" as we believe it to be.


Moravec is the founder and chief scientist of the Pittsburgh-based Seegrid Corporation, which makes autonomous robotic industrial trucks that navigate warehouses and factories without the fixed guidance infrastructure used by traditional automated guided vehicle systems.

A human trainer physically pushes Seegrid's vehicles through a new facility once.

The robot conducts the rest of the job, determining the most efficient and safe pathways for future journeys, while the trainer stops at the appropriate spots for the truck to be loaded and unloaded.

Seegrid VGVs have driven more than two million production miles and moved eight billion pounds of merchandise for DHL, Whirlpool, and Amazon.

Moravec was born in the Austrian town of Kautzen.

During World War II, his father was a Czech engineer who sold electrical products.

When the Russians invaded Czechoslovakia in 1944, the family moved to Austria.

In 1953, his family relocated to Canada, where he grew up.

Moravec earned a bachelor's degree in mathematics from Acadia University in Nova Scotia, a master's degree in computer science from the University of Western Ontario, and a doctorate from Stanford University, where he worked with John McCarthy and Tom Binford on his thesis.

The Office of Naval Research, the Defense Advanced Research Projects Agency, and NASA have all supported his research.

Artificial Intelligence - Who Is Elon Musk?

Elon Musk (1971–) is an engineer, entrepreneur, and inventor who was born in South Africa.

He holds citizenship in South Africa, Canada, and the United States and resides in California.


Despite his controversial personality, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century and an important influencer and contributor to the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of ten, he had taught himself how to program computers, and by the age of twelve, he had produced a video game and sold its source code to a computer magazine.

A keen reader since childhood, Musk has included allusions to some of his favorite novels in SpaceX's Falcon Heavy rocket launch and in Tesla's software.

Musk's formal schooling was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk stayed at Stanford University for only two days in pursuit of a PhD in energy physics before departing to start his first company, Zip2, with his brother Kimbal Musk.

Driven by his diverse interests and goals, Musk has started or cofounded many companies, including three separate billion-dollar enterprises: SpaceX, Tesla, and PayPal.

• Zip2: a web software company that was eventually purchased by Compaq.

• X.com: an online bank that merged with Confinity to become the online payments company PayPal.

• Tesla, Inc.: an electric car and solar panel maker (the latter via its subsidiary SolarCity).

• SpaceX: an aerospace manufacturer and space transportation services provider.

• Neuralink: a neurotechnology startup focused on brain-computer interfaces.

• The Boring Company: an infrastructure and tunnel construction company.

• OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI.

Musk is a supporter of environmentally friendly energy and consumption.

Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and a proposed supersonic electric jet.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time before stepping down when the United States withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Although Musk is best known for his work with Tesla and SpaceX, as well as for his contentious social media pronouncements, his effect on AI is significant.

In 2015, Musk cofounded the charity OpenAI with the objective of creating and supporting "friendly AI," or AI that is created, deployed, and utilized in a manner that benefits mankind as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June 2018, a team of OpenAI-built bots defeated a team of human players in the video game Dota 2, a feat that could only be accomplished through teamwork and collaboration among the bots.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk resigned from the OpenAI board in February 2018 to prevent any conflicts of interest as Tesla advanced its AI work on autonomous driving.

Musk became the CEO of Tesla in 2008 after cofounding the company in 2003 as an investor.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Tesla Grohmann Automation and Solar City, two of its subsidiaries, offer relevant automotive technology and manufacturing services and solar energy services, respectively.

Musk claimed that Tesla would achieve Level 5 autonomous driving capability, as defined by the National Highway Traffic Safety Administration's (NHTSA) five levels of autonomous driving, in 2019.

Tesla's aggressive progress with autonomous driving has influenced conventional carmakers' attitudes toward electric cars and autonomous driving and prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.

Tesla is the only autonomous vehicle maker that has rejected the LIDAR laser sensor (an acronym for light detection and ranging).

Tesla uses cameras, radar, and ultrasonic sensors instead.

Though academics and manufacturers disagree on whether LIDAR is required for fully autonomous driving, the high cost of LIDAR has limited Tesla's rivals' ability to produce and sell vehicles at a price point that would put a large number of data-gathering cars on the road.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla is building its own silicon for artificial-intelligence calculations, allowing the company to construct its own AI processors rather than depending on third-party sources like Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has consistently missed self-imposed deadlines, and serious accidents have been blamed on flaws in the vehicles' Autopilot mode, including a non-injury collision in 2018, in which a vehicle failed to detect a parked fire truck on a California freeway, and a fatal collision in 2018, in which a vehicle failed to detect a pedestrian outside a crosswalk.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computing equipment: whereas people now operate machines with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has made major contributions to AI, his pronouncements regarding the risks linked with the technology have been apocalyptic.

Musk has called AI "humanity's greatest existential danger" (McFarland 2014) and "the greatest peril we face as a civilisation" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent oversight, and a competitive rush to adoption without appropriate analysis of the repercussions.

While Musk has used colorful terminology such as "summoning the devil" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements might come across as alarmist, many important and well-respected figures, including Microsoft cofounder Bill Gates, Swedish-American physicist Max Tegmark, and the late theoretical physicist Stephen Hawking, share his concern.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates for responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI with the goal of better understanding the technology and its hazards before establishing suitable legal limits.


~ Jai Krishna Ponnappan






See also: 


Superintelligence; Technological Singularity; Workplace Automation.



References & Further Reading:


Moravec, Hans. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press.

Moravec, Hans. 1999. Robot: Mere Machine to Transcendent Mind. Oxford, UK: Oxford University Press.

Moravec, Hans. 2003. “Robots, After All.” Communications of the ACM 46, no. 10 (October): 90–97.

Pinker, Steven. 2007. The Language Instinct: How the Mind Creates Language. New York: Harper.




Artificial Intelligence - Lethal Autonomous Weapons Systems.

  




Lethal Autonomous Weapons Systems (LAWS), also known as "lethal autonomous weapons," "robotic weapons," or "killer robots," are unmanned robotic systems that can choose and engage targets autonomously and determine whether or not to employ lethal force.

While human-like robots waging wars or using lethal force against people are common in popular culture (ED-209 in RoboCop, T-800 in The Terminator, etc.), fully autonomous robots are still under development.

LAWS raise serious ethical issues, which are increasingly being contested by AI specialists, NGOs, and the international community.

While the concept of autonomy varies depending on the debate over LAWS, it is often defined as "the capacity to select and engage a target without further human interference after being commanded to do so" (Arkin 2017).


According to their degree of autonomy, LAWS are typically divided into three categories:

1. Weapons with a person in the loop: These weapons can only identify targets and deliver force in response to a human order.

2. Weapons with a person on the loop: These weapons may choose targets and administer force while being monitored by a human supervisor who can overrule their actions.

3. Human-out-of-the-loop weapons: they can choose targets and deliver force without any human involvement or input.

All three categories of unmanned weapons fall under the LAWS umbrella.


The phrase "fully autonomous weapons" applies both to human-out-of-the-loop weapons and to human-on-the-loop weapons (those with supervised autonomy) whose supervision is limited in practice (for example, when a human operator cannot match the system's response time).

Robotic weapons aren't a new concept.

Anti-tank mines, for example, have been frequently utilized since World War II (1939–1945), when they were first activated by a human and then engaged targets on their own.

Furthermore, LAWS cover a wide range of unmanned weapons with varying degrees of autonomy and lethality, ranging from ground mines to remote-controlled Unmanned Combat Aerial Vehicles (UCAVs), also known as combat drones, and fire-and-forget missiles.

To date, the only weapons in use that have total autonomy are "defensive" systems (such as landmines).

Neither completely "offensive" autonomous lethal weapons nor machine learning-based LAWS have been deployed yet.

Even though military research is often kept secret, it is known that a number of nations (including the United States, China, Russia, the United Kingdom, Israel, and South Korea) are significantly investing in military AI applications.

The international AI arms race, which began in the early 2010s, has resulted in a rapid pace of progress in this sector, with fully autonomous lethal weapons on the horizon.

There are numerous obvious forerunners of such weapons.

The MK 15 Phalanx CIWS, for example, is a close-in weapon system capable of autonomously performing search, detection, evaluation, tracking, engagement, and kill assessment duties.

It is primarily used by the US Navy.

Another example is Israel's Harpy, a self-destructing anti-radar "fire-and-forget" drone that is dispatched without a specified target and flies a search pattern before attacking targets.

The deployment of LAWS has the potential to revolutionize combat in the same way as gunpowder and nuclear weapons did earlier.

It would eliminate the distinction between fighters and weaponry, and it would make battlefield delimitation more difficult.

However, LAWS may be linked to a variety of military advantages.

Their employment would undoubtedly be a force multiplier, reducing the number of human warriors on the battlefield.

As a result, military lives would be saved.

Because of their quicker reaction times, their capacity to undertake maneuvers that human fighters cannot (due to physical limitations), and their ability to make more efficient judgments (from a military viewpoint), LAWS may be superior to many conventional weapons in terms of force projection.

The use of LAWS, on the other hand, involves significant ethical and political difficulties.

In addition to violating the "Three Laws of Robotics," the deployment of LAWS might lead to the normalization of deadly force, since armed confrontations would involve fewer and fewer human fighters.

Some argue that LAWS are a danger to mankind in this way.

Concerns about the use of LAWS by non-state organizations and nations in non-international armed situations have also been raised.

Delegating life-or-death choices to computers might be seen as a violation of human dignity.

Furthermore, the capacity of LAWS to comply with the norms of international humanitarian law, particularly the rules of proportionality and military necessity, is frequently contested.

Despite their lack of compassion, others claim that LAWS would not act on emotions such as rage, which can lead to deliberate cruelty such as torture or rape.

Given the difficulty of preventing war crimes, as evidenced by countless incidents in previous armed conflicts, it is even possible to claim that LAWS might commit fewer crimes than human warriors.

The effect of LAWS deployment on noncombatants is also a hot topic of debate.

Some argue that the adoption of LAWS will result in fewer civilian losses (Arkin 2017), since AI may be more efficient in decision-making than human warriors.

Some detractors, however, argue that there is a greater chance of bystanders getting caught in the crossfire.

Furthermore, the capacity of LAWS to adhere to the principle of distinction is a hot topic, since differentiating fighters from civilians may be particularly difficult, especially in non-international armed conflicts and asymmetric warfare.

Because they are not moral actors, LAWS cannot be held liable for any of their conduct.

This lack of responsibility may cause further suffering to war victims.

It may also inspire war crimes to be committed.

However, it is debatable whether moral culpability lies with the authority that chose to deploy a LAWS or with the persons who designed or constructed it.

LAWS have attracted considerable scientific and political interest over the past ten years.

Eighty-seven non-governmental organizations have joined the group that began the "Stop Killer Robots" campaign in 2012.

Civil society mobilizations have emerged from its campaign for a preemptive prohibition on the creation, manufacturing, and use of LAWS.

A statement signed by over 4,000 AI and robotics academics in 2016 called for a ban on LAWS.

Over 240 technology businesses and organizations promised not to engage in or promote the creation, manufacturing, exchange, or use of LAWS in 2018.

Because current international law may not effectively handle the challenges created by LAWS, the UN's Convention on Certain Conventional Weapons launched a consultation process on the subject.

It formed a Group of Governmental Experts (GGE) in 2016.

Due to a lack of consensus and the resistance of certain nations (especially the United States, Russia, South Korea, and Israel), the GGE has yet to establish an international agreement to outlaw LAWS.

However, twenty-six UN member nations have backed the call for a ban on LAWS, and the European Parliament passed a resolution in June 2018 asking for "an international prohibition on weapon systems that lack human supervision over the use of force."

Because there is no example of a technical invention that has gone unused, LAWS will almost certainly be employed in future conflicts.

Nonetheless, there is widespread agreement that humans should be kept "in the loop" and that the use of LAWS should be governed by international and national laws.

However, as the deployment of nuclear and chemical weapons, as well as anti-personnel landmines, has shown, a worldwide legal prohibition on the use of LAWS is unlikely to be observed by all governments and non-state groups.

Artificial Intelligence - Mac Hack.

Mac Hack IV, a 1967 chess program written by Richard Greenblatt, gained notoriety for being the first computer chess program to compete in a chess tournament and to play adequately against humans, obtaining a USCF rating of 1,400 to 1,500.

Greenblatt's software, written in the macro assembly language MIDAS, operated on a DEC PDP-6 computer with a clock speed of 200 kilohertz.

While a graduate student at MIT's Artificial Intelligence Laboratory, he built the software as part of Project MAC.

"Chess is the drosophila [fruit fly] of artificial intelligence," said Russian mathematician Alexander Kronrod, describing the field's chosen experimental organism (quoted in McCarthy 1990, 227).

Creating a champion chess program has been a cherished goal in artificial intelligence since 1950, when Claude Shannon first described chess play as a task for computer programmers.

Chess and games in general involve difficult but well-defined issues with well-defined rules and objectives.

Chess has long been seen as a prime illustration of human-like intelligence.

Chess is a well-defined example of human decision-making in which movements must be chosen with a specific purpose in mind, with limited knowledge and uncertainty about the result.

The processing capability of computers in the mid-1960s severely restricted the depth to which a chess move and its alternative replies could be studied, since the number of possible configurations rises exponentially with each successive reply.

The strongest human players have been shown to examine a small number of moves in great depth rather than a large number of moves shallowly.

Greenblatt aimed to recreate the methods used by good players to locate significant game tree branches.

He created Mac Hack to reduce the number of nodes analyzed while choosing moves by using a minimax search of the game tree along with alpha-beta pruning and heuristic components.

In this regard, Mac Hack's style of play was more human-like than that of later chess computers (such as Deep Thought and Deep Blue), which use the sheer force of high processing rates to study tens of millions of branches of the game tree before making moves.
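The search technique named above can be sketched in a generic form. The function below is an ordinary minimax search with alpha-beta pruning written in modern Python, not a reconstruction of Greenblatt's MIDAS assembly; evaluate, legal_moves, and apply_move are hypothetical game-specific hooks that a caller would supply.

```python
def alphabeta(state, depth, alpha, beta, maximizing, evaluate, legal_moves, apply_move):
    """Minimax search with alpha-beta pruning over a generic game tree."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)                 # heuristic score at the search horizon
    if maximizing:
        best = float("-inf")
        for move in moves:
            child = apply_move(state, move)
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       evaluate, legal_moves, apply_move))
            alpha = max(alpha, best)
            if alpha >= beta:                  # this branch cannot affect the result
                break                          # prune the remaining replies
        return best
    best = float("inf")
    for move in moves:
        child = apply_move(state, move)
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                   evaluate, legal_moves, apply_move))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

Once a single reply refutes a candidate move, the remaining replies to that move are never examined, which is how the number of nodes visited is kept far below the size of the full game tree.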

In a contest hosted by MIT mathematician Seymour Papert in 1967, Mac Hack defeated MIT philosopher Hubert Dreyfus and gained substantial renown among artificial intelligence researchers.

The RAND Corporation published a mimeographed version of Dreyfus's paper, Alchemy and Artificial Intelligence, in 1965, which criticized artificial intelligence researchers' claims and aspirations.

Dreyfus claimed that no computer could ever acquire intelligence since human reason and intelligence are not totally rule-bound, and hence a computer's data processing could not imitate or represent human cognition.

In a part of the paper titled "Signs of Stagnation," Dreyfus highlighted attempts to construct chess-playing computers, among his many critiques of AI.

Mac Hack's victory over Dreyfus was widely seen as vindication by the AI community.

Artificial Intelligence - Machine Learning Regressions.

Machine learning regressions are regression analyses performed with machine learning algorithms.

"Machine learning," a phrase coined by Arthur Samuel in 1959, refers to a kind of artificial intelligence that produces results without requiring explicit programming.

Instead, the system learns from a database on its own and improves over time.

Machine learning techniques have a wide range of applications (e.g., computer vision, natural language processing, autonomous gaming agents, classification, and regressions) and are used in practically every sector due to their resilience and simplicity of implementation (e.g., tech, finance, research, education, gaming, and navigation).

Machine learning algorithms may be generically classified into three learning types: supervised, unsupervised, and reinforcement, notwithstanding their vast range of applications.

Machine learning regressions are an example of supervised learning.

They use algorithms that have been trained on data with labeled continuous numerical outputs.

The quantity of training data and the validation criteria needed to suitably train and verify a regression algorithm depend on the problem being addressed.

For data with comparable input structures, the newly developed predictive models give inferred outputs.

These aren't static models.

They may be updated on a regular basis with new training data or by supplying the actual correct outputs for previously unlabeled inputs.

Despite machine learning methods' generalizability, there is no one program that is optimal for all regression issues.

When choosing the best machine learning regression method for a given problem, there are many factors to consider (e.g., programming languages, available libraries, algorithm types, data size, and data structure).

There are machine learning programs that employ single or multivariable linear regression approaches, much like other classic statistical methods.

These models represent the connections between a single or several independent feature variables and a dependent target variable.

The models provide linear representations of the combined input variables as their output.

These models are only applicable to noncomplex and small data sets.

Polynomial regressions may be used with nonlinear data.

This necessitates the programmers' knowledge of the data structure, which is often the goal of utilizing machine learning models in the first place.

These methods are unlikely to be appropriate for most real-world data, but they give a basic starting point and might provide users with models that are straightforward to understand.
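A minimal sketch of both approaches, using scikit-learn and synthetic data (all values below are made up for illustration), might look like the following.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))                  # one feature variable
y = 2.0 * X[:, 0] ** 2 + 3.0 + rng.normal(0, 1, 100)   # curved (nonlinear) target

linear = LinearRegression().fit(X, y)                  # straight-line fit

poly = PolynomialFeatures(degree=2)                    # expand features to [1, x, x^2]
X_poly = poly.fit_transform(X)
quadratic = LinearRegression().fit(X_poly, y)          # polynomial regression

print("linear R^2:   ", round(linear.score(X, y), 3))
print("quadratic R^2:", round(quadratic.score(X_poly, y), 3))
```

On this deliberately curved data the polynomial fit should score noticeably higher than the straight-line fit, the usual sign that a purely linear model is underfitting.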

Decision trees, as the name implies, are tree-like structures that map the input features/attributes of programs to determine the eventual output goal.

A decision tree algorithm starts with the root node (i.e., an input variable); the answer to that node's condition splits into edges.

An edge that no longer splits terminates in a leaf; one that continues to split leads to an internal node.

For example, age, weight, and family diabetic history might be used as input factors in a dataset of diabetic and nondiabetic patients to estimate the likelihood of a new patient developing diabetes.

The age variable might be used as the root node (e.g., age 40), with the dataset being divided into those who are 40 or older and those who are 39 or younger.

If the next internal node after following the 40-or-older branch asks whether a parent has or had diabetes, and the leaf for an affirmative answer estimates a 60 percent likelihood of developing diabetes, the model returns that leaf as the final output for the new patient.

This is a very basic decision tree that demonstrates the decision-making process.

Thousands of nodes may readily be found in a decision tree.
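A toy version of the diabetes example can be built directly with scikit-learn. The handful of patient records below are fabricated for illustration, and the fitted tree will pick its own split thresholds rather than exactly the age-40 rule described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: age, weight (kg), parent_diabetic (0/1); target: diabetic (0/1).
X = np.array([[35, 70, 0], [52, 95, 1], [44, 88, 1], [29, 60, 0],
              [61, 82, 1], [47, 77, 0], [38, 90, 1], [55, 85, 0]])
y = np.array([0, 1, 1, 0, 1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "weight", "parent_diabetic"]))

# Estimated class probabilities for a new 45-year-old, 92 kg patient
# whose parent has diabetes.
print(tree.predict_proba([[45, 92, 1]]))
```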

Random forest algorithms are, in essence, ensembles of decision trees.

They are made up of hundreds of decision trees, the ultimate outputs of which are the averaged outputs of the individual trees.

Although decision trees and random forests are excellent at learning very complex data structures, they are prone to overfitting.

With adequate pruning (e.g., setting minimum sample limits for splits and leaves) and large enough forests, overfitting may be reduced.
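A short sketch (synthetic data, illustrative parameter values) shows where the number of averaged trees and the pruning-style limits are set.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

forest = RandomForestRegressor(
    n_estimators=200,       # number of decision trees whose outputs are averaged
    min_samples_split=10,   # a node needs at least 10 samples before it may split
    min_samples_leaf=5,     # every leaf must keep at least 5 samples
    random_state=0,
).fit(X, y)

print("training R^2:", round(forest.score(X, y), 3))
```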

Machine learning techniques inspired by the neural connections of the human brain are known as neural networks.


Neurons are the basic unit of neural network algorithms, much as they are in the human brain, and they are organized into numerous layers.

The input layer contains the input variables, the hidden layers contain the intermediate neurons (there may be many hidden layers), and the output layer contains the final neuron.

A single neuron in a feedforward process 

(a) takes the input feature variables, 

(b) multiplies the feature values by a weight, 

(c) adds the resultant feature products, together with a bias variable, and 

(d) passes the sums through an activation function, most often a sigmoid function.
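Those four steps map directly onto a few lines of NumPy. The inputs, weights, and bias below are arbitrary illustrative values rather than anything learned from data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])    # (a) input feature values
w = np.array([0.8, 0.1, -0.4])    # weights for this neuron
b = 0.2                           # bias variable

products = w * x                  # (b) multiply each feature value by its weight
z = products.sum() + b            # (c) add the products together with the bias
activation = sigmoid(z)           # (d) pass the sum through the activation function
print(round(activation, 3))       # value forwarded to every neuron in the next layer
```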


The partial derivative computations of the previous neurons and neural layers are used to alter the weights and biases of each neuron.

Backpropagation is the term for this practice.
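A minimal hand-rolled version of one such update for the single neuron sketched above, assuming a squared-error loss and an illustrative learning rate, looks like this.

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])                # inputs, as in the feedforward sketch
w, b = np.array([0.8, 0.1, -0.4]), 0.2
a = 1.0 / (1.0 + np.exp(-(w @ x + b)))        # forward pass (sigmoid activation)

target = 1.0                                  # labeled output for this training example
dL_da = a - target                            # derivative of 0.5 * (a - target) ** 2
da_dz = a * (1.0 - a)                         # derivative of the sigmoid
grad_w = dL_da * da_dz * x                    # chain rule: partial derivatives per weight
grad_b = dL_da * da_dz                        # partial derivative for the bias

learning_rate = 0.1
w -= learning_rate * grad_w                   # nudge weights against the gradient
b -= learning_rate * grad_b
```

In a multilayer network the same chain-rule computation is repeated layer by layer, carrying the error gradient backward from the output layer toward the input layer.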


The output of the activation function of a single neuron is distributed to all neurons in the next hidden layer or final output layer.

As a result, the projected value is the last neuron's output.

Because neural networks are exceptionally adept at learning exceedingly complicated variable associations, programmers may spend less time restructuring their data.

Neural network models, on the other hand, are difficult to interpret due to their complexity, and the intervariable relationships are largely hidden.

When used on extremely big datasets, neural networks operate best.

They need meticulous hyper-tuning and considerable processing capacity.

For data scientists attempting to comprehend massive datasets, machine learning has become the standard technique.

Machine learning systems are always being improved in terms of accuracy and usability by researchers.

Machine learning algorithms, on the other hand, are only as useful as the data used to train the model.

Poor data produces dramatically erroneous outcomes, while biased data combined with a lack of knowledge deepens societal disparities.

 


Jai Krishna Ponnappan





See also: 


Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics.


Further Reading:



Arkin, Ronald. 2017. “Lethal Autonomous Systems and the Plight of the Non-Combatant.” In The Political Economy of Robots, edited by Ryan Kiggins, 317–26. Basingstoke, UK: Palgrave Macmillan.

Heyns, Christof. 2013. Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions. Geneva, Switzerland: United Nations Human Rights Council. http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf.

Human Rights Watch. 2012. Losing Humanity: The Case against Killer Robots. https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots.

Krishnan, Armin. 2009. Killer Robots: Legality and Ethicality of Autonomous Weapons. Aldershot, UK: Ashgate.

Roff, Heather. M. 2014. “The Strategic Robot Problem: Lethal Autonomous Weapons in War.” Journal of Military Ethics 13, no. 3: 211–27.

Simpson, Thomas W., and Vincent C. Müller. 2016. “Just War and Robots’ Killings.” Philosophical Quarterly 66, no. 263 (April): 302–22.

Singer, Peter. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77. 


Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



Many computer-based systems' most significant feature is their reliability.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be designed purposefully hazardous to people, as with Trojan horses, viruses, and spyware, or they can be dangerous due to human programming or operation errors.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first fatality involving a person working with a robot occurred in 1979.

A one-ton parts-retrieval robot built by Litton Industries hit Ford Motor Company engineer Robert Williams in the head.

Two years later, Japanese engineer Kenji Urada was killed after failing to completely switch off a malfunctioning robot on the production floor at Kawasaki Heavy Industries.

Urada was shoved into a grinding machine by the robot's arm.

Accidents do not always result in deaths.

A 300-pound Knightscope K5 security robot on patrol at a retail business center in Northern California, for example, knocked down a kid and ran over his foot in 2016.

The child sustained only a few cuts and some swelling.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) misled the Strategic Air Command into believing that the Soviet Union had fired over 2,000 nuclear missiles towards the US.

An examination revealed that a training scenario had been uploaded to an active defense computer by mistake.

In 1983, a Soviet military early warning system reported that a single US intercontinental ballistic missile had been launched in a nuclear attack.

Stanislav Petrov, the missile defense system's operator, correctly discounted the signal as a false alarm.

The reason of this and subsequent false alarms was ultimately discovered to be sunlight hitting high altitude clouds.

Despite having averted a potential global thermonuclear war, Petrov was eventually reprimanded for embarrassing his superiors by exposing faults in the system.

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly manipulating an automated system to issue and then cancel huge numbers of sell orders, allowing his business to acquire equities at temporarily reduced prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "Thinking about you"), a Microsoft Corporation artificial intelligence social media chatterbot, went tragically wrong in 2016.

Tay was created by Microsoft engineers to imitate a nineteen-year-old American girl and to learn from Twitter discussions.

Instead, internet trolls taught Tay to use offensive and aggressive language, which it then repeated in tweets.

After barely sixteen hours, Microsoft deleted Tay's account.

More AI-related accidents in motor vehicle operating may occur in the future.

In 2016, the first fatal collision involving a self-driving car happened when a Tesla Model S in autopilot mode collided with a semi-trailer crossing the highway.

The motorist may have been viewing a Harry Potter movie on a portable DVD player when the accident happened, according to witnesses.

Tesla's software does not yet allow for completely autonomous driving, hence a human operator is required.

Despite these dangers, one management consulting company claims that autonomous automobiles might avert up to 90% of road accidents.

Artificial intelligence security is rapidly growing as a topic of cybersecurity study.

Militaries all around the globe are working on prototypes of dangerous autonomous weapons systems.

Weapons such as drones, which currently rely on a human operator to make deadly force decisions about targets, might be replaced with automated systems that make life-and-death decisions on their own.

Robotic decision-makers on the battlefield may one day outperform humans in extracting patterns from the fog of war and reacting quickly and logically to novel or challenging circumstances.

High technology is becoming more and more important in modern civilization, yet it is also becoming more fragile and prone to failure.

An inquisitive squirrel caused the NASDAQ's primary computer to collapse in 1987, bringing one of the world's major stock exchanges to its knees.

In another example, the ozone hole above Antarctica was not discovered for years because exceptionally low levels reported in data-processed satellite images were assumed to be mistakes.

It's likely that the complexity of autonomous systems, as well as society's reliance on them under quickly changing circumstances, will make completely tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of sophisticated artificial intelligence is that it is based on mathematical approaches and deep learning algorithms that are so complicated that even its creators are baffled as to how it makes accurate conclusions.

Autonomous cars, for example, depend on exclusively computer-written instructions while they watch people driving in real-world situations.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to decrease apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones? Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions, as well as examine newly developed cognitive computing systems on our behalf.


~ Jai Krishna Ponnappan




Also see: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.



Artificial Intelligence - Robot Ethics.

     



    What Is Robot Ethics?


    Robot ethics is a branch of technology ethics that studies, clarifies, and addresses the moral possibilities and concerns that come from the design, development, and deployment of robots and other autonomous systems.

    "Robot ethics" is an umbrella phrase that encompasses a number of similar but distinct projects and undertakings.

    The earliest known articulation of a robot ethics may be found in fiction, notably in Isaac Asimov's collection of robot tales, I, Robot (1950).



    Asimov presented the three laws of robotics in the short story "Runaround," which was first published in the March 1942 issue of Astounding Science Fiction:


    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



    (Asimov 1950, 40). In his 1985 book Robots and Empire, Asimov added a fourth law to the sequence, which he called the "zeroth law," numbered so as to maintain the convention that lower-numbered laws take precedence over higher-numbered ones.

    The rules are both functionalist and anthropocentric by design, outlining a series of layered limits on robot conduct in order to protect human persons and communities' interests and well-being.

    Despite this, many have attacked the laws as weak and impracticable as the basis of a true moral code.

    The principles were created by Asimov to create captivating science fiction tales, not to address real-world problems involving machine action and robot behavior.

    As a result, Asimov's rules were never meant to constitute a full and final set of instructions for real robots.

    He used the rules to create dramatic tension, imaginative circumstances, and character struggle in his stories.

    "Asimov's Three Laws of Robotics are literary techniques, not technical concepts," writes Lee McCauley (2007, 160).

    Asimov's rules have been discovered to be severely inadequate for daily practice by theorists and practitioners in the domains of robotics and computer ethics.

    Susan Leigh Anderson tackles this problem head on, showing not only that Asimov himself rejected his laws as a basis for machine ethics, but also that the laws are inadequate as a foundation for an ethical framework or system (Anderson 2008, 487–93).

    As a result, although academics and developers are familiar with the Three Laws of Robotics, they are also aware that the laws are neither computable nor implementable in any meaningful way.

    Beyond Asimov's original science fiction invention, the scientific literature has evolved various variations of robot ethics.

    Robot ethics, roboethics, and robot rights are examples of these.

    Gianmarco Veruggio, a roboticist, coined the term "roboethics" in 2002.

    It was first addressed in public in 2004 at the First International Symposium on Roboethics, and has since been expanded upon and explained in a number of publications.

    "Roboethics is an applied ethics whose goal is to build scientific/cultural/technical instruments that may be shared by diverse social groups and beliefs," according to Veruggio.

    These technologies are intended to promote and support the development of robotics for the benefit of human society and people, as well as to assist in the prevention of its abuse against humanity" (Veruggio and Operto 2008, 1504).

    "Roboethics is neither the ethics of robots, nor any artificial ethics," says one definition, "but it is the human ethics of robots' inventors, makers, and users" (Veruggio and Operto 2008, 1504).

    As a result, roboethics is often used to establish a professional ethics for roboticists, and is therefore comparable to other professional, applied ethics formulations such as bioethics or computer ethics.

    Examples include the European Robotics Research Network (EURON) Roboethics Roadmap, which aimed to develop an ethical framework for "the design, manufacturing, and use of robots" (Veruggio 2006, 612), and the Foundation for Responsible Robotics (FRR), which recognizes that because "robots are tools without moral intelligence," their creators must "be accountable for the ethical developments that must come with technological innovation" (FRR 2019).

    There is also robot ethics proper. Robot ethics, according to Veruggio et al. (2011, 21), refers to the code of conduct that designers implement in a robot's artificial intelligence.

    This entails a kind of artificial ethics capable of ensuring that autonomous robots behave ethically in all scenarios where they interact with humans or when their activities may have negative implications for humans or the environment.

    Robot ethics is concerned with the moral behavior of the machine itself, as opposed to roboethics, which is concerned with the moral conduct of the human creator, developer, or user.

    Robot ethics is often confused with "machine ethics," and Veruggio uses both terms interchangeably.

    Machine ethics is concerned with the moral capabilities of machines themselves, as opposed to computer ethics, which is concerned with the moral behavior of the human inventor, developer, or user of the system (Anderson and Anderson 2007, 15).

    Under the title Moral Machines, Wendell Wallach and Colin Allen have explored a similar line of thought.

    "The area of machine morality extends the study of computer ethics beyond concern about what humans do with their computers to issues about what machines do by themselves," according to Wallach and Allen (2009, 6).

    Roboethics, like computer ethics before it, treats technology as a more or less transparent tool or instrument of human moral decision-making and behavior.

    Patrick Lin et al. (2012 and 2017) attempted to bring all of these works together under a broader definition of the word, describing it as an emerging discipline of applied moral philosophy.

    To date, the majority of robot ethics research has focused on questions of accountability, either as it pertains to the human designers of robotic systems or as it is attributed to the robotic device itself.

    However, this is just one side of the story.

    As Luciano Floridi and J. W. Sanders (2001, 349–50) correctly point out, ethics is about social connections between two interacting components: the actor (or agent) and the action receiver.

    The majority of roboethics and robot ethics initiatives may be classified as solely agent-oriented endeavors.

    "Robot rights" is a further line of inquiry, pursued by philosophers such as Mark Coeckelbergh (2010) and David Gunkel (2018), as well as legal scholars such as Kate Darling (2012) and Alain and Jérémy Bensoussan.

    For these researchers, robot ethics include not just the robot's moral behavior, but also the artifact's moral and legal standing, as well as its place in our ethical and legal systems as a potential subject rather than merely an object.

    The European Parliament has put this notion to the test, proposing a new legal category of electronic person to cope with the societal integration of increasingly autonomous robotics and AI systems.

    In conclusion, the phrase "robot ethics" encompasses a wide range of initiatives relating to robots and their societal influence and repercussions.

    In its more specialized form, roboethics refers to a field of applied or professional ethics concerned with moral dilemmas connected to the design, development, and implementation of robots and other autonomous technologies.

    In a broader sense, robot ethics refers to a branch of moral philosophy concerned with the moral and legal implications of robots acting as both agents and patients.




    Robot ethics is a rising multidisciplinary study endeavor that aims to understand the ethical implications and repercussions of robotic technology, particularly autonomous robots. 


    It is generally located at the crossroads of applied ethics and robotics. 

    Researchers, thinkers, and academics from fields as varied as robotics, computer science, psychology, law, philosophy, and others are tackling the difficult ethical issues surrounding the development and deployment of robotic technology in society. 

    Many fields of robotics are touched, particularly those that include robots interacting with people, such as elder care and medical robotics, as well as robots for different search and rescue tasks, including military robots, and other types of service and entertainment robots. 

    While military robots were initially at the forefront of the debate (e.g., whether and when autonomous robots should be allowed to use lethal force, whether they should be allowed to make those decisions autonomously, etc.), the impact of other types of robots, particularly social robots, has grown in importance in recent years. 


    The IEEE-RAS Technical Committee on Robot Ethics' goal is to offer a platform for the IEEE-RAS to raise and solve the pressing ethical issues raised by and linked with robotics research and technology. 


    The TC (now in its third generation) has been organizing various types of meetings (from satellite workshops at main conferences to standalone venues) to draw attention to the increasingly urgent ethical issues raised by rapidly advancing robotics technology since its inception almost a decade ago in 2004. 

    For example, in recent major conferences, an increasing number of workshops and special sessions have been offered (such as ICRA, IACAP, AISB and others). 

    There are also plans for further seminars, special sessions, and stand-alone locations. 

    Furthermore, an increasing number of publications, public lectures, and interviews by former and current TC co-chairs and other researchers invested in this topic are aimed at raising awareness of the urgent need for researchers and non-researchers alike to understand the social impact and ethical implications of robot technology. 

    The TC continues to promote public awareness and plans to arrange a standalone worldwide event on robot ethics in the near future, in addition to special sessions and seminars on robot ethics at important international venues. 

    Artificial Intelligence and Robotics Ethics. 

    Artificial intelligence (AI) and robotics are digital technologies that will have a major influence on the development of humanity in the near future.

    They've highlighted basic concerns about what we should do with these systems, what they should do for us, what hazards they pose, and how we might manage them. 



    Context. 


    The ethics of AI and robots are often centered on different "concerns," which is a common reaction to new technology. 

    Many of these concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they claim that technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will render going out obsolete); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); and some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape).

    The purpose of a piece like this is to dissect the concerns and deflate the non-issues. 

    Some technologies, such as nuclear power, automobiles, and plastics, have sparked ethical and political debate as well as considerable regulatory initiatives to limit their trajectory, generally after some harm has been done. 

    New technologies, in addition to "ethical issues," challenge present norms and conceptual frameworks, which is of special interest to philosophy. 

    Finally, after we have grasped a technology in its context, we need to shape our societal response, which includes regulation and legislation.

    All of these characteristics are present in modern AI and robotics technologies, as well as the more basic worry that they will usher in the end of the period of human control on Earth. 

    In recent years, the ethics of AI and robotics have gotten a lot of press attention, which helps support related research but also risks undermining it: the press frequently talks as if the issues under discussion are just predictions of what future technology will bring, and as if we already know what would be most ethical and how to get there. 

    Risk, security (Brundage et al. 2018, see the Other Internet Resources section below, henceforth [OIR]), and effect prediction are therefore the focus of press attention (e.g., on the job market). 

    As a consequence, a discussion of mostly technical issues focuses on how to accomplish a desired result. 

    Image and public relations are also driving current policy and industry debates, where the label "ethical" is really no more than the new "green," perhaps used for "ethics washing." For an issue to qualify as a problem for AI ethics, it must be one where we do not already know what the right thing to do is.

    On this view, job loss, theft, or killing with AI are not in themselves issues of AI ethics; the question is whether such acts are permissible in particular situations.

    This article focuses on serious ethical issues for which we do not have immediate solutions. 

    Last but not least, AI and robotics ethics is a relatively new field within applied ethics, with significant dynamics but few well-established issues and authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018), there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and there are policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019).

    As a result, this article cannot merely report what the community has already accomplished; it must also propose an order where none yet exists.



    Artificial Intelligence and Robotics 



    The term "artificial intelligence" (AI) refers to any kind of artificial computer system that exhibits intelligent behavior, i.e., complicated behavior that is conducive to achieving objectives. 

    We don't want to limit "intelligence" to what would need intelligence if performed by people, as Minsky proposed (1985). 

    This means we include a variety of machines, including "technical AI" computers that have limited learning and reasoning skills but excel at automating certain activities, as well as "general AI" machines that attempt to produce a generally intelligent agent. 

    As a result, the topic of "philosophy of AI" has emerged as a way for AI to reach closer to human skin than previous technologies. 


    Perhaps this is because AI's goal is to develop computers that have a trait that is important to how we humans understand ourselves: feelings, thoughts, and intelligence. 

    Sensing, modeling, planning, and action are probably the most important functions of an artificially intelligent agent, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, autonomous vehicles, and other forms of robotics (P. Stone et al. 2016). 

    To accomplish these goals, AI may use a variety of computing strategies, such as traditional symbol-manipulating AI inspired by natural cognition, or machine learning using neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018). 


    It's worth mentioning that the term "AI" was widely used from roughly 1950 to 1975, fell out of favor during the "AI winter" of 1975–1995, and narrowed in scope.


    As a consequence, areas like "machine learning," "natural language processing," and "data science" were often not labelled as "AI." Since about 2010, the usage has broadened again, and "AI" now at times encompasses practically all of computer science and even much of high technology.

    Today it is a household name and a booming industry with massive capital investment (Shoham et al. 2018), and it is once again on the edge of hype.

    It may enable us to nearly eradicate global poverty, dramatically decrease sickness, and give better education to essentially everyone on the planet, as Erik Brynjolfsson has suggested (quoted in Anderson, Rainie, and Luchsinger 2018).



    While AI may be totally software, robots are physical machines that move.

    Robots are exposed to physical force via "sensors," and they impose physical force on the environment through "actuators," such as a gripper or a rotating wheel. 

    As a result, self-driving automobiles or aircraft are robots, and only a small percentage of robots are "humanoid" (human-shaped), as shown in movies. 

    Some robots use artificial intelligence, whereas others do not: Typical industrial robots mindlessly execute scripts with limited sensory input and no learning or thinking (about 500,000 new industrial robots are deployed each year (IFR 2019 [OIR])). 

    While robotics systems are likely to generate more anxiety among the general public, AI systems are more likely to have a bigger influence on humans. 

    Furthermore, AI or robotics systems that are designed to do a certain set of tasks are less likely to introduce new challenges than systems that are more flexible and autonomous. 

    As a result, robotics and AI may be thought of as two overlapping sets of systems: AI-only systems, robotics-only systems, and systems that are both. 

    We're interested in all three, thus the scope of this essay isn't only the intersection of the two sets, but also their union. 




    Some Thoughts on AI Robotics Policy 


    One of the issues raised in this essay is policy. 

    There is a lot of public debate on AI ethics, and politicians often declare that the issue needs new policy, which is easier said than done: Technology policy is challenging to develop and implement in practice. 

    Policy measures range from incentives and funding, through infrastructure and taxation, to good-will statements, regulation by various actors, and the law.

    AI policy may inadvertently collide with other goals of technology policy or general policy. 

    In recent years, governments, parliaments, organizations, and business circles in industrialized nations have published studies and white papers, and some have coined catchphrases ("trusted/responsible/humane/human-centered/good/beneficial AI"), but is that all that is required? For a survey, see Jobin, Ienca, and Vayena (2019), as well as V. Müller's list of PT-AI Policy Documents and Institutions.

    People working in ethics and policy may have a propensity to exaggerate the influence and risks posed by new technologies, while underestimating the scope of present regulation (e.g., for product liability). 

    Businesses, the military, and certain government agencies, on the other hand, have a tendency to merely "talk the talk" and engage in some "ethics washing" in order to maintain a positive public image and carry on as before.

    Putting in place legally enforceable regulations would put established corporate structures and practices to the test. 

    Actual policy is not only an application of ethical theory; it is also shaped by societal power structures, and those with power will resist any restrictions.

    As a result, there's a good chance that regulation will be rendered useless in the face of economic and political power. 

    Although very little actual policy has been produced, there have been several notable starts: The current EU policy statement says that "trustworthy AI" should be lawful, ethical, and technically robust, and then lists seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]).

    Much European research today operates under the banner of "responsible research and innovation," and "technology evaluation" has been a common area since nuclear power's inception. 

    In the subject of information technology, professional ethics is also a standard field, and this covers topics that are pertinent to this article. 

    Perhaps a "code of ethics" for AI developers, similar to medical practitioners' codes of ethics, is a possibility here (Véliz 2019). 

    The question of what data science itself should be doing is addressed in (L. Taylor and Purtova 2019).

    We also believe that, rather than the area as a whole, much regulation will ultimately address individual applications or technologies of AI and robots. 

    A useful overview of an ethical framework for AI can be found in (European Group on Ethics in Science and New Technologies 2018: 13ff).

    AI policy in general is discussed by Calo (2018), as well as by Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018).

    A more political perspective on technology is often taken in the discipline of "Science and Technology Studies" (STS).

    Concerns in STS are frequently fairly similar to those in ethics, as works like The Ethics of Invention (Jasanoff 2016) demonstrate (Jacobs et al. 2019 [OIR]). 

    Rather than discussing AI or robotics in general, we discuss policy for each type of issue separately in this article. 

     



    Ethical Issues in the Human Use of AI and Robotics.


    In this part, we look at challenges that emerge with particular applications of AI and robotics systems, which can be more or less autonomous; that is, we look at issues that arise with certain uses of these technologies and not with others.

    It's important to remember, too, that technological advancements will always make certain usage simpler and hence more common, while hindering others. 

    As a result, the design of technological artifacts has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so we need "responsible design" in this sector in addition to "responsible use." The emphasis on use does not presuppose which ethical approaches are best suited to addressing these difficulties; virtue ethics (Vallor 2017) may be more appropriate than consequentialist or value-based approaches (Floridi et al. 2018).

    This section is also unaffected by the debate over whether AI systems have true "intelligence" or other mental properties: It would also apply if AI and robots were just seen as the present face of automation (see Müller forthcoming-b). 




    Surveillance & Privacy 


    There is a broader debate over privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which primarily concerns access to private data and individually identifiable information. 

    "The right to be left alone," "information privacy," "privacy as a feature of personhood," "control over one's own information," and "the right to secrecy" are all well-known facets of privacy (Bennett and Raab 2006). 

    Surveillance by other state agents, businesses, and even individuals is now included in privacy studies, which previously focused on state surveillance by secret services. 

    Technology has advanced dramatically in recent decades, but regulation has lagged behind (though there is the Regulation (EU) 2016/679), resulting in a state of anarchy that is used by the most powerful parties, sometimes in plain sight, sometimes in secret. 

    The digital environment has substantially expanded: All data collection and storage is now digital, our lives are becoming more digital, the majority of digital data is now linked to a single Internet, and sensor technology is rapidly being used to create data on non-digital parts of our lives. 

    AI broadens the scope of intelligent data collecting as well as the scope of data analysis. 

    This applies to both broad monitoring of whole populations and traditional targeted surveillance. 

    Furthermore, much of this information is traded between agents for a fee.

    Controlling who collects which data and who has access, on the other hand, is considerably more difficult in the digital world than it was in the analogue world of paper and phone conversations. 


    Many new AI technologies magnify already identified problems. 


    Face recognition in images and videos, for example, enables for identification and hence profiling and searching for people (Whittaker et al. 2018: 15ff). 

    This is followed by the use of additional identifying methods, such as "device fingerprinting," which are prevalent on the Internet (and occasionally stated in the "privacy policy"). 

    As a consequence, "there is a disturbingly full image of ourselves in this immense ocean of data" (Smolan 2016: 1:01). 

    As a consequence, there's a controversy that hasn't gotten the attention it deserves. 

    Our "free" services are paid for by the data trail we leave behind—but we aren't notified about the data collecting or the worth of this new raw material, and we are pushed into leaving even more data. 

    The major data-collection aspect of the business for the "big 5" corporations (Amazon, Google/Alphabet, Microsoft, Apple, and Facebook) seems to be built on deceit, exploiting human vulnerabilities, promoting procrastination, inducing addiction, and manipulation (Harris 2016 [OIR]). 


    In this "surveillance economy," the major goal of social media, gaming, and much of the Internet is to acquire, keep, and direct attention—and therefore data supply. 

    "The Internet's economic model is surveillance" (Schneier 2015). 

    "Surveillance capitalism" is a term used to describe the surveillance and attention economy (Zuboff 2019). 

    It has resulted in several efforts to break free from these companies' hold, such as via "minimalism" (Newport 2019) and the open source movement, but it seems that today's people lack the degree of autonomy required to break free while continuing to live and work normally. 

    If "ownership" is the proper connection here, we have lost ownership of our data. 

    We have, in some ways, lost control of our data. 

    These systems often disclose truths about us that we prefer to keep hidden or are unaware of: they know more about us than we do. 

    Even merely observing our online behavior provides insight into our mental processes (Burr and Cristianini 2019) and may be used to manipulate us (see section 2.2 below).

    As a result, calls for the protection of "derived data" have been made (Wachter and Mittelstadt 2019). 

    In the final sentence of his best-selling book Homo Deus, Harari asks about the long-term consequences of AI: what will happen to society, politics, and everyday life when non-conscious yet extremely intelligent algorithms know us better than we do? (Harari 2016: 462) Except for security patrols, robotic devices have not yet played a significant role in this sector, but that will change as they become more common outside of industrial contexts.

    They are destined to become part of the data-gathering machinery, alongside the "Internet of things," so-called "smart" systems (phone, TV, oven, lamp, virtual assistant, house, ...), the "smart city" (Sennett 2018), and "smart government." Techniques such as (relative) anonymisation, access control (plus encryption), and models in which computation is carried out on fully or partially encrypted input data are now standard in data science (Stahl and Wright 2018); in the case of "differential privacy," calibrated noise is added to the output of queries (Dwork et al. 2006; Abowd 2017).
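
    To make the calibrated-noise idea concrete, here is a minimal, purely illustrative sketch of the Laplace mechanism for a counting query; the dataset, the query, and the parameter values are invented for the example and are not drawn from any of the cited works.

```python
import numpy as np

def laplace_count(true_count, sensitivity=1.0, epsilon=0.5):
    """Return a noisy, differentially private version of a counting query.

    Calibrated Laplace noise with scale sensitivity/epsilon is added to the
    true answer; a smaller epsilon means more noise and stronger privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many people in a small dataset are over 30.
ages = [17, 42, 23, 35, 29, 51, 40]
true_count = sum(1 for age in ages if age > 30)

print("true count:   ", true_count)
print("private count:", round(laplace_count(true_count, epsilon=0.5), 2))
```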

    While more time and money are required, such solutions may help to avoid many of the privacy concerns. 

    Better privacy has also been considered by certain firms as a competitive advantage that can be exploited and sold for a profit. 

    One of the most challenging aspects of regulation is enforcing it, both at the state level and at the level of the person who has a claim. 

    They must identify the liable legal entity, prove the action, perhaps prove intent, find a court that declares itself competent... and finally persuade the court to actually enforce its judgment.

    Consumer rights, product responsibility, and other civil liabilities, as well as protection of intellectual property rights, are often lacking or difficult to enforce with digital goods. 

    This implies that enterprises with a "digital" history are used to testing their goods on customers without fear of legal repercussions while vigorously maintaining their intellectual property rights. 

    This "Internet Libertarianism" is frequently misinterpreted as implying that technological innovations would solve social issues on their own (Mozorov 2013). 

    Manipulation of Behaviour.

    The ethical difficulties raised by AI in surveillance extend beyond the collection of data and the focus of attention: They include the use of data to influence behavior, both online and offline, in a manner that impairs rational decision-making autonomy. 

    Of course, attempts to control behavior are nothing new, but when AI systems are used, they may take on a new dimension. 

    Because of their intensive interaction with data systems, and the rich information about individuals that this provides, users are vulnerable to "nudges," manipulation, and deception.

    With enough past data, algorithms can be used to target individuals or small groups with precisely the kind of input that is most likely to influence them.

    A 'nudge' alters the environment in such a manner that it impacts behavior in a predictable, positive way that is simple and inexpensive to avoid (Thaler & Sunstein 2008). 

    From here, it's a short step to paternalism and manipulation. 

    Many advertisers, marketers, and internet vendors will use behavioural biases, deceit, and addiction development to maximize profit (Costa and Halpern 2019 [OIR]). 

    The economic strategy for most of the gambling and gaming businesses involves manipulation, but it is expanding to other sectors, such as low-cost airlines. 

    This manipulation is done using "dark patterns" in web page or gaming interface design (Mathur et al. 2019). 

    Gambling and the selling of addictive drugs are heavily regulated at the present, but online manipulation and addiction are not—despite the fact that manipulating online behavior is becoming a key Internet business model. 

    Furthermore, political propaganda is now primarily distributed through social media. 

    As in the Facebook-Cambridge Analytica "crisis" (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019), this influence may be utilized to sway voting behavior, and if effective, it may jeopardize human liberty (Susser, Roessler, and Nissenbaum 2019). 

    Improved AI "faking" technologies turn what was previously dependable evidence into unreliable evidence—digital pictures, voice recordings, and video have all been affected. 

    It will be rather simple to produce (rather than change) "deep fake" text, images, and video material with any desired content in the near future. 

    Real-time engagement with people by text, phone, or video will soon be faked as well. 

    As a result, we will soon be unable to trust digital interactions even as we become more reliant on them.

    Another difficulty is that AI machine learning algorithms depend on large volumes of data for training. 

    As a result, there will often be a trade-off between privacy and data rights on the one hand and the technical quality of the product on the other.

    This has an impact on the consequentialist assessment of privacy-invading behaviors. 

    This field's policy has its ups and downs: Businesses' lobbying, secret services, and other governmental agencies that rely on monitoring are putting a lot of pressure on civil liberties and the preservation of individual rights. 

    In comparison to the pre-digital period, when communication was dependent on letters, analogue telephone conversations, and human interaction, and monitoring was subject to severe legal limits, privacy protection has deteriorated dramatically. 

    Despite the fact that the EU General Data Protection Regulation (Regulation (EU) 2016/679) has enhanced privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), most likely in the expectation of gaining a competitive edge.

    It is evident that, with the assistance of AI technology, state and corporate actors have expanded their power to breach privacy and influence individuals, and will continue to do so to serve their own interests—unless legislation in the public interest intervenes.



    AI Systems' Transparency. 


    The primary difficulties in what is now referred to as "data ethics" or "big data ethics" are opacity and bias (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016).

    "Significant issues regarding a lack of due process, accountability, community participation, and auditing" are raised by AI systems for automated decision assistance and "predictive analytics" (Whittaker et al. 2018: 18ff). 

    They are part of a power structure that "creates decision-making procedures that restrict and limit human involvement" (Danaher 2016b: 245). 

    At the same time, the impacted individual will often be unable to understand how the system arrived at this result, i.e., the system will be "opaque" to that person. 


    Even an expert will struggle to understand how a certain pattern was detected, much alone what the pattern is, if the system uses machine learning. 

    This opacity exacerbates bias in decision systems and data sets. 

    So, at least in circumstances where there is a desire to eliminate bias, opacity and bias analysis go hand in hand, and political responses must address both challenges simultaneously. 

    Many AI systems depend on supervised, semi-supervised, or unsupervised machine learning methods in (simulated) neural networks to extract patterns from a given dataset, with or without "correct" answers supplied. 

    With these methods, the "learning" identifies patterns in the data and labels them in a manner that looks relevant to the choice the system makes, despite the fact that the programmer has no idea which patterns in the data the system has employed. 

    In reality, the algorithms are changing, so when fresh data or feedback ("this was accurate," "this was wrong") is received, the learning system's patterns alter. 

    This implies that the end result is opaque to the user and coders. 
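
    As a minimal illustration of this opacity (a toy sketch with invented data, not a description of any deployed system), consider a single linear classifier trained by gradient descent: what training produces is just a vector of numbers, and those numbers shift again as soon as new feedback arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised learning: 200 examples with 5 anonymous features.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.0, 0.5, 3.0])   # the hidden pattern to be learned
y = (X @ true_w > 0).astype(float)

# Plain logistic-regression updates; the only "explanation" is the weight vector.
w = np.zeros(5)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)
print("learned weights:        ", np.round(w, 2))

# New data shifts the learned pattern, so even this thin "explanation" is a
# moving target.
X_new = rng.normal(size=(50, 5))
y_new = (X_new @ true_w > 0).astype(float)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_new @ w)))
    w -= 0.1 * X_new.T @ (p - y_new) / len(y_new)
print("weights after new data: ", np.round(w, 2))
```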

    Furthermore, the program's quality is heavily dependent on the data it is given, as the old adage goes: "garbage in, garbage out." So, if the data already contains a bias (for example, police data on suspects' skin color), the algorithm will reproduce that bias.

    There have been ideas for a common representation of datasets in the form of a "datasheet," which would make detecting bias more straightforward (Gebru et al. 2018 [OIR]). 
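
    A sketch of what such a datasheet might contain is shown below; the fields and values are an invented, simplified subset for illustration, not the full proposal of Gebru et al. (2018 [OIR]).

```python
# Illustrative "datasheet" for a hypothetical dataset; field names and values
# are invented for this sketch and only loosely follow the datasheet idea.
datasheet = {
    "name": "toy-arrest-records",
    "motivation": "teaching example for bias audits, not for deployment",
    "collection": {
        "period": "2015-2018",
        "method": "extracted from one city's police logs",
        "known_gaps": "over-represents heavily patrolled districts",
    },
    "composition": {
        "instances": 10_000,
        "sensitive_attributes": ["race", "gender", "age"],
    },
    "recommended_uses": ["teaching", "bias audits"],
    "discouraged_uses": ["individual risk scoring"],
}

for field, value in datasheet.items():
    print(f"{field}: {value}")
```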

    There's also a lot of new research on the limits of machine learning systems, which are basically powerful data filters (Marcus 2018 [OIR]). 

    Some have contended that today's ethical issues are the consequence of AI's technological "shortcuts" (Cristianini forthcoming). 

    Numerous technical efforts have been aimed at "explainable AI," starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA program (Gunning 2017 [OIR]).

    The requirement for a system for clarifying and articulating the power structures, biases, and impacts that computational artefacts exert in society is often referred to as "algorithmic accountability reporting" (Diakopoulos 2015: 398). 

    This isn't to say that we should expect an AI to "explain its reasoning"—doing so would need significantly greater moral autonomy than we now provide AI systems (see section 2.10). 

    If we rely on a system that is supposedly superior to humans but cannot explain its decisions, there is a fundamental problem for democratic decision-making, as former US Secretary of State Henry Kissinger has pointed out.

    We may have "created a potentially dominant technology in search of a guiding philosophy," according to him (Kissinger 2018). 

    Danaher (2016b) refers to this issue as "the threat of algocracy" (adopting the term "algocracy" from Aneesh 2002 [OIR], 2006).

    To prevent AI becoming a force that leads to a Kafka-style impenetrable suppression mechanism in public administration and elsewhere, Cave (2019) emphasizes the necessity for a larger social shift toward more "democratic" decision-making. 

    The political dimension of this debate has been emphasized by O'Neil in her renowned book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

    Some of these concerns have been addressed in the EU with the (Regulation (EU) 2016/679), which stipulates that consumers would have a legal "right to explanation" when confronted with a choice based on data processing—how far this goes and to what extent it can be enforced is debatable (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). 

    According to Zerilli et al. (2019), there may be a double standard here, in which we demand a high level of explanation for machine-based decisions even though humans do not always meet that standard themselves.


     AI Bias.


    Automated AI decision support systems and "predictive analytics" operate on data to produce an "output" judgment.

    This output might be anything from "this restaurant matches your tastes" to "the patient in this X-ray has finished bone development," "credit card application refused," "donor organ will be donated to another patient," "bail is denied," or "target identified and engaged." Data analysis is often used in "predictive analytics" in business, healthcare, and other industries to forecast future events; as prediction becomes simpler, it will become a cheaper commodity. 

    Prediction is used in "predictive policing" (NIJ 2014 [OIR]), which many worry will erode civil rights (Ferguson 2017) since it takes authority away from those whose behavior is expected. 

    Many of the concerns about policing, however, seem to be based on future scenarios in which law enforcement anticipates and punishes planned activities rather than waiting until a crime is committed (as in the 2002 film "Minority Report"). 

    One problem is that these systems may amplify bias already present in the data used to build them, for example by increasing police patrols in a certain area and thereby uncovering more crime in that area.
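
    This feedback loop can be made vivid with a tiny simulation (all numbers invented): two districts with identical underlying crime rates, where patrols are allocated according to recorded crime, so the initially favored district keeps "confirming" the allocation.

```python
import numpy as np

rng = np.random.default_rng(1)

true_rate = np.array([0.10, 0.10])   # identical underlying crime rates
patrols = np.array([60, 40])         # slightly skewed initial allocation

for step in range(5):
    # Crime is only recorded where officers are present to observe it.
    recorded = rng.binomial(patrols, true_rate)
    total = max(int(recorded.sum()), 1)
    # The next allocation follows the recorded (not the true) distribution.
    patrols = (100 * recorded / total).round().astype(int)
    print(f"step {step}: recorded={recorded}, next patrols={patrols}")
```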

    Actual "predictive policing" or "intelligence-led policing" tactics are primarily concerned with determining where and when police personnel will be most required. 

    In workflow support software (e.g., "ArcGIS"), police officers may also be given additional data, giving them greater power and allowing them to make better judgments. 

    Whether this is a problem depends on the appropriate level of trust in the technical quality of these systems, and on how we evaluate the aims of the police work itself.

    "AI ethics in predictive policing: From threat models to an ethics of caring," according to a recent study title, may lead in the correct way (Asaro 2019). 

    Bias typically arises when a person makes an unjust judgment because of a feature that is actually irrelevant to the matter at hand, such as a prejudiced assumption about members of a group.

    One kind of bias, then, is a learned cognitive feature of a person, frequently not made explicit.

    The individual in question may not be conscious of their prejudice, and they may even be openly and publicly opposed to a bias that is discovered (e.g., through priming, cf. Graham and Lowery 2004). 

    Binns (2018) discusses fairness versus bias in machine learning.

    Apart from the social phenomenon of learned prejudice, the human cognitive system is prone to a variety of "cognitive biases," such as the "confirmation bias," whereby people tend to interpret information as confirming what they already believe.

    This second kind of bias is generally considered to impair rational judgment (Kahneman 2011)—though certain cognitive biases, such as the efficient use of resources for intuitive judgment, may provide an evolutionary benefit.

    It's debatable whether AI systems should or might exhibit cognitive bias. 

    A third kind of bias is present when data contains systematic error, for example "statistical bias."

    Strictly speaking, any given dataset will only be unbiased for a single kind of problem, so the mere creation of a dataset carries the risk that it will be used for a different kind of problem and then turn out to be biased for that kind.

    On the basis of such data, machine learning will not only fail to recognize the bias, but will codify and automate this "historical bias." An automated recruitment screening system at Amazon (discontinued in early 2017) was found to be biased against women, presumably because the company had a history of discriminating against women in its hiring.

    The "Correctional Offender Management Profiling for Alternative Sanctions" (COMPAS), a system for predicting whether a defendant will reoffend, was found to be as accurate (65.2 percent) as a group of random humans (Dressel and Farid 2018), with more false positives and fewer false negatives for black defendants. 

    As a result, the issue with such systems is not only bias, but also people's undue faith in them.

    Eubanks (2018) investigates the political dimensions of such automated systems in the United States.

    There are substantial technological efforts underway to identify and eliminate bias from AI systems, but they are still in their infancy: see the UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). 

    Technical remedies tend to have limitations in that they need a mathematical definition of fairness, which is difficult to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as well as a formal notion of "race" (see Benthall and Haynes 2019).

    A proposal for an institution has been submitted (Veale and Binns 2017). 



    Interaction between humans and robots. 


    Human-robot interaction (HRI) is a distinct academic discipline that today pays close attention to ethical issues, perception dynamics on both sides, and both the diversity of interests and the complexities of the social milieu, including co-working (e.g., Arnold and Scheutz 2017). 

    Useful surveys for robotics ethics are Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); and Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

    While AI may be used to persuade people to believe and act in certain ways (see section 2.2), it can also be used to control robots that are problematic if their methods or appearance are deceptive, endanger human dignity, or violate Kant's "respect for humanity" criterion. 

    Humans are quick to ascribe mental traits to things and empathize with them, particularly when the items' exterior appearance resembles that of live creatures. 

    This may be used to trick people (or animals) into giving robots or AI systems more intellectual or even emotional weight than they deserve. 

    Some humanoid robots are problematic in this sense (e.g., Hiroshi Ishiguro's remote-controlled Geminoids), and some examples have been plainly deceptive for public relations purposes (e.g., Hanson Robotics' "Sophia").

    Of course, certain very fundamental corporate ethical and legal limits apply to robots as well, such as product safety and responsibility, or non-deception in advertising. 

    Many of the problems that have been mentioned seem to be addressed by the present limits. 

    However, there are several elements of human-human connection that seem to be uniquely human in ways that robots may not be able to replicate: compassion, love, and sex. 




    Care Robots



    The employment of robots in human health care is presently limited to concept research in actual surroundings, but it might become a practical technology in a few years, raising fears of a dystopian future of dehumanized care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). 

    Robots that assist human caregivers (e.g., in lifting patients or transporting materials), robots that enable patients to do certain tasks on their own (e.g., eat with a robotic arm), and robots that are given to patients as companions and comfort (e.g., the "Paro" robot seal) are all examples of current systems. 

    See van Wynsberghe (2016), Nørskov (2017), and Fosch-Villaronga and Albo-Canals (2019) for overviews, and Draper et al. (2014) for a survey of users.

    People have claimed that we will need robots in ageing societies, which is one reason why the topic of care has risen to the fore. 

    This argument rests on problematic assumptions: that as people live longer they will need more care, and that it will not be possible to attract more people into caring professions.

    It might also reveal an age prejudice (Jecker forthcoming). 

    Most crucially, it misses the essence of automation, which is about assisting people to work more effectively rather than replacing them. 

    It is not clear that there really is a problem here, given that the conversation largely centers on the fear of robots dehumanizing care, while the actual and foreseeable robots in care are assistive robots for the classic automation of technical tasks.

    They are therefore "care robots" solely in the sense that they execute activities in healthcare facilities, not in the sense that a human "cares" for the patients. 

    The effectiveness of "being cared for" seems to be dependent on this deliberate sensation of "care," which prospective robots cannot give. 

    The concern of robots in care isn't so much the lack of such purposeful care as it is the necessity for fewer human caregivers. 

    Surprisingly, caring for anything, even a virtual agent, may be beneficial to the caregiver (Lee et al. 2019). 

    Unless the deception is offset by a sufficiently high utility benefit, a system that purports to care would be misleading and hence problematic (Coeckelbergh 2016). 

    Some robots that pretend to "care" on a rudimentary level are already on the market (Paro seal), while others are under development. 

    To some degree, feeling cared for by a machine may be progress for certain people. 





    Sex Robots


    Several tech optimists have stated that humans will be interested in sex and friendship with robots and will be comfortable with the notion (Levy 2007). 

    This seems extremely possible, given the diversity of human sexual tastes, including sex toys and sex dolls: The debate is whether such gadgets should be produced and marketed, and if there should be any restrictions in this sensitive field. 

    It seems to have just entered the mainstream of "robot philosophy" (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

    Humans have traditionally had strong emotional relationships to items, so maybe friendship or even love with a dependable android is appealing, particularly to those who have difficulty interacting with real people and already prefer dogs, cats, birds, computers, or tamagotchis. 

    Danaher (2019b) counters Nyholm and Frank (2017) by arguing that these may be real friendships and hence a worthwhile objective. 

    Even if it is shallow, it seems that a friendship might boost overall usefulness. 

    There is a problem of dishonesty in these conversations, since a robot cannot (at this time) mean what it says or have affections for a person. 

    Humans are renowned for attributing emotions and ideas to creatures that act as though they had sentience, even to plainly inanimate objects that display no behavior at all. 

    In addition, it seems that paying for deceit is an integral aspect of the conventional sex economy. 

    Finally, there are worries that have always accompanied matters of sex, such as consent (Frank and Nyholm 2017), aesthetic concerns, and the fear that certain experiences will "corrupt" people.

    Human behavior is shaped by experience, and pornography or sex robots are likely to encourage the idea of other people as simply objects of desire, or even recipients of abuse, and therefore destroy a deeper sexual and romantic experience. 

    The "Campaign Against Sex Robots" claims that these gadgets constitute a continuation of slavery and prostitution in this line (Richardson 2016). 





    Employment and Automation.


    AI and robots, it seems, will result in large increases in productivity and consequently global prosperity. 

    Though the focus on "growth" is a recent phenomena, the desire to boost productivity has long been a characteristic of the economy (Harari 2016: 240). 

    Automation, on the other hand, often means that fewer individuals are needed to produce the same amount of output. 

    However, this does not always imply a reduction in total employment since accessible wealth rises, which might boost demand enough to offset productivity gains. 

    In the long term, increased productivity in industrial society has resulted in an increase in total wealth. 

    Historically, major labor market changes have occurred; for example, farming employed almost 60% of the workforce in Europe and North America in 1800, but just 5% in 2010 in the EU, and even less in the richest nations (European Commission 2013). 

    Between 1950 and 1970, the number of employed agricultural labourers in the United Kingdom fell by half (Zayed and Loft 2019). 

    Some of these disruptions result in more labor-intensive companies relocating to lower-cost locations. 

    This is an ongoing process.

    Digital automation, unlike physical machinery, substitutes human cognition or information processing (Bostrom and Yudkowsky 2014). 

    As a result, a more drastic shift in the labor market is possible. 

    So, the big concern is whether the impacts will be different this time. 

    Will the development of new jobs and wealth keep up with the job losses? And even if it does, what are the transition costs, and who bears them? Do we need social changes to ensure that the costs and benefits of digital automation are distributed fairly? Responses range from the alarmed (Frey and Osborne 2013; Westlake 2014), to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019), to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a).

    In principle, the effect of automation on the labor market appears to be fairly well understood as involving two channels: 

    (i) the nature of interactions between differently skilled workers and new technologies affecting labor demand, and (ii) the equilibrium effects of technological progress through subsequent changes in labor supply and product markets. (Goos et al. 2018: 362)

    "Job polarisation" or the "dumbbell" shape (Goos, Manning, and Salomons 2009) seems to be occurring in the labor market as a consequence of AI and robotics automation: high-skilled technical jobs are in demand and well paid, low-skilled service jobs are in demand but poorly paid, while the standard jobs in factories and offices, i.e., the majority of jobs, are under pressure and being eliminated because they are relatively predictable and most likely to be automated (Baldwin 2019).

    Perhaps enormous productivity gains will allow the "age of leisure" to come to pass, as predicted by Keynes in 1930 (assuming a 1% annual growth rate). 

    Actually, we've already achieved the amount he predicted for 2030, but we're still working—consuming more and constructing ever higher organizational layers. 

    Harari describes how economic growth enabled mankind to overcome famine, sickness, and war—and now, via artificial intelligence, we want immortality and everlasting happiness, thus his moniker Homo Deus (Harari 2016: 75). 

    Unemployment is, in general, a question of how the goods of a society should be justly distributed.

    A common belief is that distributive justice should be chosen logically from behind a "veil of ignorance" (Rawls 1971), that is, as if one had no idea what place in society one would be occupying (labourer or industrialist, etc.). 

    Rawls believed that the selected principles would then promote fundamental rights and a distribution that benefited the poorest members of society the most. 


    The AI economy seems to have three characteristics that make such justice unlikely: 


    • For starters, it works in a highly uncontrolled environment in which blame is sometimes difficult to assign. 
    • Second, it functions in marketplaces where monopolies form fast due to a "winner takes all" characteristic. 
    • Third, the digital service industries' "new economy" is founded on intangible assets, also known as "capitalism without capital" (Haskel and Westlake 2017).


    This implies that international digital firms that do not have a physical presence in a certain region are difficult to manage. 

    These three characteristics seem to indicate that if we leave wealth distribution to free market forces, the consequence will be a very unequal distribution: And this is a trend that we are currently seeing. 

    One fascinating subject that has gotten little attention is whether AI development is ecologically sustainable: AI systems, like other computer systems, generate trash that is difficult to recycle and require enormous amounts of energy, particularly when training machine learning systems (and even while "mining" cryptocurrencies). 

    It appears that some players in this space offload these costs to the general public. 




    Autonomous Systems


    In the context of autonomous systems, there are numerous definitions of autonomy. 

    In philosophical disputes, where autonomy is the foundation for accountability and personality, a stronger concept is at play (Christman 2003 [2018]). 

    In this context, accountability implies autonomy, but not the other way around, therefore systems with varying degrees of technological autonomy may exist without generating responsibility concerns. 

    In robotics, the weaker, more technical concept of autonomy is relative and gradual: a system is said to be autonomous to a certain degree with respect to human control (Müller 2012).

    Since autonomy also concerns a power relation (who is in control, and who is responsible), it is connected to the problems of bias and opacity in AI.

    In general, one concern is whether autonomous robots pose difficulties to which our current conceptual systems must adapt, or whether they just need technological changes. 

    To settle such difficulties, most nations have a complex system of civil and criminal responsibility. 

    Technical norms, such as those governing the safe use of machines in medical settings, will very certainly need to be revised. 

    For such safety-critical systems and "security applications," there is already a discipline called "verifiable AI." The IEEE (Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have published "standards," focusing on more technical issues such as data security and transparency.

    We look at two examples of autonomous systems: autonomous cars and autonomous weapons, which may be found on land, sea, under water, in the air, or in space.



    Autonomous Vehicles


    Autonomous vehicles have the potential to lessen the enormous harm that human driving now causes—roughly 1 million people are killed each year, many more are wounded, the environment is polluted, the land is coated with concrete and asphalt, cities are full of parked automobiles, and so on. 

    However, there seem to be concerns about how autonomous cars should act, as well as how responsibility and risk should be shared in the complex system in which they operate. 

    (There is also much dispute about how long it will take to build fully autonomous, or "level 5," vehicles (SAE International 2018).) In this context, "trolley problems" are discussed a great deal (Thomson 1976; Woollard and Howard-Snyder 2016: part 2). In the simplest version, a trolley is headed down a track toward five people and will kill them unless it is diverted onto a side track, but on that side track there is one person who will be killed if the trolley takes it.

    The example stems from a comment in (Foot 1967: 6) on a variety of dilemma scenarios in which the permitted and desired effects of an action diverge. 

    "Trolley dilemmas" aren't designed to be used to illustrate real ethical issues or to be addressed by making the "correct" decision. 

    Rather, they are thought experiments in which the agent's choice is arbitrarily limited to a small number of unique one-off alternatives and the agent possesses complete information. 


    The distinction between actively doing something vs. allowing something to happen, intended vs. acceptable effects, and consequentialist vs. alternative normative approaches are all investigated using these difficulties as a theoretical tool (Kamm 2016). 


    Many of the issues observed in real driving and autonomous driving have been reminiscent of this kind of issue (Lin 2016). 

    However, it's unlikely that a real driver or a self-driving vehicle would ever have to deal with trolley issues (but see Keeling 2020). 

    While autonomous car trolley issues have garnered a lot of media attention (Awad et al. 2018), they don't seem to add anything to ethical theory or autonomous vehicle programming. 

    The most prevalent ethical issues in driving, such as speeding, unsafe overtaking, failing to maintain a safe distance, and so on, are typical cases of personal gain vs. the collective good. 

    The great majority of them are covered under driver's license laws. 

    Programming the automobile to drive "by the laws" rather than "in the best interests of the passengers" or "to maximize utility" reduces the challenge to a basic problem of ethical machine programming (see section 2.9). 

    There are probably additional discretionary rules of politeness and interesting questions about when to break the rules (Lin 2016), but this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.
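
    For illustration only, here is a toy contrast between a car programmed to drive "by the rules" and one programmed to maximize passenger utility; the scenario, field names, and weights are invented and do not correspond to any real system.

```python
# Toy contrast between rule-following and utility-maximizing driving policies.
# Scenario fields and the risk weighting are invented for this illustration.

def rule_based_policy(situation):
    # Follow the traffic rules: never overtake where overtaking is prohibited.
    if situation["overtaking_prohibited"]:
        return "stay behind"
    return "overtake"

def utility_based_policy(situation):
    # Maximize passenger utility: weigh minutes saved against collision risk.
    benefit = situation["minutes_saved"]
    cost = 100 * situation["collision_risk"]   # arbitrary conversion to "minutes"
    return "overtake" if benefit > cost else "stay behind"

situation = {
    "overtaking_prohibited": True,   # e.g., a solid line
    "minutes_saved": 5,
    "collision_risk": 0.01,          # a small but real risk imposed on others
}

print("rule-based policy:   ", rule_based_policy(situation))    # stay behind
print("utility-based policy:", utility_based_policy(situation)) # overtake
```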

    In this arena, notable policy initiatives include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which emphasizes the importance of safety. 

    The tenth rule states that, in the case of automated and connected driving systems, accountability shifts from the individual driver to the manufacturers and operators of the technological systems and to the bodies responsible for infrastructure, policy, and legal decisions.

    (See section 2.10.1 for further information.) The ensuing German and EU legislation on licensing automated driving is much more stringent than the corresponding US policies, where some corporations use "testing on consumers" as a strategy—without the consumers' or potential victims' informed consent.




    Autonomous Weapons.


    The concept of automated weaponry is not new: instead of fielding simple guided missiles or remotely piloted vehicles, we might deploy fully autonomous land, sea, and air vehicles capable of complex, long-range reconnaissance and attack missions (DARPA 1983: 1).

    At the time, this vision was mocked as "fantasy" (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, aircraft, ships, tanks, and so on), though not yet for human fighters.

    The primary reasons against (lethal) autonomous weapon systems (AWS or LAWS) are that they encourage extrajudicial executions, remove accountability from people, and increase the likelihood of conflicts or killings—see Lin, Bekey, and Abney (2008: 73–86) for a thorough list of problems. 

    It seems that decreasing the barrier to using such systems (autonomous cars, "fire-and-forget" missiles, or drones carrying explosives) and lowering the risk of being held responsible will enhance their usage. 

    The key asymmetry, in which one side can kill with impunity and thus has few reasons not to, already exists in conventional drone wars with remote-controlled weapons (e.g., the US in Pakistan).

    It's simple to envisage a tiny drone searching for, identifying, and killing a single person—or possibly a certain sort of human. 

    The Campaign to Stop Killer Robots and other activist organizations have brought forward examples like these. 

    Some appear to imply that autonomous weapons are, in fact, weapons..., and that weapons kill, but we continue to manufacture them in massive quantities. 

    In terms of accountability, autonomous weapons may make it more difficult to identify and prosecute the culpable agents—but this is unclear, given the digital records that may be kept, at least in conventional warfare. 

    The "retribution gap" is a term used to describe the difficulties of distributing punishment (Danaher 2016a). 

    Another concern is whether the use of autonomous weapons in conflict would make wars worse or better. 

    If robots reduce war crimes and crimes in war, the answer may well be positive; this has been used as an argument both in favor of these weapons (Arkin 2009; Müller 2016a) and against them (Amoroso and Tamburrini 2018).

    The major concern, according to some, is not the deployment of such weapons in traditional combat, but rather in asymmetric conflicts or by non-state actors, such as criminals. 

    Autonomous weapons are also claimed to be incompatible with International Humanitarian Law, which requires armed conflict to observe the principles of distinction (between combatants and non-combatants), proportionality (in the use of force), and military necessity (A. Sharkey 2019).

    True, distinguishing between fighters and non-combatants is difficult, but distinguishing between civilian and military ships is simple—all this means is that such weapons should not be built or used if they violate Humanitarian Law. 

    Additional concerns have been expressed that being murdered by an autonomous weapon endangers human dignity, however even proponents of a ban on these weapons seem to dismiss these worries: There are other weapons and technology that jeopardize human dignity as well. 

    Given this, as well as the vagueness of the concept, it is preferable to rely on a variety of objections to AWS rather than on human dignity alone (A. Sharkey 2019).

    Much has been made, in military guidance on weaponry, of keeping humans "in the loop" or "on the loop"—these ways of spelling out "meaningful human control" are explored in (Santoni de Sio and van den Hoven 2018).

    There have been talks concerning the problems of assigning blame for an autonomous weapon's deaths, and a "responsibility gap" has been proposed (e.g., Rob Sparrow 2007), implying that neither the person nor the machine can be held accountable. 

    On the other hand, we don't presume that someone is to blame for every occurrence; instead, the true problem may be risk allocation (Simpson and Müller 2016). 

    According to risk analysis (Hansson 2013), determining who is at risk, who is a possible benefit, and who makes the choices is critical (Hansson 2018: 1822–1824). 

     



    Machine Ethics


    Machine ethics is the study of ethics for machines, or "ethical machines," as opposed to the human usage of machines as objects. 

    It's not always clear whether this is meant to encompass all of AI ethics or just a portion of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017).

    It seems at times that the (dubious) inference is at work here: if robots act in morally relevant ways, then we need a machine ethics.

    As a result, some people employ a wider definition: machine ethics is concerned with ensuring that robots' conduct toward humans, and perhaps toward other machines, is morally acceptable (Anderson and Anderson 2007: 15).

    This might involve simple product safety concerns, for example.

    Other authors sound more ambitious, but use a narrower definition: AI reasoning should be able to consider societal values, moral and ethical considerations; weigh the relative priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and ensure transparency (Dignum 2018: 1–2).

    Some of the debate in machine ethics is predicated on the idea that robots may in some sense be ethical agents accountable for their acts, or "autonomous moral agents" (see van Wynsberghe and Robbins 2019).

    The fundamental concept of machine ethics is now making its way into practical robotics, where the premise that these machines are artificial moral actors in any meaningful sense is seldom made (Winfield et al. 2019). 

    It has been noted that a robot trained to obey ethical principles may readily be reprogrammed to follow immoral ones (Vanderelst and Winfield 2018). 

    Isaac Asimov famously explored the idea that machine ethics might take the form of "rules," proposing "three laws of robotics" (Asimov 1942): First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law: A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

    Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    In a series of scenarios, Asimov demonstrated how, despite their hierarchical organization, conflicts between these three rules would make it difficult to apply them. 
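
    As a purely illustrative sketch (the candidate actions and their fields are invented), the hierarchy can be encoded as a lexicographic preference over options, which also shows why it gives no real guidance once every available action violates the First Law.

```python
# Illustrative encoding of Asimov's three laws as a lexicographic preference.
# Candidate actions and their fields are invented for this sketch.

def law_score(action):
    # Earlier entries dominate later ones, mirroring the laws' hierarchy.
    return (
        not action["harms_human"],   # First Law
        action["obeys_order"],       # Second Law
        action["preserves_self"],    # Third Law
    )

candidates = [
    {"name": "obey order, endanger bystander",
     "harms_human": True,  "obeys_order": True,  "preserves_self": True},
    {"name": "refuse order, stay safe",
     "harms_human": False, "obeys_order": False, "preserves_self": True},
    {"name": "refuse order, sacrifice self",
     "harms_human": False, "obeys_order": False, "preserves_self": False},
]

best = max(candidates, key=law_score)
print("chosen action:", best["name"])   # refuse order, stay safe
# If every candidate harmed a human, the ranking would still pick one of them,
# which is exactly the kind of conflict Asimov's stories exploit.
```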

    Weaker forms of "machine ethics" risk limiting "having an ethics" to ideas that would not ordinarily be deemed adequate (e.g., without "reflection" or even "activity"); stronger conceptions that advance towards artificial moral beings may describe a—currently—empty set. 




    Artificial Moral Agents


    If one considers machine ethics to be about moral agents in any meaningful way, these agents might be referred to as "artificial moral agents" with rights and obligations. 

    However, the debate over artificial creatures calls into question a number of fundamental ethical assumptions, and it may be quite helpful to comprehend these concepts in isolation from the human scenario (cf. Misselhorn 2020; Powers and Ganascia forthcoming). 

    Several writers use the term "artificial moral agent" in a less demanding meaning, drawing from the term "agent" in software engineering, where issues of duty and rights aren't a concern (Allen, Varner, and Zinser 2000). 

    James Moor (2006) distinguishes ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., a safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents, which "can make explicit ethical judgments and generally is competent to reasonably justify them"; an ordinary adult human is a full ethical agent. 

    Several ways of achieving "explicit" or "full" ethical agents have been proposed: programming the ethics in (operational morality), having the machine "develop" the ethics itself (functional morality), and, finally, full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). 

    Because programmed agents, like the neurons in a brain, exhibit "competence without comprehension," they are not usually regarded as "full" agents (Dennett 2017; Hakli and Mäkelä 2019). 

    In some discussions, the notion of a "moral patient" plays a role: ethical agents have responsibilities, while ethical patients have rights, because harm to them matters. 

    Some creatures, such as basic animals that may feel pain but cannot make rational decisions, seem to be patients without being agents. 

    On the other hand, it is often assumed that all agents will be patients as well (e.g., in a Kantian framework). 

    Being a person is usually seen as what makes an entity a responsible agent, someone who can have duties and be the object of ethical concern. 

    Personhood is usually a profound concept connected to phenomenal awareness, intention, and free will (Frankfurt 1971; Strawson 1998). 

    Torrance (2011) proposes that "artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of 'ethical status' in those humans" (2011: 116)—which he defines as "ethical productivity and ethical receptivity" (2011: 117)—as his expressions for moral agents and patients.

     

    Robots' Responsibilities


    There is widespread agreement that accountability, liability, and the rule of law are fundamental requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the question in the case of robots is how to do so and how responsibility should be distributed. 

    Will robots themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk take precedence over discussions of responsibility? Traditional distributions of responsibility already exist: a vehicle producer is responsible for the car's technical safety, a driver is responsible for driving, a technician is responsible for proper maintenance, and the public authorities are responsible for the technical condition of the roads, among other things. 

    Generally speaking, the effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware; distributed agency entails distributed responsibility (Taddeo and Floridi 2018: 751). 

    The manner in which this distribution occurs is not a problem unique to AI, but it takes on added significance in this context (Nyholm 2018a, 2018b). 

    Distributed control is often performed in traditional control engineering using a control hierarchy and control loops that span these hierarchies. 
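
    As a generic sketch of what such a hierarchy can look like (the two-level structure, gains, and setpoints here are invented for illustration and are not drawn from any cited system), a supervisory outer loop can set intermediate targets that a faster inner loop then tracks:

```python
def inner_loop(state: float, setpoint: float, gain: float = 0.5) -> float:
    """Low-level loop: a proportional controller nudging the state toward its setpoint."""
    return state + gain * (setpoint - state)

def outer_loop(state: float, goal: float, step: float = 1.0) -> float:
    """Supervisory loop: moves the inner loop's setpoint gradually toward the overall goal."""
    if abs(goal - state) <= step:
        return goal
    return state + step if goal > state else state - step

state, goal = 0.0, 10.0
for _ in range(40):
    setpoint = outer_loop(state, goal)   # the higher level picks the next intermediate target
    state = inner_loop(state, setpoint)  # the lower level regulates toward that target

print(round(state, 2))  # converges on the goal of 10.0
```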



    Rights for Robots



    Some authors have argued that it should be seriously considered whether current robots should be granted rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). 

    This position seems to rely largely on criticism of the opposing view and on the empirical observation that robots and other non-persons are sometimes treated as having rights. 

    In this vein, a "relational turn" has been proposed: if we relate to robots as though they had rights, then we might be well advised not to investigate whether they "really" have them (Coeckelbergh 2010, 2012, 2018). 

    This raises the question of how far such anti-realism or quasi-realism can go, and what it means to say from a human-centred perspective that "robots have rights" (Gerdes 2016). 

    Bryson, on the other hand, has argued that robots should not be granted rights (Bryson 2010), though she acknowledges that it is a possibility (Gunkel and Bryson 2014). 

    The question of whether robots (or other AI systems) should be classified as "legal entities" or "legal people" is a different one. 

    Governments, firms, and other organizations are "legal entities" and can have legal rights and obligations, even though they are not natural persons. 

    The European Parliament has discussed giving robots this status to deal with civil responsibility (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal culpability, which is reserved for human beings. 

    It would also be feasible to give robots merely a subset of rights and responsibilities. 

    "Such legislative action would be ethically unnecessary and legally difficult," it has been said, since it would not promote the interests of humanity (Bryson, Diamantis, and Grant 2017: 273). 

    There has long been a debate in environmental ethics concerning the legal rights of natural things such as trees (C. D. Stone 1972). 

    It has also been suggested that the ethical justifications for constructing robots with rights, or artificial moral patients, in the future are dubious (van Wynsberghe and Robbins 2019). 

    In the community of "artificial consciousness" researchers, some have called for a "moratorium on synthetic phenomenology," since creating such consciousness would probably entail ethical obligations toward a sentient being, such as not harming it and not ending its existence by switching it off (Bentley et al. 2018: 28f). 




    Singularity and Superintelligence



    The goal of modern AI, according to some, is to create an "artificial general intelligence" (AGI), as opposed to a technical or "narrow" AI. 

    AGI is usually distinguished from traditional notions of AI as a general-purpose system, and from Searle's notion of "strong AI," on which computers given the right programs can literally be said to understand and have other cognitive states (Searle 1980: 417). 

    The idea of the singularity is that if the trajectory of artificial intelligence reaches systems with a human level of intelligence, these systems would themselves be able to build AI systems that surpass the human level, i.e., that are "superintelligent" (see below). 

    Such superintelligent AI systems would rapidly enhance themselves or evolve into ever more intelligent systems. 

    This abrupt change in circumstances after achieving superintelligent AI is known as the "singularity," a point at which AI development is beyond human control and difficult to anticipate (Kurzweil 2005: 487). 

    The dread that "the robots we built will take over the world" captivated the human imagination long before there were computers (e.g., Butler 1863) and is the central theme of Čapek's renowned play, which popularized the word "robot" (Čapek 1920). Irving John Good first articulated this worry as a possible path from present AI to an "intelligence explosion": let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. 

    Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. 

    Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control (Good 1965: 33). 

    Kurzweil (1999, 2005, 2012) elucidates the optimistic argument from acceleration to singularity: computing power has been increasing exponentially, i.e., doubling roughly every two years since 1970 in accordance with "Moore's Law" on the number of transistors, and will continue to do so for some time in the future. 

    Kurzweil (1999) predicted that supercomputers would reach human computational capacity by 2010, that "mind uploading" would be possible by 2030, and that the "singularity" would occur by 2045. 

    Kurzweil is talking about an increase in the computing power that can be purchased at a given cost, but the funds available to AI companies have also risen dramatically in recent years: according to Amodei and Hernandez (2018 [OIR]), the actual computing power used to train an AI system doubled every 3.4 months from 2012 to 2018, resulting in a 300,000x increase—not the 7x gain that doubling every two years would have produced. 
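
    To make the arithmetic behind these growth rates concrete, here is a minimal sketch in Python; the length of the measurement window is an assumption chosen for illustration (roughly five years), and the printed factors only approximate the figures quoted above:

```python
def growth_factor(months: float, doubling_time_months: float) -> float:
    """Total growth factor after `months`, given a fixed doubling time."""
    return 2 ** (months / doubling_time_months)

window_months = 62  # assumed window of roughly five years between the measured systems

moores_law = growth_factor(window_months, 24)   # doubling every two years
ai_compute = growth_factor(window_months, 3.4)  # doubling every 3.4 months

print(f"Doubling every 24 months over {window_months} months: ~{moores_law:.0f}x")
print(f"Doubling every 3.4 months over {window_months} months: ~{ai_compute:,.0f}x")
# The contrast is a few-fold versus a few hundred thousand-fold, as the text notes.
```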

    A popular version of this argument (Chalmers 2010) speaks about a growth in the AI system's "intelligence" (rather than sheer processing capacity), but the critical point of "singularity" remains the moment at which AI systems take control and push AI development beyond human levels. 

    Bostrom (2014) goes into great length on what might happen at that time and the dangers it poses to mankind. 

    Eden et al. (2012), Armstrong (2014), and Shanahan (2015) provide summaries of the topic. 

    Apart from increased computing power, there are other paths to superintelligence, such as the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51). 

    Despite the apparent flaws in equating "intelligence" with processing capacity, Kurzweil seems to be correct in his assertion that people tend to underestimate the potential of exponential development. 

    Mini-test: how far would you travel in 30 steps if each step covered twice the distance of the previous one, beginning with a one-metre step? (The answer: nearly three times the distance from the Earth to the Moon.) Indeed, most recent AI advances can be attributed to the availability of processors that are orders of magnitude faster, bigger storage, and increased funding (Müller 2018). 
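
    The walking puzzle is a plain geometric series; the short sketch below (where the mean Earth-to-Moon distance of about 384,400 km is the only assumed constant) confirms the answer:

```python
# Steps of 1 m, 2 m, 4 m, ..., 2**29 m: a geometric series summing to 2**30 - 1 metres.
total_metres = sum(2 ** i for i in range(30))

earth_moon_km = 384_400              # approximate mean Earth-Moon distance
total_km = total_metres / 1000

print(f"Total distance after 30 doubling steps: {total_km:,.0f} km")
print(f"That is about {total_km / earth_moon_km:.1f} times the Earth-Moon distance")
# Roughly 1,073,742 km, i.e. about 2.8 times the distance to the Moon.
```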

    Müller and Bostrom (2016) and Bostrom, Dafoe, and Flynn (forthcoming) address the actual acceleration and its rates; Sandberg (2019) argues that progress will continue for some time. 

    The participants in this argument are all technophiles in the sense that they anticipate technology to advance quickly and bring about a wide range of positive changes—but they are divided into two groups: those who concentrate on benefits (such as Kurzweil) and those who focus on hazards (e.g., Bostrom). 

    Both parties sympathize with "transhuman" beliefs of humankind's survival in a new physical form, such as being transferred into a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). 

    They also consider the prospects of "human enhancement" in various respects, including intelligence, often referred to as "IA" (intelligence augmentation). 

    Future AI might be used for human enhancement, or might contribute further to the dissolution of the neatly defined single human individual. 

    Robin Hanson offers a comprehensive analysis of what would happen economically if human "brain emulation" allows genuinely intelligent robots or "ems" to be created (Hanson 2016). 

    Contrary to Kantian ethical traditions, which have argued that higher levels of rationality or intelligence would go along with a better understanding of what is moral and a better ability to act morally, the argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence (Gewirth 1978; Chalmers 2010: 36f). 

    The "orthogonality thesis" (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109) asserts that rationality and morality are completely separate dimensions (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109). 

    The singularity story has been criticized from a variety of perspectives. 

    Both Kurzweil and Bostrom seem to believe that intelligence is a one-dimensional feature and that the set of intelligent beings is completely ordered in the mathematical sense, although neither book goes into detail on intelligence. 

    In general, despite some attempts, the assumptions used in the compelling story of superintelligence and singularity have not been thoroughly examined. 

    One concern is whether such a singularity would ever occur—it might be philosophically impossible, practically impossible, or simply not happen due to unforeseen circumstances, such as individuals actively working against it. 

    The really interesting philosophical question is whether the singularity is merely a "myth" (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. 

    This is something that many practitioners take for granted (e.g., Brooks 2017 [OIR]). 

    They may do so because of fear of public reaction, an overestimation of practical issues, or a strong belief that superintelligence is an improbable consequence of present AI research (Müller forthcoming-a). 

    This debate raises the issue of whether the "singularity" worry is really a story about fictitious AI based on human anxieties. 

    Even if one believes the negative arguments are convincing and that the singularity is unlikely to occur, there is still a chance that one is mistaken. 

    Perhaps AI and robotics are not on the "secure path of a science" (Kant 1781/1787: B15), and perhaps philosophy is not either (Müller 2020). 

    So, even if one considers the probability of such a singularity ever occurring to be very low, there is still something to be said for addressing the extremely high-impact risk it would pose. 

    Existential Risk from Superintelligence

    Thinking about superintelligence in the long run raises the question of whether it could lead to the extinction of the human species, which is referred to as an "existential risk" (or XRisk): superintelligent systems may have preferences that conflict with the existence of humans on Earth, and thus may decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care). 

    The ability to think long-term is a key aspect of this literature. 

    It makes little difference whether the singularity (or another catastrophic event) happens in 30 years, 300 years, or 3000 years (Baum et al. 2019). 

    Perhaps there is an astronomical pattern in which an intelligent species is destined to discover AI at some point in the future, resulting in its own extinction. 

    Such a "great filter" might help explain the "Fermi conundrum," which explains why there is no trace of life in the known universe despite the high chance of it arising. 

    It would be awful news if we discovered that the "great filter" is still ahead of us, rather than a barrier that Earth has already overcome. 

    These issues are sometimes framed more narrowly as concerning human extinction (Bostrom 2013), or more broadly as concerning any large risk to the species (Rees 2018)—of which AI is only one (Häggström 2016; Ord 2020). 

    For dangers that are sufficiently high along the two dimensions of "scope" and "severity," Bostrom uses the concept of "global catastrophic risk" (Bostrom and Ćirković 2011; Bostrom 2013). 

    These risk discussions are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). 

    The long-term perspective has its own methodological challenges but has sparked a broad discussion: Tegmark (2017) focuses on AI and human life "3.0" after the singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) look at longer-term ethical AI policy challenges. 

    Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risky (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018; Drexler 2019). 




    Is it Possible to Control Superintelligence?



    In a nutshell, the "problem of control" is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). 

    In a broader sense, it is the problem of ensuring that an AI system will turn out to be beneficial in the eyes of humans (Russell 2019); this is frequently referred to as "value alignment." How easy or hard it is to control a superintelligence depends significantly on the speed of its "take-off." 

    As a result, systems with self-improvement, such as AlphaZero, have received particular attention (Silver et al. 2018). 

    One facet of this dilemma is that we may determine that a specific feature is desirable, only to discover that it has unintended repercussions that are so detrimental that we no longer want it. 

    This is the age-old conundrum of King Midas, who desired that everything he touched turn to gold. 

    Various versions of this problem have been discussed, such as the "paperclip maximiser" (Bostrom 2003b) or a program optimizing chess performance (Omohundro 2014). 

    Speculations about omniscient beings, radical changes on a "latter day," and the promise of immortality through transcendence of our current bodily form are common themes in discussions of superintelligence (Capurro 1993; Geraci 2008, 2010; O'Connell 2017: 160ff). 

    These concerns also raise a well-known epistemological problem: can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already made their appearance: according to one critic, people worry that computers will get too smart and take over the world, but the real problem is that they are too stupid and have already taken over the world (Domingos 2015). According to the new nihilists, "techno-hypnosis" through digital technology has now become our main method of avoiding the loss of meaning (Gertz 2018). 

    Both kinds of opponent would argue that we need an ethics for the "small" problems that occur with actual AI and robotics, as discussed in the sections above, but not for the "big ethics" of existential risk from AI. 





    Conclusion.


    The singularity thus raises the problem of the concept of AI once again. 

    It's amazing how imagination, or "vision," has played such an important part in the field from its inception at the "Dartmouth Summer Research Project" (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). 

    And the assessment of this vision keeps changing: we have gone from mantras like "AI is impossible" (Dreyfus 1972) and "AI is merely automation" (Lighthill 1973) to "AI will solve all problems" (Kurzweil 1999) and "AI may kill us all" (Bostrom 2014). 

    This drew attention from the media and prompted public relations efforts, but it also raised the question of how much of this "AI philosophy and ethics" is really about AI rather than a hypothetical technology. 

    As we have said, AI and robots have posed basic challenges about what we should do with these systems, what they should accomplish, and what threats they pose in the long run. 

    They also cast doubt on humanity's status as the planet's most intellectual and dominating species. 

    We have seen the challenges that have arisen, and we will have to watch technological and social developments closely in order to catch new issues early on, develop a philosophical analysis, and learn from them for traditional problems of philosophy.





    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 


    Accidents and Risk Assessment; Algorithmic Bias and Error; Autonomous Weapons Systems, Ethics of; Driverless Cars and Trucks; Moral Turing Test; Robot Ethics; Trolley Problem.



    References & Further Reading:


    Anderson, Michael, and Susan Leigh Anderson. 2007. “Machine Ethics: Creating an Ethical Intelligent Agent.” AI Magazine 28, no. 4 (Winter): 15–26.

    Anderson, Susan Leigh. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI & Society 22, no. 4 (March): 477–93.

    Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

    Asimov, Isaac. 1985. Robots and Empire. Garden City, NY: Doubleday.

    Bensoussan, Alain, and Jérémy Bensoussan. 2015. Droit des Robots. Brussels: Éditions Larcier.

    Coeckelbergh, Mark. 2010. “Robot Rights? Towards a Social-Relational Justification of  Moral Consideration.” Ethics and Information Technology 12, no. 3 (September): 209–21.

    Darling, Kate. 2012. “Extending Legal Protection to Social Robots.” IEEE Spectrum, September 10, 2012. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots.

    Floridi, Luciano, and J. W. Sanders. 2001. “Artificial Evil and the Foundation of Computer Ethics.” Ethics and Information Technology 3, no. 1 (March): 56–66.

    Foundation for Responsible Robotics (FRR). 2019. Mission Statement. https://responsiblerobotics.org/about-us/mission/.

    Gunkel, David J. 2018. Robot Rights. Cambridge, MA: MIT Press. 

    Lin, Patrick, Keith Abney, and George A. Bekey. 2012. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.

    Lin, Patrick, Ryan Jenkins, and Keith Abney. 2017. Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York: Oxford University Press.

    McCauley, Lee. 2007. “AI Armageddon and the Three Laws of Robotics.” Ethics and Information Technology 9, no. 2 (July): 153–64.

    Veruggio, Gianmarco. 2006. “The EURON Roboethics Roadmap.” In 2006 6th IEEE RAS International Conference on Humanoid Robots, 612–17. Genoa, Italy: IEEE.

    Veruggio, Gianmarco, and Fiorella Operto. 2008. “Roboethics: Social and Ethical Implications of Robotics.” In Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib, 1499–1524. New York: Springer.

    Veruggio, Gianmarco, Jorge Solis, and Machiel Van der Loos. 2011. “Roboethics: Ethics Applied to Robotics.” IEEE Robotics & Automation Magazine 18, no. 1 (March): 21–22.

    Wallach, Wendell, and Colin Allen. 2009. Moral Machines: Teaching Robots Right from Wrong. Oxford, UK: Oxford University Press.


    • Abowd, John M, 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
    • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology, 7(3): 149–155. doi:10.1007/s10676-006-0004-4
    • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence, 12(3): 251–261. doi:10.1080/09528130050111428
    • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist, 18(1): art. 20170012. doi:10.1515/gj-2017-0012
    • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans, Washington, DC: Pew Research Center.
    • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine, 28(4): 15–26.
    • ––– (eds.), 2011, Machine Ethics, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
    • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization, Durham, NC and London: Duke University Press.
    • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots, Boca Raton, FL: CRC Press.
    • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics, 12: 68–84.
    • –––, 2014, Smarter Than Us, Berkeley, CA: MIRI.
    • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17, Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
    • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine, 38(2): 40–53. doi:10.1109/MTS.2019.2915154
    • Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction, March 1942. Reprinted in “I, Robot”, New York: Gnome Press 1950, 1940ff.
    • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature, 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
    • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work, New York: Oxford University Press.
    • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight, 21(1): 53–83. doi:10.1108/FS-04-2018-0037
    • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie, Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
    • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective, second edition, Cambridge, MA: MIT Press.
    • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19, Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
    • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [Bentley et al. 2018 available online]
    • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society, 34(3): 130–140. doi:10.1080/01972243.2018.1444249
    • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research, 81: 149–159.
    • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly, 53(211): 243–255. doi:10.1111/1467-9213.00309
    • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [Bostrom 2003b revised available online]
    • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century, Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
    • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines, 22(2): 71–85. doi:10.1007/s11023-012-9281-3
    • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy, 4(1): 15–31. doi:10.1111/1758-5899.12002
    • –––, 2014, Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.
    • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks, New York: Oxford University Press.
    • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence, S Matthew Liao (ed.), New York: Oxford University Press. [Bostrom, Dafoe, and Flynn forthcoming – preprint available online]
    • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence, Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [Bostrom and Yudkowsky 2014 available online]
    • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [Bradshaw, Neudert, and Howard 2019 available online/]
    • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology, Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
    • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, New York: W. W. Norton.
    • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
    • –––, 2019, “The Past Decade and Future of Ai’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade, Madrid: Turner - BVVA. [Bryson 2019 available online]
    • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law, 25(3): 273–291. doi:10.1007/s10506-017-9214-9
    • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines, 29(3): 461–494. doi:10.1007/s11023-019-09497-4
    • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch), 13 June 1863. [Butler 1863 available online]
    • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey, (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
    • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review, 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
    • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law, Cheltenham: Edward Elgar.
    • Čapek, Karel, 1920, R.U.R., Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
    • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung, 47: 93–102.
    • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian , 04 January 2019. [Cave 2019 available online]
    • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies, 17(9–10): 7–65. [Chalmers 2010 available online]
    • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, Stanford Encyclopedia of Philosophy (Spring 2018 edition), URL = <https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/>
    • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology, 12(3): 209–221. doi:10.1007/s10676-010-9235-5
    • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription, London: Palgrave. doi:10.1057/9781137025968
    • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society, 31(4): 455–462. doi:10.1007/s00146-015-0626-3
    • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications, Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
    • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature, 538(7625): 311–313. doi:10.1038/538311a
    • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust, Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [Cristianini forthcoming – preprint available online]
    • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines, 25(3): 231–246. doi:10.1007/s11023-015-9365-y
    • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology, 18(4): 299–309. doi:10.1007/s10676-016-9403-3
    • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology, 29(3): 245–268. doi:10.1007/s13347-015-0211-1
    • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work, Cambridge, MA: Harvard University Press.
    • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies, 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
    • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics, first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
    • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications, Boston, MA: MIT Press.
    • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development an Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [DARPA 1983 available online]
    • Dennett, Daniel C, 2017, From Bacteria to Bach and Back: The Evolution of Minds, New York: W.W. Norton.
    • Devlin, Kate, 2018, Turned On: Science, Sex and Robots, London: Bloomsbury.
    • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism, 3(3): 398–415. doi:10.1080/21670811.2014.976411
    • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology, 20(1): 1–3. doi:10.1007/s10676-018-9450-z
    • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, London: Allen Lane.
    • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014, Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
    • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances, 4(1): eaao5580. doi:10.1126/sciadv.aao5580
    • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [Drexler 2019 available online]
    • Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason, second edition, Cambridge, MA: MIT Press 1992.
    • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, New York: Free Press.
    • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, Calibrating Noise to Sensitivity in Private Data Analysis, Berlin, Heidelberg.
    • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment, (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
    • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, London: St. Martin’s Press.
    • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs, 8 (July 2013). [Anonymous 2013 available online]
    • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [European Group 2018 available online ]
    • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, New York: NYU Press.
    • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon, 9 May 2016. URL = <Floridi 2016 available online>
    • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines, 28(4): 689–707. doi:10.1007/s11023-018-9482-5
    • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines, 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
    • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083): 20160360. doi:10.1098/rsta.2016.0360
    • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review, 5: 5–15.
    • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics, 10(1): 77–93. doi:10.1515/pjbr-2019-0006
    • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law, 25(3): 305–323. doi:10.1007/s10506-017-9212-y
    • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy, 68(1): 5–20.
    • Frey, Carl Benedict, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation, Princeton, NJ: Princeton University Press.
    • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [Frey and Osborne 2013 available online]
    • Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité, Paris: Éditions du Seuil.
    • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(Inl))”, Committee on Legal Affairs, 10.11.2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
    • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/Ec”, Official Journal of the European Union, 119 (4 May 2016), 1–88. [Regulation (EU) 2016/679 available online]
    • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion, 76(1): 138–166. doi:10.1093/jaarel/lfm101
    • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
    • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society, 45(3): 274–279. doi:10.1145/2874239.2874278
    • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [GFMTDI 2017 available online]
    • Gertz, Nolen, 2018, Nihilism and Technology, London: Rowman & Littlefield.
    • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy, 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
    • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique, Maxime Kristanek (ed.), accessed: 16 April 2020, URL = <Gibert 2019 available online>
    • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology, 31(2): 169–188. doi:10.1007/s13347-017-0285-z
    • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6, Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
    • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning, Cambridge, MA: MIT Press.
    • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine, 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
    • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy, 34(3): 362–375. doi:10.1093/oxrep/gry002
    • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review, 99(2): 58–63. doi:10.1257/aer.99.2.58
    • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior, 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
    • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology, 20(2): 87–99. doi:10.1007/s10676-017-9442-4
    • –––, 2018b, Robot Rights, Boston, MA: MIT Press.
    • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology, 27(1): 1–142.
    • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
    • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist, 102(2): 259–275. doi:10.1093/monist/onz009
    • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth, Oxford: Oxford University Press.
    • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World, New York: Palgrave Macmillan.
    • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis, 38(9): 1820–1829. doi:10.1111/risa.12978
    • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow, New York: Harper.
    • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy, Princeton, NJ: Princeton University Press.
    • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts, (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
    • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version), <IEEE 2019 available online>.
    • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future, New York: Norton.
    • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age, New York: Oxford University Press.
    • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence, 1(9): 389–399. doi:10.1038/s42256-019-0088-2
    • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines, 27(4): 575–590. doi:10.1007/s11023-017-9417-6
    • Kahnemann, Daniel, 2011, Thinking Fast and Slow, London: Macmillan.
    • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries, Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
    • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft. Translated as Critique of Pure Reason, Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
    • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics, 26(1): 293–307. doi:10.1007/s11948-019-00096-1
    • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion, New York: Harcourt Brace, 1932, 358–373.
    • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic, June 2018. [Kissinger 2018 available online]
    • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, London: Penguin.
    • –––, 2005, The Singularity Is Near: When Humans Transcend Biology, London: Viking.
    • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed, New York: Viking.
    • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19, Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
    • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships, New York: Harper & Co.
    • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial intelligence: A Paper Symposion, London: Science Research Council. [Lighthill 1973 available online]
    • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving, Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
    • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
    • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008), 112 pp. [Lin, Bekey, and Abney 2008 available online]
    • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12, Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
    • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction, London: Routledge.
    • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction, 3(CSCW): art. 81. doi:10.1145/3359183
    • Minsky, Marvin, 1985, The Society of Mind, New York: Simon & Schuster.
    • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence, 278: art. 103179. doi:10.1016/j.artint.2019.103179
    • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics, 22(2): 303–341. doi:10.1007/s11948-015-9652-2
    • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems, 21(4): 18–21. doi:10.1109/MIS.2006.80
    • Moravec, Hans, 1990, Mind Children, Cambridge, MA: Harvard University Press.
    • –––, 1998, Robot: Mere Machine to Transcendent Mind, New York: Oxford University Press.
    • Mozorov, Eygeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism, New York: Public Affairs.
    • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation, 4(3): 212–215. doi:10.1007/s12559-012-9129-4
    • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
    • ––– (ed.), 2016b, Risks of Artificial Intelligence, London: Chapman & Hall - CRC Press. doi:10.1201/b19187
    • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz, 20: 5–15. [Müller 2018 available online]
    • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals, Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
    • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence, New York: Oxford University Press.
    • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence, New York: Oxford University Press.
    • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence, Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
    • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology, London: Penguin.
    • Nørskov, Marco (ed.), 2017, Social Robots, London: Routledge.
    • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics, 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
    • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass, 13(7): e12506. doi:10.1111/phc3.12506
    • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
    • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death, London: Granta.
    • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Largo, ML: Crown.
    • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence, 26(3): 303–315. doi:10.1080/0952813X.2014.895111
    • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury.
    • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence, Markus D. Dubber, Frank Pasquale, and Sunnit Das (eds.), New York: Oxford.
    • Rawls, John, 1971, A Theory of Justice, Cambridge, MA: Belknap Press.
    • Rees, Martin, 2018, On the Future: Prospects for Humanity, Princeton: Princeton University Press.
    • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine, 35(2): 46–53. doi:10.1109/MTS.2016.2554421
    • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society, 117(2): 187–206. doi:10.1093/arisoc/aox008
    • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War, Boca Raton, LA: CRC Press, Taylor & Francis. doi:10.1201/b18899
    • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control, New York: Viking.
    • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine, 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
    • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for on-Road Motor Vehicles”, J3016_201806, 15 June 2018. [SAE International 2015 available online]
    • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence, Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
    • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight, 21(1): 84–99. doi:10.1108/FS-04-2018-0044
    • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI, 5(February): 15. doi:10.3389/frobt.2018.00015
    • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World, New York: W. W. Norton.
    • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, 3(3): 417–424. doi:10.1017/S0140525X00005756
    • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19, Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
    • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City, London: Allen Lane.
    • Shanahan, Murray, 2015, The Technological Singularity, Cambridge, MA: MIT Press.
    • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology, 21(2): 75–87. doi:10.1007/s10676-018-9494-0
    • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics, Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
    • Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [Shoham et al. 2018 available online]
    • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science, 362(6419): 1140–1144. doi:10.1126/science.aar6404
    • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research, 6(1): 1–10. doi:10.1287/opre.6.1.1
    • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly, 66(263): 302–322. doi:10.1093/pq/pqv075
    • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
    • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy, 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
    • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society, 31(4): 445–454. doi:10.1007/s00146-015-0625-4
    • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys, 48(4): art. 55. doi:10.1145/2871196
    • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy, 16(3): 26–33.
    • Stone, Christopher D., 1972, “Should Trees Have Standing - toward Legal Rights for Natural Objects”, Southern California Law Review, 45: 450–501.
    • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [Stone et al. 2016 available online]
    • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy, Taylor & Francis. doi:10.4324/9780415249126-V014-1
    • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing, 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
    • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review, 8(2): 30 June 2019. [Susser, Roessler, and Nissenbaum 2019 available online]
    • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science, 361(6404): 751–752. doi:10.1126/science.aat5991
    • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
    • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. [Taylor, et al. 2018 available online]
    • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence, New York: Knopf.
    • Thaler, Richard H and Sunstein, Cass, 2008, Nudge: Improving decisions about health, wealth and happiness, New York: Penguin.
    • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired, 23 November 2018. [Thompson and Bremmer 2018 available online]
    • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist, 59(2): 204–217. doi:10.5840/monist197659224
    • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
    • Trump, Donald J, 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [Trump 2019 available online]
    • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence, Berlin: Springer. doi:10.1007/978-3-319-96235-1
    • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview, (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
    • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
    • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artifical Intelligence, (IAAI’04), San Jose, CA: AAAI Press, 900–907.
    • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation, London: Routledge. doi:10.4324/9781315586397
    • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics, 25(3): 719–735. doi:10.1007/s11948-018-0030-8
    • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
    • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society, 4(2): art. 205395171774353. doi:10.1177/2053951717743530
    • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics, 2(8): 316–318. doi:10.1038/s41928-019-0294-2
    • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things, Chicago: University of Chicago Press.
    • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review, 2019(2): 494–620.
    • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law, 7(2): 76–99. doi:10.1093/idpl/ipx005
    • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology, 31(2): 842–887. doi:10.2139/ssrn.3063289
    • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics, London: Routledge.
    • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence, Amherst, MA: Prometheus Books.
    • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy, London: Nesta. [Westlake 2014 available online]
    • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [Whittaker et al. 2018 available online]
    • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [Whittlestone 2019 available online]
    • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems, special issue of Proceedings of the IEEE, 107(3): 501–632.
    • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/doing-allowing/>
    • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
    • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security, Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
    • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation, Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
    • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper, 3339(25 June 2019): 1-19. [Zayed and Loft 2019 available online]
    • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology, 32(4): 661–683. doi:10.1007/s13347-018-0330-6
    • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, New York: Public Affairs.


    What Is Artificial General Intelligence?

    Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...