
Artificial Intelligence - AI Systems That Are Autonomous Or Semiautonomous.


Autonomous and semiautonomous systems are characterized by the degree to which their decision-making depends on external commands.

They are best understood alongside two related categories: conditionally autonomous and automated systems.

Semiautonomous systems depend on a human user somewhere "in the loop" for decision-making, behavior management, or contextual interventions, while autonomous systems may make decisions within a defined region of operation without human input.

Under certain conditions, conditionally autonomous systems may operate independently.

Automated systems differ in kind from semiautonomous and autonomous systems: the distinction is between automation and autonomy.

An automated system's actions are preset sequences tied directly to specific inputs, while an autonomous system chooses its actions through contextual decision-making.

When a system's actions and possibilities for action are established in advance as reactions to certain inputs, it is termed automated.

A garage door that automatically stops closing when a sensor detects an impediment in its path is an example of an automated system.
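The defining trait of an automated system, a preset action tied directly to a specific input, can be sketched in a few lines of Python (a purely hypothetical illustration, not any real controller's logic):

```python
# A minimal sketch of an automated (not autonomous) controller:
# each input maps to exactly one preset response, with no
# contextual evaluation or decision-making.

def garage_door_step(closing: bool, obstacle_detected: bool) -> str:
    """Return the door's next action for the current sensor reading."""
    if closing and obstacle_detected:
        return "stop"               # preset safety response
    if closing:
        return "continue_closing"   # no obstacle: carry on
    return "idle"

print(garage_door_step(closing=True, obstacle_detected=True))  # stop
```

Every behavior here was established in advance; the function never weighs context or chooses among goals.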

Sensors and user interaction may both be used to collect data.

An automated dishwasher or clothes washer, for example, is a user-initiated automatic system in which the human user sets the sequences of events and behaviors via a user interface, and the machine subsequently executes the commands according to established mechanical sequences.

Autonomous systems, on the other hand, are ones in which the capacity to evaluate conditions and choose actions is intrinsic to the system.

The autonomous system, like an automated system, depends on sensors, cameras, or human input to give data, but its responses are marked by more complicated decision-making based on the contextual evaluation of many simultaneous inputs such as user intent, environment, and capabilities.

When applied to real-world systems, the terms automated, semiautonomous, and autonomous shift in meaning depending on the nature of the tasks at hand and the intricacies of the decision-making involved.

The boundaries between these categories are rarely clear or exact.

Finally, the degree to which these categories apply is determined by the size and scope of the activity in question.

While the basic differences between automated, semiautonomous, and autonomous systems outlined above are widely accepted, there is some dispute as to how cleanly these system types map onto real systems.

The levels of autonomy established by SAE (formerly the Society of Automotive Engineers) for autonomous automobiles are one example of such ambiguity.

Depending on road or weather conditions, as well as situational indices like the existence of road barriers, lane markings, geo-fencing, adjacent cars, or speed, a single system may be Level 2 partly autonomous, Level 3 conditionally autonomous, or Level 4 autonomous.

The degree of autonomy may also be determined by how a driving task is characterized.

In this sense, a system's categorization is determined as much by its technical structure as by the conditions of its operation or the characteristics of the activity focus.


Self-Driving Vehicles.

Automobile systems illustrate the contrasts between automated, semiautonomous, conditionally autonomous, and fully autonomous systems.

Conventional cruise control is an example of automated technology.

The user specifies a vehicle speed goal, and the vehicle maintains that speed while adjusting acceleration and deceleration as needed by the terrain.

However, in the case of semiautonomous vehicles, a vehicle may be equipped with an adaptive cruise control feature (one that regulates a vehicle's speed in relation to a leading vehicle and to a user's input), as well as lane keeping assistance, automatic braking, and collision mitigation technology.

Semiautonomous cars are now available on the market.

Such systems can interpret many possible inputs (surrounding cars, lane markings, human input, obstacles, speed limits, etc.) and regulate longitudinal and lateral control to semiautonomously direct the vehicle's trajectory.

The human user is still involved in decision-making, monitoring, and interventions in this system.

Conditional autonomy refers to a system that allows a human user to "leave the loop" of control and decision-making under certain conditions.

The vehicle analyzes emergent inputs and controls its behavior to accomplish the objective without human supervision or intervention after a goal is set (e.g., to continue on a route).

Within the activity (as defined by the goal and the means available), behaviors are governed and controlled without the involvement of the human user.

It's crucial to remember that every categorization is conditional on the aim and activity being operationalized.

Finally, an autonomous system has fewer constraints than conditional autonomy and is capable of controlling all tasks in a given activity.

An autonomous system, like conditional autonomy, functions inside the activity structure without the involvement of a human user.

Autonomous Robotics

Autonomous systems are found throughout the field of robotics, for a variety of reasons.

There are a variety of reasons why autonomous robots should be used to replace or augment humans, including safety (for example, spaceflight or planetary surface exploration), undesirable circumstances (monotonous tasks such as domestic chores and strenuous labor such as heavy lifting), and situations where human action is limited or impossible (search and rescue in confined conditions).

Robotics applications, like automobile applications, may be deemed autonomous within the confines of a carefully defined domain or activity area, such as a factory assembly line or a residence.

As with autonomous cars, the degree of autonomy depends on the specific domain and, in many situations, excludes maintenance and repair.

Unlike automated systems, however, an autonomous robot inside such a defined activity structure would behave to achieve a set objective by sensing its surroundings, analyzing contextual inputs, and regulating behavior appropriately without the need for human interaction.

Autonomous robots are now used in a wide range of applications, including domestic uses such as autonomous lawn care robots and interplanetary exploration applications such as the Mars rovers MER-A and MER-B.

Semiautonomous Weapons

As part of contemporary military capabilities, autonomous and semiautonomous weapon systems are now being developed.

The definition of, and difference between, autonomous and semiautonomous changes significantly depending on the operationalization of the terminology, the context, and the sphere of activity, much as it does in the preceding automobile and robotics instances.

Consider a landmine as an example of an automated weapon that is not autonomous.

It reacts with lethal force when a sensor is triggered; no decision-making capability or human interaction is involved.

A semiautonomous system, on the other hand, processes inputs and acts on them for a collection of tasks that form weaponry activity in collaboration with a human user.

The weapons system and the human operator must work together to complete a single task.

To put it another way, the human user is "in the loop." Identifying a target, aiming, and firing are examples of these tasks.

Others include navigating toward a target, positioning, and reloading.

These duties are shared between the system and the human user in a semiautonomous weapon system.

An autonomous system, on the other hand, would be accountable for all of these duties without the need for human monitoring, decision-making, or intervention after the objective was determined and the parameters provided.

There are presently no completely autonomous weapons systems that meet these requirements.
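The semiautonomous/autonomous contrast just drawn can be made concrete with a toy task-allocation table (the task names and assignments below are invented for illustration):

```python
# Hypothetical allocation of weapon-activity tasks. In a semiautonomous
# system some tasks stay with a human "in the loop"; in an autonomous
# system every task belongs to the machine once the objective is set.

TASKS = ["identify_target", "aim", "fire", "navigate", "position", "reload"]

SEMIAUTONOMOUS = {          # duties shared with a human user
    "identify_target": "human",
    "aim": "system",
    "fire": "human",
    "navigate": "system",
    "position": "system",
    "reload": "system",
}

AUTONOMOUS = {task: "system" for task in TASKS}  # no human in the loop

def human_in_loop(allocation: dict[str, str]) -> bool:
    """True if any task in the activity is assigned to a human."""
    return "human" in allocation.values()
```

The categorization hinges on a single question the function makes explicit: does any constituent task still belong to the human user?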

These meanings, as previously stated, are technologically, socially, legally, and linguistically dependent.

The distinction between semiautonomous and autonomous systems has ethical, moral, and political implications, particularly in the case of weapons systems.

This is particularly relevant for assessing accountability, since causal agency and decision-making may be distributed across developers and consumers.

As in the case of machine learning algorithms, the sources of agency and decision-making may also be ambiguous.



The ambiguity of their definitions is mirrored in the various obstacles to designing optimal user interfaces for semiautonomous and autonomous systems.

For example, in the case of automobiles, ensuring that the user and the system (as designed by the system's designers) have a consistent model of the capabilities being automated (as well as the intended distribution and degree of control) is crucial for the safe transfer of control responsibility.

Autonomous systems pose similar user-interface issues in the sense that, once an activity domain is specified, control and responsibility are binary: either the system or the human user is responsible.

In this case, the problem reduces to defining the activity and handing over control.

Because the description of an activity domain has no required relationship to the composition, structure, and interaction of constituent activities, semiautonomous systems create more difficult issues for the design of user interfaces.

Particular tasks (such as a car maintaining lateral position in a lane) may be defined by an engineer's choice of specific technical equipment (and the restrictions that come with it) and therefore bear no relation to the user's mental representation of that task.

An obstacle detection task, in which a semiautonomous system moves about an environment by avoiding impediments, is an example.

The machine's obstacle detection technologies (camera, radar, optical sensors, touch sensors, thermal sensors, mapping, and so on) define what is and isn't an impediment, and such restrictions may be opaque to the user.
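A sketch makes the point concrete: what counts as an "impediment" falls out of which detectors the engineers chose and how their readings are combined. All sensors and thresholds below are invented for illustration:

```python
# Hypothetical sensor-fusion rule: an object is an "obstacle" only if
# enough of the chosen detectors agree. Change the sensor list or the
# vote threshold and the machine's very definition of "obstacle"
# changes, typically without the user ever seeing why.

SENSOR_THRESHOLDS = {"camera": 0.8, "radar": 0.6, "ultrasonic": 0.5}

def is_obstacle(readings: dict[str, float], votes_needed: int = 2) -> bool:
    """Count detectors whose confidence clears their threshold."""
    votes = sum(
        1 for sensor, confidence in readings.items()
        if confidence >= SENSOR_THRESHOLDS.get(sensor, 1.0)
    )
    return votes >= votes_needed

print(is_obstacle({"camera": 0.9, "radar": 0.7, "ultrasonic": 0.2}))  # True
```

The user's mental model ("it avoids things in its way") has no necessary connection to these internal thresholds, which is exactly the design gap the text describes.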

As a consequence of the ambiguity, the system must communicate with a human user when assistance is required, and the system (and its designers) must recognize and anticipate any conflict between system and user models.

Other considerations for designing semiautonomous and autonomous systems (specifically in relation to the ethical and legal dimensions complicated by the distribution of agency among developers and users) include identification and authorization methods and protocols, in addition to the issues raised above.

Identifying and authorizing users to activate autonomous technology is a crucial problem, because once activated, such systems no longer require continuous monitoring, intermittent decision-making, or interaction.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Autonomy and Complacency; Driverless Cars and Trucks; Lethal Autonomous Weapons Systems.

Further Reading

Antsaklis, Panos J., Kevin M. Passino, and Shyh Jong Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems 11, no. 4 (June): 5–13.

Bekey, George A. 2005. Autonomous Robots: From Biological Inspiration to Implementation and Control. Cambridge, MA: MIT Press.

Norman, Donald A., Andrew Ortony, and Daniel M. Russell. 2003. “Affect and Machine Design: Lessons for the Development of Autonomous Machines.” IBM Systems Journal 42, no. 1: 38–44.

Roff, Heather. 2015. “Autonomous or ‘Semi’ Autonomous Weapons? A Distinction without a Difference?” Huffington Post, January 16, 2015.

SAE International. 2014. “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.” J3016. SAE International Standard.

Artificial Intelligence - How Do Autonomous Vehicles Leverage AI?

Using a virtual driver system, driverless automobiles and trucks, also known as self-driving or autonomous vehicles, are capable of moving through settings with little or no human control.

A virtual driver system is a set of characteristics and capabilities that augment or replicate the actions of an absent driver to the point that, at the maximum degree of autonomy, the driver may not even be present.

Diverse technology uses, restricting circumstances, and categorization methods make reaching an agreement on what defines a driverless car difficult.

A semiautonomous system, in general, is one in which the human performs certain driving functions (such as lane keeping) while others are performed autonomously (such as acceleration and deceleration).

In a conditionally autonomous system, all driving tasks are handled autonomously only under certain circumstances.

All driving duties are automated in a fully autonomous system.

Automobile manufacturers, technology businesses, automotive suppliers, and universities are all testing and developing applications.

Each builder's car or system, as well as the technical road that led to it, demonstrates a diverse range of technological answers to the challenge of developing a virtual driving system.

Ambiguities exist at the level of defining circumstances, so that the same technological system may be characterized in a variety of ways depending on factors such as location, speed, weather, traffic density, human attention, and infrastructure.

More complexity arises when individual driving tasks are operationalized for feature development and when context (such as connected vehicles, smart cities, and the regulatory environment) plays a role in shaping solutions.

Because of this complication, producing driverless cars often necessitates collaboration across several roles and disciplines of study, such as hardware and software engineering, ergonomics, user experience, legal and regulatory, city planning, and ethics.

The development of self-driving automobiles is both a technical and a socio-cultural enterprise.

The distribution of mobility tasks across an array of equipment to collectively perform a variety of activities such as assessing driver intent, sensing the environment, distinguishing objects, mapping and wayfinding, and safety management is among the technical problems of engineering a virtual driver system.

LIDAR, radar, computer vision, global positioning, odometry, and sonar are among the hardware and software components of a virtual driving system.

There are many approaches to solving discrete autonomous movement problems.

With cameras, maps, and sensors, sensing and processing can be centralized in the vehicle, or it can be distributed throughout the environment and across other vehicles, as with intelligent infrastructure and V2X (vehicle to everything) capability.

The burden and scope of this processing, and the scale of the problems to be solved, are closely tied to the expected level of human attention and intervention.

As a result, the most frequently referenced classification of driverless capability, published by the Society of Automotive Engineers, is formally structured along the lines of human attentional demands and intervention requirements, and it has been widely adopted.

This taxonomy defines six levels of driving automation, ranging from Level 0 to Level 5.

Level 0 refers to no automation: the human driver is solely responsible for longitudinal control (acceleration and deceleration) and lateral control (steering).

At Level 0, the human driver is in charge of monitoring the environment and reacting to any unexpected safety hazards.

Automated systems that take over either longitudinal or lateral control are classified as Level 1, or driver assistance.

The driver is in charge of observation and intervention.

Level 2 denotes partial automation, in which the virtual driver system is in charge of both lateral and longitudinal control.

The human driver is deemed to be in the loop, which means that they are in charge of monitoring the environment and acting in the event of a safety-related emergency.

Commercially available systems have not yet advanced beyond Level 2 capability.

What distinguishes Level 3, conditional autonomy, from Level 2 is the monitoring capability of the virtual driver system.

At this stage, the human driver may be disconnected from the surroundings and depend on the autonomous system to keep track of it.

The person is required to react to calls for assistance in a range of situations, such as during severe weather or in construction zones.

A navigation system (e.g., GPS) is not required at this level.

To operate at Level 2 or Level 3, a vehicle does not need a map or a specific destination.

A human driver is not needed to react to a request for intervention at Level 4, often known as high automation.

The virtual driving system is in charge of navigation, locomotion, and monitoring.

When a specific condition cannot be satisfied, such as when a navigation destination is obstructed, it may request that a driver intervene.

If the human driver does not choose to intervene, the system may safely stop or redirect, depending on the engineering approach.

The classification of this situation is based on standards of safe driving, which are established not only by technical competence and environmental circumstances, but also by legal and regulatory agreements and lawsuit tolerance.

Level 5 autonomy, often known as complete automation, refers to a vehicle that is capable of doing all driving activities in any situation that a human driver could handle.

Although Level 4 and Level 5 systems do not need the presence of a person, they still necessitate substantial technological and social cooperation.
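The division of duties across the levels described above can be summarized in code (a simplified paraphrase of the SAE J3016 scheme, not the standard's own wording):

```python
# Simplified summary of the SAE driving-automation levels: who handles
# steering/acceleration, who monitors the environment, and who serves
# as the fallback when something goes wrong.

SAE_LEVELS = {
    0: {"name": "No Automation",          "control": "human",  "monitoring": "human",  "fallback": "human"},
    1: {"name": "Driver Assistance",      "control": "shared", "monitoring": "human",  "fallback": "human"},
    2: {"name": "Partial Automation",     "control": "system", "monitoring": "human",  "fallback": "human"},
    3: {"name": "Conditional Automation", "control": "system", "monitoring": "system", "fallback": "human"},
    4: {"name": "High Automation",        "control": "system", "monitoring": "system", "fallback": "system"},
    5: {"name": "Full Automation",        "control": "system", "monitoring": "system", "fallback": "system"},
}

def human_must_watch_road(level: int) -> bool:
    """At Levels 0-2 the human driver must monitor the environment."""
    return SAE_LEVELS[level]["monitoring"] == "human"
```

Reading down the columns shows the progression the prose describes: control transfers first, monitoring second, and the fallback duty last.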

Although Leonardo da Vinci is credited with conceiving a self-propelled cart, modern efforts to construct autonomous vehicles date back to the 1920s.

In his 1939 New York World's Fair Futurama display, Norman Bel Geddes envisaged a smart metropolis of the future inhabited by self-driving automobiles.

Automobiles, according to Bel Geddes, would be outfitted with "technology that would rectify the mistakes of human drivers" by 1960.

General Motors popularized the concept of smart infrastructure in the 1950s by building an "automated highway" with steering-assist circuits.

In 1960, the company tested a working prototype car, but owing to the high cost of the infrastructure, it quickly shifted its focus from smart cities to smart cars.

A team led by Sadayuki Tsugawa of the Tsukuba Mechanical Engineering Laboratory in Japan created an early prototype of an autonomous car.

Their 1977 vehicle operated under predefined environmental conditions dictated by lateral guiding rails.

The vehicle used cameras to track the rails, and most of the processing equipment was carried on board.

In the 1980s, the pan-European EUREKA research initiative pooled money and expertise to advance the state of the art in cameras and processing for autonomous cars.

Simultaneously, Carnegie Mellon University in Pittsburgh, Pennsylvania, focused its resources on research into autonomous navigation using GPS data.

Since then, automakers including General Motors, Tesla, and Ford Motor Company, as well as technology firms like ARGO AI and Waymo, have been working on autonomous cars or critical components.

The technology is becoming less dependent on very limited circumstances and more adaptable to real-world scenarios.

Manufacturers are currently producing Level 4 autonomous test cars, and tests are being conducted in real-world traffic and weather conditions.

Commercially accessible Level 4 self-driving cars are still a long way off.

There are supporters and opponents of autonomous driving.

Supporters point to a number of benefits that address social problems, environmental concerns, efficiency, and safety.

The provision of mobility services and a degree of autonomy to those who do not already have access, such as those with disabilities (e.g., blindness or motor function impairment) or those who are unable to drive, such as the elderly and children, is one such social benefit.

The capacity to improve fuel economy by managing acceleration and braking has environmental benefits.

Because networked cars may go bumper to bumper and are routed according to traffic optimization algorithms, congestion is expected to be reduced.

Finally, self-driving vehicles have the potential to be safer.

They may be able to handle complicated information more quickly and thoroughly than human drivers, resulting in fewer collisions.

Each of these same areas may also see negative repercussions from self-driving cars.

In terms of society, driverless cars may limit access to mobility and municipal services.

Autonomous mobility may be heavily regulated, costly, or limited to places that are inaccessible to low-income commuters.

Non-networked or manually operated cars might be kept out of intelligent geo-fenced municipal infrastructure.

Furthermore, if no adult or responsible human party is present during transportation, autonomous automobiles may pose a safety concern for some susceptible passengers, such as children.

Greater convenience may have environmental consequences.

Drivers may sleep or work while driving autonomously, which may have the unintended consequence of extending commutes and worsening traffic congestion.

Another security issue is widespread vehicle hacking, which could bring individual automobiles and trucks, or even a whole city, to a halt. 

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Accidents and Risk Assessment; Autonomous and Semiautonomous Systems; Autonomy and Complacency; Intelligent Transportation; Trolley Problem.

Further Reading:

Antsaklis, Panos J., Kevin M. Passino, and Shyh J. Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems Magazine 11, no. 4: 5–13.

Bel Geddes, Norman. 1940. Magic Motorways. New York: Random House.

Bimbraw, Keshav. 2015. “Autonomous Cars: Past, Present, and Future—A Review of the Developments in the Last Century, the Present Scenario, and the Expected Future of Autonomous Vehicle Technology.” In ICINCO: 2015—12th International Conference on Informatics in Control, Automation and Robotics, vol. 1, 191–98. Piscataway, NJ: IEEE.

Kröger, Fabian. 2016. “Automated Driving in Its Social, Historical and Cultural Contexts.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 41–68. Berlin: Springer.

Lin, Patrick. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 69–85. Berlin: Springer.

Weber, Marc. 2014. “Where To? A History of Autonomous Vehicles.” Computer History Museum.

Artificial Intelligence - AI And Robotics On The Battlefield.


Because of the growth of artificial intelligence (AI) and robots and their application to military matters, generals on the contemporary battlefield are seeing a possible tactical and strategic revolution.

Unmanned aerial vehicles (UAVs), also known as drones, and other robotic devices played a key role in the wars in Afghanistan (2001–) and Iraq (2003–2011).

It is possible that future conflicts will be waged without the participation of humans.

Autonomous robots would fight on land, in the air, and beneath the water, without human control or guidance.

While this vision remains in the realm of science fiction, battlefield AI and robotics raise a slew of practical, ethical, and legal issues that military leaders, technologists, jurists, and philosophers must address.

When many people think about AI and robotics on the battlefield, the first image that springs to mind is "killer robots," armed machines that indiscriminately destroy everything in their path.

There are, however, a variety of applications for battlefield AI that do not include killing.

In recent wars, the most notable application of such technology has been peaceful in character.

UAVs are often employed for surveillance and reconnaissance.

Other robots, such as the PackBot from iRobot (the same firm that makes the vacuum-cleaning Roomba), are employed to locate and assess improvised explosive devices (IEDs), making their safe disposal easier.

Robotic devices can navigate treacherous terrain, such as Afghanistan's caves and mountain crags, as well as areas too dangerous for humans, such as under a vehicle suspected of being rigged with an IED.

Unmanned Underwater Vehicles (UUVs) are also used to detect mines underwater.

IEDs and explosives are so common on today's battlefields that these robotic gadgets are priceless.

Another potential life-saving capacity of battlefield robots that has yet to be realized is in the realm of medicine.

Robots can safely collect injured troops on the battlefield in areas that are inaccessible to their human counterparts, without jeopardizing their own lives.

Robots may also transport medical supplies and medications to troops on the battlefield, as well as conduct basic first aid and other emergency medical operations.

It is in the arena of lethal force, whether on land, at sea, or in the air, that AI and robotics have the greatest potential to change the battlefield.

The Aegis Combat System (ACS) is an example of an autonomous system used by several militaries across the globe aboard destroyers and other naval combat vessels.

Through radar and sonar, the system can detect approaching threats, such as missiles from the surface or air, mines, or torpedoes from the water.

The system is equipped with a powerful computer system and can use its own munitions to eliminate identified threats.

Despite the fact that Aegis is activated and supervised manually, it has the potential to operate autonomously in order to counter threats faster than humans could.

In addition to partly automated systems like the ACS and UAVs, completely autonomous military robots capable of making judgments and acting on their own may be developed in the future.

The most significant feature of AI-powered robotics is the development of lethal autonomous weapons (LAWs), sometimes known as "killer robots." Robot autonomy exists on a sliding scale.

At one extreme of the spectrum are robots that are designed to operate autonomously, but only in reaction to a specific stimulus and in one direction.

This degree of autonomy is shown by a mine that detonates autonomously when stepped on.

At the same low end of the scale are remotely operated machines, which are unmanned yet controlled remotely by a person.

Semiautonomous systems occupy the midpoint of the spectrum.

These systems may be able to work without the assistance of a person, but only to a limited extent.

A robot commanded to launch, go to a certain area, and then return at a specific time is an example of such a system.

The machine does not make any "decisions" on its own in this situation.

Semiautonomous devices may also be configured to accomplish part of a task before waiting for further inputs before moving on to the next step.

Full autonomy is the last step.

Fully autonomous robots are designed with a purpose and are capable of achieving it entirely on their own.

This might include the capacity to use deadly force without direct human guidance in warfare circumstances.
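The sliding scale described above can be captured as an ordered enumeration (an illustrative model, not an established taxonomy):

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """The sliding scale of robot autonomy, from least to most."""
    STIMULUS_RESPONSE = 1   # e.g., a mine detonating when stepped on
    REMOTELY_OPERATED = 2   # unmanned but steered by a human
    SEMIAUTONOMOUS    = 3   # executes bounded tasks, then awaits input
    FULLY_AUTONOMOUS  = 4   # pursues a set goal entirely on its own

def needs_human_in_loop(level: Autonomy) -> bool:
    """Only full autonomy removes the human from decision-making."""
    return level < Autonomy.FULLY_AUTONOMOUS
```

Using an ordered type makes the "sliding scale" explicit: levels can be compared, and the human's role disappears only at the top of the scale.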

Robotic devices that are lethally armed, AI-enhanced, and fully autonomous have the potential to radically transform modern warfare.

Armies would expand to include ground forces made up of both humans and robots, or entirely of robots with no humans at all.

Small, armed UAVs would not be constrained by the requirement for human operators, and they might be assembled in massive swarms to overwhelm bigger, but less mobile troops.

Such technological advancements will entail equally dramatic shifts in tactics, strategy, and even the notion of combat.

This technology will become less expensive as it becomes more widely accessible.

This might disturb the present military power balance.

Even smaller states, and perhaps even non-state organizations such as terrorist groups, may be able to develop their own robotic armies.

Fully autonomous LAWs bring up a slew of practical, ethical, and legal issues.

One of the most pressing practical considerations is safety.

A completely autonomous robot with deadly armament that malfunctions might represent a major threat to everyone who comes in contact with it.

Fully autonomous missiles might theoretically wander off course and kill innocent people due to a mechanical failure.

Unpredictable technological faults and malfunctions may occur in any kind of apparatus.

Such issues pose a severe safety concern both to those who deploy lethal robotic devices and to unwitting bystanders.

Even in the absence of malfunctions, limitations in programming may result in potentially disastrous errors.

Programming robots to discriminate between combatants and noncombatants, for example, is a big challenge, and it's simple to envisage misidentification leading to unintentional fatalities.

The greatest concern, though, is that robotic AI may grow too quickly and become independent of human control.

Sentient robots might turn their weapons on humans, as in popular science fiction movies and literature, fulfilling eminent scientist Stephen Hawking's grim forecast that the development of AI could end in humanity's annihilation.

LAWs may also lead to major legal issues.

The rules of war apply to human beings.

Robots cannot be held accountable for potential violations of those laws, whether criminally, civilly, or in any other manner.

As a result, there's a chance that war crimes or other legal violations may go unpunished.

Here are some serious issues to consider: Can the programmer or engineer of a robot be held liable for the machine's actions? Could a person who gave the robot its "command" be held liable for the robot's unpredictability or blunders on a mission that was otherwise self-directed? Such considerations must be thoroughly considered before any completely autonomous deadly equipment is deployed.

Aside from legal issues of duty, a slew of ethical issues must be addressed.

The conduct of war necessitates split-second moral judgments.

Will self-driving robots be able to tell the difference between a kid and a soldier, or between a wounded and helpless soldier and an active combatant? Will a robotic military force always be seen as a cold, brutal, and merciless army of destruction, or can a robot be designed to behave kindly when the situation demands it? Because combat is riddled with moral dilemmas, LAWs involved in war will always be confronted with them.

Experts wonder whether dangerous autonomous robots can ever be trusted to do the right thing.

Moral action requires not just rationality—which robots may be capable of—but also emotions, empathy, and wisdom.

These latter qualities are much more difficult to implement in code.

Because of the legal, ethical, and practical problems raised by the prospect of ever more powerful AI-powered robotic military hardware, many individuals have called for an absolute ban on research in this field.

Others, on the other hand, believe that scientific advancement cannot be halted.

Rather than prohibiting such study, they argue that scientists and society as a whole should seek realistic answers to the difficulties.

Some argue that keeping continual human supervision and control over robotic military units may address many of the ethical and legal issues.

Others argue that direct supervision is unlikely in the long term because human intellect will be unable to match the pace with which computers think and act.

Because the side that gives its robotic troops more autonomy gains an overwhelming advantage over those who strive to preserve human control, there will be an inevitable trend toward more and more autonomy.

They warn that fully autonomous forces will always triumph.

Despite the fact that it is still in its early stages, the introduction of more complex AI and robotic equipment to the battlefield has already resulted in significant change.

AI and robotics on the battlefield have the potential to drastically transform the future of warfare.

It remains to be seen if and how this technology's technical, practical, legal, and ethical limits can be addressed.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons 

Further Reading

Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics, Law, and Technology 2, no. 1: n.p.

Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.” Military Review 98, no. 4 (July–August): 18–29.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton.

Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. London: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.

Artificial Intelligence - Who Is Elon Musk?


Elon Musk (1971–) is a South African-born engineer, entrepreneur, and inventor.

He is a dual citizen of South Africa, Canada, and the United States, and resides in California.

Despite his controversial persona, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century, as well as an important influencer and contributor to the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of ten, he had taught himself how to program computers, and by the age of twelve, he had produced a video game and sold its source code to a computer magazine.

A devoted reader since youth, Musk has included allusions to some of his favorite novels in SpaceX's Falcon Heavy rocket launch and in Tesla's software.

Musk's official schooling was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk stayed at Stanford University for only two days of a PhD program in energy physics before departing to start his first firm, Zip2, with his brother Kimbal Musk.

Musk has started or cofounded many firms, including three different billion-dollar enterprises: SpaceX, Tesla, and PayPal, all driven by his diverse interests and goals.

• Zip2 was a web software business that was eventually purchased by Compaq.

• X.com: an online bank that, after a merger, became the online payments corporation PayPal.

• Tesla, Inc.: an electric car maker and, via its subsidiary SolarCity, a solar panel manufacturer 

• SpaceX: an aerospace manufacturer and space transportation services provider 

• Neuralink: a neurotechnology startup focusing on brain-computer connections 

• The Boring Company: an infrastructure and tunnel construction corporation

• OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI

Musk is a supporter of environmentally friendly energy and consumption.

Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and the Musk electric jet, a jet-powered supersonic electric aircraft.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time before stepping down when the United States withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Though best known for his work with Tesla and SpaceX, as well as his contentious social media pronouncements, Musk has had a significant effect on AI.

In 2015, Musk cofounded the charity OpenAI with the objective of creating and supporting "friendly AI," or AI that is created, deployed, and utilized in a manner that benefits mankind as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June 2018, a team of OpenAI-built bots defeated a team of human players in the video game Dota 2, a feat that could only be accomplished through teamwork and collaboration among the bots.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk resigned from the OpenAI board in February 2018 to prevent any conflicts of interest as Tesla advanced its AI work for autonomous driving.

Musk became the CEO of Tesla in 2008 after cofounding the company in 2003 as an investor.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Tesla Grohmann Automation and SolarCity, two of its subsidiaries, offer automotive manufacturing technology and solar energy services, respectively.

According to Musk, Tesla would reach Level 5 autonomous driving capability, as defined by the National Highway Traffic Safety Administration's (NHTSA) five levels of autonomous driving, in 2019.

Tesla's aggressive development of autonomous driving has influenced conventional carmakers' attitudes toward electric cars and autonomous driving and prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's Director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.
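The sense–fuse–decide loop described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical (the `Detection` type, `detect_obstacles`, `fuse`, and `plan` are invented for illustration and are not Tesla APIs); a real system would run neural networks over camera frames and far more sophisticated fusion and planning logic.

```python
# Toy illustration of a camera/radar perception loop: detect, fuse, decide.
# All names and numbers are hypothetical, not drawn from any real autopilot.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str         # e.g., "vehicle", "pedestrian"
    distance_m: float  # estimated distance from the car in meters


def detect_obstacles(frame: List[str]) -> List[Detection]:
    """Stand-in for a vision model: maps raw camera 'pixels' to detections.

    A real system would run a neural network here; we fake fixed distances.
    """
    return [Detection(label=obj, distance_m=10.0 * (i + 1))
            for i, obj in enumerate(frame)]


def fuse(camera_dets: List[Detection], radar_ranges: List[float]) -> List[Detection]:
    """Combine camera labels with radar range estimates by simple averaging."""
    return [Detection(det.label, (det.distance_m + r) / 2.0)
            for det, r in zip(camera_dets, radar_ranges)]


def plan(detections: List[Detection], brake_threshold_m: float = 12.0) -> str:
    """Trivial policy: brake if any detected object is closer than the threshold."""
    if any(d.distance_m < brake_threshold_m for d in detections):
        return "brake"
    return "cruise"


# One iteration of the sense -> fuse -> decide loop.
frame = ["pedestrian", "vehicle"]  # pretend camera output
radar = [8.0, 25.0]                # pretend radar ranges in meters
dets = fuse(detect_obstacles(frame), radar)
print(plan(dets))  # pedestrian fused to 9.0 m -> "brake"
```

The sketch also hints at why the fleet's recorded data matters: the stand-in detector and the fixed braking threshold are exactly the components that, in a real system, would be retrained and retuned against the large quantity of sensor data collected from cars on the road.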

Tesla is the only autonomous car maker that is opposed to the LIDAR laser sensor (an acronym for light detection and ranging).

Tesla uses cameras, radar, and ultrasonic sensors instead.

Though academics and manufacturers disagree on whether LIDAR is required for fully autonomous driving, the high cost of LIDAR has limited Tesla's rivals' ability to produce and sell vehicles at a pricing range that allows a large number of cars on the road to gather data.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla is building its own silicon for artificial-intelligence calculations, allowing the company to construct its own AI processors rather than depending on third-party sources like Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has consistently missed self-imposed deadlines, and serious accidents have been blamed on flaws in the vehicle's Autopilot mode, including a non-injury accident in 2018, in which the vehicle failed to detect a parked firetruck on a California freeway, and a fatal accident in 2018, in which the vehicle failed to detect a pedestrian outside a crosswalk.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computing equipment: while people now operate devices with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has made major contributions to AI, his pronouncements regarding its risks have been apocalyptic.

Musk has called AI "humanity's greatest existential danger" (McFarland 2014) and "the greatest peril we face as a civilisation" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent control, and a competitive rush to acceptance without appropriate analysis of the repercussions.

While Musk has used colorful terminology such as "summoning the devil" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements might come across as alarmist, many important and well-respected figures, including Microsoft cofounder Bill Gates, Swedish-American scientist Max Tegmark, and the late theoretical physicist Stephen Hawking, share his concern.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates for responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI with the goal of better understanding the technology and its hazards before establishing suitable legal limits.

~ Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.

See also: 

Bostrom, Nick; Superintelligence.

References & Further Reading:

Gates, Bill. (@BillGates). 2018. Twitter, June 26, 2018.

Marr, Bernard. 2018. “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data.” Forbes, January 8, 2018.

McFarland, Matt. 2014. “Elon Musk: With Artificial Intelligence, We Are Summoning the Demon.” Washington Post, October 24, 2014.

Morris, David Z. 2017. “Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization.’” Fortune, July 15, 2017.

Piper, Kelsey. 2018. “Why Elon Musk Fears Artificial Intelligence.” Vox Media, November 2, 2018.

Strauss, Neil. 2017. “Elon Musk: The Architect of Tomorrow.” Rolling Stone, November 15, 2017.

Artificial Intelligence - Ethics Of Autonomous Weapons Systems.


Autonomous weapons systems (AWS) are armaments that are designed to make judgments without the constant input of their programmers.

Navigation, target selection, and when to attack opposing fighters are just a few of the decisions that must be made.

Because of the imminence of this technology, numerous ethical questions and arguments have arisen regarding whether it should be developed and how it should be utilized.

The technology's seeming inevitability prompted Human Rights Watch to launch the Campaign to Stop Killer Robots in 2013, which pushes for universal bans on the weapons' usage.

The movement remains active today.

Other academics and military strategists point to AWS' strategic and resource advantages as reasons for continuing to develop and use them.

A discussion of whether it is desirable or feasible to construct an international agreement on their development and/or usage is central to this argument.

Those who advocate for further technological advancement in these areas focus on the advantages that a military power can gain from using AWS.

These technologies have the potential to reduce collateral damage and battlefield casualties, minimize needless risk, make military operations more efficient, lessen the psychological harm of war to troops, and sustain armies with declining human numbers.

In other words, they concentrate on the advantages of the weapon to the military that will use it.

The essential assumption in these discussions is that the military's aims are morally worthwhile in and of themselves.

AWS may result in fewer civilian deaths because the systems can make judgments faster than humans; however, this is not guaranteed, as the decision-making procedures of AWS may instead produce higher civilian fatalities.

However, if they can avoid civilian fatalities and property damage more effectively than conventional fighting, they are more efficient and hence preferable.

In times of conflict, they might also improve efficiency by minimizing resource waste.

Transportation of people and the resources required to keep them alive is a time-consuming and challenging part of battle.

AWS provides a solution to complex logistical issues.

Drones and other autonomous systems don't need rain gear, food, drink, or medical attention, making them less cumbersome and perhaps more successful in completing their objectives.

AWS are considered as eliminating waste and offering the best possible outcome in a combat situation in these and other ways.

The employment of AWS in military operations is inextricably linked to Just War Theory.

Just War Theory examines whether it is morally acceptable or essential for a military force to engage in war, as well as what activities are ethically justifiable during wartime.

If an autonomous system may be used in a military strike, it can only be done if the attack is justifiable in the first place.

According to this viewpoint, the manner in which one is killed is less important than the reason for one's death.

Those who believe AWS is unethical concentrate on the hazards that such technology entails.

These include scenarios in which enemy combatants obtain the weaponry and use it against the military power that deployed it, as well as scenarios involving increased (and uncontrollable) collateral damage, reduced retaliation capability (against enemy combatant aggressors), and loss of human dignity.

One key concern is whether being killed by a computer, without a person as the final decision-maker, is consistent with human dignity.

There appears to be something demeaning about being killed by an AWS that has had minimal human interaction.

Another key worry is risk to the user of the technology: if the AWS is taken down (whether due to a malfunction or an enemy attack), it may be seized and used against its owner.

Those who oppose the use of AWS are likewise concerned about the concept of just war.

The targeting of civilians by military agents is expressly prohibited under Just War Theory; the only lawful military targets are other military bases or personnel.

However, the introduction of autonomous weapons may imply that a state, particularly one without access to AWS, may be unable to react to military attacks launched by AWS.

In a scenario where one side has access to AWS but the other does not, the side without the weapons will inevitably be without a legal military target, forcing them to either target nonmilitary (civilian) targets or not react at all.

Neither alternative is feasible in terms of ethics or practicality.

Because automated weaponry is widely assumed to be on the horizon, another ethical consideration is how to regulate its use.

Because of the United States' extensive use of remote control drones in the Middle East, this debate has gotten a lot of attention.

Some advocate for a worldwide ban on the technology; although this is often seen as naïve and hence impractical, these advocates frequently point to the UN restriction on blinding lasers, which has been ratified by 108 countries.

Others want to create an international convention that controls the proper use of these technologies, with consequences and punishments for nations that break these standards, rather than a full prohibition.

There is currently no such agreement, and each state must decide how to govern the usage of these technologies on its own.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Battlefield AI and Robotics; Campaign to Stop Killer Robots; Lethal Autonomous Weapons Systems; Robot Ethics.

Further Reading

Arkin, Ronald C. 2010. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4: 332–41.

Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, UK: Cambridge University Press.

Killmister, Suzy. 2008. “Remote Weaponry: The Ethical Implications.” Journal of Applied Philosophy 25, no. 2: 121–33.

Leveringhaus, Alex. 2015. “Just Say ‘No!’ to Lethal Autonomous Robotic Weapons.” Journal of Information, Communication, and Ethics in Society 13, no. 3–4.

Sparrow, Robert. 2016. “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.” Ethics & International Affairs 30, no. 1: 93–116.

What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...