Artificial Intelligence - AI And Robotics On The Battlefield.

 



The growth of artificial intelligence (AI) and robotics, and their application to military affairs, confronts commanders on the contemporary battlefield with a possible tactical and strategic revolution.

Unmanned aerial vehicles (UAVs), also known as drones, and other robotic devices played a key role in the wars in Afghanistan (2001–) and Iraq (2003–2011).

It is possible that future conflicts will be waged without the participation of humans.

Autonomous robots, without human control or guidance, may fight on land, in the air, and beneath the water.

While this vision remains in the realm of science fiction, battlefield AI and robotics raise a slew of practical, ethical, and legal issues that military leaders, technologists, jurists, and philosophers must address.

When many people think about AI and robotics on the battlefield, the first image that springs to mind is "killer robots," armed machines that indiscriminately destroy everything in their path.

There are, however, a variety of applications for battlefield AI that do not include killing.

In recent wars, the most notable applications of such technology have been nonlethal in character.

UAVs are often employed for surveillance and reconnaissance.

Other robots, such as iRobot's PackBot (made by the same firm that produces the vacuum-cleaning Roomba), are employed to locate and assess improvised explosive devices (IEDs), making their safe disposal easier.

Robotic devices can navigate treacherous terrain, such as Afghanistan's caves and mountain crags, as well as areas too dangerous for humans, such as under a vehicle suspected of being rigged with an IED.

Unmanned Underwater Vehicles (UUVs) are also used to detect mines underwater.

IEDs and explosives are so common on today's battlefields that these robotic devices are invaluable.

Another potential life-saving capacity of battlefield robots that has yet to be realized is in the realm of medicine.

Robots can retrieve wounded troops from battlefield areas that are inaccessible to human medics, without putting additional lives at risk.

Robots may also transport medical supplies and medications to troops on the battlefield, as well as conduct basic first aid and other emergency medical operations.

It is in the arena of lethal force, however, that AI and robots have the greatest potential to change the battlefield, whether on land, at sea, or in the air.

The Aegis Combat System (ACS) is an example of an autonomous system used by several militaries across the globe aboard destroyers and other naval combat vessels.

Through radar and sonar, the system can detect approaching threats, such as missiles from the surface or air, mines, or torpedoes from the water.

The system is equipped with a powerful computer system and can use its own munitions to eliminate identified threats.

Although Aegis is normally activated and supervised by human operators, it can operate autonomously in order to counter threats faster than humans could react.

In addition to partly automated systems like the ACS and UAVs, completely autonomous military robots capable of making judgments and acting on their own may be developed in the future.

The most significant feature of AI-powered robotics is the development of lethal autonomous weapons (LAWs), sometimes known as "killer robots." Robot autonomy exists on a sliding scale.

At one extreme of the spectrum are robots designed to operate autonomously, but only in reaction to a specific stimulus and only in a single, predetermined way.

This degree of autonomy is shown by a mine that detonates autonomously when stepped on.

Remotely operated machines, which are unmanned yet controlled remotely by a person, also sit at the low end of the spectrum.

Semiautonomous systems occupy the midpoint of the spectrum.

These systems may be able to work without the assistance of a person, but only to a limited extent.

A robot commanded to launch, go to a certain area, and then return at a specific time is an example of such a system.

The machine does not make any "decisions" on its own in this situation.

Semiautonomous devices may also be configured to accomplish part of a task before waiting for further inputs before moving on to the next step.

Full autonomy is the last step.

Fully autonomous robots are designed with a purpose and are capable of achieving it entirely on their own.

This might include the capacity to use deadly force without direct human guidance in warfare circumstances.

Robotic devices that are lethally armed, AI-enhanced, and fully autonomous have the potential to radically transform contemporary warfare.

Armies could be expanded with ground forces made up of both humans and robots, or composed entirely of robots with no humans at all.

Small, armed UAVs would not be constrained by the requirement for human operators, and they might be assembled in massive swarms to overwhelm larger but less mobile forces.

Such technological advancements will entail equally dramatic shifts in tactics, strategy, and even the notion of combat.

This technology will become less expensive as it becomes more widely accessible.

This might disturb the present military power balance.

Even minor governments, and perhaps even non-state organizations such as terrorist groups, may be able to field robotic armies of their own.

Fully autonomous LAWs bring up a slew of practical, ethical, and legal issues.

One of the most pressing practical considerations is safety.

A completely autonomous robot with deadly armament that malfunctions might represent a major threat to everyone who comes in contact with it.

Fully autonomous missiles might theoretically wander off course and kill innocent people due to a mechanical failure.

Unpredictable technological faults and malfunctions may occur in any kind of apparatus.

Such issues pose a severe safety concern to those who deploy lethal robotic devices as well as to unwitting bystanders.

Even in the absence of technical malfunctions, limitations in programming may result in potentially disastrous errors.

Programming robots to discriminate between combatants and noncombatants, for example, is a big challenge, and it's simple to envisage misidentification leading to unintentional fatalities.

The greatest concern, though, is that robotic AI may grow too quickly and become independent of human control.

Sentient robots might turn their weapons on humans, as in popular science fiction movies and literature, fulfilling eminent scientist Stephen Hawking's grim forecast that the development of AI could end in humanity's annihilation.

LAWs may also raise major legal issues.

The rules of war apply to human beings.

Robots cannot be held accountable for potential violations of the laws of war, whether criminally, civilly, or in any other manner.

As a result, there's a chance that war crimes or other legal violations may go unpunished.

Here are some serious issues to consider: Can the programmer or engineer of a robot be held liable for the machine's actions? Could a person who gave the robot its "command" be held liable for the robot's unpredictability or blunders on a mission that was otherwise self-directed? Such questions must be thoroughly addressed before any fully autonomous lethal equipment is deployed.

Aside from legal issues of duty, a slew of ethical issues must be addressed.

The conduct of war necessitates split-second moral judgments.

Will autonomous robots be able to tell the difference between a child and a soldier, or between a wounded, helpless soldier and an active combatant? Will a robotic military force always be a cold, brutal, and merciless army of destruction, or can a robot be designed to behave mercifully when the situation demands it? Because combat is riddled with moral dilemmas, LAWs involved in war will inevitably be confronted with them.

Experts doubt whether lethal autonomous robots can ever be trusted to do the right thing.

Moral action requires not just rationality—which robots may be capable of—but also emotions, empathy, and wisdom.

These latter capacities are much more difficult to implement in code.

Because of the legal, ethical, and practical problems raised by the prospect of ever more powerful AI-powered robotic military hardware, many individuals have called for an absolute ban on research in this field.

Others, on the other hand, believe that scientific advancement cannot be halted.

Rather than prohibiting such study, they argue that scientists and society as a whole should seek realistic answers to the difficulties.

Some argue that keeping continual human supervision and control over robotic military units may address many of the ethical and legal issues.

Others argue that direct supervision is unlikely in the long term because human intellect will be unable to keep pace with the speed at which computers think and act.

Because the side that gives its robotic troops more autonomy gains an overwhelming advantage over those who strive to preserve human control, there will be an inevitable trend toward more and more autonomy.

They warn that fully autonomous forces will always triumph.

Despite the fact that it is still in its early stages, the introduction of more complex AI and robotic equipment to the battlefield has already resulted in significant change.

AI and robotics on the battlefield have the potential to drastically transform the future of warfare.

It remains to be seen if and how this technology's technical, practical, legal, and ethical limits can be addressed.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons 
Systems.


Further Reading

Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics, 
Law, and Technology 2, no. 1: n.p. https://www.degruyter.com/view/journals/selt/2/1/article-selt.2008.2.1.1036.xml.xml.

Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.” 
Military Review 98, no. 4 (July–August): 18–29.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New 
York: W. W. Norton.

Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st 
Century. London: Penguin.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.



Artificial Intelligence - AI Systems That Are Autonomous Or Semiautonomous.

 



Autonomous and semiautonomous systems are distinguished by the degree to which their decision-making depends on external commands.

They have something in common with conditionally autonomous and automated systems.

Semiautonomous systems depend on a human user somewhere "in the loop" for decision-making, behavior management, or contextual interventions, while autonomous systems may make decisions within a defined region of operation without human input.

Under certain conditions, conditionally autonomous systems may operate independently.

Automated systems differ from semiautonomous and autonomous systems as automation differs from autonomy.

The actions of automated systems are predefined sequences tied directly to specific inputs, while semiautonomous and autonomous systems select their actions through decision-making processes.

When a system's actions and possibilities for action are established in advance as reactions to certain inputs, it is termed automated.

A garage door that automatically stops closing when a sensor detects an impediment in its path is an example of an automated system.

Sensors and user interaction may both be used to collect data.

An automated dishwasher or clothes washer, for example, is a user-initiated automatic system in which the human user sets the sequences of events and behaviors via a user interface, and the machine subsequently executes the commands according to established mechanical sequences.

Autonomous systems, on the other hand, are ones in which the capacity to evaluate conditions and choose actions is intrinsic to the system.

The autonomous system, like an automated system, depends on sensors, cameras, or human input to give data, but its responses are marked by more complicated decision-making based on the contextual evaluation of many simultaneous inputs such as user intent, environment, and capabilities.

When it comes to real-world instances of systems, the terms automated, semiautonomous, and autonomous are used interchangeably depending on the nature of the tasks at hand and the intricacies of decision-making.

These categories aren't usually defined clearly or exactly.

Finally, the degree to which these categories apply is determined by the size and scope of the activity in question.

While the above-mentioned basic differences between automated, semiautonomous, and autonomous systems are widely accepted, there is some dispute as to whether these system types exist in real systems.

The degrees of autonomy established by SAE (previously the Society of Automotive Engineers) for autonomous automobiles are one example of such ambiguity.

Depending on road or weather conditions, as well as situational indices like the existence of road barriers, lane markings, geo-fencing, adjacent cars, or speed, a single system may be Level 2 partly autonomous, Level 3 conditionally autonomous, or Level 4 autonomous.

The degree of autonomy may also be determined by how a driving task is characterized.

In this sense, a system's categorization is determined as much by its technical structure as by the conditions of its operation or the characteristics of the activity in question.
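This condition-dependence can be illustrated with a toy sketch. The conditions chosen and their mapping to levels below are hypothetical simplifications for illustration, not SAE's official classification criteria:

```python
# Hypothetical sketch: the same vehicle system may qualify for different
# SAE levels depending on operating conditions. The conditions and the
# mapping below are illustrative simplifications, not official SAE criteria.

def sae_level(weather_ok: bool, lane_markings: bool, geofenced_area: bool) -> int:
    if weather_ok and lane_markings and geofenced_area:
        return 4  # full autonomy within a defined operational domain
    if weather_ok and lane_markings:
        return 3  # conditional autonomy; driver must be ready to intervene
    return 2      # partial autonomy; driver supervises continuously

# One system, three classifications, depending on the situation:
print(sae_level(True, True, True))    # 4
print(sae_level(True, True, False))   # 3
print(sae_level(False, True, False))  # 2
```

The point of the sketch is that nothing about the system's hardware changes between calls; only the situational indices do.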



EXAMPLES OF AUTONOMOUS AI SYSTEMS



Self-Driving Vehicles


Automobile systems illustrate the contrasts between automated, semiautonomous, conditionally autonomous, and fully autonomous systems.

Conventional cruise control is an example of automated technology.

The user specifies a vehicle speed goal, and the vehicle maintains that speed while adjusting acceleration and deceleration as needed by the terrain.

A semiautonomous vehicle, by contrast, may be equipped with an adaptive cruise control feature (one that regulates the vehicle's speed in relation to a leading vehicle and to the user's input), as well as lane-keeping assistance, automatic braking, and collision-mitigation technology.

Semiautonomous cars are now available on the market.

Such systems can interpret many possible inputs (surrounding cars, lane markings, human input, obstacles, speed limits, etc.) and then regulate longitudinal and lateral control to semiautonomously direct the vehicle's trajectory.

The human user is still involved in decision-making, monitoring, and interventions in this system.

Conditional autonomy refers to a system that allows a human user to "leave the loop" of control and decision-making under certain situations.

The vehicle analyzes emergent inputs and controls its behavior to accomplish the objective without human supervision or intervention after a goal is set (e.g., to continue on a route).

Within the activity (defined by the goal and the available means), behaviors are governed and controlled without the involvement of the human user.

It's crucial to remember that every categorization is conditional on the aim and activity being operationalized.

Finally, an autonomous system has fewer constraints than conditional autonomy and is capable of controlling all tasks in a given activity.

An autonomous system, like conditional autonomy, functions inside the activity structure without the involvement of a human user.



Autonomous Robotics


For a number of reasons, autonomous systems may be found in the area of robotics.

There are a variety of reasons why autonomous robots should be used to replace or augment humans, including safety (for example, spaceflight or planetary surface exploration), undesirable circumstances (monotonous tasks such as domestic chores and strenuous labor such as heavy lifting), and situations where human action is limited or impossible (search and rescue in confined conditions).

Robotics applications, like automobile applications, may be deemed autonomous within the confines of a carefully defined domain or activity area, such as a factory assembly line or a residence.

As with autonomous cars, the degree of autonomy depends on the specific domain and, in many cases, excludes maintenance and repair.

Unlike automated systems, however, an autonomous robot inside such a defined activity structure would behave to achieve a set objective by sensing its surroundings, analyzing contextual inputs, and regulating behavior appropriately without the need for human interaction.

Autonomous robots are now used in a wide range of applications, including domestic uses such as autonomous lawn care robots and interplanetary exploration applications such as the Mars rovers MER-A and MER-B.




Semiautonomous Weapons


Autonomous and semiautonomous weapon systems are now being developed as part of contemporary military capabilities.

The definition of, and difference between, autonomous and semiautonomous changes significantly depending on the operationalization of the terminology, the context, and the sphere of activity, much as it does in the preceding automobile and robotics instances.

Consider a landmine as an example of a weapon that is automated but not autonomous.

It reacts with fatal force when a sensor is activated, and there is no decision-making capabilities or human interaction.

A semiautonomous system, on the other hand, processes inputs and acts on them for a collection of tasks that form weaponry activity in collaboration with a human user.

The weapons system and the human operator must work together to complete a single task.

To put it another way, the human user is "in the loop." Identifying a target, aiming, and firing are examples of these tasks, as are navigating toward a target, positioning, and reloading.

These duties are shared between the system and the human user in a semiautonomous weapon system.

An autonomous system, on the other hand, would be accountable for all of these duties without the need for human monitoring, decision-making, or intervention after the objective was determined and the parameters provided.

There are presently no completely autonomous weapons systems that meet these requirements.

These meanings, as previously stated, are technologically, socially, legally, and linguistically dependent.

The distinction between semiautonomous and autonomous systems has ethical, moral, and political implications, particularly in the case of weapons systems.

This is particularly relevant for assessing accountability, since causal agency and decision-making may be distributed across developers and consumers.

As in the case of machine learning algorithms, the sources of agency and decision-making may also be ambiguous.



USER-INTERFACE CONSIDERATIONS.

 

The various obstacles in building optimum user interfaces for semiautonomous and autonomous systems are mirrored in the ambiguity of their definitions.

For example, in the case of automobiles, ensuring that the user and the system (as designed by the system's designers) have a consistent model of the capabilities being automated (as well as the intended distribution and degree of control) is crucial for the safe transfer of control responsibility.

Autonomous systems pose similar user-interface issues in the sense that, once an activity domain is specified, control and responsibility are binary (either the system or the human user is responsible).

The problem is reduced to defining the activity and relinquishing control in this case.

Because the description of an activity domain has no required relationship to the composition, structure, and interaction of constituent activities, semiautonomous systems create more difficult issues for the design of user interfaces.

Particular tasks (such as a car maintaining lateral position in a lane) may be determined by an engineer's use of specific technical equipment (and the restrictions that come with it) and therefore bear no relation to the user's mental representation of that task.

An obstacle detection task, in which a semiautonomous system moves about an environment by avoiding impediments, is an example.

The machine's obstacle detection technologies (camera, radar, optical sensors, touch sensors, thermal sensors, mapping, and so on) define what is and isn't an impediment, and such restrictions may be opaque to the user.

As a consequence of the ambiguity, the system must communicate with a human user when assistance is required, and the system (and its designers) must recognize and anticipate any conflict between system and user models.
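A minimal sketch of this idea, with hypothetical sensor names and a deliberately simple decision rule, might look like the following. The point is only that the system's notion of "obstacle" is bounded by what its sensors can register, and that disagreement among sensors is a natural trigger for deferring to the human user:

```python
# Hypothetical sketch: a semiautonomous system treats something as an
# obstacle only if its sensors can register it, and defers to the human
# user when sensor readings conflict. Sensor names and the voting rule
# are illustrative assumptions, not drawn from any real system.

def assess(readings: dict) -> str:
    """readings maps sensor name -> True if that sensor detects an obstacle."""
    detections = sum(readings.values())
    if detections == len(readings):
        return "avoid"            # all sensors agree: treat as an obstacle
    if detections == 0:
        return "proceed"          # nothing registered by any sensor
    return "request_human_input"  # conflicting models: defer to the user

print(assess({"camera": True, "radar": True, "lidar": True}))   # avoid
print(assess({"camera": True, "radar": False, "lidar": False})) # request_human_input
```

A real system would weight sensors by reliability rather than counting votes, but even this toy version shows why the machine's restrictions can be opaque: an object invisible to all three sensors simply does not exist for the system.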

Other considerations for designing semiautonomous and autonomous systems (specifically in relation to the ethical and legal dimensions complicated by the distribution of agency among developers and users) include identification and authorization methods and protocols, in addition to the issues raised above.

The problem of identifying and authorizing users to activate autonomous technology is crucial, since once activated, such systems no longer require continuous monitoring, intermittent decision-making, or interaction.


~ Jai Krishna Ponnappan




See also: 

Autonomy and Complacency; Driverless Cars and Trucks; Lethal Autonomous Weapons Systems.


Further Reading

Antsaklis, Panos J., Kevin M. Passino, and Shyh Jong Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems 11, no. 4 (June): 5–13.

Bekey, George A. 2005. Autonomous Robots: From Biological Inspiration to Implementation and Control. Cambridge, MA: MIT Press.

Norman, Donald A., Andrew Ortony, and Daniel M. Russell. 2003. “Affect and Machine Design: Lessons for the Development of Autonomous Machines.” IBM Systems Journal 42, no. 1: 38–44.

Roff, Heather. 2015. “Autonomous or ‘Semi’ Autonomous Weapons? A Distinction without a Difference?” Huffington Post, January 16, 2015. https://www.huffpost.com/entry/autonomous-or-semi-autono_b_6487268.

SAE International. 2014. “Taxonomy and Definitions for Terms Related to On-Road 
Motor Vehicle Automated Driving Systems.” J3016. SAE International Standard. 
https://www.sae.org/standards/content/j3016_201401/.




Artificial Intelligence - What Is Bayesian Inference?

 





Bayesian inference is a method of calculating the likelihood of a proposition's validity based on a previous estimate of its likelihood plus any new and relevant facts.

In the twentieth century, Bayes' Theorem, from which Bayesian statistics are derived, was a prominent mathematical technique employed in expert systems.

In the twenty-first century, the Bayesian theorem has been applied to problems such as robot locomotion, weather forecasting, jurimetrics (the application of quantitative methods to law), phylogenetics (the evolutionary relationships among organisms), and pattern recognition.

It is also used in email spam filters and can be used to solve the famous Monty Hall problem.

The mathematical theorem was derived by Reverend Thomas Bayes (1702–1761) of England and published posthumously in the Philosophical Transactions of the Royal Society of London in 1763 as "An Essay Towards Solving a Problem in the Doctrine of Chances." Bayes' Theorem of Inverse Probabilities is another name for it.

A classic article titled "Reasoning Foundations of Medical Diagnosis," written by George Washington University electrical engineer Robert Ledley and Rochester School of Medicine radiologist Lee Lusted and published by Science in 1959, was the first notable discussion of Bayes' Theorem as applied to the field of medical artificial intelligence.

Medical information in the mid-twentieth century was frequently given as symptoms connected with an illness, rather than diseases associated with a symptom, as Lusted subsequently recalled.

They came up with the notion of expressing medical knowledge as the likelihood of a disease given the patient's symptoms using Bayesian reasoning.

Bayesian statistics are conditional, allowing one to determine the likelihood that a specific disease is present based on a specific symptom, but only with prior knowledge of how frequently the disease and symptom are correlated, as well as how frequently the symptom is present in the absence of the disease.

It is similar to what Alan Turing called the factor in favor of the hypothesis provided by the evidence.

The symptom-disease complex, which involves several symptoms in a patient, may also be resolved using Bayes' Theorem.

In computer-aided diagnosis, Bayesian statistics combines the likelihood of each illness occurring in a population with the probability of each symptom manifesting given each disease, in order to determine the probability of all possible diseases given a patient's symptom-disease complex.
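The calculation just described can be sketched in a few lines of code. The disease names and every probability below are hypothetical illustrations, not clinical data:

```python
# Sketch of Bayes' rule for diagnosis: P(disease | symptom) is proportional
# to P(symptom | disease) * P(disease). All numbers are hypothetical.

priors = {"disease_a": 0.01, "disease_b": 0.05}       # P(disease) in the population
likelihoods = {"disease_a": 0.90, "disease_b": 0.20}  # P(symptom | disease)
p_symptom_no_disease = 0.10                           # P(symptom | no disease)

def posterior():
    """Posterior probability of each hypothesis given that the symptom is observed."""
    p_none = 1.0 - sum(priors.values())
    scores = {d: priors[d] * likelihoods[d] for d in priors}  # unnormalized
    scores["no_disease"] = p_none * p_symptom_no_disease
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

print(posterior())
```

With these illustrative numbers, the rare but strongly indicated disease_a still ends up less probable than disease_b, and "no disease" dominates both, which is exactly the rare-disease behavior that critics of Bayesian diagnosis (discussed below) pointed to.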

All induction, according to Bayes' Theorem, is statistical.

In 1960, the theory was used to generate the posterior probability of certain illnesses for the first time.

In that year, University of Utah cardiologist Homer Warner, Jr., with access to a Burroughs 205 digital computer, used Bayesian statistics to diagnose well-defined congenital heart problems at Salt Lake City's Latter-Day Saints Hospital.

The theory was used by Warner and his team to calculate the chances that an undiscovered patient having identifiable symptoms, signs, or laboratory data would fall into previously recognized illness categories.

As additional information became available, the computer software could be employed again and again, creating or rating diagnoses via serial observation.

The Burroughs computer outperformed any professional cardiologist in applying Bayesian conditional-probability algorithms to a symptom-disease matrix of thirty-five cardiac diseases, according to Warner.

John Overall, Clyde Williams, and Lawrence Fitzgerald for thyroid problems; Charles Nugent for Cushing's illness; Gwilym Lodwick for primary bone tumors; Martin Lipkin for hematological diseases; and Tim de Dombal for acute abdominal discomfort were among the early supporters of Bayesian estimation.

In the previous half-century, the Bayesian model has been expanded and changed several times to account for or compensate for sequential diagnosis and conditional independence, as well as to weight other elements.

Poor prediction of rare diseases, insufficient discrimination between diseases with similar symptom complexes, inability to quantify qualitative evidence, troubling conditional dependence between evidence and hypotheses, and the enormous amount of manual labor required to maintain the requisite joint probability distribution tables are all criticisms leveled at Bayesian computer-aided diagnosis.

Outside of the populations for which they were intended, Bayesian diagnostic helpers have been chastised for their shortcomings.

When rule-based decision-support algorithms became more prominent in the mid-1970s, the use of Bayesian statistics in differential diagnosis reached a low point.

In the 1980s, Bayesian approaches resurfaced and are now extensively employed in the area of machine learning.

From the concept of Bayesian inference, artificial intelligence researchers have developed robust techniques for supervised learning, hidden Markov models, and mixed approaches for unsupervised learning.

Bayesian inference has been controversially utilized in artificial intelligence algorithms that aim to calculate the conditional chance of a crime being committed, to screen welfare recipients for drug use, and to identify prospective mass shooters and terrorists in the real world.

The method has come under fire once again, especially where screening involves infrequent or severe incidents, in which case the AI system might act arbitrarily and flag too many people as being at risk of engaging in the unwanted behavior.

In the United Kingdom, Bayesian inference has also been introduced into the courtroom.

The defense team in Regina v. Adams (1996) offered jurors the Bayesian approach to aid them in forming an unbiased mechanism for combining the introduced evidence, which included a DNA profile and varying match-probability calculations, and in constructing a personal threshold for convicting the accused "beyond a reasonable doubt."

Before Ledley, Lusted, and Warner revived Bayes' theorem in the 1950s, it had already been "rediscovered" multiple times.

Pierre-Simon Laplace, the Marquis de Condorcet, and George Boole were among the historical figures who saw merit in the Bayesian approach to probability.

The Monty Hall dilemma, named after the host of the famous game show Let's Make a Deal, involves a contestant deciding whether to stay with the door they have chosen or switch to another unopened door after Monty Hall (who knows where the prize is) opens a door to reveal a goat.

Contrary to popular intuition, conditional probability shows that switching doors doubles the contestant's odds of winning, from one-third to two-thirds.
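The counterintuitive result is easy to check empirically with a short simulation (a sketch written for this illustration, not tied to any source cited here):

```python
# Monte Carlo check of the Monty Hall problem: staying wins about 1/3 of
# the time, switching about 2/3.
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the game `trials` times; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the contestant's pick
        # nor the prize door, revealing a goat.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

The simulation works because switching wins exactly when the contestant's first pick was wrong, which happens two times in three.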


~ Jai Krishna Ponnappan




See also: 

Computational Neuroscience; Computer-Assisted Diagnosis.


Further Reading

Ashley, Kevin D., and Stefanie Brüninghaus. 2006. “Computer Models for Legal Prediction.” Jurimetrics 46, no. 3 (Spring): 309–52.

Barnett, G. Octo. 1968. “Computers in Patient Care.” New England Journal of Medicine
279 (December): 1321–27.

Bayes, Thomas. 1763. “An Essay Towards Solving a Problem in the Doctrine of Chances.” 
Philosophical Transactions 53 (December): 370–418.

Donnelly, Peter. 2005. “Appealing Statistics.” Significance 2, no. 1 (February): 46–48.

Fox, John, D. Barber, and K. D. Bardhan. 1980. “Alternatives to Bayes: A Quantitative Comparison with Rule-Based Diagnosis.” Methods of Information in Medicine 19, no. 4 (October): 210–15.

Ledley, Robert S., and Lee B. Lusted. 1959. “Reasoning Foundations of Medical Diagnosis.” Science 130, no. 3366 (July): 9–21.

Lusted, Lee B. 1991. “A Clearing ‘Haze’: A View from My Window.” Medical Decision 
Making 11, no. 2 (April–June): 76–87.

Warner, Homer R., Jr., A. F. Toronto, and L. G. Veasey. 1964. “Experience with Bayes’ 
Theorem for Computer Diagnosis of Congenital Heart Disease.” Annals of the 
New York Academy of Sciences 115: 558–67.


Artificial Intelligence - Autonomy And Complacency In AI Systems.

 




The concepts of machine autonomy and human autonomy and complacency are intertwined.

Artificial intelligences are undoubtedly getting more independent as they are trained to learn from their own experiences and data intake.

As machines gain skills that exceed those of humans, humans tend to become increasingly dependent on them to make judgments and to react correctly to unexpected events.

This dependence on AI systems' decision-making processes might lead to a loss of human agency and to complacency.

Such complacency may mean that major faults in an AI system or its decision-making processes go unnoticed and unaddressed.

Autonomous machines are ones that can function in unsupervised settings, adapt to new situations and experiences, learn from previous errors, and decide the best potential outcomes in each case without the need for fresh programming input.

To put it another way, these robots learn from their experiences and are capable of going beyond their original programming in certain respects.

The concept is that programmers won't be able to foresee every circumstance that an AI-enabled machine could experience based on its activities, thus it must be able to adapt.

This view is not universally accepted, since some argue that these systems' adaptability is inherent in their programming, their programs having been designed to be adaptable.

The disagreement over whether any agent, including humans, can express free will and act autonomously exacerbates these debates.

With the advancement of technology, the autonomy of AI programs is not the only element of autonomy that is being explored.

Worries have also been raised concerning AI's influence on human autonomy, along with concerns about human complacency toward machines.

As AI systems grow increasingly attuned to anticipating people's wishes and preferences, people's own choices risk becoming irrelevant, since they no longer have to make decisions for themselves.

The interaction of human employees and automated systems has received considerable attention.

Studies show that humans are prone to overlook flaws in automated processes, particularly as those processes become routinized, because routine fosters a positive expectation of success rather than watchfulness for failure.

This expectation of success leads the operators or supervisors of automated processes to place their confidence in inaccurate readouts or faulty machine judgments, which can result in mistakes and accidents.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems.



Further Reading


André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Frank Douglas, William Goldstein, Joel Huber, Leaf Van Boven, Bernd Weber, and Haiyang Yang. 2018. “Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data.” Customer Needs and Solutions 5, no. 1–2: 28–37.

Bahner, J. Elin, Anke-Dorothea Hüper, and Dietrich Manzey. 2008. “Misuse of Automated Decision Aids: Complacency, Automation Bias, and the Impact of Training Experience.” International Journal of Human-Computer Studies 66, no. 9: 688–99.

Lawless, W. F., Ranjeev Mittu, Donald Sofge, and Stephen Russell, eds. 2017. Autonomy and Intelligence: A Threat or Savior? Cham, Switzerland: Springer.

Parasuraman, Raja, and Dietrich H. Manzey. 2010. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Human Factors 52, no. 3: 381–410.





Artificial Intelligence - Ethics Of Autonomous Weapons Systems.

 



Autonomous weapons systems (AWS) are armaments that are designed to make judgments without the constant input of their programmers.

Navigation, target selection, and the timing of attacks on enemy combatants are just a few of the judgments such systems must make.

Because of the imminence of this technology, numerous ethical questions and arguments have arisen regarding whether it should be developed and how it should be utilized.

The technology's seeming inevitability prompted Human Rights Watch and allied organizations to launch the Campaign to Stop Killer Robots in 2013, which pushes for an international ban on the development and use of such weapons.

The campaign remains active today.

Other academics and military strategists point to AWS' strategic and resource advantages as reasons for continuing to develop and use them.

Central to this debate is whether it is desirable or feasible to construct an international agreement governing their development and use.

Those who advocate for further technological advancement in these areas focus on the advantages that a military power can gain from using AWS.

These technologies promise reduced collateral damage and battlefield casualties, less needless risk to personnel, more efficient military operations, reduced psychological harm to troops, and a way to field effective forces as enlistment numbers decline.

In other words, they concentrate on the advantages of the weapon to the military that will use it.

The essential assumption in these discussions is that the military's aims are morally worthwhile in and of themselves.

AWS may result in fewer civilian deaths, since the systems can make judgments faster than humans; however, technology does not always work as intended, and the decision-making procedures of AWS could instead produce higher civilian fatalities.

However, if they can avoid civilian fatalities and property damage more effectively than conventional fighting, they are more efficient and hence preferable.

In times of conflict, they might also improve efficiency by minimizing resource waste.

Transportation of people and the resources required to keep them alive is a time-consuming and challenging part of battle.

AWS provides a solution to complex logistical issues.

Drones and other autonomous systems don't need rain gear, food, water, or medical attention, making them less cumbersome to support and perhaps more likely to complete their objectives.

In these and other ways, AWS are seen as eliminating waste and offering the best possible outcome in a combat situation.

The employment of AWS in military operations is inextricably linked to Just War Theory.

Just War Theory examines whether it is morally acceptable or essential for a military force to engage in war, as well as what activities are ethically justifiable during wartime.

If an autonomous system may be used in a military strike, it can only be done if the attack is justifiable in the first place.

According to this viewpoint, the manner in which one is killed matters less than the reason for one's death.

Those who believe AWS are unethical focus on the hazards that such technology entails.

These hazards include scenarios in which enemy combatants capture the weapons and turn them against the power that deployed them, as well as increased (and uncontrollable) collateral damage, reduced capacity for retaliation against aggressors, and loss of human dignity.

One key concern is whether being killed by a machine, without a human as the final decision-maker, is consistent with human dignity.

There appears to be something demeaning about being killed by an AWS with minimal human involvement.

Another key worry is the risk to the user of the technology: if an AWS is brought down (whether by malfunction or enemy attack), it may be captured and turned against its owner.

Those who oppose the use of AWS also appeal to the concept of just war.

The targeting of civilians by military agents is expressly prohibited under Just War Theory; the only lawful military targets are other military bases or personnel.

However, the introduction of autonomous weapons may imply that a state, particularly one without access to AWS, may be unable to react to military attacks launched by AWS.

In a scenario where one side has access to AWS but the other does not, the side without the weapons will inevitably be without a legal military target, forcing them to either target nonmilitary (civilian) targets or not react at all.

Neither alternative is acceptable, ethically or practically.

Because automated weaponry is widely assumed to be on the horizon, another ethical consideration is how to regulate its use.

Because of the United States' extensive use of remotely piloted drones in the Middle East, this debate has received considerable attention.

Some advocate a worldwide ban on the technology; although this is often seen as naive and hence impractical, these advocates frequently point to the United Nations protocol banning blinding laser weapons, which 108 countries have ratified.

Others want to create an international convention that controls the proper use of these technologies, with consequences and punishments for nations that break these standards, rather than a full prohibition.

There is currently no such agreement, and each state must decide how to govern the usage of these technologies on its own.


~ Jai Krishna Ponnappan




See also: 

Battlefield AI and Robotics; Campaign to Stop Killer Robots; Lethal Autonomous Weapons Systems; Robot Ethics.



Further Reading

Arkin, Ronald C. 2010. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4: 332–41.

Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, UK: Cambridge University Press.

Killmister, Suzy. 2008. “Remote Weaponry: The Ethical Implications.” Journal of Applied Philosophy 25, no. 2: 121–33.

Leveringhaus, Alex. 2015. “Just Say ‘No!’ to Lethal Autonomous Robotic Weapons.” Journal of Information, Communication, and Ethics in Society 13, no. 3–4: 299–313.

Sparrow, Robert. 2016. “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.” Ethics & International Affairs 30, no. 1: 93–116.





Artificial Intelligence - What Is Automatic Film Editing?

  



Automatic film editing is a method of assembling full motion movies in which an algorithm, taught to obey fundamental cinematography standards, cuts and sequences footage.

Automated editing is part of a larger endeavor, known as intelligent cinematography, to include artificial intelligence into filmmaking.

In the mid-1960s, the legendary director Alfred Hitchcock predicted that an IBM computer would one day be capable of converting a written script into a finished film.

Hitchcock himself originated many of the conventions of modern filmmaking.

One well-known rule of thumb is his principle that, whenever feasible, the size of a person or object in the frame should be proportional to its importance in the story at that precise moment.

“Exit left, enter right,” which helps the audience follow the lateral movement of actors across the screen, and the 180-degree and 30-degree rules for preserving spatial relationships between subjects and the camera, are two more film editing precepts that arose from filmmakers' long experience.

Over time, these principles evolved into heuristics that regulate shot selection, editing, and rhythm and tempo.
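A minimal sketch of how two such heuristics could be encoded as machine-checkable rules appears below. The `Shot` class, its field names, and the 0-to-1 importance scale are invented for illustration; real systems represent shots and rules in far richer detail.

```python
# Hypothetical sketch: two classic editing heuristics as simple checks.
from dataclasses import dataclass

@dataclass
class Shot:
    camera_angle: float   # degrees, relative to the subject
    subject_size: float   # fraction of frame height the subject occupies (0..1)

def violates_30_degree_rule(a: Shot, b: Shot) -> bool:
    """Cutting between two shots of the same subject should change the
    camera angle by at least 30 degrees, or the cut reads as a jump cut."""
    return abs(a.camera_angle - b.camera_angle) < 30.0

def hitchcock_size_score(shot: Shot, story_importance: float) -> float:
    """Hitchcock's rule: subject size in frame should track narrative
    importance (both on a 0..1 scale here). Lower score is better."""
    return abs(shot.subject_size - story_importance)

wide = Shot(camera_angle=0.0, subject_size=0.2)
closeup = Shot(camera_angle=45.0, subject_size=0.8)
print(violates_30_degree_rule(wide, closeup))   # False: a 45-degree change is fine
print(hitchcock_size_score(closeup, 0.9))       # ~0.1: a close-up suits a key moment
```

An editing system can score candidate cuts against many such rules at once and pick the sequence with the fewest violations.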

Joseph Mascelli's The Five C's of Cinematography (1965), for example, became a substantial knowledge base for making judgments about camera angles, continuity, cutting, close-ups, and composition.

The first artificial intelligence film editing systems were built on these human-curated guidelines and on human-annotated stock footage and clips.

IDIC, created by Warren Sack and Marc Davis at the MIT Media Lab in the early 1990s, is an example of a system from that era.

IDIC is modeled on the General Problem Solver of Herbert Simon, J. C. Shaw, and Allen Newell, an early artificial intelligence program that was meant to solve any general problem using the same basic method.

IDIC was used to create fictitious Star Trek television trailers based on a human-specified narrative plan focusing on a certain plot element.

Several film editing systems depend on idioms, or standard techniques for editing and framing recorded action in certain contexts.

The idioms themselves will differ depending on the film's style, the setting, and the action to be shown.

In this manner, experienced editors' expertise may be accessed using case-based reasoning, with prior editing recipes being used to tackle comparable present and future challenges.

Editing for combat sequences, like editing for ordinary character conversations, follows standard idiomatic patterns.

This is the approach taken by Li-wei He, Michael F. Cohen, and David H. Salesin in their Virtual Cinematographer, which applies expert idiom knowledge to the editing of fully computer-generated video for interactive virtual environments.

The same group created the Declarative Camera Control Language (DCCL), which formalizes camera placement and movement so that edits of CGI animated films match cinematographic conventions.
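The idiom-based approach can be sketched as a lookup from a dramatic situation to a preferred sequence of camera setups. The idiom names and setups below are invented for illustration and are much simpler than those in DCCL or the Virtual Cinematographer.

```python
# Hypothetical sketch of an idiom-based shot-planning policy: each idiom
# maps a situation to a preferred cycle of camera setups.
IDIOMS = {
    "two_person_dialogue": ["establishing_wide", "over_shoulder_A",
                            "over_shoulder_B", "over_shoulder_A"],
    "combat": ["wide_master", "closeup_attacker", "reaction_defender"],
}

def plan_shots(situation: str, beats: int) -> list[str]:
    """Pick the idiom for the situation and cycle its setups across beats."""
    setups = IDIOMS[situation]
    return [setups[i % len(setups)] for i in range(beats)]

# A six-beat dialogue scene cycles through the dialogue idiom's setups.
print(plan_shots("two_person_dialogue", 6))
```

Case-based reasoning extends this idea: instead of a fixed table, the system retrieves the stored idiom whose past context best matches the current scene.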

Researchers have lately begun experimenting with deep learning algorithms and training data extracted from existing collections of well-known films with good cinematographic quality to develop recommended best cuts of new films.

Many of the latest apps may be used with mobile, drone, or portable devices.

Because automated video editing makes them easy to produce, short, engaging films assembled from footage shot by amateurs on smartphones are projected to become a preferred medium of interaction on future social media.

Still photography currently fills that role.

In machinima films generated with 3D virtual game engines and virtual actors, automatic film editing is also used as an editing technique.




~ Jai Krishna Ponnappan



See also: 

Workplace Automation.


Further Reading

Galvane, Quentin, Rémi Ronfard, and Marc Christie. 2015. “Comparing Film-Editing.” In Eurographics Workshop on Intelligent Cinematography and Editing, edited by William H. Bares, Marc Christie, and Rémi Ronfard, 5–12. Aire-la-Ville, Switzerland: Eurographics Association.

He, Li-wei, Michael F. Cohen, and David H. Salesin. 1996. “The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing.” In 

Proceedings of SIGGRAPH ’96, 217–24. New York: Association for Computing Machinery.

Ronfard, Rémi. 2012. “A Review of Film Editing Techniques for Digital Games.” In Workshop on Intelligent Cinematography and Editing. https://hal.inria.fr/hal-00694444/.

Artificial Intelligence - What Is Automated Multiphasic Health Testing?

 




Automated Multiphasic Health Testing (AMHT) is an early medical computer system for semiautomatically screening large numbers of people, sick or well, in a short period of time.

Lester Breslow, a public health official, pioneered the AMHT idea in 1948, integrating typical automated medical questionnaires with mass screening procedures for groups of individuals being examined for specific illnesses like diabetes, TB, or heart disease.

Multiphasic health testing involves integrating a number of tests into a single package to screen a group of individuals for different diseases, illnesses, or injuries.

AMHT might be related to regular physical exams or health programs.

People are run through batteries of examinations much as automobiles are run through state inspections.

In other words, AMHT approaches preventative medical care in a factory-like manner.

In the 1950s, Automated Multiphasic Health Testing (AMHT) became popular, allowing health care networks to swiftly screen new candidates.

In 1951, the Kaiser Foundation Health Plan began offering a Multiphasic Health Checkup to its members.

Morris F. Collen, an electrical engineer and physician, was the program's director from 1961 until 1979.

The "Kaiser Checkup," which used an IBM 1440 computer to crunch data from patient interviews, lab testing, and clinical findings, looked for undetected illnesses and made treatment suggestions.

Patients hand-sorted 200 prepunched cards with printed questions requiring "yes" or "no" replies at the questionnaire station (one of twenty such stations).

The computer processed the cards and applied a probability ratio test devised by the renowned statistician Jerzy Neyman.
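The kind of screening decision this enabled can be sketched as a log-likelihood-ratio test over independent yes/no answers. The symptom probabilities, threshold, and function names below are invented for illustration and do not reflect the actual Kaiser data or Neyman's specific formulation.

```python
# Hypothetical sketch of likelihood-ratio screening on yes/no questionnaire
# answers: accumulate evidence for "diseased" versus "healthy".
import math

# P(answer "yes" | diseased) and P(answer "yes" | healthy) for each question
P_YES_DISEASED = [0.8, 0.7, 0.6]
P_YES_HEALTHY = [0.1, 0.2, 0.3]

def log_likelihood_ratio(answers: list[bool]) -> float:
    """Sum log-likelihood ratios over independent yes/no answers."""
    llr = 0.0
    for ans, pd, ph in zip(answers, P_YES_DISEASED, P_YES_HEALTHY):
        if ans:  # "yes" answer
            llr += math.log(pd / ph)
        else:    # "no" answer
            llr += math.log((1 - pd) / (1 - ph))
    return llr

def flag_for_follow_up(answers: list[bool], threshold: float = 1.0) -> bool:
    """Flag the patient when the accumulated evidence exceeds the threshold."""
    return log_likelihood_ratio(answers) > threshold

print(flag_for_follow_up([True, True, True]))    # strong evidence -> True
print(flag_for_follow_up([False, False, False])) # no evidence -> False
```

The appeal for mass screening is that each answer simply adds to a running score, so a batch machine can process thousands of card decks with one cheap decision rule.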

Electrocardiographic, spirographic, and ballistocardiographic medical data were also captured by Kaiser's computer system.

A Kaiser Checkup took around two and a half hours to complete.

BUPA in the United Kingdom and a nationwide program created by the Swedish government are two examples of similar AMHT initiatives that have been introduced in other countries.

The popularity of computerized health testing has fallen in recent decades.

There are issues concerning privacy as well as financial considerations.

Working with AMHT, doctors and computer scientists learned that the body typically masks symptoms.

A sick person may pass through diagnostic devices successfully one day and then die the next.

Electronic medical recordkeeping, on the other hand, has succeeded where AMHT has failed.

Without physical handling or duplication, records may be sent, modified, and returned.

Multiple health providers may utilize patient charts at the same time.

Uniform data input ensures readability and consistency in structure.

Electronic medical records software can now generate summary reports automatically from the information gathered in individual patient records.

These "big data" reports make it possible to monitor changes in medical practice as well as evaluate results over time.

Summary reports also enable cross-patient analysis, a detailed algorithmic examination of prognoses by patient groups, and the identification of risk factors prior to the need for therapy.

The application of deep learning algorithms to medical data has sparked a surge of interest in so-called cognitive computing for health care.

IBM's Watson system and Google DeepMind Health, two current leaders, promise changes in eye illness and cancer detection and treatment.

Also unveiled by IBM is the Medical Sieve system, which analyzes both radiological images and textual documents.



~ Jai Krishna Ponnappan



See also: 

Clinical Decision Support Systems; Computer-Assisted Diagnosis; INTERNIST-I and QMR.


Further Reading


Ayers, W. R., H. M. Hochberg, and C. A. Caceres. 1969. “Automated Multiphasic Health Testing.” Public Health Reports 84, no. 7 (July): 582–84.

Bleich, Howard L. 1994. “The Kaiser Permanente Health Plan, Dr. Morris F. Collen, and Automated Multiphasic Testing.” MD Computing 11, no. 3 (May–June): 136–39.

Collen, Morris F. 1965. “Multiphasic Screening as a Diagnostic Method in Preventive Medicine.” Methods of Information in Medicine 4, no. 2 (June): 71–74.

Collen, Morris F. 1988. “History of the Kaiser Permanente Medical Care Program.” Interviewed by Sally Smith Hughes. Berkeley: Regional Oral History Office, Bancroft Library, University of California.

Mesko, Bertalan. 2017. “The Role of Artificial Intelligence in Precision Medicine.” Expert Review of Precision Medicine and Drug Development 2, no. 5 (September): 239–41.

Roberts, N., L. Gitman, L. J. Warshaw, R. A. Bruce, J. Stamler, and C. A. Caceres. 1969. “Conference on Automated Multiphasic Health Screening: Panel Discussion, Morning Session.” Bulletin of the New York Academy of Medicine 45, no. 12 (December): 1326–37.


