Artificial Intelligence - Animal Consciousness, Social Cognition, Soul, And AI.



Researchers have gained a growing understanding of animal and other nonhuman intelligences in recent decades.

Cases have been made on behalf of ravens, bower birds, gorillas, elephants, cats, crows, dogs, dolphins, chimps, grey parrots, jackdaws, magpies, beluga whales, octopuses, and a variety of other creatures for their consciousness or sentience, their sophisticated kinds of cognition, and even their personal rights.

The Cambridge Declaration on Consciousness and the separate Nonhuman Rights Project mirror the contemporary struggle against racism, classism, sexism, and ethnocentrism by adding one more prejudice to the list: "speciesism," a term coined in 1970 by psychologist Richard Ryder and popularized by philosopher Peter Singer.

Animal awareness, in fact, may open the way for the investigation and appreciation of other sorts of postulated intelligences, including artificial intelligences (traditionally regarded, like animals, as "mindless machines") and alien intelligences.

The knowability of the subjective experience and objective properties of other forms of consciousness is one of the most significant questions that specialists in many fields are grappling with today.

"What is it like to be a bat?" reportedly enquired philosopher Thomas Nagel, particularly because bats can use echolocation but humans cannot.

Most selfishly, a greater knowledge of animal consciousness might, by comparison, lead to a better comprehension of the human mind.

Looking to animals may also give fresh insights into the principles behind the emergence of consciousness in humans, which may aid scientists in equipping robots with comparable characteristics, appreciating their moral standing, or sympathizing with their actions.

Animals have been utilized as a tool to achieve human goals rather than as ends in and of themselves throughout history.

Cows produce milk that is consumed by humans.

Sheep produce wool, which is used to manufacture clothes.

Horses were once used for transportation and agricultural power, but today they are used mainly for recreation and gambling.

The "discovery" of animal awareness may imply that humans are no longer at the center of their own mental world.

The twentieth-"Cognitive century's Revolution," which ostensibly eliminated the soul as a scientific explanation for mental life, opened the door to studying and conducting experiments in animal perception, memory, cognition, and reasoning, as well as exploring the possibilities for incorporating sophisticated information processing convolutions and integrative capabilities into machines.

The possibility of a fundamental cognitive "software" that is shared by humans, animals, and artificial general intelligences is often addressed in emerging interdisciplinary sciences like neuroscience, evolutionary psychology, and computer science.

In his book Man and Dolphin (1961), independent researcher John Lilly was one of the first to propose that dolphins are not only intelligent but also exhibit traits and communication abilities superior to those of humans in many respects.

Many of his results have subsequently been validated by other researchers, such as Lori Marino and Diana Reiss, and a broad consensus has formed that dolphins' self-awareness falls somewhere between that of humans and chimpanzees.

Dolphins have been spotted fishing together with human fishers, while Pelorus Jack, the most renowned dolphin in history, reliably and freely accompanied ships for twenty-four years over the treacherous rocks and tidal surges of Cook Strait in New Zealand.

Some animals seem to pass the well-known self-recognition mirror test.

Dolphins and killer whales, chimps and bonobos, magpies, and elephants are among them.

The test is often performed by painting a tiny mark on an animal in a location that it cannot see without using a mirror.

The animal is said to recognize itself if it touches the mark on its own body after seeing it reflected in the mirror.

Certain detractors claim that the mirror-mark test is unfair to some animal species because it favors vision over other sense organs.

The study of animal consciousness, according to SETI experts, may help humans contend with the existential implications of self-aware alien intelligences.

Similarly, work with animals has sparked curiosity about the possible awareness of artificial intelligences.

For example, in his book The Scientist (1978), John Lilly discusses a hypothetical Solid State Intelligence (SSI) that would eventually evolve from the labor of human computer scientists and engineers.

This SSI would be built out of computer components, develop its own integrations and advancements, and eventually self-replicate to confront and destroy humans.

Some human beings would be protected by the SSI in domed "reservations" that it would maintain and govern.

The SSI would eventually develop the capacity to move the planet and traverse the cosmos in search of other intelligences similar to itself.

Artificial intelligence's self-consciousness has been criticized on many grounds.

Machines, according to John Searle, lack intentionality, or the capacity to discover meaning in the computations they do.

Inanimate things are seldom considered to have free will, and so are not regarded as persons.

Furthermore, they may be considered as lacking something, such as a soul, the capacity to distinguish between good and evil, or emotion and creativity.

Animal consciousness findings, on the other hand, have introduced a new dimension to debates over animal and robot rights, since they support the claim that these creatures can tell whether they are having pleasant or unpleasant experiences.

They also pave the way for powerful kinds of social cognition like attachment, communication, and empathy to be recognized.

A long list of chimps, gorillas, orangutans, and bonobos, including the well-known Washoe, Koko, Chantek, and Kanzi, have mastered an incredible number of gestures in American Sign Language or artificial lexigrams (keyboard symbols of objects or ideas), raising the possibility of true interspecies social exchange.

The Great Ape Project was formed in 1993 by an international group of primatologists with the stated goal of granting these creatures fundamental human rights to life, liberty, and protection from torture.

They argued that these creatures should be granted nonhuman personhood and brought into the great mammalian "community of equals."

Many well-known marine mammal biologists have likewise become outspoken opponents of the indiscriminate slaughter of cetaceans by fishermen and of their use in captivity shows.

In 2007, American lawyer Steven Wise created the Nonhuman Rights Project at the Center for the Expansion of Fundamental Rights.

The Nonhuman Rights Project aims to give animals that are today considered property of legal humans legal personhood.

Bodily liberty (against incarceration) and bodily integrity (against laboratory experimentation) would be among these core personhood rights.

According to the group, no common law norm prevents animal personhood, and the common law eventually allowed human slaves to become legal persons, without precedent, via the writ of habeas corpus.

Individuals may use habeas corpus writs to assert their right to liberty and to challenge unjust confinement.

The Nonhuman Rights Project has been fighting for animal rights in the courts since 2013.

The first case was brought in New York State to protect the rights of four confined chimps, and it contained an affidavit from renowned primatologist Jane Goodall as proof.

The North American Primate Sanctuary Alliance requested that the chimpanzees be freed and relocated to their reserve.

The applications and appeals filed by the organization were refused.

Steven Wise has taken heart from the fact that, in one judgment, the court acknowledged that the question of personhood is determined by public policy and social norms rather than biology.

The Cambridge Declaration on Consciousness was signed by a group of neuroscientists during the Francis Crick Memorial Conference in 2012.

David Edelman of the Neurosciences Institute in La Jolla, California, Christof Koch of the California Institute of Technology, and Philip Low of Stanford University were the three scientists most directly engaged in the document's creation.

All signatories agreed that scientific methodologies have shown evidence that mammal brain circuits seem to be linked to consciousness, affective moods, and emotional actions.

Birds seem to have developed awareness in a similar way to mammals, according to the researchers.

The researchers also cited REM sleep patterns in zebra finches and comparable responses to hallucinogenic drugs as evidence of conscious behavior in animals.

Despite lacking a neocortex for higher-order brain activities, invertebrate cephalopods seem to exhibit self-conscious consciousness, according to the declaration's signers.

Such views have not gone unchallenged.

Humans should continue to carry legal duty for animal care, according to attorney Richard Cupp.

He also contends that animal personhood may block the rights and autonomy of people with cognitive disabilities, leaving them exposed to decreased legal personality.

Cupp also believes that animals are outside of the human moral community, and so outside of the social contract that established personhood rights in the first place.

Daniel Dennett, a philosopher and cognitive scientist, is a vocal opponent of animal sentience, saying that consciousness is a "fiction" that can only be generated via the use of human language.

On this view, animals cannot tell such stories, and thus cannot be conscious.

If consciousness is a story we tell ourselves, the sciences will never be able to grasp what it means to be a conscious animal, because science is built on objective facts and universal descriptions rather than tales.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Nonhuman Rights and Personhood; Sloman, Aaron.

Further Reading

Dawkins, Marian S. 2012. Why Animals Matter: Animal Consciousness, Animal Welfare, and Human Well-being. Oxford, UK: Oxford University Press.

Kaplan, Gisela. 2016. “Commentary on ‘So Little Brain, So Much Mind: Intelligence and Behavior in Nonhuman Animals’ by Felice Cimatti and Giorgio Vallortigara.” Italian Journal of Cognitive Science 4, no. 8: 237–52.

Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no. 4: 1231–87.

Wise, Steven M. 2010. “Legal Personhood and the Nonhuman Rights Project.” Animal Law Review 17, no. 1: 1–11.

Wise, Steven M. 2013. “Nonhuman Rights to Personhood.” Pace Environmental Law Review 30, no. 3: 1278–90.



Artificial Intelligence - What Is Algorithmic Error and Bias?

 




Bias in algorithmic systems has emerged as one of the most pressing issues surrounding artificial intelligence ethics.

Algorithmic bias refers to a computer system's recurrent and systemic flaws that discriminate against certain groups or people.

It is crucial to remember that bias is not necessarily a bad thing: it may be built into a system deliberately in order to correct an unjust system or reality.

Bias causes problems when it leads to an unjust or discriminating conclusion that affects people's lives and chances.

Individuals and communities that are already vulnerable in society are often most at risk from algorithmic bias and error.

As a result, algorithmic bias may exacerbate social inequality by restricting people's access to services and goods.

Algorithms are increasingly being utilized to guide government decision-making, notably in the criminal justice sector for sentencing and bail, as well as in migration management using biometric technology like face and gait recognition.

When a government's algorithms are shown to be biased, individuals may lose faith in the AI system as well as its usage by institutions, whether they be government agencies or private businesses.

There have been several prominent incidents of algorithmic bias during the past few years.

A high-profile example is Facebook's targeted advertising, which is based on algorithms that identify which demographic groups a given advertisement should be viewed by.

Indeed, according to one research, job advertising for janitors and related occupations on Facebook are often aimed towards lower-income groups and minorities, while ads for nurses or secretaries are focused at women (Ali et al. 2019).

This amounts to profiling people by protected categories, such as race, gender, and income bracket, in order to maximize the effectiveness and profitability of advertising.

Another well-known example is Amazon's algorithm for sorting and evaluating resumes in order to increase efficiency and ostensibly impartiality in the recruiting process.

Amazon's algorithm was trained using data from the company's previous recruiting practices.

However, once the algorithm was implemented, it became evident that it was prejudiced against women, with résumés that contained the terms "women" or "gender" or indicated that the candidate had attended a women's institution receiving worse rankings.

Little could be done to address the algorithm's prejudices since it was trained on Amazon's prior recruiting practices.

While the algorithm itself was plainly biased, this example demonstrates how such biases can mirror social prejudices, in this instance Amazon's own entrenched bias against hiring women.

Indeed, bias in an algorithmic system may develop in a variety of ways.

Algorithmic bias can arise when a group of people and their lived experiences are not taken into consideration while the algorithm is being designed.

This can happen at any point during the algorithm development process, from collecting data that isn't representative of all demographic groups to labeling data in ways that reproduce discriminatory profiling to the rollout of an algorithm that ignores the differential impact it may have on a specific group.
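As a rough, purely illustrative sketch of how such a disparity might be detected after deployment, the following Python snippet computes selection rates for two demographic groups from a log of automated decisions. The data, group names, and the four-fifths threshold used to flag a gap are assumptions made for the example, not a legal or regulatory standard.

    # Minimal sketch (hypothetical data): auditing a model's decisions for
    # group-level disparity in selection rates.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def impact_ratios(rates, reference_group):
        """Ratio of each group's selection rate to the reference group's rate."""
        ref = rates[reference_group]
        return {g: r / ref for g, r in rates.items()}

    if __name__ == "__main__":
        # Hypothetical decision log: (demographic group, was the ad/job/loan offered?)
        log = ([("group_a", True)] * 80 + [("group_a", False)] * 20
               + [("group_b", True)] * 45 + [("group_b", False)] * 55)
        rates = selection_rates(log)
        for group, ratio in impact_ratios(rates, "group_a").items():
            flag = "possible adverse impact" if ratio < 0.8 else "ok"  # illustrative threshold
            print(f"{group}: rate {rates[group]:.2f}, ratio {ratio:.2f} -> {flag}")

An audit of this kind is, of course, only as good as the demographic labels and the decision log it is run against.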

In recent years, there has been a proliferation of policy documents addressing the ethical responsibilities of state and non-state bodies using algorithmic processing—to ensure against unfair bias and other negative effects of algorithmic processing—partly in response to significant publicity of algorithmic biases (Jobin et al. 2019).

The European Union's "Ethics Guidelines for Trustworthy AI," issued in 2019, is one of the most important policy documents in this area.

The EU statement sets out seven principles for fair and ethical AI and the regulation of algorithmic processing.

Furthermore, with the General Data Protection Regulation (GDPR), which came into force in 2018, the European Union has been at the forefront of legislative responses to algorithmic processing.

Under the GDPR, which applies in the first instance to the processing of all personal data inside the EU, a corporation may be fined up to 4 percent of its annual worldwide turnover if it uses an algorithm found to discriminate on the basis of race, gender, or another protected category.

The difficulty of determining where a bias occurred and what dataset caused prejudice is a persisting challenge for algorithmic processing regulation.

This is sometimes referred to as the algorithmic black box problem: an algorithm's deep data-processing layers are so intricate and numerous that a human cannot comprehend them.

Based on the GDPR's right to an explanation when one is subject to an automated decision, one response has been to identify where the bias occurred via counterfactual explanations: different data are fed into the algorithm to observe where the unequal results emerge (Wachter et al. 2018).
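The intuition behind counterfactual explanations can be sketched as follows: holding the decision system fixed, an input is minimally perturbed until the outcome flips, and that minimal change is reported as the explanation. The toy scoring rule, feature names, and search step below are hypothetical stand-ins, not the method of any real lender or the formal procedure proposed by Wachter et al.

    # Minimal sketch of a counterfactual explanation for a fixed, opaque decision rule.
    # The scoring rule and applicant data below are hypothetical.

    def loan_model(applicant):
        """Toy stand-in for an automated decision system; True means approved."""
        score = 0.5 * applicant["income"] / 10_000 + 2.0 * applicant["years_employed"]
        return score >= 8.0

    def counterfactual(applicant, feature, step, max_steps=100):
        """Nudge one feature upward until the decision flips; return the minimal change found."""
        probe = dict(applicant)
        for _ in range(max_steps):
            if loan_model(probe):
                return probe[feature] - applicant[feature]
            probe[feature] += step
        return None  # no flip found within the search budget

    if __name__ == "__main__":
        applicant = {"income": 60_000, "years_employed": 1}
        print("decision:", loan_model(applicant))  # False (denied)
        delta = counterfactual(applicant, "income", step=1_000)
        if delta is not None:
            print(f"An income higher by {delta:,.0f} would have flipped the decision.")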

In addition to legal and legislative instruments for tackling algorithmic bias, technical solutions include building synthetic datasets that seek to repair naturally occurring biases in data or to provide an unbiased and representative dataset.
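A simple relative of the synthetic-data idea is to rebalance the training data so that underrepresented groups are not swamped; the sketch below oversamples the smaller group until group counts match. The records and group labels are hypothetical, and oversampling by duplication is only a crude approximation of generating genuinely synthetic, representative data.

    # Minimal sketch: oversampling an underrepresented group so a model is not
    # trained on a heavily skewed group distribution. The records are hypothetical.
    import random

    def oversample(records, group_key="group"):
        """Resample minority groups (with replacement) until all groups are equally represented."""
        by_group = {}
        for rec in records:
            by_group.setdefault(rec[group_key], []).append(rec)
        target = max(len(recs) for recs in by_group.values())
        balanced = []
        for recs in by_group.values():
            balanced.extend(recs)
            balanced.extend(random.choices(recs, k=target - len(recs)))
        random.shuffle(balanced)
        return balanced

    if __name__ == "__main__":
        data = [{"group": "a", "label": 1}] * 90 + [{"group": "b", "label": 0}] * 10
        balanced = oversample(data)
        print({g: sum(1 for r in balanced if r["group"] == g) for g in ("a", "b")})  # {'a': 90, 'b': 90}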

While such channels for redress are vital, one of the most comprehensive solutions to the issue is to have far more varied human teams developing, producing, using, and monitoring the effect of algorithms.

A mix of life experiences within diverse teams makes it more likely that prejudices will be discovered and corrected sooner.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Biometric Technology; Explainable AI; Gender and AI.

Further Reading

Ali, Muhammed, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. 2019. “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes.” In Proceedings of the ACM on Human-Computer Interaction, vol. 3, CSCW, Article 199 (November). New York: Association for Computing Machinery.

European Union. 2018. “General Data Protection Regulation (GDPR).” https://gdpr-info.eu/.

European Union. 2019. “Ethics Guidelines for Trustworthy AI.” https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (September): 389–99.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law & Technology 31, no. 2 (Spring): 841–87.

Zuboff, Shoshana. 2018. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.




Artificial Intelligence - What Is Artificial Intelligence, Alchemy, And Associationism?

 



Alchemy and Artificial Intelligence, a RAND Corporation paper prepared by Massachusetts Institute of Technology (MIT) philosopher Hubert Dreyfus and released as a mimeographed memo in 1965, critiqued artificial intelligence researchers' aims and essential assumptions.

The paper, which was written when Dreyfus was consulting for RAND, elicited a significant negative response from the AI community.

Dreyfus had been engaged by RAND, a nonprofit American global policy think tank, to analyze the possibilities for artificial intelligence research from a philosophical standpoint.

Researchers such as Herbert Simon and Marvin Minsky, who predicted in the late 1950s that machines capable of accomplishing whatever humans could do would exist within decades, made optimistic forecasts for the future of AI.

The objective of most AI researchers was not merely to develop programs that processed data in such a way that the output appeared to be the result of intelligent activity.

Rather, they wanted to create software that could mimic human cognitive processes.

Experts in artificial intelligence felt that human cognitive processes might be used as a model for their algorithms, and that AI could also provide insight into human psychology.

The work of phenomenologists Maurice Merleau-Ponty, Martin Heidegger, and Jean-Paul Sartre impacted Dreyfus' thought.

Dreyfus contended in his report that the theory and aims of AI were founded on associationism, a philosophy of human psychology that includes a core concept: that thinking happens in a succession of basic, predictable stages.

Artificial intelligence researchers believed they could use computers to duplicate human cognitive processes because of their belief in associationism (which Dreyfus claimed was erroneous).

Dreyfus compared the characteristics of human thinking (as he saw them) to computer information processing and the inner workings of various AI systems.

The core of his thesis was that human and machine information processing processes are fundamentally different.

Computers can only be programmed to handle "unambiguous, totally organized information," rendering them incapable of managing "ill-structured material of everyday life," and hence of intelligence (Dreyfus 1965, 66).

Dreyfus contended, on the other hand, that contrary to AI research's primary premise, many characteristics of human intelligence cannot be captured by rules or associationist psychology.

Dreyfus outlined three areas where humans vary from computers in terms of information processing: fringe consciousness, insight, and ambiguity tolerance.

Chess players, for example, use fringe consciousness to decide which area of the board or which pieces to concentrate on while making a move.

The human player differs from a chess-playing software in that the human player does not consciously or subconsciously examine the information or count out probable plays the way the computer does.

Only after the player has utilized their fringe awareness to choose which pieces to concentrate on can they consciously calculate the implications of prospective movements in a manner akin to computer processing.

Insight allows the (human) problem solver to grasp the fundamental structure of a complicated problem and thereby build a set of steps for tackling it.

This understanding is lacking in problem-solving software.

Rather, the problem-solving method must be specified in advance as part of the program.

The clearest example of ambiguity tolerance arises in natural language comprehension, where a word or phrase may be ambiguous yet is accurately understood by the listener.

When reading ambiguous syntax or semantics, there are an endless amount of signals to examine, yet the human processor manages to choose important information from this limitless domain in order to accurately understand the meaning.

On the other hand, a computer cannot be trained to search through all conceivable facts in order to decipher confusing syntax or semantics.

Either the number of facts is too large, or the rules for interpretation are too complex.

AI experts chastised Dreyfus for oversimplifying the difficulties and misrepresenting computers' capabilities.

RAND commissioned MIT computer scientist Seymour Papert to respond to the study, which he published in 1968 as The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies.

Papert also set up a chess match between Dreyfus and Mac Hack, which Dreyfus lost, much to the amusement of the artificial intelligence community.

Nonetheless, some of his criticisms in this report and subsequent books appear to have foreshadowed intractable issues later acknowledged by AI researchers, such as artificial general intelligence (AGI), artificial simulation of analog neurons, and the limitations of symbolic artificial intelligence as a model of human reasoning.

Dreyfus' work was declared useless by artificial intelligence specialists, who stated that he misinterpreted their research.

Their ire had been aroused by Dreyfus's critiques of AI, which often used aggressive terminology.

The New Yorker magazine's "Talk of the Town" section included extracts from the story.

Dreyfus subsequently refined and enlarged his case in What Computers Can't Do: The Limits of Artificial Intelligence, published in 1972.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: Mac Hack; Simon, Herbert A.; Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Dreyfus, Hubert L. 1965. Alchemy and Artificial Intelligence. P-3244. Santa Monica, CA: RAND Corporation.

Dreyfus, Hubert L. 1972. What Computers Can’t Do: The Limits of Artificial Intelligence. New York: Harper and Row.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.

Papert, Seymour. 1968. The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies. Project MAC, Memo No. 154. Cambridge, MA: Massachusetts Institute of Technology.


Artificial Intelligence - What Is The AARON Computer Program?


 



Harold Cohen built AARON, a computer program that he used to produce paintings.

The initial version was created "about 1972," according to Cohen.

Because AARON is not open source, its development came to a halt when Cohen died in 2016.

In 2014, AARON was still creating fresh images, and it was still operating in 2016.

AARON is not an abbreviation.

The name was chosen because it begins with the first letter of the alphabet, and Cohen anticipated that he would eventually build further programs, which he never did.

AARON went through various versions over the course of its four decades of development, each with its own set of capabilities.

Earlier versions could only generate black-and-white line drawings, while later versions could also paint in color.

Some AARON versions were set up to make abstract paintings, while others were set up to create scenes with objects and people.

AARON's main goal was to generate not just computer pictures, but also physical, large-scale images or paintings.

The lines made by AARON, a program written in C at the time, were traced directly on the wall in Cohen's show at the San Francisco Museum of Modern Art.

In later creative phases of AARON, the software was paired with a machine that had a robotic arm and could apply paint to canvas.

For example, the version of AARON on display at Boston's Computer Museum in 1995, which was written in LISP at the time and ran on a Silicon Graphics computer, generated a file containing a set of instructions.

The file was then transmitted to a PC that was running a C++ program.

This computer was equipped with a robotic arm.

The C++ code processed the commands and controlled the arm's movement, as well as the dye mixing and application to the canvas.
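The two-stage pipeline described above, with one program writing out a file of drawing instructions and a second program parsing that file and driving the hardware, can be illustrated with a short sketch. The command names, file format, and stubbed arm interface below are invented for illustration; AARON's actual instruction format and control code were never published.

    # Illustrative sketch of the two-stage pipeline described above: a file of drawing
    # instructions is parsed and replayed on a stubbed "arm." The command format is
    # invented for illustration and is not AARON's real instruction format.

    INSTRUCTIONS = """\
    pen down
    move 10.0 20.0
    move 35.5 22.0
    pen up
    mix 0.8 0.1 0.1
    """

    class StubArm:
        """Stand-in for the robotic arm and dye-mixing hardware."""
        def pen(self, down):
            print("pen", "down" if down else "up")
        def move_to(self, x, y):
            print(f"move to ({x}, {y})")
        def mix_dye(self, r, g, b):
            print(f"mix dye r={r} g={g} b={b}")

    def replay(instruction_text, arm):
        """Parse instruction lines and translate each one into a hardware action."""
        for line in instruction_text.splitlines():
            parts = line.split()
            if not parts:
                continue
            cmd, *args = parts
            if cmd == "pen":
                arm.pen(args[0] == "down")
            elif cmd == "move":
                arm.move_to(float(args[0]), float(args[1]))
            elif cmd == "mix":
                arm.mix_dye(*map(float, args))

    if __name__ == "__main__":
        replay(INSTRUCTIONS, StubArm())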

Cohen's drawing and painting devices were also significant advancements.

Industrial inkjet printers were employed in subsequent generations as well.

Because of the colors these new printers could create, Cohen considered this configuration of AARON to be the most advanced; he thought that the inkjet was the most important innovation since the industrial revolution when it came to colors.

While Cohen primarily concentrated on tactile pictures, Ray Kurzweil built a screensaver version of AARON around the year 2000.

By 2016, Cohen had developed his own version of AARON, which produced black-and-white pictures that the user could color using a big touch screen.

"Fingerpainting," he called it.

AARON, according to Cohen, is neither a "totally independent artist" nor completely creative.

He did feel, however, that AARON demonstrates one requirement of autonomy: emergence, which Cohen defines as "paintings that are really shocking and unique." Cohen never got too far into AARON's philosophical ramifications.

It's easy to infer that AARON's work as a colorist was his greatest accomplishment, based on the amount of time he devotes to it in practically all of the interviews conducted with him.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Computational Creativity; Generative Design.


Further Reading

Cohen, Harold. 1995. “The Further Exploits of AARON, Painter.” Stanford Humanities Review 4, no. 2 (July): 141–58.

Cohen, Harold. 2004. “A Sorcerer’s Apprentice: Art in an Unknown Future.” Invited talk at Tate Modern, London. http://www.aaronshome.com/aaron/publications/tate-final.doc.

Cohen, Paul. 2016. “Harold Cohen and AARON.” AI Magazine 37, no. 4 (Winter): 63–66.

McCorduck, Pamela. 1990. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen. New York: W. H. Freeman.



Artificial Intelligence - What Is An AI Winter?

 



The term AI Winter was coined during the 1984 annual conference of the American Association of Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence, or AAAI).

Marvin Minsky and Roger Schank, two leading researchers, used the phrase to describe the bust in artificial intelligence research and development that they saw as imminent.

Daniel Crevier, a Canadian AI researcher, has detailed how fear of an impending AI Winter caused a domino effect that started with skepticism in the AI research community, spread to the media, and eventually resulted in negative funding responses.

As a consequence, real AI research and development came to a halt.

The initial skepticism may now be ascribed mostly to the excessively optimistic promises made at the time, with AI's real outcomes being significantly less than expected.

Other factors, such as a lack of computing power during the early days of AI research, contributed to the belief that an AI Winter was approaching.

This was especially true in the case of neural network research, which required a large amount of processing power.

Economic pressures, moreover, shifted attention toward more concrete investments, especially during overlapping periods of economic crisis.

AI Winters have occurred many times during the history of AI, with two of the most notable eras covering 1974 to 1980 and 1987 to 1993.

Although the dates of AI Winters are debatable and dependent on the source, times with overlapping patterns are associated with research abandonment and defunding.

The development of AI systems and technologies has nevertheless progressed through cycles of hype and collapse similar to those of other breakthrough technologies, such as nanotechnology.

Not only has there been an unprecedented amount of money for basic research, but there has also been exceptional progress in the development of machine learning during the present boom time.

The reasons for the investment surge vary depending on the many stakeholders involved in artificial intelligence research and development.

For example, industry has staked a lot of money on the idea that discoveries in AI would result in dividends by changing whole market sectors.

Governmental agencies, such as the military, invest in AI research to improve the efficiency of both defensive and offensive technology and to protect troops from imminent damage.

Because AI Winters are triggered by a perceived lack of trust in what AI can provide, the present buzz around AI and its promises has sparked fears of another AI Winter.

On the other hand, others argue that current technology developments in applied AI research have secured future progress in this field.

This argument contrasts sharply with the so-called "pipeline issue," which claims that a lack of basic AI research will result in a limited number of applied outcomes.

One of the major elements of prior AI Winters has been the pipeline issue.

However, if the counterargument is accurate, a feedback loop between applied breakthroughs and basic research will generate enough pressure to keep the pipeline moving forward.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Muehlhauser, Luke. 2016. “What Should We Learn from Past AI Forecasts?” https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts.


Artificial Intelligence - What Is The Advanced Soldier Sensor Information Systems and Technology (ASSIST)?

 



Soldiers are often required to do missions that may take many hours and are quite stressful.

Once a mission is completed, soldiers are asked to write a report detailing the most significant events that occurred.

This report is designed to collect information about the environment and local/foreign people in order to better organize future operations.

Soldiers typically compile this report primarily from their memories, still photographs, and GPS data from portable equipment.

Given the severe stress soldiers face, there are probably numerous cases in which crucial information is missing and therefore not available for future mission planning.

Soldiers were equipped with sensors that could be worn directly on their uniforms as part of the ASSIST (Advanced Soldier Sensor Information Systems and Technology) program, which addressed this problem.

Sensors continually recorded what was going on around the troops during the operation.

When the troops returned from their mission, the sensor data was indexed and an electronic record of the events that occurred while the ASSIST system was recording was established.

With this record, soldiers could offer more accurate reports instead of depending simply on their memories.

Numerous functions were made possible by AI-based algorithms, including:

• "Capabilities for Image/Video Data Analysis"

• Object Detection/Image Classification—the capacity to detect and identify items (such as automobiles, persons, and license plates) using video, images, and/or other data sources.

• "Audio Data Analysis Capabilities"

• "Arabic Text Translation"—the ability to detect, recognize, and translate written Arabic text (e.g., in imagery data)

• "Change Detection"—the ability to detect changes in related data sources over time (e.g., identify differences in imagery of the same location at different times)

• Sound Recognition/Speech Recognition—the capacity to distinguish speech (e.g., keyword spotting and foreign language recognition) and identify sound events (e.g., explosions, gunfire, and cars) in audio data.

• Shooter Localization/Shooter Classification—the ability to recognize gunshots in the environment (e.g., via audio data processing), as well as the kind of weapon used and the shooter's position.

• "Capabilities for Soldier Activity Data Analysis"

• Soldier State Identification/Soldier Localization—the capacity to recognize a soldier's course of movement in a given area and characterize the soldier's activities (e.g., running, walking, and climbing stairs)

To be effective, AI systems like this (also known as autonomous or intelligent systems) must be thoroughly and statistically analyzed to verify that they would work correctly and as intended in a military setting.

The National Institute of Standards and Technology (NIST) was entrusted with assessing these AI systems using three criteria:

1. The precision with which objects, events, and activities are identified and labeled

2. The system's capacity to learn and improve its categorization performance.

3. The system's usefulness in improving operational effectiveness

To create its performance measurements, NIST devised a two-part test technique.

Metrics 1 and 2 were assessed using component- and system-level technical performance evaluations, respectively, while metric 3 was assessed using system-level utility assessments.
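As a simplified illustration of the first criterion, labeling precision (and recall) for a single component, such as object detection, can be computed by comparing system output against the ground truth collected at the test site. The labels and exact-match scoring below are assumptions made for the example and do not reproduce NIST's actual ASSIST scoring protocol.

    # Simplified sketch of component-level scoring against ground truth.
    # The labels and exact-match rule are illustrative, not NIST's actual ASSIST metrics.

    def precision_recall(system_labels, ground_truth):
        """Both arguments are sets of (object_id, label) pairs for one test run."""
        true_positives = len(system_labels & ground_truth)
        precision = true_positives / len(system_labels) if system_labels else 0.0
        recall = true_positives / len(ground_truth) if ground_truth else 0.0
        return precision, recall

    if __name__ == "__main__":
        ground_truth = {(1, "vehicle"), (2, "person"), (3, "license_plate"), (4, "person")}
        system_out = {(1, "vehicle"), (2, "vehicle"), (3, "license_plate"), (5, "person")}
        p, r = precision_recall(system_out, ground_truth)
        print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50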

The utility assessments were created to estimate the effect these technologies would have on warfighter performance in a range of missions and job tasks, while the technical performance evaluations were created to ensure the ongoing improvement of ASSIST system technical capabilities.

NIST endeavored to create assessment techniques that would give an appropriate degree of difficulty for system and soldier performance while defining the precise processes for each sort of evaluation.

At the component level, the ASSIST systems were broken down into components that implemented particular capabilities.

For example, to evaluate its Arabic translation capability, a system was broken down into an Arabic text identification component, an Arabic text extraction component (to localize individual text characters), and a text translation component.

Each component was evaluated on its own to see how it affected overall system performance.

Each ASSIST system was assessed as a black box at the system level, with the overall performance of the system being evaluated independently of the individual component performance.

The total system received a single score, which indicated the system's ability to complete the overall job.

A test was also conducted at the system level to determine the system's usefulness in improving operational effectiveness for after-mission reporting.

Because all of the systems reviewed as part of this initiative were in the early phases of development, a formative assessment technique was suitable.

NIST was especially interested in determining the system's value for warfighters.

As a result, the evaluation focused on the technology's influence on soldiers' processes and products.

User-centered metrics were used to represent this viewpoint.

NIST set out to find measures that might help answer questions such as: What information do infantry troops seek and/or require after completing a field mission? From both the troops' and the S2's (Staff 2—Intelligence Officer) perspectives, how well are those information needs met? What did ASSIST contribute to mission reporting in terms of user-stated information requirements?

This examination was carried out at the Aberdeen Test Center Military Operations in Urban Terrain (MOUT) site in Aberdeen, Maryland.

The location was selected for a variety of reasons:

• Ground truth—Aberdeen was able to deliver ground truth to within two centimeters of chosen locations.

This provided a strong standard against which the system output could be compared, enabling the assessment team to get a good depiction of what really transpired in the environment.

• Realism—The MOUT location has around twenty structures that were built up to seem like an Iraqi town.

• Testing infrastructure—The facility was outfitted with a number of cameras (both inside and outside) to help us better comprehend the surroundings during testing.

• Soldier availability—For the assessment, the location was able to offer a small squad of active-duty troops.

The MOUT site was enhanced with items, people, and background noises whose location and behavior were programmed to provide a more operationally meaningful test environment.

The goal was to provide an environment in which the various ASSIST systems could test their capabilities by detecting, identifying, and/or capturing various forms of data.

Foreign language speech detection and classification, Arabic text detection and recognition, detection of shots fired and vehicle sounds, classification of soldier states and tracking their locations (both inside and outside of buildings), and identifying objects of interest such as vehicles, buildings, people, and so on were all included in NIST's utility assessments.

Because the tests required the troops to respond according to their training and experience, the soldiers' actions were not scripted as they progressed through each exercise.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.




See also: Battlefield AI and Robotics; Cybernetics and AI.

Further Reading

Schlenoff, Craig, Brian Weiss, Micky Steves, Ann Virts, Michael Shneier, and Michael Linegang. 2006. “Overview of the First Advanced Technology Evaluations for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 125–32. Gaithersburg, MA: National Institute of Standards and Technology.

Steves, Michelle P. 2006. “Utility Assessments of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 165–71. Gaithersburg, MA: National Institute of Standards and Technology.

Washington, Randolph, Christopher Manteuffel, and Christopher White. 2006. “Using an Ontology to Support Evaluation of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 172–78. Gaithersburg, MA: National Institute of Standards and Technology.

Weiss, Brian A., Craig I. Schlenoff, Michael O. Shneier, and Ann Virts. 2006. “Technology Evaluations and Performance Metrics for Soldier-Worn Sensors for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 157–64. Gaithersburg, MA: National Institute of Standards and Technology.




Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



Many computer-based systems' most significant feature is their reliability.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be designed purposefully hazardous to people, as with Trojan horses, viruses, and spyware, or they can be dangerous due to human programming or operation errors.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first person to be murdered while working with a robot occurred in 1979.

A one-ton parts-retrieval robot built by Litton Industries hit Ford Motor Company engineer Robert Williams in the head.

After failing to entirely switch off a malfunctioning robot on the production floor at Kawasaki Heavy Industries two years later, Japanese engineer Kenji Urada was murdered.

Urada was shoved into a grinding machine by the robot's arm.

Accidents do not always result in deaths.

A 300-pound Knightscope K5 security robot on patrol at a retail business center in Northern California, for example, knocked down a kid and ran over his foot in 2016.

Only a few cuts and swelling were sustained by the youngster.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) misled the Strategic Air Command into believing that the Soviet Union had fired over 2,000 nuclear missiles towards the US.

An examination revealed that a training scenario had been uploaded to an active defense computer by mistake.

In 1983, a Soviet military early warning system identified a single US intercontinental ballistic missile launching a nuclear assault.

Stanislav Petrov, the missile defense system's operator, correctly discounted the signal as a false alarm.

The reason of this and subsequent false alarms was ultimately discovered to be sunlight hitting high altitude clouds.

Petrov was eventually punished for humiliating his superiors by disclosing faults, despite preventing global thermonuclear Armageddon.

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navin Dal Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly manipulating an automated system to issue and then cancel huge numbers of sell orders, allowing his business to acquire equities at temporarily reduced prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "Thinking about you"), a Microsoft Corporation artificial intelligence social media chatterbot, went tragically wrong in 2016.

Tay was created by Microsoft engineers to imitate a nineteen-year-old American girl and to learn from Twitter discussions.

Instead, Tay was trained to use harsh and aggressive language by internet trolls, which it then repeated in tweets.

After barely sixteen hours, Microsoft deleted Tay's account.

More AI-related accidents in motor vehicle operating may occur in the future.

In 2016, the first fatal collision involving a self-driving car happened when a Tesla Model S in autopilot mode collided with a semi-trailer crossing the highway.

The motorist may have been viewing a Harry Potter movie on a portable DVD player when the accident happened, according to witnesses.

Tesla's software does not yet allow for completely autonomous driving, hence a human operator is required.

Despite these dangers, one management consulting company claims that autonomous automobiles might avert up to 90% of road accidents.

Artificial intelligence security is rapidly growing as a topic of cybersecurity study.

Militaries all around the globe are working on prototypes of dangerous autonomous weapons systems.

Automatic weapons, such as drones, that now rely on a human operator to make deadly force judgments against targets, might be replaced with automated systems that make life and death decisions.

Robotic decision-makers on the battlefield may one day outperform humans in extracting patterns from the fog of war and reacting quickly and logically to novel or challenging circumstances.

High technology is becoming more and more important in modern civilization, yet it is also becoming more fragile and prone to failure.

An inquisitive squirrel caused the NASDAQ's primary computer to collapse in 1987, bringing one of the world's major stock exchanges to its knees.

In another example, the ozone hole above Antarctica was not discovered for years because exceptionally low levels reported in data-processed satellite images were assumed to be mistakes.

It's likely that the complexity of autonomous systems, as well as society's reliance on them under quickly changing circumstances, will make completely tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of sophisticated artificial intelligence is that it is based on mathematical approaches and deep learning algorithms that are so complicated that even its creators are baffled as to how it makes accurate conclusions.

Autonomous cars, for example, depend on exclusively computer-written instructions while they watch people driving in real-world situations.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to decrease apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones? Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions, as well as examine newly developed cognitive computing systems on our behalf.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.



Artificial Intelligence - How Is AI Being Applied To Air Traffic Control?

 


Air Traffic Control (ATC) is a ground-based air navigation service that directs airplanes on the ground and in regulated airspace.

Air traffic controllers also give advising services in uncontrolled airspace on occasion.

By coordinating the movement of commercial and private planes and guaranteeing a safe separation of traffic in the air and on the ground, controllers ensure the safe flow of air traffic.

They usually provide pilots with real-time traffic and weather advisories along with directional guidance.

The major goals of the ATC, according to the Federal Aviation Administration (FAA), are to manage and expedite air traffic flow, as well as to prevent aircraft crashes and provide real-time information and other navigational assistance for pilots.

ATC is a service that is both risk averse and safety critical.

Air traffic controllers use a variety of technology, including computer systems, radars, and transmitters, in addition to their eye observation.

The volume and density of air travel have been increasing around the world.

The operational boundaries of modern ATC systems are being pushed as worldwide air traffic density increases.

To keep up with the rising need for accommodating future expansion in air traffic, air navigation and air traffic management systems must become increasingly complex.

Artificial intelligence (AI) provides a number of applications for safer, more efficient, and better management of rising air traffic.

According to the International Civil Aviation Organization's (ICAO) Global Air Navigation Plan (GANP), AI-based air traffic management systems may help address the operational issues posed by the growing volume and variety of air traffic.

Simulation systems with AI that can monitor and advise the activities of trainee controllers are already used in the training of human air traffic controllers.

In terms of operations, the ability of machine learning-based AI systems to ingest massive amounts of data may be used to address the complexity and challenges of traffic management.

Such technologies may be used to assess traffic data for flight planning and route selection during the planning stages.

By detecting a wide range of flight patterns, AI can also provide reliable traffic predictions.

AI-based ATC systems may be used for route prediction and decision-making in en route operations, particularly in difficult scenarios with little data.

AI can also help with taxiing procedures and runway layouts.

Additionally, AI-assisted voice recognition technologies may help pilots and controllers communicate more effectively.

With such a wide range of applications, AI technologies may help human air traffic controllers improve their overall performance by providing them with detailed information and quick decision-making procedures.

It's also worth noting that, rather than replacing human air traffic controllers, AI-based solutions have shown to be useful in ensuring the safe and efficient flow of air traffic.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Intelligent Transportation.


Further Reading

Federal Aviation Administration. 2013. Aeronautical Information Manual: Official Guide to Basic Flight Information and ATC Procedures. Washington, DC: FAA. https://www.faa.gov/air_traffic/publications/.

International Civil Aviation Organization. 2018. “Potential of Artificial Intelligence (AI) in Air Traffic Management (ATM).” In Thirteenth Air Navigation Conference, 1–3. Montreal, Canada. https://www.icao.int/Meetings/anconf13/Documents/WP/wp_232_en.pdf.

Nolan, Michael S. 1999. Fundamentals of Air Traffic Control. Pacific Grove, CA: Brooks/Cole.





What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...