Artificial Intelligence - What Is Automated Machine Learning?

 


 

Machine learning algorithms are created with the goal of detecting and describing complex patterns in massive datasets.

By taking the uncertainty out of constructing these analytical tools, automated machine learning (AutoML) aims to deliver them to everyone interested in large-scale data research.

"Computational analysis pipelines" is the name given to these instruments.

While there is still a lot of work to be done in automated machine learning, early achievements show that it will be an important tool in the arsenal of computer and data scientists.

It will be critical to tailor these software packages to beginner users, enabling them to undertake difficult machine learning tasks in a user-friendly way while still allowing for the integration of domain-specific knowledge and for model interpretation and action.

These latter objectives have received less attention, but they will need to be addressed in future study before AutoML is able to tackle complicated real-world situations.

Automated machine learning is a relatively young field of research that has risen in popularity in the past ten years as a consequence of the widespread availability of strong open-source machine learning frameworks and high-performance computers.

AutoML software packages are currently available in both open-source and commercial versions.

Many of these packages allow for the exploration of machine learning pipelines, which can include feature transformation algorithms such as discretization (which converts continuous variables, functions, and models into discrete ones that digital computers can handle more easily) and feature engineering algorithms such as principal components analysis (which discards dimensions of "less important" data while keeping a subset of "more important" variables), among others.
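To make the idea of such a pipeline concrete, here is a minimal sketch using the open-source scikit-learn library; the particular dataset, steps, and settings are illustrative assumptions rather than anything prescribed by AutoML.

```python
# A minimal sketch of a "computational analysis pipeline" built with scikit-learn.
# The dataset, steps, and settings here are illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    # feature transformation: continuous variables into discrete bins
    ("discretize", KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")),
    # feature engineering: keep a smaller set of "more important" directions
    ("reduce", PCA(n_components=10)),
    # final learner that detects patterns in the transformed features
    ("classify", KNeighborsClassifier(n_neighbors=5)),
])

# Cross-validation estimates how well the whole pipeline generalizes.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```

An AutoML system would search over many alternative pipelines of this kind rather than fixing the steps and settings by hand.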

Bayesian optimization, ensemble techniques, and genetic programming are examples of stochastic search strategies utilized in AutoML.

Stochastic search techniques may be used to solve problems that contain random noise as well as deterministic problems into which randomness is deliberately injected.

New methods for extracting "signal from noise" in datasets, as well as finding insights and making predictions, are currently being developed and tested.

One of the difficulties with machine learning is that each algorithm examines data in a unique manner.

That is, each algorithm recognizes and classifies various patterns.

Linear support vector machines, for example, are excellent at detecting linear patterns, whereas k-nearest neighbor methods are effective at detecting nonlinear patterns.

The problem is that scientists don't know which algorithm(s) to employ when they begin their work, since they don't know what patterns they're looking for in the data.

The majority of users select an algorithm that they are acquainted with or that seems to operate well across a variety of datasets.

Some people may choose an algorithm because the models it generates are simple to compare.

There are a variety of reasons why various algorithms are used for data analysis.

Nonetheless, the approach selected may not be optimal for a particular data set.

This task is especially tough for a new user who may not be aware of the strengths and disadvantages of each algorithm.

A grid search is one way to address this issue.

Multiple machine learning algorithms and parameter settings are applied to a dataset in a systematic manner, with the results compared to determine which approach is the best.

This is a frequent strategy that may provide positive outcomes.

The grid search's drawback is that it may be computationally demanding when a large number of methods, each with several parameter values, need to be examined.
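As a concrete illustration of the grid search just described, the sketch below uses scikit-learn's GridSearchCV to train and score every combination of a small, hypothetical set of k-nearest neighbor settings; even this tiny grid of 4 x 2 = 8 combinations requires 40 model fits under 5-fold cross-validation, which hints at how quickly the cost grows.

```python
# A minimal grid search sketch: every combination of the listed settings is
# trained and scored with cross-validation, and the best one is reported.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_neighbors": [1, 3, 5, 11],        # how many neighbors vote
    "weights": ["uniform", "distance"],  # how those votes are weighted
}

search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```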

Random forests, for example, are classification algorithms composed of numerous decision trees, with a number of regularly used parameters that must be fine-tuned for best results on a specific dataset.

The accepted machine learning approach adjusts how the algorithm fits the data using parameters, which are configuration variables.

A typical parameter is the maximum number of features that may be considered in the decision trees that are constructed and assessed.

Automated machine learning may aid in managing the complicated, computationally costly combinatorial explosion that occurs when such analyses are carried out.

A single parameter might have 10 distinct configurations, for example.

Another parameter might be the number of decision trees to be included in the forest, which could be 10 in total.

A third parameter might be the minimum number of samples permitted in the "leaves" of the decision trees, again with ten possible settings.

Based on the examination of just three parameters, this example gives 1000 distinct alternative parameter configurations.

A data scientist looking at ten different machine learning methods, each with 1,000 different parameter configurations, would have to undertake 10,000 different studies.

Hyperparameters, which are characteristics of the analyses that are established ahead of time and hence not learnt from the data, are added on top of these studies.

They are often established by the data scientist using a variety of rules of thumb or values derived from previous challenges.

Comparisons of numerous alternative cross-validation procedures or the influence of sample size on findings are examples of hyperparameter setups.

Hundreds of hyperparameter combinations may need to be assessed in a typical case.

The data scientist would have to execute a total of one million analyses using a mix of machine learning algorithms, parameter settings, and hyperparameter settings.
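The arithmetic behind this combinatorial explosion can be written out directly; the figure of 100 hyperparameter combinations below is inferred from the one-million total stated above and is meant only to illustrate the multiplication.

```python
# The combinatorial explosion described above, computed step by step.
settings_per_parameter = 10        # ten configurations per parameter
parameters = 3                     # three parameters per algorithm
algorithms = 10                    # ten candidate machine learning methods
hyperparameter_combinations = 100  # "hundreds" in the text; 100 reproduces the total

configs_per_algorithm = settings_per_parameter ** parameters            # 1,000
analyses_before_hyperparameters = algorithms * configs_per_algorithm    # 10,000
total_analyses = analyses_before_hyperparameters * hyperparameter_combinations
print(total_analyses)  # 1,000,000
```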

So many distinct studies might be prohibitive given the computing resources available to the user, depending on the sample size of the data to be examined, the number of features, and the kinds of machine learning algorithms used.

Using a stochastic search to approximate the optimum mix of machine learning algorithms, parameter settings, and hyperparameter settings is an alternate technique.

Until a computational limit is reached, a random number generator is employed to sample from all potential possibilities.

Before making a final decision, the user manually explores various parameter and hyperparameter settings around the optimal technique.

This has the virtue of being computationally manageable, but it has the disadvantage of being stochastic, since chance may lead the search to miss the best combinations.
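One common way to implement this random sampling strategy is a randomized search over parameter distributions; the sketch below uses scikit-learn's RandomizedSearchCV with an illustrative budget of fifty sampled configurations, and the specific ranges are assumptions chosen only for demonstration.

```python
# A minimal random-search sketch: configurations are sampled at random until
# a fixed budget (n_iter) is exhausted, rather than enumerated exhaustively.
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_distributions = {
    "n_estimators": randint(10, 200),          # number of trees in the forest
    "max_features": ["sqrt", "log2", None],    # features considered at each split
    "min_samples_leaf": randint(1, 20),        # minimum samples allowed in a leaf
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=50,          # the computational budget
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```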

To address this, a stochastic search algorithm with a heuristic element—a practical technique, guide, or rule—may be created that can adaptively explore algorithms and settings while improving over time.

Because they automate the search for optimum machine learning algorithms and parameters, approaches that combine stochastic searches with heuristics are referred to as automated machine learning.

A stochastic search could begin by creating a variety of machine learning algorithm, parameter setting, and hyperparameter setting combinations at random and then evaluate each one using cross-validation, a method for evaluating the effectiveness of a machine learning model.

The best of these is chosen, modified at random, and assessed once again.

This procedure is continued until a computational limit or a performance goal has been met.

This stochastic search is guided by the heuristic algorithm.
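The select-mutate-reevaluate loop described above can be sketched as a toy evolutionary search; this illustrates the general idea only, using assumed settings, and is not the algorithm of any particular AutoML package.

```python
# Toy sketch of a heuristic stochastic search: sample a random configuration,
# keep the best one seen so far, randomly mutate it, and repeat until a budget runs out.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
SPACE = {"n_estimators": [10, 50, 100, 200],
         "max_features": ["sqrt", "log2", None],
         "min_samples_leaf": [1, 2, 5, 10]}

def evaluate(config):
    # Cross-validation scores how well this configuration generalizes.
    model = RandomForestClassifier(random_state=0, **config)
    return cross_val_score(model, X, y, cv=3).mean()

def mutate(config):
    # Change one randomly chosen setting to a new random value.
    new = dict(config)
    key = random.choice(list(SPACE))
    new[key] = random.choice(SPACE[key])
    return new

random.seed(0)
best = {k: random.choice(v) for k, v in SPACE.items()}
best_score = evaluate(best)
for _ in range(20):                # the computational limit
    candidate = mutate(best)
    score = evaluate(candidate)
    if score > best_score:         # keep the better configuration
        best, best_score = candidate, score
print(best, round(best_score, 3))
```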

Optimal search strategy development is a hot topic in academia right now.

There are various benefits to using AutoML.

To begin with, it has the potential to be more computationally efficient than the exhaustive grid search method.

Second, it makes machine learning more accessible by removing some of the guesswork involved in choosing the best machine learning algorithm and its many parameters for a particular dataset.

This allows even the most inexperienced user to benefit from machine learning.

Third, if generalizability measurements are incorporated into the heuristic being used, it may provide more reproducible outcomes.

Fourth, incorporating complexity metrics into the heuristic might result in more interpretable outcomes.

Fifth, if expert knowledge is incorporated into the heuristic, it may produce more actionable findings.

AutoML techniques do, however, present certain difficulties.

The first is the risk of overfitting, which occurs when numerous distinct methods are evaluated, resulting in an analysis that matches existing data too closely but does not fit or forecast unknown or fresh data.

The more analytical techniques are applied to a dataset, the more likely it is that the data's noise is learned, resulting in a model that generalizes poorly to new data.

With any AutoML technique, this must be thoroughly handled.
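One simple safeguard, assuming the dataset is large enough to split, is to hold out a final test set that the automated search never sees and to report performance on it only once; the sketch below illustrates the idea with an assumed dataset and parameter grid.

```python
# Minimal sketch: select a model with cross-validation on the training split only,
# then report performance once on a held-out test set the search never saw.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"min_samples_leaf": [1, 5, 10]}, cv=5)
search.fit(X_train, y_train)

print("cross-validated score:", round(search.best_score_, 3))
print("held-out test score:  ", round(search.score(X_test, y_test), 3))  # honest estimate
```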

Second, AutoML is computationally demanding in and of itself.

Third, AutoML approaches may create very complicated pipelines including several machine learning algorithms.

This may make interpretation considerably more challenging than just selecting a single analytic method.

Fourth, this is a very new field.

Despite some promising early examples, ideal AutoML solutions may not yet have been devised.



~ Jai Krishna Ponnappan




See also: Deep Learning.

Further Reading

Feurer, Matthias, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. 2015. “Efficient and Robust Automated Machine Learning.” In Advances in Neural Information Processing Systems, 28. Montreal, Canada: Neural Information Processing Systems. http://papers.nips.cc/paper/5872-efficient-and-robust-automated-machine-learning.

Hutter, Frank, Lars Kotthoff, and Joaquin Vanschoren, eds. 2019. Automated Machine Learning: Methods, Systems, Challenges. New York: Springer.



Artificial Intelligence - What Has Been Isaac Asimov's Influence On AI?



(c. 1920–1992) Isaac Asimov was a professor of biochemistry at Boston University and a well-known science fiction novelist.

Asimov was a prolific writer in a variety of genres, and his corpus of science fiction has had a major impact on not just the genre, but also on ethical concerns surrounding science and technology.

Asimov was born in Russia.

He celebrated his birthday on January 2, 1920, despite not knowing his official birth date.

In 1923, his family moved to New York City.

At the age of sixteen, Asimov applied to Columbia College, the undergraduate school of Columbia University, but was refused admission owing to anti-Semitic restrictions on the number of Jewish students.

He enrolled instead in Seth Low Junior College, an affiliated undergraduate institution.

Asimov switched to Columbia College when Seth Low closed its doors, but obtained a Bachelor of Science rather than a Bachelor of Arts, which he regarded as "a gesture of second-class citizenship" (Asimov 1994, n.p.).

Asimov grew interested in science fiction about this time and started writing letters to science fiction periodicals, ultimately attempting to write his own short tales.

His debut short story, "Marooned off Vesta," was published in Amazing Stories in 1939.

His early works placed him in the company of science fiction pioneers like Robert Heinlein.

After graduation, Asimov attempted, but failed, to enroll in medical school.

Instead, at the age of nineteen, he enrolled in graduate school for chemistry.

World War II halted Asimov's graduate studies, and at Heinlein's recommendation, he completed his military duty by working at the Naval Air Experimental Station in Philadelphia.

He created short tales while stationed there, which constituted the foundation for Foundation (1951), one of his most well-known works and the first of a multi-volume series that he would eventually tie to numerous of his other pieces.

He earned his doctorate from Columbia University in 1948.

The pioneering Robots series by Isaac Asimov (1950s–1990s) has served as a foundation for ethical norms to alleviate human worries about technology gone awry.

The Three Laws of Robotics, for example, are often mentioned as guiding principles for artificial intelligence and robotics.

The Three Laws were initially set out in the short story "Runaround" (1942), which was eventually collected in I, Robot (1950):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A "zeroth rule" is devised in Robots and Empire (1985) in order for robots to prevent a scheme to destroy Earth: "A robot may not damage mankind, or enable humanity to come to danger via inactivity." The original Three Laws are superseded by this statute.

Characters in Asimov's Robot series of books and short stories are often tasked with solving a mystery in which a robot seems to have broken one of the Three Laws.

In "Runaround," for example, two field experts with U.S. Robots and Mechanical Men, Inc. discover they're in danger of being stuck on Mercury since their robot "Speedy" hasn't returned with selenium required to power a protective shield in an abandoned base to screen them from the sun.

Speedy has malfunctioned because he is stuck in a conflict between the Second and Third Laws: when the robot approaches the selenium, it is obliged to recede in order to defend itself from a corrosive quantity of carbon monoxide near the selenium.

The humans must figure out how to apply the Three Laws to free Speedy from this conflict-induced feedback loop.

More intricate arguments concerning the application of the Three Laws appear in later tales and books.

The Machines manage the world's economy in "The Evitable Conflict" (1950), and "robopsychologist" Susan Calvin notices that they have changed the First Law into a predecessor of Asimov's zeroth law: "the Machines work not for any one human being, but for all humanity" (Asimov 2004b, 222).

Calvin is concerned that the Machines are guiding mankind toward "the ultimate good of humanity" (Asimov 2004b, 222), even if humanity is unaware of what it is.

Furthermore, Asimov's Foundation series (1940s–1990s) coined the word "psychohistory," which may be interpreted as foreshadowing the algorithms that provide the foundation for artificial intelligence today.

In Foundation, the main character Hari Seldon creates psychohistory as a method of making broad predictions about the future behavior of extremely large groups of people, such as the breakdown of civilization (here, the Galactic Empire) and the ensuing Dark Ages.

Seldon, however, claims that psychohistory, which has the ability to foretell the fall, can also be used to shorten the coming era of anarchy.

The Empire has stood for twelve thousand years, and the dark ages to come are predicted to last thirty thousand.

A Second Empire will eventually develop, but a thousand generations of suffering humanity will separate it from our civilization.

If Seldon's group is permitted to act immediately, it is conceivable to cut the period of anarchy to a single millennium (Asimov 2004a, 30–31).

Psychohistory produces "a mathematical prediction" (Asimov 2004a, 30), similar to how artificial intelligence makes a forecast.

In the Foundation series, Seldon establishes the Foundation, a hidden collection of people preserving humanity's collective knowledge and so serving as the physical basis for a hypothetical second Galactic Empire.

In later parts of the Foundation series, the Foundation is threatened by the Mule, a mutant and consequently an aberration whose rise psychohistory's predictions could not anticipate.

Although Seldon's thousand-year plan depends on macro conceptions—"the future isn't nebulous. Seldon has computed and plotted it" (Asimov 2004a, 100)—individual acts may save or destroy the scheme.

The friction between large-scale theories and individual actions is a crucial factor driving Foundation.

Asimov's works frequently proved prescient, prompting some to label his work "future history" or "speculative fiction." The ethical challenges he raised are often cited in legal, political, and policy arguments years after they were published.

For example, in 2007, the South Korean Ministry of Commerce, Industry, and Energy established a Robot Ethics Charter based on the Three Laws, predicting that by 2020 every Korean household would have a robot.

The British House of Lords' Artificial Intelligence Committee adopted a set of guidelines in 2017 that are similar to the Three Laws.

The Three Laws' utility has been questioned by others.

First, some opponents point out that robots are often employed for military purposes and that the Three Laws would restrict this usage, a restriction Asimov would likely have supported given his anti-war short stories such as "The Gentle Vultures" (1957).

Second, some argue that today's robots and AI applications vary significantly from those shown in the Robot series.

Asimov's imaginary robots are powered by a "positronic brain," which remains science fiction and beyond current computing capability.

Third, the Three Laws are clearly fiction, and Asimov's Robot series relies on their ambiguities and misinterpretations to advance ethical concerns and for dramatic impact.

Critics claim that the Three Laws cannot serve as a true moral framework for controlling AI or robotics since they may be misunderstood just like any other legislation.

Finally, some argue that these ethical principles should be applied to all people.

Asimov died in 1992 from complications related to AIDS, which he contracted from a tainted blood transfusion during a heart bypass operation in 1983.


~ Jai Krishna Ponnappan




See also: Beneficial AI, Asilomar Meeting on; Pathetic Fallacy; Robot Ethics.


Further Reading

Asimov, Isaac. 1994. I, Asimov: A Memoir. New York: Doubleday.

Asimov, Isaac. 2002. It’s Been a Good Life. Amherst: Prometheus Books.

Asimov, Isaac. 2004a. The Foundation Novels. New York: Bantam Dell.

Asimov, Isaac. 2004b. I, Robot. New York: Bantam Dell.





Artificial Intelligence - Animal Consciousness, Social Cognition, Soul, And AI.



Researchers have gained a growing appreciation of animal and other nonhuman intelligences in recent decades.

Consciousness or sentience, sophisticated kinds of cognition, and personal rights have been claimed on behalf of ravens, bower birds, gorillas, elephants, cats, crows, dogs, dolphins, chimps, grey parrots, jackdaws, magpies, beluga whales, octopuses, and a variety of other creatures.

The Cambridge Declaration on Consciousness and the separate Nonhuman Rights Project mirror the contemporary struggle against racism, classism, sexism, and ethnocentrism by adding one more prejudice to the list: "speciesism," a term coined in 1970 by psychologist Richard Ryder and popularized by philosopher Peter Singer.

Animal awareness, in fact, may open the way for the investigation and appreciation of other sorts of postulated intelligences, including artificial intelligences (traditionally regarded, like animals, as "mindless machines") and alien intelligences.

The knowability of subjective experience and objective properties of other forms of consciousness is one of the most significant topics that professionals in many fields are grappling with today.

"What is it like to be a bat?" reportedly enquired philosopher Thomas Nagel, particularly because bats can use echolocation but humans cannot.

Most selfishly, a greater knowledge of animal consciousness might lead, by comparison, to a better comprehension of the human mind.

Looking to animals may also give fresh insights into the principles behind the emergence of consciousness in humans, which may aid scientists in equipping robots with comparable characteristics, appreciating their moral standing, or sympathizing with their actions.

Animals have been utilized as a tool to achieve human goals rather than as ends in and of themselves throughout history.

Cows produce milk that is consumed by humans.

Sheep produce wool, which is used to manufacture clothes.

Horses once provided transportation and power in agriculture, but today they are used for amusement and gambling.

The "discovery" of animal awareness may imply that humans are no longer at the center of their own mental world.

The twentieth century's "Cognitive Revolution," which ostensibly eliminated the soul as a scientific explanation for mental life, opened the door to studying and conducting experiments in animal perception, memory, cognition, and reasoning, as well as to exploring the possibilities for incorporating sophisticated information-processing and integrative capabilities into machines.

The possibility of a fundamental cognitive "software" that is shared by humans, animals, and artificial general intelligences is often addressed in emerging interdisciplinary sciences like neuroscience, evolutionary psychology, and computer science.

In his book Man and Dolphin (1961), independent researcher John Lilly was one of the first to propose that dolphins are not only intelligent but also exhibit traits and communication abilities that are superior to those of humans in many respects.

Many of his results have subsequently been validated by other researchers such as Lori Marino and Diana Reiss, and a broad consensus has been formed that dolphins' self-awareness falls somewhere between humans and chimpanzees.

Dolphins have been spotted fishing together with human fishers, while Pelorus Jack, the most renowned dolphin in history, reliably and freely accompanied ships for twenty-four years over the treacherous rocks and tidal surges of Cook Strait in New Zealand.

Some animals seem to pass the well-known self-recognition mirror test.

Dolphins and killer whales, chimps and bonobos, magpies, and elephants are among them.

The test is often performed by painting a tiny mark on an animal in a location that it cannot see without using a mirror.

The animal is reported to identify itself if they touch the mark on their own body after seeing it reflected in the mirror.

Certain detractors claim that the mirror-mark test is unfair to some animal species because it favors vision over the other senses.

The study of animal consciousness, according to SETI experts, may help humans contend with the existential implications of self-aware alien intelligences.

Similarly, work with animals has sparked curiosity in artificial intelligences' awareness.

For example, in his book The Scientist (1978), John Lilly discusses a hypothetical Solid State Intelligence (SSI) that will eventually evolve from the labor of human computer scientists and engineers.

This SSI would be built out of computer components, develop its own integrations and advancements, and eventually self-replicate to confront and destroy humans.

Some human beings would be protected by the SSI in domed "reservations" that it would maintain and govern.

The SSI would eventually develop the capacity to move the planet and traverse the cosmos in search of other intelligences similar to itself.

Artificial intelligence's self-consciousness has been criticized on many grounds.

Machines, according to John Searle, lack intentionality, or the capacity to discover meaning in the computations they do.

Inanimate things are seldom considered to have free will, and so are not considered persons.

Furthermore, they may be regarded as lacking something, such as a soul, the capacity to distinguish between good and evil, or emotion and creativity.

Animal consciousness findings, on the other hand, have introduced a new dimension to debates over animal and robot rights, since they allow for the claim that these animals have the ability to understand whether they are having pleasant or unpleasant experiences.

They also pave the way for powerful kinds of social cognition like attachment, communication, and empathy to be recognized.

A long list of chimps, gorillas, orangutans, and bonobos, including the well-known Washoe, Koko, Chantek, and Kanzi, have mastered an incredible number of gestures in American Sign Language or artificial lexigrams (keyboard symbols of objects or ideas), raising the possibility of true interspecies social exchange.

The Great Ape Project was formed in 1993 by an international group of primatologists with the stated goal of granting these creatures fundamental human rights to life, liberty, and protection from torture.

They argued that these creatures should be granted nonhuman personhood and brought within the great mammalian "community of equals." Many well-known marine mammal biologists have become outspoken opponents of fishermen's indiscriminate slaughter of cetaceans and of their use in captivity shows.

In 2007, American lawyer Steven Wise created the Nonhuman Rights Project at the Center for the Expansion of Fundamental Rights.

The Nonhuman Rights Project aims to give legal personhood to animals that are today considered the property of legal persons.

Physical liberty (against incarceration) and bodily integrity (against laboratory experimentation) would be among these core personhood rights.

According to the group, no common law norm prevents animal personhood, and the law eventually allowed human slaves to become legal persons, without precedent, via the writ of habeas corpus.

Individuals may use habeas corpus writs to assert their right to liberty and to challenge unjust confinement.

The Nonhuman Rights Project has been fighting for animal rights in the courts since 2013.

The first case was brought in New York State to protect the rights of four confined chimps, and it contained an affidavit from renowned primatologist Jane Goodall as proof.

The North American Primate Sanctuary Alliance requested that the chimpanzees be freed and relocated to their reserve.

The applications and appeals filed by the organization were refused.

Steven Wise has taken heart from the fact that, in one judgment, the Supreme Court acknowledged that the question of personhood is determined by public policy and social norms rather than biology.

The Cambridge Declaration on Consciousness was signed by a group of neuroscientists during the Francis Crick Memorial Conference in 2012.

David Edelman of the Neurosciences Institute in La Jolla, California, Christof Koch of the California Institute of Technology, and Philip Low of Stanford University were the three scientists most directly engaged in the document's creation.

All signatories agreed that scientific methodologies have shown evidence that mammal brain circuits seem to be linked to consciousness, affective moods, and emotional actions.

Birds seem to have developed awareness in a similar way to mammals, according to the researchers.

REM sleep patterns in zebra finches and equivalent effects of hallucinogenic drugs were also cited by the researchers as evidence of conscious behavior in animals.

Despite lacking a neocortex for higher-order brain activities, invertebrate cephalopods seem to exhibit conscious awareness, according to the declaration's signers.

Such views have not gone unchallenged.

Humans should continue to carry legal duty for animal care, according to attorney Richard Cupp.

He also contends that animal personhood may undermine the rights and autonomy of people with cognitive disabilities, leaving them exposed to diminished legal personhood.

Cupp also believes that animals are outside of the human moral community, and so outside of the social contract that established personhood rights in the first place.

Daniel Dennett, a philosopher and cognitive scientist, is a vocal skeptic of animal consciousness, saying that consciousness is a "fiction" that can only be generated through the use of human language.

Animals cannot tell such stories, and thus, on this view, cannot be conscious.

Because consciousness is a story we tell ourselves, and because science is based on objective facts and universal descriptions rather than stories, scientific disciplines will never be able to grasp what it means to be a conscious animal.


~ Jai Krishna Ponnappan




See also: Nonhuman Rights and Personhood; Sloman, Aaron.

Further Reading

Dawkins, Marian S. 2012. Why Animals Matter: Animal Consciousness, Animal Welfare, and Human Well-being. Oxford, UK: Oxford University Press.

Kaplan, Gisela. 2016. “Commentary on ‘So Little Brain, So Much Mind: Intelligence and Behavior in Nonhuman Animals’ by Felice Cimatti and Giorgio Vallortigara.” Italian Journal of Cognitive Science 4, no. 8: 237–52.

Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no. 4: 1231–87.

Wise, Steven M. 2010. “Legal Personhood and the Nonhuman Rights Project.” Animal Law Review 17, no. 1: 1–11.

Wise, Steven M. 2013. “Nonhuman Rights to Personhood.” Pace Environmental Law Review 30, no. 3: 1278–90.



Artificial Intelligence - What Is Algorithmic Error and Bias?

 




Bias in algorithmic systems has emerged as one of the most pressing issues surrounding artificial intelligence ethics.

Algorithmic bias refers to a computer system's recurrent and systemic flaws that discriminate against certain groups or people.

It's crucial to remember that bias isn't necessarily a bad thing: it may be built into a system in order to correct an unjust system or reality.

Bias causes problems when it leads to an unjust or discriminating conclusion that affects people's lives and chances.

Individuals and communities that are already vulnerable in society are often most at risk from algorithmic bias and error.

As a result, algorithmic prejudice may exacerbate social inequality by restricting people's access to services and goods.

Algorithms are increasingly being utilized to guide government decision-making, notably in the criminal justice sector for sentencing and bail, as well as in migration management using biometric technology like face and gait recognition.

When a government's algorithms are shown to be biased, individuals may lose faith in the AI system as well as its usage by institutions, whether they be government agencies or private businesses.

There have been several incidents of algorithmic prejudice during the past few years.

A high-profile example is Facebook's targeted advertising, which is based on algorithms that identify which demographic groups a given advertisement should be viewed by.

Indeed, according to one study, job advertisements for janitors and related occupations on Facebook are often targeted at lower-income groups and minorities, while ads for nurses or secretaries are targeted at women (Ali et al. 2019).

This amounts to profiling persons by protected categories, such as race, gender, and economic bracket, in order to maximize the effectiveness and profitability of advertising.

Another well-known example is Amazon's algorithm for sorting and evaluating resumes in order to increase efficiency and ostensibly impartiality in the recruiting process.

Amazon's algorithm was trained using data from the company's previous recruiting practices.

However, once the algorithm was implemented, it became evident that it was prejudiced against women, with résumés that contained the terms "women" or "gender" or indicated that the candidate had attended a women's institution receiving worse rankings.

Little could be done to address the algorithm's prejudices since it was trained on Amazon's prior recruiting practices.

While the algorithm was plainly prejudiced, this example demonstrates how such biases may mirror social prejudices, including, in this instance, Amazon's deeply established biases against employing women.

Indeed, bias in an algorithmic system may develop in a variety of ways.

Algorithmic bias occurs when a group of people and their lived experiences are not taken into consideration while the algorithm is being designed.

This can happen at any point during the algorithm development process, from collecting data that isn't representative of all demographic groups to labeling data in ways that reproduce discriminatory profiling to the rollout of an algorithm that ignores the differential impact it may have on a specific group.

In recent years, there has been a proliferation of policy documents addressing the ethical responsibilities of state and non-state bodies that use algorithmic processing—to ensure against unfair bias and other negative effects—partly in response to significant publicity around algorithmic biases (Jobin et al. 2019).

The European Union's "Ethics Guidelines for Trustworthy AI," issued in 2019, is one of the most important policy documents in this area.

The EU statement lays out seven principles for fair and ethical AI and for the regulation of algorithmic processing.

Furthermore, with the adoption of the General Data Protection Regulation (GDPR) in 2018, the European Union has been in the forefront of legislative responses to algorithmic processing.

According to the GDPR, which applies in the first instance to the processing of all personal information inside the EU, a corporation may be penalized up to 4 percent of its annual worldwide turnover if it uses an algorithm that is found to be prejudiced on the basis of race, gender, or another protected category.

The difficulty of determining where a bias occurred and what dataset caused prejudice is a persisting challenge for algorithmic processing regulation.

This is sometimes referred to as the algorithmic black box problem: an algorithm's deep data processing layers are so intricate and many that a human cannot comprehend them.

Based on the GDPR's right to an explanation when one is subject to an automated decision, one response has been to identify where the bias occurred via counterfactual explanations: different data is fed into the algorithm to observe where the unequal results emerge (Wachter et al. 2018).
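The intuition behind a counterfactual explanation can be illustrated with a toy sketch that nudges an individual's features until the model's decision flips; this is a simplified illustration of the general idea rather than the specific method of Wachter et al., and the model, data, and feature meanings are hypothetical placeholders.

```python
# Toy counterfactual sketch: greedily nudge the input until the model's decision
# flips, then report the feature changes as the explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # hypothetical features, e.g. income, debt, tenure
y = (X[:, 0] - X[:, 1] > 0).astype(int)        # synthetic "approve" label
model = LogisticRegression().fit(X, y)

def counterfactual(x, step=0.1, max_iter=200):
    """Move x along the model's coefficient direction until the prediction flips."""
    x_cf = x.copy()
    original = model.predict([x])[0]
    for _ in range(max_iter):
        if model.predict([x_cf])[0] != original:   # decision has flipped
            return x_cf
        direction = model.coef_[0]                 # direction that raises the score
        x_cf = x_cf + step * (direction if original == 0 else -direction)
    return None

x = X[0]
x_cf = counterfactual(x)
if x_cf is not None:
    print("feature changes needed to flip the decision:", np.round(x_cf - x, 2))
```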

In addition to legal and legislative instruments for tackling algorithmic bias, technical solutions to the issue include building synthetic datasets that seek to repair naturally occurring biases in data or to provide an unbiased and representative dataset.

While such channels for redress are vital, one of the most comprehensive solutions to the issue is to have far more varied human teams developing, producing, using, and monitoring the effect of algorithms.

A mix of life experiences within diverse teams makes it more likely that prejudices will be discovered and corrected sooner.


~ Jai Krishna Ponnappan




See also: Biometric Technology; Explainable AI; Gender and AI.

Further Reading

Ali, Muhammed, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. 2019. “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes.” In Proceedings of the ACM on Human-Computer Interaction, vol. 3, CSCW, Article 199 (November). New York: Association for Computing Machinery.

European Union. 2018. “General Data Protection Regulation (GDPR).” https://gdpr-info.eu/.

European Union. 2019. “Ethics Guidelines for Trustworthy AI.” https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (September): 389–99.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law & Technology 31, no. 2 (Spring): 841–87.

Zuboff, Shoshana. 2018. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.




Artificial Intelligence - What Is Artificial Intelligence, Alchemy, And Associationism?

 



Alchemy and Artificial Intelligence, a RAND Corporation paper prepared by Massachusetts Institute of Technology (MIT) philosopher Hubert Dreyfus and released as a mimeographed memo in 1965, critiqued artificial intelligence researchers' aims and essential assumptions.

The paper, which was written when Dreyfus was consulting for RAND, elicited a significant negative response from the AI community.

Dreyfus had been engaged by RAND, a nonprofit American global policy think tank, to analyze the possibilities for artificial intelligence research from a philosophical standpoint.

Researchers such as Herbert Simon and Marvin Minsky, who predicted in the late 1950s that machines capable of accomplishing whatever humans could do would exist within decades, made bright forecasts for the future of AI.

The objective for most AI researchers was not simply to develop programs that processed data in such a manner that the output or outcome appeared to be the result of intelligent activity.

Rather, they wanted to create software that could mimic human cognitive processes.

Experts in artificial intelligence felt that human cognitive processes might be used as a model for their algorithms, and that AI could also provide insight into human psychology.

The work of phenomenologists Maurice Merleau-Ponty, Martin Heidegger, and Jean-Paul Sartre impacted Dreyfus' thought.

Dreyfus contended in his report that the theory and aims of AI were founded on associationism, a philosophy of human psychology that includes a core concept: that thinking happens in a succession of basic, predictable stages.

Artificial intelligence researchers believed they could use computers to duplicate human cognitive processes because of their belief in associationism (which Dreyfus claimed was erroneous).

Dreyfus compared the characteristics of human thinking (as he saw them) to computer information processing and the inner workings of various AI systems.

The core of his thesis was that human and machine information processing processes are fundamentally different.

Computers can only be programmed to handle "unambiguous, totally organized information," rendering them incapable of managing "ill-structured material of everyday life," and hence of intelligence (Dreyfus 1965, 66).

On the other hand, Dreyfus contended that, according to AI research's primary premise, many characteristics of human intelligence cannot be represented by rules or associationist psychology.

Dreyfus outlined three areas where humans vary from computers in terms of information processing: fringe consciousness, insight, and ambiguity tolerance.

Chess players, for example, use fringe consciousness to decide which area of the board or which pieces to concentrate on while making a move.

The human player differs from a chess-playing program in that the human player does not consciously or subconsciously examine the information or count out probable plays the way the computer does.

Only after the player has utilized their fringe awareness to choose which pieces to concentrate on can they consciously calculate the implications of prospective movements in a manner akin to computer processing.

The (human) problem-solver may build a set of steps for tackling a complicated issue by understanding its fundamental structure.

This understanding is lacking in problem-solving software.

Rather, as part of the program, the problem-solving method must be preliminarily established.

The finest example of ambiguity tolerance is in natural language comprehension, when a word or phrase may have an unclear meaning yet is accurately comprehended by the listener.

When reading ambiguous syntax or semantics, there are an endless amount of signals to examine, yet the human processor manages to choose important information from this limitless domain in order to accurately understand the meaning.

On the other hand, a computer cannot be trained to search through all conceivable facts in order to decipher confusing syntax or semantics.

Either the amount of facts is too huge, or the criteria for interpretation are very complex.

AI experts chastised Dreyfus for oversimplifying the difficulties and misrepresenting computers' capabilities.

RAND commissioned MIT computer scientist Seymour Papert to respond to the report, which he published in 1968 as The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies.

Papert also set up a chess match between Dreyfus and Mac Hack, which Dreyfus lost, much to the amusement of the artificial intelligence community.

Nonetheless, some of his criticisms in this report and subsequent books appear to have foreshadowed intractable issues later acknowledged by AI researchers, such as artificial general intelligence (AGI), artificial simulation of analog neurons, and the limitations of symbolic artificial intelligence as a model of human reasoning.

Dreyfus' work was declared useless by artificial intelligence specialists, who stated that he misinterpreted their research.

Their ire had been aroused by Dreyfus's critiques of AI, which often used aggressive terminology.

The New Yorker magazine's "Talk of the Town" section included extracts from the report.

Dreyfus subsequently refined and enlarged his case in What Computers Can't Do: The Limits of Artificial Intelligence, published in 1972.


~ Jai Krishna Ponnappan



See also: Mac Hack; Simon, Herbert A.; Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Dreyfus, Hubert L. 1965. Alchemy and Artificial Intelligence. P-3244. Santa Monica, CA: RAND Corporation.

Dreyfus, Hubert L. 1972. What Computers Can’t Do: The Limits of Artificial Intelligence. New York: Harper and Row.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.

Papert, Seymour. 1968. The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies. Project MAC, Memo No. 154. Cambridge, MA: Massachusetts Institute of Technology.


Artificial Intelligence - What Is The AARON Computer Program?


 



Harold Cohen built AARON, a computer program that allowed him to produce paintings.

The initial version was created "about 1972," according to Cohen.

Because AARON is not open source, its development came to a halt when Cohen died in 2016.

In 2014, AARON was still creating fresh images, and it was still operating in 2016.

AARON is not an abbreviation.

The name was chosen because it begins with the first letter of the alphabet, and Cohen anticipated that he would eventually build further programs, which he never did.

AARON went through various versions during the course of its four decades of development, each with its own set of capabilities.

Earlier versions could only generate black-and-white line drawings, while later versions could also paint in color.

Some AARON versions were set up to make abstract paintings, while others were set up to create scenes with objects and people.

AARON's main goal was to generate not just computer pictures, but also physical, large-scale images or paintings.

The lines made by AARON, a program written in C at the time, were traced directly on the wall in Cohen's show at the San Francisco Museum of Modern Art.

In later creative episodes of AARON, the software was paired with a machine that had a robotic arm and could apply paint to canvas.

For example, the version of AARON on display at Boston's Computer Museum in 1995, which was written in LISP at the time and ran on a Silicon Graphics computer, generated a file containing a set of instructions.

The file was then transmitted to a PC running a C++ program.

This computer was equipped with a robotic arm.

The C++ code processed the commands and controlled the arm's movement, as well as the dye mixing and application to the canvas.

Cohen's drawing and painting devices were also significant advancements.

Industrial inkjet printers were employed in subsequent generations as well.

Because of the colors these new printers could create, Cohen considered this configuration of AARON to be the most advanced; he thought that the inkjet was the most important innovation since the industrial revolution when it came to colors.

While Cohen primarily concentrated on physical pictures, Ray Kurzweil built a screensaver version of AARON around the year 2000.

By 2016, Cohen had developed his own version of AARON, which produced black-and-white pictures that the user could color using a big touch screen.

"Fingerpainting," he called it.

AARON, according to Cohen, is neither a "totally independent artist" nor completely creative.

He did feel, however, that AARON demonstrates one requirement of autonomy: emergence, which Cohen defines as "paintings that are really shocking and unique." Cohen never got too far into AARON's philosophical ramifications.

Based on the amount of time he devoted to it in practically all of the interviews conducted with him, it is easy to infer that he regarded AARON's work as a colorist as its greatest accomplishment.

Computational creativity and generative design are two closely related terms.


~ Jai Krishna Ponnappan




See also: Computational Creativity; Generative Design.


Further Reading

Cohen, Harold. 1995. “The Further Exploits of AARON, Painter.” Stanford Humanities Review 4, no. 2 (July): 141–58.

Cohen, Harold. 2004. “A Sorcerer’s Apprentice: Art in an Unknown Future.” Invited talk at Tate Modern, London. http://www.aaronshome.com/aaron/publications/tate-final.doc.

Cohen, Paul. 2016. “Harold Cohen and AARON.” AI Magazine 37, no. 4 (Winter): 63–66.

McCorduck, Pamela. 1990. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen. New York: W. H. Freeman.



Artificial Intelligence - What Is An AI Winter?

 



The term AI Winter was coined during the 1984 annual conference of the American Association of Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence, or AAAI).

Marvin Minsky and Roger Schank, two top academics, used the phrase to describe the imminent bust in artificial intelligence research and development at the time.

Daniel Crevier, a Canadian AI researcher, has detailed how fear of an impending AI Winter caused a domino effect that started with skepticism in the AI research community, spread to the media, and eventually resulted in negative funding responses.

As a consequence, real AI research and development came to a halt.

The initial skepticism may now be ascribed mostly to the excessively optimistic promises made at the time, with AI's real outcomes being significantly less than expected.

Other reasons, such as a lack of computer capacity during the early days of AI research, led to the belief that an AI Winter was approaching.

This was especially true in the case of neural network research, which required a large amount of processing power.

Economic factors, however, also shifted attention toward more concrete investments, especially during overlapping periods of economic crisis.

AI Winters have occurred many times during the history of AI, with two of the most notable eras covering 1974 to 1980 and 1987 to 1993.

Although the dates of AI Winters are debatable and dependent on the source, times with overlapping patterns are associated with research abandonment and defunding.

The development of AI systems and technologies has progressed through cycles of hype and collapse, similar to those experienced by other breakthrough technologies such as nanotechnology.

Not only has there been an unprecedented amount of money for basic research, but there has also been exceptional progress in the development of machine learning during the present boom time.

The reasons for the investment surge vary depending on the many stakeholders involved in artificial intelligence research and development.

For example, industry has staked a lot of money on the idea that discoveries in AI would result in dividends by changing whole market sectors.

Governmental agencies, such as the military, invest in AI research to improve the efficiency of both defensive and offensive technology and to protect troops from harm.

Because AI Winters are triggered by a perceived lack of trust in what AI can provide, the present buzz around AI and its promises has sparked fears of another AI Winter.

On the other hand, others argue that current technology developments in applied AI research have secured future progress in this field.

This argument contrasts sharply with the so-called "pipeline issue," which claims that a lack of basic AI research will result in a limited number of applied outcomes.

One of the major elements of prior AI Winters has been the pipeline issue.

However, if the counterargument is accurate, a feedback loop between applied breakthroughs and basic research will generate enough pressure to keep the pipeline moving forward.


~ Jai Krishna Ponnappan




See also: Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Muehlhauser, Luke. 2016. “What Should We Learn from Past AI Forecasts?” https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts.


Artificial Intelligence - What Is The Advanced Soldier Sensor Information Systems and Technology (ASSIST)?

 



Soldiers are often required to carry out missions that may take many hours and are quite stressful.

Once a mission is completed, soldiers are asked to write a report detailing the most significant events that occurred.

This report is designed to collect information about the environment and local/foreign people in order to better organize future operations.

Soldiers typically compile this report based primarily on their memories, still photographs, and GPS data from portable equipment.

Given the severe stress they face, there are probably numerous cases in which crucial information is missing and thus not available for future mission planning.

Soldiers were equipped with sensors that could be worn directly on their uniforms as part of the ASSIST (Advanced Soldier Sensor Information Systems and Technology) program, which addressed this problem.

Sensors continually recorded what was going on around the troops during the operation.

When the troops returned from their mission, the sensor data was indexed and an electronic record of the events that occurred while the ASSIST system was recording was established.

Soldiers might offer more accurate reports if they had this knowledge instead of depending simply on their memories.

Numerous functions were made possible by AI-based algorithms, including:

• "Capabilities for Image/Video Data Analysis"

• Object Detection/Image Classification—the capacity to detect and identify items (such as automobiles, persons, and license plates) using video, images, and/or other data sources.

• "Audio Data Analysis Capabilities"

• "Arabic Text Translation"—the ability to detect, recognize, and translate written Arabic text (e.g., in imagery data)

• "Change Detection"—the ability to detect changes in related data sources over time (e.g., identify differences in imagery of the same location at different times)

• Sound Recognition/Speech Recognition—the capacity to distinguish speech (e.g., keyword spotting and foreign language recognition) and identify sound events (e.g., explosions, gunfire, and cars) in audio data.

• Shooter Localization/Shooter Classification—the ability to recognize gunshots in the environment (e.g., via audio data processing), as well as the kind of weapon used and the shooter's position.

• "Capabilities for Soldier Activity Data Analysis"

• Soldier State Identification/Soldier Localization—the capacity to recognize a soldier's course of movement in a given area and characterize the soldier's activities (e.g., running, walking, and climbing stairs) To be effective, AI systems like this (also known as autonomous or intelligent systems) must be thoroughly and statistically analyzed to verify that they would work correctly and as intended in a military setting.

The National Institute of Standards and Technology (NIST) was entrusted with assessing these AI systems using three criteria:

1. The precision with which objects, events, and activities are identified and labeled

2. The system's capacity to learn and improve its categorization performance.

3. The system's usefulness in improving operational efficiency

To create its performance measurements, NIST devised a two-part test technique.

Metrics 1 and 2 were assessed using component- and system-level technical performance evaluations, respectively, while metric 3 was assessed using system-level utility assessments.

The utility assessments were created to estimate the effect these technologies would have on warfighter performance in a range of missions and job tasks, while the technical performance evaluations were created to ensure the ongoing improvement of ASSIST system technical capabilities.

NIST endeavored to create assessment techniques that would give an appropriate degree of difficulty for system and soldier performance while defining the precise processes for each sort of evaluation.

At the component level, the ASSIST systems were broken down into components that implemented certain capabilities.

For example, to evaluate its Arabic translation capacity, a system was broken down into an Arabic text identification component, an Arabic text extraction component (to localize individual text characters), and a text translation component.

Each component was evaluated on its own to see how it affected the overall system.

Each ASSIST system was assessed as a black box at the system level, with the overall performance of the system being evaluated independently of the individual component performance.

The total system received a single score, which indicated the system's ability to complete the overall job.

A test was also conducted at the system level to determine the system's usefulness in improving operational effectiveness for after-mission reporting.

Because all of the systems reviewed as part of this initiative were in the early phases of development, a formative assessment technique was suitable.

NIST was especially interested in determining the system's value for warfighters.

As a result, the assessment focused on the systems' influence on soldiers' procedures and products.

User-centered metrics were used to represent this viewpoint.

NIST set out to find measures that might help answer questions such as: What information do infantry troops seek and/or require after completing a field mission? From both the troops' and the S2's (Staff 2—Intelligence Officer) perspectives, how successfully are those information demands met? What was ASSIST's contribution to mission reporting in terms of user-stated information requirements? This examination was carried out at the Aberdeen Test Center Military Operations in Urban Terrain (MOUT) site in Aberdeen, Maryland.

The location was selected for a variety of reasons:

• Ground truth—Aberdeen was able to deliver ground truth to within two centimeters of chosen locations.

This provided a strong standard against which the system output could be compared, enabling the assessment team to get a good depiction of what really transpired in the environment.

• Realism—The MOUT location has around twenty structures that were built up to seem like an Iraqi town.

• Testing infrastructure—The facility was outfitted with a number of cameras (both inside and outside) to help us better comprehend the surroundings during testing.

• Soldier availability—For the assessment, the location was able to offer a small squad of active-duty troops.

The MOUT site was enhanced with items, people, and background noises whose location and behavior were programmed to provide a more operationally meaningful test environment.

The goal was to provide an environment in which the various ASSIST systems could test their capabilities by detecting, identifying, and/or capturing various forms of data.

Foreign language speech detection and classification, Arabic text detection and recognition, detection of shots fired and vehicle sounds, classification of soldier states and tracking their locations (both inside and outside of buildings), and identifying objects of interest such as vehicles, buildings, people, and so on were all included in NIST's utility assessments.

Because the tests required the troops to respond according to their training and experience, the soldiers' actions were not scripted as they progressed through each exercise.


~ Jai Krishna Ponnappan





See also: Battlefield AI and Robotics; Cybernetics and AI.

Further Reading

Schlenoff, Craig, Brian Weiss, Micky Steves, Ann Virts, Michael Shneier, and Michael Linegang. 2006. “Overview of the First Advanced Technology Evaluations for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 125–32. Gaithersburg, MD: National Institute of Standards and Technology.

Steves, Michelle P. 2006. “Utility Assessments of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 165–71. Gaithersburg, MD: National Institute of Standards and Technology.

Washington, Randolph, Christopher Manteuffel, and Christopher White. 2006. “Using an Ontology to Support Evaluation of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 172–78. Gaithersburg, MD: National Institute of Standards and Technology.

Weiss, Brian A., Craig I. Schlenoff, Michael O. Shneier, and Ann Virts. 2006. “Technology Evaluations and Performance Metrics for Soldier-Worn Sensors for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 157–64. Gaithersburg, MD: National Institute of Standards and Technology.




What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...