
Artificial Intelligence - What Is Artificial Intelligence, Alchemy, And Associationism?

 



Alchemy and Artificial Intelligence, a RAND Corporation paper prepared by Massachusetts Institute of Technology (MIT) philosopher Hubert Dreyfus and released as a mimeographed memo in 1965, critiqued artificial intelligence researchers' aims and essential assumptions.

The paper, which was written when Dreyfus was consulting for RAND, elicited a significant negative response from the AI community.

Dreyfus had been engaged by RAND, a nonprofit American global policy think tank, to analyze the possibilities for artificial intelligence research from a philosophical standpoint.

Researchers such as Herbert Simon and Marvin Minsky, who predicted in the late 1950s that machines capable of doing whatever humans could do would exist within decades, made optimistic forecasts for the future of AI.

The objective for most AI researchers was not merely to develop programs that processed data in such a way that the output appeared to be the result of intelligent activity.

Rather, they wanted to create software that could mimic human cognitive processes.

Experts in artificial intelligence felt that human cognitive processes might be used as a model for their algorithms, and that AI could also provide insight into human psychology.

The work of phenomenologists Maurice Merleau-Ponty, Martin Heidegger, and Jean-Paul Sartre impacted Dreyfus' thought.

Dreyfus contended in his report that the theory and aims of AI were founded on associationism, a philosophy of human psychology that includes a core concept: that thinking happens in a succession of basic, predictable stages.

Artificial intelligence researchers believed they could use computers to duplicate human cognitive processes because of their belief in associationism (which Dreyfus claimed was erroneous).

Dreyfus compared the characteristics of human thinking (as he saw them) to computer information processing and the inner workings of various AI systems.

The core of his thesis was that human and machine information processing are fundamentally different.

Computers can only be programmed to handle "unambiguous, totally organized information," rendering them incapable of managing "ill-structured material of everyday life," and hence of intelligence (Dreyfus 1965, 66).

Conversely, Dreyfus contended that many characteristics of human intelligence cannot be represented by the formal rules or associationist psychology on which AI research's primary premise rested.

Dreyfus outlined three areas where humans differ from computers in their information processing: fringe consciousness, insight, and ambiguity tolerance.

Chess players, for example, use fringe consciousness to decide which area of the board or which pieces to concentrate on while making a move.

The human player differs from a chess-playing program in that the human does not consciously or unconsciously examine the information or count out possible moves the way the computer does.

Only after the player has used fringe consciousness to choose which pieces to concentrate on do they consciously calculate the implications of prospective moves in a manner akin to computer processing.

The (human) problem-solver may build a set of steps for tackling a complicated issue by understanding its fundamental structure.

This understanding is lacking in problem-solving software.

Rather, the problem-solving method must be specified in advance as part of the program.

The clearest example of ambiguity tolerance is in natural language comprehension, where a word or phrase may have an ambiguous meaning yet is accurately understood by the listener.

When interpreting ambiguous syntax or semantics, there is an endless number of cues to examine, yet the human processor manages to select the relevant information from this limitless domain and arrive at the correct meaning.

A computer, on the other hand, cannot be programmed to search through all conceivable facts in order to resolve ambiguous syntax or semantics.

Either the number of facts is too large, or the rules for interpretation are too complex.

AI experts chastised Dreyfus for oversimplifying the difficulties and misrepresenting computers' capabilities.

RAND commissioned MIT computer scientist Seymour Papert to respond to the paper; his rebuttal was published in 1968 as The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies.

Papert also set up a chess match between Dreyfus and Mac Hack, which Dreyfus lost, much to the amusement of the artificial intelligence community.

Nonetheless, some of the criticisms in this report and in Dreyfus's subsequent books appear to have foreshadowed difficulties later acknowledged by AI researchers, such as the elusiveness of artificial general intelligence (AGI), the difficulty of artificially simulating analog neurons, and the limitations of symbolic artificial intelligence as a model of human reasoning.

Dreyfus's work was dismissed as useless by artificial intelligence specialists, who said that he had misunderstood their research.

Their ire had been aroused by Dreyfus's critiques of AI, which often used aggressive terminology.

The New Yorker magazine's "Talk of the Town" section ran extracts from the report.

Dreyfus subsequently refined and enlarged his case in What Computers Can't Do: The Limits of Artificial Intelligence, published in 1972.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: Mac Hack; Simon, Herbert A.; Minsky, Marvin.

Further Reading

Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Dreyfus, Hubert L. 1965. Alchemy and Artificial Intelligence. P-3244. Santa Monica, CA: RAND Corporation.

Dreyfus, Hubert L. 1972. What Computers Can’t Do: The Limits of Artificial Intelligence. New York: Harper and Row.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.

Papert, Seymour. 1968. The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies. Project MAC, Memo No. 154. Cambridge, MA: Massachusetts Institute of Technology.


Artificial Intelligence - Who Was John McCarthy?

 


John McCarthy (1927–2011) was an American computer scientist and mathematician who was best known for helping to found the field of artificial intelligence in the late 1950s and for championing the use of formal logic in AI research.

McCarthy was a creative thinker who earned multiple accolades for his contributions to programming languages and operating systems research.

Throughout McCarthy's life, however, artificial intelligence and "formalizing common sense" remained his primary research interest (McCarthy 1990).

McCarthy first encountered the ideas that would lead him to AI as a graduate student at the 1948 Hixon Symposium on "Cerebral Mechanisms in Behavior."

The symposium took place at the California Institute of Technology, where McCarthy had just finished his undergraduate studies and was enrolled in the graduate mathematics program.

In the United States, machine intelligence had become a subject of substantial academic interest under the wide term of cybernetics by 1948, and many renowned cyberneticists, notably Princeton mathematician John von Neumann, were in attendance at the symposium.

McCarthy moved to Princeton's mathematics department a year later, where he discussed some early ideas inspired by the symposium with von Neumann.

McCarthy never published the work, despite von Neumann's urging, because he came to believe that cybernetics could not answer his questions about human knowing.

McCarthy finished a PhD on partial differential equations at Princeton.

He stayed at Princeton as an instructor after graduating in 1951, and in the summer of 1952, he had the chance to work at Bell Labs with cyberneticist and inventor of information theory Claude Shannon, whom he persuaded to collaborate on an edited collection of writings on machine intelligence.

Automata Studies received contributions from a variety of fields, ranging from pure mathematics to neuroscience.

McCarthy, on the other hand, felt that the published studies did not devote enough attention to the important subject of how to develop intelligent machines.

McCarthy joined the mathematics department at Stanford in 1953 but was let go two years later, perhaps, he later speculated, because he spent too much time thinking about intelligent machines and not enough time on his mathematical research.

In 1955, he accepted a position at Dartmouth, just as IBM was preparing to establish the New England Computation Center at MIT.

The New England Computation Center gave Dartmouth access to an IBM computer that was installed at MIT and made accessible to a group of New England colleges.

Through the IBM initiative McCarthy met IBM researcher Nathaniel Rochester, who brought McCarthy to IBM in the summer of 1955 to work with his research group.

McCarthy persuaded Rochester of the need for more research on machine intelligence, and together with Rochester, Shannon, and Marvin Minsky, a graduate student at Princeton, he submitted a proposal to the Rockefeller Foundation for a "Summer Research Project on Artificial Intelligence," which included the first known use of the phrase "artificial intelligence." Although the Dartmouth project is usually regarded as a watershed moment in the development of AI, the conference did not go as McCarthy had envisioned.

The Rockefeller Foundation funded the proposal at only half the requested budget, since it came from a relatively young professor in such a novel field of research, and it funded it at all partly because Shannon's reputation carried substantial weight with the Foundation.

Furthermore, since the event stretched over many weeks in the summer of 1956, only a handful of the invitees were able to attend for the whole period.

As a consequence, the Dartmouth conference was a fluid affair with an ever-changing and unpredictably diverse guest list.

Despite its chaotic implementation, the meeting was crucial in establishing AI as a distinct area of research.

While still at Dartmouth, McCarthy won a Sloan fellowship in 1957 to spend a year at MIT, closer to IBM's New England Computation Center.

In 1958, McCarthy was offered and accepted a position in the Electrical Engineering department at MIT.

Later, he was joined by Minsky, who worked in the mathematics department.

McCarthy and Minsky suggested the construction of an official AI laboratory to Jerome Wiesner, head of MIT's Research Laboratory of Electronics, in 1958.

Wiesner agreed, on the condition that the laboratory also take in six freshly admitted graduate students, and the "artificial intelligence project" began training its first generation of students.

McCarthy published his first paper on artificial intelligence in the same year.

In his book "Programs with Common Sense," he described a computer system he named the Advice Taker that would be capable of accepting and understanding instructions in ordinary natural language from nonexpert users.

McCarthy would later describe the Advice Taker as the start of a research program aimed at "formalizing common sense." McCarthy believed that everyday common-sense notions, such as understanding that if you do not know a phone number you will need to look it up before calling, could be written as formal logical statements and fed into a computer, enabling the machine to reach the same conclusions as humans.

Such formalization of common knowledge, McCarthy felt, was the key to artificial intelligence.
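
The flavor of this idea can be suggested with a small sketch. The Python fragment below is only an illustration of the general approach, not McCarthy's Advice Taker: the facts, predicates, and rules are invented, and the "inference" is a naive forward-chaining loop over them.

```python
# A minimal sketch (not the actual Advice Taker) of everyday rules written as
# formal statements that a simple inference step can chain together.
# All predicate and action names here are illustrative assumptions.

facts = {("want", "call_bob"), ("unknown", "phone_number_bob")}

# Rule: if you want to call someone and do not know their number,
# then you must look the number up before calling.
rules = [
    (lambda f: ("want", "call_bob") in f and ("unknown", "phone_number_bob") in f,
     ("do", "look_up_phone_number_bob")),
    (lambda f: ("do", "look_up_phone_number_bob") in f,
     ("then", "dial_phone_number_bob")),
]

# Naive forward chaining: keep applying rules until no new conclusions appear.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if condition(facts) and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# The derived ("do", ...) and ("then", ...) entries are the machine's
# "common sense" conclusions, reached by chaining declarative statements.
```

The point is simply that once everyday knowledge is written as explicit statements, a mechanical procedure can chain them into conclusions, which is the kind of formalization McCarthy had in mind.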

McCarthy's paper, which was presented at the United Kingdom's National Physical Laboratory "Symposium on Mechanisation of Thought Processes," helped establish the symbolic program of AI research.

McCarthy's research was focused on AI by the late 1950s, although he was also involved in a range of other computing-related topics.

In 1957, he was appointed to an Association for Computing Machinery group charged with developing the ALGOL programming language, which would go on to become the de facto language for describing algorithms in academic research for the next several decades.

He created the LISP programming language for AI research in 1958, and its successors are widely used in business and academia today.

McCarthy contributed to computer operating system research via the construction of time sharing systems, in addition to his work on programming languages.

Early computers were large and costly, and they could only be operated by one person at a time.

From his first interaction with computers at IBM in 1955, McCarthy recognized the need for many users across a large institution, such as a university or hospital, to be able to use the organization's computer systems concurrently through terminals in their own offices.

McCarthy pushed for study on similar systems at MIT, serving on a university committee that looked into the issue and ultimately assisting in the development of MIT's Compatible Time-Sharing System (CTSS).

Although McCarthy left MIT before the CTSS work was completed, the advocacy he carried out with J. C. R. Licklider, the future office head at the Advanced Research Projects Agency (the predecessor of DARPA), while a consultant at Bolt Beranek and Newman in Cambridge was instrumental in helping MIT secure significant federal support for computing research.

In 1962, Stanford professor George Forsythe recruited McCarthy to join what would become the second computer science department in the United States, after Purdue's.

McCarthy insisted on coming only as a full professor, a demand he assumed would be too much for Forsythe to secure for so young a researcher.

Forsythe was able to persuade Stanford to grant McCarthy a full chair, and he moved to Stanford in 1965 to establish the Stanford AI laboratory.

Until his retirement in 2000, McCarthy oversaw research at Stanford on AI topics such as robotics, expert systems, and chess.

McCarthy grew up in a family in which both parents were ardent members of the Communist Party, and he had a lifelong interest in Russian affairs.

He maintained numerous professional relationships with Soviet cybernetics and AI experts, traveling and lecturing there in the mid-1960s, and even arranged a chess match between a Stanford chess computer and a Russian equivalent in 1965, which the Russian program won.

He developed many foundational concepts in symbolic AI theory while at Stanford, such as circumscription, which expresses the idea that a computer must be allowed to make reasonable assumptions about problems presented to it; otherwise, even simple scenarios would have to be specified in such exacting logical detail that the task would be all but impossible.
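
The intuition, though not McCarthy's formal definition of circumscription, can be sketched as a default rule: treat an "abnormality" predicate as false unless the known facts force it to be true. The example below is a hypothetical Python illustration; the bird scenario and names are assumptions made for clarity.

```python
# A minimal sketch of default reasoning in the spirit of circumscription:
# assume the "abnormal" predicate holds of as few things as possible, so the
# program can jump to reasonable conclusions without every detail spelled out.

birds = {"tweety", "opus"}
known_abnormal = {"opus"}  # e.g., we are told explicitly that opus is a penguin

def flies(x):
    # Default rule: a bird flies unless it is known to be abnormal.
    # Minimizing "abnormal" licenses treating it as false whenever the
    # available facts do not force it to be true.
    return x in birds and x not in known_abnormal

print(flies("tweety"))  # True  -- assumed normal, so the default applies
print(flies("opus"))    # False -- explicit information overrides the default
```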

McCarthy's accomplishments were acknowledged with numerous prizes, including the 1971 Turing Award, the 1988 Kyoto Prize, election to the National Academy of Sciences in 1989, the 1990 National Medal of Science, and the 2003 Benjamin Franklin Medal.

McCarthy was an inventive thinker who continually imagined new technologies, such as a space elevator for economically lifting material into orbit and a system of carts strung from wires to improve urban transportation.

In a 2008 interview, McCarthy was asked what he thought the most important problems in computing were, and he answered without hesitation, "Formalizing common sense," the same endeavor that had inspired him from the start.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cybernetics and AI; Expert Systems; Symbolic Logic.


References & Further Reading:


Hayes, Patrick J., and Leora Morgenstern. 2007. “On John McCarthy’s 80th Birthday, in Honor of His Contributions.” AI Magazine 28, no. 4 (Winter): 93–102.

McCarthy, John. 1990. Formalizing Common Sense: Papers, edited by Vladimir Lifschitz. Norwood, NJ: Ablex.

Morgenstern, Leora, and Sheila A. McIlraith. 2011. “John McCarthy’s Legacy.” Artificial Intelligence 175, no. 1 (January): 1–24.

Nilsson, Nils J. 2012. “John McCarthy: A Biographical Memoir.” Biographical Memoirs of the National Academy of Sciences. http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/mccarthy-john.pdf.



Artificial Intelligence - Autonomy And Complacency In AI Systems.

 




The concepts of machine autonomy and human autonomy and complacency are intertwined.

Artificial intelligences are undoubtedly getting more independent as they are trained to learn from their own experiences and data intake.

As machines gain more skills, humans tend to become increasingly dependent on them to make judgments and to react correctly to unexpected events.

This dependence on AI systems' decision-making processes can lead to a loss of human agency and to complacency.

Such complacency may mean that serious faults in an AI system or its decision-making processes go unnoticed and uncorrected.

Autonomous machines are ones that can function in unsupervised settings, adapt to new situations and experiences, learn from previous errors, and decide the best potential outcomes in each case without the need for fresh programming input.

To put it another way, these robots learn from their experiences and are capable of going beyond their original programming in certain respects.

The idea is that programmers cannot foresee every circumstance that an AI-enabled machine might encounter in the course of its activities, so the machine must be able to adapt.

This view is not universally accepted; others argue that such adaptability is itself built into the programming, since the programs are designed to be adaptable.

The disagreement over whether any agent, including humans, can express free will and act autonomously exacerbates these debates.

With the advancement of technology, the autonomy of AI programs is not the only element of autonomy that is being explored.

Worries have also been raised about the effects on human autonomy, as well as about human complacency toward machines.

As AI systems become increasingly tuned to anticipate people's wishes and preferences, people may welcome no longer having to make decisions, even as their own choices become irrelevant.

The interaction of human employees and automated systems has gotten a lot of attention.

According to studies, humans are more prone to overlook flaws in these procedures, particularly once they become routinized, because routine fosters an expectation of success rather than vigilance for failure.

This expectation of success leads the operators or supervisors of automated processes to place their confidence in inaccurate readouts or machine judgments, which may result in mistakes and accidents.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems.



Further Reading


André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Frank Douglas, William Goldstein, Joel Huber, Leaf Van Boven, Bernd Weber, and Haiyang Yang. 2018. “Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data.” Customer Needs and Solutions 5, no. 1–2: 28–37.

Bahner, J. Elin, Anke-Dorothea Hüper, and Dietrich Manzey. 2008. “Misuse of Automated Decision Aids: Complacency, Automation Bias, and the Impact of Training Experience.” International Journal of Human-Computer Studies 66, no. 9: 688–99.

Lawless, W. F., Ranjeev Mittu, Donald Sofge, and Stephen Russell, eds. 2017. Autonomy and Intelligence: A Threat or Savior? Cham, Switzerland: Springer.

Parasuraman, Raja, and Dietrich H. Manzey. 2010. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Human Factors 52, no. 3: 381–410.





Artificial Intelligence - What Is The Turing Test?

 



 

The Turing Test is a method of determining whether a machine can exhibit intelligence that mimics, or is equivalent to and indistinguishable from, human intelligence.

The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype—what Turing calls "The Imitation Game," is the test's locus classicus.

In this game, a person is asked to determine which of the two rooms is filled by a computer and which is occupied by another human based on anonymized replies to natural language questions posed by the judge to each inhabitant.

Despite the fact that the human respondent must offer accurate answers to the court's queries, the machine's purpose is to fool the judge into thinking it is human.





According to Turing, the machine may be considered intelligent to the degree that it is successful at this job.
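
As a protocol, the Imitation Game can be sketched in a few lines of code. The version below is a simplified illustration rather than anything Turing specified: the "machine" is a canned-response stub, the judge is a human at the keyboard, and the question list is arbitrary.

```python
# A minimal sketch of the Imitation Game as a protocol: anonymized
# question-and-answer followed by a verdict. The respondents and decision
# rule here are placeholders, assumed only for illustration.
import random

def human_respondent(question: str) -> str:
    return input(f"(human) {question} > ")

def machine_respondent(question: str) -> str:
    # Stand-in for a conversational program; a real contestant would try
    # to imitate human answers.
    return "I would rather not say."

def imitation_game(questions):
    # Hide which respondent sits in which "room".
    rooms = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        rooms = {"A": machine_respondent, "B": human_respondent}

    transcript = {"A": [], "B": []}
    for q in questions:
        for room, respond in rooms.items():
            transcript[room].append((q, respond(q)))

    # The judge sees only the transcripts and names the room believed to
    # hold the machine; that judgment is left to a person here.
    print(transcript)
    guess = input("Which room holds the machine, A or B? ")
    return rooms[guess] is machine_respondent  # True if the machine was caught

# imitation_game(["What is your favorite poem?", "Add 34957 to 70764."])
```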

The fundamental benefit of this essentially operationalist view of intelligence is that it avoids thorny metaphysical and epistemological issues about the nature and inner experience of intelligent activity.

According to Turing's criteria, little more than empirical observation of outward behavior is required for attributing intelligence to an object.

This is in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a prerequisite for intelligence.

Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint—namely, how to be confident of the presence of other intelligent individuals if it is impossible to know their thoughts from a presumably required first-person perspective.



Nonetheless, the Turing Test, at least insofar as it treats intelligence in a strictly formalist manner, remains bound up with the spirit of Cartesian epistemology.

The machine in the Imitation Game is a digital computer in Turing's sense: a set of operations that may in principle be implemented in any material.


A digital computer consists of three parts: a knowledge store, an executive unit that executes individual orders, and a control that regulates the executive unit.
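
A toy rendering of that three-part picture may help. The instruction set below is an invented assumption, but the division of labor mirrors Turing's description: the store holds both orders and data, the executive unit carries out one order at a time, and the control decides which order comes next.

```python
# A minimal sketch of a three-part digital computer: store, executive unit,
# and control. The tiny instruction set is illustrative, not Turing's.

store = {
    0: ("LOAD", 10),   # put the value at address 10 into the accumulator
    1: ("ADD", 11),    # add the value at address 11
    2: ("STORE", 12),  # write the accumulator to address 12
    3: ("HALT", None),
    10: 2, 11: 3, 12: 0,
}

accumulator = 0
program_counter = 0          # the "control": which order comes next

while True:
    op, addr = store[program_counter]      # control fetches from the store
    if op == "HALT":
        break
    elif op == "LOAD":                     # executive unit executes the order
        accumulator = store[addr]
    elif op == "ADD":
        accumulator += store[addr]
    elif op == "STORE":
        store[addr] = accumulator
    program_counter += 1

print(store[12])  # 5 -- the result of 2 + 3 left in the store
```

Whether such a loop runs on vacuum tubes, relays, or silicon is irrelevant to its behavior, which is the formalist point the next paragraphs draw out.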






However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

What matters is the formal set of rules that make up the computer's very nature.

Turing holds to the core belief that intellect is inherently immaterial.

If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.


Since Turing's work, AI research has been split into two camps: 


  1. those who embrace and 
  2. those who oppose this fundamental premise.


To describe the first camp, John Haugeland coined the term "good old-fashioned AI," or GOFAI.

This camp included Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most famously, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.



Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited AI technique.

John Searle's Minds, Brains, and Programs (1980), in which Searle builds his now-famous Chinese Room thought experiment, is one of the most renowned criticisms of GOFAI in general—and the assumptions of the Turing Test in particular.





In the latter, a person with no prior understanding of Chinese is placed in a room and made to match the Chinese characters she receives with other Chinese characters she passes out, following a script of instructions written in English.

Searle argues that, given adequate mastery of the script, the person in the room could pass a Turing-type test, fooling a native Chinese speaker into believing she understood Chinese.

But since the person in the room stands in for a digital computer, Searle concludes that Turing-type tests fail to capture the phenomenon of understanding, which he claims involves more than the functionally accurate connection of inputs and outputs.

Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

Searle extends his explanation of the Chinese Room thought experiment by arguing that the physical makeup of the human species, particularly its sophisticated nervous system and brain tissue, should not be dismissed as unimportant to conceptions of intelligence.


This viewpoint has influenced connectionism, an altogether new approach to AI that aims to build computer intelligence by replicating the electrical circuitry of human brain tissue.


The effectiveness of this strategy has been hotly contested, although it appears to outperform GOFAI at developing generalized kinds of intelligence.

Turing's test, however, may be criticized not just from the standpoint of materialism but also from that of a renewed formalism.





On this view, one may argue that Turing tests are insufficient as a measure of intelligence because they aim to reproduce human behavior, which is frequently far from intelligent.

According to some variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.

This line of criticism has gotten more acute as AI research has shifted its focus to the potential of so-called super-intelligence: forms of generalized machine intelligence that far outperform human intellect.


Should this next level of AI be attained, Turing tests would seem to be outdated.

Furthermore, merely discussing the idea of superintelligence would seem to require criteria of intelligence beyond strict Turing testing.

Turing may be defended against such criticism by pointing out that establishing a universal criterion of intelligence was never his goal.



Indeed, according to Turing, the purpose is to replace the metaphysically problematic question "can machines think" with the more empirically verifiable alternative: "What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).


Thus, the above-mentioned flaw of Turing's test, that it fails to establish a priori standards of rationality, is also part of its strength and appeal.

It also explains why, three-quarters of a century after it was first proposed, the test has had such a lasting influence on AI research in all domains.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.



Artificial Intelligence - Gender and Artificial Intelligence.

 



Artificial intelligence and robots are often thought to be sexless and genderless in today's society, but this is not the case.

In fact, humans encode gender and stereotypes into artificial intelligence systems in much the same way that gender is woven into language and culture.

The data used to train artificial intelligences has a gender bias.

Biased data may cause significant discrepancies in computer predictions and conclusions.

In humans, such differences would be called discrimination.

AIs are only as good as the people who provide the data that machine learning systems capture, and they are only as ethical as the programmers who create and supervise them.

When individuals exhibit gender prejudice, machines learn to treat it as normal (if not acceptable) human behavior.

When utilizing numbers, text, graphics, or voice recordings to teach algorithms, bias might emerge.

Machine learning is the use of statistical models to evaluate and categorize large amounts of data in order to generate predictions.

Deep learning is the use of neural network topologies that are expected to imitate human brainpower.

Data is labeled using classifiers based on previous patterns.

Classifiers have a lot of power.

By studying data from automobiles visible in Google Street View, they can precisely forecast income levels and political leanings of neighborhoods and cities.
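
A minimal sketch can show how such a classifier absorbs skew from its training data. The toy hiring records below are invented for illustration; the point is only that a model fit to biased historical decisions will reproduce them.

```python
# A minimal sketch of bias absorbed from training data. The records and
# labels are fabricated; the "classifier" simply memorizes past frequencies.
from collections import Counter, defaultdict

# (feature, historical label) pairs: past hiring outcomes, skewed by gender.
training_data = [
    ("male", "interview"), ("male", "interview"), ("male", "interview"),
    ("male", "reject"),
    ("female", "interview"),
    ("female", "reject"), ("female", "reject"), ("female", "reject"),
]

# "Training": count how often each label followed each feature value.
counts = defaultdict(Counter)
for feature, label in training_data:
    counts[feature][label] += 1

def predict(feature):
    # The classifier returns the most frequent historical outcome.
    return counts[feature].most_common(1)[0][0]

print(predict("male"))    # "interview"
print(predict("female"))  # "reject"  -- the historical skew becomes the rule
```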

The language individuals employ reveals gender prejudice.

This bias may be apparent in the names of items as well as how they are ranked in significance.

Descriptions of men and women are skewed, beginning with how frequently their respective titles are used and whether they are referred to as men and women versus boys and girls.

The analogies and words employed are skewed as well.

Biased AI may influence whether or not individuals of particular genders or ethnicities are targeted for certain occupations, whether or not medical diagnoses are correct, whether or not they are able to acquire loans, and even how exams are scored.

"Woman" and "girl" are more often associated with the arts than with mathematics in AI systems.

Similar biases have been discovered in Google's AI systems for finding employment prospects.



Facebook and Microsoft's algorithms regularly correlate pictures of cooking and shopping with female activity, whereas sports and hunting are associated with masculine activity.

Researchers have discovered instances when gender prejudices are purposefully included into AI systems.

Men, for example, are more often provided opportunities to apply for highly paid and sought-after positions on job sites than women.

Female-sounding names for digital assistants on smartphones include Siri, Alexa, and Cortana.

According to Alexa's creator, the name came out of discussions with Amazon CEO Jeff Bezos, who wanted a virtual assistant with the personality and gender of the starship Enterprise computer from the Star Trek television series, which speaks with a female voice.

Deborah Harrison, the Cortana project's head, says that Cortana's female voice arose from studies demonstrating that people respond better to female voices.

However, when BMW introduced a female voice to its in-car GPS route planner, it experienced instant backlash from males who didn't want their vehicles to tell them what to do.

Female voices should seem empathic and trustworthy, but not authoritative, according to the company.

Affectiva, a startup that specializes in artificial intelligence, utilizes photographs of six million people's faces as training data to attempt to identify their underlying emotional states.

The startup is now collaborating with automakers to utilize real-time footage of drivers to assess whether or not they are weary or furious.

The automobile would advise these drivers to pull over and take a break.

However, the organization has discovered that women seem to "laugh more" than males, which complicates efforts to accurately estimate the emotional states of normal drivers.

In hardware, the same biases might be discovered.

Computer engineers, who are still mostly male, create a disproportionate number of robots gendered as female.

The NASA Valkyrie humanoid robot has breasts.

Jia, a strikingly human-looking robot created at China's University of Science and Technology, has long wavy black hair, a pale complexion, and pink lips and cheeks.

When first spoken to, she keeps her eyes and head inclined downward, as though in deference.

She is slender and busty and wears a tight gold gown.

"Yes, my lord, what can I do for you?" she says as a welcome.

"Don't get too near to me while you're taking a photo," Jia says when asked to snap a picture.

It will make my face seem chubby." In popular culture, there is a strong prejudice against female robots.

Fembots in the 1997 film Austin Powers discharged bullets from their breast cups, weaponizing female sexuality.

The majority of robots in music videos are female robots.

Duran Duran's "Electric Barbarella" was the first song accessible for download on the internet.

Bjork's video "The Girl And The Robot" gave birth to the archetypal white-sheathed robot seen today in so many places.

Marina and the Diamonds' protest that "I Am Not a Robot" is met by Hoodie Allen's fast answer that "You Are Not a Robot." In "The Ghost Inside," by the Broken Bells, a female robot sacrifices plastic body parts to pay tolls and reclaim paradise.

The skin of Lenny Kravitz's "Black Velveteen" is titanium.

Hatsune Miku and Kagamine Rin are anime-inspired holographic vocaloid singers.

Daft Punk is the notable exception, where robot costumes conceal the genuine identity of the male musicians.

Sexy robots are the principal love interests in films like Metropolis (1927), The Stepford Wives (1975), Blade Runner (1982), Ex Machina (2014), and Her (2013), as well as television programs like Battlestar Galactica and Westworld.

Meanwhile, "killer robots," or deadly autonomous weapons systems, are hypermasculine.

Atlas, Helios, and Titan are examples of rugged military robots developed by the Defense Advanced Research Projects Agency (DARPA).

Achilles, Black Knight, Overlord, and Thor PRO are some of the names given to self-driving automobiles.

The HAL 9000 computer embedded in the spacecraft Discovery in 2001: A Space Odyssey (1968), perhaps the most famous autonomous vehicle of all time, is masculine and deadly.

In the field of artificial intelligence, there is a clear gender disparity.

The head of the Stanford Artificial Intelligence Lab, Fei-Fei Li, revealed in 2017 that her team was mostly made up of "men in hoodies" (Hempel 2017).

Women make up just approximately 12% of the researchers who speak at major AI conferences (Simonite 2018b).

Women earn 19 percent of bachelor's degrees and 22 percent of PhDs in computer and information sciences (NCIS 2018).

The share of bachelor's degrees in computer science earned by women is now lower than it was in 1984, when it peaked at 37 percent (Simonite 2018a).

This is despite the fact that the earliest "computers," as shown in the film Hidden Figures (2016), were women.

There is significant dispute among philosophers over whether unsituated, gender-neutral knowledge can exist in human society.

Even when Google and Apple launched digital assistants without an assigned sex, users projected gender preferences onto them.

White males developed centuries of professional knowledge, which was eventually unleashed into digital realms.

Will machines be able to build and apply rules based on impartial information for hundreds of years to come? In other words, does scientific knowledge have a gender? Is it masculine or feminine? Alison Adam is a Science and Technology Studies researcher who is more concerned with the gender of the ideas produced than with the gender of the people involved.

Sage, a British corporation, recently hired a "conversation manager" charged with building a gender-neutral digital assistant, eventually dubbed "Pegg." To guide its programmers, the company has also formalized "five key principles" in an "ethics of code" document.

According to Sage CEO Kriti Sharma, "by 2020, we'll spend more time talking to machines than our own families," thus getting technology right is critical.

Microsoft recently established Aether, an internal ethics panel whose name stands for AI and Ethics in Engineering and Research.

Gender Swap is a project that uses a virtual reality system as a platform for embodiment experience, a kind of neuroscience experiment in which users can perceive themselves in a different body.

Human partners use the immersive Oculus Rift head-mounted display and first-person cameras to generate the illusion.

Both users coordinate their motions to generate this illusion.

The embodiment experience breaks down if one user's movements do not correspond to the other's.

This means that every movement they make together must be agreed upon by both users.

On a regular basis, new causes of algorithmic gender bias are discovered.

In 2018, Joy Buolamwini, an MIT graduate student, discovered gender and racial bias in the way AI systems analyzed people's faces.

With the help of other researchers, she found that the benchmark datasets behind facial analysis systems, when graded with the dermatologist-approved Fitzpatrick skin type scale, were made up primarily of lighter-skinned subjects (up to 86 percent).

The researchers assembled a rebalanced dataset graded by skin type and used it to audit three off-the-shelf gender classification systems.

They found that darker-skinned women are the most frequently misclassified group in all three commercial systems.
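
The kind of audit behind this finding can be sketched as a per-subgroup error count. The records in the example below are fabricated; only the method, reporting accuracy separately for each intersectional group rather than as a single aggregate number, reflects the study.

```python
# A minimal sketch of a subgroup audit: compare a classifier's error rate
# across intersectional subgroups. The records below are invented.
from collections import defaultdict

# (subgroup, true_gender, predicted_gender)
results = [
    ("lighter_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("darker_female", "female", "male"),   # misclassified
    ("darker_female", "female", "male"),   # misclassified
    ("darker_female", "female", "female"),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, prediction in results:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} misclassified")
# Reporting error rates per subgroup, rather than one aggregate accuracy,
# is what exposes the disparity described above.
```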

Buolamwini founded the Algorithmic Justice League, a group that fights unfairness in decision-making software.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 

Algorithmic Bias and Error; Explainable AI.


Further Reading:


Buolamwini, Joy and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research: Conference on Fairness, Accountability, and Transparency 81: 1–15.

Hempel, Jessi. 2017. “Melinda Gates and Fei-Fei Li Want to Liberate AI from ‘Guys With Hoodies.’” Wired, May 4, 2017. https://www.wired.com/2017/05/melinda-gates-and-fei-fei-li-want-to-liberate-ai-from-guys-with-hoodies/.

Leavy, Susan. 2018. “Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning.” In GE ’18: Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16. New York: Association for Computing Machinery.

National Center for Education Statistics (NCIS). 2018. Digest of Education Statistics. https://nces.ed.gov/programs/digest/d18/tables/dt18_325.35.asp.

Roff, Heather M. 2016. “Gendering a Warbot: Gender, Sex, and the Implications for the Future of War.” International Feminist Journal of Politics 18, no. 1: 1–18.

Simonite, Tom. 2018a. “AI Is the Future—But Where Are the Women?” Wired, August 17, 2018. https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance/.

Simonite, Tom. 2018b. “AI Researchers Fight Over Four Letters: NIPS.” Wired, October 26, 2018. https://www.wired.com/story/ai-researchers-fight-over-four-letters-nips/.

Søraa, Roger Andre. 2017. “Mechanical Genders: How Do Humans Gender Robots?” Gender, Technology, and Development 21, no. 1–2: 99–115.

Wosk, Julie. 2015. My Fair Ladies: Female Robots, Androids, and Other Artificial Eves. New Brunswick, NJ: Rutgers University Press.



Artificial Intelligence - Who Was Marvin Minsky?

 






Donner Professor of Natural Sciences Marvin Minsky (1927–2016) was a well-known cognitive scientist, inventor, and artificial intelligence researcher from the United States.

At the Massachusetts Institute of Technology, he cofounded the Artificial Intelligence Laboratory in the 1950s and the Media Lab in the 1980s.

His renown was such that the sleeping astronaut Dr. Victor Kaminski (killed by the sentient HAL 9000 computer) was named after him when he served as an adviser on Stanley Kubrick's iconic film 2001: A Space Odyssey in the 1960s.

Toward the end of high school in the 1940s, Minsky became interested in intelligence, thinking, and learning machines.

He was interested in neurology, physics, music, and psychology as a Harvard student.



On problem-solving and learning ideas, he collaborated with cognitive psychologist George Miller, and on perception and brain modeling theories with J.C.R. Licklider, professor of psychoacoustics and later father of the internet.

Minsky started thinking about mental ideas while at Harvard.

"I thought the brain was made up of tiny relays called neurons, each of which had a probability linked to it that determined whether the neuron would conduct an electric pulse," he later recalled.

"Technically, this system is now known as a stochastic neural network" (Bern stein 1981).

This hypothesis is comparable to the Hebbian theory Donald Hebb set out in his book The Organization of Behavior (1949).

In the mathematics department, he finished his undergraduate thesis on topology.

Minsky studied mathematics as a graduate student at Princeton University, but he became increasingly interested in attempting to build artificial neurons out of vacuum tubes like those described in Warren McCulloch and Walter Pitts' famous 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity." He thought that a machine like this might navigate mazes like a rat.



In the summer of 1951, he and fellow Princeton student Dean Edmonds created the system, termed SNARC (Stochastic Neural-Analog Reinforcement Calculator), with money from the Office of Naval Research.

There were 300 tubes in the machine, as well as multiple electric motors and clutches.

The machine used the clutches to adjust its own knobs, which is what made it a learning machine.

The electric rat initially wandered at random, but through reinforcement of the probabilities governing its choices it learned to make better decisions and reach a desired goal.
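
SNARC's learning principle, as described here, can be sketched in software. The maze, reward, and learning rate below are invented assumptions; the sketch only shows choice probabilities being reinforced when a randomly chosen turn happens to lead to the goal.

```python
# A minimal sketch of reinforcement of choice probabilities, in the spirit of
# SNARC. One maze junction with two turns; values are illustrative.
import random

choice_weights = {"left": 1.0, "right": 1.0}   # one junction, two options

def pick_turn():
    # Choose a turn with probability proportional to its current weight.
    total = sum(choice_weights.values())
    r = random.uniform(0, total)
    for turn, weight in choice_weights.items():
        r -= weight
        if r <= 0:
            return turn
    return turn

for trial in range(200):
    turn = pick_turn()
    reached_goal = (turn == "right")           # suppose "right" leads to the goal
    if reached_goal:
        choice_weights[turn] += 0.1            # reinforcement: make it likelier

print(choice_weights)  # the rewarded turn ends up with far more weight
```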

Eventually multiple rats ran the labyrinth and learned from one another.

In his dissertation work, Minsky added a second memory to his hard-wired neural network, which helped the rat recall what stimulus it had received.

When confronted with a new circumstance, this enabled the system to explore its memories and forecast the optimum course of action.

Minsky had believed that by adding enough memory loops to his self-organizing random networks, conscious intelligence would arise spontaneously.

In 1954, Minsky finished his dissertation, "Neural Nets and the Brain Model Problem." After graduating from Princeton, Minsky continued to consider how to create artificial intelligence.



In 1956, he organized and participated in the Dartmouth Summer Research Project on Artificial Intelligence with John McCarthy, Nathaniel Rochester, and Claude Shannon.

The Dartmouth workshop is often referred to as a watershed moment in AI research.

During the summer workshop, since no computer was available, Minsky began simulating on slips of paper the computational process of proving Euclid's geometric theorems.

He realized he could create an imagined computer that would locate proofs without having to tell it precisely what it needed to accomplish.

Minsky showed the results to Nathaniel Rochester, who returned to IBM and asked Herbert Gelernter, a new physics hire, to write a geometry-proving program on a computer.

Gelernter built a program in FORTRAN List Processing Language, a language he invented.

Later, John McCarthy combined Gelernter's language with ideas from mathematician Alonzo Church to develop LISP (List Processing), which became the most widely used AI language.

Minsky arrived at MIT in 1957.

He began working on pattern recognition problems with Oliver Selfridge at the university's Lincoln Laboratory.

The next year, he was hired as an assistant professor in the mathematics department.

He founded the AI Group with McCarthy, who had transferred to MIT from Dartmouth.

They continued to work on machine learning concepts.

Minsky started working with mathematician Seymour Papert in the 1960s.

Perceptrons: An Introduction to Computational Geometry (1969) was their joint publication analyzing a kind of artificial neural network proposed by Cornell Aeronautical Laboratory psychologist Frank Rosenblatt.

The book sparked a decades-long debate in the AI field, which continues to this day in certain aspects.

The mathematical arguments in Minsky and Papert's book pushed the field toward symbolic AI (also known as "Good Old-Fashioned AI," or GOFAI) until the 1980s, when artificial intelligence researchers rediscovered perceptrons and neural networks.
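
For reference, a single-layer perceptron of the kind Rosenblatt proposed and Minsky and Papert analyzed can be written in a few lines. The sketch below learns the linearly separable AND function; the book's central point was that no such single-layer unit can learn a function like XOR.

```python
# A minimal sketch of a single-layer perceptron: a weighted threshold unit
# trained by the perceptron learning rule on the AND truth table.

def predict(weights, bias, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Truth table for AND (linearly separable, so the rule converges).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                        # a few passes suffice here
    for x, target in data:
        error = target - predict(weights, bias, x)
        weights[0] += rate * error * x[0]  # perceptron learning rule
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```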

Time-shared computers were more widely accessible on the MIT campus in the 1960s, and Minsky started working with students on machine intelligence issues.

One of the first efforts was to teach computers how to solve problems in basic calculus using symbolic manipulation techniques such as differentiation and integration.

In 1961, his student James Robert Slagle built a software for symbol manipulation.

The program, called SAINT (Symbolic Automatic INTegrator), ran on an IBM 7090 transistorized mainframe computer.

Other students later generalized the approach in MACSYMA, a program able to carry out a wide range of symbolic manipulations.

Minsky's students also tackled the challenge of teaching a computer to reason by analogy.

Minsky's team also worked on issues related to computational linguistics, computer vision, and robotics.

Daniel Bobrow, one of his pupils, taught a computer how to answer word problems, an accomplishment that combined language processing and mathematics.

Henry Ernst, a student, designed the first computer-controlled robot, a mechanical hand with photoelectric touch sensors for grasping nuclear materials.

Minsky collaborated with Papert to develop semi-independent programs that could interact with one another to address increasingly complex challenges in computer vision and manipulation.

Minsky and Papert combined their nonhierarchical management techniques into a natural intelligence hypothesis known as the Society of Mind.

Intelligence, according to this view, is an emergent feature that results from tiny interactions between programs.

After studying various constructions, the MIT AI Group trained a computer-controlled robot to build structures out of children's blocks by 1970.

Throughout the 1970s and 1980s, the blocks-manipulating robot and the Society of Mind hypothesis evolved.

Minsky eventually published The Society of Mind (1986), which models the emergence of intelligence from individual mental agents and their interactions rather than from any single fundamental principle or universal method.

In the book, which is made up of 270 short, self-contained essays, he discussed consciousness, the self, free will, memory, genius, language, brainstorming, learning, and many other themes.

Agents, according to Minsky, do not require their own mind, thinking, or feeling abilities.

They are not intelligent.

However, when they work together as a society, they produce what we call human intelligence.

To put it another way, understanding how to achieve any certain goal requires the collaboration of various agents.

Minsky's block-building robot, for example, requires agents that see, move, locate, grip, and balance blocks.

"I'd like to believe that this effort provided us insights into what goes on within specific sections of children's brains when they learn to 'play' with basic toys," he wrote (Minsky 1986, 29).

Minsky speculated that there may be over a hundred agents collaborating to create what we call mind.

He expanded on his Society of Mind ideas in The Emotion Machine (2006).

There he argued that emotions are not a separate kind of thinking.

Rather, they reflect different ways of thinking about various sorts of challenges that people face in the real world.

According to Minsky, the mind changes between different modes of thought, thinks on several levels, finds various ways to represent things, and constructs numerous models of ourselves.

In his final years, Minsky commented through books and interviews on a broad range of popular and significant subjects related to artificial intelligence and robotics.

The Turing Option (1992), a book created by Minsky in partnership with science fiction novelist Harry Harrison, is set in the year 2023 and deals with issues of artificial intelligence.

In a 1994 article for Scientific American headlined "Will Robots Inherit the Earth?" he said, "Yes, but they will be our children" (Minsky 1994, 113).

Minsky once suggested that a superintelligent AI may one day spark a Riemann Hypothesis Catastrophe, in which an agent charged with answering the hypothesis seizes control of the whole planet's resources in order to obtain even more supercomputing power.

He didn't think this was a plausible scenario.

Humans could be able to converse with intelligent alien life forms, according to Minsky.

They'd think like humans because they'd be constrained by the same "space, time, and material constraints" (Minsky 1987, 117).

Minsky was also a critic of the Loebner Prize, the world's oldest Turing Test-like competition, claiming that it is detrimental to artificial intelligence research.

He offered his own Minsky Loebner Prize Revocation Prize to anyone who could put a stop to Hugh Loebner's annual competition.

Both Minsky and Loebner died in 2016, yet the Loebner Prize competition is still going on.

Minsky was also responsible for the development of the confocal microscope (1957) and the head-mounted display (HMD) (1963).

He was awarded the Turing Award in 1969, the Japan Prize in 1990, and the Benjamin Franklin Medal in 2001. Minsky's doctoral students included Daniel Bobrow (operating systems), K. Eric Drexler (molecular nanotechnology), Carl Hewitt (mathematics and philosophy of logic), Danny Hillis (parallel computing), Benjamin Kuipers (qualitative simulation), Ivan Sutherland (computer graphics), and Patrick Winston, who succeeded Minsky as director of the MIT AI Lab.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.




See also: 


AI Winter; Chatbots and Loebner Prize; Dartmouth AI Conference; 2001: A Space Odyssey.



References & Further Reading:


Bernstein, Jeremy. 1981. “Marvin Minsky’s Vision of the Future.” New Yorker, December 7, 1981. https://www.newyorker.com/magazine/1981/12/14/a-i.

Minsky, Marvin. 1986. The Society of Mind. London: Picador.

Minsky, Marvin. 1987. “Why Intelligent Aliens Will Be Intelligible.” In Extraterrestrials: Science and Alien Intelligence, edited by Edward Regis, 117–28. Cambridge, UK: Cambridge University Press.

Minsky, Marvin. 1994. “Will Robots Inherit the Earth?” Scientific American 271, no. 4 (October): 108–13.

Minsky, Marvin. 2006. The Emotion Machine. New York: Simon & Schuster.

Minsky, Marvin, and Seymour Papert. 1969. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: Massachusetts Institute of Technology.

Singh, Push. 2003. “Examining the Society of Mind.” Computing and Informatics 22, no. 6: 521–43.


What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...