Showing posts with label Artificial Intelligence. Show all posts

Artificial Intelligence - What Were The Macy Conferences?

 



The Macy Conferences on Cybernetics, which ran from 1946 to 1960, laid the groundwork for interdisciplinary fields such as cybernetics, cognitive psychology, artificial life, and artificial intelligence.

Famous twentieth-century scholars, academics, and researchers took part in the Macy Conferences' freewheeling debates, including psychiatrist W. Ross Ashby, anthropologist Gregory Bateson, ecologist G. Evelyn Hutchinson, psychologist Kurt Lewin, philosopher Donald Marquis, neurophysiologist Warren McCulloch, cultural anthropologist Margaret Mead, economist Oskar Morgenstern, statistician Leonard Savage, and physicist Heinz von Foerster.

McCulloch, a neurophysiologist at the Massachusetts Institute of Technology's Research Laboratory of Electronics, and von Foerster, a professor of signal engineering at the University of Illinois at Urbana-Champaign and coeditor with Mead of the published Macy Conference proceedings, were the two main organizers of the conferences.

All meetings were sponsored by the Josiah Macy Jr. Foundation, a nonprofit organization.

The conferences were started by Macy administrators Frank Fremont-Smith and Lawrence K. Frank, who believed that they would spark multidisciplinary discussion.

The disciplinary isolation of medical research was a major worry for Fremont-Smith and Frank.

A Macy-sponsored symposium on Cerebral Inhibition in 1942 preceded the Macy Conferences; there, Harvard physiology professor Arturo Rosenblueth gave the first public presentation on cybernetics, titled "Behavior, Purpose, and Teleology."

The ten conferences held between 1946 and 1953 focused on circular causation and feedback processes in biological and social systems.

Between 1954 and 1960, five transdisciplinary Group Processes Conferences were held as a result of these sessions.

To foster direct conversation amongst participants, conference organizers avoided formal papers in favor of informal presentations.

The significance of control, communication, and feedback systems in the human nervous system was stressed in the early Macy Conferences.

Other subjects explored included the contrast between analog and digital processing, switching-circuit design and Boolean logic, game theory, servomechanisms, and communication theory.

These concerns belong under the umbrella of "first-order cybernetics." Several biological issues were also discussed during the conferences, including adrenal cortex function, consciousness, aging, metabolism, nerve impulses, and homeostasis.

The sessions acted as a forum for discussing long-standing issues in what would eventually be referred to as artificial intelligence.

(Mathematician John McCarthy coined the term "artificial intelligence" in a 1955 proposal for what became the 1956 Dartmouth College workshop.) Gregory Bateson, for example, gave a lecture at the inaugural Macy Conference that, drawing on his anthropological research, distinguished between "learning" and "learning to learn," and he encouraged listeners to consider how a computer might perform either task.

At the eighth conference, attendees discussed decision theory research in a session led by Leonard Savage.

W. Ross Ashby proposed the idea of chess-playing automata at the ninth conference.

The usefulness of automated computers as logic models for human cognition was discussed more than any other issue during the Macy Conferences.

In 1964, the Macy Conferences gave rise to the American Society for Cybernetics, a professional organization.

The Macy Conferences' early discussions of feedback mechanisms found application in topics as varied as artillery control, project management, and marital therapy.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cybernetics and AI; Dartmouth AI Conference.


References & Further Reading:


Dupuy, Jean-Pierre. 2000. The Mechanization of the Mind: On the Origins of Cognitive Science. Princeton, NJ: Princeton University Press.

Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Heims, Steve J. 1988. “Optimism and Faith in Mechanism among Social Scientists at the Macy Conferences on Cybernetics, 1946–1953.” AI & Society 2: 69–78.

Heims, Steve J. 1991. The Cybernetics Group. Cambridge, MA: MIT Press.

Pias, Claus, ed. 2016. The Macy Conferences, 1946–1953: The Complete Transactions. Zürich, Switzerland: Diaphanes.




Biased Data Isn't the Only Source of AI Bias.

 





Eliminating bias in artificial intelligence will require addressing both human and systemic biases.


Bias in AI systems is often seen as a purely technical issue, but the NIST report recognizes that human biases, as well as systemic, institutional biases, also play a role.

Researchers at the National Institute of Standards and Technology (NIST) recommend broadening the scope of where we look for the source of these biases — beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed — as a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems. 

The advice is at the heart of a new NIST article, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which incorporates feedback from the public on a draft version issued last summer. 


The publication provides guidelines related to the AI Risk Management Framework that NIST is creating as part of a wider effort to facilitate the development of trustworthy and responsible AI. 


The key difference between the draft and final versions of the article, according to NIST's Reva Schwartz, is the increased focus on how bias presents itself not just in AI algorithms and the data used to train them, but also in the sociocultural environment in which AI systems are employed. 

"Context is crucial," said Schwartz, one of the report's authors and NIST's principal investigator for AI bias.

"AI systems don't work in a vacuum. They assist individuals in making choices that have a direct impact on the lives of others. If we want to design trustworthy AI systems, we must take into account all of the elements that might undermine public confidence in AI. Many of these variables extend beyond the technology itself to its consequences, as shown by the responses we got from a diverse group of individuals and organizations." 

NIST contributes to the research, standards, and data needed to fulfill artificial intelligence's (AI) full potential as a driver of American innovation across industries and sectors. 

NIST is working with the AI community to define the technological prerequisites for cultivating confidence in AI systems that are accurate and dependable, safe and secure, explainable, and bias-free. 


AI bias is harmful to humans. 


AI may decide whether a student is admitted to a school, a borrower is approved for a bank loan, or a rental applicant is accepted.

Machine learning software, for example, might be taught on a dataset that underrepresents a certain gender or ethnic group. 

While these computational and statistical causes of bias remain relevant, the new NIST article emphasizes that they do not capture the whole story. 

Human and structural prejudices, which play a large role in the new edition, must be taken into consideration for a more thorough understanding of bias. 

Institutions that operate in ways that disfavor specific social groups, such as discriminating against persons based on race, are examples of systemic biases. 

Human biases may be related to how individuals utilize data to fill in gaps, such as a person's neighborhood impacting how likely police would consider them to be a criminal suspect. 

When human, institutional, and computational biases come together, they may create a dangerous cocktail – particularly when there is no specific direction for dealing with the hazards of deploying AI systems. 

"If we are to construct trustworthy AI systems, we must take into account all of the elements that might erode public faith in AI. Many of these considerations extend beyond the technology itself to the technology's consequences." ~ Reva Schwartz, principal investigator for AI bias

To address these concerns, the NIST authors propose a "socio-technical" approach to AI bias mitigation.


This approach recognizes that AI acts in a wider social context — and that attempts to overcome the issue of bias just on a technological level would fall short. 


"When it comes to AI bias concerns, organizations sometimes gravitate to highly technical solutions," Schwartz added. 

"However, these techniques fall short of capturing the social effect of AI systems. The growth of artificial intelligence into many facets of public life necessitates broadening our perspective to include AI as part of the wider social system in which it functions." 

According to Schwartz, socio-technical approaches to AI are a developing field, and creating measuring tools that take these elements into account would need a diverse mix of disciplines and stakeholders. 

"It's critical to bring in specialists from a variety of sectors, not just engineering," she added, "and to listen to other organizations and communities about the implications of AI." 

Over the next several months, NIST will host a series of public workshops aimed at creating a technical report on AI bias and integrating it into the AI Risk Management Framework.


Visit the AI RMF workshop website for further information and to register.



A Method for Reducing Artificial Intelligence Bias Risk. 


The National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing biases in artificial intelligence (AI) — and is asking for the public's help in improving it — in an effort to combat the often pernicious effect of biases in AI that can harm people's lives and public trust in AI. 


A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), a new publication from NIST, lays out the approach.


It's part of the agency's larger effort to encourage the development of trustworthy and responsible AI. 


NIST will welcome public comments on the paper through September 10, 2021 (an extension of the initial deadline of August 5, 2021), and the authors will use the feedback to help shape the agenda of several collaborative virtual events NIST will host in the coming months.


This series of events aims to engage the stakeholder community and provide them the opportunity to contribute feedback and ideas on how to reduce the danger of bias in AI. 


"Managing the danger of bias in AI is an important aspect of establishing trustworthy AI systems, but the route to accomplishing this remains uncertain," said Reva Schwartz of the National Institute of Standards and Technology, who was one of the report's authors. 

"We intend to include the community in the development of voluntary, consensus-based norms for limiting AI bias and decreasing the likelihood of negative consequences." 


NIST contributes to the research, standards, and data needed to fulfill artificial intelligence's (AI) full potential as a catalyst for American innovation across industries and sectors. 


NIST is working with the AI community to define the technological prerequisites for cultivating confidence in AI systems that are accurate and dependable, safe and secure, explainable, and bias-free. 

Bias in AI-based goods and systems is a critical, but as yet poorly defined, component of trustworthiness.

This bias may be purposeful or inadvertent.


NIST is working to get us closer to consensus on recognizing and quantifying bias in AI systems by organizing conversations and conducting research. 


Because AI can typically make sense of information faster and more reliably than humans, it has become a transformational technology. 

Everything from medical detection to digital assistants on our cellphones now uses AI. 

However, as AI's uses have developed, we've seen that its conclusions may be skewed by biases in the data it's given - data that either partially or erroneously represents the actual world. 

Furthermore, some AI systems are designed to simulate complicated notions that cannot be readily assessed or recorded by data, such as "criminality" or "employment appropriateness." 

Other criteria, such as where you live or how much education you have, are used as proxies for the notions these systems are attempting to mimic. 


The imperfect correlation of the proxy data with the original concept can lead to undesirable or discriminatory AI outputs, such as wrongful arrests, or qualified candidates being wrongly rejected for jobs or loans.


The strategy the authors suggest for controlling bias comprises a conscious effort to detect and manage bias at multiple phases in an AI system’s lifespan, from early idea through design to release. 

The purpose is to bring together stakeholders from a variety of backgrounds, both within and outside the technology industry, in order to hear viewpoints that haven't been heard before. 

“We want to bring together the community of AI developers of course, but we also want to incorporate psychologists, sociologists, legal experts and individuals from disadvantaged communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. 

"We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." 


The NIST authors' preliminary research included a survey of peer-reviewed publications, books, and popular news media, as well as industry reports and presentations.


They found that bias can seep into AI systems at any stage of development, often in different ways depending on the AI's purpose and the social context in which it is used.

"An AI tool is often built for one goal, but it is subsequently utilized in a variety of scenarios," Schwartz said. 

"Many AI applications have also been inadequately evaluated, if at all, in the environment for which they were designed. All these elements might cause bias to go undetected.” 

Because the team members acknowledge that they do not have all of the answers, Schwartz believes it is critical to get public comment, particularly from those who are not often involved in technical conversations. 


"We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." ~ Elham Tabassi.


"We know bias exists throughout the AI lifespan," added Schwartz. 

"It would be risky to not know where your model is biased or to assume that there is none. The next stage is to figure out how to see it and deal with it."


Comments on the proposed method may be provided by downloading and completing the template form (in Excel format) and emailing it to ai-bias@list.nist.gov by Sept. 10, 2021 (extended from the initial deadline of Aug. 5, 2021). 

This website will be updated with further information on the joint event series.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read and learn more about Technology and Engineering here.

You may also want to read and learn more about Artificial Intelligence here.




Artificial Intelligence - Machine Translation.

  



Machine translation is the process of using computer technology to automatically translate human languages.

The US government saw machine translation as a valuable instrument in diplomatic efforts to contain communism in the USSR and the People's Republic of China from the 1950s through the 1970s.

Machine translation has lately become a tool for marketing goods and services in countries where they would otherwise be unavailable due to language limitations, as well as a standalone offering.

Machine translation is also one of the litmus tests for artificial intelligence progress.

Research in machine translation has advanced along several broad paradigms.

Rule-based expert systems and statistical approaches to machine translation are the earliest.

Neural machine translation and example-based machine translation (translation by analogy) are two more contemporary paradigms.

Within computational linguistics, automated language translation is now regarded as an academic specialization.

While there are multiple possible roots for the present discipline of machine translation, the notion of automated translation as an academic topic derives from a 1947 communication between crystallographer Andrew D. Booth of Birkbeck College (London) and Warren Weaver of the Rockefeller Foundation.

"I have a manuscript in front of me that is written in Russian, but I am going to assume that it is truly written in English and that it has been coded in some bizarre symbols. To access the information contained in the text, all I have to do is peel away the code" (Warren Weaver, as quoted in Arnold et al. 1994, 13).

Most commercial machine translation systems have a translation engine at their core.

The user's sentences are parsed several times by translation engines, each time applying algorithmic rules to transform the source sentence into the desired target language.

There are rules for word-based and phrase-based transformation.

The initial objective of a parser software is generally to replace words using a two-language dictionary.

Additional processing rounds of the phrases use comparative grammatical rules that consider sentence structure, verb form, and suffixes.
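This multi-pass design can be caricatured in a few lines of Python: a dictionary substitution pass plus one grammatical rule. The mini-lexicon and the English-to-French adjective-noun reordering rule below are invented for illustration; a real engine applies thousands of such rules over repeated parses.

```python
# Toy rule-based translation sketch (invented mini-lexicon, English -> French).
LEXICON = {"the": "le", "red": "rouge", "cat": "chat", "sleeps": "dort"}
ADJECTIVES = {"red"}
NOUNS = {"cat"}

def translate(sentence):
    words = sentence.lower().split()
    # Pass 1 (grammar rule): French places most adjectives after the noun,
    # so swap any adjective-noun pair before substituting words.
    i = 0
    while i < len(words) - 1:
        if words[i] in ADJECTIVES and words[i + 1] in NOUNS:
            words[i], words[i + 1] = words[i + 1], words[i]
            i += 2
        else:
            i += 1
    # Pass 2: word-for-word substitution from the two-language dictionary;
    # unknown words pass through untranslated.
    return " ".join(LEXICON.get(w, w) for w in words)

print(translate("The red cat sleeps"))  # -> "le chat rouge dort"
```

Even this toy shows why such systems produce "word salad" on input their rules do not anticipate: any sentence structure outside the hand-written rules is translated word for word.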

Translation engines are evaluated for intelligibility and accuracy.

Machine translation isn't perfect.

Poor grammar in the source text, lexical and structural differences between languages, ambiguous usage, multiple meanings of words and idioms, and local variations in usage can all lead to "word salad" translations.

In 1959–60, MIT philosopher, linguist, and mathematician Yehoshua Bar-Hillel issued the harshest early criticism of machine translation of language.

In principle, according to Bar-Hillel, near-perfect machine translation is impossible.

He used the following passage to demonstrate the issue: John was looking for his toy box.

Finally, he found it.

The box was in the pen.

John was very happy.

The word "pen" poses a problem in this passage, since it might refer to a child's playpen or a ballpoint pen for writing.

Knowing the difference necessitates a broad understanding of the world, which a computer lacks.

When the National Academy of Sciences Automatic Language Processing Advisory Committee (ALPAC) released an extremely damaging report about the poor quality and high cost of machine translation in 1964, the initial rounds of US government funding eroded.

ALPAC came to the conclusion that the country already had an abundant supply of human translators capable of producing significantly greater translations.

Many machine translation experts slammed the ALPAC report, pointing to machine efficiency in the preparation of first drafts and the successful rollout of a few machine translation systems.

In the 1960s and 1970s, there were only a few machine translation research groups.

The TAUM group in Canada, the Mel'cuk and Apresian groups in the Soviet Union, the GETA group in France, and the German Saarbrücken SUSY group were among the biggest.

SYSTRAN (System Translation), a private corporation financed by government contracts founded by Hungarian-born linguist and computer scientist Peter Toma, was the main supplier of automated translation technology and services in the United States.

In the 1950s, Toma became interested in machine translation while studying at the California Institute of Technology.

Around 1960, Toma moved to Georgetown University and started collaborating with other machine translation experts.

The Georgetown machine translation project, as well as SYSTRAN's initial contract with the United States Air Force in 1969, were both devoted to translating Russian into English.

That same year, at Wright-Patterson Air Force Base, the company's first machine translation programs were tested.

SYSTRAN software was used by the National Aeronautics and Space Administration (NASA) as a translation help during the Apollo-Soyuz Test Project in 1974 and 1975.

Shortly after, the Commission of the European Communities awarded SYSTRAN a contract to provide automated translation services, and the company's systems were subsequently adopted throughout the European Commission (EC).

By the 1990s, the EC had seventeen different machine translation systems focused on different language pairs in use for internal communications.

In 1992, SYSTRAN began migrating its mainframe software to personal computers.

SYSTRAN Professional Premium for Windows was launched in 1995 by the company.

SYSTRAN continues to be the industry leader in machine translation.

Other notable systems include METEO, in use at the Canadian Meteorological Center in Montreal since 1977 to translate weather bulletins from English to French; ALPS, developed at Brigham Young University for Bible translation; SPANAM, the Pan American Health Organization's Spanish-to-English automatic translation system; and METAL, developed at the University of Texas at Austin.

In the late 1990s, machine translation became more readily accessible to the general public through web browsers.

Babel Fish, a web-based application created by a group of researchers at Digital Equipment Corporation (DEC) using SYSTRAN machine translation technology, was one of the earliest online language translation services.

Thirty-six translation pairs between thirteen languages were supported by the technology.

Babel Fish began as an AltaVista web search engine tool before being sold to Yahoo! and then Microsoft.

The majority of online translation services still use rule-based and statistical machine translation.

Around 2016, SYSTRAN, Microsoft Translator, and Google Translate made the switch to neural machine translation.

Google Translate supports 103 languages.

Neural machine translation uses predictive deep learning algorithms and artificial neural networks (connectionist systems modeled after biological brains).

Machine translation based on neural networks is achieved in two steps.

The translation engine models its interpretation in the first phase based on the context of each source word within the entire sentence.

The artificial neural network then translates the entire word model into the target language in the second phase.

Simply put, the engine predicts the probability of word sequences and combinations within whole sentences, resulting in a fully integrated translation model.

The underlying algorithms use statistical models to learn language rules.
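The idea of scoring whole word sequences by probability can be illustrated with a toy stand-in for the learned model: below, a hand-counted bigram model built from a three-sentence invented corpus scores candidate word orderings, and the engine keeps the most probable one. A genuine neural system estimates these probabilities with a deep network trained on millions of sentence pairs rather than raw counts.

```python
from collections import Counter

# Tiny invented "training corpus" for the target language.
corpus = [
    "<s> the box was in the pen </s>",
    "<s> the pen was on the box </s>",
    "<s> the box was very small </s>",
]

# Count bigrams to estimate P(next word | previous word).
bigrams = Counter()
unigrams = Counter()
for line in corpus:
    words = line.split()
    unigrams.update(words[:-1])
    bigrams.update(zip(words, words[1:]))

def sequence_prob(sentence):
    """Probability of a word sequence under the bigram model."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for a, b in zip(words, words[1:]):
        p *= bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0
    return p

# The engine keeps the candidate word ordering with the higher probability.
candidates = ["the box was in the pen", "box the was pen in the"]
best = max(candidates, key=sequence_prob)
print(best)  # -> "the box was in the pen"
```

The scrambled candidate contains word pairs never seen in training, so its probability collapses to zero, while the fluent ordering scores highest; scaled up, this is how sequence models prefer fluent translations.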

The Harvard SEAS natural language processing group, in collaboration with SYSTRAN, has launched OpenNMT, an open-source neural machine translation system.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Cheng, Lili; Natural Language Processing and Speech Understanding.



Further Reading:


Arnold, Doug J., Lorna Balkan, R. Lee Humphreys, Seity Meijer, and Louisa Sadler. 1994. Machine Translation: An Introductory Guide. Manchester and Oxford: NCC Blackwell.

Bar-Hillel, Yehoshua. 1960. “The Present Status of Automatic Translation of Languages.” Advances in Computers 1: 91–163.

Garvin, Paul L. 1967. “Machine Translation: Fact or Fancy?” Datamation 13, no. 4: 29–31.

Hutchins, W. John, ed. 2000. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. Philadelphia: John Benjamins.

Locke, William Nash, and Andrew Donald Booth, eds. 1955. Machine Translation of Languages. New York: Wiley.

Yngve, Victor H. 1964. “Implications of Mechanical Translation Research.” Proceedings of the American Philosophical Society 108 (August): 275–81.



Artificial Intelligence - What Is The Mac Hack IV Program?

 




Mac Hack IV, a 1967 chess program written by Richard Greenblatt, gained fame as the first computer chess program to compete in a chess tournament and to play credibly against humans, achieving a USCF rating of 1,400 to 1,500.

Greenblatt's software, written in the macro assembly language MIDAS, operated on a DEC PDP-6 computer with a clock speed of 200 kilohertz.

While a graduate student at MIT's Artificial Intelligence Laboratory, he built the software as part of Project MAC.

"Chess is the drosophila [fruit fly] of artificial intelligence," Russian mathematician Alexander Kronrod famously remarked, naming the field's chosen experimental organism (quoted in McCarthy 1990, 227).



Creating a champion chess program has been a cherished goal of artificial intelligence since 1950, when Claude Shannon first described chess play as a task for computer programmers.

Chess and games in general involve difficult but well-defined issues with well-defined rules and objectives.

Chess has long been seen as a prime illustration of human-like intelligence.

Chess is a well-defined example of human decision-making in which movements must be chosen with a specific purpose in mind, with limited knowledge and uncertainty about the result.

The processing power of computers in the mid-1960s severely restricted the depth to which a chess move and its alternative responses could be analyzed, since the number of possible configurations grows exponentially with each successive reply.

The greatest human players have been shown to examine a small number of moves in great depth rather than a large number of moves shallowly.

Greenblatt aimed to recreate the methods used by good players to locate significant game tree branches.

He created Mac Hack to reduce the number of nodes analyzed while choosing moves by using a minimax search of the game tree along with alpha-beta pruning and heuristic components.

In this regard, Mac Hack's style of play was more human-like than that of later chess computers (such as Deep Thought and Deep Blue), which use the sheer force of high processing speeds to examine tens of millions of branches of the game tree before making moves.
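The search technique named above, minimax with alpha-beta pruning, can be sketched on a toy game tree; the heuristic move ordering and chess-specific evaluation that Mac Hack layered on top are omitted here.

```python
INF = float("inf")

def alphabeta(node, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning over a toy game tree.

    A node is either a numeric leaf (a static evaluation of the position)
    or a list of child positions. A real chess program instead stops at a
    fixed depth and applies a heuristic evaluation function.
    """
    if not isinstance(node, list):      # leaf: return its evaluation score
        return node
    if maximizing:
        value = -INF
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # beta cutoff: opponent avoids this line
                break                   # prune the remaining siblings
        return value
    else:
        value = INF
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:           # alpha cutoff
                break
        return value

# Maximizer to move at the root; leaves are position scores.
tree = [[3, 5], [2, [9, 1]], [0, -1]]
best = alphabeta(tree, -INF, INF, True)
print(best)  # -> 3
```

The cutoffs let the search skip branches that cannot affect the final choice, which is why a program using them examines far fewer nodes than a brute-force minimax of the same depth.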

In a contest hosted by MIT mathematician Seymour Papert in 1967, Mac Hack defeated MIT philosopher Hubert Dreyfus and gained substantial renown among artificial intelligence researchers.

The RAND Corporation published a mimeographed version of Dreyfus's paper, Alchemy and Artificial Intelligence, in 1965, which criticized artificial intelligence researchers' claims and aspirations.

Dreyfus claimed that no computer could ever acquire intelligence since human reason and intelligence are not totally rule-bound, and hence a computer's data processing could not imitate or represent human cognition.

In a part of the paper titled "Signs of Stagnation," Dreyfus highlighted attempts to construct chess-playing computers, among his many critiques of AI.

Mac Hack's victory against Dreyfus was first seen as vindication by the AI community.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Alchemy and Artificial Intelligence; Deep Blue.



Further Reading:



Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Greenblatt, Richard D., Donald E. Eastlake III, and Stephen D. Crocker. 1967. “The Greenblatt Chess Program.” In AFIPS ’67: Proceedings of the November 14–16, 1967, Fall Joint Computer Conference, 801–10. Washington, DC: Thomson Book Company.

Marsland, T. Anthony. 1990. “A Short History of Computer Chess.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 3–7. New York: Springer-Verlag.

McCarthy, John. 1990. “Chess as the Drosophila of AI.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 227–37. New York: Springer-Verlag.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.




Artificial Intelligence - Who Is Ray Kurzweil (1948–)?




Ray Kurzweil is a futurist and inventor from the United States.

He spent the first half of his career developing the first CCD flat-bed scanner, the first omni-font optical character recognition device, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed, large-vocabulary speech recognition machine.

He has earned several awards for his contributions to technology, including the Technical Grammy Award in 2015 and the National Medal of Technology.

Kurzweil is the cofounder and chancellor of Singularity University, as well as the director of engineering at Google, where he leads a team that works on artificial intelligence and natural language processing.

Singularity University is a non-accredited graduate school founded on the premise of tackling grand challenges such as renewable energy and space travel by developing a deep understanding of the opportunities presented by the current acceleration of technological progress.

The university, which is headquartered in Silicon Valley, has evolved to include one hundred chapters in fifty-five countries, delivering seminars, educational programs, and business acceleration programs.

While at Google, Kurzweil published the book How to Create a Mind (2012).

In his Pattern Recognition Theory of Mind, he claims that the neocortex is a hierarchical system of pattern recognizers.

Kurzweil claims that replicating this design in machines might lead to the creation of artificial superintelligence.

He believes that by doing so, he will be able to bring natural language comprehension to Google.

Kurzweil's popularity stems from his work as a futurist.

Futurists are those who specialize in or are interested in the near-to-long-term future and associated topics.

They use well-established methodologies like scenario planning to carefully examine forecasts and construct future possibilities.

Kurzweil is the author of five national best-selling books, including The Singularity Is Near (2005), a New York Times best-seller.

He has an extensive list of forecasts.

In his first book, The Age of Intelligent Machines (1990), Kurzweil predicted the enormous growth of international internet usage in the second half of the 1990s.

In his second highly influential book, The Age of Spiritual Machines (where "spiritual" stands for "aware"), published in 1999, he correctly predicted that computers would soon exceed humans at making the best investment decisions.

In the same book, Kurzweil prophesied that computers would one day "appear to have their own free will" and perhaps even have "spiritual experiences" (Kurzweil 1999, 6).

Barriers between humans and machines will dissolve to the point that humans will essentially live forever as combined human-machine hybrids.

Scientists and philosophers have criticized Kurzweil's forecast of a sentient computer, arguing that consciousness cannot be produced by computation alone.

Kurzweil tackles the phenomenon of the technological singularity in his third book, The Singularity Is Near.

John von Neumann, a famous mathematician, introduced the term singularity in this context.

In a 1950s chat with his colleague Stanislaw Ulam, von Neumann proposed that the ever-accelerating speed of technological progress "appears to be reaching some essential singularity in the history of the race beyond which human activities as we know them could not continue" (Ulam 1958, 5).

To put it another way, technological development would alter the course of human history.

Vernor Vinge, a computer scientist, math professor, and science fiction writer, revived the term in 1993 in his article "The Coming Technological Singularity." In Vinge's article, technological progress is defined more specifically as growth in computing power.

Vinge investigates the idea of a self-improving artificial intelligence agent.

According to this theory, the artificial intelligent agent continues to update itself and grow technologically at an unfathomable pace, eventually resulting in the birth of a superintelligence—that is, an artificial intelligence that far exceeds all human intelligence.

In Vinge's apocalyptic vision, robots first become autonomous, then superintelligent, to the point where humans lose control of technology and machines seize control of their own fate.

Machines will rule the planet because they are more intelligent than humans.

According to Vinge, the Singularity is the end of the human age.

Kurzweil presents an anti-dystopic Singularity perspective.

Kurzweil's core premise is that humans can develop something smarter than themselves; in fact, exponential advances in computer power make the creation of an intelligent machine all but inevitable, to the point that the machine will surpass humans in intelligence.

Kurzweil believes that machine intelligence and human intellect will converge at this moment.

The subtitle of The Singularity Is Near is When Humans Transcend Biology, which is no coincidence.

Kurzweil's overarching vision is based on discontinuity: no lesson from the past, or even the present, can aid humans in determining the way to the future.

This also explains why new types of education, such as Singularity University, are required.

Every sentimental look back to history, every memory of the past, renders humans more susceptible to technological change.

With the arrival of a new superintelligent, almost immortal race, history as a human construct will soon come to an end.

These immortals are posthumans, the next phase in human development.

Kurzweil believes that posthumanity will be made up of sentient robots rather than people with mechanical bodies.

He claims that the future should be formed on the assumption that mankind is in the midst of an extraordinary period of technological advancement.

The Singularity, he believes, would elevate humanity beyond its wildest dreams.

While Kurzweil claims that artificial intelligence already outpaces human intellect at certain tasks, he acknowledges that the moment of superintelligence, also known as the Technological Singularity, has not yet arrived.

He believes that individuals who embrace the new age of human-machine synthesis and dare to go beyond evolution's boundaries will view humanity's future as positive. 




Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


General and Narrow AI; Superintelligence; Technological Singularity.



Further Reading:




Kurzweil, Ray. 1990. The Age of Intelligent Machines. Cambridge, MA: MIT Press.

Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Penguin.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Mathematical Society 64, no. 3, pt. 2 (May): 1–49.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.



 

Artificial Intelligence - Who Is Heather Knight?




Heather Knight is a robotics and artificial intelligence specialist best recognized for her work in the entertainment industry.

Her Collaborative Humans and Robots: Interaction, Sociability, Machine Learning, and Art (CHARISMA) Research Lab at Oregon State University aims to apply performing arts techniques to robots.

Knight identifies herself as a social roboticist, a person who develops non-anthropomorphic—and sometimes nonverbal—machines that interact with people.

She makes robots that act in ways that are modeled after human interpersonal communication.

These behaviors include speaking styles, greeting movements, open attitudes, and a variety of other context indicators that assist humans in establishing rapport with robots in ordinary life.

In the CHARISMA Lab, Knight examines social and political policies relating to robotics and works with social robots and so-called charismatic machines.

The Marilyn Monrobot interactive robot theatre company was founded by Knight.

The Robot Film Festival provides a venue for roboticists to demonstrate their latest inventions in a live setting, as well as films that are relevant to the evolving state of the art in robotics and robot-human interaction.

The Marilyn Monrobot firm arose from Knight's involvement with the Syyn Labs creative collective and her observations of Guy Hoffman, Director of the MIT Media Innovation Lab, on robots built for performance reasons.

Knight's production firm specializes in robot comedy.

Knight claims that theatrical spaces are ideal for social robotics research because they not only encourage playfulness—requiring robot actors to express themselves and interact—but also include creative constraints in which robots thrive, such as a fixed stage, trial-and-error learning, and repeat performances (with manipulated variations).

The usage of robots in entertainment situations, according to Knight, is beneficial since it increases human culture, imagination, and creativity.

At the TEDWomen conference in 2010, Knight debuted Data, a stand-up comedy robot.

Data is a Nao robot created by Aldebaran Robotics (now part of SoftBank Group).

Data performs a stand-up routine (with roughly 200 pre-programmed jokes) while gathering input from the audience and fine-tuning its act in real time.

The robot was created at Carnegie Mellon University by Scott Satkin and Varun Ramakrisha.

Knight is presently collaborating with Ginger the Robot on a comedic project.

The development of algorithms for artificial social intelligence is also fueled by robot entertainment.

In other words, art is utilized to motivate the development of new technologies.

To evaluate audience responses, Data and Ginger use a microphone and a machine learning system to recognize the sounds audiences make (laughter, chatter, clapping, etc.).

After each joke, the audience is given green and red cards to hold up.

Green cards indicate to the robots that the audience enjoys the joke.

Red cards are given out when jokes fall flat.
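A minimal sketch of how this kind of audience-driven tuning could work is shown below, assuming a simple running-score scheme with occasional exploration; the class and method names are hypothetical illustrations, not Data's actual software.

```python
import random

class JokeTuner:
    """Toy sketch of audience-feedback joke selection (hypothetical)."""

    def __init__(self, jokes, epsilon=0.1):
        self.jokes = list(jokes)
        self.scores = {j: 0.0 for j in self.jokes}  # running average reward per joke
        self.counts = {j: 0 for j in self.jokes}
        self.epsilon = epsilon                      # chance of trying a random joke

    def next_joke(self):
        # Mostly tell the best-rated joke, occasionally explore an untested one.
        if random.random() < self.epsilon:
            return random.choice(self.jokes)
        return max(self.jokes, key=lambda j: self.scores[j])

    def record_feedback(self, joke, green_cards, red_cards):
        # Green cards raise a joke's running average; red cards lower it.
        reward = green_cards - red_cards
        self.counts[joke] += 1
        n = self.counts[joke]
        self.scores[joke] += (reward - self.scores[joke]) / n
```

With a routine of 200 jokes, repeatedly calling `next_joke()` and `record_feedback()` over a performance would gradually steer the act toward material the audience rewards.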

Knight has discovered that excellent robot humor doesn't have to disguise the fact that it's about a robot.

Rather, Data makes people laugh by drawing attention to its machine-specific issues and making self-deprecating remarks about its limits.

In order to create expressive, captivating robots, Knight has found improvisational acting and dancing skills to be quite useful.

In the process, she has changed the original Robotic Paradigm's technique of Sense-Plan-Act, preferring Sensing-Character-Enactment, which is more similar to the procedure utilized in theatrical performance in practice.

Knight is presently experimenting with ChairBots, which are hybrid robots made by gluing IKEA wooden chairs to Neato Botvacs (a brand of intelligent robotic vacuum cleaner).

The ChairBots are being tested in public places to see how a basic robot might persuade people to get out of the way using just rudimentary gestures as a mode of communication.

They've also been used to persuade prospective café customers to come in, locate a seat, and settle down.

Knight collaborated on the synthetic organic robot art piece Public Anemone for the SIGGRAPH computer graphics conference while pursuing degrees at the MIT Media Lab with Personal Robots group head Professor Cynthia Breazeal.

The installation consisted of a fiberglass cave filled with glowing creatures that moved and responded to music and people.

The cave's centerpiece robot, also known as "Public Anemone," swayed and interacted with visitors, bathed in a waterfall, watered a plant, and interacted with other cave attractions.

Knight collaborated with animatronics designer Dan Stiehl to create capacitive sensor-equipped artificial tube worms.

The tubeworm's fiberoptic tentacles drew into their tubes and changed color when a human observer reached into the cave, as though prompted by protective impulses.

The team behind Public Anemone described the installation as "a step toward fully embodied robot theatrical performance" and "an example of intelligent staging." Knight also helped with the mechanical design of the "Cyberflora" kinetic robot flower garden displayed at the Smithsonian/Cooper-Hewitt Design Museum in 2003.

Her master's thesis at MIT focused on the Sensate Bear, a huggable robot teddy bear with full-body capacitive touch sensors that she used to investigate real-time algorithms incorporating social touch and nonverbal communication.

In 2016, Knight received her PhD from Carnegie Mellon University.

Her dissertation focused on expressive motion in robots with a reduced degree of freedom.

Humans do not require robots to closely resemble humans in appearance or behavior to be treated as close associates, according to Knight's research.

Humans, on the other hand, are quick to anthropomorphize robots and offer them autonomy.

Indeed, she claims, when robots become more human-like in appearance, people may feel uneasy or anticipate a far higher level of humanlike conduct.

Professor Matt Mason of the School of Computer Science and Robotics Institute advised Knight.

She was formerly a robot artist in residence at X, the research lab of Google's parent company, Alphabet.

Knight has previously worked with Aldebaran Robotics and NASA's Jet Propulsion Laboratory as a research scientist and engineer.

While working as an engineer at Aldebaran Robotics, Knight created the touch sensing panel for the Nao autonomous family companion robot, as well as the infrared detection and emission capabilities in its eyes.

Her work with Syyn Labs on the opening two minutes of the OK Go video "This Too Shall Pass," which features a Rube Goldberg machine, won a UK Music Video Award.

She is now assisting Clearpath Robotics in making its self-driving, mobile-transport robots more socially conscious. 





Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


RoboThespian; Turkle, Sherry.


Further Reading:



Biever, Celeste. 2010. “Wherefore Art Thou, Robot?” New Scientist 208, no. 2792: 50–52.

Breazeal, Cynthia, Andrew Brooks, Jesse Gray, Matt Hancher, Cory Kidd, John McBean, Dan Stiehl, and Joshua Strickon. 2003. “Interactive Robot Theatre.” Communications of the ACM 46, no. 7: 76–84.

Knight, Heather. 2013. “Social Robots: Our Charismatic Friends in an Automated Future.” Wired UK, April 2, 2013. https://www.wired.co.uk/article/the-inventor.

Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy through Good Design. Washington, DC: Brookings Institute, Center for Technology Innovation.



Artificial Intelligence - Who Is Hiroshi Ishiguro (1963–)?

  


Hiroshi Ishiguro is a well-known engineer who is most known for his life-like humanoid robots.

He thinks that the present information culture will eventually develop into a world populated by robot caregivers or helpmates.

Ishiguro also expects that studying artificial people would help us better understand how humans are conditioned to read and comprehend the actions and expressions of their own species.

Ishiguro seeks to explain concepts like relationship authenticity, autonomy, creativity, imitation, reciprocity, and robot ethics in terms of cognitive science.

Ishiguro's study aims to produce robots that are uncannily identical to humans in look and behavior.

He thinks that his robots will assist us in comprehending what it is to be human.

Sonzaikan is the Japanese name for this sense of a human's substantial presence, or spirit.

Success, according to Ishiguro, may be measured and evaluated in two ways.

The first is what he refers to as the complete Turing Test, in which an android passes if 70% of human spectators are unaware that they are seeing a robot until at least two seconds have passed.

The second metric for success, he claims, is the length of time a human stays actively engaged with a robot before discovering that the robot's cooperative eye tracking does not reflect true thinking.

Robovie was one of Ishiguro's earliest robots, launched in 2000.

Ishiguro intended to make a robot that didn't appear like a machine or a pet, but might be mistaken for a friend in everyday life.

Robovie may not seem to be human, but it can perform a variety of innovative human-like motions and interactive activities.

Eye contact, staring at items, pointing at things, nodding, swinging and folding arms, shaking hands, and saying hello and goodbye are all possible with Robovie.

Robovie was extensively featured in Japanese media, and Ishiguro became convinced that the robot's appearance, engagement, and conversation were vital to deeper, more nuanced connections between robots and humans.

In 2003, Ishiguro debuted Actroid to the general public for the first time.

Actroid is manufactured by Kokoro, Sanrio's animatronics division, and is an autonomous robot controlled by AI software developed at Osaka University's Intelligent Robotics Laboratory.

Actroid has a feminine look (in science fiction terms, a "gynoid") with skin constructed of incredibly realistic silicone.

Internal sensors and quiet air actuators at 47 points of physical articulation allow the robot to replicate human movement, breathing, and blinking, and it can even speak.

Movement is driven by sensor processing and by data files carrying key values for the degrees of freedom of limb and joint movements.

Five to seven degrees of freedom are typical for robot arms.

Arms, legs, torso, and neck of humanoid robots may have thirty or more degrees of freedom.

Programmers create Actroid scenarios in four steps: (1) collect recognition data from sensors activated by contact, (2) choose a motion module, (3) execute a specified series of movements and play an audio file, and (4) return to step 1.

Experiments using random or contingent reactions to human context cues have proven helpful in holding the human subject's attention, but they become much more effective when planned scenarios are included.

Motion modules are written in XML, a text-based markup language that is simple enough for even inexperienced programmers to understand.
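The four-step scenario loop described above can be sketched as a simple dispatch table. The module contents, stimulus names, and sensor stub below are hypothetical illustrations of the idea, not Kokoro's actual module format.

```python
# Hypothetical motion modules, loosely mirroring the XML format described
# above: each maps a recognized stimulus to a movement sequence and an
# optional audio file.
MOTION_MODULES = {
    "touch_hand": {"movements": ["turn_head", "smile"], "audio": "greeting.wav"},
    "no_touch":   {"movements": ["blink", "breathe"],   "audio": None},
}

def read_sensors():
    """Step 1: collect recognition data from touch sensors (stubbed here)."""
    return "no_touch"  # a real controller would poll the contact sensors

def run_scenario(steps):
    """Run the four-step loop for a fixed number of passes, logging actions."""
    log = []
    for _ in range(steps):
        stimulus = read_sensors()           # step 1: sense
        module = MOTION_MODULES[stimulus]   # step 2: choose a motion module
        log.extend(module["movements"])     # step 3: execute the movement series...
        if module["audio"]:                 # ...and play the associated audio file
            log.append(module["audio"])
    return log                              # step 4: loop back to sensing
```

Calling `run_scenario(1)` with the stub sensor produces the idle behaviors `["blink", "breathe"]`; swapping the stub for real sensor input would trigger the greeting module instead.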

Ishiguro debuted Repliee variants of the Actroid in 2005, which were meant to be indistinguishable from a human female at first glance.

Repliee Q1Expo is an android replica of Ayako Fujii, a real Japanese newscaster.

Repliee androids are interactive; they can use voice recognition software to comprehend human conversations, answer verbally, maintain eye contact, and react quickly to human touch.

This is made possible by a sensor network made up of infrared motion detectors, cameras, microphones, identification tag readers, and floor sensors that is distributed and ubiquitous.

Artificial intelligence is used by the robot to assess whether the human is contacting the robot gently or aggressively.

Ishiguro also debuted Repliee R1, a kid version of the robot that looks identical to his then four-year-old daughter.

Actroids have recently been proven to be capable of imitating human limb and joint movement by observing and duplicating the movements.

Because much of the computer hardware that runs the artificial intelligence software is external to the robot, it is not capable of moving about.

Self-reports of human volunteers' sentiments and moods are captured when robots perform activities in research done at Ishiguro's lab.

The Actroid elicits a wide spectrum of emotions, from curiosity to disgust, acceptance to terror.

Ishiguro's research colleagues have also benefited from real-time neuroimaging of human volunteers in order to better understand how human brains are stimulated in human-android interactions.

As a result, Actroid serves as a testbed for determining why particular nonhuman agent acts fail to elicit the required cognitive reactions in humans.

The Geminoid robots were created in response to the fact that artificial intelligence lags far behind robotics when it comes to developing realistic interactions between humans and androids.

Ishiguro, in particular, admitted that it would be several years before a computer could have a lengthy, intensive spoken discussion with a person.

The Geminoid HI-1, which debuted in 2006, is a teleoperated (rather than totally autonomous) robot that looks similar to Ishiguro.

The name "geminoid" is derived from the Latin word for "twin." Hand fidgeting, blinking, and movements associated with human respiration are all possible for Geminoid.

Motion-capture technology is used to operate the android, which mimics Ishiguro's face and body motions.

The robot can imitate its creator's voice and communicate in a human-like manner.

Ishiguro plans to utilize the robot to teach students through remote telepresence one day.

When he is teleoperating the robot, he has observed that the sensation of immersion is so strong that his brain is fooled into producing phantom perceptions of actual contact when the android is poked.

The Geminoid-DK is a mechanical doppelgänger of Danish psychology professor Henrik Schärfe, launched in 2011.

While some viewers find the Geminoid's look unsettling, many others do not and simply communicate with the robot in a normal way.

In 2010, the Telenoid R1 was introduced as a teleoperated android robot.

Telenoid is 30 inches tall and amorphous, with just a passing resemblance to a human form.

The robot's objective is to transmit a human voice and gestures to a spectator who may use it as a communication or videoconferencing tool.

The Telenoid, like the other robots in Ishiguro's lab, looks to be alive: it simulates breathing and speech gestures and blinks.

However, in order to stimulate the imagination, the design limits the number of features.

In this manner, the Telenoid is analogous to a tangible, real-world avatar.

Its goal is to make more intimate, human-like interactions possible using telecommunications technology.

Ishiguro suggests that the robot might one day serve as a suitable stand-in for a teacher or partner who is otherwise only accessible from afar.

The Elfoid, a tiny version of the robot, can be grasped with one hand and carried in a pocket.

The autonomous persocom dolls that replace smart phones and other electronics in the immensely famous manga series Chobits foreshadowed the Actroid and Telenoid.

Ishiguro is a professor of systems innovation and the director of Osaka University's Intelligent Robotics Laboratory.

He's also a group leader at Kansai Science City's Advanced Telecommunications Research Institute (ATR) and a cofounder of the tech-transfer startup Vstone Ltd.

He thinks that future commercial enterprises will profit from the success of teleoperated robots in order to fund the continued development of his autonomous robots.

Erica, a humanoid robot that became a Japanese television news presenter in 2018, is his most recent creation.

Ishiguro studied oil painting extensively as a young man, pondering how to depict human resemblance on canvas while he worked.

In Hanao Mori's computer science lab at Yamanashi University, he became enthralled with robots.

At Osaka University, Ishiguro pursued his PhD in engineering under computer vision and image recognition pioneer Saburo Tsuji.

In studies done in Tsuji's lab, he worked on mobile robots capable of SLAM (simultaneous localization and mapping) using panoramic and omnidirectional video cameras.

This work led to his doctoral dissertation, which focused on tracking a human subject using active camera control and panning to acquire complete 360-degree views of the surroundings.

Ishiguro believed that his technology and applications may be utilized to provide a meaningful internal map of an interacting robot's surroundings.

An early article based on his dissertation was rejected by its first reviewer.

Fine arts and technology, according to Ishiguro, are inexorably linked; art inspires new technologies, while technology enables for the creation and duplication of art.

Ishiguro has recently brought his robots to Seinendan, a theatre company founded by Oriza Hirata, in order to put what he's learned about human-robot communication into practice.

Ishiguro's field of cognitive science and AI, which he calls android science, has precedents in Disneyland's "Great Moments with Mr. Lincoln" animatronics show and the fictional robot surrogates depicted in the Bruce Willis film Surrogates (2009).

In the Willis film, Ishiguro has a cameo appearance.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Caregiver Robots; Nonhuman Rights and Personhood.



Further Reading:



Guizzo, Erico. 2010. “The Man Who Made a Copy of Himself.” IEEE Spectrum 47, no. 4 (April): 44–56.

Ishiguro, Hiroshi, and Fabio Dalla Libera, eds. 2018. Geminoid Studies: Science and Technologies for Humanlike Teleoperated Androids. New York: Springer.

Ishiguro, Hiroshi, and Shuichi Nishio. 2007. “Building Artificial Humans to Understand Humans.” Journal of Artificial Organs 10, no. 3: 133–42.

Ishiguro, Hiroshi, Tetsuo Ono, Michita Imai, Takeshi Maeda, Takayuki Kanda, and Ryohei Nakatsu. 2001. “Robovie: An Interactive Humanoid Robot.” International Journal of Industrial Robotics 28, no. 6: 498–503.

Kahn, Peter H., Jr., Hiroshi Ishiguro, Batya Friedman, Takayuki Kanda, Nathan G. Freier, Rachel L. Severson, and Jessica Miller. 2007. “What Is a Human? Toward Psychological Benchmarks in the Field of Human–Robot Interaction.” Interaction Studies 8, no. 3: 363–90.

MacDorman, Karl F., and Hiroshi Ishiguro. 2006. “The Uncanny Advantage of Using Androids in Cognitive and Social Science Research.” Interaction Studies 7, no. 3: 297–337.

Nishio, Shuichi, Hiroshi Ishiguro, and Norihiro Hagita. 2007a. “Can a Teleoperated Android Represent Personal Presence? A Case Study with Children.” Psychologia 50: 330–42.

Nishio, Shuichi, Hiroshi Ishiguro, and Norihiro Hagita. 2007b. “Geminoid: Teleoperated Android of an Existing Person.” In Humanoid Robots: New Developments, edited by Armando Carlos de Pina Filho, 343–52. Vienna, Austria: I-Tech.





