
Artificial Intelligence - Who Is Ben Goertzel (1966–)?


Ben Goertzel is the founder and CEO of SingularityNET, a blockchain AI company, as well as the chairman of Novamente LLC, a research professor at Xiamen University's Fujian Key Lab for Brain-Like Intelligent Systems, the chief scientist of Mozi Health and of Hanson Robotics in Shenzhen, China, and the chair of the OpenCog Foundation, Humanity+, and the Artificial General Intelligence Society's conference series.

Goertzel has long sought to create a beneficial artificial general intelligence and to apply it in bioinformatics, finance, gaming, and robotics.

He claims that AI, even in its current form, is already superior to human specialists in a number of domains.

Goertzel divides AI advancement into three stages, each of which represents a step toward a global brain (Goertzel 2002, 2), among them the intelligent Internet and the full-fledged Singularity.

Goertzel presented a lecture titled "Decentralized AI: The Power and the Necessity" at TEDxBerkeley in 2019.

He examines artificial intelligence in its present form as well as its future in this discussion.

"The relevance of decentralized control in leading AI to the next stages, the strength of decentralized AI," he emphasizes (Goertzel 2019a).

In the evolution of artificial intelligence, Goertzel distinguishes three types: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Artificial narrow intelligence refers to machines that can "address extremely specific issues... better than humans" (Goertzel 2019a).

In certain restricted activities, such as chess and Go, this kind of AI has outperformed a human.

Ray Kurzweil, an American futurologist and inventor, coined the phrase "narrow AI." Artificial general intelligence (AGI) refers to intelligent computers that can "generate knowledge" in a variety of fields and that have "humanlike autonomy." According to Goertzel, this kind of AI will reach the same level of intellect as humans by 2029.

Artificial superintelligence (ASI) builds on both narrow and general AI, but it can also reprogram itself.



By 2045, he claims, this kind of AI will be smarter than the finest human brains in terms of "scientific innovation, general knowledge, and social abilities" (Goertzel 2019a).

According to Goertzel, Facebook, Google, and a number of colleges and companies are all actively working on AGI.

According to Goertzel, the shift from AI to AGI will occur within the next five to thirty years.

Goertzel is also interested in artificial intelligence-assisted life extension.

He thinks that artificial intelligence's exponential advancement will lead to technologies that extend human life span and health indefinitely.

He predicts that by 2045, a singularity featuring a drastic increase in "human health span" will have occurred (Goertzel 2012).

Vernor Vinge popularized the term "singularity" in his 1993 article "The Coming Technological Singularity," and Ray Kurzweil brought it to a mainstream audience in his 2005 book The Singularity Is Near.

The Technological Singularity, according to both writers, is the merging of machine and human intellect as a result of a fast development in new technologies, particularly robots and AI.

The thought of an impending singularity excites Goertzel.

His major current initiative is SingularityNET, which entails building a worldwide network of artificial intelligence researchers interested in developing, sharing, and monetizing AI technology, software, and services.

By developing a decentralized protocol that enables a full stack AI solution, Goertzel has made a significant contribution to this endeavor.

SingularityNET, as a decentralized marketplace, provides a variety of AI technologies, including text generation, AI Opinion, iAnswer, Emotion Recognition, Market Trends, OpenCog Pattern Miner, and its own cryptocurrency, AGI token.
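As a very rough illustration of what consuming such a marketplace service might look like, the sketch below posts a text snippet to a hypothetical hosted emotion-recognition endpoint. The endpoint URL, payload fields, and response format are illustrative assumptions, not SingularityNET's actual API.

```python
import json
import urllib.request

# Hypothetical endpoint; a real marketplace service would have its own
# address, schema, and authentication, so this call will not succeed as-is.
ENDPOINT = "https://example-marketplace.invalid/emotion-recognition"

def call_emotion_service(text: str) -> dict:
    """Send text to a (hypothetical) hosted emotion-recognition service."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    print(call_emotion_service("I can't believe the delivery is late again."))
```

In the real marketplace, service calls would be metered and settled in the AGI token mentioned above rather than sent as unauthenticated HTTP requests.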

SingularityNET is presently cooperating with Domino's Pizza in Malaysia and Singapore (Khan 2019).



Domino's is interested in leveraging SingularityNET technologies to design a marketing plan, with the goal of providing the finest products and services to its consumers via the use of unique algorithms.

Domino's thinks that by incorporating the AGI ecosystem into their operations, they will be able to provide value and service in the food delivery market.

Goertzel has responded to physicist Stephen Hawking's warning that AI might lead to the extinction of human civilization.

On the current trajectory, he argues, an artificial superintelligence's mindset will be shaped by earlier generations of AI, so that "selling, spying, murdering, and gambling are the key aims and values in the mind of the first super intelligence" (Goertzel 2019b).

He acknowledges that if humans desire compassionate AI, they must first improve their own treatment of one another.

For four years, Goertzel worked for Hanson Robotics in Hong Kong.

There he worked on three well-known robots: Sophia, Einstein, and Han.

"Great platforms for experimenting with AI algorithms, including cognitive architectures like OpenCog that aim at human-level AI," he added of the robots (Goertzel 2018).

Goertzel argues that essential human values may be retained for future generations in Sophia-like robot creatures after the Technological Singularity.

Decentralized networks like SingularityNET and OpenCog, according to Goertzel, provide "AIs with human-like values," reducing AI hazards to humanity (Goertzel 2018).

Because human values are complex in nature, Goertzel believes that encoding them as a list of rules is ineffective.

Brain-computer interfacing (BCI) and emotional interfacing are the two approaches Goertzel proposes.

Humans will become "cyborgs," with their brains physically linked to computational-intelligence modules, and the machine components of these cyborgs will be able to read the moral-value-evaluation structures of the human mind directly from the biological components (Goertzel 2018).

Goertzel uses Elon Musk's Neuralink as an example.

Because it entails invasive trials with human brains and a lot of unknowns, Goertzel doubts that this strategy will succeed.

"Emotional and spiritual connections between people and AIs, rather than Ethernet cables or Wifi signals, are used to link human and AI brains," according to the second method (Goertzel 2018).

To practice human values, he proposes that AIs engage in emotional and social connection with humans via facial expression detection and mirroring, eye contact, and voice-based emotion recognition.
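As a loose sketch of what such an emotional-interfacing loop might involve (not any actual Hanson Robotics or OpenCog code), the example below mirrors a detected facial expression and adapts a spoken reply to a detected vocal emotion; the two detection functions are placeholders for real perception models.

```python
# Minimal sketch of an emotion-mirroring interaction loop.
# detect_facial_expression() and detect_vocal_emotion() stand in for
# real perception models; they are placeholders, not a real robot API.

def detect_facial_expression(camera_frame) -> str:
    return "smile"          # placeholder perception result

def detect_vocal_emotion(audio_clip) -> str:
    return "frustration"    # placeholder perception result

def respond(camera_frame, audio_clip) -> dict:
    """Mirror the user's expression and soften the reply if they sound upset."""
    expression = detect_facial_expression(camera_frame)
    emotion = detect_vocal_emotion(audio_clip)
    reply = "Tell me more about that."
    if emotion == "frustration":
        reply = "That sounds frustrating. I'm listening."
    return {"mirrored_expression": expression, "spoken_reply": reply}

if __name__ == "__main__":
    print(respond(camera_frame=None, audio_clip=None))
```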

To that end, Goertzel collaborated with SingularityNET, Hanson AI, and Lia Inc on the "Loving AI" research project, which aims to help artificial intelligences converse and form intimate connections with humans.

A funny video of actor Will Smith on a date with Sophia the Robot is presently available on the Loving AI website.

Sophia can already make sixty facial expressions and understand human language and emotions, according to the video of the date.

When linked to a network like SingularityNET, humanoid robots like Sophia obtain "ethical insights and breakthroughs... via language," according to Goertzel (Goertzel 2018).

Then, through a shared internet "mindcloud," robots and AIs may share what they've learnt.

Goertzel is also the chair of the Artificial General Intelligence Society's Conference Series on Artificial General Intelligence, which has been conducted yearly since 2008.

The Journal of Artificial General Intelligence is a peer-reviewed open-access academic periodical published by the organization. Goertzel is the editor of the conference proceedings series.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 

General and Narrow AI; Superintelligence; Technological Singularity.


Further Reading:


Goertzel, Ben. 2002. Creating Internet Intelligence: Wild Computing, Distributed Digital Consciousness, and the Emerging Global Brain. New York: Springer.

Goertzel, Ben. 2012. “Radically Expanding the Human Health Span.” TEDxHKUST. https://www.youtube.com/watch?v=IMUbRPvcB54.

Goertzel, Ben. 2017. “Sophia and SingularityNET: Q&A.” H+ Magazine, November 5, 2017. https://hplusmagazine.com/2017/11/05/sophia-singularitynet-qa/.

Goertzel, Ben. 2018. “Emotionally Savvy Robots: Key to a Human-Friendly Singularity.” https://www.hansonrobotics.com/emotionally-savvy-robots-key-to-a-human-friendly-singularity/.

Goertzel, Ben. 2019a. “Decentralized AI: The Power and the Necessity.” TEDxBerkeley, March 9, 2019. https://www.youtube.com/watch?v=r4manxX5U-0.

Goertzel, Ben. 2019b. “Will Artificial Intelligence Kill Us?” July 31, 2019. https://www.youtube.com/watch?v=TDClKEORtko.

Goertzel, Ben, and Stephan Vladimir Bugaj. 2006. The Path to Posthumanity: 21st Century Technology and Its Radical Implications for Mind, Society, and Reality. Bethesda, MD: Academica Press.

Khan, Arif. 2019. “SingularityNET and Domino’s Pizza Announce a Strategic Partnership.” https://blog.singularitynet.io/singularitynet-and-dominos-pizza-announce-a-strategic-partnership-cbbe21f80fc7.

Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.





Artificial Intelligence - Emotion Recognition And Emotional Intelligence.





In 2019, a group of academics released a meta-analysis of studies examining whether a person's emotional state can be determined from their facial movements. 

They came to the conclusion that there is no evidence that emotional state can be predicted from expression, regardless of whether the assessment is made by a person or by technology. 


The coauthors noted, "[The facial configurations] in question are not 'fingerprints' or diagnostic displays that dependably and explicitly convey distinct emotional states independent of context, person, or culture. It's impossible to deduce pleasure from a grin, anger from a scowl, or grief from a frown with certainty."

Alan Cowen might dispute this statement. An ex-Google scientist, Cowen is the founder of Hume AI, a new research lab and "empathetic AI" firm now emerging from stealth. 


Hume claims to have created datasets and models that "react beneficially to [human] emotion signals," allowing clients ranging from huge tech firms to startups to recognize emotions based on a person's visual, vocal, and spoken expressions. 

"When I first entered the area of emotion science, the majority of researchers were focusing on a small number of posed emotional expressions in the lab. 

Cowen told, "I wanted to apply data science to study how individuals genuinely express emotion out in the world, spanning ethnicities and cultures." 

"I uncovered a new universe of nuanced and complicated emotional behaviors that no one had ever recorded before using new computational approaches, and I was quickly publishing in the top journals." That's when businesses started contacting me." 

Hume, which has ten employees and recently secured $5 million in funding, claims to train its emotion-recognition algorithms on "huge, experimentally-controlled, culturally varied" datasets from individuals throughout North America, Africa, Asia, and South America. 

Regardless of the data's representativeness, some experts doubt the premise that emotion-detecting algorithms have a scientific basis. 




"The kindest view I have is that there are some really well-intentioned folks who are naive enough that... the issue they're attempting to cure is caused by technology," 

~ Os Keyes, an AI ethics scientist at the University of Washington. 




"Their first offering raises severe ethical concerns... It's evident that they aren't addressing the topic as a problem to be addressed, interacting deeply with it, and contemplating the potential that they aren't the first to conceive of it." 

HireVue, Entropik Technology, Emteq, Neurodata Labs, Nielsen-owned Innerscope, Realeyes, and Eyeris are among the businesses in the developing "emotional AI" sector. 

Entropik says its technology can interpret emotions "through facial expressions, eye gazing, speech tone, and brainwaves"; it sells the technology to companies that want to track the effectiveness of their marketing campaigns. 

Neurodata created software that the Russian bank Rosbank uses to assess the emotional state of customers calling its support centers. 



Emotion AI is being funded by more than just startups. 


In 2016, Apple bought Emotient, a San Diego company that develops AI systems to read emotions from facial expressions. 

When Amazon's Alexa senses irritation in a user's voice, it apologizes and asks for clarification. 

Nuance, a speech recognition firm that Microsoft bought in April 2021, has shown off a device for automobiles that assesses driver emotions based on facial cues. 

In May, Swedish business Smart Eye bought Affectiva, an MIT Media Lab spin-off that claimed it could identify rage or dissatisfaction in speech in 1.2 seconds. 


According to Markets & Markets, the emotion AI market is expected to almost double in size from $19 billion in 2020 to $37.1 billion in 2026. 



Hundreds of millions of dollars have been invested in firms like Affectiva, Realeyes, and Hume by venture investors eager to get in on the ground floor. 


According to the Financial Times, emotion AI is being used by film companies such as Disney and 20th Century Fox to gauge audience reactions to new series and films. 

Meanwhile, marketing organizations have been putting the technology to the test for customers like Coca-Cola and Intel to examine how audiences react to commercials. 

The difficulty is that there are few – if any – universal indicators of emotion, which calls into doubt the accuracy of emotion AI. 

The bulk of emotion AI businesses are based on psychologist Paul Ekman's seven basic emotions (joy, sorrow, surprise, fear, anger, disgust, and contempt), which he introduced in the early 1970s. 

However, further study has validated the common sense assumption that individuals from diverse backgrounds express their emotions in quite different ways. 



Context, conditioning, relationality, and culture all have an impact on how individuals react to situations. 


For example, scowling, which is commonly linked with anger, has been observed to appear on the faces of angry people less than 30 percent of the time. 

In Malaysia, the supposedly universal facial expression for fear is instead read as a sign of threat or anger. 


  • Later, Ekman demonstrated that there are disparities in how American and Japanese students respond to violent films, with Japanese students displaying "a whole distinct set of emotions" if another person is present, especially an authority figure. 
  • Gender and racial biases in face analysis algorithms have been extensively documented and are caused by imbalances in the datasets used to train the algorithms. 



In general, an AI system that has been trained on photographs of lighter-skinned people may struggle with skin tones that are unfamiliar to it. 


This isn't the only kind of prejudice that exists. 

Retorio, an AI hiring tool, was found to respond differently to the same applicant depending on whether they wore glasses or a headscarf. 


  • Researchers from MIT, the Universitat Oberta de Catalunya in Barcelona, and the Universidad Autonoma de Madrid revealed in a 2020 study that algorithms may become biased toward specific facial expressions, such as smiling, lowering identification accuracy. 
  • Researchers from the University of Cambridge and the Middle East Technical University discovered that at least one of the public datasets often used to train emotion recognition systems was contaminated. 



The datasets used to train AI systems contain substantially more Caucasian faces than Asian or Black ones. 


  • Recent research has shown the repercussions: major vendors' emotion analysis programs assign more negative emotions to Black men's faces than to white men's faces. 
  • People with impairments, conditions such as autism, and people who speak different languages and dialects, such as African-American Vernacular English (AAVE), all have different voices. 
  • A native French speaker doing an English survey could hesitate or enunciate a word with considerable trepidation, which an AI system might misinterpret as an emotion signal. 



Despite the faults in the technology, some businesses and governments are eager to use emotion AI to make high-stakes judgments. 


Employers use it to assess prospective workers by giving them a score based on their empathy or emotional intelligence. 

It's being used in schools to track pupils' participation in class — and even when they're doing homework at home. 

Emotion AI has also been tried at border checkpoints in the United States, Hungary, Latvia, and Greece to detect "risk persons." 

To reduce prejudice, Hume claims that "randomized studies" are used to collect "a vast variety" of facial and voice expressions from "people from a wide range of backgrounds." 

According to Cowen, the company has gathered over 1.1 million images and videos of facial expressions from over 30,000 people in the United States, China, Venezuela, India, South Africa, and Ethiopia, as well as over 900,000 audio recordings of people voicing their emotions labeled with people's self-reported emotional experiences. 

Hume's dataset is smaller than Affectiva's, which claimed to be the biggest of its sort at the time, with expressions from over 10 million people in 87 countries. 

Cowen, on the other hand, says that Hume's data can be used to train models to assess "an exceptionally broad spectrum of emotions," including over 28 facial expressions and 25 verbal expressions. 


"As demand for our empathetic AI models has grown, we've been prepared to provide access to them at a large scale." 


As a result, we'll be establishing a developer platform that will provide developers and researchers API documentation and a playground," Hume added. 

"We're also gathering data and developing training models for social interaction and conversational data, body language, and multi-modal expressions, which we expect will broaden our use cases and client base." 

Beyond Mursion, Hume claims it's collaborating with Hoomano, a firm that develops software for "social robots" like Softbank Robotics' Pepper, to build digital assistants that make better suggestions by taking into consideration the emotions of users. 

Hume also claims to have collaborated with Mount Sinai and University of California, San Francisco experts to investigate whether its models can detect depression and schizophrenia symptoms "that no prior methodologies have been able to capture." 


"A person's emotions have a big impact on their conduct, including what they pay attention to and click on." 


As a result, 'emotion AI' is already present in AI technologies such as search engines, social media algorithms, and recommendation systems. It's impossible to avoid. 

As a result, decision-makers must be concerned about how these technologies interpret and react to emotional signals, influencing their users' well-being in ways that their inventors are unaware of." Cowen remarked. 

"Hume AI provides the tools required to guarantee that technologies are built to increase the well-being of their users. There's no way of understanding how an AI system is interpreting these signals and altering people's emotions without means to assess them, and there's no way of designing the system to do so in a way that is compatible with people's well-being." 


Leaving aside the thorny issue of using artificial intelligence to diagnose mental illness, Mike Cook, an AI researcher at Queen Mary University of London, believes the company's message is "performative" and its language questionable. 


"[T]hey've obviously gone to tremendous lengths to speak about diversity and inclusion and other such things, and I'm not going to whine about people creating datasets with greater geographic variety." "However, it seems a little like it was massaged by a PR person who knows how to make your organization appear to care," he remarked. 

Cowen claims that by forming The Hume Initiative, a nonprofit "committed to governing empathetic AI," Hume is taking a more rigorous look at the uses of emotion AI than rivals. 

The Hume Initiative, whose ethical committee includes Taniya Mishra, former director of AI at Affectiva, has established regulatory standards that the company claims it would follow when commercializing its innovations. 


The Hume Initiative's principles forbid uses like manipulation, fraud, "optimizing for diminished well-being," and "unbounded" emotion AI. 


It also establishes limitations for use cases such as platforms and interfaces, health and development, and education, for example requiring educators to use the output of an emotion AI model to provide constructive but non-evaluative feedback. 

Danielle Krettek Cobb, the creator of the Google Empathy Lab, Dacher Keltner, a professor of psychology at UC Berkeley, and Ben Bland, the head of the IEEE group establishing standards for emotion AI, are coauthors of the recommendations. 

"The Hume Initiative started by compiling a list of all known applications for empathetic AI. 

After that, they voted on the first set of specific ethical principles. 


The resultant principles are tangible and enforceable, unlike any prior attempt to AI ethics. 


They describe how empathetic AI may be used to increase mankind's finest traits of belonging, compassion, and well-being, as well as how it might be used to expose humanity to intolerable dangers," Cowen remarked. 

"Those who use Hume AI's data or AI models must agree to use them solely in accordance with The Hume Initiative's ethical rules, guaranteeing that any applications using our technology are intended to promote people's well-being." Companies have boasted about their internal AI ethical initiatives in the past, only to have such efforts fall by the wayside – or prove to be performative and ineffective. 


Google's AI ethics board was notoriously disbanded barely one week after it was established. 


Meta's (previously Facebook's) AI ethics unit has also been labeled as essentially useless in reports. 

It's referred to as "ethical washing" by some. 

Simply put, ethical washing is the practice of a firm inventing or inflating its interest in fair AI systems that benefit everyone. 



A classic example among tech titans is a firm touting "AI for good" activities on the one hand while selling surveillance technology to governments and companies on the other. 


The coauthors of a report published by Trilateral Research, a London-based technology consultancy, claim that ethical principles and norms do not, by themselves, help practitioners grapple with difficult concerns like fairness in emotion AI. 

They argue that these should be thoroughly explored to ensure that businesses do not deploy systems that are incompatible with societal norms and values. 


"Ethics is made ineffectual without a continual process of challenging what is or may be clear, of probing behind what seems to be resolved, of keeping this interrogation alive," they said. 


"As a result, the establishment of ethics into established norms and principles comes to an end." Cook identifies problems in The Hume Initiative's stated rules, especially in its use of ambiguous terminology. 

"A lot of the standards seem performatively written — if you believe manipulating the user is wrong, you'll read the guidelines and think to yourself, 'Yes, I won't do that.' And if you don't care, you'll read the rules and say, 'Yes, I can justify this,'" he explained. 

Cowen believes Hume is "open[ing] the door to optimize AI for human and societal well-being" rather than short-term corporate objectives like user engagement. 

"We don't have any actual competition since the other AI models for measuring emotional signals are so restricted." They concentrate on a small number of facial expressions, neglect the voice entirely, and have major demographic biases. 



These biases are often weaved into the data used to train AI systems. 


Furthermore, no other business has established explicit ethical criteria for the usage of empathetic AI," he said. 

"We're building a platform that will consolidate our model deployment and provide customers greater choice over how their data is utilized." 

Regardless of whether or not rules exist, politicians have already started to limit the use of emotion AI systems. 



The New York City Council recently passed a regulation requiring companies to notify candidates when they are being evaluated by AI and to audit the algorithms once a year. 


Candidates in Illinois must give their consent before video footage of them is analyzed, while Maryland has banned the use of facial analysis entirely. 

Some firms have voluntarily ceased supplying emotion AI services or erected barriers around them. 

HireVue said that its algorithms will no longer use visual analysis. 

Microsoft's sentiment-detecting Face API, which once claimed it could detect emotions across cultures, now says in a caveat that "facial expressions alone do not reflect people's interior moods."

The Hume Initiative, according to Cook, "developed some ethical papers so people don't worry about what [Hume] is doing." 

"Perhaps the most serious problem I have is that I have no idea what they're doing." "Apart from whatever datasets they created, the part that's public doesn't appear to have anything on it," Cook added. 



Emotion recognition using AI. 


Emotion detection is a hot new field, with a slew of entrepreneurs marketing devices that promise to be able to read people's interior emotional states and AI academics attempting to increase computers' capacity to do so. 

Voice analysis, body language analysis, gait analysis, eye tracking, and remote assessment of physiological indications such as pulse and respiration rates are used to do this. 

The majority of the time, though, it's done by analyzing facial expressions. 

However, recent research reveals that these products are constructed on a foundation of intellectual sand. 


The main issue is whether human emotions can be reliably inferred by looking at people's faces. 


"Whether facial expressions of emotion are universal, whether you can look at someone's face and read emotion in their face," Lisa Feldman Barrett, a professor of psychology at Northeastern University and an expert on emotion, told me, "is a topic of great contention that scientists have been debating for at least 100 years." 


Despite this extensive history, she said that no full review of all emotion research conducted over the previous century had ever been completed. 


So, a few years ago, the Association for Psychological Science gathered five eminent scientists from opposing viewpoints to undertake a "systematic evaluation of the data challenging the popular opinion" that emotion can be consistently predicted by outward facial movements. 

According to Barrett, who was one of the five scientists, they "had extremely divergent theoretical ideas." "We came to the project with very different assumptions of what the data would reveal, and it was our responsibility to see if we could come to an agreement on what the data revealed and how best to interpret it," she said. "We weren't sure we could do it since it's such a divisive issue." The process, which was supposed to take a few months, took two years. 

Nonetheless, after evaluating over 1,000 scientific studies in the psychology literature, these experts arrived at a unified conclusion: the claim that "a person's emotional state may be simply determined from his or her facial expressions" has no scientific basis. 


According to the researchers, there are three common misconceptions "about how emotions are communicated and interpreted in facial movements." 


The relationship between facial expressions and emotions is neither reliable (the same emotions are not always exhibited in the same manner), specific, nor generalizable (the effects of different cultures and contexts have not been sufficiently documented). 

"A scowling face may or may not be an indication of rage," Barrett said to me. 

People frown in rage at times, and you could grin, weep, or simply seethe with a neutral look at other moments. 

People grimace at other times as well, such as when they're perplexed, concentrating, or having gas." These results do not suggest that individuals move their faces at random or that [facial expressions] have no psychological significance, according to the researchers. 

Instead, they show that the facial configurations in question aren't "fingerprints" or diagnostic displays that consistently and explicitly convey various emotional states independent of context, person, or culture. 

It is impossible to deduce pleasure from a grin, anger from a scowl, or sorrow from a frown, as most of today's technology attempts to do when applying what are incorrectly assumed to be scientific principles. 
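To make the contested assumption concrete, here is a toy sketch of the kind of lookup most emotion-reading software relies on: a fixed table mapping detected facial movements (action units) to emotion labels. The specific action-unit combinations shown are illustrative only; the point is that any such table is exactly the "fingerprint" the reviewed research says does not exist.

```python
# Toy sketch of the assumption behind most emotion-reading software:
# a fixed table mapping facial movements (action units) to emotions.
# The combinations below are illustrative, not a validated model.

ASSUMED_EMOTION_FINGERPRINTS = {
    frozenset({"AU6", "AU12"}): "happiness",    # cheek raiser + lip corner puller
    frozenset({"AU4", "AU5", "AU7"}): "anger",  # brow lowerer + lid raiser/tightener
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
}

def infer_emotion(detected_action_units: set) -> str:
    """Look up an emotion label from detected facial movements."""
    for fingerprint, emotion in ASSUMED_EMOTION_FINGERPRINTS.items():
        if fingerprint <= detected_action_units:
            return emotion
    return "unknown"

# The lookup returns "anger" for a scowl-like configuration whether the
# person is actually angry, concentrating, or confused; that is the
# reliability and specificity problem the researchers describe.
print(infer_emotion({"AU4", "AU5", "AU7"}))  # -> "anger"
print(infer_emotion({"AU12"}))               # -> "unknown"
```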

This work is relevant because an entire industry of automated, putatively emotion-reading devices is rapidly growing. 


The market for emotion detection software is expected to reach at least $3.8 billion by 2025, according to recent research on "Robot Surveillance." 


Emotion detection (also known as "affect recognition" or "affective computing") is already being used in devices for marketing, robotics, driving safety, and audio "aggression detectors," as we recently reported. 

Emotion identification is built on the same fundamental concept as polygraphs, or "lie detectors": that a person's internal mental state can be accurately associated with physical bodily motions and situations. 

It can't be, and facial muscles are no exception. 

It stands to reason that what is true of facial muscles is also true of the other techniques for detecting emotion, such as body language and gait. 

However, the assumption that such mind reading is conceivable might cause serious damage. 


A jury's cultural misunderstanding of what a foreign defendant's facial expressions mean, for example, can lead to a death sentence rather than a prison sentence. 


When such a mindset is built into automated systems, it may lead to further problems. 

For example, a "smart" body camera that incorrectly informs a police officer that someone is hostile and angry might lead to an unnecessary shooting. 


"There is no automatic emotion identification. 

The top algorithms can confront a face — full frontal, no occlusions, optimal illumination — and are excellent at recognizing facial movements. 

They aren't able, however, to deduce what those facial gestures signify."


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See Also: 


AI Emotions, AI Emotion Recognition, AI Emotional Intelligence, Surveillance Technologies, Privacy and Technology, AI Bias, Human Rights.





Artificial Intelligence - Who Is Elon Musk?

 




Elon Musk (1971–) is a South African-born engineer, entrepreneur, and inventor.

He holds citizenship in South Africa, Canada, and the United States, and resides in California.

Despite his controversial personality, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century and an important influencer of, and contributor to, the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of ten, he had taught himself how to program computers, and by the age of twelve, he had produced a video game and sold the source code to a computer magazine.

Musk has included allusions to some of his favorite novels in SpaceX's Falcon Heavy rocket launch and Tesla's software since he was a youngster.

Musk's formal education was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk stayed at Stanford University for just two days of a PhD program in energy physics before departing to start his first firm, Zip2, with his brother Kimbal Musk.


Musk has started or cofounded many firms, including three different billion-dollar enterprises: SpaceX, Tesla, and PayPal, all driven by his diverse interests and goals.


• Zip2: a web software business that was eventually purchased by Compaq

• X.com: an online bank that, following a merger, became the online payments company PayPal

• Tesla, Inc.: an electric car maker and, via its subsidiary SolarCity, a solar panel maker

• SpaceX: a spacecraft and rocket manufacturer and space transportation services provider

• Neuralink: a neurotechnology startup focused on brain-computer interfaces

• The Boring Company: an infrastructure and tunnel construction company

• OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI

Musk is a supporter of environmentally friendly energy and consumption.


Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and a concept for a supersonic electric jet aircraft.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time before stepping down when the United States withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Although Musk is best known for his work with Tesla and SpaceX, as well as his contentious social media pronouncements, his effect on AI is significant.

In 2015, Musk cofounded the nonprofit OpenAI with the objective of creating and supporting "friendly AI," that is, AI that is created, deployed, and utilized in a manner that benefits mankind as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June 2018, a team of OpenAI-built bots defeated a human team in the video game Dota 2, a feat that could only be accomplished through teamwork and collaboration among the bots.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk resigned from the OpenAI board in February 2018 to avoid any conflict of interest as Tesla advanced its own AI work on autonomous driving.

Musk became the CEO of Tesla in 2008 after cofounding the company in 2003 as an investor.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Two of its subsidiaries, Tesla Grohmann Automation and SolarCity, provide automotive manufacturing technology and solar energy services, respectively.

According to Musk, Tesla would reach Level 5 autonomous driving capability in 2019, as defined under the National Highway Traffic Safety Administration's (NHTSA) levels of autonomous driving.

Tesla's aggressive development of autonomous driving has influenced conventional car makers' attitudes toward electric cars and autonomous driving, and prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.
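The sketch below is a rough, illustrative outline of a multi-camera perception loop of that general shape, not Tesla's actual Autopilot code; the camera names and the placeholder detector are assumptions made for the sake of the example.

```python
# Rough sketch of a multi-camera perception loop (illustrative only;
# not Tesla's Autopilot code). Each frame, every camera image is run
# through a detector and the results are merged into one world view.

from typing import Dict, List

CAMERAS = ["front_main", "front_wide", "left_repeater", "right_repeater", "rear"]

def detect_objects(image) -> List[dict]:
    """Placeholder for a neural-network detector returning labeled objects."""
    return [{"label": "vehicle", "distance_m": 42.0}]

def perceive(frames: Dict[str, object]) -> List[dict]:
    """Fuse detections from all cameras into a single list of observations."""
    observations = []
    for camera in CAMERAS:
        for obj in detect_objects(frames.get(camera)):
            obj["source_camera"] = camera
            observations.append(obj)
    return observations

if __name__ == "__main__":
    dummy_frames = {name: None for name in CAMERAS}
    print(perceive(dummy_frames))
```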

Tesla is the only autonomous vehicle maker opposed to using LIDAR (light detection and ranging) laser sensors.

Tesla uses cameras, radar, and ultrasonic sensors instead.

Though academics and manufacturers disagree on whether LIDAR is required for fully autonomous driving, the high cost of LIDAR has limited Tesla's rivals' ability to produce and sell vehicles at a pricing range that allows a large number of cars on the road to gather data.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla was building its own silicon for artificial-intelligence calculations, allowing the company to construct its own AI processors rather than depending on third-party suppliers like Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has consistently missed self-imposed deadlines, and serious accidents have been blamed on flaws in the vehicle's Autopilot mode, including a non-injury accident in 2018, in which the vehicle failed to detect a parked firetruck on a California freeway, and a fatal accident in 2018, in which the vehicle failed to detect a pedestrian outside a crosswalk.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computing equipment: while people now operate devices with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has made major contributions to AI, his pronouncements regarding its risks have been apocalyptic.

Musk has called AI "humanity's greatest existential danger" (McFarland 2014) and "the greatest peril we face as a civilisation" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent oversight, and a competitive rush to adoption without appropriate analysis of the repercussions.

While Musk has used colorful terminology such as "summoning the devil" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements might come across as alarmist, many important and well-respected figures, including Microsoft cofounder Bill Gates, Swedish-American scientist Max Tegmark, and the late theoretical physicist Stephen Hawking, share his concern.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates for responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI in order to better understand the technology and its hazards before establishing suitable legal limits.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Bostrom, Nick; Superintelligence.


References & Further Reading:


Gates, Bill. (@BillGates). 2018. Twitter, June 26, 2018. https://twitter.com/BillGates/status/1011752221376036864.

Marr, Bernard. 2018. “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data.” Forbes, January 8, 2018. https://www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/.

McFarland, Matt. 2014. “Elon Musk: With Artificial Intelligence, We Are Summoning the Demon.” Washington Post, October 24, 2014. https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/.

Morris, David Z. 2017. “Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization.’” Fortune, July 15, 2017. https://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/.

Piper, Kelsey. 2018. “Why Elon Musk Fears Artificial Intelligence.” Vox Media, November 2, 2018. https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai.

Strauss, Neil. 2017. “Elon Musk: The Architect of Tomorrow.” Rolling Stone, November 15, 2017. https://www.rollingstone.com/culture/culture-features/elon-musk-the-architect-of-tomorrow-120850/.



Artificial Intelligence - What Is The Asilomar Conference On Beneficial AI?

 


Social concerns around artificial intelligence and its danger to people, the subject of the Asilomar Conference on Beneficial AI, have been portrayed most prominently through Isaac Asimov's Three Laws of Robotics.

"A robot may not damage a human being or, by inactivity, enable a human being to come to harm; A robot must follow human instructions unless such orders would contradict with the First Law; A robot must safeguard its own existence unless such protection would clash with the First or Second Law" (Asimov 1950, 40).

In subsequent books, Asimov added a fourth law, the Zeroth Law, often quoted as "A robot may not harm humanity, or, by inaction, allow humanity to come to harm," which is elaborated in Robots and Empire by the robot character Daneel Olivaw (Asimov 1985, chapter 18).

Asimov's Zeroth Law sparked debate over how to judge whether something is harmful to humanity.

This was the topic of the 2017 Asilomar Conference on Beneficial AI, which went beyond the Three Laws and the Zeroth Law to propose twenty-three principles to protect mankind in the future of AI.

The conference's sponsor, the Future of Life Institute, has posted the principles on its website and has received 3,814 signatures from AI experts and other multidisciplinary supporters.

There are three basic kinds of principles: research questions, ethics and values, and long-term concerns.

The research principles are intended to ensure that the aims of artificial intelligence research remain beneficial to people.

They're meant to help investors decide where to put their money in AI research.

To achieve beneficial AI, Asilomar signatories agree that research agendas should encourage and preserve openness and dialogue between AI researchers, policymakers, and developers.

Researchers interested in the development of artificial intelligence systems should work together to prioritize safety.

The proposed principles relating to ethics and values aim to prevent harm and to promote direct human control over artificial intelligence systems.

Parties to the Asilomar principles believe that AI should reflect human values such as individual rights, freedoms, and diversity acceptance.

Artificial intelligences, in particular, should respect human liberty and privacy, and should only be used to empower and enrich humanity.

AI must adhere to human social and civic norms.

The Asilomar signatories believe that AI creators should be held accountable for their work.

One aspect that stands out is the likelihood of an autonomous weapons arms race.

Because of the high stakes, the designers of the Asilomar principles incorporated principles that addressed longer-term challenges.

They advised prudence, meticulous planning, and human supervision.

Superintelligences must be produced for the wider welfare of mankind, and not merely to further the aims of one industry or government.

The Asilomar Conference's twenty-three principles have sparked ongoing discussions about the need for beneficial AI and specific safeguards for the future of AI and humanity.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Systems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.



Further Reading


Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.

Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and Public Policy. Abingdon, UK: Routledge.





Artificial Intelligence - Who Is Sherry Turkle?

 


 

 

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her study in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is utilized at the expense of building and maintaining meaningful interpersonal connections.



She has studied artificial intelligence in products such as children's toys and robotic pets for the elderly to highlight what people lose out on when interacting with such things.


As a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and the Self, Turkle has been at the vanguard of observing AI breakthroughs.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and the 1980s, one that substantially changed the way humans connect to and interact with AI.



She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


However, this viewpoint has given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's research and writing on the subject.


Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


This method involves setting aside time for quiet reflection so that participants may think deeply about their interactions with their devices.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These issues are connected to the increased usage of social media as a form of communication, as well as the continuous degree of familiarity and relatability with technology gadgets, which stems from the emerging AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human person across a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


In Alone Together, she provides the example of Adam, who enjoys the admiration of the AI bots he rules over in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide only a perception of camaraderie.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.


A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


  • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one wants to obtain from partnerships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in partnerships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

In other words, the most that can be anticipated is engagement.



Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


Drawing together these numerous streams of argument, Turkle claims that we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.


AI-based gadgets, on the other hand, are confined to comprehending the literal meanings of data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

There is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy for an AI-based device.

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.


References And Further Reading

  • Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.
  • Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.
  • Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.


