
Artificial Intelligence - Who Is Helen Nissenbaum?

 



Helen Nissenbaum (1954–), who holds a PhD in philosophy, examines in her research the ethical and political consequences of information technology.

She's worked at Stanford University, Princeton University, New York University, and Cornell Tech, among other places.

Nissenbaum has also served as principal investigator on grants from the National Security Agency, the National Science Foundation, the Air Force Office of Scientific Research, the United States Department of Health and Human Services, and the William and Flora Hewlett Foundation, among others.

According to Nissenbaum, big data, machine learning, algorithms, and models combine to produce outcomes with real-world consequences.

Her primary issue, which runs across all of these themes, is privacy.

Nissenbaum explores these problems in her 2010 book, Privacy in Context: Technology, Policy, and the Integrity of Social Life, by using the concept of contextual integrity, which views privacy in terms of acceptable information flows rather than merely prohibiting all information flows.

In other words, she's interested in establishing an ethical framework within which data may be obtained and utilized responsibly.
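Contextual integrity can be pictured as checking each information flow against context-relative norms. The sketch below is a toy illustration of that idea, not Nissenbaum's actual formalism; every context, role, and norm here is hypothetical.

```python
# Toy sketch of contextual integrity: a flow is appropriate only if some
# context-relative norm licenses it. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    context: str        # social context, e.g. "healthcare"
    sender: str         # role transmitting the information
    recipient: str      # role receiving it
    attribute: str      # type of information
    principle: str      # transmission principle, e.g. "confidentiality"

# Illustrative context-relative informational norms.
NORMS = {
    ("healthcare", "patient", "physician", "symptoms", "confidentiality"),
    ("commerce", "customer", "merchant", "payment", "with consent"),
}

def is_appropriate(flow: Flow) -> bool:
    """A flow respects contextual integrity if a norm permits it."""
    return (flow.context, flow.sender, flow.recipient,
            flow.attribute, flow.principle) in NORMS

ok = Flow("healthcare", "patient", "physician", "symptoms", "confidentiality")
bad = Flow("healthcare", "patient", "advertiser", "symptoms", "sold")
print(is_appropriate(ok), is_appropriate(bad))  # True False
```

The point of the model is that the same attribute (symptoms) is acceptable in one flow and a violation in another; privacy is judged by the appropriateness of the flow, not by secrecy alone.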

The challenge in developing such a framework, however, is that when many data sources are combined, or aggregated, it becomes possible to learn more about the people from whom the data was obtained than any individual source would reveal.

Such aggregated data is used to profile consumers, allowing credit and insurance businesses to make judgments based on the information.

Outdated data-regulation regimes compound these problems.

One big issue is that the distinction between monitoring users to construct profiles and targeting adverts to those profiles is blurry.

To make matters worse, adverts are often served by third parties rather than by the website the user is currently visiting.

This leads to the ethical dilemma of many hands, a quandary in which numerous parties are involved and it is unclear who is ultimately accountable for a certain issue, such as maintaining users' privacy in this situation.

Furthermore, because so many organizations may receive this information and use it for a variety of tracking and targeting purposes, it is impossible to adequately inform users about how their data will be used and allow them to consent or opt out.

In addition to these issues, the AI systems that use this data are themselves biased.

This bias, Nissenbaum argues, is a social issue rather than a computational one, which means that much of the scholarly effort devoted to resolving computational bias has been misplaced.

As an illustration of this prejudice, Nissenbaum cites Google's Behavioral Advertising system.

When a search contains a name that is traditionally African American, the Google Behavioral Advertising algorithm will show advertising for background checks more often.

This sort of racism isn't encoded in the software itself; rather, it develops through social interaction with the adverts, since users searching for traditionally African-American names are more likely to click on background-check links.
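The feedback loop described above can be simulated in a few lines: two name groups start with identical serving weights, and only their (assumed, purely illustrative) click-through rates differ, yet the system ends up showing the background-check ad far more often for one group.

```python
# Minimal simulation of click-feedback bias in ad delivery. The two
# click-through rates are hypothetical; the point is that unequal clicks
# alone, fed back into serving weights, skew delivery over time.
def simulate(ctr_a: float, ctr_b: float, rounds: int = 50):
    weight_a = weight_b = 1.0   # both groups start equal
    for _ in range(rounds):
        share_a = weight_a / (weight_a + weight_b)
        share_b = 1.0 - share_a
        # Expected clicks feed back into future serving weights.
        weight_a += share_a * ctr_a
        weight_b += share_b * ctr_b
    return share_a, share_b

share_a, share_b = simulate(ctr_a=0.06, ctr_b=0.03)
print(share_a > share_b)  # the higher-CTR group is shown the ad more often
```

No racial category appears anywhere in the code; the disparity emerges entirely from user behavior, which is Nissenbaum's point.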

Correcting these bias-related issues, according to Nissenbaum, would need considerable regulatory reforms connected to the ownership and usage of big data.

In light of this, and with few data-related legislative changes on the horizon, Nissenbaum has worked to devise measures that can be implemented right now.

Obfuscation, which involves deliberately adding superfluous information that interferes with data gathering and monitoring, is the main framework she has used to construct these tactics.

She claims that this is justified by the uneven power dynamics that have resulted in near-total monitoring.

Nissenbaum and her collaborators have created a number of practical internet browser plug-ins based on this obfuscation approach.

TrackMeNot was the first of these obfuscating browser add-ons.

This plug-in issues random queries to a number of search engines in an attempt to contaminate the stream of data collected and prevent search companies from constructing an aggregated profile based on the user's genuine searches.

This plug-in is designed for people who are dissatisfied with existing data rules and want to take quick action against companies and governments who are aggressively collecting information.

This approach adheres to the obfuscation principle: rather than concealing the original search terms, it simply hides them among other search terms, which Nissenbaum refers to as "ghosts."

Adnostic is a prototype Firefox plug-in aimed at addressing the privacy issues associated with online behavioral advertising.

Currently, online behavioral advertising is accomplished by recording a user's activity across numerous websites and then placing the most relevant adverts at those sites.

Multiple websites gather, aggregate, and retain this behavioral data indefinitely.

Adnostic provides a technology that enables profiling and targeting to take place exclusively on the user's computer, with no data exchanged with third-party websites.

Although the user continues to get targeted advertisements, third-party websites do not gather or keep behavioral data.
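Adnostic's architecture can be sketched as follows. The ad categories and selection logic below are hypothetical simplifications; the point is only that the behavioral profile never leaves the user's machine.

```python
# Sketch of Adnostic's idea: the ad network ships a generic batch of ads,
# and the browser chooses locally which one to display. The profile
# (here, a pre-categorized browsing history) stays on the client.
def pick_ad_locally(history: list[str], ads: dict[str, str]) -> str:
    """Choose the ad whose category appears most often in local history.
    Nothing about `history` is ever sent back to the network."""
    counts = {cat: history.count(cat) for cat in ads}
    best = max(counts, key=counts.get)
    return ads[best]

# The network sends all ads; it never learns which one was chosen.
ads = {"travel": "Cheap flights!", "sports": "Team jerseys!",
       "tech": "New laptops!"}
history = ["tech", "travel", "tech", "sports", "tech"]
print(pick_ad_locally(history, ads))
```

The design trade-off: the network must transmit more ads than it would under server-side targeting, in exchange for collecting no behavioral data.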

AdNauseam is yet another obfuscation-based plugin.

This program, which runs in the background, clicks all of the adverts on the website.

The declared goal of this activity is to contaminate the data stream, making targeting and monitoring ineffective.

Advertisers' expenses will very certainly rise as a result of this.

This project proved controversial, and in 2017, it was removed from the Chrome Web Store.

Although workarounds exist to enable users to continue installing the plugin, its loss of availability in the store makes it less accessible to the broader public.

Nissenbaum's work examines in great depth the ethical challenges surrounding big data and the AI systems built on top of it.

In addition to offering specific legislative recommendations to address troublesome privacy issues, Nissenbaum has built practical obfuscation tools that anyone interested can access and use.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Biometric Privacy and Security; Biometric Technology; Robot Ethics.


References & Further Reading:


Barocas, Solon, and Helen Nissenbaum. 2009. “On Notice: The Trouble with Notice and Consent.” In Proceedings of the Engaging Data Forum: The First International Forum on the Application and Management of Personal Electronic Information, n.p. Cambridge, MA: Massachusetts Institute of Technology.

Barocas, Solon, and Helen Nissenbaum. 2014. “Big Data’s End Run around Consent and Anonymity.” In Privacy, Big Data, and the Public Good, edited by Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, 44–75. Cambridge, UK: Cambridge University Press.

Brunton, Finn, and Helen Nissenbaum. 2015. Obfuscation: A User’s Guide for Privacy and Protest. Cambridge, MA: MIT Press.

Lane, Julia, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, eds. 2014. Privacy, Big Data, and the Public Good. New York: Cambridge University Press.

Nissenbaum, Helen. 2010. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.


Artificial Intelligence - Who Is Elon Musk?

 




Elon Musk (1971–) is a South African-born engineer, entrepreneur, and inventor.

He holds citizenship in South Africa, Canada, and the United States, and resides in California.

Despite his controversial personality, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century, as well as an important influencer of and contributor to the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of ten, he had taught himself to program computers, and by the age of twelve, he had produced a video game and sold the source code to a computer magazine.

Musk has included allusions to some of his favorite novels in SpaceX's Falcon Heavy rocket launch and Tesla's software since he was a youngster.

Musk's official schooling was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk spent barely two days at Stanford University pursuing a PhD in energy physics before departing to start his first company, Zip2, with his brother Kimbal Musk.


Musk has started or cofounded many firms, including three different billion-dollar enterprises: SpaceX, Tesla, and PayPal, all driven by his diverse interests and goals.


• Zip2: a web software company eventually purchased by Compaq.

• X.com: an online bank that, after merging with Confinity, became the online payments company PayPal.

• Tesla, Inc.: an electric car and solar panel maker (the latter via its subsidiary SolarCity).

• SpaceX: a commercial rocket manufacturer and space transportation services provider.

• Neuralink: a neurotechnology startup focused on brain–computer interfaces.

• The Boring Company: an infrastructure and tunnel construction company.

• OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI.

Musk is a supporter of environmentally friendly energy and consumption.


Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and the Musk electric jet, a jet-powered supersonic electric aircraft.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time before stepping down when the United States withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Though Musk is best known for his work with Tesla and SpaceX, as well as his contentious social media pronouncements, his influence on AI is significant.

In 2015, Musk cofounded the charity OpenAI with the objective of creating and supporting "friendly AI," or AI that is created, deployed, and utilized in a manner that benefits mankind as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June 2018, a team of OpenAI-built bots defeated a team of human players in the video game Dota 2, a feat that could only be accomplished through teamwork and collaboration among the bots.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk resigned from the OpenAI board in February 2018 to prevent conflicts of interest as Tesla advanced its own AI work on autonomous driving.

Musk became the CEO of Tesla in 2008 after cofounding the company in 2003 as an investor.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Tesla Grohmann Automation and Solar City, two of its subsidiaries, offer relevant automotive technology and manufacturing services and solar energy services, respectively.

Musk claimed that Tesla would reach Level 5 autonomous driving capability, as defined by the National Highway Traffic Safety Administration's (NHTSA) five levels of autonomous driving, in 2019.

Tesla's aggressive development of autonomous driving has influenced conventional automakers' attitudes toward electric cars and autonomous driving, and it has prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's Director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.

Tesla is the only autonomous car maker that is opposed to the LIDAR laser sensor (an acronym for light detection and ranging).

Tesla uses cameras, radar, and ultrasonic sensors instead.

Though academics and manufacturers disagree on whether LIDAR is required for fully autonomous driving, its high cost has limited the ability of Tesla's rivals to produce and sell, at a competitive price, enough vehicles to gather comparable volumes of driving data.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla is building its own silicon for artificial-intelligence calculations, allowing the company to construct its own AI processors rather than depending on third-party sources like Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has consistently missed self-imposed deadlines, and serious accidents have been blamed on flaws in the vehicle's Autopilot mode, including a non-injury accident in 2018, in which the vehicle failed to detect a firetruck parked on a California freeway, and a fatal accident in 2018, in which the vehicle failed to detect a highway barrier.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computing equipment: whereas people now operate devices with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has made major advances to AI, his pronouncements regarding the risks linked with AI have been apocalyptic.

Musk has called AI "humanity's greatest existential threat" (McFarland 2014) and "the greatest risk we face as a civilization" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent control, and a competitive rush to acceptance without appropriate analysis of the repercussions.

While Musk has used colorful terminology such as "summoning the devil" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements might come across as alarmist, many important and well-respected figures share his concern, including Microsoft cofounder Bill Gates, Swedish-American physicist Max Tegmark, and the late theoretical physicist Stephen Hawking.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates for responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI, with the goal of better understanding the technology and its hazards before establishing suitable legal limits.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Bostrom, Nick; Superintelligence.


References & Further Reading:


Gates, Bill. (@BillGates). 2018. Twitter, June 26, 2018. https://twitter.com/BillGates/status/1011752221376036864.

Marr, Bernard. 2018. “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data.” Forbes, January 8, 2018. https://www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/.

McFarland, Matt. 2014. “Elon Musk: With Artificial Intelligence, We Are Summoning the Demon.” Washington Post, October 24, 2014. https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/.

Morris, David Z. 2017. “Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization.’” Fortune, July 15, 2017. https://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/.

Piper, Kelsey. 2018. “Why Elon Musk Fears Artificial Intelligence.” Vox Media, November 2, 2018. https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai.

Strauss, Neil. 2017. “Elon Musk: The Architect of Tomorrow.” Rolling Stone, November 15, 2017. https://www.rollingstone.com/culture/culture-features/elon-musk-the-architect-of-tomorrow-120850/.



Artificial Intelligence - Who Is Hans Moravec?

 




Hans Moravec (1948–) is well known in the computer science community as the long-time head of Carnegie Mellon University's Robotics Institute and an unashamed technological optimist.

For the last twenty-five years, he has studied and produced artificially intelligent robots at the CMU lab, where he is still an adjunct faculty member.

Moravec spent almost 10 years as a research assistant at Stanford University's groundbreaking Artificial Intelligence Lab before coming to Carnegie Mellon.

Moravec is also noted for his paradox, which states that, contrary to popular belief, it is simple to program high-level thinking skills into robots—as with chess or Jeopardy!—but difficult to transmit sensorimotor agility.

Human sensory and motor abilities have developed over millions of years and seem to be easy, despite their complexity.

Higher-order cognitive abilities, on the other hand, are the result of more recent cultural development.

Geometry, stock market research, and petroleum engineering are examples of disciplines that are difficult for people to learn but easier for robots to learn.

"The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard," writes Steven Pinker, summarizing the insight at the heart of Moravec's scientific career.

Moravec built his first toy robot out of scrap metal when he was eleven years old, and his light-following electronic turtle and a robot operated by punched paper tape earned him two high school science fair honors.

He proposed a Ship of Theseus-like analogy for the viability of artificial brains while still in high school.

Consider replacing a person's neurons one by one with precisely manufactured equivalents, he suggested.

At what point would human awareness vanish? Would anybody notice? Could it be established that the person was no longer human?

Later in his career, Moravec would suggest that human knowledge and training might be broken down in the same manner, into subtasks that machine intelligences could take over.

Moravec's master's thesis focused on the development of a computer language for artificial intelligence, while his PhD research focused on the development of a robot that could navigate obstacle courses utilizing spatial representation methods.

These robot vision systems identified a region of interest (ROI) in a scene.

Moravec's early computer vision robots were extremely sluggish by today's standards, taking around five hours to go from one half of the facility to the other.

To measure distance and develop an internal picture of physical impediments in the room, a remote computer carefully analysed continuous video-camera images recorded by the robot from various angles.

Moravec finally developed 3D occupancy grid technology, which allowed a robot to create an awareness of a cluttered area in a matter of seconds.
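The core of an occupancy grid is simple: each range reading raises the occupancy estimate of the cell where the beam terminated and lowers the cells the beam passed through. The sketch below is a minimal 2D version in the spirit of this work; the log-odds increments and grid are illustrative, not Moravec's actual parameters.

```python
# Minimal 2D occupancy-grid update: cells traversed by a sensor beam
# trend toward "free", the cell where the beam hit trends "occupied".
# Values are log-odds-style increments chosen for illustration.
def update_grid(grid, sensor, hit, l_occ=0.9, l_free=-0.4):
    """Straight-line update from the sensor cell to the hit cell."""
    (x0, y0), (x1, y1) = sensor, hit
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(steps):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        grid[y][x] += l_free          # cells the beam passed through
    grid[y1][x1] += l_occ             # cell where the beam terminated

grid = [[0.0] * 8 for _ in range(8)]
update_grid(grid, sensor=(0, 0), hit=(5, 0))
occupied = grid[0][5] > 0   # hit cell trends occupied
free = grid[0][2] < 0       # traversed cell trends free
print(occupied, free)
```

Accumulating many such updates from different vantage points is what lets the grid converge on a map of the cluttered room, and doing it incrementally is why the representation could run in seconds rather than hours.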

Moravec's lab took on a new challenge by converting a Pontiac TransSport minivan into one of the world's first road-ready autonomous cars.

The self-driving minivan reached speeds of up to 60 miles per hour.

DANTE II, a robot capable of going inside the crater of an active volcano on Mount Spurr in Alaska, was also constructed by the CMU Robotics Institute.

While DANTE II's immediate aim was to sample harmful fumarole gases, a job too perilous for humans, it was also planned to demonstrate technologies for robotic expeditions to distant worlds.

The volcanic explorer robot used artificial intelligence to navigate the perilous, boulder-strewn terrain on its own.

Because such rovers produced so much visual and other sensory data that had to be analyzed and managed, Moravec believes that experience with mobile robots spurred the development of powerful artificial intelligence and computer vision methods.

For the National Aeronautics and Space Administration (NASA), Moravec's team built fractal branching ultra-dexterous robots ("Bush robots") in the 1990s.

These robots, which were proposed but never built because the necessary manufacturing technologies did not exist, consisted of a branching hierarchy of dynamic articulated limbs, starting with a main trunk and dividing into successively smaller branches.

As a result, the Bush robot would have "hands" at all scales, from macroscopic to tiny.

The tiniest fingers would be nanoscale in size, allowing them to grip very tiny objects.

Moravec said the robot would need autonomy, relying on artificial intelligence agents distributed throughout its limbs and branches, because of the intricacy of manipulating millions of fingers in real time.

He believed that the robots might be made entirely of carbon nanotube material, using the rapid prototyping technology known as 3D printing.

Moravec believes that artificial intelligence will have a significant influence on human civilization.

To stress the role of AI in this change, he coined the concept of a "landscape of human capability," which physicist Max Tegmark later converted into a graphic depiction.

Moravec's picture depicts a three-dimensional landscape in which higher elevations represent tasks that are harder for machines to master, with an advancing flood representing growing machine capability.

The point where the rising waters meet the shore marks the tasks at which machines and humans currently contend on equal terms.

Art, science, and literature still sit above the waterline, beyond AI's grasp, but the sea has already submerged mathematics, chess, and the game of Go.

Language translation, autonomous driving, and financial investment are all on the horizon.

More controversially, in two popular books, Mind Children (1988) and Robot: Mere Machine to Transcendent Mind (1999), Moravec engaged in futurist speculation based on what he understood of developments in artificial intelligence research.

He predicted that machine intelligence would surpass human intellect by 2040 and that the human species would eventually go extinct.

Moravec arrived at this figure by estimating a functional equivalence between 50,000 million instructions per second (50,000 MIPS) of computing power and a gram of brain tissue.

He calculated that home computers in the early 2000s equaled only an insect's nervous system, but that if processing power doubled every eighteen months, 350 million years of human intellect development could be reduced to just 35 years of artificial intelligence advancement.

He estimated that a hundred million MIPS would be required to create human-like universal robots.
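The arithmetic behind this kind of extrapolation is straightforward exponential doubling. The sketch below works through it under illustrative assumptions (an early-2000s home PC at roughly 1,000 MIPS is our stand-in for "insect-level"; the figure is not Moravec's exact number).

```python
# Doubling-time arithmetic behind Moravec's extrapolation: how long
# until computing power grows from an assumed ~1,000 MIPS (early-2000s
# home PC) to the ~100 million MIPS he estimated for human-like robots,
# if power doubles every 18 months?
import math

def years_until(target_mips: float, current_mips: float,
                doubling_years: float = 1.5) -> float:
    doublings = math.log2(target_mips / current_mips)
    return doublings * doubling_years

years = years_until(target_mips=100e6, current_mips=1_000)
print(f"{years:.1f} years of doubling")
```

Under these assumptions the gap closes in a few decades, which is why Moravec's timeline lands in the mid-twenty-first century rather than the distant future; the estimate is of course only as good as its equivalence figure and its assumption of sustained doubling.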

Moravec refers to these sophisticated robots as our "mind children" in the year 2040.

Humans, he claims, will devise techniques to delay biological civilization's final demise.

Moravec, for example, was early to anticipate what is now known as universal basic income, delivered in his vision by benign artificial superintelligences.

In a completely automated society, a basic income system would provide monthly cash payments to all individuals without any type of employment requirement.

Moravec is more concerned about the idea of a renegade automated corporation breaking its programming and refusing to pay taxes into the human cradle-to-grave social security system than he is about technological unemployment.

Nonetheless, he predicts that these "wild" intelligences will eventually control the universe.

Moravec has said that his books Mind Children and Robot may have had a direct impact on the last third of Stanley Kubrick's original screenplay for A.I. Artificial Intelligence (later filmed by Steven Spielberg).

In Dan Simmons's science fiction novels Ilium and Olympos, meanwhile, "moravecs" are self-replicating machines named after him.

Moravec defended the same physical fundamentalism he expressed in his high school thoughts throughout his life.

He contends in his most transhumanist publications that the only way for humans to stay up with machine intelligences is to merge with them by replacing sluggish human cerebral tissue with artificial neural networks controlled by super-fast algorithms.

In his publications, Moravec has blended the ideas of artificial intelligence with virtual reality simulation.


He has proposed four scenarios for the evolution of consciousness:

(1) a human brain in the physical world,

(2) a programmed AI embodied in a physical robot,

(3) a human brain immersed in a virtual reality simulation, and

(4) an AI operating within the confines of virtual reality.

All four, he argues, are equally credible depictions of reality, each as "real" as we believe it to be.


Moravec is the founder and chief scientist of the Pittsburgh-based Seegrid Corporation, which makes autonomous Robotic Industrial Trucks that navigate warehouses and factories without the wires, magnets, or lasers that conventional automated guided vehicle systems require.

A human trainer physically guides a Seegrid vehicle through a new facility once.

The robot does the rest of the job, determining the most efficient and safe routes for future trips, while the trainer stops at the appropriate spots for the truck to be loaded and unloaded.
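This teach-and-repeat pattern can be sketched as route planning over a map learned from the one guided tour. The grid, stops, and breadth-first search below are a hypothetical simplification, not Seegrid's actual navigation stack.

```python
# Sketch of teach-and-repeat navigation: after one guided tour produces
# a free-space map, the robot plans its own shortest route between any
# two recorded stops. Map and coordinates are illustrative.
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over free cells (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell:                 # walk back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < cols and 0 <= ny < rows
                    and grid[ny][nx] == 0 and (nx, ny) not in prev):
                prev[(nx, ny)] = cell
                frontier.append((nx, ny))
    return None

grid = [[0, 0, 0],      # learned map: middle row mostly blocked
        [1, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, start=(0, 0), goal=(0, 2))
```

BFS on a uniform grid returns a shortest route in cell count; a production system would weight cells by clearance and traffic, but the teach-once, plan-thereafter division of labor is the same.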

Seegrid VGVs have logged over two million production miles and moved eight billion pounds of merchandise for customers such as DHL, Whirlpool, and Amazon.

Moravec was born in the Austrian town of Kautzen.

During World War II, his father was a Czech engineer who sold electrical products.

When the Russians invaded Czechoslovakia in 1944, the family moved to Austria.

In 1953, his family relocated to Canada.

Moravec earned a bachelor's degree in mathematics from Acadia University in Nova Scotia, a master's degree in computer science from the University of Western Ontario, and a doctorate from Stanford University, where he worked with John McCarthy and Tom Binford on his thesis.

The Office of Naval Research, the Defense Advanced Research Projects Agency, and NASA have all supported his research.

Elon Musk (1971–) is an American businessman and inventor.

Elon Musk is an engineer, entrepreneur, and inventor who was born in South Africa.

He is a dual citizen of South Africa, Canada, and the United States, and resides in California.

Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century, as well as an important influencer and contributor to the development of artificial intelligence.

Despite his controversial personality, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century and an important influencer and contributor to the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of 10, he had self-taught himself how program computers, and by the age of twelve, he had produced a video game and sold the source code to a computer maga zine.

Musk has included allusions to some of his favorite novels in SpaceX's Falcon Heavy rocket launch and Tesla's software since he was a youngster.

Musk's official schooling was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk barely stayed at Stanford University for two days to seek a PhD in energy physics before departing to start his first firm, Zip2, with his brother Kimbal Musk.

Musk has started or cofounded many firms, including three different billion-dollar enterprises: SpaceX, Tesla, and PayPal, all driven by his diverse interests and goals.

• Zip2 was a web software business that was eventually purchased by Compaq.

• X.com: an online bank that merged with PayPal to become the online payments corporation PayPal.

• Tesla, Inc.: an electric car and solar panel maker • SpaceX: a commercial aircraft manufacturer and space transportation services provider (via its subsidiarity SolarCity) • Neuralink: a neurotechnology startup focusing on brain-computer connections • The Boring Business: an infrastructure and tunnel construction corporation • OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI Musk is a supporter of environmentally friendly energy and consumption.

Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and the Musk electric jet, a jet-powered supersonic electric aircraft.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time before stepping out when the US withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Musk's effect on AI is significant, despite his best-known work with Tesla and SpaceX, as well as his contentious social media pronouncements.

In 2015, Musk cofounded the charity OpenAI with the objective of creating and supporting "friendly AI," or AI that is created, deployed, and utilized in a manner that benefits mankind as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June 2018, a team of OpenAI bots defeated a team of human players at the video game Dota 2, a feat that required the bots to cooperate and coordinate as a team.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk resigned from the OpenAI board in February 2018 to avoid any conflict of interest as Tesla advanced its own AI work on autonomous driving.

Musk became the CEO of Tesla in 2008, having joined the company as an early investor and chairman shortly after its 2003 founding.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Tesla Grohmann Automation and SolarCity, two of its subsidiaries, provide automotive manufacturing technology and solar energy services, respectively.

Musk predicted that Tesla would reach Level 5 autonomous driving capability in 2019, the highest of the driving-automation levels adopted by the National Highway Traffic Safety Administration (NHTSA).

Tesla's aggressive development of autonomous driving has influenced conventional automakers' attitudes toward electric cars and autonomous driving, and prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.
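Public descriptions of such camera-plus-feedback systems suggest a loop of the following general shape. Everything below (the frame format, the detector, the confidence threshold) is an invented stand-in for illustration, not Tesla's actual Autopilot code:

```python
# Generic sketch of a perception-and-data-collection loop: process each
# camera frame in real time, act on the prediction, and log low-confidence
# frames so they can later be used to retrain the model.

def stub_detector(frame):
    """Stand-in for a vision model: returns (label, confidence).

    A frame here is just a list of pixel brightness values; a mostly
    dark frame is treated as a possible obstacle.
    """
    dark = sum(1 for px in frame if px < 50)
    ratio = dark / len(frame)
    if ratio > 0.5:
        return "obstacle", ratio
    return "clear", 1.0 - ratio

def run_loop(frames, log_threshold=0.7):
    training_log = []   # frames a fleet would upload for retraining
    decisions = []
    for frame in frames:
        label, confidence = stub_detector(frame)
        decisions.append(label)
        if confidence < log_threshold:   # uncertain -> worth learning from
            training_log.append(frame)
    return decisions, training_log

frames = [
    [200, 210, 190, 220],   # bright: clearly "clear"
    [10, 20, 15, 30],       # dark: clearly "obstacle"
    [10, 200, 30, 220],     # mixed: low confidence, gets logged
]
decisions, log = run_loop(frames)
print(decisions, len(log))  # -> ['clear', 'obstacle', 'clear'] 1
```

The point of the sketch is the feedback structure: predictions are made in real time, and the frames the model is least sure about are exactly the ones saved to improve the next version of the software.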

Tesla stands almost alone among autonomous vehicle developers in rejecting the LIDAR laser sensor (an acronym for light detection and ranging).

Tesla uses cameras, radar, and ultrasonic sensors instead.

Academics and manufacturers disagree about whether LIDAR is required for fully autonomous driving, but its high cost has limited Tesla's rivals' ability to produce and sell vehicles at prices that put large numbers of data-gathering cars on the road.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla was developing its own silicon for AI computation, allowing the company to build its own AI processors rather than depending on third-party suppliers such as Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has repeatedly missed self-imposed deadlines, and serious accidents have been attributed to flaws in the vehicle's Autopilot mode, including a non-injury collision in 2018, in which a vehicle failed to detect a firetruck parked on a California freeway, and a fatal crash the same year, in which a vehicle on Autopilot struck a highway barrier.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computing equipment: where people now operate devices with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has contributed substantially to the advancement of AI, his pronouncements regarding its risks have been apocalyptic.

Musk has called AI "humanity's greatest existential danger" (McFarland 2014) and "the greatest peril we face as a civilisation" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent control, and a competitive rush to acceptance without appropriate analysis of the repercussions.

While Musk has used colorful terminology such as "summoning the devil" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements might come across as alarmist, many well-respected figures, such as Microsoft cofounder Bill Gates, Swedish-American physicist Max Tegmark, and the late theoretical physicist Stephen Hawking, share his concerns.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates for responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI, with the goal of better understanding the technology and its hazards before establishing suitable legal limits.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Superintelligence; Technological Singularity; Workplace Automation.







Artificial Intelligence - Who Was John McCarthy?

 


John McCarthy  (1927–2011) was an American computer scientist and mathematician who was best known for helping to develop the subject of artificial intelligence in the late 1950s and pushing the use of formal logic in AI research.

McCarthy was a creative thinker who earned multiple accolades for his contributions to programming languages and operating systems research.

Throughout McCarthy's life, however, artificial intelligence and "formalizing common sense" remained his primary research interest (McCarthy 1990).

As a graduate student, McCarthy first encountered the ideas that would lead him to AI at the 1948 Hixon Symposium on "Cerebral Mechanisms in Behavior."

The symposium took place at the California Institute of Technology, where McCarthy, having just finished his undergraduate studies, was enrolled in a graduate mathematics program.

In the United States, machine intelligence had by 1948 become a subject of substantial academic interest under the broad term cybernetics, and many renowned cyberneticists, notably Princeton mathematician John von Neumann, attended the symposium.

McCarthy moved to Princeton's mathematics department a year later, where he discussed some early ideas inspired by the symposium with von Neumann.

Despite von Neumann's urging, McCarthy never published the work, since he believed cybernetics could not answer his questions about human knowledge.

McCarthy finished a PhD on partial differential equations at Princeton.

He stayed at Princeton as an instructor after graduating in 1951, and in the summer of 1952, he had the chance to work at Bell Labs with cyberneticist and inventor of information theory Claude Shannon, whom he persuaded to collaborate on an edited collection of writings on machine intelligence.

Automata Studies received contributions from a variety of fields, ranging from pure mathematics to neuroscience.

McCarthy, on the other hand, felt that the published studies did not devote enough attention to the important subject of how to develop intelligent machines.

McCarthy joined the mathematics department at Stanford in 1953 but was let go two years later, he later speculated, because he spent too much time thinking about intelligent machines and not enough on his mathematical research.

In 1955, he accepted a position at Dartmouth, just as IBM was preparing to establish the New England Computation Center at MIT.

The New England Computation Center gave Dartmouth access to an IBM computer that was installed at MIT and made accessible to a group of New England colleges.

McCarthy met IBM researcher Nathaniel Rochester through the IBM initiative, and Rochester brought him to IBM in the summer of 1955 to work with his research group.

McCarthy persuaded Rochester of the need for more research on machine intelligence, and with Rochester, Shannon, and Marvin Minsky, then a graduate student at Princeton, he submitted a proposal to the Rockefeller Foundation for a "Summer Research Project on Artificial Intelligence," which contained the first known use of the phrase "artificial intelligence." Though the Dartmouth Project is usually regarded as a watershed moment in the development of AI, the conference did not go as McCarthy had envisioned.

Because the proposal came from a relatively young professor in such a novel field of research, the Rockefeller Foundation funded it at only half the requested budget; that it was funded at all owed much to the weight Shannon's reputation carried with the Foundation.

Furthermore, since the event took place over many weeks in the summer of 1956, only a handful of the guests were able to attend for the whole period.

As a consequence, the Dartmouth conference was a fluid affair with an ever-changing and unpredictably diverse guest list.

Despite its chaotic implementation, the meeting was crucial in establishing AI as a distinct area of research.

In 1957, while still at Dartmouth, McCarthy won a Sloan grant to spend a year at MIT, closer to IBM's New England Computation Center.

McCarthy was given a post in the Electrical Engineering department at MIT in 1958, which he accepted.

Later, he was joined by Minsky, who worked in the mathematics department.

McCarthy and Minsky suggested the construction of an official AI laboratory to Jerome Wiesner, head of MIT's Research Laboratory of Electronics, in 1958.

Wiesner agreed, on the condition that the laboratory take in six freshly accepted graduate students, and the "artificial intelligence project" began teaching its first generation of students.

McCarthy released his first article on artificial intelligence in the same year.

In the paper "Programs with Common Sense," he described a computer system, the Advice Taker, that would be capable of accepting and understanding instructions in ordinary natural language from nonexpert users.

McCarthy would later describe the Advice Taker as the start of a research program aimed at "formalizing common sense." McCarthy believed that everyday common-sense notions, such as understanding that if you don't know a phone number, you will need to look it up before calling, could be written as sentences of formal logic and fed into a computer, enabling the machine to come to the same conclusions as humans.

Such formalization of common knowledge, McCarthy felt, was the key to artificial intelligence.
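The phone-number example can be rendered in the kind of formal notation McCarthy had in mind. The predicates below are invented for illustration (McCarthy's own 1958 paper uses a different vocabulary):

```latex
% Invented predicates, illustrating the style of McCarthy's proposal:
% if the agent wants to call x and does not know x's number,
% then looking up the number is the appropriate subgoal.
\forall x\;\bigl(\mathit{Want}(\mathit{Call}(x)) \wedge \neg\mathit{Know}(\mathit{Number}(x))
    \rightarrow \mathit{Should}(\mathit{LookUp}(\mathit{Number}(x)))\bigr)
```

Given such axioms and facts about a particular situation, a theorem prover could deduce the lookup step mechanically, which is exactly the behavior the Advice Taker was meant to exhibit.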

McCarthy's paper, presented at the United Kingdom National Physical Laboratory's "Symposium on Mechanisation of Thought Processes," helped establish the symbolic program of AI research.

McCarthy's research was focused on AI by the late 1950s, although he was also involved in a range of other computing-related topics.

In 1957, he was appointed to an Association for Computing Machinery group charged with developing the ALGOL programming language, which went on to become the de facto language for describing algorithms in academic publications for the next several decades.

He created the LISP programming language for AI research in 1958, and its successors are widely used in business and academia today.

McCarthy contributed to computer operating system research via the construction of time sharing systems, in addition to his work on programming languages.

Early computers were large and costly, and they could only be operated by one person at a time.

From his first interaction with computers at IBM in 1955, McCarthy recognized the need for many users across a large institution, such as a university or hospital, to be able to use the organization's computer systems concurrently via terminals in their own offices.

McCarthy pushed for study on similar systems at MIT, serving on a university committee that looked into the issue and ultimately assisting in the development of MIT's Compatible Time-Sharing System (CTSS).

Although McCarthy left MIT before the CTSS work was completed, his advocacy, while a consultant at Bolt Beranek and Newman in Cambridge, with J.C.R. Licklider, the future head of an office at the Advanced Research Projects Agency (the predecessor of DARPA), was instrumental in helping MIT secure significant federal support for computing research.

In 1962, Stanford professor George Forsythe recruited McCarthy to join what would become the second computer science department in the United States, after Purdue's.

McCarthy insisted on coming only as a full professor, a demand he believed would be too much to grant so young a researcher.

Forsythe was able to persuade Stanford to grant McCarthy a full chair, and he moved to Stanford in 1965 to establish the Stanford AI laboratory.

Until his retirement in 2000, McCarthy oversaw research at Stanford on AI topics such as robotics, expert systems, and chess.

McCarthy grew up in a family in which both parents were ardent members of the Communist Party, and he had a lifelong interest in Soviet affairs.

He maintained numerous professional relationships with Soviet cybernetics and AI researchers, traveling and lecturing in the USSR in the mid-1960s, and he even arranged a chess match between a Stanford chess program and a Soviet counterpart, which the Soviet program won.

He developed many foundational concepts in symbolic AI theory while at Stanford, such as circumscription, which expresses the idea that a computer must be allowed to make reasonable assumptions about problems presented to it; otherwise, even simple scenarios would have to be specified in such exacting logical detail that the task would be all but impossible.
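Circumscription can be stated compactly. The following is the standard second-order form as it usually appears in the literature; the notation follows common textbook convention rather than a specific McCarthy paper:

```latex
% Second-order circumscription of predicate P in theory A(P):
% P satisfies A, and no strictly smaller predicate p also satisfies A.
\mathrm{Circ}[A;P] \;\equiv\; A(P) \wedge \neg\exists p\,\bigl(A(p) \wedge p < P\bigr)

% where "p < P" abbreviates "p's extension is a proper subset of P's":
p < P \;\equiv\; \forall x\,\bigl(p(x) \rightarrow P(x)\bigr) \wedge \exists x\,\bigl(P(x) \wedge \neg p(x)\bigr)
```

Minimizing an "abnormality" predicate P in this way lets a reasoner assume that only the abnormalities explicitly required by the axioms hold, which is the formal counterpart of "making reasonable assumptions" described above.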

McCarthy's accomplishments were acknowledged with numerous prizes, including the 1971 Turing Award, the 1988 Kyoto Prize, election to the National Academy of Sciences in 1989, the 1990 National Medal of Science, and the 2003 Benjamin Franklin Medal.

McCarthy was a brilliant thinker who continually imagined new technologies, such as a space elevator for economically lifting material into orbit and a system of carts suspended from wires to improve urban transportation.

Asked in a 2008 interview what he thought the most important problems in computing were, McCarthy answered without hesitation, "Formalizing common sense," the same endeavor that had inspired him from the start.





See also: 


Cybernetics and AI; Expert Systems; Symbolic Logic.


References & Further Reading:


Hayes, Patrick J., and Leora Morgenstern. 2007. “On John McCarthy’s 80th Birthday, in Honor of His Contributions.” AI Magazine 28, no. 4 (Winter): 93–102.

McCarthy, John. 1990. Formalizing Common Sense: Papers, edited by Vladimir Lifschitz. Norwood, NJ: Ablex.

Morgenstern, Leora, and Sheila A. McIlraith. 2011. “John McCarthy’s Legacy.” Artificial Intelligence 175, no. 1 (January): 1–24.

Nilsson, Nils J. 2012. “John McCarthy: A Biographical Memoir.” Biographical Memoirs of the National Academy of Sciences. http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/mccarthy-john.pdf.



Artificial Intelligence - Machine Translation

  



Machine translation is the process of using computer technology to automatically translate human languages.

From the 1950s through the 1970s, the US government saw machine translation as a valuable instrument in diplomatic efforts to contain communism in the USSR and the People's Republic of China.

Machine translation has lately become a tool for marketing goods and services in countries where they would otherwise be unavailable due to language limitations, as well as a standalone offering.

Machine translation is also one of the litmus tests for artificial intelligence progress.

Research in this area of artificial intelligence advances along several broad paradigms.

Rule-based expert systems and statistical approaches to machine translation are the earliest.

The more recent paradigms are example-based machine translation (or translation by analogy) and neural machine translation.

Automated language translation is now regarded as an academic specialization within computational linguistics.

While there are multiple possible roots for the present discipline of machine translation, the notion of automated translation as an academic topic derives from a 1947 communication between crystallographer Andrew D. Booth of Birkbeck College (London) and Warren Weaver of the Rockefeller Foundation.

"I have a manuscript in front of me that is written in Russian, but I am going to assume that it is truly written in English and that it has been coded in some bizarre symbols. To access the information contained in the text, all I have to do is peel away the code," Weaver wrote in a 1949 note to colleagues (as cited in Arnold et al. 1994, 13).

Most commercial machine translation systems have a translation engine at their core.

The user's sentences are parsed several times by translation engines, each time applying algorithmic rules to transform the source sentence into the desired target language.

There are rules for word-based and phrase-based transformation.

The first pass of the parser generally replaces words using a two-language dictionary.

Additional processing rounds of the phrases use comparative grammatical rules that consider sentence structure, verb form, and suffixes.
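A minimal sketch of this kind of rule-based pipeline follows. The dictionary, the part-of-speech tags, and the single reordering rule are invented for illustration and do not come from any real system:

```python
# Toy rule-based translation engine: a dictionary-substitution pass
# followed by one grammatical reordering rule (adjective + noun ->
# noun + adjective, as when translating English into Spanish).

LEXICON = {
    "the": ("el", "DET"),
    "red": ("rojo", "ADJ"),
    "house": ("casa", "NOUN"),
    "is": ("es", "VERB"),
    "big": ("grande", "ADJ"),
}

def translate(sentence):
    words = sentence.lower().split()
    # Pass 1: word-for-word substitution using the two-language dictionary.
    tagged = [LEXICON[w] for w in words]
    # Pass 2: structural rule -- attributive adjectives follow the noun
    # in Spanish, so swap each ADJ+NOUN pair.
    i = 0
    while i < len(tagged) - 1:
        if tagged[i][1] == "ADJ" and tagged[i + 1][1] == "NOUN":
            tagged[i], tagged[i + 1] = tagged[i + 1], tagged[i]
            i += 2
        else:
            i += 1
    return " ".join(word for word, tag in tagged)

print(translate("the red house is big"))  # -> "el casa rojo es grande"
```

Note that the output still gets gender agreement wrong (correct Spanish would be "la casa roja es grande"), which illustrates why word substitution plus a handful of structural rules yields only rough translations without many further processing rounds.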

The intelligibility and accuracy of translation engines are measured.

Machine translation isn't perfect.

Poor grammar in the source text, lexical and structural differences between languages, ambiguous usage, multiple meanings of words and idioms, and local variations in usage can all lead to "word salad" translations.

In 1959–60, MIT philosopher, linguist, and mathematician Yehoshua Bar-Hillel issued the harshest early criticism of machine translation of language.

In principle, according to Bar-Hillel, near-perfect machine translation is impossible.

He used the following passage to demonstrate the issue: "John was on the prowl for his toy box. He eventually discovered it. The box was in the pen. John was overjoyed."

The word "pen" poses a problem here, since it might refer to a child's playpen or to a ballpoint pen for writing.

Knowing the difference necessitates a broad understanding of the world, which a computer lacks.

The initial rounds of US government funding eroded when the National Academy of Sciences' Automatic Language Processing Advisory Committee (ALPAC) released an extremely damaging report on the poor quality and high cost of machine translation in 1966.

ALPAC came to the conclusion that the country already had an abundant supply of human translators capable of producing significantly greater translations.

Many machine translation experts slammed the ALPAC report, pointing to machine efficiency in the preparation of first drafts and the successful rollout of a few machine translation systems.

In the 1960s and 1970s, there were only a few machine translation research groups.

The TAUM group in Canada, the Mel'cuk and Apresian groups in the Soviet Union, the GETA group in France, and the German Saarbrücken SUSY group were among the biggest.

SYSTRAN (System Translation), a private corporation founded by Hungarian-born linguist and computer scientist Peter Toma and financed by government contracts, was the main supplier of automated translation technology and services in the United States.

In the 1950s, Toma became interested in machine translation while studying at the California Institute of Technology.

Around 1960, Toma moved to Georgetown University and started collaborating with other machine translation experts.

The Georgetown machine translation project, as well as SYSTRAN's initial contract with the United States Air Force in 1969, were both devoted to translating Russian into English.

That same year, at Wright-Patterson Air Force Base, the company's first machine translation programs were tested.

SYSTRAN software was used by the National Aeronautics and Space Administration (NASA) as a translation help during the Apollo-Soyuz Test Project in 1974 and 1975.

Shortly after, the Commission of the European Communities awarded SYSTRAN a contract to provide automated translation services, and the company has worked with the European Commission (EC) ever since.

By the 1990s, the EC had seventeen different machine translation systems focused on different language pairs in use for internal communications.

In 1992, SYSTRAN began migrating its mainframe software to personal computers.

SYSTRAN Professional Premium for Windows was launched in 1995 by the company.

SYSTRAN continues to be the industry leader in machine translation.

Other notable systems include METEO, in use at the Canadian Meteorological Center in Montreal since 1977 to translate weather bulletins from English to French; ALPS, developed at Brigham Young University for Bible translation; SPANAM, the Pan American Health Organization's Spanish-to-English automatic translation system; and METAL, developed at the University of Texas at Austin.

In the late 1990s, machine translation became more readily accessible to the general public through web browsers.

Babel Fish, a web-based application created by researchers at Digital Equipment Corporation (DEC) using SYSTRAN machine translation technology, was one of the earliest online language translation services.

Thirty-six translation pairs between thirteen languages were supported by the technology.

Babel Fish began as an AltaVista web search engine tool before being sold to Yahoo! and then Microsoft.

The majority of online translation services still use rule-based and statistical machine translation.

Around 2016, SYSTRAN, Microsoft Translator, and Google Translate made the switch to neural machine translation.

Google Translate supports 103 languages.

Neural machine translation uses predictive deep learning algorithms and artificial neural networks, connectionist systems modeled on biological brains.

Machine translation based on neural networks is achieved in two steps.

The translation engine models its interpretation in the first phase based on the context of each source word within the entire sentence.

The artificial neural network then translates the entire word model into the target language in the second phase.

Simply put, the engine predicts the probability of word sequences and combinations within whole sentences, yielding a fully integrated translation model.

The underlying algorithms use statistical models to learn language rules.
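"Predicting the probability of word sequences" can be made concrete with a bigram language model, the simplest statistical model of this kind. The tiny corpus below is invented; real neural systems learn far richer representations, but the idea of scoring candidate word sequences by probability is the same:

```python
# Minimal bigram language model: estimate P(sentence) as the product of
# P(word_i | word_{i-1}) counted from a toy corpus. Neural machine
# translation replaces these counts with learned network predictions,
# but both score candidate word sequences by probability.
from collections import Counter

corpus = [
    "<s> the cat sat </s>",
    "<s> the cat slept </s>",
    "<s> the dog sat </s>",
]

unigrams = Counter()
bigrams = Counter()
for line in corpus:
    tokens = line.split()
    unigrams.update(tokens[:-1])              # history-word counts
    bigrams.update(zip(tokens, tokens[1:]))   # adjacent-pair counts

def sentence_probability(sentence):
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        if unigrams[prev] == 0:               # unseen history
            return 0.0
        prob *= bigrams[(prev, word)] / unigrams[prev]
    return prob

print(sentence_probability("the cat sat"))    # seen sequence: nonzero
print(sentence_probability("the dog slept"))  # unseen bigram: probability 0
```

A translation engine scoring candidate outputs this way would prefer "the cat sat" over "the dog slept," since the latter contains a word pair never observed in training; neural models smooth over such gaps by generalizing from learned word representations.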

The Harvard SEAS natural language processing group, in collaboration with SYSTRAN, has launched OpenNMT, an open-source neural machine translation system.






See also: 


Cheng, Lili; Natural Language Processing and Speech Understanding.



Further Reading:


Arnold, Doug J., Lorna Balkan, R. Lee Humphreys, Seity Meijer, and Louisa Sadler. 1994. Machine Translation: An Introductory Guide. Manchester and Oxford: NCC Blackwell.

Bar-Hillel, Yehoshua. 1960. “The Present Status of Automatic Translation of Languages.” Advances in Computers 1: 91–163.

Garvin, Paul L. 1967. “Machine Translation: Fact or Fancy?” Datamation 13, no. 4: 29–31.

Hutchins, W. John, ed. 2000. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. Philadelphia: John Benjamins.

Locke, William Nash, and Andrew Donald Booth, eds. 1955. Machine Translation of Languages. New York: Wiley.

Yngve, Victor H. 1964. “Implications of Mechanical Translation Research.” Proceedings of the American Philosophical Society 108 (August): 275–81.


