
Artificial Intelligence - Who Was Herbert A. Simon?

 


Herbert A. Simon (1916–2001) was a multidisciplinary scholar who contributed significantly to artificial intelligence.


He is largely regarded as one of the twentieth century's most prominent social scientists.

His career at Carnegie Mellon University spanned five decades.

Early artificial intelligence research was driven by the idea of the computer as a symbol manipulator rather than a number cruncher.

Emil Post, who first wrote about this sort of computational model in 1943, is credited with inventing production systems: sets of rules over symbol strings that specify the conditions that must hold before a rule can be applied and the actions to be taken or conclusions to be drawn when it is.

Simon and his Carnegie Mellon colleague Allen Newell popularized these ideas about symbol manipulation and production systems, arguing that a general-purpose machine could read, store, and copy symbols and patterns, and compare and contrast them.


Simon, Newell, and Cliff Shaw's Logic Theorist program was the first to employ symbol manipulation to produce "intelligent" behavior.


The Logic Theorist could independently prove theorems presented in Bertrand Russell and Alfred North Whitehead's Principia Mathematica (1910).

Perhaps most notably, the Logic Theorist uncovered a shorter, more elegant proof of Theorem 2.85 in the Principia Mathematica, which the Journal of Symbolic Logic nevertheless declined to publish because it was coauthored by a machine.

Although it was theoretically possible to prove the Principia Mathematica's theorems in an exhaustively detailed and methodical manner, doing so was impractical because of the time required.

Newell and Simon were fascinated by the rules of thumb humans use to solve difficult problems for which an exhaustive search for answers is impossible because of the massive amount of computation required.

They used the term "heuristics" to describe procedures that may solve issues but do not guarantee success.


A heuristic is a "rule of thumb" used to solve a problem that is too difficult or time consuming to address using an exhaustive search, a formula, or a step-by-step method.


In computer science, heuristic approaches are often contrasted with algorithmic methods, the key difference being what the method guarantees.

On this view, a heuristic program provides good results in most cases, but not always, while an algorithmic program is a well-defined procedure that guarantees a solution.

This is not, however, a technical difference.

In fact, a heuristic procedure that consistently yields the best result may no longer be deemed "heuristic"—alpha-beta pruning is an example of this.

Heuristics in Simon's sense are still used by programmers attacking problems that would otherwise demand enormous amounts of time and/or memory.

The game of chess is one such example, in which an exhaustive search of all potential board configurations for the proper solution is beyond the human mind's or any computer's capabilities.
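
The contrast can be made concrete in a few lines of code. The sketch below uses a toy coin-change problem (not anything Simon or Newell wrote) to compare an exhaustive, algorithmic search, which always finds a best combination but examines many possibilities, with a greedy heuristic, which is fast and usually good but sometimes settles for a worse answer.

from itertools import combinations_with_replacement

def exhaustive_change(coins, target, max_coins=10):
    # Algorithmic approach: try every combination up to max_coins.
    # Guaranteed to find a smallest combination if one exists, but slow.
    for n in range(1, max_coins + 1):
        for combo in combinations_with_replacement(coins, n):
            if sum(combo) == target:
                return list(combo)
    return None

def greedy_change(coins, target):
    # Heuristic approach: always take the largest coin that still fits.
    # Fast and usually good, but with no guarantee of the best answer.
    result = []
    for coin in sorted(coins, reverse=True):
        while target >= coin:
            result.append(coin)
            target -= coin
    return result if target == 0 else None

coins = [1, 3, 4]
print(exhaustive_change(coins, 6))   # [3, 3]: the guaranteed best answer
print(greedy_change(coins, 6))       # [4, 1, 1]: good enough, but not best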


Indeed, Herbert Simon and Allen Newell referred to computer chess as the Drosophila, or fruit fly, of artificial intelligence research.


Heuristics may also be used to solve problems that have no exact answer, as in medical diagnosis, where heuristics are applied to a collection of symptoms to determine the most probable diagnosis.

Production rules grew out of a class of cognitive science models in which heuristic reasoning is expressed as productions, rules triggered by particular situations.

In practice, these rules reduce to "IF-THEN" statements that express specific preconditions or antecedents, along with the conclusions or consequences that those preconditions or antecedents justify.

"IF there are two X's in a row, THEN put an O to block," is a frequent example offered for the application of production rules to the tic-tac-toe game.

These IF-THEN statements are built into the inference mechanisms of expert systems, so that a rule interpreter can apply the production rules to a specific situation, represented in a context data structure or short-term working-memory buffer holding the information supplied about that situation, and then draw conclusions or make recommendations.
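
A minimal sketch of this arrangement, with invented rules and a Python dictionary standing in for the working-memory buffer, might look like the following; it illustrates the general idea of a rule interpreter rather than the mechanism of any particular expert system.

# Each production rule pairs an IF test on working memory with a THEN action.
def rule_block_opponent(wm):
    if wm.get("opponent_in_a_row") == 2 and wm.get("open_square") is not None:
        return "place O at square %d to block" % wm["open_square"]
    return None

def rule_take_center(wm):
    if wm.get("center_free"):
        return "place O at the center square"
    return None

RULES = [rule_block_opponent, rule_take_center]   # earlier rules have priority

def rule_interpreter(working_memory):
    # Apply the first production rule whose IF part matches the situation.
    for rule in RULES:
        action = rule(working_memory)
        if action is not None:
            return action
    return "no rule applies"

# The working-memory buffer holds the facts describing the current situation.
situation = {"opponent_in_a_row": 2, "open_square": 7, "center_free": False}
print(rule_interpreter(situation))   # place O at square 7 to block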


Production rules were crucial in the development of artificial intelligence as a discipline.


Joshua Lederberg, Edward Feigenbaum, and other Stanford University colleagues later built on this fundamental insight to develop DENDRAL, an expert system for identifying molecular structure, in the 1960s.

DENDRAL's production rules were developed through discussions between the system's developers and mass spectrometry specialists.

Edward Shortliffe, Bruce Buchanan, and Edward Feigenbaum used production rules to create MYCIN in the 1970s.

MYCIN contained over 600 IF-THEN statements, each encoding domain-specific knowledge about the diagnosis and treatment of bacterial infections.

PUFF, EXPERT, PROSPECTOR, R1, and CLAVIER were among the several production rule systems that followed.


Simon, Newell, and Shaw demonstrated how heuristics may overcome the drawbacks of classical algorithms, which guarantee answers but may require extensive search or heavy computation to produce them.


An algorithm is a procedure for solving a problem in a finite, well-defined sequence of steps.

Sequential operations, conditional operations, and iterative operations are the three kinds of fundamental instructions required to create computable algorithms.

Sequential operations perform tasks in a step-by-step manner.

The algorithm moves on to the next task only when the current step is completed.

Conditional operations are made up of instructions that ask questions and then choose the next step dependent on the response.

One kind of conditional operation is the "IF-THEN" expression.

Iterative operations run "loops" of instructions.

These statements tell the task flow to go back and repeat a previous series of statements in order to solve an issue.

Algorithms are often compared to cookbook recipes, in which a certain order and execution of actions in the manufacture of a product—in this example, food—are dictated by a specific sequence of set instructions.
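
A short, purely illustrative function makes the three instruction types concrete; here the "recipe" is simply averaging the values that exceed a threshold.

def average_above_threshold(values, threshold):
    # A small algorithm built from the three basic instruction types.
    total = 0
    count = 0
    for v in values:              # iterative operation: loop over the data
        if v > threshold:         # conditional operation: an IF-THEN choice
            total += v            # sequential operations, executed in order
            count += 1
    if count == 0:                # conditional operation guarding the division
        return None
    return total / count          # sequential operation: the final step

print(average_above_threshold([2, 9, 4, 7], threshold=5))   # 8.0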


Newell, Shaw, and Simon created list processing for the Logic Theorist software in 1956.


List processing is a programming technique for allocating dynamic storage.

It's mostly utilized in symbol manipulation computer applications like compiler development, visual or linguistic data processing, and artificial intelligence, among others.

Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with creating the first list-processing software, which supported large, sophisticated, and flexible memory structures that did not depend on consecutive blocks of machine memory.

List processing techniques are used in a number of higher-order languages.

IPL and LISP, two artificial intelligence languages, are the most well-known.
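
The core idea of list processing can be sketched in a few lines of a modern language (not IPL or LISP itself): data live in small linked cells, and structures grow or shrink by re-linking cells rather than by reserving a contiguous block of storage in advance.

# A cons cell is a pair: a value plus a reference to the rest of the list.
def cons(head, tail=None):
    return (head, tail)

def to_python_list(cell):
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out

# Lists grow and shrink by re-linking cells, not by reserving fixed storage.
lst = cons("A", cons("B", cons("C")))   # the list (A B C)
lst = cons("NEW", lst)                  # prepend without copying the rest
print(to_python_list(lst))              # ['NEW', 'A', 'B', 'C']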


Simon and Newell's General Problem Solver (GPS), published in the early 1960s, thoroughly describes the essential properties of symbol manipulation as a general process that underpins all types of intelligent problem-solving behavior.


GPS formed the foundation for decades of early AI research.

The General Problem Solver is a program that arrives at solutions by means-ends analysis and planning.

GPS was created with the goal of separating the problem-solving process from knowledge specific to the situation at hand, allowing it to be applied to a wide range of problems.
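
Means-ends analysis itself can be sketched compactly: compare the current state with the goal, choose an operator whose effects reduce the difference, and recursively satisfy that operator's preconditions as subgoals. The states and operators below are invented for illustration and are not GPS's own encodings.

# Each operator names its preconditions ("needs") and its effects ("adds").
OPERATORS = [
    {"name": "drive to shop", "needs": {"car works"}, "adds": {"at shop"}},
    {"name": "repair car", "needs": {"have money"}, "adds": {"car works"}},
    {"name": "withdraw cash", "needs": set(), "adds": {"have money"}},
]

def means_ends(state, goals, depth=0):
    # Pick an unmet goal, find an operator that achieves it, and recursively
    # satisfy that operator's own preconditions as subgoals.
    if depth > 10:                         # crude guard against endless regress
        return None
    plan = []
    for goal in goals - state:
        for op in OPERATORS:
            if goal in op["adds"]:
                subplan = means_ends(state, op["needs"], depth + 1)
                if subplan is not None:
                    state |= op["needs"] | op["adds"]   # apply the operator
                    plan += subplan + [op["name"]]
                    break
        else:
            return None                    # no operator achieves this goal
    return plan if goals <= state else None

print(means_ends({"at home"}, {"at shop"}))
# ['withdraw cash', 'repair car', 'drive to shop']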

Simon was also an economist, a political scientist, and a cognitive psychologist.


Simon is known for the notions of limited rationality, satisficing, and power law distributions in complex systems, in addition to his important contributions to organizational theory, decision-making, and problem-solving.


Computer and data scientists are interested in all three themes.

Human reasoning is inherently constrained, according to bounded rationality.

Humans lack the time or knowledge required to make ideal judgments; problems are difficult, and the mind has cognitive limitations.

Satisficing describes a decision-making process that produces a solution that "satisfies" and "suffices," rather than the optimal one.

Customers use satisficing in market conditions when they choose things that are "good enough," meaning sufficient or acceptable.
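
The contrast with optimizing is easy to show in code. The sketch below, with invented options and an arbitrary aspiration level, returns the first alternative that is "good enough" instead of ranking every alternative.

def satisfice(options, aspiration):
    # Return the first option that meets the aspiration level, rather than
    # searching every option for the very best one (optimizing).
    for name, score in options:
        if score >= aspiration:
            return name          # "good enough": stop searching here
    return None                  # nothing satisfices; lower the aspiration?

laptops = [("Model A", 6.5), ("Model B", 8.1), ("Model C", 9.7)]
print(satisfice(laptops, aspiration=8.0))   # Model B, even though Model C scores higher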


Simon described how power law distributions arise from preferential attachment mechanisms in his studies of complex organizations.


Power laws, also known as scaling laws, arise when a relative change in one quantity produces a proportional relative change in another.

A square is a simple illustration; when the length of a side doubles, the square's area quadruples.

Power laws may be found in biological systems, fractal patterns, and wealth distributions, among other things.

Preferential attachment processes explain why the affluent grow wealthier in income and wealth distributions: income is distributed according to individuals' current level of wealth, so those with more wealth receive proportionately more income, and hence greater overall wealth, than those with less.

When graphed, such distributions often create so-called long tails.

These long-tailed distributions are being employed to explain crowdsourcing, microfinance, and online marketing, among other things.
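
A toy simulation shows how preferential attachment produces such a long-tailed distribution; the population size, number of rounds, and random seed below are arbitrary illustrative choices.

import random

random.seed(1)

# Start everyone with one unit of wealth; each round, award a new unit with
# probability proportional to current wealth ("the rich get richer").
wealth = [1] * 100
for _ in range(5000):
    winner = random.choices(range(len(wealth)), weights=wealth, k=1)[0]
    wealth[winner] += 1

# A few individuals end up holding most of the wealth: a long-tailed distribution.
ranked = sorted(wealth, reverse=True)
print("top 5 holdings:", ranked[:5])
print("share held by the top 10 individuals:", sum(ranked[:10]) / sum(wealth))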



Simon was born in Milwaukee, Wisconsin, to a Jewish electrical engineer with multiple patents who had emigrated from Germany in the early twentieth century.


His mother was a musical prodigy. Simon grew interested in the social sciences after reading books on psychology and economics written by an uncle.

He said that two works shaped his early thinking on these subjects: Norman Angell's The Great Illusion (1909) and Henry George's Progress and Poverty (1879).



Simon obtained his doctorate in organizational decision-making from the University of Chicago in 1943.

Rudolf Carnap, Harold Lasswell, Charles Merriam, Nicolas Rashevsky, and Henry Schultz were among his instructors.

He started his career as a political science professor at the Illinois Institute of Technology, where he taught and conducted research.

In 1949, he transferred to Carnegie Mellon University, where he stayed until 2001.

He progressed through the ranks of the Department of Industrial Management to become its chair.

He wrote twenty-seven books and published numerous articles.

In 1959, he was elected a member of the American Academy of Arts and Sciences.

In 1975, Simon was awarded the coveted Turing Award, and in 1978, he was awarded the Nobel Prize in Economics.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell, Allen.


References & Further Reading:


Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern America. Baltimore: Johns Hopkins Press.

Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex Information Processing System. Santa Monica, CA: The RAND Corporation.

Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.

Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.



Artificial Intelligence - What Is The MYCIN Expert System?




MYCIN is an interactive expert system for infectious illness diagnosis and treatment developed by computer scientists Edward Feigenbaum (1936–) and Bruce Buchanan at Stanford University in the 1970s.

MYCIN was Feigenbaum's second expert system (after DENDRAL), but it was the first to be commercially accessible as a standalone software package.

By the 1980s, TeKnowledge, the software company cofounded by Feigenbaum and several partners, offered EMYCIN as the most successful expert system shell for this purpose.

MYCIN was developed by Feigenbaum's Heuristic Programming Project (HPP) in collaboration with Stanford Medical School's Infectious Diseases Group (IDG).

Stanley Cohen of the IDG served as the expert clinical physician.

In the early 1970s, Feigenbaum and Buchanan had read reports of antibiotics being prescribed incorrectly because of misdiagnoses.

MYCIN was created to assist a human expert in making the best judgment possible.

MYCIN started out as a consultation tool.

After the results of a patient's blood tests, bacterial cultures, and other data were entered, MYCIN supplied a diagnosis along with the appropriate antibiotics and dosage.



MYCIN also served as an explanation system.

The physician-user could ask MYCIN, in plain English, to explain a particular inference.

Finally, MYCIN had a knowledge-acquisition software that was used to keep the system's knowledge base up to date.

Feigenbaum and his collaborators introduced two additional features to MYCIN after gaining experience with DENDRAL.

First, MYCIN's inference engine included a rule interpreter.

This enabled "goal-directed backward chaining" to be used to achieve diagnostic findings (Cendrowska and Bramer 1984, 229).

At each phase of a consultation, MYCIN set itself the goal of establishing the value of a clinically useful parameter from the patient data supplied.

The inference engine looked for a set of rules that applied to the parameter in question.

MYCIN typically required more information when evaluating the premise of one of the rules in this parameter set.

The system's next subgoal was to get that data.

MYCIN might try additional rules or ask the physician for further information.

This process was repeated until MYCIN had enough data on numerous factors to make a diagnosis.
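
A toy sketch of goal-directed backward chaining in the spirit of this loop is shown below; the rule base and clinical parameters are invented for illustration and are not MYCIN's actual rules.

# A toy rule base: IF all premises hold THEN the conclusion holds.
RULES = [
    {"if": ["gram_negative", "rod_shaped", "anaerobic"], "then": "bacteroides"},
    {"if": ["blood_culture_positive", "gram_negative"], "then": "significant_infection"},
]

def backward_chain(goal, known, ask):
    # Try to establish the goal: find a rule that concludes it and recursively
    # establish that rule's premises; if no rule applies, ask for more data.
    if goal in known:
        return known[goal]
    for rule in RULES:
        if rule["then"] == goal:
            if all(backward_chain(p, known, ask) for p in rule["if"]):
                known[goal] = True
                return True
    known[goal] = ask(goal)       # the subgoal becomes a question to the user
    return known[goal]

facts = {"gram_negative": True, "rod_shaped": True}
# In a real consultation, "ask" would query the physician; a canned answer
# stands in for that dialogue here.
print(backward_chain("bacteroides", facts, ask=lambda q: q == "anaerobic"))   # True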

The certainty factor was MYCIN's second unique feature.

According to William van Melle, then a doctoral student working on MYCIN for his thesis project, these factors should not be seen "as conditional probabilities, [though] they are loosely grounded on probability theory" (van Melle 1978, 314).

MYCIN assigned the execution of each production rule a value between –1 and +1, depending on how strongly the system believed in the correctness of its conclusion.

These certainty factors were included in MYCIN's diagnosis, allowing the physician-user to make the final decision.
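
The way evidence from several rules was merged can be illustrated with a certainty-factor combination function of the kind described in the MYCIN and EMYCIN literature; the numerical values below are invented examples.

def combine_cf(cf1, cf2):
    # Combine two certainty factors that bear on the same conclusion.
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules each lend moderate support to the same diagnosis...
print(round(combine_cf(0.6, 0.4), 2))    # 0.76, stronger than either alone
# ...while conflicting evidence pulls the combined certainty toward zero.
print(round(combine_cf(0.6, -0.4), 2))   # 0.33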

The software package, known as EMYCIN, was released in 1976 and comprised an inference engine, user interface, and short-term memory.

It contained no knowledge base of its own.

("E" stood for "Empty" at first, then "Essential.") Customers of EMYCIN were required to link their own knowledge base to the system.

Faced with high demand for EMYCIN packages and high interest in MOLGEN (Feigenbaum's third expert system), HPP decided to form IntelliCorp and TeKnowledge, the first two expert system firms.

TeKnowledge was eventually founded by a group of roughly twenty individuals, including all of the previous HPP students who had developed expert systems.

EMYCIN was and continues to be their most popular product.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Expert Systems; Knowledge Engineering


References & Further Reading:


Cendrowska, J., and M. A. Bramer. 1984. “A Rational Reconstruction of the MYCIN Consultation System.” International Journal of Man-Machine Studies 20 (March): 229–317.

Crevier, Daniel. 1993. AI: Tumultuous History of the Search for Artificial Intelligence. Princeton, NJ: Princeton University Press.

Feigenbaum, Edward. 2000. “Oral History.” Charles Babbage Institute, October 13, 2000.

van Melle, William. 1978. “MYCIN: A Knowledge-based Consultation Program for Infectious Disease Diagnosis.” International Journal of Man-Machine Studies 10 (May): 313–22.







Artificial Intelligence - Who Is Elon Musk?

 




Elon Musk (1971–) is an engineer, entrepreneur, and inventor who was born in South Africa.

He holds citizenship in South Africa, Canada, and the United States, and resides in California.

Despite his controversial personality, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century and an important influencer of and contributor to the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of ten, he had taught himself how to program computers, and by the age of twelve, he had produced a video game and sold its source code to a computer magazine.

Musk has included allusions to some of his favorite novels in SpaceX's Falcon Heavy rocket launch and Tesla's software since he was a youngster.

Musk's official schooling was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk stayed at Stanford University for only two days, where he had enrolled to pursue a PhD in energy physics, before departing to start his first firm, Zip2, with his brother Kimbal Musk.


Musk has started or cofounded many firms, including three different billion-dollar enterprises: SpaceX, Tesla, and PayPal, all driven by his diverse interests and goals.


• Zip2: a web software business that was eventually purchased by Compaq

• X.com: an online bank that became the online payments corporation PayPal after a merger

• Tesla, Inc.: an electric car and solar panel maker (via its subsidiary SolarCity)

• SpaceX: an aerospace manufacturer and space transportation services provider

• Neuralink: a neurotechnology startup focusing on brain-computer interfaces

• The Boring Company: an infrastructure and tunnel construction corporation

• OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI

Musk is a supporter of environmentally friendly energy and consumption.


Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and the Musk electric jet, a jet-powered supersonic electric aircraft.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time before stepping out when the US withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Although Musk is best known for his work with Tesla and SpaceX, as well as for his contentious social media pronouncements, his effect on AI is significant.

In 2015, Musk cofounded the nonprofit OpenAI with the objective of creating and supporting "friendly AI," that is, AI that is created, deployed, and utilized in a manner that benefits humanity as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June of 2018, a team of OpenAI-built bots defeated a human team in the video game Dota 2, a feat that could only be accomplished through machine teamwork and collaboration.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk stepped down from the OpenAI board in February 2018 to prevent any conflicts of interest as Tesla advanced its own AI work for autonomous driving.

Musk became the CEO of Tesla in 2008 after cofounding the company in 2003 as an investor.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Tesla Grohmann Automation and SolarCity, two of its subsidiaries, provide automotive manufacturing technology and solar energy services, respectively.

Musk predicted that Tesla would reach Level 5 autonomous driving capability in 2019, as defined by the National Highway Traffic Safety Administration's (NHTSA) five levels of autonomous driving.

Tesla's aggressive development of autonomous driving has influenced conventional carmakers' attitudes toward electric cars and autonomous driving and prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's Director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.

Tesla is the only autonomous car maker that is opposed to the LIDAR laser sensor (an acronym for light detection and ranging).

Tesla uses cameras, radar, and ultrasonic sensors instead.

Though academics and manufacturers disagree on whether LIDAR is required for fully autonomous driving, the high cost of LIDAR has limited Tesla's rivals' ability to produce and sell vehicles at a pricing range that allows a large number of cars on the road to gather data.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla is building its own silicon for artificial-intelligence calculations, allowing the company to construct its own AI processors rather than depending on third-party sources like Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has consistently missed self-imposed deadlines, and serious accidents have been blamed on flaws in the vehicle's Autopilot mode, including a non-injury accident in 2018, in which the vehicle failed to detect a parked firetruck on a California freeway, and a fatal accident in 2018, in which the vehicle failed to detect a pedestrian outside a crosswalk.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computing equipment: whereas people now operate machines with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has made major contributions to AI, his pronouncements regarding its risks have been apocalyptic.

Musk has called AI "humanity's greatest existential threat" (McFarland 2014) and "the greatest risk we face as a civilization" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent control, and a competitive rush to acceptance without appropriate analysis of the repercussions.

While Musk has used colorful terminology such as "summoning the demon" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements might come across as alarmist, many important and well-respected figures, including Microsoft cofounder Bill Gates, Swedish-American physicist Max Tegmark, and the late theoretical physicist Stephen Hawking, share his concern.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates for responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI with the goal of better understanding the technology and its hazards before establishing suitable legal limits.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Bostrom, Nick; Superintelligence.


References & Further Reading:


Gates, Bill. (@BillGates). 2018. Twitter, June 26, 2018. https://twitter.com/BillGates/status/1011752221376036864.

Marr, Bernard. 2018. “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data.” Forbes, January 8, 2018. https://www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/.

McFarland, Matt. 2014. “Elon Musk: With Artificial Intelligence, We Are Summoning the Demon.” Washington Post, October 24, 2014. https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/.

Morris, David Z. 2017. “Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization.’” Fortune, July 15, 2017. https://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/.

Piper, Kelsey. 2018. “Why Elon Musk Fears Artificial Intelligence.” Vox Media, November 2, 2018. https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai.

Strauss, Neil. 2017. “Elon Musk: The Architect of Tomorrow.” Rolling Stone, November 15, 2017. https://www.rollingstone.com/culture/culture-features/elon-musk-the-architect-of-tomorrow-120850/.



Artificial Intelligence - Who Is Hans Moravec?

 




Hans Moravec (1948–) is well-known in the computer science community as the long-time head of Carnegie Mellon University's Robotics Institute and an unashamed technological optimist.

For the last twenty-five years, he has studied and produced artificially intelligent robots at the CMU lab, where he is still an adjunct faculty member.

Moravec spent almost 10 years as a research assistant at Stanford University's groundbreaking Artificial Intelligence Lab before coming to Carnegie Mellon.

Moravec is also noted for his paradox, which states that, contrary to popular belief, it is simple to program high-level thinking skills into robots—as with chess or Jeopardy!—but difficult to transmit sensorimotor agility.

Human sensory and motor abilities have developed over millions of years and seem to be easy, despite their complexity.

Higher-order cognitive abilities, on the other hand, are the result of more recent cultural development.

Geometry, stock market research, and petroleum engineering are examples of disciplines that are difficult for people to learn but easier for robots to learn.

Summing up Moravec's scientific career, Steven Pinker wrote that "the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard."

Moravec built his first toy robot out of scrap metal when he was eleven years old, and his light-following electronic turtle and a robot operated by punched paper tape earned him two high school science fair honors.

He proposed a Ship of Theseus-like analogy for the viability of artificial brains while still in high school.

Consider replacing a person's human neurons one by one with precisely manufactured equivalents, he said.

At what point would human awareness vanish? Would anybody notice? Could it be established that the person is no longer human?

Later in his career, Moravec would suggest that human knowledge and training might be broken down in the same manner, into subtasks that machine intelligences could take over.

Moravec's master's thesis focused on the development of a computer language for artificial intelligence, while his PhD research focused on the development of a robot that could navigate obstacle courses utilizing spatial representation methods.

These robot vision systems identified regions of interest (ROI) in a scene.

Moravec's early computer vision robots were extremely sluggish by today's standards, taking around five hours to go from one half of the facility to the other.

To measure distance and develop an internal picture of physical impediments in the room, a remote computer carefully analysed continuous video-camera images recorded by the robot from various angles.

Moravec finally developed 3D occupancy grid technology, which allowed a robot to create an awareness of a cluttered area in a matter of seconds.
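
The occupancy-grid idea can be sketched minimally as follows: each cell stores the probability that it is occupied, and each range reading nudges those probabilities up (where an obstacle was detected) or down (where the beam passed through). The grid size and sensor values here are illustrative assumptions, not Moravec's implementation.

import math

# A toy 2D occupancy grid: each cell holds the probability that it is occupied.
SIZE = 20
grid = [[0.5] * SIZE for _ in range(SIZE)]   # 0.5 means "unknown"

def update_cell(prior, p_occupied_given_reading):
    # Bayesian update of a single cell, done in log-odds form.
    log_odds = math.log(prior / (1 - prior))
    log_odds += math.log(p_occupied_given_reading / (1 - p_occupied_given_reading))
    return 1 / (1 + math.exp(-log_odds))

# One range reading: cell (5, 8) is probably occupied, and the cells the beam
# passed through on the way there are probably free.
grid[5][8] = update_cell(grid[5][8], 0.9)
for x in range(8):
    grid[5][x] = update_cell(grid[5][x], 0.2)

print(round(grid[5][8], 2), round(grid[5][3], 2))   # 0.9 0.2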

Moravec's lab took on a new challenge by converting a Pontiac TransSport minivan into one of the world's first road-ready autonomous cars.

The self-driving minivan reached speeds of up to 60 miles per hour.

DANTE II, a robot capable of going inside the crater of an active volcano on Mount Spurr in Alaska, was also constructed by the CMU Robotics Institute.

While DANTE II's immediate aim was to sample harmful fumarole gases, a job too perilous for humans, it was also planned to demonstrate technologies for robotic expeditions to distant worlds.

The volcanic explorer robot used artificial intelligence to navigate the perilous, boulder-strewn terrain on its own.

Because such rovers produced so much visual and other sensory data that had to be analyzed and managed, Moravec believes that experience with mobile robots spurred the development of powerful artificial intelligence and computer vision methods.

For the National Aeronautics and Space Administration (NASA), Moravec's team built fractal branching ultra-dexterous robots ("Bush robots") in the 1990s.

These robots, which were proposed but never built because the necessary manufacturing technologies did not exist, consisted of a branching hierarchy of dynamic articulated limbs, starting with a main trunk and splitting into ever smaller branches.

As a result, the Bush robot would have "hands" at all scales, from macroscopic to tiny.

The tiniest fingers would be nanoscale in size, allowing them to grip very tiny objects.

Because of the complexity of manipulating millions of fingers in real time, Moravec said, the robot would need autonomy and would depend on artificial intelligence agents distributed throughout its limbs and branches.

He believed that the robots might be made entirely of carbon nanotube material, using the rapid prototyping technology known as 3D printing.

Moravec believes that artificial intelligence will have a significant influence on human civilization.

To stress the role of AI in this change, he coined the concept of the "landscape of human capability," which physicist Max Tegmark later converted into a graphic depiction.

Moravec's picture depicts a three-dimensional landscape in which higher elevations represent tasks that are more difficult for humans.

The point where the rising waters meet the shore marks the frontier where machines and humans currently contend with the same tasks.

Art, science, and literature still sit above the waterline, beyond the grasp of AI, but the sea has already submerged mathematics, chess, and the game of Go.

Language translation, autonomous driving, and financial investment are all on the horizon.

More controversially, in two popular books, Mind Children (1988) and Robot: Mere Machine to Transcendent Mind (1999), Moravec engaged in speculation about the future based on what he understood of developments in artificial intelligence research.

By 2040, he predicted, machine intelligence would surpass human intellect, and the biological human species would eventually go extinct.

Moravec arrived at this figure by estimating the functional equivalence between 50,000 million instructions per second (50,000 MIPS) of computing power and a gram of brain tissue.

He calculated that home computers in the early 2000s equaled only an insect's nervous system, but that if processing power doubled every eighteen months, 350 million years of human intellect development could be reduced to just 35 years of artificial intelligence advancement.

He estimated that a hundred million MIPS would be required to create human-like universal robots.
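
The arithmetic behind such extrapolations is simple compounding, as the sketch below shows. The starting MIPS figure and the eighteen-month doubling period are illustrative assumptions only; the year any such projection lands on depends entirely on them, and Moravec's own published estimates pointed to roughly 2040.

import math

def years_to_reach(target_mips, current_mips, doubling_years=1.5):
    # Number of doublings needed, times the assumed doubling period.
    doublings = math.log2(target_mips / current_mips)
    return doublings * doubling_years

# Illustrative inputs only: an early-2000s PC taken as ~1,000 MIPS versus
# Moravec's estimate of ~100 million MIPS for a human-like universal robot.
print(round(years_to_reach(100_000_000, 1_000), 1), "years of doubling required")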

Moravec refers to the sophisticated robots of the year 2040 as our "mind children."

Humans, he claims, will devise techniques to delay biological civilization's final demise.

Moravec, for example, anticipated something like what is now known as universal basic income, in his vision delivered by benign artificial superintelligences.

In a completely automated society, a basic income system would provide monthly cash payments to all individuals without any type of employment requirement.

Moravec is more concerned about the idea of a renegade automated corporation breaking its programming and refusing to pay taxes into the human cradle-to-grave social security system than he is about technological unemployment.

Nonetheless, he predicts that these "wild" intelligences will eventually control the universe.

Moravec has said that his books Mind Children and Robot may have had a direct impact on the last third of Stanley Kubrick's original screenplay for A.I. Artificial Intelligence (later filmed by Steven Spielberg).

In Dan Simmons's science fiction novels Ilium and Olympos, "moravecs" are self-replicating devices named in his honor.

Throughout his life, Moravec defended the same physical functionalism he expressed in his high school musings.

He contends in his most transhumanist publications that the only way for humans to keep up with machine intelligences is to merge with them by replacing sluggish human cerebral tissue with artificial neural networks controlled by super-fast algorithms.

In his publications, Moravec has blended the ideas of artificial intelligence with virtual reality simulation.


He has come up with four scenarios for the development of consciousness:

(1) a human brain in the physical world,

(2) a programmed AI embedded in a physical robot,

(3) a human brain immersed in a virtual reality simulation, and

(4) an AI functioning inside the boundaries of virtual reality.

All four are equally credible depictions of reality, and they are as "real" as we believe them to be.


Moravec is the creator and chief scientist of the Pittsburgh-based Seegrid Corporation, which makes autonomous robotic industrial trucks that navigate warehouses and factories without the fixed guidance infrastructure required by traditional automated guided vehicle systems.

A human trainer physically pushes Seegrid's vehicles through a new facility once.

The robot conducts the rest of the job, determining the most efficient and safe pathways for future journeys, while the trainer stops at the appropriate spots for the truck to be loaded and unloaded.

Seegrid VGVs have driven over two million production miles and transported eight billion pounds of merchandise for customers such as DHL, Whirlpool, and Amazon.

Moravec was born in the Austrian town of Kautzen.

During World War II, his father was a Czech engineer who sold electrical products.

When the Russians invaded Czechoslovakia in 1944, the family moved to Austria.

In 1953, his family relocated to Canada, where he grew up.

Moravec earned a bachelor's degree in mathematics from Acadia University in Nova Scotia, a master's degree in computer science from the University of Western Ontario, and a doctorate from Stanford University, where he worked with John McCarthy and Tom Binford on his thesis.

The Office of Naval Research, the Defense Advanced Research Projects Agency, and NASA have all supported his research.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Superintelligence; Technological Singularity; Workplace Automation.



References & Further Reading:


Moravec, Hans. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press.

Moravec, Hans. 1999. Robot: Mere Machine to Transcendent Mind. Oxford, UK: Oxford University Press.

Moravec, Hans. 2003. “Robots, After All.” Communications of the ACM 46, no. 10 (October): 90–97.

Pinker, Steven. 2007. The Language Instinct: How the Mind Creates Language. New York: Harper.



