
Artificial Intelligence - What Are Expert Systems?

 






Expert systems are used to solve problems that would normally require human experts.


In the early decades of artificial intelligence research, they emerged as one of the most promising application strategies.

The core concept is to convert an expert's knowledge into a computer-based knowledge system.




Dan Patterson, a statistician and computer scientist at the University of Texas at El Paso, identifies several distinguishing properties of expert systems:


• They make decisions based on knowledge rather than facts.

• The task of representing heuristic knowledge in expert systems is daunting.

• Knowledge and the program are generally separated so that the same program can operate on different knowledge bases.

• Expert systems should be able to explain their decisions, represent knowledge symbolically, and have and use meta-knowledge, that is, knowledge about knowledge.





Expert systems generally reflect domain-specific knowledge (Patterson 2008).


Medical research was a frequent test domain for expert systems.

Expert systems were created as a tool to assist medical doctors in their work.

Patients typically communicated their symptoms as answers to the system's questions.

Based on its knowledge base, the system would then attempt to identify the ailment and, in certain cases, recommend appropriate remedies.

MYCIN, a Stanford University-developed expert system for detecting bacterial infections and blood disorders, is one example.




Another well-known application in the realm of engineering and engineering design tries to capture the heuristic knowledge of the design process in the design of motors and generators.


The expert system assists in the initial design phase, when choices such as the number of poles, whether to use AC or DC, and so on are made (Hoole et al. 2003).

The knowledge base and the inference engine are the two components that make up the core framework of expert systems.




The inference engine utilizes the knowledge base to make choices, whereas the knowledge base holds the expert's expertise.

In this way, the knowledge is isolated from the software that manipulates it.

To create an expert system, knowledge must first be gathered, then understood, categorized, and stored.

It is then retrieved to solve problems according to predetermined criteria.

The four main processes in the design of an expert system, according to Thomson Reuters chief scientist Peter Jackson, are obtaining information, representing that knowledge, directing reasoning via an inference engine, and explaining the expert system's answer (Jackson 1999).

The largest challenge in building expert systems was acquiring domain knowledge.

It can be difficult to elicit knowledge from human specialists.


Many variables contribute to the difficulty of acquiring knowledge, but the complexity of encoding heuristic and experienced information is perhaps the most important.



The knowledge acquisition process is divided into five phases, according to Hayes-Roth et al. (1983).

  • Identification: recognizing the problem and the data that must be used to arrive at a solution.
  • Conceptualization: understanding the key concepts and the relationships among the data.
  • Formalization: understanding the relevant search space.
  • Implementation: converting the formalized knowledge into a software program.
  • Testing: checking the rules for completeness and accuracy.


  • Production (rule-based) or non-production systems may be used to represent domain knowledge.
  • In rule-based systems, knowledge is represented by rules in the form of IF-THEN (or IF-THEN-ELSE) expressions.
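As a toy illustration (the facts and rule contents here are invented, not drawn from any particular shell), such rules can be held as plain data and matched against a working memory of facts:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    conditions: frozenset  # facts that must all be present
    conclusion: str        # fact asserted when they are

# Hypothetical medical-style rules, purely illustrative.
rules = [
    Rule(frozenset({"fever", "stiff neck"}), "suspect meningitis"),
    Rule(frozenset({"suspect meningitis"}), "order lumbar puncture"),
]

facts = {"fever", "stiff neck"}
# Fire every rule whose IF-part is satisfied by the current facts.
fired = [r.conclusion for r in rules if r.conditions <= facts]
# fired == ["suspect meningitis"]
```

Separating the rule data from the matching code mirrors the knowledge base/inference engine split described above.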



The inference process is carried out by iteratively going over the rules, either through a forward or backward chaining technique.



  • Forward chaining starts from known conditions and rules and asks what would happen next. Backward chaining starts from a goal and works back to the rules known to be true, asking why this occurred.
  • Forward chaining is defined as when the left side of the rule is assessed first, that is, when the conditions are verified first and the rules are performed left to right (also known as data-driven inference).
  • Backward chaining occurs when the rules are evaluated from the right side, that is, when the outcomes are verified first (also known as goal-driven inference).
  • CLIPS, a public-domain expert system tool that implements forward chaining, was created at NASA's Johnson Space Center. MYCIN, by contrast, is an expert system that chains backward.
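Both strategies can be sketched in a few lines of Python; the rules below are invented stand-ins, not CLIPS or MYCIN content. Forward chaining fires rules until the fact set stops growing, while backward chaining recurses from a goal back toward known facts:

```python
# Each rule: (set of conditions, conclusion). Illustrative only.
rules = [
    ({"rainy", "outdoors"}, "wet"),
    ({"wet"}, "cold"),
]

def forward_chain(facts, rules):
    """Data-driven inference: add conclusions until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven inference: can the goal be derived from the facts?
    (Assumes the rule set is acyclic.)"""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(c, facts, rules) for c in conditions)
               for conditions, conclusion in rules)

# forward_chain({"rainy", "outdoors"}, rules) derives "wet", then "cold".
```

The same rule set serves both directions; only the control strategy differs, which is why the choice of chaining is a property of the inference engine rather than of the knowledge base.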



Associative/semantic networks, frame representations, decision trees, and neural networks may be used in expert system designs based on nonproduction architectures.


An associative/semantic network is made up of nodes and may be used to represent hierarchical knowledge.

  • An example of a system based on an associative network is CASNET.
  • The most well-known use of CASNET was the development of an expert system for glaucoma diagnosis and therapy.
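At its simplest, a semantic network reduces to labeled links between nodes. The miniature glaucoma-flavored network below is invented for illustration and does not reproduce CASNET's actual model:

```python
# Labeled edges: (node, relation) -> related node.
network = {
    ("glaucoma", "is_a"): "eye disease",
    ("eye disease", "is_a"): "disease",
    ("glaucoma", "causes"): "elevated intraocular pressure",
}

def isa_chain(node, net):
    """Walk is_a links upward, collecting the node's ancestors;
    this is how the network encodes hierarchical knowledge."""
    chain = []
    while (node, "is_a") in net:
        node = net[(node, "is_a")]
        chain.append(node)
    return chain

# isa_chain("glaucoma", network) == ["eye disease", "disease"]
```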

In frame architectures, frames are structured collections of closely related knowledge.


  • PIP (Present Illness Program) is an example of a frame-based architecture.
  • MIT and Tufts-New England Medical Center developed PIP to generate hypotheses regarding renal illness.
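A frame can be pictured as a bundle of named slots. The slots and findings below are invented for illustration and do not reproduce PIP's actual knowledge:

```python
# A frame groups closely related knowledge into named slots.
disease_frame = {
    "name": "nephrotic syndrome",
    "findings": ["edema", "proteinuria"],
    "triggers": ["edema"],  # findings that raise this hypothesis
}

def triggered(frame, observed):
    """Hypothesize the frame when any trigger finding is observed."""
    return any(f in observed for f in frame["triggers"])

# triggered(disease_frame, {"edema"}) is True
```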

Top-down knowledge is represented via decision tree structures.


Blackboard system designs are complex systems in which the inference process's direction may be changed during runtime.


The blackboard architecture may be seen in HEARSAY, DARPA's domain-independent expert system.


  • Knowledge is spread throughout a neural network in the form of nodes in neural network topologies.
  • Case-based reasoning attempts to analyze and find solutions for a problem using previously solved examples.
  • A loose connection may be formed between case-based reasoning and judicial law, in which the decision of a comparable but previous case is used to solve a current legal matter.
  • Case-based reasoning is often implemented as a frame, which necessitates a more involved matching and retrieval procedure.
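The retrieve-and-reuse step of case-based reasoning can be sketched as a nearest-match lookup; the case library below is a made-up example, and real systems use far richer matching than simple feature overlap:

```python
# Each stored case: (set of features, known solution).
cases = [
    ({"fever", "cough"}, "influenza"),
    ({"sneezing", "runny nose"}, "common cold"),
]

def retrieve(problem, cases):
    """Return the stored case sharing the most features with the problem."""
    return max(cases, key=lambda case: len(case[0] & problem))

features, solution = retrieve({"fever", "cough", "headache"}, cases)
# solution == "influenza"
```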



There are three options for manually constructing the knowledge base.


  • Knowledge may be elicited via an interview with a computer using interactive tools. This technique is shown by the computer-graphics-based OPAL software, which enabled clinicians with no prior computer training to construct expert medical knowledge bases for the care of cancer patients.
  • Text scanning algorithms that read books into memory are a second alternative to human knowledge base creation.
  • Machine learning algorithms that build competence on their own, with or without supervision from a human expert, are a third alternative still under development.




DENDRAL, a project started at Stanford University in 1965, is an early example of a machine learning architecture project.


DENDRAL was created in order to study the molecular structure of organic molecules.


  • While DENDRAL followed a set of rules to complete its work, META-DENDRAL created its own rules.
  • META-DENDRAL chose the important data points to observe with the aid of a human chemist.




Expert systems may be created in a variety of ways.


  • User-friendly graphical user interfaces are used in interactive development environments to assist programmers as they code.
  • Special languages may be used in the construction of expert systems.
  • Two of the most common options are Prolog (Logic Programming) and LISP (List Processing).
  • Because Prolog is built on predicate logic, it belongs to the logic programming paradigm.
  • One of the first programming languages for artificial intelligence applications was LISP.



Expert system shells are often used by programmers.



A shell provides a platform for knowledge to be programmed into the system.


  • The shell is a layer without a knowledge base, as the name indicates.
  • The Java Expert System Shell (JESS) is a powerful expert system shell written in Java.


Many efforts have been made to blend disparate paradigms to create hybrid systems.


  • One hybrid approach seeks to combine logic-based and object-oriented systems.
  • Object orientation, despite its lack of a rigorous mathematical basis, is very useful in modeling real-world circumstances.

  • Knowledge is represented as objects that encapsulate both the data and the methods for working with it.
  • Object-oriented systems model real-world things more faithfully than procedural programming.
  • The Object Inference Knowledge Specification Language (OI-KSL) is one such approach (Mascrenghe et al. 2002).



Although other languages, such as Visual Prolog, have incorporated object-oriented programming, OI-KSL takes a different approach.


In Visual Prolog, backtracking occurs inside the objects; that is, the methods are backtracked.

OI-KSL takes backtracking to a whole new level, backtracking the object itself.

To cope with uncertainties in the given data, probability theory, heuristics, and fuzzy logic are sometimes utilized.

One example of a Prolog implementation of fuzzy logic was a fuzzy electric lighting system, in which the amount of natural light determined the voltage supplied to the electric bulb (Mascrenghe 2002).

This allowed the system to reason in the face of uncertainty and with little data.
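Mascrenghe's system was written in Prolog; as a loose Python sketch of the same idea (the linear membership ramp and the 230 V supply are assumptions, not details from the paper), the fuzzy degree of "darkness" can drive the bulb voltage continuously rather than switching it on or off:

```python
def dark_membership(light):
    """Degree in [0, 1] to which the ambient light level counts as
    'dark': fully dark at 0.0, not dark at all at 1.0 (linear ramp)."""
    return max(0.0, min(1.0, 1.0 - light))

def bulb_voltage(light, v_max=230.0):
    """Supply voltage in proportion to how 'dark' it currently is."""
    return dark_membership(light) * v_max

# Dim daylight (light = 0.5) yields half voltage: 115.0 V.
```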


Interest in expert systems started to wane in the late 1990s, owing in part to unrealistic expectations for the technology and the expensive cost of upkeep.

Expert systems were unable to deliver on their promises.



Even today, technologies developed in expert systems research are used in fields such as data science, chatbots, and machine intelligence.


  • Expert systems are designed to capture the collective knowledge that mankind has accumulated through millennia of learning, experience, and practice.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Clinical Decision Support Systems; Computer-Assisted Diagnosis; DENDRAL; Expert Systems.



Further Reading:


Hayes-Roth, Frederick, Donald A. Waterman, and Douglas B. Lenat, eds. 1983. Building Expert Systems. Teknowledge Series in Knowledge Engineering, vol. 1. Reading, MA: Addison Wesley.

Hoole, S. R. H., A. Mascrenghe, K. Navukkarasu, and K. Sivasubramaniam. 2003. “An Expert Design Environment for Electrical Devices and Its Engineering Assistant.” IEEE Transactions on Magnetics 39, no. 3 (May): 1693–96.

Jackson, Peter. 1999. Introduction to Expert Systems. Third edition. Reading, MA: Addison-Wesley.

Mascrenghe, A. 2002. “The Fuzzy Electric Bulb: An Introduction to Fuzzy Logic with Sample Implementation.” PC AI 16, no. 4 (July–August): 33–37.

Mascrenghe, A., S. R. H. Hoole, and K. Navukkarasu. 2002. “Prototype for a New Electromagnetic Knowledge Specification Language.” In CEFC Digest. Perugia, Italy: IEEE.

Patterson, Dan W. 2008. Introduction to Artificial Intelligence and Expert Systems. New Delhi, India: PHI Learning.

Rich, Elaine, Kevin Knight, and Shivashankar B. Nair. 2009. Artificial Intelligence. New Delhi, India: Tata McGraw-Hill.



Artificial Intelligence - What Is Explainable AI Or XAI?

 




Explainable AI (XAI) refers to approaches or design decisions used in automated systems such that artificial intelligence and machine learning produce outputs with a logic that humans can understand and explain.




The extensive use of algorithmically assisted decision-making in social situations has raised considerable concerns about the possibility of accidental prejudice and bias being encoded in the decisions.




Furthermore, the application of machine learning in domains that need a high degree of accountability and transparency, such as medicine or law enforcement, emphasizes the importance of outputs that are easy to understand.

The fact that a human operator is not involved in automated decision-making does not rule out the possibility of human bias being embedded in the outcomes produced by machine computation.




Artificial intelligence's already limited accountability is exacerbated by the lack of due process and human logic.




The consequences of algorithmically driven processes are often so complicated that even their engineering designers are unable to understand or predict them.

The black box of AI is a term that has been used to describe this situation.

To address these flaws, the General Data Protection Regulation (GDPR) of the European Union contains a set of regulations that provide data subjects the right to an explanation.

Article 22, which deals with automated individual decision-making, and Articles 13, 14, and 15, which deal with transparency rights in relation to automated decision-making and profiling, are the most relevant provisions.


When a decision based purely on automated processing has "legal implications" or "similarly substantial" effects on a person, Article 22 of the GDPR reserves a "right not to be subject to a decision based entirely on automated processing" (GDPR 2016).





It also provides three exceptions to this right, notably when it is required for a contract, when a member state of the European Union has approved a legislation establishing an exemption, or when a person has expressly accepted to algorithmic decision-making.

Even if an exemption to Article 22 applies, the data subject has the right to "request human involvement on the controller's side, to voice his or her point of view, and to challenge the decision" (GDPR 2016).





Articles 13 through 15 of the GDPR provide a number of notification rights when personal data is collected directly from the data subject (Article 13) or obtained from third parties (Article 14), as well as the right to access such data at any time (Article 15), including "meaningful information about the logic involved" (GDPR 2016).

Recital 71 protects the data subject's right to "receive an explanation of the conclusion taken following such evaluation and to contest the decision" where an automated decision is made that has legal consequences or has a comparable impact on the person (GDPR 2016).





Recital 71 is not legally binding, but it does give advice on how to interpret relevant provisions of the GDPR.

The question of whether a mathematically interpretable model is sufficient to account for an automated judgment and provide transparency in automated decision-making is gaining traction.

Ex-ante/ex-post auditing is an alternative technique that focuses on the processes around machine learning models rather than the models themselves, which may be incomprehensible and counterintuitive.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Algorithmic Bias and Error; Deep Learning.


Further Reading:


Brkan, Maja. 2019. “Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond.” International Journal of Law and Information Technology 27, no. 2 (Summer): 91–121.

GDPR. 2016. European Union. https://gdpr.eu/.

Goodman, Bryce, and Seth Flaxman. 2017. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’” AI Magazine 38, no. 3 (Fall): 50–57.

Kaminski, Margot E. 2019. “The Right to Explanation, Explained.” Berkeley Technology Law Journal 34, no. 1: 189–218.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “A Study into the Layers of Automated Decision-Making: Emergent Normative and Legal Aspects of Deep Learning.” International Review of Law, Computers & Technology 31, no. 2: 170–87.

Selbst, Andrew D., and Solon Barocas. 2018. “The Intuitive Appeal of Explainable Machines.” Fordham Law Review 87, no. 3: 1085–1139.



Artificial Intelligence - Who Is J. Doyne Farmer?

 


J. Doyne Farmer (1952–) is a leading expert in artificial life, artificial evolution, and artificial intelligence in the United States.


He is best known for leading a group of young people who used a wearable computer to gain an edge playing roulette at various Nevada casinos.

While in graduate school, Farmer founded Eudaemonic Enterprises with boyhood friend Norman Packard and others in order to beat the game of roulette in Las Vegas.


Farmer felt that by understanding the mechanics of a roulette ball in motion, they could design a computer to anticipate which numbered pocket it would end up in.


The group identified and exploited the fact that roughly ten seconds elapse between the croupier's release of the ball onto the spinning wheel and the close of betting.

The findings of their research were finally encoded into a little computer buried within a shoe's sole.

The shoe's user entered the ball's location and velocity information with his big toe, and a second person placed the bets when the signal was given.

Owing to frequent hardware problems, the group never won large sums of money, and they quit after approximately a dozen excursions to different casinos.


According to the gang, they had a 20 percent edge over the house.


Several breakthroughs in chaos theory and complexity systems research are ascribed to the Eudaemonic Enterprises group.

Farmer's metadynamics AI algorithms have been used to model the beginning of life and the human immune system's operation.

While at the Santa Fe Institute, Farmer became regarded as a pioneer of complexity economics, or "econophysics." Farmer demonstrated how, similar to a natural food chain, enterprises and groupings of firms build a market ecology of species.


The growth and earnings of individual enterprises, as well as the groups to which they belong, are influenced by this web and the trading methods used by the firms.



Trading businesses, like natural predators, take advantage of these patterns of influence and diversity.


He observed that trading businesses might use both stabilizing and destabilizing techniques to help or hurt the whole market ecology.


  • Farmer cofounded the Prediction Company in order to create advanced statistical financial trading methods and automated quantitative trading in the hopes of outperforming the stock market and making quick money. UBS ultimately bought the firm.
  • He is now working on a book about the rational expectations approach to behavioral economics, and he proposes that complexity economics, which is made up of common “rules of thumb,” or heuristics, discovered in psychological tests and sociological studies of humans, is the way ahead. In chess, “a queen is better than a rook” is one such heuristic.



Farmer is presently Oxford University's Baillie Gifford Professor of Mathematics.


  • He earned his bachelor's degree in physics from Stanford University and his master's degree in physics from the University of California, Santa Cruz, where he studied under George Blumenthal.
  • He is a cofounder of the journal Quantitative Finance and an Oppenheimer Fellow.
  • Farmer grew up in Silver City, New Mexico, where he was motivated by his Scoutmaster, scientist Tom Ingerson, who had the lads looking for abandoned Spanish gold mines and plotting a journey to Mars.
  • He credits such early events with instilling in him a lifelong passion for scientific research.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Newell, Allen.


Further Reading:


Bass, Thomas A. 1985. The Eudaemonic Pie. Boston: Houghton Mifflin Harcourt.

Bass, Thomas A. 1998. The Predictors: How a Band of Maverick Physicists Used Chaos Theory to Trade Their Way to a Fortune on Wall Street. New York: Henry Holt.

Brockman, John, ed. 2005. Curious Minds: How a Child Becomes a Scientist. New York: Vintage Books.

Freedman, David H. 1994. Brainmakers: How Scientists Are Moving Beyond Computers to Create a Rival to the Human Brain. New York: Simon & Schuster.

Waldrop, M. Mitchell. 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster.





Artificial Intelligence - Who Is Anne Foerst?

 


 

Anne Foerst (1966–) is a Lutheran minister, theologian, author, and computer science professor at St. Bonaventure University in Allegany, New York.



In 1996, Foerst earned a doctorate in theology from the Ruhr-University of Bochum in Germany.

She has worked as a research associate at Harvard Divinity School, a project director at MIT, and a research scientist at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory.

She supervised the God and Computers Project at MIT, which encouraged people to discuss existential questions raised by scientific research.



Foerst has written several scientific and popular pieces on the need for improved conversation between religion and science, as well as shifting concepts of personhood in the light of robotics research.



God in the Machine, published in 2004, details her work as a theological counselor to the MIT Cog and Kismet robotics teams.

Foerst's study has been influenced by her work as a hospital counselor, her years at MIT collecting ethnographic data, and the writings of German-American Lutheran philosopher and theologian Paul Tillich.



As a medical counselor, she started to rethink what it meant to be a "normal" human being.


Foerst was inspired to investigate the circumstances under which individuals are believed to be persons after seeing variations in physical and mental capabilities in patients.

In her work, Foerst distinguishes between the terms "human" and "person": human refers to members of our biological species, while person refers to someone who has been granted a form of reversible social inclusion.



Foerst uses the Holocaust as an illustration of how personhood must be conferred but may also be revoked.


As a result, personhood is always vulnerable.

This schematic of personhood, something people bestow on one another, allows Foerst to explore the inclusion of robots as persons.


Tillich's ideas on sin, alienation, and relationality are extended to the connections between humans and robots, as well as robots and other robots, in her work on robots as potential people.


  • People become alienated, according to Tillich, when they ignore opposing polarities in their life, such as the need for safety and novelty or freedom.
  • People reject reality, which is fundamentally ambiguous, when they refuse to recognize and interact with these opposing forces, cutting out or neglecting one side in order to concentrate entirely on the other.
  • People are alienated from their lives, from the people around them, and (for Tillich) from God if they do not accept the complicated conflicts of existence.


AI research thus contains opposing poles of danger and opportunity: the threat of reducing all things to objects or data that can be measured and studied, and the opportunity to enhance people's capacity to form connections and confer identity.



Foerst has attempted to establish a dialogue between theology and other structured fields of inquiry, following Tillich's paradigm.


Despite being warmly welcomed in labs and classrooms, Foerst's work has met skepticism and pushback from some who are concerned that she brings counter-factual notions into the realm of science.

These concerns are crucial data for Foerst, who argues for a mutualistic approach in which AI researchers and theologians accept strongly held preconceptions about the universe and the human condition in order to have fruitful discussions.

Many valuable discoveries come from these dialogues, according to Foerst's study, as long as the parties have the humility to admit that neither side has a perfect grasp of the universe or human existence.



Foerst's work on AI is marked by humility, as she claims that researchers are startled by the vast complexity of the human person while seeking to duplicate human cognition, function, and form in the figure of the robot.


The way people are socially rooted, socially conditioned, and socially accountable adds to the complexity of any particular person.

Because human beings' embedded complexity is intrinsically physical, Foerst emphasizes the significance of an embodied approach to AI.

Foerst explored this embodied technique while at MIT, where having a physical body capable of interaction is essential for robotic research and development.


When addressing the evolution of artificial intelligence (AI), Foerst emphasizes in her work a clear distinction between robots and computers.


Robots have bodies, and those bodies are an important aspect of their learning and interaction abilities.

Although supercomputers can accomplish amazing analytic jobs and participate in certain forms of communication, they lack the ability to learn through experience and interact with others.

Foerst is dismissive of research that assumes intelligent computers may be created by re-creating the human brain.

Rather, she contends that bodies are an important part of intellect.


Foerst proposes raising robots in a way similar to human child-rearing, in which robots are given opportunities to interact with and learn from the environment.


This process is costly and time-consuming, just as it is for human children, and Foerst reports that funding for creative and time-intensive AI research has vanished, replaced by results-driven and military-focused research that justifies itself through immediate applications, especially since the terrorist attacks of September 11, 2001.

Foerst's work incorporates a broad variety of sources, including religious texts, popular films and television programs, science fiction, and examples from the disciplines of philosophy and computer science.



Loneliness, according to Foerst, is a fundamental motivator for humans' desire of artificial life.


Both fictional imaginings of the construction of a mechanical companion species and actual robotics and AI research are driven by feelings of alienation, which Foerst ties to the theological position of a lost contact with God.


Academic critics of Foerst contend that she has replicated a paradigm first proposed by the German theologian and scholar Rudolf Otto in his book The Idea of the Holy (1917).


According to Otto, the experience of the divine may be found in a moment of simultaneous attraction and dread, which he calls the numinous.

Critics contend that Foerst used this concept when she claimed that humans sense attraction and dread in the figure of the robot.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Embodiment, AI and; Nonhuman Rights and Personhood; Pathetic Fallacy; Robot Ethics; Spiritual Robots.


Further Reading:


Foerst, Anne. 2005. God in the Machine: What Robots Teach Us About Humanity and God. New York: Plume.

Geraci, Robert M. 2007. “Robots and the Sacred in Science Fiction: Theological Implications of Artificial Intelligence.” Zygon 42, no. 4 (December): 961–80.

Gerhart, Mary, and Allan Melvin Russell. 2004. “Cog Is to Us as We Are to God: A Response to Anne Foerst.” Zygon 33, no. 2: 263–69.

Groks Science Radio Show and Podcast with guest Anne Foerst. Audio available online at http://ia800303.us.archive.org/3/items/groks146/Groks122204_vbr.mp3. Transcript available at https://grokscience.wordpress.com/transcripts/anne-foerst/.

Reich, Helmut K. 2004. “Cog and God: A Response to Anne Foerst.” Zygon 33, no. 2: 255–62.



Artificial Intelligence - Who Is Martin Ford?


 


Martin Ford (active from 2009 until the present) is a futurist and author who focuses on artificial intelligence, automation, and the future of employment.


Rise of the Robots, his 2015 book, was named the Financial Times and McKinsey Business Book of the Year, as well as a New York Times bestseller.



Artificial intelligence, according to Ford, is the "next killer app" in the American economy.


Ford highlights in his writings that most economic sectors in the United States are becoming more mechanized.


  • The transportation business is being turned upside down by self-driving vehicles and trucks.
  • Self-checkout is transforming the retail industry.
  • The hotel business is being transformed by food preparation robots.


According to him, each of these developments will have a significant influence on the American workforce.



Not only will robots disrupt blue-collar labor, but they will also pose a danger to white-collar employees and professionals in fields such as medicine, media, and finance.


  • According to Ford, the majority of this work is similarly routine and can be automated.
  • In particular, middle management is in jeopardy.
  • According to Ford, there will be no link between human education and training and automation vulnerability in the future, just as worker productivity and remuneration have become unrelated phenomena.

Artificial intelligence will alter knowledge and information work as sophisticated algorithms, machine-learning tools, and clever virtual assistants are incorporated into operating systems, business software, and databases.


Ford’s viewpoint has been strengthened by a 2013 study by Carl Benedikt Frey and Michael Osborne of the Oxford University Martin Program on the Impacts of Future Technology and the Oxford University Engineering Sciences Department.

Frey and Osborne’s study, done with the assistance of machine-learning algorithms, indicated that nearly half of 702 different types of American employment may be automated in the next ten to twenty years.



Ford points out that when automation precipitates primary job losses in areas susceptible to computerization, it will also cause a secondary wave of job destruction in sectors that are sustained by them, even if they are themselves automation resistant.


  • Ford suggests that capitalism will not go away in the process, but it will need to adapt if it is to survive.
  • Job losses will not be immediately staunched by new technology jobs in the highly automated future.

Ford has advocated a universal basic income—or “citizens dividend”—as one way to help American workers transition to the economy of the future.


  • Without consumers making wages, he asserts, there simply won’t be markets for the abundant goods and services that robots will produce.
  • And those displaced workers would no longer have access to home ownership or a college education.
  • A universal basic income could be guaranteed by placing value added taxes on automated industries.
  • The wealthy owners in these industries would agree to this tax out of necessity and survival.



Further financial incentives, he argues, should be targeted at individuals who are working to enhance human culture, values, and wisdom, engaged in earning new credentials or innovating outside the mainstream automated economy.


  • Political and sociocultural changes will be necessary as well.
  • Automation and artificial intelligence, he says, have exacerbated economic inequality and given extraordinary power to special interest groups in places like Silicon Valley.
  • He also suggests that Americans will need to rethink the purpose of employment as they are automated out of jobs.



Work, Ford believes, will not primarily be about earning a living, but rather about finding purpose and meaning and community.


  • Education will also need to change.
  • As the number of high-skill jobs is depleted, fewer and fewer highly educated students will find work after graduation.



Ford has been criticized for assuming that hardly any job will remain untouched by computerization and robotics.


  • It may be that some occupational categories are particularly resistant to automation, for instance, the visual and performing arts, counseling psychology, politics and governance, and teaching.
  • It may also be the case that human energies currently focused on manufacture and service will be replaced by work pursuits related to entrepreneurship, creativity, research, and innovation.



Ford speculates that it will not be possible for all of the employed Americans in the manufacturing and service economy to retool and move to what is likely to be a smaller, shallower pool of jobs.



In The Lights in the Tunnel: Automation, Accelerating Technology, and the Economy of the Future (2009), Ford introduced the metaphor of “lights in a tunnel” to describe consumer purchasing power in the mass market.


A billion individual consumers are represented as points of light that vary in intensity corresponding to purchasing power.

An overwhelming number of lights are of middle intensity, corresponding to the middle classes around the world.

  • Companies form the tunnel. Five billion other people, mostly poor, exist outside the tunnel.
  • In Ford’s view, automation technologies threaten to dim the lights and collapse the tunnel.
  • Automation poses dangers to markets, manufacturing, capitalist economics, and national security.



In Rise of the Robots: Technology and the Threat of a Jobless Future (2015), Ford focused on the differences between the current wave of automation and prior waves.


  • He also commented on disruptive effects of information technology in higher education, white-collar jobs, and the health-care industry.
  • He made a case for a new economic paradigm grounded in the basic income, incentive structures for risk-taking, and environmental sensitivity, and he described scenarios where inaction might lead to economic catastrophe or techno-feudalism.


Ford’s book Architects of Intelligence: The Truth about AI from the People Building It (2018) includes interviews and conversations with two dozen leading artificial intelligence researchers and entrepreneurs.


  • The focus of the book is the future of artificial general intelligence and predictions about how and when human-level machine intelligence will be achieved.



Ford holds an undergraduate degree in Computer Engineering from the University of Michigan.

He earned an MBA from the UCLA Anderson School of Management.

He is the founder and chief executive officer of the software development company Solution-Soft located in Santa Clara, California.



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Brynjolfsson, Erik; Workplace Automation.


Further Reading:


Ford, Martin. 2009. The Lights in the Tunnel: Automation, Accelerating Technology, and the Economy of the Future. Charleston, SC: Acculant.

Ford, Martin. 2013. “Could Artificial Intelligence Create an Unemployment Crisis?” Communications of the ACM 56, no. 7 (July): 37–39.

Ford, Martin. 2016. Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books.

Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building It. Birmingham, UK: Packt Publishing.


