Artificial Intelligence - Who Is Sherry Turkle?




Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

While her studies in the 1980s focused on how technology affects people's thinking, her work since the 2000s has become more critical of how technology is used at the expense of building and maintaining meaningful interpersonal connections.

She has studied AI-enabled products such as children's toys and robotic pets for the elderly to highlight what people lose out on when interacting with such things.

Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the founder of the MIT Initiative on Technology and Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans relate to and interact with AI.

She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.

However, this viewpoint has since given way to one that considers intelligence to be emergent.

This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

The emergent approach, according to Turkle, aims to emulate the way the human brain functions, helping to break down the barriers between computers and nature, that is, between the artificial and the natural.

In short, an emergent approach to AI allows people to relate to the technology more easily, even to think of AI-based programs and gadgets as something like children.

The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not just for the field of AI but also for Turkle's study and writing on the subject.

In two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008), Turkle began to employ ethnographic research techniques to study the relationship between humans and their devices.

In The Inner History of Devices, she emphasized that her intimate ethnography, the ability to "listen with a third ear," is required to get past the advertising-based clichés that are often employed when discussing technology.

This method involves setting aside time for quiet reflection so that participants can think deeply about their interactions with their devices.

Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

These concerns relate to the increased use of social media as a form of communication and to the growing familiarity and relatability of technological gadgets, which stem from the emergent AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being over a telegraph line in his book God & Golem, Inc. (1964).

Because it reduces both people and technology to information, this style of cybernetic thinking blurs the boundaries between them.

In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.

Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, more broadly referred to as social machines in the literature.

The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

This sense of connection matters more to the typical person engaging with these technologies than abstract philosophical considerations concerning the nature of their intelligence.

Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).

In Alone Together, she gives the example of Adam, who enjoys the appreciation of the AI bots he commands in the game Civilization.

Adam appreciates the fact that he is able to create something fresh when playing.

Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

In Reclaiming Conversation, she expands on this point, suggesting that artificial social partners provide only a perception of camaraderie.

This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

Turkle believes that this transition is critical.

She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.

A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.

Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.

  • People's expectations for companionship have been simplified as a result of this transformation, and the benefits that one hopes to obtain from relationships have been reduced.
  • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and disagreements that are typical of human relationships.
  • By engaging with gadgets, one may form a relationship with them.
  • Conversations between humans have become merely transactional as communication has shifted away from face-to-face conversation and toward interaction mediated by devices.

In other words, the most that can be anticipated is engagement.

Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.

Drawing together these streams of argument, Turkle argues that we have entered a robotic moment in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans.

For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

Turkle has a problem with this because these devices can only respond as if they understand what is being said.

In reality, AI-based gadgets are confined to processing the literal content of the data stored on the device.

They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

For an AI-based device, there is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy.
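The calendar example can be made concrete with a toy sketch. The data format and function below are invented for illustration only; they show that a device-level routine operates on literal fields, with no access to what an entry means to its user.

```python
# A toy sketch of the point above: a reminder routine sees only
# literal fields, so entries with very different human significance
# produce identical machine behavior. The format is hypothetical.

def make_reminder(event):
    # Only the literal fields matter to the program; the human
    # meaning of "chemotherapy" vs. "car maintenance" never enters.
    return f"Reminder: '{event['title']}' at {event['time']}"

appointments = [
    {"title": "car maintenance", "time": "09:00"},
    {"title": "chemotherapy", "time": "09:00"},
]

for event in appointments:
    print(make_reminder(event))
# Both entries pass through exactly the same code path.
```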

A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.

While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.

Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.

  • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
  • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.

~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram

You may also want to read more about Artificial Intelligence here.

See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.

References And Further Reading

  • Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.
  • Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.
  • Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.

Artificial Intelligence - What Is The Turing Test?



The Turing Test is a method of determining whether a machine can exhibit intelligence that mimics, or is indistinguishable from, human intelligence.

The Turing Test, named after computer scientist Alan Turing, is an AI benchmark that assigns intelligence to any machine capable of displaying intelligent behavior comparable to that of a person.

Turing's "Computing Machinery and Intelligence" (1950), which establishes a simple prototype—what Turing calls "The Imitation Game," is the test's locus classicus.

In this game, a judge is asked to determine which of two rooms is occupied by a computer and which by another human, based on anonymized replies to natural language questions the judge poses to each occupant.

While the human respondent must offer accurate answers to the judge's queries, the machine's purpose is to fool the judge into thinking it is human.

According to Turing, the machine may be considered intelligent to the degree that it is successful at this job.

The fundamental benefit of this essentially operationalist view of intelligence is that it avoids complex metaphysics and epistemological issues about the nature and inner experience of intelligent activities.

According to Turing's criteria, little more than empirical observation of outward behavior is required to attribute intelligence to an object.

This is in sharp contrast to the broadly Cartesian epistemological tradition, which holds that some internal self-awareness is a requirement for intelligence.

Turing's method avoids the so-called "problem of other minds" that arises from such a viewpoint—namely, how to be confident of the presence of other intelligent individuals if it is impossible to know their thoughts from a presumably required first-person perspective.

Nonetheless, the Turing Test, at least insofar as it considers intelligence in a strictly formalist manner, is bound up with the spirit of Cartesian epistemology.

The machine in the Imitation Game is a digital computer in the sense of Turing: a set of operations that may theoretically be implemented in any material.

A digital computer consists of three parts: a store of information, an executive unit that carries out individual operations, and a control that regulates the executive unit.

However, as Turing points out, it makes no difference whether these components are created using electrical or mechanical means.

What matters is the formal set of rules that make up the computer's very nature.
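Because only the rule set matters, the same machine can be realized in software. The sketch below, with a toy rule table invented for this example, is one minimal way to render the three-part structure described above; it is illustrative, not code from Turing's paper.

```python
# A minimal sketch of Turing's three-part digital computer:
# a store (tape plus rule table), an executive unit (applies one
# rule), and a control (drives the executive unit until halting).

def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """Execute rules of the form (state, symbol) -> (write, move, next_state)."""
    tape = dict(enumerate(tape))                      # the store
    for _ in range(max_steps):                        # the control
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]   # the executive unit
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example rule table: flip every bit of a binary string, then halt.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip, "1011"))  # -> 0100
```

Nothing in the rule table refers to electronics or mechanics; the same formal rules could be realized in relays, transistors, or, as here, an interpreted dictionary.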

Turing holds to the core belief that intellect is inherently immaterial.

If this is true, it is logical to assume that human intellect functions in a similar manner to a digital computer and may therefore be copied artificially.

Since Turing's work, AI research has been split into two camps: those who embrace this fundamental premise and those who oppose it.

To describe the first camp, John Haugeland created the term "good old-fashioned AI," or GOFAI.

Its adherents have included Marvin Minsky, Allen Newell, Herbert Simon, Terry Winograd, and, most notably, Joseph Weizenbaum, whose program ELIZA was controversially hailed as the first to pass the Turing Test in 1966.

Nonetheless, detractors of Turing's formalism have proliferated, particularly in the past three decades, and GOFAI is now widely regarded as a discredited AI technique.

John Searle's "Minds, Brains, and Programs" (1980), in which Searle presents his now-famous Chinese Room thought experiment, contains one of the most renowned criticisms of GOFAI in general, and of the assumptions of the Turing Test in particular.

In this thought experiment, a person with no prior understanding of Chinese is placed in a room and made to correlate the Chinese characters she receives with other Chinese characters she sends out, following a program scripted in English.

Searle argues that, with adequate mastery of this program, the person inside the room might pass the Turing Test, fooling a native Chinese speaker into thinking she knew Chinese.

Yet since the person in the room is operating exactly as a digital computer does, Searle concludes that Turing-type tests fail to capture the phenomenon of understanding, which he claims entails more than the functionally accurate connection of inputs and outputs.

Searle's argument implies that AI research should take materiality issues seriously in ways that Turing's Imitation Game's formalism does not.

In his own elaboration of the Chinese Room thought experiment, Searle adds that the physical makeup of human beings, particularly their sophisticated nervous systems, brain tissue, and so on, should not be dismissed as unimportant to conceptions of intelligence.

This viewpoint has influenced connectionism, an altogether new approach to AI that aims to build computer intelligence by replicating the electrical circuitry of human brain tissue.

The effectiveness of this strategy has been hotly contested, although it appears to outperform GOFAI in developing generalized kinds of intelligence.

Turing's test, however, may be criticized not just from the standpoint of materialism but also from that of a renewed formalism.

On this view, one may argue that Turing tests are insufficient as a measure of intelligence because they merely attempt to reproduce human behavior, which is frequently far from intelligent.

According to certain variants of this argument, if criteria of rationality are to distinguish rational from irrational human conduct in the first place, they must be derived a priori rather than from actual human behavior.

This line of criticism has gotten more acute as AI research has shifted its focus to the potential of so-called super-intelligence: forms of generalized machine intelligence that far outperform human intellect.

Should this next level of AI be attained, Turing tests would seem to be outdated.

Furthermore, merely discussing the idea of superintelligence would seem to require additional criteria of intelligence beyond strict Turing testing.

Turing may be defended against such accusations by pointing out that establishing a universal criterion of intelligence was never his goal.

Indeed, according to Turing, the purpose is to replace the metaphysically problematic question "can machines think" with a more empirically verifiable alternative: "What will happen when a computer assumes the role [of the man in the Imitation Game]" (Turing 1997, 29–30).

Thus, Turing's test's above-mentioned flaw—that it fails to establish a priori rationality standards—is also part of its strength and drive.

It also explains why the test has had such a lasting influence on AI research in all domains since it was first presented three-quarters of a century ago.


See also: 

Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.

References And Further Reading

Haugeland, John. 1997. “What Is Mind Design?” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.

Searle, John R. 1997. “Minds, Brains, and Programs.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.

Turing, A. M. 1997. “Computing Machinery and Intelligence.” Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.

Artificial Intelligence - Who Was Alan Turing?



Alan Mathison Turing OBE FRS (1912–1954) was a British logician and mathematician.

He is known as the "Father of Artificial Intelligence" and "The Father of Computer Science." 

Turing earned a first-class honors degree in mathematics from King's College, Cambridge, in 1934.

After a fellowship at King's College, Turing received his PhD from Princeton University, where he studied under American mathematician Alonzo Church.

Turing wrote several important papers during his studies, including "On Computable Numbers, with an Application to the Entscheidungsproblem," which proved that the so-called "decision problem" has no general solution.

The decision problem asks whether there exists a general method for determining the validity of any assertion within a mathematical system.

The paper also described a hypothetical Turing machine, an abstract precursor of the modern computer, which could execute any mathematical operation that can be represented as an algorithm.

Turing is best known for his codebreaking work at Bletchley Park's Government Code and Cypher School (GC&CS) during World War II (1939–1945).

Turing's work at GC&CS included heading Hut 8, which was tasked with cracking the German naval Enigma and other extremely difficult ciphers.

Turing's work is credited with shortening the war by years and saving millions of lives, although its impact is hard to measure with precision.

Turing wrote "The Applications of Probability to Cryptography" and "Paper on Statistics of Repetitions" during his tenure at GC&CS, both of which were held secret for seventy years by the Government Communications Headquarters (GCHQ) until being given to the UK National Archives in 2012.

Following WWII, Turing enrolled at the Victoria University of Manchester to study mathematical biology while continuing his work in mathematics, stored-program digital computers, and artificial intelligence.

Turing's 1950 paper "Computing Machinery and Intelligence" looked into artificial intelligence and introduced the concept of the Imitation Game (also known as the Turing Test), in which a human judge uses a set of written questions and responses to try to distinguish between a computer program and a human.

If the computer program imitates a person to the point that the human judge cannot discern the difference between the computer program's and the human's replies, the program has passed the test, indicating that it is capable of intelligent reasoning.

Turochamp, a chess program written by Turing and his colleague D.G. Champernowne, was meant to be executed by a computer, but no machine with adequate capacity existed to test the program.

Turing instead ran the algorithm by hand to test the program.

Turing was well-recognized during his lifetime, despite the fact that most of his work remained secret until after his death.

Turing was appointed an Officer of the Order of the British Empire (OBE) in 1946 and elected a Fellow of the Royal Society (FRS) in 1951.

The Turing Award, named after him, is given annually by the Association for Computing Machinery for contributions to the area of computing.

The Turing Award, which carries a $1 million prize, is commonly recognized as the Nobel Prize of computing.

Turing was relatively open about his sexuality at a time when homosexuality was still illegal in the United Kingdom.

In 1952, Turing was charged with "gross indecency" under Section 11 of the Criminal Law Amendment Act 1885.

Turing was found guilty and given probation conditional on undergoing a year of "chemical castration," in which he was injected with synthetic estrogen.

Turing's conviction had an influence on his career as well.

His security clearance was withdrawn, and he was compelled to stop working for the GCHQ as a cryptographer.

Following successful campaigns for an apology and pardon, the British government enacted the "Alan Turing law" in 2017, which retroactively pardoned thousands of men convicted under Section 11 and other historical laws.

In 1954, Turing died of cyanide poisoning.

Although his death was officially ruled a suicide, it may have been caused by accidental inhalation of cyanide fumes.


See also: 

Chatbots and Loebner Prize; General and Narrow AI; Moral Turing Test; Turing Test.

References And Further Reading

Hodges, Andrew. 2004. “Turing, Alan Mathison (1912–1954).” In Oxford Dictionary of National Biography.

Lavington, Simon. 2012. Alan Turing and His Contemporaries: Building the World’s First Computers. Swindon, UK: BCS, The Chartered Institute for IT.

Sharkey, Noel. 2012. “Alan Turing: The Experiment that Shaped Artificial Intelligence.” BBC News, June 21, 2012.

Artificial Intelligence - What Is The Trolley Problem?


The trolley problem is an ethical dilemma first described by philosopher Philippa Foot in 1967.

Artificial intelligence advancements in different domains have sparked ethical debates regarding how these systems' decision-making processes should be designed.

Of course, there is widespread worry about AI's capacity to assess ethical challenges and respect societal values.

In this classic philosophical thought experiment, an operator stands near a trolley track, next to a lever that determines whether the trolley will continue on its current path or divert to a different track.

Five people are standing on the track where the trolley is running; they cannot get out of the way and are certain to be killed if the trolley continues on its current course.

On the opposite track, there is another person who will be killed if the operator pulls the lever.

The operator has the option of pulling the lever, killing one person while saving five, or doing nothing and allowing the five to perish.

This is a classic conflict between utilitarianism (actions should maximize the well-being of affected individuals) and deontology (actions are right or wrong according to rules, regardless of their consequences).
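The contrast can be caricatured in a few lines of code. The function names and scenario encoding below are invented for illustration and are not part of any standard formulation of the problem.

```python
# Illustrative sketch of the two ethical rules applied to the
# trolley scenario. Both functions and their inputs are hypothetical.

def utilitarian_choice(deaths_if_nothing, deaths_if_pull):
    """Pick whichever action minimizes total deaths (maximizes well-being)."""
    return "pull" if deaths_if_pull < deaths_if_nothing else "do nothing"

def deontological_choice(rule_forbids_killing=True):
    """Judge the act itself: pulling the lever is an act of killing,
    so a rule against killing forbids it, whatever the consequences."""
    return "do nothing" if rule_forbids_killing else "pull"

print(utilitarian_choice(5, 1))   # -> pull
print(deontological_choice())     # -> do nothing
```

The point of the sketch is that the two rules disagree on the same inputs, which is precisely what makes programming a machine's decision procedure contentious.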

With the development of artificial intelligence, the question has arisen of how we should program machines to behave in scenarios that present inescapable trade-offs, such as the Trolley Problem.

The Trolley Problem has been investigated with relation to artificial intelligence in fields such as primary health care, the operating room, security, self-driving automobiles, and weapons technology.

The subject has been studied most thoroughly in the context of self-driving automobiles, where regulations, guidelines, and norms have already been suggested or developed.

Because autonomous vehicles have already driven millions of kilometers in the United States, they already confront versions of this dilemma.

The problem is made more urgent by the fact that a few users of self-driving cars have died while using the technology.

Accidents have sparked even greater public discussion over the proper use of this technology.

The Moral Machine is an online platform established by a team at the Massachusetts Institute of Technology to crowdsource answers to questions about how self-driving cars should prioritize lives.

The creators of the Moral Machine ask visitors to the website to decide what choice a self-driving car should make in a variety of Trolley Problem-style dilemmas.

Respondents must prioritize the lives of car passengers, pedestrians, humans and animals, people walking legally or illegally, and people of various fitness levels and socioeconomic status, among other variables.

When respondents imagine themselves as passengers in the car, they almost always indicate that they would act to save their own lives.

Crowd-sourced answers, however, may not be the best way to resolve Trolley Problem dilemmas.

Trading a pedestrian life for a vehicle passenger's life, for example, may be seen as arbitrary and unjust.

The aggregated solutions currently do not seem to represent simple utilitarian calculations that maximize lives saved or favor one sort of life over another.

It's unclear who will get to select how AI will be programmed and who will be held responsible if AI systems fail.

This obligation might be assigned to policymakers, the corporation that develops the technology, or the people who end up utilizing it.

Each of these factors has its own set of ramifications that must be handled.

The Trolley Problem's usefulness in resolving AI quandaries is not widely accepted.

Some artificial intelligence and ethics scholars dismiss the Trolley Problem as little more than a thought exercise.

Their arguments usually turn on the notion of trade-offs between different lives.

They claim that the Trolley Problem lends credence to the idea that these trade-offs (as well as autonomous vehicle disasters) are unavoidable.

Rather than concentrating on how best to react to dilemmas like the trolley problem, policymakers and programmers should concentrate on how best to avoid such situations in the first place.


See also: 

Accidents and Risk Assessment; Air Traffic Control, AI and; Algorithmic Bias and Error; Autonomous Weapons Systems, Ethics of; Driverless Cars and Trucks; Moral Turing Test; Robot Ethics.

References And Further Reading

Cadigan, Pat. 2018. AI and the Trolley Problem. New York: Tor.

Etzioni, Amitai, and Oren Etzioni. 2017. “Incorporating Ethics into Artificial Intelligence.” Journal of Ethics 21: 403–18.

Goodall, Noah. 2014. “Ethical Decision Making during Automated Vehicle Crashes.” Transportation Research Record: Journal of the Transportation Research Board 2424: 58–65.

Moolayil, Amar Kumar. 2018. “The Modern Trolley Problem: Ethical and Economically Sound Liability Schemes for Autonomous Vehicles.” Case Western Reserve Journal of Law, Technology & the Internet 9, no. 1: 1–32.

Artificial Intelligence - Who Is Mark Tilden?


Mark Tilden (1961–) is a Canadian freelance designer of biomorphic robots.

A number of his robots are sold as toys.

Others have appeared in television and cinema as props.

Tilden is well-known for his opposition to the notion that strong artificial intelligence is required for complicated robots.

Tilden is a forerunner in the field of BEAM robotics (biology, electronics, aesthetics, and mechanics).

To replicate biological neurons, BEAM robots use analog circuits and continuously varying signals rather than digital electronics and microprocessors.

Biomorphic robots adapt their gaits in order to save energy.

When such robots encounter obstacles or changes in the underlying terrain, they are knocked out of their lowest-energy state, forcing them to adopt a new walking pattern.

This self-adaptation relies heavily on the mechanics of the underlying machine.

After failing to develop a traditional electronic robot butler in the late 1980s, Tilden turned to BEAM-style robots.

Programmed with Isaac Asimov's Three Laws of Robotics, the butler could barely vacuum floors.

After hearing MIT roboticist Rodney Brooks speak at Waterloo University on the advantages of basic sensorimotor, stimulus-response robotics versus computationally complex mobile devices, Tilden completely abandoned the project.

Tilden left Brooks's lecture wondering whether dependable robots might be built without computer processors or artificial intelligence.

Rather than having intelligence written into the robot's programming, Tilden hypothesized that intelligence might emerge from the robot's interactions with its operating environment.

Tilden studied and developed a variety of unusual analog robots at the Los Alamos National Laboratory in New Mexico, employing fast prototyping and off-the-shelf and cannibalized components.

Los Alamos was looking for robots that could operate in unstructured, unpredictable, and possibly hazardous conditions.

Tilden built almost a hundred robot prototypes.

His SATBOT autonomous spaceship prototype could align itself with the Earth's magnetic field on its own.

He built fifty insectoid robots capable of creeping through minefields and identifying explosive devices for the Marine Corps Base Quantico.

A robot known as the "aggressive ashtray" spat water at smokers, and a "solar spinner" cleaned windows.

The actions of an ant were reproduced by a biomorph made from five broken Sony Walkmans.

Tilden started building Living Machines powered by solar cells at Los Alamos.

These machines ran at extremely sluggish rates due to their energy source, but they were dependable and efficient for lengthy periods of time, often more than a year.

Tilden's first robot designs were based on thermodynamic conduit engines, namely tiny and efficient solar engines that could fire single neurons.

His "nervous net" neurons controlled the rhythms and patterns of motion of robot bodies, rather than emulating the workings of their brains.

Tilden's aim was to maximize the number of possible movement patterns while using the fewest transistors feasible.

He learned that with just twelve transistors, he could create six different movement patterns.

By folding the six patterns into a figure eight in a symmetrical robot chassis, Tilden could replicate hopping, leaping, running, sitting, crawling, and a variety of other behaviors.
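A loose digital analogy, not Tilden's actual analog transistor circuitry, can suggest how a single ring of pulse-carrying neurons yields multiple movement patterns; the ring model and its functions below are invented for illustration.

```python
# A software caricature of a "nervous net": one or more pulses
# circulate around a ring of delay neurons, and which neurons fire
# at each step sets the rhythm of the attached legs. Real BEAM nets
# are analog circuits; this digital sketch is only illustrative.

def step_ring(active):
    """Advance every circulating pulse one neuron around the ring."""
    n = len(active)
    return [active[(i - 1) % n] for i in range(n)]

def gait_sequence(n_neurons, pulse_positions, steps):
    """Record which neurons (legs) fire at each time step."""
    active = [i in pulse_positions for i in range(n_neurons)]
    history = []
    for _ in range(steps):
        history.append([i for i, a in enumerate(active) if a])
        active = step_ring(active)
    return history

# One pulse in a four-neuron ring: legs fire one after another,
# a crude analog of a creeping gait.
print(gait_sequence(4, {0}, 4))     # -> [[0], [1], [2], [3]]

# Two opposite pulses: alternating pairs, a different gait from the
# very same ring -- the kind of multiplicity Tilden exploited.
print(gait_sequence(4, {0, 2}, 2))  # -> [[0, 2], [1, 3]]
```

Changing only the initial pulse placement changes the emergent gait, echoing the idea that a small fixed circuit can support many behaviors.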

Since then, Tilden has been a proponent of a new set of robot principles for such survivalist wild automata.

Tilden's Laws of Robotics state that (1) a robot must protect its existence at all costs; (2) a robot must obtain and maintain access to its own power source; and (3) a robot must continually search for better power sources.

Tilden thinks that wild robots will be used to rehabilitate ecosystems that have been harmed by humans.

Tilden had another breakthrough when he introduced very inexpensive robots as toys for the general public and robot aficionados.

He wanted his robots in the hands of as many people as possible, so that hackers, hobbyists, and members of various maker communities could reprogram and modify them.

Tilden designed the toys in such a way that they could be dismantled and analyzed.

They could be hacked in basic ways: everything is color-coded and labeled, and all of the wires have gold-plated contacts that can be pulled apart.

Tilden is presently working with WowWee Toys in Hong Kong on consumer-oriented entertainment robots:

  • B.I.O. Bugs, Constructobots, G.I. Joe Hoverstrike, Robosapien, Roboraptor, Robopet, Roboreptile, Roboquad, Roboboa, Femisapien, and Joebot are all popular WowWee robot toys.
  • The Roboquad was designed for the Jet Propulsion Laboratory's (JPL) Mars exploration program.
  • Tilden is also the developer of the Roomscooper cleaning robot.

WowWee Toys sold almost three million of Tilden's robot designs by 2005.

Tilden made his first robotic doll when he was three years old.

At the age of six, he built a Meccano suit of armor for his cat.

At the University of Waterloo, he majored in Systems Engineering and Mathematics.

Tilden is presently working on OpenCog and OpenCog Prime alongside artificial intelligence pioneer Ben Goertzel.

OpenCog is a worldwide initiative supported by the Hong Kong government that aims to develop an open-source emergent artificial general intelligence framework as well as a common architecture for embodied robotic and virtual cognition.

Dozens of IT businesses across the globe are already using OpenCog components.

Tilden has worked on a variety of films and television series as a technical adviser or robot designer, including Lara Croft: Tomb Raider (2001), The 40-Year-Old Virgin (2005), Paul Blart Mall Cop (2009), and X-Men: The Last Stand (2006).

In The Big Bang Theory (2007–2019), his robots often appear on the bookshelves of Sheldon's apartment.


See also: 

Brooks, Rodney; Embodiment, AI and.

References And Further Reading

Frigo, Janette R., and Mark W. Tilden. 1995. “SATBOT I: Prototype of a Biomorphic Autonomous Spacecraft.” Mobile Robotics, 66–75.

Hapgood, Fred. 1994. “Chaotic Robots.” Wired, September 1, 1994.

Hasslacher, Brosl, and Mark W. Tilden. 1995. “Living Machines.” Robotics and Autonomous Systems 15, no. 1–2: 143–69.

Marsh, Thomas. 2010. “The Evolution of a Roboticist: Mark Tilden.” Robot Magazine, December 7, 2010.

Menzel, Peter, and Faith D’Aluisio. 2000. “Biobots.” Discover Magazine, September 1, 2000.

Rietman, Edward A., Mark W. Tilden, and Manor Askenazi. 2003. “Analog Computation with Rings of Quasiperiodic Oscillators: The Microdynamics of Cognition in Living Machines.” Robotics and Autonomous Systems 45, no. 3–4: 249–63.

Samans, James. 2005. The Robosapiens Companion: Tips, Tricks, and Hacks. New York: Apress.
