
Artificial Intelligence - Who Is Mark Tilden?

 


Mark Tilden (1961–) is a Canadian freelance designer of biomorphic robots.

A number of his robots are sold as toys.

Others have appeared in television and cinema as props.

Tilden is well known for his opposition to the notion that strong artificial intelligence is required for complex robots.

Tilden is a forerunner in the field of BEAM robotics (biology, electronics, aesthetics, and mechanics).

To replicate biological neurons, BEAM robots use analog circuits and systems with continuously varying signals, rather than digital electronics and microprocessors.

Biomorphic robots are designed to change their gaits in order to save energy.

When such robots encounter obstacles or changes in the underlying terrain, they are knocked out of their lowest-energy state, forcing them to adopt a new walking pattern.

This self-adaptation depends heavily on the mechanics of the underlying machine.

Tilden turned to BEAM-style robots after failing to develop a conventional electronic robot butler in the late 1980s.

That robot, programmed with Isaac Asimov's Three Laws of Robotics, could barely vacuum floors.



After hearing MIT roboticist Rodney Brooks speak at the University of Waterloo on the advantages of basic sensorimotor, stimulus-response robotics over computationally complex mobile devices, Tilden abandoned the project entirely.

Tilden left Brooks' lecture wondering whether dependable robots might be built without computer processors or artificial intelligence.

Rather than having intelligence written into the robot's programming, Tilden hypothesized that intelligence might arise from the robot's operating environment and from the emergent features of its interaction with that world.

Tilden studied and developed a variety of unusual analog robots at the Los Alamos National Laboratory in New Mexico, employing rapid prototyping and off-the-shelf and cannibalized components.



Los Alamos was looking for robots that could operate in unstructured, unpredictable, and possibly hazardous conditions.

Tilden built almost a hundred robot prototypes.

His SATBOT autonomous spacecraft prototype could align itself with the Earth's magnetic field on its own.

He built fifty insectoid robots capable of creeping through minefields and identifying explosive devices for Marine Corps Base Quantico.

A robot known as the "aggressive ashtray" spat water at smokers.

A "solar spinner" was used to clean the windows.

A biomorph made from five broken Sony Walkmans reproduced the actions of an ant.

Tilden started building Living Machines powered by solar cells at Los Alamos.

These machines ran at extremely slow speeds because of their energy source, but they were dependable and efficient over long stretches of time, often more than a year.

Tilden's first robot designs were based on thermodynamic conduit engines, namely small, efficient solar engines that could fire single neurons.

His "nervous net" neurons controlled the rhythms and patterns of motion of robot bodies rather than the workings of robot brains.

Tilden's idea was to maximize the number of possible patterns while using the fewest transistors feasible.

He learned that with just twelve transistors, he could create six different movement patterns.

By folding the six patterns into a figure eight in a symmetrical robot chassis, Tilden could replicate hopping, leaping, running, sitting, crawling, and a variety of other behavior patterns.
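
To make the idea concrete, here is a minimal, idealized sketch of a nervous-net-style pulse ring: a loop of delay elements passes a single pulse around, each node drives one motor phase, and retuning the delays changes the rhythm and hence the gait. The node count, delay values, and motor mapping are illustrative assumptions, not Tilden's actual analog circuit.

```python
# A toy discrete-time model of a "nervous net": a ring of delay elements
# passing a pulse that sequences motor activations. Values are invented.

NODES = 6                      # one delay element ("Nv neuron") per motor phase
DELAYS = [3, 2, 3, 3, 2, 3]    # ticks each node holds the pulse (gait timing)

def run_ring(steps):
    active, timer = 0, DELAYS[0]      # pulse starts at node 0
    pattern = []
    for _ in range(steps):
        pattern.append(active)
        timer -= 1
        if timer == 0:                # pulse decays and fires the next node
            active = (active + 1) % NODES
            timer = DELAYS[active]
    return pattern

# Changing the delay ratios (the analog "tuning") changes the rhythm,
# which reads out on the body as a different gait.
print(run_ring(20))   # -> [0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, ...]
```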

Since then, Tilden has been a proponent of a new set of robot principles for such survivalist wild automata.

Tilden's Laws of Robotics state that (1) a robot must protect its existence at all costs; (2) a robot must obtain and maintain access to its own power source; and (3) a robot must continually seek out better power sources.
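
Read as a control policy, the three laws amount to a fixed priority ordering. A playful sketch, with invented sensor names and thresholds:

```python
def next_goal(robot):
    """Tilden's three laws as prioritized checks (illustrative only)."""
    if robot["threatened"]:        # Law 1: safeguard survival at all costs
        return "flee"
    if robot["charge"] < 0.2:      # Law 2: keep access to your power source
        return "recharge"
    return "seek_better_power"     # Law 3: always seek better power sources

print(next_goal({"threatened": False, "charge": 0.1}))  # -> "recharge"
```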

Tilden thinks that wild robots will be used to rehabilitate ecosystems that have been harmed by humans.

Tilden had another breakthrough when he introduced very inexpensive robots as toys for the general public and robot aficionados.

He wanted his robots in the hands of as many people as possible so that hackers, hobbyists, and members of various maker communities could reprogram and modify them.

Tilden designed the toys in such a way that they could be dismantled and analyzed.

They can be hacked in basic ways.

Everything is color-coded and labeled, and all of the wires have gold-plated contacts that can be pulled apart.

Tilden is presently working with WowWee Toys in Hong Kong on consumer-oriented entertainment robots:

  • B.I.O. Bugs, Constructobots, G.I. Joe Hoverstrike, Robosapien, Roboraptor, Robopet, Roboreptile, Roboquad, Roboboa, Femisapien, and Joebot are all popular WowWee robot toys.
  • The Roboquad was designed for the Jet Propulsion Laboratory's (JPL) Mars exploration program.
  • Tilden is also the developer of the Roomscooper cleaning robot.


WowWee Toys had sold almost three million of Tilden's robot designs by 2005.


Tilden made his first robotic doll when he was three years old.

At the age of six, he built a Meccano suit of armor for his cat.

At the University of Waterloo, he majored in Systems Engineering and Mathematics.


Tilden is presently working on OpenCog and OpenCog Prime alongside artificial intelligence pioneer Ben Goertzel.


OpenCog is a worldwide initiative supported by the Hong Kong government that aims to develop an open-source emergent artificial general intelligence framework as well as a common architecture for embodied robotic and virtual cognition.

Dozens of IT businesses across the globe are already using OpenCog components.

Tilden has worked on a variety of films and television series as a technical adviser or robot designer, including Lara Croft: Tomb Raider (2001), The 40-Year-Old Virgin (2005), X-Men: The Last Stand (2006), and Paul Blart: Mall Cop (2009).

In The Big Bang Theory (2007–2019), his robots are often visible on the bookshelves of Sheldon's apartment.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Brooks, Rodney; Embodiment, AI and.


References And Further Reading

Frigo, Janette R., and Mark W. Tilden. 1995. “SATBOT I: Prototype of a Biomorphic Autonomous Spacecraft.” Mobile Robotics, 66–75.

Hapgood, Fred. 1994. “Chaotic Robots.” Wired, September 1, 1994. https://www.wired.com/1994/09/tilden/.

Hasslacher, Brosl, and Mark W. Tilden. 1995. “Living Machines.” Robotics and Autonomous Systems 15, no. 1–2: 143–69.

Marsh, Thomas. 2010. “The Evolution of a Roboticist: Mark Tilden.” Robot Magazine, December 7, 2010. http://www.botmag.com/the-evolution-of-a-roboticist-mark-tilden.

Menzel, Peter, and Faith D’Aluisio. 2000. “Biobots.” Discover Magazine, September 1, 2000. https://www.discovermagazine.com/technology/biobots.

Rietman, Edward A., Mark W. Tilden, and Manor Askenazi. 2003. “Analog Computation with Rings of Quasiperiodic Oscillators: The Microdynamics of Cognition in Living Machines.” Robotics and Autonomous Systems 45, no. 3–4: 249–63.

Samans, James. 2005. The Robosapiens Companion: Tips, Tricks, and Hacks. New York: Apress.



Artificial Intelligence - What Are Non-Player Characters And Emergent Gameplay?

 


Emergent gameplay occurs when a video game player encounters complex situations that arise from their interactions with the game world and its characters.


In today's video games, players can fully immerse themselves in an intricate, realistic game environment and feel the consequences of their choices.

Players can build and personalize their character and story.

In the Deus Ex series (2000), for example, one of the first emergent gameplay systems, players take on the role of a cyborg in a dystopian metropolis.

They may change the physical appearance of their character as well as their skill sets, missions, and affiliations.

Players may choose between militarized adaptations that allow for more aggressive play and stealthier options.

Choices about how to customize and play alter the plot and the experience, producing unique challenges and outcomes for each player.


Emergent gameplay ensures that the game environment reacts when players interact with other characters or items.



Because there are so many options, the story unfolds in surprising ways as the game world changes.

Specific outcomes are not predetermined by the designer, and emergent gameplay can even take advantage of game flaws to generate actions in the game world, which some consider to be a form of emergence.

Game creators increasingly turn to artificial intelligence to make the game environment respond to player actions in a timely manner.

Artificial intelligence drives the behavior of game characters and their interactions through algorithms: basic rule-based forms that help generate the game environment in sophisticated ways.

"Game AI" refers to the usage of artificial intelligence in games.

The most common use of AI algorithms is to construct non-player characters (NPCs): characters in the game world with whom the player interacts but whom the player does not control.


In its most basic form, AI will use pre-scripted actions for the characters, who will then concentrate on reacting to certain events.


Pre-scripted character behaviors performed by AI are fairly rudimentary, and NPCs are meant to respond to certain "case" events.

The NPC will evaluate its current situation before responding in a range determined by the AI algorithm.

Pac-Man (1980) is a good early and basic illustration of this.

The player steers Pac-Man through a maze while being pursued by a variety of ghosts, the game's non-player characters.


Players could only interact with the ghosts (NPCs) by moving about; the ghosts had limited responses and their own AI-programmed, pre-scripted movement.




The pre-scripted AI reaction occurred when a ghost ran into a wall.

The AI would then roll a virtual die to determine whether the NPC turned toward or away from the player.

If the NPC decided to go after the player, the pre-scripted program would detect the player's location and turn the ghost toward it.

If the NPC decided not to go after the player, it would turn in an opposite or a random direction.

This NPC interaction was very simple and limited; however, it was an early step toward AI-driven emergent gameplay.
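
The described ghost logic fits in a few lines. The sketch below is a loose reconstruction, not the actual 1980 arcade code; the pursue probability and the axis-based steering are assumptions.

```python
import random

DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # grid directions: up/down/right/left

def ghost_turn(ghost_pos, player_pos, pursue_chance=0.5):
    """Called when the ghost hits a wall: pick a new direction."""
    if random.random() < pursue_chance:       # the "die roll": chase the player
        dx = player_pos[0] - ghost_pos[0]
        dy = player_pos[1] - ghost_pos[1]
        # Turn toward the player along the larger displacement axis.
        if abs(dx) >= abs(dy):
            return (1 if dx > 0 else -1, 0)
        return (0, 1 if dy > 0 else -1)
    return random.choice(DIRS)                # otherwise wander off randomly

print(ghost_turn(ghost_pos=(5, 5), player_pos=(10, 3)))
```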



Contemporary games provide far more options and a much larger set of possible interactions for the player.


Players in contemporary role-playing games (RPGs) are given an enormous number of potential options, as exemplified by Fallout 3 (2008) and its sequels.

Fallout is a role-playing game in which the player takes on the role of a survivor in a post-apocalyptic America.

The narrative gives the player a goal but no prescribed path; as a result, the player is free to play as they see fit.

The player can punch every NPC, or they can talk to them instead.

In addition to this variety of actions by the player, there are also a variety of NPCs controlled through AI.

Some of the NPCs are key NPCs, which means they have their own unique scripted dialogue and responses.

This gives them personality and, through the use of AI, a complexity that makes the game environment feel more real.


When talking to key NPCs, the player is given options for what to say, and the key NPCs have their own unique responses.


This differs from the background-character NPCs, as key NPCs are meant to respond in a way that emulates interaction with a real personality.

These are still pre-scripted responses to the player, but the responses are emergent in the sense that they depend on the possible combinations of interactions.

As the player makes decisions, the NPC examines each decision and decides how to respond in accordance with its script.

The NPCs that the players help or hurt and the resulting interactions shape the game world.
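
A key-NPC script of this kind is essentially a branching tree keyed by player choices. A minimal sketch with invented dialogue and node names follows; real RPG dialogue systems add conditions, state flags, and skill checks.

```python
# Each node holds the NPC's line plus the choices that lead to other nodes.
DIALOGUE = {
    "start": {
        "npc": "These wastes aren't safe, stranger. What do you want?",
        "options": {"ask_directions": "help", "threaten": "hostile"},
    },
    "help":    {"npc": "Head east past the river. Watch for raiders.", "options": {}},
    "hostile": {"npc": "Then we have nothing to discuss.", "options": {}},
}

def talk(player_choices, node="start"):
    """Walk the script: each player decision selects the NPC's next response."""
    for choice in player_choices:
        print("NPC:", DIALOGUE[node]["npc"])
        node = DIALOGUE[node]["options"][choice]
    print("NPC:", DIALOGUE[node]["npc"])

talk(["threaten"])   # the hostile branch ends the conversation
```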

Game AI can emulate personalities and produce emergent gameplay in a narrative setting; however, AI is also involved in challenging the player through difficulty settings.


A variety of pre-scripted AI can still be used to create difficulty.

Pre-scripted AI is often designed to make suboptimal decisions for enemy NPCs in games where players fight.

This helps make the game easier and also makes the NPCs seem more human.

Suboptimal pre-scripted decisions make the enemy NPCs easier to handle.

Optimal decisions, however, make the opponents far more difficult to handle.

This can be seen in contemporary games like Tom Clancy’s The Division (2016), where players fight multiple NPCs.

The enemy NPCs range from angry rioters to fully trained paramilitary units.

The rioter NPCs offer an easier challenge as they are not trained in combat and make suboptimal decisions while fighting the player.

The military trained NPCs are designed to have more optimal decision-making AI capabilities in order to increase the difficulty for the player fighting them.
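
One common way to implement this spectrum is to score candidate actions and let a "skill" parameter set how often the NPC picks the best one. The actions and utilities below are invented for illustration, not The Division's actual AI.

```python
import random

def choose_action(actions, skill):
    """actions: {name: utility}. skill in [0, 1]: 0 = rioter, 1 = paramilitary."""
    if random.random() < skill:
        return max(actions, key=actions.get)   # optimal play: a harder fight
    return random.choice(list(actions))        # suboptimal play: an easier fight

actions = {"take_cover": 0.9, "flank": 0.7, "rush_player": 0.2}
print(choose_action(actions, skill=0.2))   # rioter-like behavior
print(choose_action(actions, skill=0.95))  # trained-unit behavior
```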



Emergent gameplay has evolved to its full potential through the use of adaptive AI.


As with pre-scripted AI, the character examines a variety of variables and plans an action.

However, unlike pre-scripted AI, which follows direct instructions, an adaptive AI character makes its own decisions.

This can be done through machine learning.


AI-controlled NPCs follow rules of interaction with the players.


As players go through the game, their interactions are analyzed, and some AI judgments become more heavily weighted than others.

This is done in order to provide distinct player experiences.

Various player behaviors are actively examined, and modifications are made by the AI when designing future challenges.

The purpose of adaptive AI is to challenge players to a degree that keeps the game fun, neither too easy nor too hard.

Difficulty may still be changed if players seek a different challenge.

This can be observed in the AI Director of the Left 4 Dead series (2008).

Players navigate through a level, killing zombies and gathering resources in order to live.


The AI Director chooses which zombies to spawn, where they will spawn, and what supplies will be spawned.

The choice to spawn them is not made at random; rather, it is based on how well the players performed throughout the level.

The AI Director makes its own decisions about how to respond; as a result, the AI Director adapts to the level's player success.

The AI Director gives fewer resources and spawns more adversaries as the difficulty level rises.
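
A director of this sort can be sketched as a small feedback loop: measure player performance, nudge an intensity value, and derive spawn counts from it. The thresholds and formulas below are invented, not Valve's actual AI Director.

```python
class Director:
    def __init__(self):
        self.intensity = 0.5                  # current challenge level

    def update(self, player_performance):
        """player_performance: 0.0 = struggling, 1.0 = dominating."""
        # Ease off when players struggle, push harder when they cruise.
        self.intensity += 0.3 * (player_performance - 0.5)
        self.intensity = min(1.0, max(0.1, self.intensity))

    def plan_wave(self):
        zombies  = int(5 + 20 * self.intensity)   # more enemies at high intensity
        supplies = int(6 - 4 * self.intensity)    # fewer resources at high intensity
        return {"zombies": zombies, "supplies": supplies}

d = Director()
d.update(player_performance=0.9)   # players doing well...
print(d.plan_wave())               # ...so more zombies, fewer supplies
```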


Changes in emergent gameplay are influenced by advancements in simulation and game world design.


As virtual reality technology develops, new technologies will continue to help in this progress.

Virtual reality games provide an even more immersive gaming experience.

Players may use their own hands and eyes to interact with the environment.

Computers are growing more powerful, allowing more realistic graphics and animations to be rendered.


Adaptive AI demonstrates the capability of truly autonomous decision-making, resulting in a genuinely participatory gaming experience.


Game makers are continuing to build more immersive environments as AI improves to provide more lifelike behavior.

These cutting-edge technologies and new AI will elevate emergent gameplay to new heights.

Artificial intelligence has become a crucial part of the video game industry for developing realistic and engrossing gameplay.



~ Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.







Artificial Intelligence - What Is AI Embodiment Or Embodied Artificial Intelligence?

 



Embodied Artificial Intelligence is a method for developing AI that is both theoretical and practical.

It is difficult to fully trace its history because it has roots in many different fields.

Rodney Brooks' "Intelligence Without Representation," written in 1987 and published in 1991, is one claimed origin of the concept.


Embodied AI is still a very new area, with some of the first references to it dating back to the early 2000s.


Rather than focusing on either modeling the brain (connectionism/neural networks) or linguistic-level conceptual encoding (GOFAI, or the Physical Symbol System Hypothesis), the embodied approach to AI considers the mind (or intelligent behavior) to emerge from interaction between the body and the world.

There are hundreds of different and sometimes contradictory approaches to interpreting the role of the body in cognition, the majority of which use the term "embodied."

The idea that the physical body's shape is related to the structure and content of the mind is shared by all of these viewpoints.


Despite the success of neural network and GOFAI (Good Old-Fashioned Artificial Intelligence, or classic symbolic artificial intelligence) techniques in building narrow expert systems, the embodied approach contends that general artificial intelligence cannot be accomplished in code alone.




Consider, for example, a tiny robot with four motors, each driving a separate wheel, and a program that directs the robot to avoid obstacles: the same code can produce dramatically different observable behaviors if the wheels are relocated to different parts of the body or replaced with articulated legs.

This is a basic explanation of why the shape of a body must be taken into account when designing robotic systems, and why embodied AI (rather than merely robotics) considers the dynamic interaction between the body and the surroundings to be the source of sometimes surprising emergent behaviors.


Passive dynamic walkers are an excellent illustration of this approach.

The passive dynamic walker is a bipedal walking model that depends on the dynamic interaction of the leg design and the environment's structure.

The gait is not generated by an active control system.

The walker is propelled forward by gravity, inertia, and the shapes of its feet and legs as it moves down a gentle incline.


This strategy is based on the biological concept of stigmergy.

  • Stigmergy is based on the idea that signs or marks left by actions in the environment inspire future actions.
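
A toy illustration of stigmergy: agents on a small ring of cells leave marks wherever they step, and later movement is biased toward marked cells, so past actions recorded in the environment shape future ones. Everything here (the ring, the weighting, the step count) is an invented minimal example.

```python
import random

marks = [0] * 10                     # environment: mark strength per cell

def step(pos):
    left, right = (pos - 1) % 10, (pos + 1) % 10
    # Weight moves by 1 + mark strength: marks left earlier attract agents.
    weights = [1 + marks[left], 1 + marks[right]]
    pos = random.choices([left, right], weights=weights)[0]
    marks[pos] += 1                  # the move itself leaves a new mark
    return pos

pos = 5
for _ in range(100):
    pos = step(pos)
print(marks)                         # trails reinforce where agents have been
```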




AN APPROACH INFORMED BY ENGINEERING.



Embodied AI is influenced by a variety of domains; engineering and philosophy are two frequent approaches.


In 1986, Rodney Brooks proposed the "subsumption architecture": a method of generating complex behaviors by arranging the system into prioritized lower-level layers that interact with the environment directly, tightly coupling perception and action and attempting to eliminate the higher-level processing of other models.


For example, the robot Genghis (now at the Smithsonian) was created to traverse rugged terrain, a capability that was very challenging to design and engineer at the time.


The success of this approach was primarily due to the design choice to distribute the processing of the various motors and sensors throughout the network rather than attempt a higher-level integration of the system into a full representational model of the robot and its surroundings.

To put it another way, there was no central processing region where all of the robot's parts sought to integrate data for the system.


Cog, a humanoid torso built by the MIT Humanoid Robotics Group in the 1990s, was an early effort at embodied AI.


Cog was created to learn about the world by interacting with it physically.

Cog could, for example, be shown learning how to apply force and weight to a drum while holding drumsticks for the first time, or learning to gauge the weight of a ball once it was placed in Cog's hand.

These early notions of letting the body conduct the learning are still at the heart of the embodied AI initiative.


The Swiss Robots, designed and built in the AI Lab at the University of Zurich, are perhaps one of the most prominent instances of embodied emergent intelligence.



The Swiss Robots were small, simple robots with two motors and two infrared sensors, one of each on each side.

The only high-level instruction in their programming was that if a sensor detected an object on one side, the robot should move in the other direction.

However, when combined with a particular body shape and sensor placement, this rule resulted in what looked like high-level cleaning or clustering behavior in certain situations.
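
The entire controller described above fits in a few lines; a sketch with illustrative speeds and thresholds follows. Note that nothing in this code mentions clustering: that behavior emerged from the body and sensor geometry, not from anything extra in the program.

```python
def control(ir_left, ir_right, threshold=0.5, base=1.0, turn=0.4):
    """Return (left_motor, right_motor) speeds from two IR readings."""
    if ir_left > threshold:          # object sensed on the left: veer right
        return (base, turn)
    if ir_right > threshold:         # object sensed on the right: veer left
        return (turn, base)
    return (base, base)              # nothing sensed: drive straight ahead

print(control(0.8, 0.1))   # -> (1.0, 0.4): turning away from the left
```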

A similar strategy is used in many other robotics projects.


Shakey the Robot, developed by SRI International in the 1960s, is frequently credited as the first mobile robot with the ability to reason about its actions.


Shakey was clumsy and slow and is often portrayed as the polar opposite of what embodied AI attempts to achieve in moving away from higher-level thinking and processing.

However, even in 1968, SRI's approach to embodiment was a clear forerunner of Brooks', since they were the first to assert that the best reservoir of knowledge about the real world is the real world itself.

"The best model of the world is the world itself," according to this notion, has become a rallying cry against higher-level representation in embodied AI.

Earlier robots, in contrast to embodied AI systems, were mostly preprogrammed and did not actively interact with their environments in the way this approach does.


Honda's ASIMO robot, for example, is not a good illustration of embodied AI; rather, it is representative of other, older approaches to robotics.


Work in embodied AI is exploding right now, with Boston Dynamics' robots serving as excellent examples (particularly the non-humanoid forms).

Embodied AI is influenced by a number of philosophical ideas.

Rodney Brooks, a roboticist, pointedly rejects philosophical influence on his technical concerns in a 1991 discussion of his subsumption architecture, while admitting that his arguments mirror Heidegger's.

In several essential design aspects, his ideas match those of the phenomenologist Merleau-Ponty, demonstrating how earlier philosophical issues at least reflect, and likely shape, much of the design work in embodied AI.

Because its methodology experiments toward an understanding of how awareness and intelligent behavior originate, which are highly philosophical pursuits, this work in embodied robotics is deeply philosophical.

Other clearly philosophical themes may be found in a few embodied AI projects as well.

Rolf Pfeifer and Josh Bongard, for example, often draw on philosophical (and psychological) literature in their work, examining how these ideas intersect with their own approaches to developing intelligent machines.


They discuss how these ideas may (and frequently do not) guide the development of embodied AI.


This covers a broad spectrum of philosophical inspirations, such as George Lakoff and Mark Johnson's conceptual metaphor work, Shaun Gallagher's (2005) body image and phenomenology work, and even John Dewey's early American pragmatism.

It is difficult to say how often philosophical concerns drive engineering concerns. But the philosophy of embodiment is probably the most robust of the various disciplines within cognitive science to have done embodiment work, because theorizing took place long before the tools and technologies were available to actually build the machines being imagined.

This suggests that for roboticists interested in the strong AI project, that is, broad intellectual capacities and functions that mimic the human brain, there are likely still unexplored resources here.


~ Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.


Further Reading:


Brooks, Rodney. 1986. “A Robust Layered Control System for a Mobile Robot.” IEEE Journal of Robotics and Automation 2, no. 1 (March): 14–23.

Brooks, Rodney. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

Brooks, Rodney. 1991. “Intelligence Without Representation.” Artificial Intelligence Journal 47: 139–60.

Dennett, Daniel C. 1997. “Cog as a Thought Experiment.” Robotics and Autonomous Systems 20: 251–56.

Gallagher, Shaun. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.

Pfeifer, Rolf, and Josh Bongard. 2007. How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.




Artificial Intelligence - Who Is Rodney Brooks?

 


Rodney Brooks (1954–) is a business and policy adviser, as well as a computer science researcher.

He is a recognized expert in the fields of computer vision, artificial intelligence, robotics, and artificial life.

Brooks is well-known for his work in artificial intelligence and behavior-based robotics.

His iRobot Roomba autonomous robotic vacuum cleaners are among the most widely used home robots in America.

Brooks is well known for his support of a bottom-up approach to computer science and robotics, an insight he arrived at during a long, uninterrupted visit with his wife's relatives in Thailand.

Brooks claims that situatedness, embodiment, and perception are just as crucial as cognition in describing the dynamic actions of intelligent beings.

This method is currently known as behavior-based artificial intelligence or action-based robotics.

Brooks' approach to intelligence, which avoids explicitly planned reasoning, contrasts with the symbolic reasoning and representation method that dominated artificial intelligence research over the first few decades.

Many of the early advances in robotics and artificial intelligence, according to Brooks, were based on the formal framework and logical operators of Alan Turing and John von Neumann's universal computing architecture.

He argued that these artificial systems had drifted far from the biological systems they were supposed to reflect.

Low-speed, massively parallel processing and adaptive interaction with their surroundings were essential for living creatures.

These were not, in his opinion, elements of traditional computer design but rather components of what Brooks, in the mid-1980s, termed "subsumption architecture."

According to Brooks, behavior-based robots are placed in real-world contexts and learn effective behaviors from them.

They need to be embodied in order to be able to interact with the environment and get instant feedback from their sensory inputs.

Specific conditions, signal changes, and real-time physical interactions are usually the source of intelligence.

Intelligence may be difficult to define functionally since it comes through a variety of direct and indirect interactions between different robot components and the environment.

As a professor at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory, Brooks developed numerous notable mobile robots based on the subsumption architecture.

Allen, the first of these behavior-based robots, was outfitted with sonar range and motion sensors.

The robot had three tiers of control.

The first, most rudimentary layer gave it the capacity to avoid static and dynamic obstacles.

The second implemented a random-walk algorithm that let the robot occasionally change course.

The third behavioral layer watched for distant places that might serve as goals and could suppress the other two control layers to head toward them.
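
A minimal sketch of those three layers is below. Real subsumption wires suppression and inhibition between specific signals rather than selecting a whole layer, so this fixed-priority arbitration is a simplification; the sensor fields and command names are invented.

```python
def avoid(s):                        # layer 0: avoid static/dynamic obstacles
    return "turn_away" if s["sonar_min"] < 0.3 else None

def wander(s):                       # layer 1: occasional random heading change
    return "random_turn" if s["ticks"] % 50 == 0 else None

def seek_goal(s):                    # layer 2: head toward a distant target
    return "steer_to_goal" if s["goal_visible"] else None

LAYERS = [avoid, wander, seek_goal]  # most urgent behavior first

def arbitrate(sensors):
    for layer in LAYERS:             # the first layer with something to say wins
        command = layer(sensors)
        if command is not None:
            return command
    return "cruise"

print(arbitrate({"sonar_min": 0.2, "ticks": 50, "goal_visible": True}))
# -> "turn_away": obstacle avoidance overrides wandering and goal seeking
```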

Another robot, Herbert, used a distributed array of 8-bit microprocessors and 30 infrared proximity sensors to avoid obstacles, navigate along low walls, and collect empty drink cans scattered across several offices.

Genghis was a six-legged robot that could walk across rugged terrain and had four onboard microprocessors, 22 sensors, and 12 servo motors.

Genghis was able to stand, balance, and maintain itself, as well as climb stairs and follow humans.

With the support of Anita Flynn, an MIT Mobile Robotics Group research scientist, Brooks began imagining scenarios in which behavior-based robots might assist in exploring the surfaces of other planets.

The two roboticists argued in their 1989 essay "Fast, Cheap, and Out of Control," published in the Journal of the British Interplanetary Society, that space organizations like the Jet Propulsion Laboratory should reconsider plans for expensive, large, and slow-moving mission rovers and instead consider sending larger sets of small rovers to save money and reduce risk.

Brooks and Flynn came to the conclusion that their autonomous robot technology could be constructed and tested swiftly by space agencies, and that it could serve dependably on other planets even when it was out of human control.

When the Sojourner rover arrived on Mars in 1997, it had some behavior-based autonomous robot capabilities, despite its size and unique design.

Brooks and a new Humanoid Robotics Group at the MIT Artificial Intelligence Laboratory started working on Cog, a humanoid robot, in the 1990s.

The term "cog" had two meanings: it referred to the teeth on gears as well as the word "cognitive." Cog had a number of objectives, many of which were aimed at encouraging social communication between the robot and a human.

As built, Cog had a human-like face and a great deal of motor mobility in its head, trunk, and arms.

Cog was equipped with sensors that allowed him to see, hear, touch, and speak.

Cynthia Breazeal, the group researcher who designed Cog's mechanics and control system, used the lessons learned from human interaction with the robot to create Kismet, a new robot in the lab.

Kismet is an affective robot that is capable of recognizing, interpreting, and replicating human emotions.

The meeting of Cog and Kismet was a watershed moment in the history of artificial emotional intelligence.

Rodney Brooks, cofounder and chief technology officer of iRobot Corporation, has sought commercial and military applications of his robotics research in recent decades.

PackBot, a robot commonly used to detect and defuse improvised explosive devices in Iraq and Afghanistan, was developed with a grant from the Defense Advanced Research Projects Agency (DARPA) in 1998.

PackBot was also used at the site of the World Trade Center after the terrorist attacks of September 11, 2001, and to examine damage at the Fukushima Daiichi nuclear power facility in Japan after the 2011 earthquake and tsunami.

Brooks and others at iRobot created a toy robot that was sold by Hasbro in 2000.

The end product, My Real Baby, is a realistic doll that can cry, fuss, sleep, laugh, and show hunger.

The Roomba cleaning robot was created by the iRobot Corporation.

Released in 2002, the Roomba is a disc-shaped vacuum cleaner featuring roller wheels, brushes, filters, and a squeegee vacuum.

The Roomba, like other Brooks behavior-based robots, uses sensors to detect obstacles and avoid dangers such as falling down stairs.

For self-charging and room mapping, newer versions use infrared beams and photocell sensors.

By 2019, iRobot had sold over 25 million robots throughout the globe.

Brooks is also Rethink Robotics' cofounder and chief technology officer.

The company, founded in 2008 as Heartland Robotics, creates low-cost industrial robots.

Baxter, Rethink's first robot, can do basic repetitive activities including loading, unloading, assembling, and sorting.

Baxter's face is a computer screen that displays an animated human face.

Baxter has integrated sensors and cameras that allow it to detect and avoid collisions when people are nearby, a critical safety feature.

Baxter can therefore be used in ordinary industrial settings without a safety cage.

Unskilled personnel may rapidly train the robot by simply moving its arms around in the desired direction to control its actions.

Baxter remembers these gestures and adjusts them to other jobs.

The controls on its arms may be used to make fine motions.
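
That training style, often called kinesthetic teaching, reduces at its simplest to recording poses during a demonstration and replaying them. A toy sketch with invented joint values follows; Rethink's actual software stack is far more involved.

```python
class TeachableArm:
    def __init__(self):
        self.waypoints = []          # remembered poses, in demonstration order

    def record(self, joint_angles):
        """Called while a person physically guides the arm through a motion."""
        self.waypoints.append(list(joint_angles))

    def replay(self):
        """Repeat the demonstrated motion for the new task."""
        for pose in self.waypoints:
            print("move to", pose)   # stand-in for sending motor commands

arm = TeachableArm()
arm.record([0.0, 0.5, 1.2])          # worker guides the arm to the pick pose
arm.record([0.8, 0.1, 0.4])          # ...then to the place pose
arm.replay()
```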

Sawyer is a smaller version of Rethink's Baxter collaborative robot, marketed for performing hazardous or tedious industrial jobs in confined spaces.

Brooks has often said that science is still unable to solve the difficult problems of consciousness.

He claims that artificial intelligence and artificial life researchers have overlooked an essential aspect of living systems, one that keeps the gap between the nonliving and living worlds wide.

This remains true even though all of the world's living things are made of nonliving atoms.

Brooks speculates that some of the AI and ALife researchers' parameters are incorrect, or that current models are too simple.

It's also possible that researchers are still lacking in raw computing power.

However, Brooks thinks there may be something about biological life and subjective experience, a component or a property, that is currently undetectable or hidden from the scientific perspective.

Brooks attended Flinders University in Adelaide, South Australia, to study pure mathematics.

At Stanford University, he earned his PhD under the supervision of John McCarthy, an American computer scientist and cognitive scientist.

Model-Based Computer Vision was the title of his dissertation, which he extended and published in 1984.

From 1997 until 2007, he was director of the MIT Artificial Intelligence Laboratory, which was renamed the Computer Science & Artificial Intelligence Laboratory (CSAIL) in 2003.

Brooks has received various distinctions and prizes for his contributions to artificial intelligence and robotics.

He is a member of both the American Academy of Arts and Sciences and the Association for Computing Machinery.

Brooks has won the IEEE Robotics and Automation Award as well as the Joseph F. Engelberger Robotics Award for Leadership.

He is now the vice chairman of the Toyota Research Institute's advisory board.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Embodiment, AI and; Tilden, Mark.



Further Reading

Brooks, Rodney A. 1984. Model-Based Computer Vision. Ann Arbor, MI: UMI Research Press.

Brooks, Rodney A. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

Brooks, Rodney A. 1991. “Intelligence without Reason.” AI Memo No. 1293. Cambridge, MA: MIT Artificial Intelligence Laboratory.

Brooks, Rodney A. 1999. Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: MIT Press.

Brooks, Rodney A. 2002. Flesh and Machines: How Robots Will Change Us. New York: Pantheon.

Brooks, Rodney A., and Anita M. Flynn. 1989. “Fast, Cheap, and Out of Control.” Journal of the British Interplanetary Society 42 (December): 478–85.
