Artificial Intelligence - What Is Swarm Intelligence and Distributed Intelligence?



Distributed intelligence is the natural next step from developing single autonomous agents: building groups of distributed autonomous agents that coordinate themselves.

A multi-agent system is made up of many agents.

Communication is a prerequisite for cooperation.

The fundamental concept is to allow for distributed problem-solving rather than employing a collection of agents as a simple parallelization of the single-agent technique.

Agents effectively cooperate, exchange information, and assign duties to one another.

Sensor data, for example, is exchanged to learn about the current condition of the environment, and an agent is given a task based on who is in the best position to complete that job at the time.

Agents might be software or embodied agents in the form of robots, resulting in a multi-robot system.

RoboCup Soccer (Kitano et al. 1997) is an example of this, in which two teams of robots compete at soccer.

Typical challenges include detecting the ball cooperatively and sharing that knowledge, as well as assigning tasks, such as who will go after the ball next.



Agents may have a complete global perspective or simply a partial picture of the surroundings.

Restricting information to an agent's local area can reduce the complexity of both the agent and the overall approach.

Regardless of their local perspective, agents may communicate, disseminate, and transmit information across the agent group, resulting in a distributed collective vision of global situations.





Distributed intelligence may be constructed according to three distinct concepts: scalable decentralized systems, non-scalable decentralized systems, and decentralized systems with central components.

Without a master-slave hierarchy or a central control element, all agents in scalable decentralized systems function in equal roles.

Because the system only allows for local agent-to-agent communication, there is no need for all agents to coordinate with each other.

This allows for potentially huge system sizes.

All-to-all communication is an important aspect of the coordination mechanism in non-scalable decentralized systems, but it may become a bottleneck in systems with too many agents.

A typical RoboCup-Soccer system, for example, requires all robots to cooperate with all other robots at all times.

Finally, in decentralized systems with central components, the agents may interact with one another through a central server (e.g., cloud) or be coordinated by a central control.

It is feasible to mix the decentralized and central approaches by delegating basic tasks to the agents, who will complete them independently and locally, while more difficult activities will be managed centrally.

Vehicular ad hoc networks are an example of a use case (Liang et al. 2015).

Each agent is self-contained, yet collaboration aids in traffic coordination.

For example, intelligent automobiles may build dynamic multi-hop networks to notify others about an accident that is still hidden from view.

For a safer and more efficient traffic flow, cars may coordinate passing moves.

All of this may be accomplished by worldwide communication with a central server or, depending on the stability of the connection, through local car-to-car communication.

Swarm intelligence research combines the study of natural swarm systems with the design of artificial, engineered distributed systems.

Extracting fundamental principles from decentralized biological systems and translating them into design principles for decentralized engineering systems is a core notion in swarm intelligence (scalable decentralized systems as defined above).

Swarm intelligence was inspired by the collective behaviors of flocks, swarms, and herds.

Social insects such as ants, honeybees, wasps, and termites are a good example.

These swarm systems are built on self-organization and work in a fundamentally decentralized manner.

Crystallization, pattern formation in embryology, and synchronization in swarms are examples of self-organization, which is a complex interplay of positive feedback (deviations are amplified) and negative feedback (deviations are damped).

In swarm intelligence, systems with the following key features are investigated: • The system is made up of a large number of autonomous agents that are homogeneous in terms of their capabilities and behaviors.

• Each agent follows a set of relatively simple rules compared to the task's complexity.

• The resulting system behavior is heavily reliant on agent interaction and collaboration.

Reynolds (1987) produced a seminal paper detailing flocking behavior in birds based on three basic local rules: alignment (align direction of movement with neighbors), cohesion (remain close to your neighbors), and separation (keep a minimum distance to any agent).

The result is an emergent, self-organizing flocking behavior that mimics real flocks.
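
As an illustration, the minimal sketch below implements the three rules in Python with numpy; the neighborhood radius and rule weights are illustrative assumptions, not values taken from Reynolds (1987).

```python
# The three boid rules on 2D positions/velocities held in numpy arrays.
import numpy as np

def boids_step(pos, vel, radius=5.0, min_dist=1.0,
               w_align=0.05, w_cohere=0.01, w_separate=0.1, dt=1.0):
    """One update step for all agents; pos and vel are (N, 2) arrays."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist < radius) & (dist > 0)   # local information only
        if neighbors.any():
            # Alignment: steer toward the neighbors' mean velocity.
            new_vel[i] += w_align * (vel[neighbors].mean(axis=0) - vel[i])
            # Cohesion: steer toward the neighbors' center of mass.
            new_vel[i] += w_cohere * offsets[neighbors].mean(axis=0)
        too_close = (dist < min_dist) & (dist > 0)
        if too_close.any():
            # Separation: steer away from agents that are too close.
            new_vel[i] -= w_separate * offsets[too_close].sum(axis=0)
    return pos + new_vel * dt, new_vel

rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 50, (30, 2)), rng.normal(0, 1, (30, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```

Note that each agent reads only its neighborhood, so the per-agent cost is independent of the total swarm size, which is exactly the scalability property discussed below.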

By depending only on local interactions between agents, a high level of resilience may be achieved.

Any agent, at any moment, has only a limited understanding of the system's global state (swarm-level state) and relies on communication with nearby agents to complete its task.

Because the swarm's knowledge is distributed across its members, there is usually no single point of failure.

A perfectly homogeneous swarm has a high degree of redundancy; that is, all agents have the same capabilities and can therefore be replaced by any other.

By depending only on local interactions between agents, a high level of scalability may be obtained.

Due to the distributed data storage architecture, there is less need to synchronize data or keep it coherent.

Because the communication and coordination overhead for each agent is dictated by the size of its neighborhood, the same algorithms may be employed for systems of nearly any scale.

Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are two well-known examples of swarm intelligence in engineered systems from the optimization discipline.

Both are metaheuristics, which means they may be used to solve a wide range of optimization problems.

ACO was inspired by ants and their use of pheromones to find the shortest paths.

The optimization problem must be represented as a graph.

A swarm of virtual ants travels from node to node, each choosing its next edge probabilistically based on how many other ants have used that edge before (via pheromone, implementing positive feedback) and on a heuristic value, such as edge length (greedy search).

Evaporation of pheromones balances the exploration-exploitation trade-off (negative feedback).
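
The following is a minimal sketch of this scheme in Python for a symmetric distance matrix, in the style of the basic Ant System; alpha, beta, and rho are conventional ACO parameter names, but the specific values here are illustrative assumptions.

```python
# Minimal Ant System: probabilistic edge choice plus evaporation/deposit.
import numpy as np

def aco(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                  # pheromone on each edge
    eta = 1.0 / (dist + np.eye(n))         # heuristic: prefer short edges
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ant in range(n_ants):
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                w = (tau[i] ** alpha) * (eta[i] ** beta)
                w[tour] = 0.0              # never revisit a node
                tour.append(int(rng.choice(n, p=w / w.sum())))
            tours.append(tour)
        tau *= 1.0 - rho                   # evaporation: negative feedback
        for tour in tours:
            edges = list(zip(tour, tour[1:] + tour[:1]))
            length = sum(dist[i, j] for i, j in edges)
            if length < best_len:
                best_tour, best_len = tour, length
            for i, j in edges:             # deposit: positive feedback
                tau[i, j] += 1.0 / length
                tau[j, i] += 1.0 / length
    return best_tour, best_len

cities = np.random.default_rng(1).uniform(0, 10, (8, 2))
d = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)
print(aco(d))
```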

The traveling salesman problem, vehicle routing, and network routing are all typical ACO applications.

Flocking is a source of inspiration for PSO.

Agents navigate the search space using velocity vectors that are influenced by the globally and locally best-known solutions (positive feedback), the agent's previous trajectory (inertia), and random perturbations.
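
The sketch below shows this velocity update in Python; the inertia weight and attraction coefficients are common textbook defaults used purely for illustration, and the sphere function stands in for an arbitrary objective.

```python
# Minimal PSO: velocity update from inertia, personal best, global best.
import numpy as np

def pso(f, dim, n_particles=30, n_iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()]                     # global best so far
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))    # random perturbations
        v = (w * v                                    # inertia: previous path
             + c1 * r1 * (pbest - x)                  # pull toward local bests
             + c2 * r2 * (g - x))                     # pull toward global best
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()]
    return g, f(g)

# Example: minimize the sphere function; the optimum is at the origin.
best_x, best_val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_x, best_val)
```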

While both ACO and PSO conceptually operate in a completely distributed manner, they do not require parallel computing to be implemented.

They may, however, be parallelized with ease.

Swarm robotics is the application of swarm intelligence to embodied systems, while ACO and PSO are software-based methods.

Swarm robotics applies the concept of self-organizing systems based on local information to multi-robot systems with a high degree of resilience and scalability.

Following the example of social insects, the goal is to keep each individual robot relatively simple compared to the task complexity while still allowing the robots to collaborate to solve complex problems.

Because a swarm robot operates only on local information, it can communicate only with nearby swarm robots.

The applied control algorithms are designed to allow maximum scalability given a fixed swarm density (i.e., a constant number of robots per unit area).

The same control methods should perform effectively regardless of system size, whether the swarm is grown or shrunk by adding or removing robots.

A super-linear performance improvement is often found, meaning that doubling the size of the swarm improves the swarm's performance by more than a factor of two: if, say, 10 robots collect 100 items per hour, 20 robots might collect 250.

As a result, each robot in the larger swarm is individually more productive than before.

Swarm robotics systems have been demonstrated to be effective for a wide range of activities, including aggregation and dispersion behaviors, as well as more complicated tasks like item sorting, foraging, collective transport, and collective decision-making.

Rubenstein et al. (2014) conducted the largest swarm robotics experiment to date, using 1,024 miniature mobile robots to demonstrate self-assembly by arranging the robots into predefined shapes.

The majority of the tests were conducted in the lab, but new research has taken swarm robots to the field.

Duarte et al. (2016), for example, built a swarm of autonomous surface vessels that cruise the ocean together.

Modeling the relationship between individual behavior and swarm behavior, creating advanced design methodologies, and deriving guarantees of system properties are all major challenges in swarm intelligence.

The micro-macro problem is the challenge of determining the resulting swarm behavior from a given individual behavior, and vice versa.

It has proven to be a difficult challenge that manifests itself both in mathematical modeling and, as an engineering problem, in the design of robot controllers.

Creating sophisticated methods to design swarm behavior is not only crucial to swarm intelligence research but has also proved to be very difficult.

Similarly, due to the combinatorial explosion of action-to-agent assignments, multi-agent learning and evolutionary swarm robotics (i.e., application of evolutionary computation techniques to swarm robotics) do not scale well with task complexity.

Despite the benefits of robustness and scalability, obtaining strong guarantees for swarm intelligence systems is challenging.

Swarm systems' availability and reliability can only be assessed experimentally in general. 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


AI and Embodiment.


Further Reading:


Bonabeau, Eric, Marco Dorigo, and Guy Theraulaz. 1999. Swarm Intelligence: From Natural to Artificial System. New York: Oxford University Press.

Duarte, Miguel, Vasco Costa, Jorge Gomes, Tiago Rodrigues, Fernando Silva, Sancho Moura Oliveira, Anders Lyhne Christensen. 2016. “Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.” PloS One 11, no. 3: e0151834.

Hamann, Heiko. 2018. Swarm Robotics: A Formal Approach. New York: Springer.

Kitano, Hiroaki, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, Eiichi Osawa, Hitoshi Matsubara. 1997. “RoboCup: A Challenge Problem for AI.” AI Magazine 18, no. 1: 73–85.

Liang, Wenshuang, Zhuorong Li, Hongyang Zhang, Shenling Wang, Rongfang Bie. 2015. “Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends.” International Journal of Distributed Sensor Networks 11, no. 8: 1–11.

Reynolds, Craig W. 1987. “Flocks, Herds, and Schools: A Distributed Behavioral Model.” Computer Graphics 21, no. 4 (July): 25–34.

Rubenstein, Michael, Alejandro Cornejo, and Radhika Nagpal. 2014. “Programmable Self-Assembly in a Thousand-Robot Swarm.” Science 345, no. 6198: 795–99.




Artificial Intelligence - What Is Immortality in the Digital Age?




The act of putting a human's memories, knowledge, and/or personality into a long-lasting digital memory storage device or robot is known as digital immortality.

Human intelligence is therefore displaced by artificial intelligence that resembles the mental pathways or imprint of the brain in certain respects.

The National Academy of Engineering has identified reverse-engineering the brain as a grand challenge; success would attain substrate independence—that is, copying the thinking and feeling mind and reproducing it on a range of physical or virtual media.

Whole Brain Emulation (also known as mind uploading) is a theoretical science that assumes the mind is a dynamic process independent of the physical biology of the brain and its unique sets or patterns of atoms.

Instead, the mind is a collection of information-processing functions that can be computed.

Whole Brain Emulation is presently assumed to be based on the neural networking discipline of computer science, which has as its own ambitious objective the programming of an operating system modeled after the human brain.

Artificial neural networks (ANNs) are statistical models built from biological neural networks in artificial intelligence research.

Through weighted connections, and through backpropagation to adjust those weights and other parameters, ANNs can process information in a nonlinear way.
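
As a concrete illustration of these mechanics (a generic toy example, not any system discussed here), the following minimal sketch trains a two-layer network on XOR using plain numpy backpropagation.

```python
# A two-layer network learning XOR: weighted connections, a nonlinear
# activation, and backpropagation-based parameter adjustment.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden layer (nonlinear)
    out = sigmoid(h @ W2 + b2)          # output layer
    # Backpropagate the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))                     # approaches [0, 1, 1, 0]
```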

Through his online "Mind Uploading Home Page," Joe Strout, a computational neurobiology enthusiast at the Salk Institute, facilitated debate of whole brain emulation in the 1990s.

Strout argued for the material origins of consciousness, claiming that evidence from damage to actual people's brains points to neuronal, connectionist, and chemical origins.

Strout shared timelines of previous and contemporary technical advancements as well as suggestions for future uploading techniques through his website.

Mind uploading proponents believe that one of two methods will eventually be used: (1) gradual copy-and-transfer of neurons by scanning the brain and simulating its underlying information states, or (2) deliberate replacement of natural neurons with more durable artificial mechanical devices or manufactured biological products.

Strout gathered information on a variety of theoretical ways for achieving the objective of mind uploading.

One is a microtome method, which involves slicing a live brain into tiny slices and scanning it with a sophisticated electron microscope.

The brain is then reconstructed in a synthetic substrate using the image data.

Nanoreplacement involves injecting small devices into the brain to monitor the input and output of neurons.

When these minuscule robots have a complete understanding of all biological interactions, they will eventually kill the neurons and replace them.

A robot with billions of appendages that delve deep into every section of the brain, as envisioned by Carnegie Mellon University roboticist Hans Moravec, is used in a variation of this process.

In this approach, the robot creates a virtual model of every portion and function of the brain, gradually replacing it.

Everything that the physical brain used to be is eventually replaced by a simulation.

In copy-and-transfer whole brain emulation, scanning or mapping neurons is commonly considered harmful.

The live brain is plasticized or frozen before being divided into parts, scanned, and simulated on a computational medium.

Philosophically, the technique creates a mental clone of a person, not the person who agrees to participate in the experiment.

Only a duplicate of that individual's personal identity survives the duplicating experiment; the original person dies.

Because, as philosopher John Locke reasoned, someone who recalls thinking about something in the past is the same person as the person who performed the thinking in the first place, the copy may be thought of as the genuine person.

Alternatively, it's possible that the experiment may turn the original and copy into completely different persons, or that they will soon diverge from one another through time and experience as a result of their lack of shared history beyond the experiment.

There have been many nondestructive approaches proposed as alternatives to damaging the brain during the copy-and-transfer process.

It is hypothesized that sophisticated types of gamma-ray holography, x-ray holography, magnetic resonance imaging (MRI), biphoton interferometry, or correlation mapping using probes might be used to reconstruct function.

The present limit of available technology, electron microscope tomography, has reached the sub-nanometer scale, producing 3D reconstructions of atomic-level detail.

The majority of the remaining challenges are related to the geometry of tissue specimens and tomographic equipment's so-called tilt-range restrictions.

Advanced kinds of image recognition, as well as neurocomputer manufacturing to recreate scans as information-processing components, are in the works.

Professor of Electrical and Computer Engineering Alice Parker leads the BioRC Biomimetic Real-Time Cortex Project at the University of Southern California, which focuses on reverse-engineering the brain.

Together with nanotechnology professor Chongwu Zhou and her students, Parker is now designing and fabricating a carbon nanotube memory and brain nanocircuit for a future synthetic cortex based on statistical predictions.

Her neuromorphic circuits are designed to mimic the complexities of human neural computations, including glial cell connections (these are nonneuronal cells that form myelin, control homeostasis, and protect and support neurons).

Members of the BioRC Project are developing systems that scale to the size of human brains.

Parker is attempting to include dendritic plasticity into these systems, which will allow them to adapt and expand as they learn.

Carver Mead, a Caltech electrical engineer who has been working on electronic models of human neurological and biological components since the 1980s, is credited with the approach's roots.

The Terasem Movement, which began in 2002, aims to educate and urge the public to embrace technical advancements that advance the science of mind uploading and integrate science, religion, and philosophy.

The Terasem Movement, the Terasem Movement Foundation, and the Terasem Movement Transreligion are all incorporated entities that operate together.

Martine Rothblatt and Bina Aspen Rothblatt, serial entrepreneurs, founded the group.

The Rothblatts are inspired by the religion of Earthseed, which may be found in Octavia Butler's 1993 novel Parable of the Sower.

"Life is intentional, death is voluntary, God is technology, and love is fundamental," according to Rothblatt's trans-religious ideas (Roy 2014).

Terasem's CyBeRev (Cybernetic Beingness Revival) project collects all available data about a person's life—their personal history, recorded memories, photographs, and so on—and stores it in a separate data file in the hopes that their personality and consciousness can be pieced together and reanimated one day by advanced software.

The Terasem Foundation-sponsored Lifenaut research retains mindfiles with biographical information on individuals for free and keeps track of corresponding DNA samples (biofiles).

Bina48, a social robot created by the foundation, demonstrates how a person's consciousness may one day be transplanted into a lifelike android.

Numenta, an artificial intelligence firm based in Silicon Valley, is aiming to reverse-engineer the human neocortex.

Jeff Hawkins (creator of the portable PalmPilot personal digital assistant), Donna Dubinsky, and Dileep George are the company's founders.

Numenta's idea of the neocortex is based on Hawkins' and Sandra Blakeslee's theory of hierarchical temporal memory, which is outlined in their book On Intelligence (2004).

Time-based learning algorithms, which can store and recall patterns in data as the data change over time, are at the heart of Numenta's emulation technology.

The business created Grok, a commercial tool that detects anomalies in computer servers.

Other applications, such as detecting anomalies in stock market trading or abnormalities in human behavior, have been provided by the business.
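
The sketch below is not Numenta's hierarchical temporal memory; it is a far simpler illustration of the same kind of task, flagging anomalies in a streaming server metric with an exponentially weighted running mean and variance.

```python
# Streaming anomaly detection with an exponentially weighted baseline.
import math

def anomaly_scores(stream, alpha=0.05, threshold=3.0):
    """Yield (value, is_anomaly) for each value in a metric stream."""
    mean, var = None, 0.0
    for x in stream:
        if mean is None:          # first observation seeds the baseline
            mean = x
            yield x, False
            continue
        z = abs(x - mean) / max(math.sqrt(var), 1.0)
        yield x, z > threshold    # flag large deviations from the baseline
        # Update the running statistics so the learned baseline tracks
        # gradual change in the data over time.
        diff = x - mean
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)

cpu_load = [20, 22, 21, 23, 22, 95, 21, 22]   # one obvious spike
for value, flagged in anomaly_scores(cpu_load):
    print(value, "ANOMALY" if flagged else "ok")
```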

Carboncopies is a non-profit that funds research and cooperation to capture and preserve unique configurations of neurons and synapses carrying human memories.

Computational modeling, neuromorphic hardware, brain imaging, nanotechnology, and philosophy of mind are all areas where the organization supports research.

Randal Koene, a computational neuroscientist educated at McGill University and head scientist at neuroprosthetic company Kernel, is the organization's creator.

Dmitry Itskov, a Russian new media millionaire, donated early funding for Carboncopies.

Itskov is also the founder of the 2045 Initiative, a non-profit organization dedicated to extreme life extension.

The purpose of the 2045 Initiative is to develop high-tech methods for transferring personalities into an "advanced nonbiological carrier." Global Future 2045, a meeting aimed at developing "a new evolutionary strategy for mankind," is organized by Koene and Itskov.

Proponents of digital immortality see a wide range of practical results as a result of their efforts.

For example, in the case of death by accident or natural causes, a saved backup mind may be used to reawaken into a new body.

(It's reasonable to assume that elderly brains would seek out new bodies long before aging becomes apparent.) This is also the premise of Arthur C. Clarke's science fiction novel The City and the Stars (1956), which influenced Koene's decision to pursue a career in science at the age of thirteen.

Alternatively, mankind as a whole may be able to lessen the danger of global catastrophe by uploading their thoughts to virtual reality.

Civilization might be saved on a high-tech hard drive buried deep into the planet's core, safe from hostile extraterrestrials and incredibly strong natural gamma ray bursts.

Another potential benefit is the potential for life extension over lengthy periods of interstellar travel.

For extended travels throughout space, artificial brains might be implanted into metal bodies.

This is a notion that Clarke foreshadowed in the last pages of his science fiction classic Childhood's End (1953).

It's also the response offered by Manfred Clynes and Nathan Kline in their 1960 Astronautics article "Cyborgs and Space," which includes the first mention of astronauts whose physical capacities extend beyond conventional limitations (zero gravity, space vacuum, cosmic radiation) thanks to mechanical help.

Under real mind uploading circumstances, it may be possible to simply encode the human mind as a signal and transmit it to a nearby exoplanet that is the best candidate for the discovery of alien life.

The hazards to humans are negligible in each situation when compared to the present threats to astronauts, which include exploding rockets, high-speed impacts with micrometeorites, and faulty suits and oxygen tanks.

Another potential benefit of digital immortality is real restorative justice and rehabilitation through criminal mind retraining.

Or, alternatively, mind uploading might enable penalties to be administered well beyond the normal life spans of those who have committed heinous crimes.

Digital immortality has far-reaching social, philosophical, and legal ramifications.

The concept of digital immortality has long been a hallmark of science fiction.

Frederik Pohl's widely reprinted short story "The Tunnel Under the World" (1955) features workers who are killed in a chemical plant explosion, only to be rebuilt as miniature robots and subjected to advertising campaigns and jingles over the course of a long, Truman Show-like repeating day.

The Silicon Man (1991) by Charles Platt relates the tale of an FBI agent who finds a hidden operation named LifeScan.

The project, headed by an aging millionaire and a mutinous crew of government experts, has found a technique to transfer human thought patterns to a computer dubbed MAPHIS (Memory Array and Processors for Human Intelligence Storage).

MAPHIS is capable of delivering any standard stimuli, including pseudomorphs, which are simulations of other persons.

Greg Egan's hard science fiction novel Permutation City (1994) introduces the Autoverse, which simulates complex miniature biospheres and virtual worlds populated by artificial life forms.

Egan refers to human consciousnesses scanned into the Autoverse as copies.

The story is inspired by John Conway's Game of Life's cellular automata, quantum ontology (the link between the quantum universe and human perceptions of reality), and what Egan refers to as dust theory.

At the core of dust theory is the premise that physics and mathematics are the same, and that individuals residing in any mathematical, physical, or spacetime system (and all are possible) are essentially data, processes, and relationships.

This claim is similar to MIT physicist Max Tegmark's Theory of Everything, which states that "all structures that exist mathematically exist also physically, by which we mean that in those complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically 'real' world" (Tegmark 1998, 1).

Hans Moravec, a roboticist at Carnegie Mellon University, makes similar assertions in his article "Simulation, Consciousness, Existence" (1999).

Tron (1982), Freejack (1992), and The 6th Day (2000) are examples of mind uploading and digital immortality in the movies.

Kenneth D. Miller, a theoretical neuroscientist at Columbia University, is a notable skeptic.

While rebuilding an active, functional mind may be achievable, connectomics researchers (those working on a wiring schematic of the whole brain and nervous system) remain, according to Miller, millennia away from finishing the job.

And, he claims, connectomics is just concerned with the first layer of brain activities that must be comprehended in order to replicate the complexity of the human brain.

Others have wondered what happens to personhood in situations where individuals are no longer constrained as physical organisms.

Is identity just a series of connections between neurons in the brain? What will happen to markets and economic forces? Is a body required for immortality? Professor Robin Hanson of George Mason University provides an economic and social viewpoint on digital immortality in his nonfiction book The Age of Em: Work, Love, and Life When Robots Rule the Earth (2016).

Hanson's hypothetical ems are scanned emulations of genuine humans who exist in both virtual reality environments and robot bodies.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Technological Singularity.


Further Reading:


Clynes, Manfred E., and Nathan S. Kline. 1960. “Cyborgs and Space.” Astronautics 14, no. 9 (September): 26–27, 74–76.

Farnell, Ross. 2000. “Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan’s ‘Permutation City.’” Science Fiction Studies 27, no. 1: 69–91.

Global Future 2045. http://gf2045.com/.

Hanson, Robin. 2016. The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford, UK: Oxford University Press.

Miller, Kenneth D. 2015. “Will You Ever Be Able to Upload Your Brain?” New York Times, October 10, 2015. https://www.nytimes.com/2015/10/11/opinion/sunday/will-you-ever-be-able-to-upload-your-brain.html.

Moravec, Hans. 1999. “Simulation, Consciousness, Existence.” Intercommunication 28 (Spring): 98–112.

Roy, Jessica. 2014. “The Rapture of the Nerds.” Time, April 17, 2014. https://time.com/66536/terasem-trascendence-religion-technology/.

Tegmark, Max. 1998. “Is ‘the Theory of Everything’ Merely the Ultimate Ensemble Theory?” Annals of Physics 270, no. 1 (November): 1–51.

2045 Initiative. http://2045.com/.


Artificial Intelligence - Who Is Peter Diamandis?



Peter Diamandis (1961–) is a Harvard medical doctor who also has an MIT degree in aeronautical engineering.

He's also a serial entrepreneur, having created or cofounded twelve businesses, most of which are still operational today, including International Space University and Singularity University.

The XPRIZE Foundation is his idea, and it hosts challenges in futuristic fields including space technology, low-cost mobile medical diagnostics, and oil spill cleanup.

Singularity University, which trains CEOs and graduate students on exponentially developing technology, is chaired by him.

The major difficulties that mankind faces are the focus of Diamandis' work.

His interests were first solely centered on space travel.

He believed that mankind should be a multiplanetary species when he was a teenager.

When he recognized that the US government was unwilling to fund NASA's lofty ambitions for colonization of other planets, he selected the private sector as the new space engine.

He launched many not-for-profit start-ups while still a student at Harvard and MIT, including International Space University (1987), which is now situated in France.

International Microspace, a for-profit microsatellite launcher, was created by him in 1989.

In 1992, Diamandis founded Zero Gravity, a firm dedicated to providing consumers with the sensation of weightlessness via parabolic flights.

Stephen Hawking is the most renowned of the 12,000 clients who have experienced zero gravity thus far.

In 2004, he established the XPRIZE Foundation, which essentially offers large incentive prizes (five to ten million dollars).

Diamandis and Ray Kurzweil cofounded Singularity University in 2008 to teach individuals how to think in terms of exponential technologies and to help entrepreneurs use those technologies to solve humanity's most urgent challenges.

Planetary Resources, an asteroid mining firm that promises to create low-cost spacecraft, was established by him in 2012.

Diamandis is often referred to as a futurist.

If that's the case, he's a unique kind of futurist, since he doesn't extrapolate patterns or make intricate prophecies.

Diamandis' primary task is matchmaking: he finds major issues on the one hand and then connects them to viable remedies on the other.

He has developed incentive prizes and a network of influential billionaires to fund such prizes in order to uncover viable answers.

Larry Page, James Cameron, and the late Ross Perot are among the billionaires who have backed Diamandis' endeavors.

Diamandis began by focusing on the difficulty of getting humans into space, but over the last three decades he has broadened his focus to include all of humanity's big concerns: exploration (space and oceans), life sciences, education, global development, energy, and the environment.

The next frontier for Diamandis is increased longevity.

He believes that the causes of early mortality may be eliminated and that people in general can live longer and healthier lives.

He also thinks that a person's mental peak may be extended by 20 years.

In 2014, Diamandis founded Human Longevity, a biotechnology business based in San Diego, alongside genomics specialist Craig Venter and stem cell pioneer Robert Hariri to tackle the challenge of longevity.

Four years later, he cofounded Celularity, a longevity-focused firm that offers stem cell-based antiaging therapies, alongside Hariri.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Kurzweil, Ray; Technological Singularity.


Further Reading:


Diamandis, Peter. 2012. Abundance: The Future Is Better Than You Think. New York: Free Press.

Guthrie, Julian. 2016. How to Make a Spaceship: A Band of Renegades, an Epic Race, and the Birth of Private Spaceflight. New York: Penguin Press.




Artificial Intelligence - Who Is Daniel Dennett?

 



At Tufts University, Daniel Dennett (1942–) is the Austin B. Fletcher Professor of Philosophy and Co-Director of the Center for Cognitive Studies.

Philosophy of mind, free will, evolutionary biology, cognitive neuroscience, and artificial intelligence are his main areas of study and publishing.

He has written over a dozen books and hundreds of articles.

Much of this research has focused on the origins and nature of consciousness, as well as how naturalistically it may be described.

Dennett is also an ardent atheist, and one of the New Atheism's "Four Horsemen." Richard Dawkins, Sam Harris, and Christopher Hitchens are the others.

Dennett's worldview is naturalistic and materialistic throughout.

He opposes Cartesian dualism, which holds that the mind and body are two distinct things that merge.

Instead, he contends that the brain is a form of computer that has developed through time due to natural selection.

Dennett also opposes the homunculus theory of the mind, which holds that the brain has a central controller or "little man" who performs all of the thinking and emotion.

Dennett, on the other hand, argues for a viewpoint he calls the multiple drafts model.

According to his theory, which he lays out in his 1991 book Consciousness Explained, the brain is constantly sifting through, interpreting, and editing sensations and inputs, forming overlapping drafts of experience.

Dennett later used the metaphor of "fame in the brain" to describe how various aspects of ongoing neural processes are periodically emphasized at different times and under different circumstances.

Consciousness is a story made up of these varied interpretations of human events.

Dennett dismisses the assumption that these ideas coalesce or are organized in a central portion of the brain, which he mockingly refers to as the "Cartesian theater." The brain's story is instead a never-ending, uncentralized flow of bottom-up awareness that spans time and place.

Dennett denies the existence of qualia, which are subjective individual experiences such as how colors seem to the human eye or how food feels.

He does not deny that colors and tastes exist; rather, he claims that the sensation of color and taste does not exist as a separate thing in the human mind.

He claims that there is no difference between human and computer "sensation experiences." According to Dennett, just as some robots can discern between colors without people deciding that they have qualia, so can the human brain.

For Dennett, the color red is just the quality that brains sense and which is referred to as red in the English language.

It has no extra, indescribable quality.

This is a crucial consideration for artificial intelligence because the ability to experience qualia is frequently seen as a barrier to the development of Strong AI (AI that is functionally equivalent to human intelligence) and as something that will invariably distinguish human and machine intelligence.

However, if qualia do not exist, as Dennett contends, it cannot constitute a stumbling block to the creation of machine intelligence comparable to that of humans.

Dennett compares our brains to termite colonies in another metaphor.

Termites do not join together and plot to form a mound, but their individual activities cause it to happen.

The mound is the consequence of natural selection producing uncomprehending expertise in cooperative mound-building rather than intellectual design by the termites.

To create a mound, termites don't need to comprehend what they're doing.

Likewise, comprehension is an emergent attribute of such abilities.

Brains, according to Dennett, are control centers that have evolved to respond swiftly and effectively to threats and opportunities in the environment.

As the demands of responding to the environment grow more complicated, understanding emerges as a tool for dealing with them.

On a sliding scale, comprehension is a question of degree.

Dennett, for example, considers bacteria's quasi-comprehension in response to diverse stimuli and computers' quasi-comprehension in response to coded instructions to be on the low end of the range.

On the other end of the spectrum, he places Jane Austen's comprehension of human social processes and Albert Einstein's understanding of relativity.

However, they are just changes in degree, not in type.

Natural selection has shaped both extremes of the spectrum.

Comprehension is not a separate mental process arising from the brain's varied abilities.

Rather, understanding is a collection of these skills.

Consciousness is an illusion to the extent that we recognize it as an additional element of the mind in the shape of either qualia or cognition.

In general, Dennett advises mankind to avoid positing understanding when basic competence would suffice.

Humans, on the other hand, often adopt what Dennett refers to as an "intentional stance" toward other humans and, in some cases, animals.

When individuals perceive acts as the outcome of mind-directed thoughts, emotions, desires, or other mental states, they adopt the intentional stance.

This is in contrast to what he calls the "physical stance" and the "design stance."

The physical stance is when something is seen as the outcome of simply physical forces or natural laws.

Gravity causes a stone to fall when it is dropped, not any conscious purpose to return to the ground.

In the design stance, an action is seen as the mindless outcome of a preprogrammed, or predetermined, purpose.

An alarm clock, for example, beeps at a certain time because it was built to do so, not because it chose to do so on its own.

In contrast to both the physical and design stances, the intentional stance considers behaviors and acts as though they are the consequence of the agent's deliberate decision.

It might be difficult to decide whether to apply the intentional or the design stance to computers.

A chess-playing computer has been created with the goal of winning.

However, its movements are often indistinguishable from those of a human chess player who wants or intends to win.

In fact, taking the intentional stance toward the computer's behavior, rather than the design stance, improves a human's ability to interpret and respond to that behavior.

Dennett claims that the intentional stance is the best strategy to adopt toward both humans and computers, since it works best in describing the behavior of each.

Furthermore, there is no need to differentiate them in any way.

Though the intentional stance treats behavior as agent-driven, it does not require taking a position on what is truly going on inside the human's or machine's internal workings.

This posture provides a neutral starting point from which to investigate cognitive competency without presuming a certain explanation of what's going on behind the scenes.

Dennett sees no reason why AI should be impossible in theory since human mental abilities have developed organically.

Furthermore, by abandoning the concept of qualia and adopting an intentional stance that relieves people of the responsibility of speculating about what is going on in the background of cognition, two major impediments to solving the hard problem of consciousness have been removed.

Dennett argues that since the human brain and computers are both machines, there is no good theoretical reason why humans should be capable of acquiring competence-driven understanding while AI should be intrinsically unable.

Consciousness in the traditional sense is illusory, hence it is not a need for Strong AI.

Dennett does not believe that Strong AI is theoretically impossible.

He feels that society's technical sophistication is still at least fifty years away from producing it.

Strong AI development, according to Dennett, is not desirable.

Humans should strive to build AI tools, but Dennett believes that attempting to make computer pals or colleagues would be a mistake.

Such robots, he claims, would lack human moral intuitions and understanding, and hence would not be able to integrate into human society.

Humans do not need robots to provide friendship since they have each other.

Robots, even AI-enhanced machines, should be seen as tools to be utilized by humans alone.


 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; General and Narrow AI.


Further Reading:


Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1993. Consciousness Explained. London: Penguin.

Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA: MIT Press.

Dennett, Daniel C. 2008. Kinds of Minds: Toward an Understanding of Consciousness. New York: Basic Books.

Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton.

Dennett, Daniel C. 2019. “What Can We Do?” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, 41–53. London: Penguin Press.

Artificial Intelligence - Emotion Recognition And Emotional Intelligence.





A group of academics released a meta-analysis in 2019 of studies investigating whether a person's emotional state can be determined from their facial movements. 

They came to the conclusion that there is no evidence that emotional state can be predicted from expression, regardless of whether the assessment is made by a person or by technology. 


The coauthors noted, "[Facial expressions] in question are not 'fingerprints' or diagnostic displays that dependably and explicitly convey distinct emotional states independent of context, person, or culture."


  "It's impossible to deduce pleasure from a grin, anger from a scowl, or grief from a frown with certainty." 

Alan Cowen might dispute this statement. An ex-Google scientist, he is the creator of Hume AI, a new research lab and "empathetic AI" firm coming out of stealth today. 


Hume claims to have created datasets and models that "react beneficially to [human] emotion signals," allowing clients ranging from huge tech firms to startups to recognize emotions based on a person's visual, vocal, and spoken expressions. 

"When I first entered the area of emotion science, the majority of researchers were focusing on a small number of posed emotional expressions in the lab. 

Cowen told, "I wanted to apply data science to study how individuals genuinely express emotion out in the world, spanning ethnicities and cultures." 

"I uncovered a new universe of nuanced and complicated emotional behaviors that no one had ever recorded before using new computational approaches, and I was quickly publishing in the top journals." That's when businesses started contacting me." 

Hume, which has 10 workers and just secured $5 million in investment, claims to train its emotion-recognizing algorithms using "huge, experimentally-controlled, culturally varied" datasets from individuals throughout North America, Africa, Asia, and South America. 

Regardless of the data's representativeness, some experts doubt the premise that emotion-detecting algorithms have a scientific basis. 




"The kindest view I have is that there are some really well-intentioned folks who are naive enough that... the issue they're attempting to cure is caused by technology," 

~ Os Keyes, an AI ethics scientist at the University of Washington. 




"Their first offering raises severe ethical concerns... It's evident that they aren't addressing the topic as a problem to be addressed, interacting deeply with it, and contemplating the potential that they aren't the first to conceive of it." 

HireVue, Entropik Technology, Emteq, Neurodata Labs, Nielsen-owned Innerscope, Realeyes, and Eyeris are among the businesses in the developing "emotion AI" sector. 

Entropik says that their technology can interpret emotions "through facial expressions, eye gazing, speech tone, and brainwaves," which it sells to companies wishing to track the effectiveness of their marketing efforts. 

Neurodata created software that the Russian bank Rosbank uses to assess the emotional state of clients calling customer support centers. 



Emotion AI is being funded by more than just startups. 


In 2016, Apple bought Emotient, a San Diego company that develops AI systems to analyze facial expressions. 

Amazon's Alexa apologizes and asks for clarification when it senses irritation in a user's voice. 

Nuance, a speech recognition firm that Microsoft bought in April 2021, has shown off a device for automobiles that assesses driver emotions based on facial cues. 

In May, Swedish business Smart Eye bought Affectiva, an MIT Media Lab spin-off that claimed it could identify rage or dissatisfaction in speech in 1.2 seconds. 


According to Markets & Markets, the emotion AI market is expected to almost double in size from $19 billion in 2020 to $37.1 billion in 2026. 



Hundreds of millions of dollars have been invested in firms like Affectiva, Realeyes, and Hume by venture investors eager to get in on the ground floor. 


According to the Financial Times, the technology is being used by film companies such as Disney and 20th Century Fox to gauge public response to new series and films. 

Meanwhile, marketing organizations have been putting the technology to the test for customers like Coca-Cola and Intel to examine how audiences react to commercials. 

The difficulty is that there are few – if any – universal indicators of emotion, which calls into doubt the accuracy of emotion AI. 

The bulk of emotion AI businesses are based on psychologist Paul Ekman's seven basic emotions (joy, sorrow, surprise, fear, anger, disgust, and contempt), which he introduced in the early 1970s. 

However, further study has validated the common sense assumption that individuals from diverse backgrounds express their emotions in quite different ways. 



Context, conditioning, relationality, and culture all have an impact on how individuals react to situations. 


For example, scowling, which is commonly linked with anger, has been observed to appear on the faces of furious persons fewer than 30% of the time. 

In Malaysia, the apparently universal expression for fear is the stereotype for a threat or anger. 


  • Later, Ekman demonstrated that there are disparities in how American and Japanese pupils respond to violent films, with Japanese students adopting "a whole distinct set of emotions" if another person is around, especially an authority figure. 
  • Gender and racial biases in face analysis algorithms have been extensively established, and are caused by imbalances in the datasets used to train the algorithm. 



In general, an AI system that has been trained on photographs of lighter-skinned humans may struggle with skin tones that are unknown to it. 
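
One common way such bias is made visible is a per-group accuracy audit. The sketch below shows the idea; the group labels and prediction records are hypothetical placeholders, not data from any vendor discussed here.

```python
# Compare a classifier's accuracy across demographic groups.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records for an emotion classifier.
records = [
    ("lighter-skinned", "joy", "joy"),
    ("lighter-skinned", "anger", "anger"),
    ("darker-skinned", "joy", "anger"),
    ("darker-skinned", "anger", "anger"),
]
print(accuracy_by_group(records))
# A large accuracy gap between groups signals bias inherited from the data.
```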


This isn't the only kind of prejudice that exists. 

Retorio, an AI employment tool, was seen to react differently to the identical applicant wearing glasses versus wearing a headscarf. 


  • Researchers from MIT, the Universitat Oberta de Catalunya in Barcelona, and the Universidad Autonoma de Madrid revealed in a 2020 study that algorithms may become biased toward specific facial expressions, such as smiling, lowering identification accuracy. 
  • Researchers from the University of Cambridge and the Middle East Technical University discovered that at least one of the public datasets often used to train emotion recognition systems was contaminated. 



There are substantially more Caucasian faces than Asian or Black faces in the datasets these AI systems are trained on. 


  • Recent studies have shown that major vendors' emotion analysis programs assign more negative feelings to Black men's faces than to white men's, highlighting the repercussions. 
  • Persons with impairments, disorders like autism, and people who communicate in various languages and dialects, such as African-American Vernacular English (AAVE), all have different voices. 
  • A native French speaker doing an English survey could hesitate or enunciate a word with considerable trepidation, which an AI system might misinterpret as an emotion signal. 



Despite the faults in the technology, some businesses and governments are eager to use emotion AI to make high-stakes judgments. 


Employers use it to assess prospective workers by giving them a score based on their empathy or emotional intelligence. 

It's being used in schools to track pupils' participation in class — and even when they're doing homework at home. 

Emotion AI has also been tried at border checkpoints in the United States, Hungary, Latvia, and Greece to detect "risk persons." 

To reduce prejudice, Hume claims that "randomized studies" are used to collect "a vast variety" of facial and voice expressions from "people from a wide range of backgrounds." 

According to Cowen, the company has gathered over 1.1 million images and videos of facial expressions from over 30,000 people in the United States, China, Venezuela, India, South Africa, and Ethiopia, as well as over 900,000 audio recordings of people voicing their emotions labeled with people's self-reported emotional experiences. 

Hume's dataset is smaller than Affectiva's, which claimed at the time to be the biggest of its sort, with over 10 million people's expressions from 87 countries. 

Cowen, on the other hand, says that Hume's data can be used to train models to assess "an exceptionally broad spectrum of emotions," including over 28 facial expressions and 25 verbal expressions. 


"As demand for our empathetic AI models has grown, we've been prepared to provide access to them at a large scale." 


As a result, we'll be establishing a developer platform that will provide developers and researchers API documentation and a playground," Hume added. 

"We're also gathering data and developing training models for social interaction and conversational data, body language, and multi-modal expressions, which we expect will broaden our use cases and client base." 

Beyond Mursion, Hume claims it's collaborating with Hoomano, a firm that develops software for "social robots" like Softbank Robotics' Pepper, to build digital assistants that make better suggestions by taking into consideration the emotions of users. 

Hume also claims to have collaborated with Mount Sinai and University of California, San Francisco experts to investigate whether its models can detect depression and schizophrenia symptoms "that no prior methodologies have been able to capture." 


"A person's emotions have a big impact on their conduct, including what they pay attention to and click on." 


As a result, 'emotion AI' is already present in AI technologies such as search engines, social media algorithms, and recommendation systems. It's impossible to avoid. 

As a result, decision-makers must be concerned about how these technologies interpret and react to emotional signals, influencing their users' well-being in ways that their inventors are unaware of," Cowen remarked. 

"Hume AI provides the tools required to guarantee that technologies are built to increase the well-being of their users. There's no way of understanding how an AI system is interpreting these signals and altering people's emotions without means to assess them, and there's no way of designing the system to do so in a way that is compatible with people's well-being." 


Leaving aside the thorny issue of using artificial intelligence to diagnose mental disorder, Mike Cook, a Queen Mary University of London AI researcher, believes the company's message is "performative" and the language is questionable. 


"[T]hey've obviously gone to tremendous lengths to speak about diversity and inclusion and other such things, and I'm not going to whine about people creating datasets with greater geographic variety." "However, it seems a little like it was massaged by a PR person who knows how to make your organization appear to care," he remarked. 

Cowen claims that by forming The Hume Initiative, a nonprofit "committed to governing empathetic AI," Hume is taking a more rigorous look at the uses of emotion AI than rivals. 

The Hume Initiative, whose ethical committee includes Taniya Mishra, former director of AI at Affectiva, has established regulatory standards that the company claims it would follow when commercializing its innovations. 


The Hume Initiative's principles forbid uses like manipulation, fraud, "optimizing for diminished well-being," and "unbounded" emotion AI. 


It also establishes limitations for use cases such as platforms and interfaces, health and development, and education, such as requiring educators to use the output of an emotion AI model to provide constructive — but non-evaluative — feedback. 

Danielle Krettek Cobb, the creator of the Google Empathy Lab, Dacher Keltner, a professor of psychology at UC Berkeley, and Ben Bland, the head of the IEEE group establishing standards for emotion AI, are coauthors of the recommendations. 

"The Hume Initiative started by compiling a list of all known applications for empathetic AI. 

After that, they voted on the first set of specific ethical principles. 


The resultant principles are tangible and enforceable, unlike any prior attempt to AI ethics. 


They describe how empathetic AI may be used to increase mankind's finest traits of belonging, compassion, and well-being, as well as how it might be used to expose humanity to intolerable dangers," Cowen remarked. 

"Those who use Hume AI's data or AI models must agree to use them solely in accordance with The Hume Initiative's ethical rules, guaranteeing that any applications using our technology are intended to promote people's well-being." Companies have boasted about their internal AI ethical initiatives in the past, only to have such efforts fall by the wayside – or prove to be performative and ineffective. 


Google's AI ethics board was notoriously disbanded barely one week after it was established. 


Meta's (previously Facebook's) AI ethics unit has also been labeled as essentially useless in reports. 

It's referred to as "ethical washing" by some. 

Simply put, ethical washing is the practice of a firm inventing or inflating its interest in fair AI systems that benefit everyone. 



A classic example among tech titans is a firm touting "AI for good" activities on the one hand while selling surveillance technology to governments and companies on the other. 


The coauthors of a report published by Trilateral Research, a London-based technology consultancy, claim that ethical principles and norms do not, by themselves, help practitioners grapple with difficult concerns like fairness in emotion AI. 

They argue that these should be thoroughly explored to ensure that businesses do not deploy systems that are incompatible with societal norms and values. 


"Ethics is made ineffectual without a continual process of challenging what is or may be clear, of probing behind what seems to be resolved, of keeping this interrogation alive," they said. 


"As a result, the establishment of ethics into established norms and principles comes to an end." Cook identifies problems in The Hume Initiative's stated rules, especially in its use of ambiguous terminology. 

"A lot of the standards seem performatively written — if you believe manipulating the user is wrong, you'll read the guidelines and think to yourself, 'Yes, I won't do that.' And if you don't care, you'll read the rules and say, 'Yes, I can justify this,'" he explained. 

Cowen believes Hume is "open[ing] the door to optimize AI for human and societal well-being" rather than short-term corporate objectives like user engagement. 

"We don't have any actual competition since the other AI models for measuring emotional signals are so restricted." They concentrate on a small number of facial expressions, neglect the voice entirely, and have major demographic biases. 



These biases are often weaved into the data used to train AI systems. 


Furthermore, no other business has established explicit ethical criteria for the usage of empathetic AI," he said. 

"We're building a platform that will consolidate our model deployment and provide customers greater choice over how their data is utilized." 

Regardless of whether or not rules exist, politicians have already started to limit the use of emotion AI systems. 



The New York City Council recently established a regulation requiring companies to notify applicants when they are being evaluated by AI, as well as to audit the algorithms once a year. 


Candidates in Illinois must give their consent for video footage analysis, while Maryland has outlawed the use of face analysis entirely. 

Some firms have voluntarily ceased supplying emotion AI services or erected barriers around them. 

HireVue said that its algorithms will no longer use visual analysis. 

Microsoft's sentiment-detecting Face API, which once claimed it could detect emotions across cultures, now says in a caveat that "facial expressions alone do not reflect people's interior moods."

The Hume Initiative, according to Cook, "developed some ethical papers so people don't worry about what [Hume] is doing." 

"Perhaps the most serious problem I have is that I have no idea what they're doing." "Apart from whatever datasets they created, the part that's public doesn't appear to have anything on it," Cook added. 



Emotion recognition using AI. 


Emotion detection is a hot new field, with a slew of entrepreneurs marketing devices that promise to be able to read people's interior emotional states and AI academics attempting to increase computers' capacity to do so. 

Voice analysis, body language analysis, gait analysis, eye tracking, and remote assessment of physiological indications such as pulse and respiration rates are used to do this. 

The majority of the time, though, it's done by analyzing facial expressions. 

However, a recent research reveals that these items are constructed on a foundation of intellectual sand. 


The main issue is whether human emotions can be successfully predicted by looking at their faces. 


"Whether facial expressions of emotion are universal, whether you can look at someone's face and read emotion in their face," Lisa Feldman Barrett, a professor of psychology at Northeastern University and an expert on emotion, told me, "is a topic of great contention that scientists have been debating for at least 100 years." 


Despite this extensive history, she said that no full review of all emotion research conducted over the previous century had ever been completed. 


So, a few years ago, the Association for Psychological Science gathered five eminent scientists from opposing viewpoints to undertake a "systematic evaluation of the data challenging the popular opinion" that emotion can be consistently predicted by outward facial movements. 

According to Barrett, who was one of the five scientists, they "had extremely divergent theoretical ideas." "We arrived to the project with very different assumptions of what the data would reveal, and it was our responsibility to see if we could come to an agreement on what the data revealed and how to best interpret it." We weren't sure we could do it since it's such a divisive issue." The procedure, which was supposed to take a few months, took two years. 

Nonetheless, after evaluating over 1,000 scientific studies in the psychology literature, these experts arrived at a unanimous conclusion: the claim that "a person's emotional state may be simply determined from his or her facial expressions" has no scientific basis. 


According to the researchers, there are three common misconceptions "about how emotions are communicated and interpreted in facial movements." 


The relationship between facial expressions and emotions is neither reliable (the same emotions are not always exhibited in the same manner), nor specific (the same facial configurations can express more than one emotional state), nor generalizable (the effects of different cultures and contexts have not been sufficiently documented). 

"A scowling face may or may not be an indication of rage," Barrett said to me. 

People frown in rage at times, and you could grin, weep, or simply seethe with a neutral look at other moments. 

People grimace at other times as well, such as when they're perplexed, concentrating, or having gas." These results do not suggest that individuals move their faces at random or that [facial expressions] have no psychological significance, according to the researchers. 

Instead, they show that the facial configurations in question aren't "fingerprints" or diagnostic displays that consistently and explicitly convey various emotional states independent of context, person, or culture. 

It's impossible to deduce pleasure from a grin, anger from a scowl, or sorrow from a frown, as most of today's technology seeks to accomplish when applying what are incorrectly considered to be scientific principles. 

Because an entire industry of automated putative emotion-reading devices is rapidly growing, this work is relevant. 


The market for emotion detection software is expected to reach at least $3.8 billion by 2025, according to our recent research on "Robot Surveillance." 


Emotion detection (also known as "affect recognition" or "affective computing") is already being used in devices for marketing, robotics, driving safety, and audio "aggression detectors," as we recently reported. 

Emotion identification is built on the same fundamental concept as polygraphs, or "lie detectors": that a person's internal mental state can be accurately associated with physical bodily motions and situations. 

No such reliable association exists — and that includes facial muscles in particular. 

It stands to reason that what is true of facial muscles would also be true of all other techniques for detecting emotion, such as body language and gait. 

However, the assumption that such mind reading is conceivable might cause serious damage. 


A jury's cultural misunderstanding of what a foreign defendant's facial expressions mean, for example, can lead to a death sentence rather than a prison sentence. 


When such a mindset is translated into automated systems, it may lead to further problems. 

For example, a "smart" body camera that incorrectly informs a police officer that someone is hostile and angry might lead to an unnecessary shooting. 


"There is no automatic emotion identification. 

The top algorithms can confront a face — full frontal, no occlusions, optimal illumination — and are excellent at recognizing facial movements. 

They aren't able, however, to deduce what those facial gestures signify."


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See Also: 


AI Emotions, AI Emotion Recognition, AI Emotional Intelligence, Surveillance Technologies, Privacy and Technology, AI Bias, Human Rights.


