
Artificial Intelligence - What Is Swarm Intelligence and Distributed Intelligence?



From developing single autonomous agents to building groups of distributed autonomous agents that coordinate themselves, distributed intelligence is the obvious next step.

A multi-agent system is made up of many agents.

Communication is a prerequisite for cooperation.

The fundamental concept is to allow for distributed problem-solving rather than employing a collection of agents as a simple parallelization of the single-agent technique.

Agents effectively cooperate, exchange information, and assign duties to one another.

Sensor data, for example, is exchanged to learn about the current condition of the environment, and an agent is given a task based on who is in the best position to complete that job at the time.

Agents might be software or embodied agents in the form of robots, resulting in a multi-robot system.

RoboCup Soccer (Kitano et al. 1997), in which two teams of robots play soccer against each other, is an example of this.

Typical challenges include detecting the ball cooperatively and sharing that knowledge, as well as assigning tasks, such as who will go after the ball next.



Agents may have a complete global perspective or simply a partial picture of the surroundings.

The agent's and the entire approach's complexity may be reduced by restricting information to the local area.

Regardless of their local perspective, agents may communicate, disseminate, and transmit information across the agent group, resulting in a distributed collective vision of global situations.





Scalable decentralized systems, non-scalable decentralized systems, and decentralized systems with central components are three distinct architectures for constructing distributed intelligence.

Without a master-slave hierarchy or a central control element, all agents in scalable decentralized systems function in equal roles.

Because the system only allows for local agent-to-agent communication, there is no need for all agents to coordinate with each other.

This allows for potentially huge system sizes.

All-to-all communication is an important aspect of the coordination mechanism in non-scalable decentralized systems, but it may become a bottleneck in systems with too many agents.

A typical RoboCup-Soccer system, for example, requires all robots to cooperate with all other robots at all times.

Finally, in decentralized systems with central components, the agents may interact with one another through a central server (e.g., cloud) or be coordinated by a central control.

It is feasible to mix the decentralized and central approaches by delegating basic tasks to the agents, who will complete them independently and locally, while more difficult activities will be managed centrally.

Vehicular ad hoc networks are an example of a use case (Liang et al. 2015).

Each agent is self-contained, yet collaboration aids in traffic coordination.

For example, intelligent automobiles may build dynamic multi-hop networks to notify others about an accident that is still hidden from view.

For a safer and more efficient traffic flow, cars may coordinate passing moves.

All of this may be accomplished by worldwide communication with a central server or, depending on the stability of the connection, through local car-to-car communication.

Natural swarm systems and artificial, designed distributed systems are combined in swarm intelligence research.

Extracting fundamental principles from decentralized biological systems and translating them into design principles for decentralized engineering systems is a core notion in swarm intelligence (scalable decentralized systems as defined above).

Swarm intelligence was inspired by the collective activities of flocks, swarms, and herds.

Social insects such as ants, honeybees, wasps, and termites are a good example.

These swarm systems are built on self-organization and work in a fundamentally decentralized manner.

Crystallization, pattern formation in embryology, and synchronization in swarms are examples of self-organization, which arises from a complex interaction of positive feedback (deviations are amplified) and negative feedback (deviations are damped).

In swarm intelligence, four key features of systems are investigated: • The system is made up of a large number of autonomous agents.

• The agents are homogeneous in terms of their capabilities and behaviors.

• Each agent follows a set of relatively simple rules compared to the task's complexity.

• The resulting system behavior is heavily reliant on agent interaction and collaboration.

Reynolds (1987) produced a seminal paper detailing flocking behavior in birds based on three simple local rules: alignment (align your direction of movement with your neighbors), cohesion (stay close to your neighbors), and separation (keep a minimal distance to any agent).

As a consequence, a self-organizing flocking behavior emerges that mimics real-life flocks.
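The three local rules can be sketched in a few lines of code. The following is a minimal 2D illustration; the function name, weights, and radii are illustrative assumptions, not values from Reynolds's paper.

```python
import math

def step(boids, neighbor_radius=5.0, min_dist=1.0,
         w_align=0.05, w_cohere=0.01, w_separate=0.1):
    """One simulation step. boids: list of dicts with 'pos' and 'vel'
    as (x, y) tuples. Each boid reacts only to neighbors in a radius."""
    updated = []
    for b in boids:
        neighbors = [o for o in boids if o is not b and
                     math.dist(b["pos"], o["pos"]) < neighbor_radius]
        vx, vy = b["vel"]
        if neighbors:
            # Alignment: steer toward the neighbors' average velocity.
            avx = sum(o["vel"][0] for o in neighbors) / len(neighbors)
            avy = sum(o["vel"][1] for o in neighbors) / len(neighbors)
            vx += w_align * (avx - vx)
            vy += w_align * (avy - vy)
            # Cohesion: steer toward the neighbors' center of mass.
            cx = sum(o["pos"][0] for o in neighbors) / len(neighbors)
            cy = sum(o["pos"][1] for o in neighbors) / len(neighbors)
            vx += w_cohere * (cx - b["pos"][0])
            vy += w_cohere * (cy - b["pos"][1])
            # Separation: steer away from neighbors that are too close.
            for o in neighbors:
                d = math.dist(b["pos"], o["pos"])
                if 0 < d < min_dist:
                    vx += w_separate * (b["pos"][0] - o["pos"][0]) / d
                    vy += w_separate * (b["pos"][1] - o["pos"][1]) / d
        updated.append({"pos": (b["pos"][0] + vx, b["pos"][1] + vy),
                        "vel": (vx, vy)})
    return updated
```

Note that no boid ever consults a global flock state: each update uses only local neighborhood information, which is what makes the emergent flock self-organizing.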

By depending only on local interactions between agents, a high level of resilience may be achieved.

Any agent, at any moment, has only a limited understanding of the system's global state (swarm-level state) and relies on communication with nearby agents to complete its duty.

Because the swarm's knowledge is distributed across its members, the system avoids a single point of failure.

A perfectly homogeneous swarm has a high degree of redundancy; that is, all agents have the same capabilities and can therefore be replaced by any other.

By depending only on local interactions between agents, a high level of scalability may be obtained.

Due to the distributed data storage architecture, there is less need to synchronize data or keep it coherent.

Because the communication and coordination overhead for each agent is dictated by the size of its neighborhood, the same algorithms may be employed for systems of nearly any scale.

Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are two well-known examples of swarm intelligence in engineered systems from the optimization discipline.

Both are metaheuristics, which means they may be used to solve a wide range of optimization problems.

Ants and their use of pheromones to locate the shortest pathways inspired ACO.

The optimization problem must be represented as a graph.

A swarm of virtual ants travels from node to node, choosing which edge to use next based on the likelihood of how many other ants have used it before (through pheromone, implementing positive feedback) and a heuristic parameter, such as journey length (greedy search).

Evaporation of pheromones balances the exploration-exploitation trade-off (negative feedback).

Applications of ACO include the traveling salesman problem, vehicle routing, and network routing.
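The pheromone-and-evaporation loop can be sketched for a shortest-path problem. This is a toy illustration: the function name and all parameter values (alpha, beta, rho, ant and iteration counts) are assumptions, although the exponent-weighted edge choice and the evaporation step follow the standard ACO scheme.

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=50,
                      alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """graph: {node: {neighbor: edge_length}}. Returns (path, length)."""
    rng = random.Random(seed)
    # Start with a uniform pheromone level on every edge.
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            path, node = [start], start
            while node != goal:
                choices = [v for v in graph[node] if v not in path]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                # Edge probability combines pheromone (positive feedback)
                # with a greedy heuristic preferring shorter edges.
                weights = [tau[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
            if path:
                length = sum(graph[u][v] for u, v in zip(path, path[1:]))
                paths.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # Evaporation (negative feedback) keeps exploration alive.
        for edge in tau:
            tau[edge] *= 1.0 - rho
        # Deposit pheromone inversely proportional to path length.
        for path, length in paths:
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / length
    return best_path, best_len
```

On a small graph such as `{"A": {"B": 1, "C": 4}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}`, the virtual ants quickly concentrate pheromone on the short route from A to D via B.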

Flocking is a source of inspiration for PSO.

Agents navigate search space using average velocity vectors that are impacted by global and local best-known solutions (positive feedback), the agent's past path, and a random direction.
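That velocity update can be sketched for a one-dimensional minimization problem. The coefficient names (inertia `w`, cognitive `c1`, social `c2`) follow common PSO usage, but the specific values and the function name are illustrative assumptions.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi] with a one-dimensional particle swarm."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                 # each particle's best-known position
    gbest = min(xs, key=f)        # the swarm's global best-known position
    for _ in range(n_iters):
        for i in range(n_particles):
            # The velocity mixes the particle's past path (inertia) with
            # random pulls toward its personal best and the global best
            # (positive feedback on good solutions).
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest
```

For example, `pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)` returns a value close to 3, the minimum of the quadratic.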

While both ACO and PSO conceptually function in a completely distributed manner, they do not need parallel computing to be deployed.

They may, however, be parallelized with ease.

Swarm robotics is the application of swarm intelligence to embodied systems, while ACO and PSO are software-based methods.

Swarm robotics applies the concept of self-organizing systems based on local information to multi-robot systems with a high degree of resilience and scalability.

Following the example of social insects, the goal is to make each individual robot relatively simple in comparison to the task complexity while still allowing the robots to collaborate to solve complicated tasks.

Because a swarm robot operates on local information only, it can communicate only with nearby swarm robots.

The applied control algorithms are meant to allow maximum scalability given a fixed swarm density (i.e., a constant number of robots per area).

The same control methods should perform effectively regardless of system size, whether the swarm is grown or shrunk by adding or removing robots.

A super-linear performance improvement is often found, meaning that doubling the size of the swarm improves the swarm's performance by more than a factor of two.

As a result, each robot is more productive than previously.
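The super-linear claim can be made concrete with a quick arithmetic sketch, assuming a hypothetical performance model P(n) = n^1.5 (an assumption for illustration, not a measured law):

```python
# Assume the swarm's collective performance follows P(n) = n ** 1.5
# (a hypothetical model chosen only to illustrate super-linearity).

def performance(n):
    return n ** 1.5

# Doubling the swarm from 10 to 20 robots more than doubles performance,
# so each individual robot has become more productive.
ratio = performance(20) / performance(10)
print(ratio)                                        # about 2.83, greater than 2
print(performance(20) / 20 > performance(10) / 10)  # True: per-robot output rose
```

Any exponent greater than 1 in such a model yields the same qualitative conclusion: collective performance grows faster than swarm size, so individual productivity rises with scale.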

Swarm robotics systems have been demonstrated to be effective for a wide range of activities, including aggregation and dispersion behaviors, as well as more complicated tasks like item sorting, foraging, collective transport, and collective decision-making.

Rubenstein et al. (2014) conducted the biggest scientific experiment using swarm robots to date, using 1024 miniature mobile robots to mimic self-assembly behavior by arranging the robots in predefined designs.

The majority of the tests were conducted in the lab, but new research has taken swarm robots to the field.

Duarte et al. (2016), for example, built a swarm of autonomous surface watercraft that cruise the ocean together.

Modeling the relationship between individual behavior and swarm behavior, creating advanced design principles, and deriving assurances of system attributes are all major issues in swarm intelligence.

The micro-macro issue is defined as the challenge of identifying the ensuing swarm behavior based on a given individual behavior and vice versa.

It has proved to be a difficult challenge that manifests itself both in mathematical modeling and, as an engineering difficulty, in the robot controller design process.

The creation of complex tactics to design swarm behavior is not only crucial to swarm intelligence research, but it has also proved to be very difficult.

Similarly, due to the combinatorial explosion of action-to-agent assignments, multi-agent learning and evolutionary swarm robotics (i.e., application of evolutionary computation techniques to swarm robotics) do not scale well with task complexity.

Despite the benefits of robustness and scalability, obtaining strong guarantees for swarm intelligence systems is challenging.

Swarm systems' availability and reliability can only be assessed experimentally in general. 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


AI and Embodiment.


Further Reading:


Bonabeau, Eric, Marco Dorigo, and Guy Theraulaz. 1999. Swarm Intelligence: From Natural to Artificial Systems. New York: Oxford University Press.

Duarte, Miguel, Vasco Costa, Jorge Gomes, Tiago Rodrigues, Fernando Silva, Sancho Moura Oliveira, Anders Lyhne Christensen. 2016. “Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.” PloS One 11, no. 3: e0151834.

Hamann, Heiko. 2018. Swarm Robotics: A Formal Approach. New York: Springer.

Kitano, Hiroaki, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, Eiichi Osawa, Hitoshi Matsubara. 1997. “RoboCup: A Challenge Problem for AI.” AI Magazine 18, no. 1: 73–85.

Liang, Wenshuang, Zhuorong Li, Hongyang Zhang, Shenling Wang, Rongfang Bie. 2015. “Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends.” International Journal of Distributed Sensor Networks 11, no. 8: 1–11.

Reynolds, Craig W. 1987. “Flocks, Herds, and Schools: A Distributed Behavioral Model.” Computer Graphics 21, no. 4 (July): 25–34.

Rubenstein, Michael, Alejandro Cornejo, and Radhika Nagpal. 2014. “Programmable Self-Assembly in a Thousand-Robot Swarm.” Science 345, no. 6198: 795–99.




Artificial Intelligence - What Is Immortality in the Digital Age?




The act of putting a human's memories, knowledge, and/or personality into a long-lasting digital memory storage device or robot is known as digital immortality.

Human intelligence is therefore displaced by artificial intelligence that resembles the mental pathways or imprint of the brain in certain respects.

The National Academy of Engineering has identified reverse-engineering the brain as one of its grand challenges; the goal is to attain substrate independence—that is, to copy the thinking and feeling mind and reproduce it on a range of physical or virtual media.

Whole Brain Emulation (also known as mind uploading) is a theoretical science that assumes the mind is a dynamic process independent of the physical biology of the brain and its unique sets or patterns of atoms.

Instead, the mind is a collection of information-processing functions that can be computed.

Whole Brain Emulation is presently assumed to be based on the neural networking discipline of computer science, which has as its own ambitious objective the programming of an operating system modeled after the human brain.

Artificial neural networks (ANNs) are statistical models built from biological neural networks in artificial intelligence research.

Through connections and weighting, as well as backpropagation and parameter adjustment in algorithms and rules, ANNs may process information in a nonlinear way.
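A minimal example of the connections, weighting, and backpropagation just described is a tiny network trained on XOR, a classic nonlinearly separable problem. The layer size, learning rate, and epoch count below are illustrative choices, not values from the text.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(hidden=3, epochs=20000, lr=0.5, seed=1):
    """Train a 2 -> hidden -> 1 sigmoid network on XOR by backpropagation.
    Returns a predict(x) function giving outputs in (0, 1)."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    for _ in range(epochs):
        for x, t in data:
            # Forward pass: weighted sums pushed through a nonlinearity.
            h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
                 for j in range(hidden)]
            y = sigmoid(sum(w2[j] * h[j] for j in range(hidden)) + b2)
            # Backward pass: propagate the error and adjust each weight.
            dy = (y - t) * y * (1 - y)
            for j in range(hidden):
                dh = dy * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * dy * h[j]
                for i in range(2):
                    w1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * dy
    def predict(x):
        h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
             for j in range(hidden)]
        return sigmoid(sum(w2[j] * h[j] for j in range(hidden)) + b2)
    return predict
```

The network has no rule for XOR built in; the nonlinear mapping emerges entirely from repeated weight adjustments driven by the propagated error.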

Through his online "Mind Uploading Home Page," Joe Strout, a computational neurobiology enthusiast at the Salk Institute, facilitated discussion of whole brain emulation in the 1990s.

Strout argued for the material origins of consciousness, claiming that evidence from damage to real people's brains points to its neuronal, connectionist, and chemical basis.

Strout shared timelines of previous and contemporary technical advancements as well as suggestions for future uploading techniques through his website.

Mind uploading proponents believe that one of two methods will eventually be used: (1) gradual copy-and-transfer of neurons by scanning the brain and simulating its underlying information states, or (2) deliberate replacement of natural neurons with more durable artificial mechanical devices or manufactured biological products.

Strout gathered information on a variety of theoretical ways for achieving the objective of mind uploading.

One is a microtome method, which involves slicing a live brain into tiny slices and scanning it with a sophisticated electron microscope.

The brain is then reconstructed in a synthetic substrate using the picture data.

Nanoreplacement involves injecting small devices into the brain to monitor the input and output of neurons.

When these minuscule robots have a complete understanding of all biological interactions, they will eventually kill the neurons and replace them.

A robot with billions of appendages that delve deep into every section of the brain, as envisioned by Carnegie Mellon University roboticist Hans Moravec, is used in a variation of this process.

In this approach, the robot creates a virtual model of every portion and function of the brain, gradually replacing it.

Everything that the physical brain used to be is eventually replaced by a simulation.

In copy-and-transfer whole brain emulation, scanning or mapping neurons is commonly assumed to be destructive.

The live brain is plasticized or frozen before being divided into sections, scanned, and simulated on a computational medium.

Philosophically, the technique creates a mental clone of a person, not the person who agrees to participate in the experiment.

Only a duplicate of that individual's personal identity survives the duplicating experiment; the original person dies.

Because, as philosopher John Locke reasoned, someone who recalls thinking about something in the past is the same person as the person who performed the thinking in the first place, the copy may be thought of as the genuine person.

Alternatively, it's possible that the experiment may turn the original and copy into completely different persons, or that they will soon diverge from one another through time and experience as a result of their lack of shared history beyond the experiment.

There have been many nondestructive approaches proposed as alternatives to damaging the brain during the copy-and-transfer process.

It is hypothesized that sophisticated types of gamma-ray holography, x-ray holography, magnetic resonance imaging (MRI), biphoton interferometry, or correlation mapping using probes might be used to reconstruct function.

With 3D reconstructions of atomic-level detail, the present limit of available technology, in the form of electron microscope tomography, has reached the sub-nanometer scale.

The majority of the remaining challenges are related to the geometry of tissue specimens and tomographic equipment's so-called tilt-range restrictions.

Advanced forms of image recognition, as well as neurocomputer manufacturing to recreate scans as information-processing components, are in the works.

Professor of Electrical and Computer Engineering Alice Parker leads the BioRC Biomimetic Real-Time Cortex Project at the University of Southern California, which focuses on reverse-engineering the brain.

Parker is now building and producing a memory and carbon nanotube brain nanocircuit for a future synthetic cortex based on statistical predictions with nanotechnology professor Chongwu Zhou and her students.

Her neuromorphic circuits are designed to mimic the complexities of human neural computations, including glial cell connections (these are nonneuronal cells that form myelin, control homeostasis, and protect and support neurons).

Members of the BioRC Project are developing systems that scale to the size of human brains.

Parker is attempting to include dendritic plasticity into these systems, which will allow them to adapt and expand as they learn.

Carver Mead, a Caltech electrical engineer who has been working on electronic models of human neurological and biological components since the 1980s, is credited with the approach's roots.

The Terasem Movement, which began in 2002, aims to educate and urge the public to embrace technical advancements that advance the science of mind uploading and integrate science, religion, and philosophy.

The Terasem Movement, the Terasem Movement Foundation, and the Terasem Movement Transreligion are all incorporated entities that operate together.

Martine Rothblatt and Bina Aspen Rothblatt, serial entrepreneurs, founded the group.

The Rothblatts are inspired by the religion of Earthseed, which may be found in Octavia Butler's 1993 novel Parable of the Sower.

"Life is intentional, death is voluntary, God is technology, and love is fundamental," according to Rothblatt's trans-religious ideas (Roy 2014).

Terasem's CyBeRev (Cybernetic Beingness Revival) project collects all available data about a person's life—their personal history, recorded memories, photographs, and so on—and stores it in a separate data file in the hopes that their personality and consciousness can be pieced together and reanimated one day by advanced software.

The Terasem Foundation-sponsored Lifenaut research retains mindfiles with biographical information on individuals for free and keeps track of corresponding DNA samples (biofiles).

Bina48, a social robot created by the foundation, demonstrates how a person's consciousness may one day be transplanted into a lifelike android.

Numenta, an artificial intelligence firm based in Silicon Valley, is aiming to reverse-engineer the human neocortex.

Jeff Hawkins (creator of the portable PalmPilot personal digital assistant), Donna Dubinsky, and Dileep George are the company's founders.

Numenta's idea of the neocortex is based on Hawkins' and Sandra Blakeslee's theory of hierarchical temporal memory, which is outlined in their book On Intelligence (2004).

Time-based learning algorithms, which are capable of storing and recalling patterns in how data change over time, are at the heart of Numenta's emulation technology.

Grok, a commercial tool that identifies anomalies in computer servers, was created by the company.

The company has proposed other applications as well, such as detecting anomalies in stock market trading or in human behavior.
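Numenta's hierarchical temporal memory algorithms are far more sophisticated, but the general idea of flagging anomalies in a streaming signal can be illustrated with a much simpler rolling z-score detector. This is a stand-in technique for illustration, not Numenta's method; the function name and thresholds are assumptions.

```python
import math
from collections import deque

def anomalies(stream, window=20, threshold=3.0):
    """Yield (index, value) pairs whose z-score against a rolling
    window of recent values exceeds the threshold."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            var = sum((v - mean) ** 2 for v in recent) / len(recent)
            std = math.sqrt(var)
            # Flag the value if it sits far outside recent history.
            if std > 0 and abs(x - mean) / std > threshold:
                yield i, x
        recent.append(x)
```

Fed a smooth periodic signal with one injected spike, the detector flags only the spike, since every other value stays within a few standard deviations of its recent past.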

Carboncopies is a non-profit that funds research and cooperation to capture and preserve unique configurations of neurons and synapses carrying human memories.

Computational modeling, neuromorphic hardware, brain imaging, nanotechnology, and philosophy of mind are all areas where the organization supports research.

Randal Koene, a computational neuroscientist educated at McGill University and head scientist at neuroprosthetic company Kernel, is the organization's creator.

Dmitry Itskov, a Russian new media millionaire, donated early funding for Carboncopies.

Itskov is also the founder of the 2045 Initiative, a non-profit organization dedicated to extreme life extension.

The purpose of the 2045 Initiative is to develop high-tech methods for transferring personalities into an "advanced nonbiological carrier." Koene and Itskov also organize Global Future 2045, a congress aimed at developing "a new evolutionary strategy for mankind."

Proponents of digital immortality see a wide range of practical results as a result of their efforts.

For example, in the case of death by accident or natural causes, a saved backup mind may be used to reawaken into a new body.

(It's reasonable to assume that elderly minds would seek out new bodies long before aging becomes apparent.) This is also the premise of Arthur C. Clarke's science fiction novel The City and the Stars (1956), which influenced Koene's decision to pursue a career in science at the age of thirteen.

Alternatively, mankind as a whole may be able to lessen the danger of global catastrophe by uploading their thoughts to virtual reality.

Civilization might be saved on a high-tech hard drive buried deep into the planet's core, safe from hostile extraterrestrials and incredibly strong natural gamma ray bursts.

Another potential benefit is the potential for life extension over lengthy periods of interstellar travel.

For extended travels throughout space, artificial brains might be implanted into metal bodies.

This is a notion that Clarke foreshadowed in the last pages of his science fiction classic Childhood's End (1953).

It's also the response offered by Manfred Clynes and Nathan Kline in their 1960 Astronautics article "Cyborgs and Space," which includes the first mention of astronauts whose physical capacities transcend conventional limitations (zero gravity, space vacuum, cosmic radiation) thanks to mechanical help.

Under real mind uploading circumstances, it may be possible simply to encode the human mind and transmit it as a signal to a nearby exoplanet considered the best candidate for the discovery of alien life.

The hazards to humans are negligible in each situation when compared to the present threats to astronauts, which include exploding rockets, high-speed impacts with micrometeorites, and faulty suits and oxygen tanks.

Another potential benefit of digital immortality is real restorative justice and rehabilitation through criminal mind retraining.

Or, alternatively, mind uploading might enable for penalties to be administered well beyond the normal life spans of those who have committed heinous crimes.

Digital immortality has far-reaching social, philosophical, and legal ramifications.

The concept of digital immortality has long been a hallmark of science fiction.

The short story "The Tunnel Under the World" (1955) by Frederik Pohl is a widely reprinted tale of workers who are killed in a chemical plant explosion, only to be rebuilt as miniature robots and subjected to advertising campaigns and jingles over the course of a long, Truman Show-like repeating day.

The Silicon Man (1991) by Charles Platt relates the tale of an FBI agent who finds a hidden operation named LifeScan.

The project, headed by an aging millionaire and a mutinous crew of government experts, has found a technique to transfer human thought patterns to a computer dubbed MAPHIS (Memory Array and Processors for Human Intelligence Storage).

MAPHIS is capable of delivering any standard stimuli, including pseudomorphs, which are simulations of other persons.

The Autoverse is introduced in Greg Egan's hard science fiction Permutation City (1994), which mimics complex miniature biospheres and virtual worlds populated by artificial living forms.

Egan refers to human consciousnesses scanned into the Autoverse as copies.

The story is inspired by John Conway's Game of Life's cellular automata, quantum ontology (the link between the quantum universe and human perceptions of reality), and what Egan refers to as dust theory.

The premise that physics and mathematics are the same, and that individuals residing in any mathematical, physical, and spacetime systems (and all are feasible) are essentially data, processes, and interactions, is at the core of dust theory.

This claim is similar to MIT physicist Max Tegmark's Theory of Everything, which states that "all structures that exist mathematically exist also physically, by which we mean that in those complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically 'real' world" (Tegmark 1998, 1).

Hans Moravec, a roboticist at Carnegie Mellon University, makes similar assertions in his article "Simulation, Consciousness, Existence" (1998).

Tron (1982), Freejack (1992), and The 6th Day (2000) are examples of mind uploading and digital immortality in the movies.

Kenneth D. Miller, a theoretical neuroscientist at Columbia University, is a notable skeptic.

While rebuilding an active, functional mind may be achievable, connectomics researchers (those working on a wiring schematic of the whole brain and nervous system) remain millennia away from finishing their job, according to Miller.

And, he claims, connectomics is just concerned with the first layer of brain activities that must be comprehended in order to replicate the complexity of the human brain.

Others have wondered what happens to personhood in situations where individuals are no longer constrained as physical organisms.

Is identity just a series of connections between neurons in the brain? What will happen to markets and economic forces? Is a body required for immortality? George Mason University professor Robin Hanson's nonfiction book The Age of Em: Work, Love, and Life When Robots Rule the Earth (2016) provides an economic and social perspective on digital immortality.

Hanson's hypothetical ems are scanned emulations of genuine humans who exist in both virtual reality environments and robot bodies.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Technological Singularity.


Further Reading:


Clynes, Manfred E., and Nathan S. Kline. 1960. “Cyborgs and Space.” Astronautics 14, no. 9 (September): 26–27, 74–76.

Farnell, Ross. 2000. “Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan’s ‘Permutation City.’” Science Fiction Studies 27, no. 1: 69–91.

Global Future 2045. http://gf2045.com/.

Hanson, Robin. 2016. The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford, UK: Oxford University Press.

Miller, Kenneth D. 2015. “Will You Ever Be Able to Upload Your Brain?” New York Times, October 10, 2015. https://www.nytimes.com/2015/10/11/opinion/sunday/will-you-ever-be-able-to-upload-your-brain.html.

Moravec, Hans. 1999. “Simulation, Consciousness, Existence.” Intercommunication 28 (Spring): 98–112.

Roy, Jessica. 2014. “The Rapture of the Nerds.” Time, April 17, 2014. https://time.com/66536/terasem-trascendence-religion-technology/.

Tegmark, Max. 1998. “Is ‘the Theory of Everything’ Merely the Ultimate Ensemble Theory?” Annals of Physics 270, no. 1 (November): 1–51.

2045 Initiative. http://2045.com/.


Artificial Intelligence - Who Is Daniel Dennett?

 



At Tufts University, Daniel Dennett (1942–) is the Austin B. Fletcher Professor of Philosophy and Co-Director of the Center for Cognitive Studies.

Philosophy of mind, free will, evolutionary biology, cognitive neuroscience, and artificial intelligence are his main areas of study and publishing.

He has written over a dozen books and hundreds of articles.

Much of this research has focused on the origins and nature of consciousness, as well as how naturalistically it may be described.

Dennett is also an ardent atheist and one of New Atheism's "Four Horsemen," along with Richard Dawkins, Sam Harris, and Christopher Hitchens.

Dennett's worldview is naturalistic and materialistic throughout.

He opposes Cartesian dualism, which holds that the mind and body are two distinct things that merge.

Instead, he contends that the brain is a form of computer that has developed through time due to natural selection.

Dennett also opposes the homunculus theory of the mind, which holds that the brain has a central controller or "little man" who performs all of the thinking and emotion.

Dennett, on the other hand, argues for a viewpoint he refers to as the multiple drafts model.

According to his theory, which he lays out in his 1991 book Consciousness Explained, the brain is constantly sifting through, interpreting, and editing sensations and inputs, forming overlapping drafts of experience.

Dennett later used the metaphor of "fame in the brain" to describe how various aspects of ongoing neural processes are periodically emphasized at different times and under different circumstances.

Consciousness is a story made up of these varied interpretations of human events.

Dennett dismisses the assumption that these ideas coalesce or are structured in a central portion of the brain, which he mockingly refers to as the "Cartesian theater." The brain's story is made up of a never-ending, uncentralized flow of bottom-up awareness that spans time and place.

Dennett denies the existence of qualia, which are subjective individual experiences such as how colors seem to the human eye or how food feels.

He does not deny that colors and tastes exist; rather, he claims that the sensation of color and taste does not exist as a separate thing in the human mind.

He claims that there is no difference between human and computer "sensation experiences." According to Dennett, just as some robots can discern between colors without people deciding that they have qualia, so can the human brain.

For Dennett, the color red is just the quality that brains sense and which is referred to as red in the English language.

It has no extra, indescribable quality.

This is a crucial consideration for artificial intelligence because the ability to experience qualia is frequently seen as a barrier to the development of Strong AI (AI that is functionally equivalent to human intelligence) and as something that will invariably distinguish human and machine intelligence.

However, if qualia do not exist, as Dennett contends, it cannot constitute a stumbling block to the creation of machine intelligence comparable to that of humans.

Dennett compares our brains to termite colonies in another metaphor.

Termites do not join together and plot to form a mound, but their individual activities cause it to happen.

The mound is the consequence of natural selection producing uncomprehending expertise in cooperative mound-building rather than intellectual design by the termites.

To create a mound, termites don't need to comprehend what they're doing.

Likewise, comprehension is an emergent attribute of such abilities.

Brains, according to Dennett, are control centers that have evolved to respond swiftly and effectively to threats and opportunities in the environment.

As the demands of responding to the environment grow more complicated, understanding emerges as a tool for dealing with them.

On a sliding scale, comprehension is a question of degree.

Dennett, for example, considers bacteria's quasi-comprehension in response to diverse stimuli and computers' quasi-comprehension in response to coded instructions to be on the low end of the range.

At the other end of the spectrum, he places Jane Austen's comprehension of human social dynamics and Albert Einstein's understanding of relativity.

These are differences of degree, however, not of kind.

Natural selection has shaped both extremes of the spectrum.

Comprehension is not a separate mental process arising from the brain's varied abilities.

Rather, understanding is a collection of these skills.

Consciousness is an illusion to the extent that we regard it as an additional element of the mind, in the shape of either qualia or cognition.

In general, Dennett advises mankind to avoid positing understanding when basic competence would suffice.

Humans, on the other hand, often adopt what Dennett calls an "intentional stance" toward other humans and, in some cases, animals.

When individuals interpret actions as the outcome of mind-directed beliefs, emotions, desires, or other mental states, they are adopting the intentional stance.

Dennett contrasts this with the "physical stance" and the "design stance."

In the physical stance, something is seen as the outcome of purely physical forces or natural laws.

Gravity causes a stone to fall when it is dropped, not any conscious purpose to return to the ground.

In the design stance, an action is seen as the mindless outcome of a preprogrammed, or predesigned, purpose.

An alarm clock, for example, beeps at a certain time because it was built to do so, not because it chose to do so on its own.

In contrast to both the physical and design stances, the intentional stance considers behaviors and acts as though they are the consequence of the agent's deliberate decision.

It can be difficult to decide whether to apply the intentional or the design stance to computers.

A chess-playing computer has been created with the goal of winning.

However, its movements are often indistinguishable from those of a human chess player who wants or intends to win.

In fact, taking the intentional stance toward the computer's behavior, rather than the design stance, improves a human's ability to interpret and respond to it.
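The chess example can be made concrete in miniature with tic-tac-toe (a toy sketch of my own, not from the text). The machine's move is fully determined by its program, which is the design-stance description; yet the quickest way to predict that move is the intentional stance, asking what the player "wants":

```python
# Design stance vs intentional stance on the same machine (illustrative).
# Design stance: the move is fully determined by an exhaustive minimax search.
# Intentional stance: predict the same move by asking what the player "wants."

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in WIN_LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def best_move(b, me="X", opp="O"):
    """Design-stance prediction: search the whole game tree."""
    def score(b, turn):
        w = winner(b)
        if w == me:
            return 1
        if w == opp:
            return -1
        empty = [i for i, c in enumerate(b) if not c]
        if not empty:
            return 0  # draw
        vals = []
        for i in empty:
            b2 = list(b)
            b2[i] = turn
            vals.append(score(b2, opp if turn == me else me))
        return max(vals) if turn == me else min(vals)

    def value_of(i):
        b2 = list(b)
        b2[i] = me
        return score(b2, opp)

    return max((i for i, c in enumerate(b) if not c), key=value_of)

# X to move; X already holds squares 0 and 1.
board = ["X", "X", "", "O", "O", "", "", "", ""]

# Intentional-stance prediction: "X wants to win, so X completes its row" -> 2.
assert best_move(board) == 2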

Dennett claims that the intentional stance is the best strategy to adopt toward both humans and computers, since it works best at explaining the behavior of each.

Furthermore, there is no need to differentiate them in any way.

Though the intentional stance treats behavior as agent-driven, it takes no position on what is actually going on inside the human's or machine's internal workings.

This stance provides a neutral starting point from which to investigate cognitive competence without presupposing a particular explanation of what is going on behind the scenes.

Dennett sees no reason why AI should be impossible in theory since human mental abilities have developed organically.

Furthermore, by abandoning the concept of qualia and adopting an intentional stance that relieves people of the burden of speculating about what goes on behind cognition, two major impediments to solving the hard problem of consciousness are removed.

Dennett argues that since the human brain and computers are both machines, there is no good theoretical reason why humans should be capable of acquiring competence-driven understanding while AI should be intrinsically unable.

Consciousness in the traditional sense is illusory, and hence it is not a prerequisite for Strong AI.

Dennett does not believe that Strong AI is theoretically impossible.

He simply believes that society's technical sophistication is still at least fifty years away from producing it.

Strong AI development, according to Dennett, is not desirable.

Humans should strive to build AI tools, but Dennett believes that attempting to make computer pals or colleagues would be a mistake.

Such robots, he claims, would lack human moral intuitions and understanding, and hence would not be able to integrate into human society.

Humans do not need robots to provide friendship since they have each other.

Robots, even AI-enhanced machines, should be seen as tools to be utilized by humans alone.


 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Cognitive Computing; General and Narrow AI.


Further Reading:


Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1993. Consciousness Explained. London: Penguin.

Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA: MIT Press.

Dennett, Daniel C. 2008. Kinds of Minds: Toward an Understanding of Consciousness. New York: Basic Books.

Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton.

Dennett, Daniel C. 2019. “What Can We Do?” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, 41–53. London: Penguin Press.

Artificial Intelligence - Iterative AI Ethics In Complex Socio-Technical Systems

 



Title: The Need For Iterative And Evolving AI Ethics Processes And Frameworks To Ensure Relevant, Fair, And Ethical Scalable Complex Socio-Technical Systems.

Author: Jai Krishna Ponnappan




Ethics has strong fangs, but they are seldom bared in AI ethics today, so it is no surprise that AI ethics is criticized for lacking efficacy.


This essay claims that the 'ethics' of present-day AI ethics is largely ineffective, trapped in an 'ethical principles' approach and hence particularly vulnerable to manipulation, especially by industry players.

Using ethics as a replacement for law puts it at risk of being abused and misapplied.

This severely restricts what ethics can accomplish, and it is a major setback for the AI field and its implications for people and society.

This paper examines these dangers before turning to the efficacy of ethics and the critical contribution it can, and should, make to AI ethics right now.



Ethics is a potent weapon. 


Unfortunately, we seldom wield it in AI ethics, so it is no surprise that AI ethics is dubbed "ineffective."

This paper examines the different ethical procedures that have arisen in recent years in response to the widespread deployment and usage of AI in society, as well as the hazards that come with it. 

Lists of principles, ethical codes, suggestions, and guidelines are examples of these procedures. 


However, as many have shown, although these ethical innovations are exciting, they are also problematic: their usefulness has yet to be proven, and they are particularly susceptible to manipulation, notably by industry.


This is a setback for AI, as it severely restricts what ethics may do for society and people. 

However, as this paper demonstrates, the problem isn't that ethics is meaningless (or ineffective) in the face of current AI deployment; rather, ethics is being utilized (or manipulated) in such a manner that it is made ineffectual for AI ethics. 

The paper starts by describing the current state of AI ethics: AI ethics is essentially principle-based; that is, it adheres to a 'law' conception of ethics.

It then demonstrates how this ethical approach fails to accomplish what it claims to do. 

The second section of this paper focuses on the true worth of ethics – its 'efficacy,' which we describe as the capacity to notice the new as it develops on a continuous basis. 



We explain why, in today's AI ethics, the ability to resist cognitive and perceptual inertia, which leaves us passive in the face of new developments, is crucial.


Finally, although we acknowledge that the legalistic approach to ethics is not entirely incorrect, we argue that it is the end of ethics, not its beginning, and that it ignores the most valuable and crucial components of ethics. 

In many stakeholder quarters, there are several ongoing conversations and activities on AI ethics (policy, academia, industry and even the media). This is something we can all be happy about. 


Policymakers (e.g., the European Commission and the European Parliament) and business, in particular, are concerned about doing things right in order to promote ethical and responsible AI research and deployment in society. 


It is now widely acknowledged that if AI is adopted without adequate attention and thought for its potentially detrimental effects on people, particular groups, and society as a whole, things might go horribly wrong (including, for example, bias and discrimination, injustice, privacy infringements, increase in surveillance, loss of autonomy, overdependency on technology, etc.). 

The focus then shifts to ethics, with the goal of ensuring that AI is implemented in a way that respects deeply held social values and norms, placing them at the center of responsible technology development and deployment (Hagendorff, 2020; Jobin et al., 2019). 

The 'Ethics guidelines for trustworthy AI,' published by the European Commission's High-Level Expert Group on AI in 2019, are one example of contemporary ethics efforts (High-Level Expert Group on Artificial Intelligence, 2019).

However, the present use of the term "ethics" in the field of AI ethics is questionable.

Today's AI ethics is dominated by what British philosopher G.E.M. Anscombe refers to as a 'law conception of ethics,' i.e., a perspective on ethics that treats it as if it were a kind of law (Anscombe, 1958). 

It's customary to think of ethics as a "softer" version of the law (Jobin et al., 2019: 389). 


However, this is simply one approach to ethics, and it is problematic, as Anscombe has shown. It is problematic in at least two respects in terms of AI ethics. 

First, it is troublesome because it can be misapplied as a substitute for regulation (whether through law, policies, or standards).

Many authors have made this point over the past several years (Article 19, 2019; Greene et al., 2019; Hagendorff, 2020; Jobin et al., 2019; Klöver and Fanta, 2019; Mittelstadt, 2019; Wagner, 2018). Wagner, for instance, cites the case of a member of the Google DeepMind ethics team repeatedly asserting 'how ethically Google DeepMind was working, while simultaneously dodging any accountability for the data security crisis at Google DeepMind' at the Conference on World Affairs 2018 (Wagner, 2018).

'Ethical AI' discourse, according to Ochigame (2019), was "aligned strategically with a Silicon Valley campaign attempting to circumvent legally enforceable prohibitions of problematic technology." Ethics falls short in this regard because it lacks the instruments to enforce conformity. 


Ethics, according to Hagendorff, "lacks means to support its own normative assertions" (2020: 99). 


If ethics is about enforcing rules, then it is true that ethics is ineffective. 

Although ethical programs "bring forward great intentions," according to the human rights organization Article 19, "their general lack of accountability and enforcement measures" renders them ineffectual (Article 19, 2019: 18). 

Finally, and predictably, ethics is attacked for being ineffective. 

However, it's important to note that the problem isn't that ethics is being asked to perform something for which it is too weak or soft. 

It's more like it's being asked to do something it wasn't supposed to accomplish. 


Criticizing ethics for lacking the power to enforce compliance is like blaming a fork for not cutting meat properly: that is not what it is designed to do.


The goal of ethics is not to prescribe certain behaviors and then guarantee that they are followed. 

The issue occurs when it is utilized in this manner. 

This is especially true in the field of AI ethics, where ethical principles, norms, or criteria are expected to govern AI and guarantee that it does not harm people or society as a whole (e.g., the AI HLEG guidelines).

Some suggest that this ethical lapse is deliberate, motivated by a desire to ensure that AI is not governed by legislation, i.e., that greater flexibility remains available and that no firm boundaries are drawn constraining the industrial and economic interests tied to this technology (Klöver and Fanta, 2019).

For example, this criticism has been directed against the AI HLEG guidelines. 

Industry was extensively represented during debates at the European High-Level Expert Group on Artificial Intelligence (EU-HLEG), while academia and civil society did not have the same luxury, according to Article 19. 


While several non-negotiable ethical standards were initially specified in the text, they were eliminated from the final version owing to corporate pressure (Article 19, 2019: 18).

Using ethics to hinder the implementation of vital legal regulation is a significant and concerning abuse and misuse of ethics.

The result is ethics washing, as well as its cousins: ethics shopping, ethics shirking, and so on (Floridi, 2019; Greene et al., 2019; Wagner, 2018). 

Second, because the field of AI ethics is dominated by this 'law conception of ethics,' it fails to make full use of what ethics has to offer, namely its proper efficacy, despite the critical need for it.

What exactly is this efficacy of ethics, and what value might it provide to the field? The true fangs of ethics lie in a never-failing capacity to perceive the new (Laugier, 2013).


Ethics is basically a state of mind, a constantly renewed and nimble response to reality as it changes. 


The ethics of care has emphasized attention as a critical component of ethics (Tronto, 1993: 127). 

In this way, ethics is a strong instrument against cognitive and perceptual inertia, which prevents us from seeing what is different from before or in new settings, cultures, or circumstances, and hence necessitates a change in behavior (regulation included). 

This is particularly important for AI, given the significant changes and implications it has had and continues to have on society, as well as our basic ways of being and functioning. 

This ability to observe the environment is what keeps us from being boiled alive like the proverbial frog: it allows us to detect subtle changes as they happen.

Extended and deepened surveillance by governments and commercial enterprises, a growing dependence on technology, and the deployment of biased systems that discriminate against women and minorities are all heating the water around AI.


The changes AI brings to society must be carefully examined, and opposed when their negative consequences exceed their advantages.


In this way, ethics has a close relationship with the social sciences: both attempt to perceive what we would not otherwise notice, and ethics helps us look concretely at how the world evolves.

It aids in the cleaning of the lens through which we see the world so that we may be more aware of its changes (and AI does bring many of these). 

It is critical that ethics back us up in this respect. 

It enables us to be less passive in the face of these changes, allowing us to better direct them in ways that benefit people and society while also improving our quality of life. 


Hagendorff makes a similar point in his essay on the 'Ethics of AI Ethics,' disputing the prevalent deontological approach to ethics in AI ethics (what we've referred to as a legalistic approach to ethics in this article), whose primary goal is to 'limit, control, or direct' (2020: 112). 


He emphasizes the necessity for AI to adopt virtue ethics, which strives to 'broaden the scope of action, disclose blind spots, promote autonomy and freedom, and cultivate self-responsibility' (Hagendorff, 2020: 112). 

Other ethical theory frameworks that might be useful in today's AI ethics discussion include the Spinozist approach, which focuses on the growth or loss of agency and action capability. 

So, are we simply misunderstanding AI ethics, which, as we have seen, is currently dominated by a 'law conception of ethics'? Is today's legalistic approach to ethics entirely wrong? No, not at all.



The point, rather, is that principles, norms, and values, the legal conception of ethics so prevalent in AI ethics today, are the end of ethics rather than its beginning.


The word "end" has two meanings in this context. 

First, it is an end of ethics in the sense that it is the last destination of ethics, i.e., moulding laws, choices, behaviors, and acts in ways that are consistent with society's ideals. 

In this sense, ethics culminates in the creation of principles (as in the AI HLEG guidelines) or in the application of ethical principles, values, or standards to particular situations.

This operationalization of ethical standards can be observed, for example, in the ethics review procedure of the European Commission's research funding programs, or in ethics impact assessments, which examine how a new technique or technology might alter ethical norms and values.

These are unquestionably worthwhile endeavors that have a beneficial influence on society and people. 


Ethics, as the development of principles, is also useful in shaping policies and regulatory frameworks. 


The AI HLEG guidelines are closely tied to current policy and legislative developments at the EU level, such as the European Commission's "White Paper on Artificial Intelligence" (February 2020) and the European Parliament's proposed "Framework of ethical aspects of artificial intelligence, robotics, and related technologies" (April 2020).

Ethics clearly lays forth the rights and wrongs, as well as what should be done and what should be avoided. 

It is important to recall, however, that ethics as ethical principles is also an end of ethics in another sense: the point where it comes to a halt, where thought is paused, and where this never-ending attention stops.

As a result, when ethics is reduced to a collection of principles, norms, or criteria, it has reached its end.

There would be no need for ethics if we had attained a sufficient degree of certainty and confidence about what the correct judgments and actions are.



Ethics is about navigating muddy and dangerous seas while being vigilant. 


In the realm of AI, for example, ethical standards do not, by themselves, assist in the practical exploration of difficult topics such as fairness in extremely complex socio-technical systems. 


These must be thoroughly studied to ensure that we are not putting in place systems that violate deeply held norms and beliefs. 
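The point that fairness resists settlement by principle alone can be made concrete with a toy sketch (my own illustration, using made-up data, not from the essay): two standard formalizations of "fairness" can disagree about the very same set of decisions.

```python
# Hypothetical decisions: (group, predicted_positive, actually_positive).
decisions = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0),
]

def selection_rate(group):
    """Fraction of the group receiving a positive decision."""
    rows = [d for d in decisions if d[0] == group]
    return sum(d[1] for d in rows) / len(rows)

def true_positive_rate(group):
    """Fraction of genuinely qualified members receiving a positive decision."""
    rows = [d for d in decisions if d[0] == group and d[2] == 1]
    return sum(d[1] for d in rows) / len(rows)

# Demographic parity (equal selection rates across groups) is violated...
assert selection_rate("A") == 0.5 and selection_rate("B") == 0.25
# ...while equal opportunity (equal true-positive rates) is satisfied.
assert true_positive_rate("A") == true_positive_rate("B") == 0.5
```

Whether parity or equal opportunity is the right demand depends on context and values; the principle "be fair" does not by itself decide between them, which is exactly why ongoing ethical scrutiny of such systems is needed.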

Ethics is made worthless without a continual process of challenging what is or may be clear, of probing behind what seems to be resolved, and of keeping this inquiry alive. 

When ethics settles into established norms and principles, the process of ethics itself comes to an end.

Given AI's profound, huge, and broad influence on society, it is vital to keep ethics nimble and alive.

The ongoing renewal process of examining the world and the glasses through which we experience it — intentionally, consistently, and iteratively – is critical to AI ethics.





See also: 


AI ethics, law of AI, regulation of AI, ethics washing, EU HLEG on AI, ethical principles




Further Reading:



  • Anscombe, GEM (1958) Modern moral philosophy. Philosophy 33(124): 1–19.
  • Boddington, P (2017) Towards a Code of Ethics for Artificial Intelligence. Cham: Springer.
  • European Committee for Standardization (2017) CEN Workshop Agreement: Ethics assessment for research and innovation – Part 2: Ethical impact assessment framework (by the SATORI project). Available at: https://satoriproject.eu/media/CWA17145-23d2017 .
  • European Parliament JURI (April 2020) Framework of ethical aspects of artificial intelligence, robotics and related technologies, draft report (2020/2012(INL)). Available at: https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?lang=&reference=2020/2012 .
  • Floridi, L (2019) Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology 32: 185–193.
  • Gilligan, C (1982) In a Different Voice: Psychological Theory and Women’s Development. Cambridge: Harvard University Press.
  • Greene, D, Hoffmann, A, Stark, L (2019) Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, Hawaii, 2019, pp. 2122–2131.
  • Hagendorff, T (2020) The ethics of AI ethics: An evaluation of guidelines. Minds and Machines 30: 99–120.
  • High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Jansen, P, Brey, P, Fox, A, Maas, J, Hillas, B, Wagner, N, Smith, P, Oluoch, I, Lamers, L, van Gein, H, Resseguier, A, Rodrigues, R, Wright, D, Douglas, D (2019) Ethical analysis of AI and robotics technologies. SIENNA D4.4, August 2019. Available at: https://www.sienna-project.eu/digitalAssets/801/c_801912-l_1-k_d4.4_ethical-analysis–ai-and-r–with-acknowledgements.pdf
  • Jobin, A, Ienca, M, Vayena, E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389–399.
  • Klöver, C, Fanta, A (2019) No red lines: Industry defuses ethics guidelines for artificial intelligence. Available at: https://algorithmwatch.org/en/industry-defuses-ethics-guidelines-for-artificial-intelligence/
  • Laugier, S (2013) The will to see: Ethics and moral perception of sense. Graduate Faculty Philosophy Journal 34(2): 263–281.
  • López, JJ, Lunau, J (2012) ELSIfication in Canada: Legal modes of reasoning. Science as Culture 21(1): 77–99.
  • Mittelstadt, B (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1: 501–507.
  • Ochigame, R (2019) The invention of “Ethical AI”: How big tech manipulates academia to avoid regulation. The Intercept. Available at: https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/?comments=1
  • Rodrigues, R, Rességuier, A (2019) The underdog in the AI ethical and legal debate: Human autonomy. In: Ethics Dialogues. Available at: https://www.ethicsdialogues.eu/2019/06/12/the-underdog-in-the-ai-ethical-and-legal-debate-human-autonomy/
  • Tronto, J (1993) Moral Boundaries: A Political Argument for an Ethic of Care. New York: Routledge.
  • Wagner, B (2018) Ethics as an escape from regulation: From ethics-washing to ethics-shopping. In: Bayamlioglu, E, Baraliuc, I, Janssens, L, et al. (eds) Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen. Amsterdam: Amsterdam University Press, pp. 84–89.


