Optical Computing Systems To Speed Up AI And Machine Learning

Artificial intelligence and machine learning are influencing our lives in a variety of minor but significant ways right now. 

For example, AI and machine learning programs recommend content on streaming services like Netflix and Spotify that we are likely to enjoy. 

These technologies are expected to have an even greater influence on society in the near future, through activities such as driving fully autonomous cars, enabling sophisticated scientific research, and aiding medical breakthroughs. 

However, the computers that are utilized for AI and machine learning use a lot of power. 

The demand for computing power associated with these technologies is now doubling every three to four months. 

Furthermore, cloud computing data centers employed by AI and machine learning applications use more electricity each year than certain small nations. 

It's clear that this level of energy usage cannot be sustained. 

A research team led by the University of Washington has created new optical computing hardware for AI and machine learning that is far faster and uses much less energy than traditional electronics. 

Another issue addressed in the study is the 'noise' inherent in optical computing, which may obstruct computation accuracy. 

The team showcases an optical computing system for AI and machine learning in a new study, published in January in Science Advances, that not only mitigates noise but also uses some of it as input to boost the creative output of the artificial neural network inside the system. 

Changming Wu, a UW doctoral student in electrical and computer engineering, said, "We've constructed an optical computer that is quicker than a typical digital computer." 

"Moreover, this optical computer can develop new objects based on random inputs provided by optical noise, which most researchers have attempted to avoid." 

Optical computing noise is primarily caused by stray light particles, or photons, produced by the operation of lasers inside the device, as well as by background thermal radiation. 

To combat noise, the researchers connected their optical computing core to a Generative Adversarial Network (GAN), a type of machine learning network. 

The researchers experimented with a variety of noise reduction strategies, including utilizing part of the noise created by the optical computing core as random inputs for the GAN. 
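The idea of reusing hardware noise as the generator's random input can be sketched in plain NumPy. Everything below is an illustrative stand-in, not the team's actual system: the noise model, the one-layer generator, and the image dimensions are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_optical_noise(n):
    """Stand-in for noise sampled from the optical core (shot noise from
    stray photons plus thermal background), modeled here as Gaussian."""
    return rng.normal(0.0, 1.0, size=n)

class TinyGenerator:
    """A one-layer generator mapping a noise vector to an 8x8 'digit' image."""
    def __init__(self, latent_dim=16, img_pixels=64):
        self.W = rng.normal(0, 0.1, size=(img_pixels, latent_dim))
        self.b = np.zeros(img_pixels)

    def forward(self, z):
        # tanh keeps pixel values in [-1, 1], as in typical GAN generators
        return np.tanh(self.W @ z + self.b)

gen = TinyGenerator()
z = measured_optical_noise(16)   # hardware noise replaces pseudo-random input
image = gen.forward(z).reshape(8, 8)
print(image.shape)
```

The only conceptual change from a textbook GAN is the source of `z`: instead of a pseudo-random generator, the latent vector comes from physical noise the hardware produces anyway.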

The researchers, for example, gave the GAN the job of learning how to handwrite the number "7" in a human-like manner. 

The optical computer could not simply print the number in a predetermined typeface. 

It had to learn the task in the same way that a kid would, by studying visual examples of handwriting and practicing until it could accurately write the number. 

Because the optical computer lacked a human hand for writing, its "handwriting" consisted of creating digital pictures with a style close to but not identical to the examples it had examined. 

"Instead of teaching the network to read handwritten numbers, we taught it to write numbers using visual examples of handwriting," said senior author Mo Li, an electrical and computer engineering professor at the University of Washington. 

"We also demonstrated that the GAN can alleviate the detrimental effect of optical computing hardware noise by utilizing a training technique that is resilient to errors and noise, with the support of our Duke University computer science teammates. 

Furthermore, the network treats the noise as random input, which is required for the network to create output instances." 

After learning from handwritten examples of the number seven in a standard AI-training image dataset, the GAN practiced writing "7" until it could do so effectively. 

It developed its own writing style along the way and could write numbers from one to ten in computer simulations. 

The next stage will be to scale up the device using existing semiconductor manufacturing methods. 

To attain wafer-scale technology, the team wants to employ an industrial semiconductor foundry rather than build the next iteration of the device in a lab. 

A larger-scale device will boost performance even further, allowing the research team to undertake more sophisticated tasks, such as producing artwork and even video, in addition to handwriting generation. 

"This optical system represents a computer hardware architecture that can enhance the creativity of artificial neural networks used in AI and machine learning," Li explained. 

"More importantly, it demonstrates the viability of this system at a large scale where noise and errors can be mitigated and even harnessed." 

"AI applications are using so much energy that it will be unsustainable in the future. 

This technique has the potential to minimize energy usage, making AI and machine learning more environmentally friendly, as well as incredibly quick, resulting in greater overall performance." 

Although many people are unaware of it, artificial intelligence (AI) and machine learning are now a part of our regular lives online. 

Intelligent ranking algorithms, for example, help search engines like Google, video streaming services like Netflix utilize machine learning to customize movie suggestions, and cloud computing data centers employ AI and machine learning to help with a variety of services. 

The requirements for AI are many, diverse, and difficult. 

As these needs rise, so does the need to improve AI performance while also lowering its energy usage. 

The energy costs involved with AI and machine learning on a broad scale may be startling. 

Cloud computing data centers, for example, use an estimated 200 terawatt-hours per year (enough energy to power a small nation), and this consumption is expected to grow enormously in the coming years, posing major environmental risks. 

Now, a team led by associate professor Mo Li of the University of Washington Department of Electrical and Computer Engineering (UW ECE) has developed a method, in partnership with researchers from the University of Maryland, that might help speed up AI while lowering energy and environmental costs. 

The researchers described an optical computing core prototype that employs phase-change material (a substance similar to what CD-ROMs and DVDs use to record information) in a paper published in Nature Communications on January 4, 2021. 

Their method is quick, energy-efficient, and capable of speeding up AI and machine learning neural networks. 

The technique is also scalable and immediately relevant to cloud computing, which employs AI and machine learning to power common software applications like search engines, streaming video, and a plethora of apps for phones, desktop computers, and other devices. 

"The technology we designed is geared to execute artificial neural network algorithms, which are a backbone method for AI and machine learning," Li said. 

"This breakthrough in research will make AI centers and cloud computing significantly more energy efficient and speedier." 

The team is one of the first in the world to employ phase-change material in optical computing to allow artificial neural networks to recognize images. 

Recognizing an object in a photo is simple for humans, but it requires a great deal of computing power for AI. 

Because it demands so much computation, image recognition serves as a benchmark test of a neural network's computational speed and accuracy. 

This test was readily passed by the team's optical computing core, which was running an artificial neural network. 

"Optical computing initially surfaced as a concept in the 1980s, but it eventually died in the shadow of microelectronics," said Changming Wu, a graduate student in Li's group. 

"It has now been revived due to the end of Moore's law [the observation that the number of transistors in a dense integrated circuit doubles every two years], advances in integrated photonics, and the demands of AI computing. That's a lot of fun."

Optical computing is fast because it transmits data at incredible rates using light generated by lasers, rather than the considerably slower electricity used in typical digital electronics. 

The prototype built by the research team was designed to speed up the computation of an artificial neural network, which is measured in billions or trillions of operations per second. 
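To make the "operations per second" figure of merit concrete, one can count the multiply-accumulate (MAC) operations in a dense matrix-vector product, the core workload such accelerators target, and time it. The layer sizes and the conventional factor of 2 ops per MAC below are illustrative choices, not figures from the paper.

```python
import time
import numpy as np

# Count the MAC operations in one dense layer and estimate throughput in
# operations per second (billions = GOPS, trillions = TOPS).
n_in, n_out = 1024, 1024
W = np.random.rand(n_out, n_in).astype(np.float32)
x = np.random.rand(n_in).astype(np.float32)

macs_per_pass = n_in * n_out        # one multiply and one add per weight
reps = 200
t0 = time.perf_counter()
for _ in range(reps):
    y = W @ x                       # the operation a photonic core accelerates
elapsed = time.perf_counter() - t0
print(f"{2 * macs_per_pass * reps / elapsed / 1e9:.1f} GOPS")
```

Running this on a laptop CPU typically lands in the low GOPS range, which is the gap optical matrix-vector multiplication aims to close by orders of magnitude.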

Future versions of their technology, according to Li, have the potential to run much faster. 

"This is a prototype, and we're not utilizing the greatest speed possible with optics just yet," Li said. 

"Future generations have the potential to accelerate by at least an order of magnitude." 

In the ultimate real-world application of this technology, any program powered by optical computing over the cloud (such as search engines, video streaming, and cloud-enabled gadgets) will run faster, enhancing performance. 

Li's research team took their prototype a step further by sending light through phase-change material to store data and perform computing operations. 

Unlike transistors in digital electronics, which need a constant voltage to represent and maintain the zeros and ones required for binary computing, phase-change material requires no energy to hold its state. 

When phase-change material is heated precisely by lasers, it shifts between a crystalline and an amorphous state, much like a CD or DVD. 

The material then retains that condition, or "phase," as well as the information that phase conveys (a zero or one), until the laser heats it again. 
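A toy model makes the non-volatility argument concrete: writing costs a laser pulse only when the phase actually changes, while reading is just passing light through, with no holding power. The transmission values and energy units below are arbitrary, illustrative numbers, not device measurements.

```python
class PCMCell:
    """Toy model of a phase-change cell used as a non-volatile optical weight.

    A laser pulse switches the material between an amorphous state (low
    optical transmission) and a crystalline one (high transmission); the
    state then persists with zero holding power, unlike a transistor that
    needs a constant voltage to retain a bit."""

    TRANSMISSION = {"amorphous": 0.2, "crystalline": 0.9}  # illustrative values

    def __init__(self):
        self.state = "amorphous"           # retained until rewritten
        self.write_energy_used = 0.0

    def write(self, bit):
        """A laser pulse costs energy once, and only when the state changes."""
        target = "crystalline" if bit else "amorphous"
        if target != self.state:
            self.state = target
            self.write_energy_used += 1.0  # arbitrary energy unit per pulse

    def read(self, light_in=1.0):
        """Reading = passing light through; no holding power is consumed."""
        return light_in * self.TRANSMISSION[self.state]

cell = PCMCell()
cell.write(1)
energy_after_write = cell.write_energy_used
_ = [cell.read() for _ in range(1000)]     # many reads, zero extra write energy
assert cell.write_energy_used == energy_after_write
```

The key contrast with a volatile memory is that `write_energy_used` stays flat no matter how many reads occur, which is the energy argument Li makes below.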

"There are other competing schemes to construct optical neural networks," Li explained, "but we believe that using phase-change material has a unique advantage in terms of energy efficiency, because the data is encoded in a non-volatile way, meaning that the device does not consume a constant amount of power to store the data." 

"Once the info is written there, it stays there indefinitely. You don't need to provide electricity to keep it in place." 

These energy savings matter because they are multiplied across millions of computer servers in hundreds of data centers around the globe, yielding a huge reduction in energy consumption and environmental impact. 

By patterning the phase-change material used in their optical computing core into nanostructures, the team was able to improve it even further. 

These tiny structures increase the material's durability and stability, as well as its contrast (the ability to discriminate between zero and one in binary code) and computing capacity and accuracy. 

Li's research team also fully integrated the phase-change material into the prototype's optical computing core. 

"We're doing all we can to incorporate optics here," Wu said. 

"We layer the phase-change material on top of a waveguide, which is a thin wire that we cut into the silicon chip to channel light. 

You may think of it as an electrical wire for light, or as an optical fiber etched into the chip." 

Li's research group says that the technology it created is among the most scalable approaches to optical computing now available, with the potential to be applied to massive systems such as networked cloud computing servers in data centers across the globe. 

"Our design architecture is scalable to a much, much bigger network," Li added, "and can tackle hard artificial intelligence tasks ranging from massive, high-resolution image identification to video processing and video image recognition."

"We feel our system is the most promising and scalable to that degree." 

"Of course, this will require large-scale semiconductor manufacturing. 

Our design and the prototype's material are both highly compatible with semiconductor foundry processes."

Looking forward, Li said he could see optical computing devices like the one his team produced boosting current technology's processing capacity and allowing the next generation of artificial intelligence. 

To take the next step in that direction, his research team will collaborate closely with UW ECE associate professor Arka Majumdar and assistant professor Sajjad Moazeni, both specialists in large-scale integrated photonics and microelectronics, to scale up the prototype they constructed. 

And, once the technology has been scaled up enough, it will lend itself to future integration with energy-intensive data centers, speeding up the performance of cloud-based software applications while lowering energy consumption. 

"The computers in today's data centers are already linked via optical fibers. 

This enables ultra-high bandwidth transmission, which is critical," Li said. 

"Because fiber optics infrastructure is already in place, it's reasonable to do optical computing in such a setup. It's fantastic, and I believe the moment has come for optical computing to resurface."

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Optical Computing, Optical Computing Core, AI, Machine Learning, AI Systems.

Further Reading:

Changming Wu et al, Harnessing optoelectronic noises in a photonic generative network, Science Advances (2022). DOI: 10.1126/sciadv.abm2956. www.science.org/doi/10.1126/sciadv.abm2956

Artificial Intelligence - What Is The Liability Of Self-Driving Vehicles?


Driverless cars may function completely or partly without the assistance of a human driver.

Driverless automobiles, like other AI products, confront difficulties with liability, responsibility, data protection, and customer privacy.

Driverless cars have the potential to eliminate human carelessness while also providing safe transportation for passengers.

They have been engaged in mishaps despite their potential.

The Autopilot software on a Tesla SUV may have failed to notice a huge vehicle crossing the highway in a well-publicized 2016 accident.

A Tesla Autopilot may have been involved in the death of a 49-year-old woman in 2018.

A class action lawsuit was filed against Tesla as a result of these occurrences, which the corporation resolved out of court.

Additional worries about autonomous cars have arisen as a result of bias and racial prejudice in machine vision and face recognition.

Current driverless cars may be better at spotting people with lighter skin, according to Georgia Institute of Technology researchers.

Product liability provides some much-needed solutions to such problems.

In the United Kingdom, product liability claims are governed by the Consumer Protection Act of 1987 (CPA).

This act implements the European Union's (EU) Product Liability Directive 85/374/EEC, which holds manufacturers liable for product defects, i.e., products that are not as safe as they should be when purchased.

This contrasts with U.S. law addressing product liability, which is fragmented and largely controlled by common law and a succession of state acts.

The Uniform Commercial Code (UCC) offers remedies where a product fails to fulfill stated statements, is not merchantable, or is inappropriate for its specific use.

In general, manufacturers are held accountable for injuries caused by their faulty goods, and this responsibility may be handled in terms of negligence or strict liability.

A defect in this situation could be a manufacturing defect, where the driverless vehicle does not satisfy the manufacturer's specifications and standards; a design defect, which can result when an alternative design would have prevented an accident; or a warning defect, where there is a failure to provide adequate warning about a driverless car's operation.

To evaluate product liability, the six levels of automation specified by the Society of Automotive Engineers (SAE) International should be taken into account: Level 0, full control of a vehicle by a driver; Level 1, a human driver assisted by an automated system; Level 2, an automated system partially conducting the driving while a human driver monitors the environment and performs most of the driving; Level 3, an automated system doing the driving and monitoring the environment, with the human driver taking back control when signaled; Level 4, the driverless vehicle conducting the driving and monitoring the environment but restricted to certain environments; and Level 5, a driverless vehicle doing, without any restrictions, everything a human driver would.

In Levels 1–3 that involve human-machine interaction, where it is discovered that the driverless vehicle did not communicate or send out a signal to the human driver or that the autopilot software did not work, the manufacturer will be liable based on product liability.

At Level 4 and Level 5, liability for defective products will fully apply.
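The liability reasoning above can be caricatured as a lookup over the SAE levels. This is a gross simplification for illustration only: the level names follow SAE J3016, but the `primary_liability` function and its `system_failed_to_alert` flag are hypothetical stand-ins for the signaling failures the text discusses, not a statement of law.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def primary_liability(level: SAELevel, system_failed_to_alert: bool = False) -> str:
    """First-pass attribution following the text's reasoning: at Levels 1-3
    the manufacturer is liable when the system malfunctioned or failed to
    signal the driver, otherwise the human driver bears the duty of care;
    at Levels 4-5 product liability applies fully."""
    if level >= SAELevel.HIGH_AUTOMATION:
        return "manufacturer"
    if level == SAELevel.NO_AUTOMATION:
        return "driver"
    return "manufacturer" if system_failed_to_alert else "driver"

print(primary_liability(SAELevel.CONDITIONAL_AUTOMATION, system_failed_to_alert=True))
# -> manufacturer
```

Real cases turn on negligence, strict liability, and insurance rules discussed below, none of which reduce to a single flag; the sketch only fixes the level structure in mind.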

Manufacturers have a duty of care to ensure that any driverless vehicle they manufacture is safe when used in any foreseeable manner.

Failure to exercise this duty will make them liable for negligence.

In some other cases, even when manufacturers have exercised all reasonable care, they will still be liable for unintended defects as per the strict liability principle.

The liability for the driver, especially in Levels 1–3, could be based on tort principles, too.

The requirement of article 8 of the 1949 Geneva Convention on Road Traffic, which states that “[e]very vehicle or combination of vehicles proceeding as a unit shall have a driver,” may not be fulfilled in cases where a vehicle is fully automated.

In some U.S. states, namely, Nevada and Florida, the word driver has been changed to controller, and the latter means any person who causes the autonomous technology to engage; the person must not necessarily be present in the vehicle.

A driver or controller becomes responsible if it is proved that the obligation of reasonable care was not performed by the driver or controller or they were negligent in the observance of this duty.

In certain other cases, victims will be reimbursed only by their own insurance companies under no-fault liability.

Victims may also base their claims for damages on the strict liability principle without having to present proof of the driver’s fault.

In this situation, the driver may demand that the manufacturer be joined in a lawsuit for damages if the driver or the controller feels that the accident was the consequence of a flaw in the product.

In any case, proof of the driver's or controller's negligence will reduce the manufacturer's liability.

Third parties may sue manufacturers directly for injuries caused by faulty items under product liability.

According to MacPherson v. Buick Motor Co. (1916), in which the court found that an automobile manufacturer's duty for a defective product extends beyond the initial purchaser, liability applies even where there is no privity of contract between the victim and the manufacturer.

The question of product liability for self-driving vehicles is complex.

The transition from manual to smart automated control transfers responsibility from the driver to the manufacturer.

The complexity of driving modes, as well as the interaction between the human operator and the artificial agent, is one of the primary challenges concerning accident responsibility.

In the United States, the law of motor vehicle product liability relating to flaws in self-driving cars is still in its infancy.

While the Department of Transportation and, especially, the National Highway Traffic Safety Administration give some basic recommendations on automation in driverless vehicles, Congress has yet to adopt self-driving car law.

In the United Kingdom, the Automated and Electric Vehicles Act 2018 makes insurers liable by default for accidents involving automated vehicles that result in death, bodily injury, or property damage, provided the vehicles were in self-driving mode and insured at the time of the accident.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Accidents and Risk Assessment; Product Liability and AI; Trolley Problem.

Further Reading:

Geistfeld, Mark A. 2017. “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation.” California Law Review 105: 1611–94.

Hevelke, Alexander, and Julian Nida-Rümelin. 2015. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21, no. 3 (June): 619–30.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “Towards a Legal Definition of Machine Intelligence: The Argument for Artificial Personhood in the Age of Deep Learning.” In ICAIL ’17: Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, edited by Jeroen Keppens and Guido Governatori, 119–28. New York: Association for Computing Machinery.

Luetge, Christoph. 2017. “The German Ethics Code for Automated and Connected Driving.” Philosophy & Technology 30 (September): 547–58.

Rabin, Robert L., and Kenneth S. Abraham. 2019. “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era.” Virginia Law Review 105, no. 1 (March): 127–71.

Wilson, Benjamin, Judy Hoffman, and Jamie Morgenstern. 2019. “Predictive Inequity in Object Detection.” https://arxiv.org/abs/1902.11097.

Artificial Intelligence - How Do Autonomous Vehicles Leverage AI?

Using a virtual driver system, driverless automobiles and trucks, also known as self-driving or autonomous vehicles, are capable of moving through settings with little or no human control.

A virtual driver system is a set of characteristics and capabilities that augment or replicate the actions of an absent driver to the point that, at the maximum degree of autonomy, the driver may not even be present.

Diverse technology uses, restricting circumstances, and categorization methods make reaching an agreement on what defines a driverless car difficult.

A semiautonomous system, in general, is one in which the human performs certain driving functions (such as lane keeping) while others are performed autonomously (such as acceleration and deceleration).

All driving activities are autonomous only under certain circumstances in a conditionally autonomous system.

All driving duties are automated in a fully autonomous system.

Automobile manufacturers, technology businesses, automotive suppliers, and universities are all testing and developing applications.

Each builder's car or system, as well as the technical road that led to it, demonstrates a diverse range of technological answers to the challenge of developing a virtual driving system.

Ambiguities exist at the level of defining circumstances, so that the same technological system may be characterized in a variety of ways depending on factors such as location, speed, weather, traffic density, human attention, and infrastructure.

When individual driving duties are operationalized for feature development and context plays a role in developing solutions, more complexity is generated (such as connected vehicles, smart cities, and regulatory environment).

Because of this complication, producing driverless cars often necessitates collaboration across several roles and disciplines of study, such as hardware and software engineering, ergonomics, user experience, legal and regulatory, city planning, and ethics.

The development of self-driving automobiles is both a technical and a socio-cultural enterprise.

The distribution of mobility tasks across an array of equipment to collectively perform a variety of activities such as assessing driver intent, sensing the environment, distinguishing objects, mapping and wayfinding, and safety management are among the technical problems of engineering a virtual driver system.

LIDAR, radar, computer vision, global positioning, odometry, and sonar are among the hardware and software components of a virtual driving system.

There are many approaches to solving discrete autonomous movement problems.

With cameras, maps, and sensors, sensing and processing can be centralized in the vehicle, or it can be distributed throughout the environment and across other vehicles, as with intelligent infrastructure and V2X (vehicle to everything) capability.

The burden and scope of this processing, and the scale of the problems to be solved, are closely related to the expected level of human attention and intervention. As a result, the most frequently referenced classification of driverless capability, formalized by the Society of Automotive Engineers along the lines of human attentional demands and intervention requirements, has been widely adopted.

This classification defines six levels of driving automation, ranging from Level 0 to Level 5.

Level 0 refers to no automation, meaning the human driver is solely responsible for latitudinal control (steering) and longitudinal control (acceleration and deceleration).

At Level 0, the human driver is in charge of monitoring the environment and reacting to any unexpected safety hazards.

Automated systems that take control of longitudinal or latitudinal control are classified as Level 1, or driver aid.

The driver is in charge of observation and intervention.

Level 2 denotes partial automation, in which the virtual driver system is in charge of both lateral and longitudinal control.

The human driver is deemed to be in the loop, which means that they are in charge of monitoring the environment and acting in the event of a safety-related emergency.

Commercially available systems have not yet advanced beyond Level 2 capability.

The monitoring capability of the virtual driving system distinguishes Level 3, conditional autonomy, from Level 2.

At this stage, the human driver may be disconnected from the surroundings and depend on the autonomous system to keep track of it.

The person is required to react to calls for assistance in a range of situations, such as during severe weather or in construction zones.

A navigation system (e.g., GPS) is not required at this level.

To operate at Level 2 or Level 3, a vehicle does not need a map or a specific destination.

A human driver is not needed to react to a request for intervention at Level 4, often known as high automation.

The virtual driving system is in charge of navigation, locomotion, and monitoring.

When a specific condition cannot be satisfied, such as when a navigation destination is obstructed, it may request that a driver intervene.

If the human driver chooses not to intervene, the system may safely stop or redirect, depending on the engineering approach.

The classification of this situation is based on standards of safe driving, which are established not only by technical competence and environmental circumstances, but also by legal and regulatory agreements and lawsuit tolerance.

Level 5 autonomy, often known as complete automation, refers to a vehicle that is capable of doing all driving activities in any situation that a human driver could handle.

Although Level 4 and Level 5 systems do not need the presence of a person, they still necessitate substantial technological and social cooperation.

Although Leonardo da Vinci is credited with the concept of a self-propelled cart, modern efforts to construct autonomous vehicles date back to the 1920s.

In his 1939 New York World's Fair Futurama display, Norman Bel Geddes envisaged a smart metropolis of the future inhabited by self-driving automobiles.

Automobiles, according to Bel Geddes, would be outfitted with "technology that would rectify the mistakes of human drivers" by 1960.

General Motors popularized the concept of smart infrastructure in the 1950s by building an "automated highway" with steering-assist circuits.

In 1960, the company tested a working prototype car but, owing to the high cost of infrastructure, quickly shifted its focus from smart cities to smart cars.

A team led by Sadayuki Tsugawa of Japan's Tsukuba Mechanical Engineering Laboratory created an early prototype of an autonomous car.

Their 1977 vehicle operated under predefined environmental conditions dictated by lateral guiding rails.

The vehicle used cameras to track the rails, and most of the processing equipment was carried on board.

In the 1980s, the European research initiative EUREKA pooled funding and expertise to advance the state of the art in cameras and processing for autonomous cars.

At the same time, Carnegie Mellon University in Pittsburgh, Pennsylvania, devoted its resources to research on autonomous navigation using GPS data.

Since then, automakers including General Motors, Tesla, and Ford Motor Company, as well as technology firms like ARGO AI and Waymo, have been working on autonomous cars or critical components.

The technology is becoming less dependent on very limited circumstances and more adaptable to real-world scenarios.

Manufacturers are currently producing Level 4 autonomous test cars, and tests are being conducted in real-world traffic and weather conditions.

Commercially accessible Level 4 self-driving cars are still a long way off.

There are supporters and opponents of autonomous driving.

Supporters point to a number of benefits that address social problems, environmental concerns, efficiency, and safety.

The provision of mobility services and a degree of autonomy to those who do not already have access, such as those with disabilities (e.g., blindness or motor function impairment) or those who are unable to drive, such as the elderly and children, is one such social benefit.

The capacity to improve fuel economy by managing acceleration and braking has environmental benefits.

Because networked cars may go bumper to bumper and are routed according to traffic optimization algorithms, congestion is expected to be reduced.

Finally, self-driving vehicles have the potential to be safer.

They may be able to handle complicated information more quickly and thoroughly than human drivers, resulting in fewer collisions.

The negative repercussions of self-driving cars may fall into any of these same areas.

In terms of society, driverless cars may limit access to mobility and municipal services.

Autonomous mobility may be heavily regulated, costly, or limited to places that are inaccessible to low-income commuters.

Non-networked or manually operated cars might be kept out of intelligent geo-fenced municipal infrastructure.

Furthermore, if no adult or responsible human party is present during transportation, autonomous automobiles may pose a safety concern for some susceptible passengers, such as children.

Greater convenience may have environmental consequences.

Drivers may sleep or work while driving autonomously, which may have the unintended consequence of extending commutes and worsening traffic congestion.

Another security issue is widespread vehicle hacking, which could bring individual automobiles and trucks, or even a whole city, to a halt. 

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Accidents and Risk Assessment; Autonomous and Semiautonomous Systems; Autonomy and Complacency; Intelligent Transportation; Trolley Problem.

Further Reading:

Antsaklis, Panos J., Kevin M. Passino, and Shyh J. Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems Magazine 11, no. 4: 5–13.

Bel Geddes, Norman. 1940. Magic Motorways. New York: Random House.

Bimbraw, Keshav. 2015. “Autonomous Cars: Past, Present, and Future—A Review of the Developments in the Last Century, the Present Scenario, and the Expected Future of Autonomous Vehicle Technology.” In ICINCO: 2015—12th International Conference on Informatics in Control, Automation and Robotics, vol. 1, 191–98. Piscataway, NJ: IEEE.

Kröger, Fabian. 2016. “Automated Driving in Its Social, Historical and Cultural Contexts.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 41–68. Berlin: Springer.

Lin, Patrick. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 69–85. Berlin: Springer.

Weber, Marc. 2014. “Where To? A History of Autonomous Vehicles.” Computer History Museum. https://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/.

Artificial Intelligence - What Is Swarm Intelligence and Distributed Intelligence?

Distributed intelligence is the obvious next step beyond developing single autonomous agents: building groups of distributed autonomous agents that coordinate themselves.

A multi-agent system is made up of many agents.

Communication is a prerequisite for cooperation.

The fundamental concept is to allow for distributed problem-solving rather than employing a collection of agents as a simple parallelization of the single-agent technique.

Agents effectively cooperate, exchange information, and assign duties to one another.

Sensor data, for example, is exchanged to learn about the current state of the environment, and a task is assigned to whichever agent is best positioned to complete it at the time.

Agents might be software or embodied agents in the form of robots, resulting in a multi-robot system.

RoboCup Soccer (Kitano et al. 1997) is an example of this, in which two teams of robots play soccer against each other.

Typical challenges include detecting the ball cooperatively and sharing that knowledge, as well as assigning tasks, such as who will go after the ball next.

Agents may have a complete global perspective or simply a partial picture of the surroundings.

Restricting information to the local area may reduce the complexity of both the individual agent and the overall approach.

Regardless of their local perspective, agents may communicate, disseminate, and transmit information across the agent group, resulting in a distributed collective vision of global situations.

Three distinct concepts may be used to construct distributed intelligence: a scalable decentralized system, a non-scalable decentralized system, and a decentralized system with central components.

Without a master-slave hierarchy or a central control element, all agents in scalable decentralized systems function in equal roles.

Because the system only allows for local agent-to-agent communication, there is no need for all agents to coordinate with each other.

This allows for potentially huge system sizes.

All-to-all communication is an important aspect of the coordination mechanism in non-scalable decentralized systems, but it may become a bottleneck in systems with too many agents.

A typical RoboCup-Soccer system, for example, requires all robots to cooperate with all other robots at all times.

Finally, in decentralized systems with central components, the agents may interact with one another through a central server (e.g., cloud) or be coordinated by a central control.

It is feasible to mix the decentralized and central approaches by delegating basic tasks to the agents, who will complete them independently and locally, while more difficult activities will be managed centrally.

Vehicular ad hoc networks are an example use case (Liang et al. 2015).

Each agent is self-contained, yet collaboration aids in traffic coordination.

For example, intelligent automobiles may build dynamic multi-hop networks to notify others about an accident that is still hidden from view.

For a safer and more efficient traffic flow, cars may coordinate passing moves.

All of this may be accomplished by worldwide communication with a central server or, depending on the stability of the connection, through local car-to-car communication.
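As an illustrative sketch (not any deployed VANET protocol), local car-to-car relay of such an alert can be modeled as a breadth-first flood over a proximity graph; the function name, positions, and radio range below are all hypothetical:

```python
import numpy as np

def reachable_by_hops(positions, source, comm_range=100.0):
    """Return the set of cars an alert from `source` reaches via multi-hop relay."""
    pos = np.asarray(positions, float)
    n = len(pos)
    # adjacency: two cars can communicate when within radio range
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    adj = (d <= comm_range) & ~np.eye(n, dtype=bool)
    reached, frontier = {source}, {source}
    while frontier:                        # breadth-first flood of the alert
        nxt = set()
        for i in frontier:
            for j in np.flatnonzero(adj[i]):
                if int(j) not in reached:
                    nxt.add(int(j))
        reached |= nxt
        frontier = nxt
    return reached

# Four cars 80 m apart form a chain; a fifth car 260 m further is out of reach.
cars = [(0, 0), (80, 0), (160, 0), (240, 0), (500, 0)]
print(reachable_by_hops(cars, source=0))
```

Even though the last car in the chain is well outside the 100 m radio range of the source, the alert still reaches it hop by hop; the isolated car does not receive it.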

Natural swarm systems and artificial, designed distributed systems are combined in swarm intelligence research.

Extracting fundamental principles from decentralized biological systems and translating them into design principles for decentralized engineering systems is a core notion in swarm intelligence (scalable decentralized systems as defined above).

Swarm intelligence was inspired by flocks, swarms, and herds' collective activities.

Social insects such as ants, honeybees, wasps, and termites are a good example.

These swarm systems are built on self-organization and work in a fundamentally decentralized manner.

Crystallization, pattern formation in embryology, and synchronization in swarms are examples of self-organization, which is a complex interplay of positive feedback (deviations are amplified) and negative feedback (deviations are damped).

In swarm intelligence, four key features of systems are investigated:

• The system is made up of a large number of autonomous agents that are homogeneous in terms of their capabilities and behaviors.

• Each agent follows a set of relatively simple rules compared to the task's complexity.

• Each agent relies only on local information and local interactions with nearby agents.

• The resulting system behavior is heavily reliant on agent interaction and collaboration.

Reynolds (1987) produced a seminal paper describing flocking behavior in birds based on three basic local rules: alignment (align your direction of movement with your neighbors), cohesion (remain close to your neighbors), and separation (keep a minimal distance to any agent).

As a consequence, a self-organizing flocking behavior emerges that mimics real-life flocks.
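The three rules can be sketched in a few lines. The following minimal, illustrative implementation (the weights and radii are arbitrary choices, not Reynolds' original parameters) updates every agent using only local neighborhood information:

```python
import numpy as np

def boids_step(pos, vel, radius=1.0, w_align=0.05, w_cohere=0.01,
               w_sep=0.1, min_dist=0.3):
    """Apply alignment, cohesion, and separation to every agent once."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < radius) & (d > 0)          # local neighborhood only
        if not nbrs.any():
            continue
        # alignment: steer toward the neighbors' mean heading
        new_vel[i] += w_align * (vel[nbrs].mean(axis=0) - vel[i])
        # cohesion: steer toward the neighbors' center of mass
        new_vel[i] += w_cohere * (pos[nbrs].mean(axis=0) - pos[i])
        # separation: steer away from agents that are too close
        close = nbrs & (d < min_dist)
        if close.any():
            new_vel[i] += w_sep * (pos[i] - pos[close].mean(axis=0))
    return pos + new_vel, new_vel
```

Iterating `boids_step` from random initial states yields the emergent flock: headings converge and agents cluster without any global controller ever being consulted.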

By depending only on local interactions between agents, a high level of resilience may be achieved.

Any agent, at any moment, has only a limited understanding of the system's global state (swarm-level state) and relies on communication with nearby agents to complete its duty.

Because the swarm's knowledge is distributed, there is typically no single point of failure.

A perfectly homogeneous swarm has a high degree of redundancy; that is, all agents have the same capabilities and can therefore be replaced by any other.

By depending only on local interactions between agents, a high level of scalability may be obtained.

Due to the distributed data storage architecture, there is less need to synchronize data or keep it coherent.

Because the communication and coordination overhead for each agent is dictated by the size of its neighborhood, the same algorithms may be employed for systems of nearly any scale.

Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are two well-known examples of swarm intelligence in engineered systems from the optimization discipline.

Both are metaheuristics, which means they may be used to solve a wide range of optimization problems.

Ants and their use of pheromones to locate the shortest pathways inspired ACO.

The optimization problem must be represented as a graph.

A swarm of virtual ants travels from node to node, each choosing its next edge probabilistically based on how many other ants have used that edge before (pheromone, implementing positive feedback) and a heuristic parameter such as edge length (greedy search).

Evaporation of pheromones balances the exploration-exploitation trade-off (negative feedback).

The traveling salesman problem, vehicle routing, and network routing are all examples of ACO applications.
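A minimal ACO sketch for the symmetric traveling salesman problem might look as follows; the parameter values and the pheromone-update scheme below are one common textbook variant, offered purely as illustration:

```python
import numpy as np

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal Ant Colony Optimization for the symmetric TSP."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                  # pheromone on each edge
    eta = 1.0 / (dist + np.eye(n))         # heuristic: prefer short edges
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False         # never revisit a city
                p = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=p / p.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1 - rho)                   # evaporation (negative feedback)
        for tour, length in tours:         # deposit (positive feedback)
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += 1.0 / length
                tau[b, a] += 1.0 / length
    return best_tour, best_len
```

The two feedback loops from the text map directly onto the last lines: evaporation damps old trails, while deposits proportional to tour quality amplify good ones.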

Flocking is a source of inspiration for PSO.

Agents navigate the search space using velocity vectors that are influenced by the globally and locally best-known solutions (positive feedback), the agent's previous trajectory (inertia), and a random component.
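A compact, illustrative PSO implementation of this velocity update might look as follows; the inertia and attraction coefficients are conventional defaults, not prescribed values:

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # positions
    v = np.zeros((n_particles, dim))               # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()           # global best-known solution
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

# Minimize a simple convex function; the swarm converges near [1, 1, 1].
best, val = pso(lambda p: ((p - 1.0) ** 2).sum(), dim=3)
```

The random factors `r1` and `r2` supply the stochastic component, while the shared global best is the positive-feedback channel through which particles influence one another.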

While both ACO and PSO conceptually function in a completely distributed manner, they do not need parallel computing to be deployed.

They may, however, be parallelized with ease.

Swarm robotics is the application of swarm intelligence to embodied systems, while ACO and PSO are software-based methods.

Swarm robotics applies the concept of self-organizing systems based on local information to multi-robot systems with a high degree of resilience and scalability.

Following the example of social insects, the goal is to make each individual robot relatively simple in comparison to the task complexity, while still allowing the robots to collaborate to solve complicated problems.

Since it acts only on local information, a swarm robot can communicate only with nearby swarm robots.

Given a fixed swarm density (i.e., a constant number of robots per area), the applied control algorithms are designed to allow maximum scalability.

The same control methods should perform effectively regardless of system size, whether the swarm is grown or shrunk by adding or removing robots.

A super-linear performance improvement is often observed, meaning that doubling the size of the swarm improves the swarm's performance by more than a factor of two.

As a result, each robot is more productive than previously.

Swarm robotics systems have been demonstrated to be effective for a wide range of activities, including aggregation and dispersion behaviors, as well as more complicated tasks like item sorting, foraging, collective transport, and collective decision-making.

Rubenstein et al. (2014) conducted the biggest scientific experiment using swarm robots to date, using 1024 miniature mobile robots to mimic self-assembly behavior by arranging the robots in predefined designs.

The majority of the tests were conducted in the lab, but new research has taken swarm robots to the field.

Duarte et al. (2016), for example, built a swarm of autonomous surface watercraft that cruise the ocean together.

Modeling the relationship between individual behavior and swarm behavior, creating advanced design principles, and deriving assurances of system attributes are all major issues in swarm intelligence.

The micro-macro issue is defined as the challenge of identifying the ensuing swarm behavior based on a given individual behavior and vice versa.

It has proven to be a difficult challenge that manifests itself both in mathematical modeling and, as an engineering difficulty, in the robot controller design process.

The creation of complex tactics to design swarm behavior is not only crucial to swarm intelligence research, but it has also proved to be very difficult.

Similarly, due to the combinatorial explosion of action-to-agent assignments, multi-agent learning and evolutionary swarm robotics (i.e., application of evolutionary computation techniques to swarm robotics) do not scale well with task complexity.

Despite the benefits of robustness and scalability, obtaining strong guarantees for swarm intelligence systems is challenging.

In general, the availability and reliability of swarm systems can only be assessed experimentally. 

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

AI and Embodiment.

Further Reading:

Bonabeau, Eric, Marco Dorigo, and Guy Theraulaz. 1999. Swarm Intelligence: From Natural to Artificial System. New York: Oxford University Press.

Duarte, Miguel, Vasco Costa, Jorge Gomes, Tiago Rodrigues, Fernando Silva, Sancho Moura Oliveira, Anders Lyhne Christensen. 2016. “Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.” PloS One 11, no. 3: e0151834.

Hamann, Heiko. 2018. Swarm Robotics: A Formal Approach. New York: Springer.

Kitano, Hiroaki, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, Eiichi Osawa, Hitoshi Matsubara. 1997. “RoboCup: A Challenge Problem for AI.” AI Magazine 18, no. 1: 73–85.

Liang, Wenshuang, Zhuorong Li, Hongyang Zhang, Shenling Wang, Rongfang Bie. 2015. “Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies, Challenges, and Trends.” International Journal of Distributed Sensor Networks 11, no. 8: 1–11.

Reynolds, Craig W. 1987. “Flocks, Herds, and Schools: A Distributed Behavioral Model.” Computer Graphics 21, no. 4 (July): 25–34.

Rubenstein, Michael, Alejandro Cornejo, and Radhika Nagpal. 2014. “Programmable Self-Assembly in a Thousand-Robot Swarm.” Science 345, no. 6198: 795–99.

Artificial Intelligence - What Is Immortality in the Digital Age?

The act of putting a human's memories, knowledge, and/or personality into a long-lasting digital memory storage device or robot is known as digital immortality.

Human intelligence is therefore displaced by artificial intelligence that resembles the mental pathways or imprint of the brain in certain respects.

The National Academy of Engineering has identified reverse-engineering the brain as one of its grand challenges: attaining substrate independence, that is, copying the thinking and feeling mind and reproducing it on a range of physical or virtual media.

Whole Brain Emulation (also known as mind uploading) is a theoretical science that assumes the mind is a dynamic process independent of the physical biology of the brain and its unique sets or patterns of atoms.

Instead, the mind is a collection of information-processing functions that can be computed.

Whole Brain Emulation is presently assumed to be based on the neural networking discipline of computer science, which has as its own ambitious objective the programming of an operating system modeled after the human brain.

In artificial intelligence research, artificial neural networks (ANNs) are statistical models inspired by biological neural networks.

Through weighted connections, as well as backpropagation and parameter adjustment in their algorithms and rules, ANNs can process information in a nonlinear way.
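As a toy illustration of weighted connections, nonlinearity, and backpropagation (unrelated to brain emulation itself), a two-layer network can learn XOR, a function no linear model can represent; all sizes and rates below are arbitrary choices:

```python
import numpy as np

def train_xor(epochs=20000, lr=1.0, hidden=8, seed=0):
    """Train a 2-hidden-1 sigmoid network on XOR with plain backpropagation."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1, b1 = rng.normal(0, 1, (2, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 1, (hidden, 1)), np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)               # hidden layer (nonlinear)
        out = sig(h @ W2 + b2)             # output layer
        # backpropagation: chain rule applied to the squared-error loss
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(0)
    return sig(sig(X @ W1 + b1) @ W2 + b2)
```

After training, the network's outputs approach [0, 1, 1, 0]: the nonlinear hidden layer is what lets the weighted connections represent a function that no single linear threshold can.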

Through his online "Mind Uploading Home Page," Joe Strout, a computational neurobiology enthusiast at the Salk Institute, facilitated debate of full brain emulation in the 1990s.

Strout argued for the material origins of consciousness, claiming that evidence from damage to real people's brains points to neuronal, connectionist, and chemical origins.

Strout shared timelines of previous and contemporary technical advancements as well as suggestions for future uploading techniques through his website.

Mind uploading proponents believe that one of two methods will eventually be used: (1) gradual copy-and-transfer of neurons by scanning the brain and simulating its underlying information states, or (2) deliberate replacement of natural neurons with more durable artificial mechanical devices or manufactured biological products.

Strout gathered information on a variety of theoretical ways for achieving the objective of mind uploading.

One is a microtome method, which involves slicing a live brain into tiny slices and scanning it with a sophisticated electron microscope.

The brain is then reconstructed in a synthetic substrate using the picture data.

Nanoreplacement involves injecting small devices into the brain to monitor the input and output of neurons.

When these minuscule robots have a complete understanding of all biological interactions, they will eventually kill the neurons and replace them.

A robot with billions of appendages that delve deep into every section of the brain, as envisioned by Carnegie Mellon University roboticist Hans Moravec, is used in a variation of this process.

In this approach, the robot creates a virtual model of every portion and function of the brain, gradually replacing it.

Everything that the physical brain used to be is eventually replaced by a simulation.

In copy-and-transfer whole brain emulation, scanning or mapping neurons is commonly considered harmful.

The living brain is plasticized or frozen before being divided into sections, scanned, and simulated on a computational medium.

Philosophically, the technique creates a mental clone of a person, not the person who agrees to participate in the experiment.

Only a duplicate of that individual's personal identity survives the duplicating experiment; the original person dies.

Because, as philosopher John Locke reasoned, someone who recalls thinking about something in the past is the same person as the person who performed the thinking in the first place, the copy may be thought of as the genuine person.

Alternatively, it's possible that the experiment may turn the original and copy into completely different persons, or that they will soon diverge from one another through time and experience as a result of their lack of shared history beyond the experiment.

There have been many nondestructive approaches proposed as alternatives to damaging the brain during the copy-and-transfer process.

It is hypothesized that sophisticated types of gamma-ray holography, x-ray holography, magnetic resonance imaging (MRI), biphoton interferometry, or correlation mapping using probes might be used to reconstruct function.

With 3D reconstructions of atomic-level detail, the present limit of available technology, in the form of electron microscope tomography, has reached the sub-nanometer scale.

The majority of the remaining challenges are related to the geometry of tissue specimens and tomographic equipment's so-called tilt-range restrictions.

Advanced kinds of image recognition, as well as neurocomputer manufacturing to recreate scans as information-processing components, are in the works.

Professor of Electrical and Computer Engineering Alice Parker leads the BioRC Biomimetic Real-Time Cortex Project at the University of Southern California, which focuses on reverse-engineering the brain.

Parker is now building and producing a memory and carbon nanotube brain nanocircuit for a future synthetic cortex based on statistical predictions with nanotechnology professor Chongwu Zhou and her students.

Her neuromorphic circuits are designed to mimic the complexities of human neural computations, including glial cell connections (these are nonneuronal cells that form myelin, control homeostasis, and protect and support neurons).

Members of the BioRC Project are developing systems that scale to the size of human brains.

Parker is attempting to include dendritic plasticity into these systems, which will allow them to adapt and expand as they learn.

The approach traces its roots to Carver Mead, a Caltech electrical engineer who has been working on electronic models of human neurological and biological components since the 1980s.

The Terasem Movement, which began in 2002, aims to educate and urge the public to embrace technical advancements that advance the science of mind uploading and integrate science, religion, and philosophy.

The Terasem Movement, the Terasem Movement Foundation, and the Terasem Movement Transreligion are all incorporated entities that operate together.

Martine Rothblatt and Bina Aspen Rothblatt, serial entrepreneurs, founded the group.

The Rothblatts are inspired by the religion of Earthseed, which may be found in Octavia Butler's 1993 novel Parable of the Sower.

"Life is intentional, death is voluntary, God is technology, and love is fundamental," according to Rothblatt's trans-religious ideas (Roy 2014).

Terasem's CyBeRev (Cybernetic Beingness Revival) project collects all available data about a person's life—their personal history, recorded memories, photographs, and so on—and stores it in a separate data file in the hopes that their personality and consciousness can be pieced together and reanimated one day by advanced software.

The Terasem Foundation-sponsored Lifenaut research retains mindfiles with biographical information on individuals for free and keeps track of corresponding DNA samples (biofiles).

Bina48, a social robot created by the foundation, demonstrates how a person's consciousness may one day be transplanted into a lifelike android.

Numenta, an artificial intelligence firm based in Silicon Valley, is aiming to reverse-engineer the human neocortex.

Jeff Hawkins (creator of the portable PalmPilot personal digital assistant), Donna Dubinsky, and Dileep George are the company's founders.

Numenta's idea of the neocortex is based on Hawkins' and Sandra Blakeslee's theory of hierarchical temporal memory, which is outlined in their book On Intelligence (2004).

Time-based learning algorithms, which are capable of storing and recalling subtle patterns in how data change over time, are at the heart of Numenta's emulation technology.

The business created Grok, a commercial tool that detects anomalies in computer servers.

Other applications, such as detecting anomalies in stock market trading or abnormalities in human behavior, have been provided by the business.
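Numenta's hierarchical temporal memory is far more sophisticated, but the underlying principle of temporal anomaly detection (predict the next value, flag large prediction errors) can be sketched with a deliberately naive rolling-mean predictor; everything below is illustrative, not Numenta's algorithm:

```python
import numpy as np

def anomaly_scores(series, window=10):
    """Score each point by its deviation from a rolling-mean prediction.

    Illustrative only: HTM systems learn sparse temporal patterns, but the
    principle is the same — predict the next value and flag points whose
    prediction error is unusually large relative to recent variability.
    """
    series = np.asarray(series, float)
    scores = np.zeros(len(series))
    for t in range(window, len(series)):
        recent = series[t - window:t]
        pred = recent.mean()                       # naive next-step prediction
        scale = recent.std() + 1e-9                # recent variability
        scores[t] = abs(series[t] - pred) / scale  # normalized error
    return scores

# A smooth signal with one injected spike: the spike dominates the scores.
signal = np.sin(np.linspace(0, 8 * np.pi, 200))
signal[150] += 5.0
print(anomaly_scores(signal).argmax())
```

Swapping the rolling mean for a learned sequence model is what separates this toy from a production anomaly detector; the scoring logic stays the same.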

Carboncopies is a non-profit that funds research and cooperation to capture and preserve unique configurations of neurons and synapses carrying human memories.

Computational modeling, neuromorphic hardware, brain imaging, nanotechnology, and philosophy of mind are all areas where the organization supports research.

Randal Koene, a computational neuroscientist educated at McGill University and head scientist at neuroprosthetic company Kernel, is the organization's creator.

Dmitry Itskov, a Russian new media millionaire, donated early funding for Carboncopies.

Itskov is also the founder of the 2045 Initiative, a non-profit organization dedicated to extreme life extension.

The purpose of the 2045 Initiative is to develop high-tech methods for transferring personalities into an "advanced nonbiological carrier." Koene and Itskov also organize Global Future 2045, a meeting aimed at developing "a new evolutionary strategy for mankind."

Proponents of digital immortality see a wide range of practical results as a result of their efforts.

For example, in the case of death by accident or natural causes, a saved backup mind may be used to reawaken into a new body.

(It's reasonable to assume that elderly minds would seek out new bodies long before aging becomes apparent.) This is also the premise of Arthur C. Clarke's science fiction novel The City and the Stars (1956), which influenced Koene's decision to pursue a career in science at the age of thirteen.

Alternatively, mankind as a whole may be able to lessen the danger of global catastrophe by uploading their thoughts to virtual reality.

Civilization might be saved on a high-tech hard drive buried deep into the planet's core, safe from hostile extraterrestrials and incredibly strong natural gamma ray bursts.

Another potential benefit is the potential for life extension over lengthy periods of interstellar travel.

For extended travels throughout space, artificial brains might be implanted into metal bodies.

This is a notion that Clarke foreshadowed in the last pages of his science fiction classic Childhood's End (1953).

It's also the response offered by Manfred Clynes and Nathan Kline in their 1960 Astronautics article "Cyborgs and Space," which includes the first mention of astronauts with physical capacities that extend beyond conventional limitations (zero gravity, space vacuum, cosmic radiation) thanks to mechanical assistance.

Under real mind-uploading circumstances, it may be possible to simply encode the human mind and transmit it as a signal to a nearby exoplanet that is the best candidate for the discovery of alien life.

The hazards to humans are negligible in each situation when compared to the present threats to astronauts, which include exploding rockets, high-speed impacts with micrometeorites, and faulty suits and oxygen tanks.

Another potential benefit of digital immortality is real restorative justice and rehabilitation through criminal mind retraining.

Or, alternatively, mind uploading might enable for penalties to be administered well beyond the normal life spans of those who have committed heinous crimes.

Digital immortality has far-reaching social, philosophical, and legal ramifications.

The concept of digital immortality has long been a hallmark of science fiction.

The short story "The Tunnel Under the World" (1955) by Frederik Pohl is a widely reprinted tale about workers killed in a chemical plant explosion, only to be rebuilt as miniature robots and subjected to advertising campaigns and jingles over the course of a long, Truman Show-like repeating day.

The Silicon Man (1991) by Charles Platt relates the tale of an FBI agent who finds a hidden operation named LifeScan.

The project, which is headed by an aged millionaire and a mutinous crew of government experts, has found a technique to transfer human thought patterns to a computer dubbed MAPHIS (Memory Array and Processors for Human Intelligence Storage).

MAPHIS is capable of delivering any standard stimuli, including pseudomorphs, which are simulations of other persons.

The Autoverse is introduced in Greg Egan's hard science fiction Permutation City (1994), which mimics complex miniature biospheres and virtual worlds populated by artificial living forms.

Egan refers to human consciousnesses scanned into the Autoverse as copies.

The story is inspired by the cellular automata of John Conway's Game of Life, quantum ontology (the link between the quantum universe and human perceptions of reality), and what Egan refers to as dust theory.

The premise that physics and mathematics are the same, and that individuals residing in any mathematical, physical, and spacetime systems (and all are feasible) are essentially data, processes, and interactions, is at the core of dust theory.

This claim is similar to MIT physicist Max Tegmark's Theory of Everything, which states that "all structures that exist mathematically exist also physically, by which we mean that in those complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically 'real' world" (Tegmark 1998, 1).

Hans Moravec, a roboticist at Carnegie Mellon University, makes similar assertions in his article "Simulation, Consciousness, Existence" (1999).

Tron (1982), Freejack (1992), and The 6th Day (2000) are examples of mind uploading and digital immortality in movies.

Kenneth D. Miller, a theoretical neurologist at Columbia University, is a notable skeptic.

While rebuilding an active, functional mind may be achievable, connectomics researchers (those working on a wiring schematic of the whole brain and nervous system) remain millennia away from finishing their job, according to Miller.

And, he claims, connectomics is just concerned with the first layer of brain activities that must be comprehended in order to replicate the complexity of the human brain.

Others have wondered what happens to personhood in situations where individuals are no longer constrained as physical organisms.

Is identity just a series of connections between neurons in the brain? What will happen to markets and economic forces? Is a body required for immortality? The Age of Em: Work, Love, and Life When Robots Rule the Earth (2016), a nonfiction book by George Mason University professor Robin Hanson, provides an economic and social perspective on digital immortality.

Hanson's hypothetical ems are scanned emulations of genuine humans who exist in both virtual reality environments and robot bodies.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Technological Singularity.

Further Reading:

Clynes, Manfred E., and Nathan S. Kline. 1960. “Cyborgs and Space.” Astronautics 14, no. 9 (September): 26–27, 74–76.

Farnell, Ross. 2000. “Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan’s ‘Permutation City.’” Science Fiction Studies 27, no. 1: 69–91.

Global Future 2045. http://gf2045.com/.

Hanson, Robin. 2016. The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford, UK: Oxford University Press.

Miller, Kenneth D. 2015. “Will You Ever Be Able to Upload Your Brain?” New York Times, October 10, 2015. https://www.nytimes.com/2015/10/11/opinion/sunday/will-you-ever-be-able-to-upload-your-brain.html.

Moravec, Hans. 1999. “Simulation, Consciousness, Existence.” Intercommunication 28 (Spring): 98–112.

Roy, Jessica. 2014. “The Rapture of the Nerds.” Time, April 17, 2014. https://time.com/66536/terasem-trascendence-religion-technology/.

Tegmark, Max. 1998. “Is ‘the Theory of Everything’ Merely the Ultimate Ensemble Theory?” Annals of Physics 270, no. 1 (November): 1–51.

2045 Initiative. http://2045.com/.

What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...