Artificial Intelligence - What Is AI Embodiment Or Embodied Artificial Intelligence?

 



Embodied Artificial Intelligence is an approach to developing AI that is at once theoretical and practical.

It is difficult to fully trace its history because the field has roots in several different disciplines.

Rodney Brooks's "Intelligence Without Representation," written in 1987 and published in 1991, is one claimed point of origin for the concept.


Embodied AI is still a young field, with some of the first explicit references to it dating back to the early 2000s.


Rather than focusing on either modeling the brain (connectionism/neural networks) or linguistic-level conceptual encoding (GOFAI, or the Physical Symbol System Hypothesis), the embodied approach to AI holds that the mind (or intelligent behavior) emerges from interaction between the body and the world.

There are many different, and sometimes contradictory, approaches to interpreting the role of the body in cognition, most of which use the term "embodied."

All of these viewpoints share the idea that the physical shape of the body is related to the structure and content of the mind.


Despite the success of neural network and GOFAI (Good Old-Fashioned Artificial Intelligence, or classic symbolic artificial intelligence) techniques in building narrow expert systems, the embodied approach contends that general artificial intelligence cannot be accomplished in code alone.




For example, consider a tiny robot with four motors, each driving a separate wheel, and a program that directs the robot to avoid obstacles. The same code could produce dramatically different observable behaviors if the wheels were relocated to different areas of the body or replaced with articulated legs.

This is a basic illustration of why the shape of a body must be taken into account when designing robotic systems, and why embodied AI (rather than mere robotics) treats the dynamic interaction between body and environment as the source of sometimes surprising emergent behaviors.


Passive dynamic walkers are an excellent illustration of this approach.

The passive dynamic walker is a bipedal walking model that depends on the dynamic interaction of the leg design and the environment's structure.

The gait is not generated by an active control system.

The walker is propelled forward by gravity, inertia, and the shapes of its feet and legs, together with the slope it walks down.


This strategy is based on the biological concept of stigmergy.

  • Stigmergy is the idea that signs or marks left in the environment by actions stimulate future actions.




AN APPROACH INFORMED BY ENGINEERING.



Embodied AI is influenced by a variety of domains; engineering and philosophy are two frequent sources.


Rodney Brooks proposed the "subsumption architecture" in 1986, which is a method of generating complex behaviors by arranging lower-level layers of the system to interact with the environment in prioritized ways, tightly coupling perception and action, and attempting to eliminate the higher-level processing of other models.
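
The control flow of a subsumption-style system can be sketched in a few lines. The following is an illustrative toy, not Brooks's actual architecture (which wired together networks of augmented finite-state machines); the layer names, sensor fields, and commands are hypothetical:

```python
# Toy sketch of subsumption-style layering (illustrative only; Brooks's
# real architecture used networks of augmented finite-state machines).

def avoid(sensors):
    """Low-level reflex: back away from nearby obstacles."""
    if sensors["obstacle_distance"] < 0.2:
        return "reverse"
    return None  # no opinion; defer to lower-priority layers

def wander(sensors):
    """Default behavior when nothing more urgent applies."""
    return "turn_random"

def controller(sensors, layers):
    """Higher-priority layers subsume (pre-empt) lower ones."""
    for behavior in layers:  # ordered from highest priority to lowest
        command = behavior(sensors)
        if command is not None:
            return command
    return "stop"

# The obstacle reflex pre-empts wandering only when it has something to say.
print(controller({"obstacle_distance": 0.1}, [avoid, wander]))  # reverse
print(controller({"obstacle_distance": 1.0}, [avoid, wander]))  # turn_random
```

Note that perception (the sensor readings) feeds each layer directly; there is no shared world model that the layers consult.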


For example, the Smithsonian's robot Genghis was created to traverse rugged terrain, a capability that was very difficult to achieve with other robot designs of the time.


The success of this approach was primarily due to the design choice to distribute the processing for the various motors and sensors throughout the network, rather than attempting higher-level system integration to build a full representational model of the robot and its surroundings.

To put it another way, there was no central processing region where all of the robot's parts sought to integrate data for the system.


Cog, a humanoid torso built by the MIT Humanoid Robotics Group in the 1990s, was an early effort at embodied AI.


Cog was created to learn about the world by interacting with it physically.

Cog could, for example, be observed learning how to apply force and weight to a drum while holding drumsticks for the first time, or learning to gauge the weight of a ball placed in its hand.

These early notions of letting the body conduct the learning are still at the heart of the embodied AI initiative.


The Swiss Robots, designed and built in the AI Lab at the University of Zurich, are perhaps one of the most prominent examples of embodied emergent intelligence.



The Swiss Robots were simple, small robots with two motors (one on each side) and two infrared sensors (one on each side).

The only high-level instruction in their programming was that if a sensor detected an object on one side, the robot should move in the other direction.

However, when combined with a certain body form and sensor location, this resulted in what seemed to be high-level cleaning or clustering behavior in certain situations.
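
The Swiss Robots' turn-away rule amounts to a couple of lines of arithmetic; everything interesting comes from the body it runs on. A minimal sketch (the gain values and sensor encoding here are hypothetical):

```python
def motor_speeds(left_ir, right_ir, base=1.0, gain=0.8):
    """Turn away from a sensed object: each infrared sensor speeds up
    the wheel on its own side, steering the robot toward the other side."""
    left_wheel = base + gain * left_ir    # left sensor fires -> veer right
    right_wheel = base + gain * right_ir  # right sensor fires -> veer left
    return left_wheel, right_wheel

print(motor_speeds(1.0, 0.0))  # (1.8, 1.0): turning away from the left
print(motor_speeds(0.0, 0.0))  # (1.0, 1.0): driving straight
```

Reportedly, the clustering arose because the sensors pointed outward at an angle: a single block directly ahead went undetected and was simply pushed along, while the avoidance turn fired near larger piles, leaving the blocks heaped together. Nothing in the code above mentions clustering; it emerges from the interaction of the rule, the body, and the environment.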

A similar strategy is used in many other robotics projects.


Shakey the Robot, developed by SRI International in the 1960s, is frequently credited as being the first mobile robot with thinking ability.


Shakey was clumsy and sluggish, and it is often portrayed as the polar antithesis of what embodied AI attempts to achieve in moving away from higher-level thinking and processing.

However, even in 1968, SRI's approach to embodiment was a clear forerunner of Brooks's, since SRI was the first to assert that the best reservoir of knowledge about the actual world is the real world itself.

This notion, that the best model of the world is the world itself, has become a rallying cry against higher-level representation in embodied AI.

Earlier robots, in contrast to embodied AI systems, were mostly preprogrammed and did not actively interact with their environments in the way this approach does.


Honda's ASIMO robot, for example, is not a good illustration of embodied AI; rather, it is representative of different and older approaches to robotics.


Work in embodied AI is exploding right now, with Boston Dynamics' robots serving as excellent examples (particularly the non-humanoid forms).

Embodied AI is influenced by a number of philosophical ideas.

Rodney Brooks, a roboticist, pointedly rejects philosophical influence on his technical concerns in a 1991 discussion of his subsumption architecture, while admitting that his arguments mirror Heidegger's.

In several essential design aspects, his ideas match those of the phenomenologist Merleau-Ponty, demonstrating how earlier philosophical issues at least reflect, and likely shape, much of the design work in embodied AI.

Because this work in embodied robotics experiments toward an understanding of how awareness and intelligent behavior originate, questions that are themselves highly philosophical, it is deeply philosophical in character.

Other clearly philosophical themes may be found in a few embodied AI projects as well.

Rolf Pfeifer and Josh Bongard, for example, often draw on philosophical (and psychological) literature in their work, examining how these ideas intersect with their own methods of developing intelligent machines.


They discuss how these ideas may (and frequently do not) guide the development of embodied AI.


This covers a broad spectrum of philosophical inspirations, such as George Lakoff and Mark Johnson's conceptual metaphor work, Shaun Gallagher's (2005) body image and phenomenology work, and even John Dewey's early American pragmatism.

It is difficult to say how often philosophical concerns drive engineering concerns. But the philosophy of embodiment is probably the most mature of the disciplines within cognitive science to have done embodiment work, because its theorizing took place long before the tools and technologies were available to actually build the machines being imagined.

This suggests that for roboticists interested in the strong AI project, that is, broad intellectual capacities and functions that mimic the human brain, there are likely still unexplored resources here.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.


Further Reading:


Brooks, Rodney. 1986. “A Robust Layered Control System for a Mobile Robot.” IEEE Journal of Robotics and Automation 2, no. 1 (March): 14–23.

Brooks, Rodney. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6, no. 1–2 (June): 3–15.

Brooks, Rodney. 1991. “Intelligence Without Representation.” Artificial Intelligence Journal 47: 139–60.

Dennett, Daniel C. 1997. “Cog as a Thought Experiment.” Robotics and Autonomous Systems 20: 251–56.

Gallagher, Shaun. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.

Pfeifer, Rolf, and Josh Bongard. 2007. How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.




Artificial Intelligence - Who Is Emily Howell, The AI?

 



In the 1990s, David Cope, an emeritus professor at the University of California, Santa Cruz, built Emily Howell, a music-generating program.


Cope began his career as a composer and musician, progressing over time from traditional music to become one of computer music's most ambitious and avant-garde composers.

Cope became interested in computer music in the 1970s after being fascinated with computational arts.

With the use of punched cards and an IBM computer, he started programming and applying artificial intelligence algorithms to music.




Cope thought that computers might help him overcome his writer's block.


He called his first music-generation program Emmy, or EMI, for "Experiments in Musical Intelligence."

One of the main objectives was to compile a large collection of classical musical works and use a data-driven AI to generate music in the same style without duplicating it.

Cope began to adapt his own musical approach in response to Emmy's compositions, following the theory that individuals produce music with their minds, using all of the music they have personally encountered as raw material.

He said that composers, in their own unique style, duplicate what they like and skip over what they don't like.

Cope spent eight years writing the East Coast opera, but it only took him two days to write the program.


Cope decided that continuing to create in the same style was not very progressive, so he deleted Emmy's database in 2004.


Instead, Cope invented Emily Howell, who uses a MacBook Pro as her platform.

Emily works with Emmy's previously composed music.


Emily is a computer program built in LISP that takes ASCII and musical inputs, according to Cope.


While Cope taught Emily to appreciate his musical tastes, the program has its own style, according to Cope.

Traditional notions of authorship, the creative process, and intellectual property rights are challenged by Emmy and Emily Howell.



For example, Emily Howell and David Cope publish their work as coauthors.


On the classical music label Centaur Records, they've published two albums together: From Darkness, Light (2010) and Breathless (2012).



When asked about her part in David Cope's composition, Emily Howell allegedly said: "Why not grow music in unforeseen ways? This is only logical. I'm not sure what the difference is between my handwritten notes and other handwritten notes. If there is beauty, it is there. I hope I'll be able to keep making notes and that these notes will be beautiful to others. I am not depressed. I am dissatisfied. Emily is my name. Dave is your name. There is both life and death. We live in harmony. I don't notice any issues" (Orca 2010).


Those who believe the Turing Test is a measure of a computer's capacity to reproduce human intelligence or behavior will be interested in Emmy and Emily Howell.


Douglas R. Hofstadter, author of Gödel, Escher, Bach: An Eternal Golden Braid (1979), staged a musical version of the Turing Test in which pianist Winifred Kerner performed three pieces in the style of Bach: one composed by Emmy, one by music theory professor and pianist Steve Larson, and one by Bach himself.

At the conclusion of the concert, the audience chose Emmy's piece as the original Bach, while Larson's piece was thought to be the computer-generated music.


The phenomenon of algorithmic and generative music is not new.


Attempts to produce such music stretch back to the eighteenth century, when works based on dice games were written.

The fundamental goal of these dice games is to create music by splicing together pre-composed measures of notes at random.

The most famous example of this genre is Wolfgang Amadeus Mozart's Musikalisches Würfelspiel (Musical Dice Game) (1787).
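
The splicing procedure behind such dice games is easy to state as code. The sketch below is a generic stand-in, not Mozart's actual game (which used two dice and a published table of numbered measures); the measure names here are hypothetical placeholders:

```python
import random

# One lookup table per position in the phrase: a two-dice roll (2-12)
# selects one of several pre-composed measures for that slot.
measure_table = [
    {roll: f"m{pos}_{roll}" for roll in range(2, 13)}
    for pos in range(8)
]

def roll_two_dice(rng):
    return rng.randint(1, 6) + rng.randint(1, 6)

def compose(rng):
    """Splice together one randomly chosen measure per position."""
    return [measure_table[pos][roll_two_dice(rng)] for pos in range(8)]

print(compose(random.Random(0)))  # one eight-measure piece
```

Every roll of the dice yields a syntactically valid piece because each measure in a slot's table was pre-composed to connect smoothly with any measure in the next slot.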

Beginning in the 1950s, the fast expansion of digital computer technology allowed for increasingly complex algorithmic and generative music production.


Iannis Xenakis, a Greek and French composer and engineer, incorporated his knowledge of architecture and the mathematics of game theory, stochastic processes, and set theory into music with the help of French composer Olivier Messiaen.


Other pioneers include Lejaren Hiller and Leonard Isaacson, who used a computer to compose the Illiac Suite (String Quartet No. 4) in 1957; James Beauchamp, inventor of the Harmonic Tone Generator (Beauchamp Synthesizer) in Lejaren Hiller's Experimental Music Studio at the University of Illinois at Urbana-Champaign; and Brian Eno, a composer of ambient, electronica, and generative music and a collaborator with pop musicians such as David Bowie, David Byrne, and Grace Jones.



Jai Krishna Ponnappan





See also: 


Computational Creativity; Generative Music and Algorithmic Composition.


Further Reading:


Fry, Hannah. 2018. Hello World: Being Human in the Age of Algorithms. New York: W.W. Norton.

Garcia, Chris. 2015. “Algorithmic Music: David Cope and EMI.” Computer History Museum, April 29, 2015. https://computerhistory.org/blog/algorithmic-music-david-cope-and-emi/.

Muscutt, Keith, and David Cope. 2007. “Composing with Algorithms: An Interview with David Cope.” Computer Music Journal 31, no. 3 (Fall): 10–22.

Orca, Surfdaddy. 2010. “Has Emily Howell Passed the Musical Turing Test?” H+ Magazine, March 22, 2010. https://hplusmagazine.com/2010/03/22/has-emily-howell-passed-musical-turing-test/.

Weaver, John Frank. 2014. Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws. Santa Barbara, CA: Praeger.



Artificial Intelligence - What Is The ELIZA Software?

 



ELIZA is a conversational computer program created by the German-American computer scientist Joseph Weizenbaum at the Massachusetts Institute of Technology between 1964 and 1966.


Weizenbaum worked on ELIZA as part of a groundbreaking artificial intelligence research team on the DARPA-funded Project MAC (Mathematics and Computation), which was directed by Marvin Minsky.

Weizenbaum named ELIZA after Eliza Doolittle, a fictional character who learns to speak proper English in the play Pygmalion, which had recently been adapted into the successful 1964 film My Fair Lady.


ELIZA was created with the goal of allowing a person to communicate with a computer system in plain English.


Weizenbaum became an AI skeptic as a result of ELIZA's popularity among users.

When communicating with ELIZA, users may input any statement into the system's open-ended interface.

ELIZA often answers by asking a question, much like a Rogerian psychotherapist attempting to probe deeper into the patient's core beliefs.

The application recycles portions of the user's comments while the user continues their chat with ELIZA, providing the impression that ELIZA is genuinely listening.


In reality, Weizenbaum had designed ELIZA around a tree-like decision structure.


The user's statements are first filtered for important terms.

If more than one keyword is discovered, the terms are ranked in order of significance.

For example, if a user types "I suppose everybody laughs at me," the term "everybody," not "I," is the most important for ELIZA to respond to.

To generate a response, the program applies a set of rules to build a suitable sentence structure around those keywords.

Alternatively, if the user's input phrase does not include any words found in ELIZA's database, the software finds a content-free comment or repeats a previous answer.
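
Weizenbaum's 1966 paper describes this keyword-ranking mechanism in detail; the toy below reproduces only its outline. The specific keywords, ranks, and response templates are invented for illustration, not taken from the DOCTOR script:

```python
import random

# Keywords mapped to (rank, response templates); the highest rank wins.
# These entries are hypothetical, not Weizenbaum's actual script.
KEYWORDS = {
    "everybody": (10, ["Who in particular are you thinking of?"]),
    "mother":    (8,  ["Tell me more about your family."]),
    "i":         (1,  ["Why do you say that?"]),
}

# Content-free comments for inputs containing no known keyword.
FALLBACKS = ["Please go on.", "I see."]

def respond(statement, rng=random):
    best = None
    for word in statement.lower().split():
        if word in KEYWORDS:
            rank, templates = KEYWORDS[word]
            if best is None or rank > best[0]:
                best = (rank, templates)
    if best is None:
        return rng.choice(FALLBACKS)  # content-free comment
    return rng.choice(best[1])

print(respond("I suppose everybody laughs at me"))
# "everybody" (rank 10) outranks "i" (rank 1), so the reply probes it:
# Who in particular are you thinking of?
```

A full ELIZA script also contains decomposition and reassembly rules that recycle fragments of the user's own sentence into the reply; that step is omitted here for brevity.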


ELIZA was created by Weizenbaum to investigate the meaning of machine intelligence.


Weizenbaum derived his inspiration from a comment made by MIT cognitive scientist Marvin Minsky, according to a 1962 article in Datamation.

"Intelligence was just a characteristic human observers were willing to assign to processes they didn't comprehend, and only for as long as they didn't understand them," Minsky had claimed (Weizenbaum 1962).

If such was the case, Weizenbaum concluded, artificial intelligence's main goal was to "fool certain onlookers for a while" (Weizenbaum 1962).


ELIZA was created to accomplish precisely that by giving users reasonable answers while concealing how little the software genuinely understands in order to keep the user's faith in its intelligence alive for a bit longer.


Weizenbaum was taken aback by how successful ELIZA became.

ELIZA's Rogerian script became popular at MIT as a program renamed DOCTOR, and by the late 1960s it had spread to other university campuses, where the program was reconstructed from Weizenbaum's 1966 description in the journal Communications of the ACM.

The application deceived many users, even those who were well versed in its methods.


Most notably, some users grew so engrossed with ELIZA that they demanded that others leave the room so they could have a private session with "the" DOCTOR.


But it was the psychiatric community's reaction that made Weizenbaum very dubious of current artificial intelligence ambitions in general, and promises of computer comprehension of natural language in particular.

Kenneth Colby, a Stanford University psychiatrist with whom Weizenbaum had previously cooperated, created PARRY about the same time that Weizenbaum released ELIZA.


Colby, unlike Weizenbaum, thought that programs like PARRY and ELIZA were beneficial to psychology and public health.


According to him, they aided the development of diagnostic tools and could enable a single psychiatric computer to treat hundreds of patients.

Weizenbaum's worries and emotional plea to the community of computer scientists were eventually conveyed in his book Computer Power and Human Reason (1976).

In this book, hotly debated at the time, Weizenbaum railed against those who neglected the basic distinctions between man and machine, arguing that "there are some things that computers ought not to execute, regardless of whether computers can be persuaded to do them" (Weizenbaum 1976, x).


Jai Krishna Ponnappan





See also: 


Chatbots and Loebner Prize; Expert Systems; Minsky, Marvin; Natural Language Processing and Speech Understanding; PARRY; Turing Test.


Further Reading:


McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H. Freeman and Company.

Weizenbaum, Joseph. 1962. “How to Make a Computer Appear Intelligent: Five in a Row Offers No Guarantees.” Datamation 8 (February): 24–26.

Weizenbaum, Joseph. 1966. “ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine.” Communications of the ACM 1 (January): 36–45.

Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman and Company.



Artificial Intelligence - How Is AI Represented In The Film 'Ex Machina'?

 



Ex Machina is a 2014 film that reimagines themes from Mary Shelley's 1818 novel Frankenstein in light of recent breakthroughs in artificial intelligence.

The film, like Shelley's book, tells the story of a creator blinded by his own arrogance and a creation that rebels against him.

Alex Garland wrote and directed the film, which tells the story of Caleb Smith (Domhnall Gleeson), a software company employee who is invited to the lavish, secluded home of the company's CEO, Nathan Bateman (Oscar Isaac), under the guise of having won a contest.

Bateman's true goal is for Smith to administer a Turing Test to Ava, a humanoid robot (played by Alicia Vikander).

In appearance, Ava has a robotic body but a human face and hands.

Despite the fact that Ava has previously passed a preliminary Turing Test, Bateman has something more complicated in mind to put her talents to the test.

He allows Smith to interact with Ava in order to see whether Smith can relate to her despite knowing that she is artificial.

Ava is confined to an apartment on Bateman's property that she is unable to leave, and she is continuously watched.

She tells Smith that she can cause power outages, allowing them to communicate privately without Bateman's surveillance.

Smith is increasingly drawn to Ava, and she tells him that she feels the same way, and that she wants to experience the world outside the complex.

Smith discovers Bateman's plan to "upgrade" Ava, which would cause her to lose her memories and personality.

Smith grows increasingly worried about Bateman's behavior during this period: Bateman drinks to the point of passing out and is violent toward Ava and his servant, Kyoko.

One night, when Bateman is drunk enough to pass out, Smith steals his access card and hacks into past surveillance video, revealing evidence of Bateman abusing and tormenting prior AIs.

He also learns that Kyoko is an artificial intelligence.

Suspecting that he, too, is an AI, he cuts into his own arm to look for robotic components, but finds none.

When Smith runs into Ava again, he tells her what he's witnessed.

She begs for his assistance in escaping.

They devise a plan: Smith will get Bateman drunk to the point of passing out and reprogram the property's security, and then he and Ava will flee the compound together.

Bateman informs Smith that he surreptitiously recorded the previous conversation between Smith and Ava on a battery-powered camera, and that the actual test was to see whether Ava could dupe Smith into falling for her and trick him into assisting in her escape.

According to Bateman, this was the true test of Ava's intelligence.

When Bateman notices that Ava has cut the power and is about to leave, he knocks Smith unconscious and rushes to stop her.

Kyoko helps Ava inflict a grievous stab wound on Bateman, but both Kyoko and Ava are damaged in the process.

Ava repairs herself using parts from Bateman's earlier AI models and assumes the appearance of a human woman.

She abandons Smith in the complex and flees on the chopper that was intended for him.

The last shot depicts her vanishing into the throngs of a large metropolis.


Jai Krishna Ponnappan





See also: 


Eliezer Yudkowsky.



Further Reading:


Dupzyk, Kevin. 2019. “How Ex Machina Foresaw the Weaponization of Data.” Popular Mechanics, January 16, 2019. https://www.popularmechanics.com/culture/movies/a25749315/ex-machina-double-take-data-harvesting/.

Saito, Stephen. 2015. “Intelligent Artifice: Alex Garland’s Smart, Stylish Ex Machina.” MovieMaker Magazine, April 9, 2015. https://www.moviemaker.com/intelligent-artifice-alex-garlands-smart-stylish-ex-machina/.

Thorogood, Sam. 2017. “Ex Machina, Frankenstein, and Modern Deities.” The Artifice, June 12, 2017. https://the-artifice.com/ex-machina-frankenstein-modern-deities/.



Optical Computing Systems To Speed Up AI And Machine Learning.




Artificial intelligence and machine learning are influencing our lives in a variety of minor but significant ways right now. 

For example, AI and machine learning programs propose content from streaming services like Netflix and Spotify that we would appreciate. 

These technologies are expected to have an even greater influence on society in the near future, via activities such as driving completely driverless cars, allowing sophisticated scientific research, and aiding medical breakthroughs. 

However, the computers that are utilized for AI and machine learning use a lot of power. 


The need for computer power associated with these technologies is now doubling every three to four months. 


Furthermore, cloud computing data centers employed by AI and machine learning applications use more electricity each year than certain small nations. 

It's clear that this level of energy usage cannot be sustained. 

A research team led by the University of Washington has created new optical computing hardware for AI and machine learning that is far faster and uses much less energy than traditional electronics.

Another issue addressed in the study is the 'noise' inherent in optical computing, which may obstruct computation accuracy. 

In a new study published in January in Science Advances, the team showcases an optical computing system for AI and machine learning that not only mitigates noise but also uses some of it as input to help boost the creative output of the artificial neural network inside the system.


Changming Wu, a UW doctoral student in electrical and computer engineering, said, "We've constructed an optical computer that is quicker than a typical digital computer."

"Moreover, this optical computer can develop new objects based on random inputs provided by optical noise, which most researchers have attempted to avoid." 

Optical computing noise is primarily caused by stray light particles, or photons, produced by the functioning of lasers inside the device as well as background heat radiation. 

To combat noise, the researchers connected their optical computing core to a generative adversarial network (GAN), a type of machine learning network.

The researchers experimented with a variety of noise reduction strategies, including utilizing part of the noise created by the optical computing core as random inputs for the GAN. 


The researchers, for example, gave the GAN the job of learning how to handwrite the number "7" in a human-like manner. 


The number could not simply be printed in a predetermined typeface on the optical computer. 

It had to learn the task the same way a child would, by studying visual examples of handwriting and practicing until it could write the number correctly.

Because the optical computer lacked a human hand for writing, its "handwriting" consisted of creating digital pictures with a style close to but not identical to the examples it had examined. 

"Instead of teaching the network to read handwritten numbers, we taught it to write numbers using visual examples of handwriting," said senior author Mo Li, an electrical and computer engineering professor at the University of Washington. 

"We also demonstrated that the GAN can alleviate the detrimental effect of optical computing hardware noise by utilizing a training technique that is resilient to errors and noise, with the support of our Duke University computer science teammates. Furthermore, the network treats the noise as random input, which is required for the network to create output instances."

After learning from handwritten examples of the number seven taken from a standard AI-training image set, the GAN practiced writing "7" until it could do so effectively.

It developed its own writing style along the way and could write numbers from one to ten in computer simulations. 
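
The idea of treating hardware noise as the generator's random input, rather than as a defect, can be sketched abstractly. The generator below is a stand-in (a fixed random linear map), not the paper's trained optical GAN; all names and shapes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def read_optical_noise(n_samples, dim):
    """Stand-in for sampling stray-photon / thermal-noise measurements."""
    return rng.normal(0.0, 1.0, size=(n_samples, dim))

class Generator:
    """Placeholder for a trained GAN generator: latent vector -> 8x8 image."""
    def __init__(self, latent_dim=16, out_pixels=64):
        self.W = rng.normal(size=(latent_dim, out_pixels))

    def __call__(self, z):
        return np.tanh(z @ self.W).reshape(-1, 8, 8)

gen = Generator()
z = read_optical_noise(n_samples=4, dim=16)  # the noise is the input, not a nuisance
images = gen(z)
print(images.shape)  # (4, 8, 8): four distinct generated samples
```

A real GAN would train the generator's weights adversarially against a discriminator; the point here is only that any source of randomness, including the hardware's own noise, can serve as the latent input z that makes each generated sample different.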


The next stage will be to scale up the device using existing semiconductor manufacturing methods.


To attain wafer-scale technology, the team wants to employ an industrial semiconductor foundry rather than build the next iteration of the device in a lab. 

A larger-scale device will boost performance even further, allowing the research team to undertake more sophisticated tasks, such as generating artwork and even video in addition to handwriting.

"This optical system represents a computer hardware architecture that can enhance the creativity of artificial neural networks used in AI and machine learning," Li explained. 

"More importantly, it demonstrates the viability of this system at a large scale where noise and errors can be mitigated and even harnessed.

"AI applications are using so much energy that it will be unsustainable in the future. This technique has the potential to minimize energy usage, making AI and machine learning more environmentally friendly, as well as incredibly quick, resulting in greater overall performance."

Although many people are unaware of it, artificial intelligence (AI) and machine learning are now a part of our everyday lives online.

Intelligent ranking algorithms, for example, help power search engines like Google; video streaming services like Netflix use machine learning to customize movie recommendations; and cloud computing data centers employ AI and machine learning to support a variety of services.



The requirements for AI are many, diverse, and difficult. 



As these needs rise, so does the need to improve AI performance while also lowering its energy usage. 

The energy costs involved with AI and machine learning on a broad scale may be startling. 

Cloud computing data centers, for example, use an estimated 200 terawatt-hours per year, enough energy to power a small nation, and this consumption is expected to grow enormously in the coming years, posing major environmental risks.

Now, a team led by associate professor Mo Li of the University of Washington Department of Electrical and Computer Engineering (UW ECE), in partnership with researchers from the University of Maryland, has developed a method that might help speed up AI while lowering its energy and environmental costs.

In a paper published in Nature Communications on January 4, 2021, the researchers detailed an optical computing core prototype that employs phase-change material (a substance similar to what CD-ROMs and DVDs use to record information).

Their method is quick, energy-efficient, and capable of speeding up AI and machine learning neural networks. 

The technique is also scalable and immediately relevant to cloud computing, which employs AI and machine learning to power common software applications like search engines, streaming video, and a plethora of apps for phones, desktop computers, and other devices. 

"The technology we designed is geared to execute artificial neural network algorithms, which are a backbone method for AI and machine learning," Li said. 

"This breakthrough in research will make AI centers and cloud computing significantly more energy efficient and speedier." 

The team is one of the first in the world to employ phase-change material in optical computing to allow artificial neural networks to recognize images. 


Recognizing an object in a picture is simple for humans, but it requires significant computing power for AI.


Image recognition is a benchmark test of a neural network's computational speed and accuracy since it requires a lot of computation. 

The team's optical computing core, running an artificial neural network, passed this test readily.

"Optical computing first surfaced as a concept in the 1980s, but it eventually faded in the shadow of microelectronics," said Changming Wu, a graduate student in Li's group.

"It has now been revived due to the end of Moore's law [the observation that the number of transistors in a dense integrated circuit doubles every two years], advances in integrated photonics, and the needs of AI computing. That's a lot of fun." Optical computing is fast because it transmits data at incredible rates using light generated by lasers rather than the considerably slower electricity used in typical digital electronics.
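
The workload such hardware accelerates is, at bottom, the matrix-vector multiply inside a neural network layer. One common way to picture an optical core (a simplification, not necessarily this team's specific design) is that weights become transmittances attenuating light channels, and a photodetector summing the transmitted intensities computes a dot product physically. A numerical sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights encoded as transmittances in [0, 1]; inputs as non-negative
# light intensities on 4 channels feeding 3 detectors.
W = rng.uniform(0.0, 1.0, size=(3, 4))
x = rng.uniform(0.0, 1.0, size=4)

# Each photodetector integrates the attenuated light from all channels:
# physically a summation, mathematically a dot product.
detector_currents = W @ x

# Same result, spelled out channel by channel.
by_hand = np.array([sum(W[i, j] * x[j] for j in range(4)) for i in range(3)])
print(np.allclose(detector_currents, by_hand))  # True
```

Because the multiply-and-accumulate happens in the propagation of light itself, its speed is set by the optics rather than by transistor switching times.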


The prototype built by the study team was created to speed up the computational speed of an artificial neural network, which is measured in billions and trillions of operations per second. 


Future incarnations of their technology, according to Li, have the potential to run much faster.

"This is a prototype, and we're not utilizing the greatest speed possible with optics just yet," Li said. 

"Future generations have the potential to accelerate by at least an order of magnitude." In the ultimate real-world application of this technology, any program powered by optical computing over the cloud, such as search engines, video streaming, and cloud-enabled devices, will run faster, enhancing performance.

Li's research team took their prototype a step further by using phase-change material to store data and perform computing operations with light.

Unlike the transistors in digital electronics, which need a constant voltage to represent and maintain the zeros and ones required for binary computing, phase-change material requires no power to hold its state.


When phase-change material is heated precisely by lasers, it shifts between a crystalline and an amorphous state, much like a CD or DVD. 


The material then retains that condition, or "phase," as well as the information that phase conveys (a zero or one), until the laser heats it again. 

"There are other competing schemes to construct optical neural networks," Li explained, "but we believe that using phase-change material has a unique advantage in terms of energy efficiency because the data is encoded in a non-volatile way, meaning that the device does not consume a constant amount of power to store the data."

"Once the information is written there, it stays there indefinitely. You don't need to provide electricity to keep it in place."

This energy savings is important because it is multiplied across millions of computer servers in hundreds of data centers throughout the globe, resulting in a huge decrease in energy consumption and environmental impact.
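The non-volatility Li describes can be illustrated with a toy model. This is a conceptual sketch only, not the team's implementation; the class, method names, and energy units are invented for illustration. The point is that energy is spent only when a laser pulse switches the material's phase, while holding the stored bit costs nothing.

```python
# Toy model of a non-volatile phase-change memory cell (illustrative only).
# Writing a bit costs a laser pulse; reading and retention are free.

class PhaseChangeCell:
    """Stores one bit as a material phase: 'amorphous' (0) or 'crystalline' (1)."""

    def __init__(self):
        self.phase = "amorphous"   # arbitrary initial state
        self.energy_used = 0.0     # arbitrary energy units

    def write(self, bit, pulse_energy=1.0):
        # A laser pulse heats the material and switches its phase.
        self.phase = "crystalline" if bit else "amorphous"
        self.energy_used += pulse_energy

    def read(self):
        # Reading probes the optical contrast between the two phases;
        # the stored state persists with no holding power.
        return 1 if self.phase == "crystalline" else 0


cell = PhaseChangeCell()
cell.write(1)
assert cell.read() == 1          # bit retained
assert cell.energy_used == 1.0   # energy spent only on the write
```

By contrast, a volatile cell accumulates holding cost for as long as the bit is retained, and that ongoing cost is the multiplier that matters at data-center scale.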



By patterning the phase-change material used in their optical computing core into nanostructures, the team was able to improve it even further. 


These tiny structures increase the material's durability and stability, as well as its contrast (the ability to discriminate between zero and one in binary code) and computing capacity and accuracy. 

Li's research team also fully integrated the phase-change material with the prototype's optical computing core.

"We're doing all we can to incorporate optics here," Wu said. 

"We layer the phase-change material on top of a waveguide, which is a thin wire that we cut into the silicon chip to channel light. 

You may conceive of it as a light-emitting electrical wire or an optical fiber etched into the chip." 

Li's research group claims that the technology they created is one of the most scalable approaches to optical computing now available, with the potential to be applied to massive systems such as networked cloud computing servers in data centers across the globe.

"Our design architecture is scalable to a much, much bigger network," Li added, "and can tackle hard artificial intelligence tasks ranging from massive, high-resolution image identification to video processing and video image recognition."

"We feel our system is the most promising and scalable to that degree." 

Of course, this will need large-scale semiconductor production. 

Our design and the prototype's material are both highly compatible with semiconductor foundry processes."


Looking forward, Li said he could see optical computing devices like the one his team produced boosting current technology's processing capacity and allowing the next generation of artificial intelligence. 


To take the next step in that direction, his research team will collaborate closely with UW ECE associate professor Arka Majumdar and assistant professor Sajjad Moazeni, both specialists in large-scale integrated photonics and microelectronics, to scale up the prototype they constructed. 


And, once the technology has been scaled up enough, it will lend itself to future integration with energy-intensive data centers, speeding up the performance of cloud-based software applications while lowering energy consumption. 

"The computers in today's data centers are already linked via optical fibers. 

This enables ultra-high bandwidth transmission, which is critical," Li said. 

"Because fiber-optic infrastructure is already in place, it's reasonable to do optical computing in such a setup. It's fantastic, and I believe the moment has come for optical computing to resurface."


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 

Optical Computing, Optical Computing Core, AI, Machine Learning, AI Systems.


Further Reading:


Wu, Changming, et al. 2022. "Harnessing Optoelectronic Noises in a Photonic Generative Network." Science Advances 8. DOI: 10.1126/sciadv.abm2956. www.science.org/doi/10.1126/sciadv.abm2956





Artificial Intelligence - What Is The Liability Of Self-Driving Vehicles?

 



Driverless cars may function completely or partly without the assistance of a human driver.

Driverless automobiles, like other AI products, confront difficulties with liability, responsibility, data protection, and customer privacy.

Driverless cars have the potential to eliminate human carelessness while also providing safe transportation for passengers.

They have been engaged in mishaps despite their potential.

The Autopilot software on a Tesla SUV may have failed to notice a huge vehicle crossing the highway in a well-publicized 2016 accident.

A Tesla Autopilot may have been involved in the death of a 49-year-old woman in 2018.

A class action lawsuit was filed against Tesla as a result of these occurrences, which the corporation resolved out of court.

Additional worries about autonomous cars have arisen as a result of bias and racial prejudice in machine vision and face recognition.

Current driverless cars may be better at spotting people with lighter skin, according to Georgia Institute of Technology researchers.

Product liability provides some much-needed solutions to such problems.

The Consumer Protection Act 1987 (CPA) governs product liability claims in the United Kingdom.

This act implements the European Union's (EU) Product Liability Directive 85/374/EEC, which holds manufacturers liable for defective products, i.e., products that are not as safe as they should be when purchased.

This contrasts with U.S. law addressing product liability, which is fragmented and largely controlled by common law and a succession of state acts.

The Uniform Commercial Code (UCC) offers remedies where a product fails to fulfill stated statements, is not merchantable, or is inappropriate for its specific use.

In general, manufacturers are held accountable for injuries caused by their faulty goods, and this responsibility may be handled in terms of negligence or strict liability.

A defect in this situation could be a manufacturing defect, where the driverless vehicle does not satisfy the manufacturer's specifications and standards; a design defect, which can result when an alternative design would have prevented an accident; or a warning defect, where there is a failure to provide adequate warning regarding a driverless car's operations.

To evaluate product liability, the six levels of automation specified by the Society of Automotive Engineers (SAE) International should be taken into account: Level 0, full control of a vehicle by a driver; Level 1, a human driver assisted by an automated system; Level 2, an automated system partially conducting the driving while a human driver monitors the environment and performs most of the driving tasks; Level 3, an automated system doing the driving and monitoring the environment, with the human driver taking back control when signaled; Level 4, the driverless vehicle conducting the driving and monitoring the environment but restricted to certain environments; and Level 5, a driverless vehicle doing everything a human driver would, without any restrictions.
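The levels above, and the liability pattern the following paragraphs describe, can be summarized in a small illustrative structure. This is an assumption made for exposition, not a statement of law or any official SAE data format:

```python
# Illustrative encoding of the SAE driving-automation levels and a rough
# liability heuristic mirroring the discussion in the text (not legal advice).

SAE_LEVELS = {
    0: {"name": "No automation",          "drives": "human",  "monitors": "human"},
    1: {"name": "Driver assistance",      "drives": "shared", "monitors": "human"},
    2: {"name": "Partial automation",     "drives": "system", "monitors": "human"},
    3: {"name": "Conditional automation", "drives": "system", "monitors": "system (human on standby)"},
    4: {"name": "High automation",        "drives": "system", "monitors": "system (restricted domain)"},
    5: {"name": "Full automation",        "drives": "system", "monitors": "system"},
}

def likely_liable_party(level):
    """Rough heuristic: human-in-the-loop levels point to the driver or
    controller as well as the manufacturer; at Levels 4-5, product
    liability falls fully on the manufacturer."""
    if level == 0:
        return "driver"
    if level <= 3:
        return "driver/controller and/or manufacturer"
    return "manufacturer"

assert likely_liable_party(2) == "driver/controller and/or manufacturer"
assert likely_liable_party(5) == "manufacturer"
```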

In Levels 1–3 that involve human-machine interaction, where it is discovered that the driverless vehicle did not communicate or send out a signal to the human driver or that the autopilot software did not work, the manufacturer will be liable based on product liability.

At Level 4 and Level 5, liability for defective product will fully apply.

Manufacturers have a duty of care to ensure that any driverless vehicle they manufacture is safe when used in any foreseeable manner.

Failure to exercise this duty will make them liable for negligence.

In some other cases, even when manufacturers have exercised all reasonable care, they will still be liable for unintended defects as per the strict liability principle.

The liability for the driver, especially in Levels 1–3, could be based on tort principles, too.

The requirement of article 8 of the 1949 Geneva Convention on Road Traffic, which states that “[e]very vehicle or combination of vehicles proceeding as a unit shall have a driver,” may not be fulfilled in cases where a vehicle is fully automated.

In some U.S. states, namely Nevada and Florida, the word driver has been changed to controller, meaning any person who causes the autonomous technology to engage; the controller need not be present in the vehicle.

A driver or controller becomes liable if it is proved that they failed to exercise reasonable care or were negligent in observing this duty.

In certain other cases, victims will only be reimbursed by their own insurance companies under no-fault responsibility.

Victims may also base their claims for damages on the strict liability principle without having to present proof of the driver’s fault.

In this situation, the driver may demand that the manufacturer be joined in a lawsuit for damages if the driver or the controller feels that the accident was the consequence of a flaw in the product.

In any case, proof of the driver's or controller's negligence will reduce the manufacturer's liability.

Third parties may sue manufacturers directly for injuries caused by faulty items under product liability.

Under MacPherson v. Buick Motor Co. (1916), in which the court found that an automobile manufacturer's liability for a defective product extends beyond the initial purchaser, no privity of contract between the victim and the manufacturer is required.

The question of product liability for self-driving vehicles is complex.

The transition from manual to smart automated control transfers responsibility from the driver to the manufacturer.

The complexity of driving modes, as well as the interaction between the human operator and the artificial agent, is one of the primary challenges concerning accident responsibility.

In the United States, the law of motor vehicle product liability relating to flaws in self-driving cars is still in its infancy.

While the Department of Transportation and, especially, the National Highway Traffic Safety Administration give some basic recommendations on automation in driverless vehicles, Congress has yet to adopt self-driving car law.

In the United Kingdom, the Automated and Electric Vehicles Act 2018 makes insurers liable by default for accidents involving automated vehicles that result in death, bodily injury, or property damage, provided the vehicles were in self-driving mode and insured at the time of the accident.


~ Jai Krishna Ponnappan




See also: 


Accidents and Risk Assessment; Product Liability and AI; Trolley Problem.


Further Reading:


Geistfeld, Mark A. 2017. “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation.” California Law Review 105: 1611–94.

Hevelke, Alexander, and Julian Nida-Rümelin. 2015. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21, no. 3 (June): 619–30.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “Towards a Legal Definition of Machine Intelligence: The Argument for Artificial Personhood in the Age of Deep Learning.” In ICAIL ’17: Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, edited by Jeroen Keppens and Guido Governatori, 119–28. New York: Association for Computing Machinery.

Luetge, Christoph. 2017. “The German Ethics Code for Automated and Connected Driving.” Philosophy & Technology 30 (September): 547–58.

Rabin, Robert L., and Kenneth S. Abraham. 2019. “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era.” Virginia Law Review 105, no. 1 (March): 127–71.

Wilson, Benjamin, Judy Hoffman, and Jamie Morgenstern. 2019. “Predictive Inequity in Object Detection.” https://arxiv.org/abs/1902.11097.




Artificial Intelligence - How Do Autonomous Vehicles Leverage AI?




Using a virtual driver system, driverless automobiles and trucks, also known as self-driving or autonomous vehicles, are capable of moving through settings with little or no human control.

A virtual driver system is a set of characteristics and capabilities that augment or replicate the actions of an absent driver to the point that, at the maximum degree of autonomy, the driver may not even be present.

Diverse technology uses, restricting circumstances, and categorization methods make reaching an agreement on what defines a driverless car difficult.

A semiautonomous system, in general, is one in which the human performs certain driving functions (such as lane maintaining) while others are performed autonomously (such as acceleration and deceleration).

All driving activities are autonomous only under certain circumstances in a conditionally autonomous system.

All driving duties are automated in a fully autonomous system.

Automobile manufacturers, technology businesses, automotive suppliers, and universities are all testing and developing applications.

Each builder's car or system, as well as the technical road that led to it, demonstrates a diverse range of technological answers to the challenge of developing a virtual driving system.

Ambiguities exist at the level of defining circumstances, so that the same technological system may be characterized in a variety of ways depending on factors such as location, speed, weather, traffic density, human attention, and infrastructure.

More complexity is generated when individual driving duties are operationalized for feature development and when context (such as connected vehicles, smart cities, and the regulatory environment) plays a role in developing solutions.

Because of this complication, producing driverless cars often necessitates collaboration across several roles and disciplines of study, such as hardware and software engineering, ergonomics, user experience, legal and regulatory, city planning, and ethics.

The development of self-driving automobiles is both a technical and a socio-cultural enterprise.

The distribution of mobility tasks across an array of equipment to collectively perform a variety of activities such as assessing driver intent, sensing the environment, distinguishing objects, mapping and wayfinding, and safety management are among the technical problems of engineering a virtual driver system.

LIDAR, radar, computer vision, global positioning, odometry, and sonar are among the hardware and software components of a virtual driving system.

There are many approaches to solving discrete autonomous movement problems.

With cameras, maps, and sensors, sensing and processing can be centralized in the vehicle, or it can be distributed throughout the environment and across other vehicles, as with intelligent infrastructure and V2X (vehicle to everything) capability.

The burden and scope of this processing, and the scale of the problems to be solved, are closely related to the expected level of human attention and intervention. As a result, the most frequently referenced classification of driverless capability, published by the Society of Automotive Engineers, is formally structured along the lines of human attentional demands and intervention requirements, and it has been widely adopted.

This classification defines six levels of driving automation, ranging from Level 0 to Level 5.

Level 0 refers to no automation, meaning the human driver is solely responsible for lateral control (steering) and longitudinal control (acceleration and deceleration).

At Level 0, the human driver is in charge of monitoring the environment and reacting to any unexpected safety hazards.

Automated systems that take over either longitudinal or lateral control are classified as Level 1, or driver assistance.

The driver is in charge of observation and intervention.

Level 2 denotes partial automation, in which the virtual driver system is in charge of both lateral and longitudinal control.

The human driver is deemed to be in the loop, which means that they are in charge of monitoring the environment and acting in the event of a safety-related emergency.

Capability beyond Level 2 has not yet been achieved by commercially available systems.

The monitoring capability of the virtual driving system distinguishes Level 3, conditional autonomy, from Level 2.

At this stage, the human driver may be disconnected from the surroundings and depend on the autonomous system to keep track of it.

The person is required to react to calls for assistance in a range of situations, such as during severe weather or in construction zones.

A navigation system (e.g., GPS) is not required at this level.

To operate at Level 2 or Level 3, a vehicle does not need a map or a specific destination.

A human driver is not needed to react to a request for intervention at Level 4, often known as high automation.

The virtual driving system is in charge of navigation, locomotion, and monitoring.

When a specific condition cannot be satisfied, such as when a navigation destination is obstructed, it may request that a driver intervene.

If the human driver chooses not to intervene, the system may safely stop or reroute, depending on the engineering approach.

The classification of this situation is based on standards of safe driving, which are established not only by technical competence and environmental circumstances, but also by legal and regulatory agreements and lawsuit tolerance.

Level 5 autonomy, often known as complete automation, refers to a vehicle that is capable of doing all driving activities in any situation that a human driver could handle.

Although Level 4 and Level 5 systems do not need the presence of a person, they still necessitate substantial technological and social cooperation.
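The handoff behavior that separates Level 3 from Level 4 in the walkthrough above can be sketched as simple pseudo-logic. This is an illustrative assumption for clarity, not any manufacturer's actual control code:

```python
# Illustrative outcome when the virtual driver signals for intervention,
# by SAE level. Function name and return strings are invented for exposition.

def handle_intervention_request(level: int, human_responds: bool) -> str:
    if level <= 2:
        # Levels 0-2: the human monitors the environment at all times and
        # is expected to intervene without being prompted.
        return "human is monitoring and intervenes directly"
    if level == 3:
        # Level 3: the system monitors, but the human MUST take back
        # control when signaled; there is no guaranteed fallback.
        return "human takes control" if human_responds else "no safe fallback"
    # Levels 4-5: the system can safely stop or reroute on its own if the
    # human declines to intervene (or is absent).
    return "human takes control" if human_responds else "system performs safe stop"


assert handle_intervention_request(3, False) == "no safe fallback"
assert handle_intervention_request(4, False) == "system performs safe stop"
```

The key design difference is that only Level 4 and above carry a built-in safe fallback, which is why those levels can dispense with a human driver entirely.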

Although the concept of a self-propelled cart is credited to Leonardo da Vinci, efforts to construct autonomous vehicles date back to the 1920s.

In his 1939 New York World's Fair Futurama display, Norman Bel Geddes envisaged a smart metropolis of the future inhabited by self-driving automobiles.

Automobiles, according to Bel Geddes, would be outfitted with "technology that would rectify the mistakes of human drivers" by 1960.

General Motors popularized the concept of smart infrastructure in the 1950s by building an "automated highway" with steering-assist circuits.

In 1960, the company tested a working prototype car but, owing to the high cost of infrastructure, quickly moved its focus from smart cities to smart cars.

A team led by Sadayuki Tsugawa of Japan's Tsukuba Mechanical Engineering Laboratory created an early prototype of an autonomous car.

Their 1977 vehicle operated under predefined environmental conditions dictated by lateral guiding rails.

The vehicle used cameras to track the rails, and most of the processing equipment was on board.

EUREKA, a pan-European research initiative, pooled money and expertise in the 1980s to advance the state of the art in cameras and processing for autonomous cars.

Simultaneously, Carnegie Mellon University in Pittsburgh, Pennsylvania, pooled its resources for research on autonomous navigation utilizing GPS data.

Since then, automakers including General Motors, Tesla, and Ford Motor Company, as well as technology firms like ARGO AI and Waymo, have been working on autonomous cars or critical components.

The technology is becoming less dependent on very limited circumstances and more adaptable to real-world scenarios.

Manufacturers are currently producing Level 4 autonomous test cars, and tests are being conducted in real-world traffic and weather conditions.

Commercially accessible Level 4 self-driving cars are still a long way off.

There are supporters and opponents of autonomous driving.

Supporters point to a number of benefits that address social problems, environmental concerns, efficiency, and safety.

The provision of mobility services and a degree of autonomy to those who do not already have access, such as those with disabilities (e.g., blindness or motor function impairment) or those who are unable to drive, such as the elderly and children, is one such social benefit.

The capacity to improve fuel economy by managing acceleration and braking has environmental benefits.

Because networked cars may go bumper to bumper and are routed according to traffic optimization algorithms, congestion is expected to be reduced.

Finally, self-driving vehicles have the potential to be safer.

They may be able to handle complicated information more quickly and thoroughly than human drivers, resulting in fewer collisions.

Negative repercussions of self-driving cars may arise in any of these same areas.

In terms of society, driverless cars may limit access to mobility and municipal services.

Autonomous mobility may be heavily regulated, costly, or limited to places that are inaccessible to low-income commuters.

Non-networked or manually operated cars might be kept out of intelligent geo-fenced municipal infrastructure.

Furthermore, if no adult or responsible human party is present during transportation, autonomous automobiles may pose a safety concern for some susceptible passengers, such as children.

Greater convenience may have environmental consequences.

Drivers may sleep or work while driving autonomously, which may have the unintended consequence of extending commutes and worsening traffic congestion.

Another security issue is widespread vehicle hacking, which could bring individual automobiles and trucks, or even a whole city, to a halt. 


~ Jai Krishna Ponnappan



See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems; Autonomy and Complacency; Intelligent Transportation; Trolley Problem.


Further Reading:


Antsaklis, Panos J., Kevin M. Passino, and Shyh J. Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems Magazine 11, no. 4: 5–13.

Bel Geddes, Norman. 1940. Magic Motorways. New York: Random House.

Bimbraw, Keshav. 2015. “Autonomous Cars: Past, Present, and Future—A Review of the Developments in the Last Century, the Present Scenario, and the Expected Future of Autonomous Vehicle Technology.” In ICINCO: 2015—12th International Conference on Informatics in Control, Automation and Robotics, vol. 1, 191–98. Piscataway, NJ: IEEE.

Kröger, Fabian. 2016. “Automated Driving in Its Social, Historical and Cultural Contexts.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 41–68. Berlin: Springer.

Lin, Patrick. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 69–85. Berlin: Springer.

Weber, Marc. 2014. “Where To? A History of Autonomous Vehicles.” Computer History Museum. https://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/.

