
Artificial Intelligence - What Is RoboThespian?

RoboThespian is an interactive robot created by Engineered Arts in England.

It is described as a humanoid, which means it was meant to look like a person.

The initial version of the robot was released in 2005, with improvements following in 2007, 2010, and 2014.

The robot is human-sized, with a plastic face, metal arms, and legs that can move in a variety of directions.

The robot speaks with a digital voice, and its video-camera eyes can track a person's movements and infer his or her age and mood.

According to Engineered Arts' website, all RoboThespians come with a touchscreen that lets users personalize and manage their experience with the robot, including animating it and changing its language.

Users may also operate it remotely via a tablet, though no live operator is necessary, since the robot can be preprogrammed.

RoboThespian was created to engage with people in public places including colleges, museums, hotels, trade events, and exhibits.

The robot is utilized as a tour guide in venues like science museums.

It can scan QR codes, identify facial expressions, react to gestures, and communicate with people through a touchscreen kiosk.

Beyond these practical uses, RoboThespian can also entertain.

It comes packed with songs, gestures, greetings, and impressions.

RoboThespian has also performed in front of an audience.

It has the ability to sing, dance, perform, read from a script, and communicate with emotion.

Equipped with cameras and facial recognition, it can respond to audiences and gauge their emotions.

According to Engineered Arts, as an actor it offers a “vast variety of facial expression” and “can be precisely displayed with the delicate subtlety, generally only achieved by human performers” (Engineered Arts 2017).

The drama Spillikin had its world premiere at the Pleasance Theatre during the 2015 Edinburgh Festival Fringe.

RoboThespian appeared alongside four human performers in a love story about a husband who builds a robot to keep his wife company after he dies.

The play toured the United Kingdom from 2016 to 2017, receiving critical praise.

Companies that purchase a RoboThespian may tailor the robot's content to their specific requirements.

The appearance of the robot's face and other design elements may be changed.

It can feature a projected face, grippable hands, and movable legs.

RoboThespians are now installed at NASA's Kennedy Space Center in the United States, the National Science and Technology Museum in Spain, and the Copernicus Science Centre in Poland, among other venues.

The robot can also be found at academic institutions including the University of Central Florida, the University of North Carolina at Chapel Hill, University College London, and the University of Barcelona.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Autonomous and Semiautonomous Systems; Ishiguro, Hiroshi.


References & Further Reading:


Engineered Arts. 2017. “RoboThespian.” Engineered Arts Limited. www.engineeredarts.co.uk.

Hickey, Shane. 2014. “RoboThespian: The First Commercial Robot That Behaves Like a Person.” The Guardian, August 17, 2014. www.theguardian.com/technology/2014/aug/17/robothespian-engineered-arts-robot-human-behaviour.





Artificial Intelligence - Who Is Raj Reddy Or Dabbala Rajagopal "Raj" Reddy?

Dabbala Rajagopal "Raj" Reddy (1937–) is an Indian American who has made important contributions to artificial intelligence and has won the Turing Award.

He is the Moza Bint Nasser University Professor of Computer Science and Robotics at Carnegie Mellon University's School of Computer Science.

He has served on the faculties of Stanford and Carnegie Mellon, two of the world's leading universities for artificial intelligence research.

In the United States and in India, he has received honors for his contributions to artificial intelligence.

In 2001, the Indian government bestowed upon him the Padma Bhushan Award (the third highest civilian honor).

In 1984, he was also awarded the Legion of Honor, France's highest honor, created in 1802 by Napoleon Bonaparte.

In 1958, Reddy obtained his bachelor's degree from the University of Madras' Guindy Engineering College, and in 1960, he received his master's degree from the University of New South Wales in Australia.

He came to the United States and earned his doctorate in computer science at Stanford University in 1966.

He was the first in his family to earn a university degree, as is typical of many rural Indian households.

After working in industry as an Applied Science Representative at IBM Australia from 1960 to 1963, he entered academia in 1966, joining Stanford University as an Assistant Professor of Computer Science and remaining on its faculty until 1969.

He joined Carnegie Mellon as an Associate Professor of Computer Science in 1969 and has worked there ever since.

He rose up the ranks at Carnegie Mellon, eventually becoming a full professor in 1973 and a university professor in 1984.

In 1991, he was appointed dean of the School of Computer Science, a post he held until 1999.

Many schools and institutions were founded as a result of Reddy's efforts.

In 1979, he launched the Robotics Institute and served as its first director, a position he held until 1999.

He was a driving force behind the establishment of the Language Technologies Institute, the Human Computer Interaction Institute, the Center for Automated Learning and Discovery (now the Machine Learning Department), and the Institute for Software Research at CMU during his stint as dean.

From 1999 to 2001, Reddy was a cochair of the President's Information Technology Advisory Committee (PITAC).

PITAC's functions were absorbed by the President's Council of Advisors on Science and Technology (PCAST) in 2005.

Reddy was the president of the American Association for Artificial Intelligence (AAAI) from 1987 to 1989.

The AAAI has since been renamed the Association for the Advancement of Artificial Intelligence, in recognition of the worldwide character of a research community that began with pioneers like Reddy.

The former logo, acronym (AAAI), and purpose have been retained.

Reddy's research focused on artificial intelligence, the study of giving intelligence to computers.

He worked on voice control for robots, speaker-independent speech recognition, and large-vocabulary dictation of continuous speech.

Reddy and his collaborators have made significant contributions to the computer analysis of natural scenes, task-oriented computer architectures, universal access to information (a project supported by UNESCO), and autonomous robotic systems.

With his coworkers, Reddy developed the Hearsay II, Dragon, Harpy, and Sphinx I/II speech recognition systems.

The blackboard model, one of the fundamental concepts to emerge from this work, has been widely adopted across many fields of AI.
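
The blackboard idea can be illustrated with a short sketch: independent knowledge sources watch a shared data store and post new hypotheses when their trigger conditions are met. The following minimal Python example shows the general pattern only; the class names and the toy speech pipeline are illustrative assumptions, not the actual design of Hearsay II.

```python
# Minimal, illustrative blackboard sketch (not Hearsay-II's actual code).
# Independent knowledge sources watch a shared blackboard and post new
# hypotheses whenever their trigger condition holds.

class Blackboard:
    def __init__(self):
        self.hypotheses = {}  # shared state keyed by hypothesis level

    def post(self, level, value):
        self.hypotheses[level] = value


class KnowledgeSource:
    def __init__(self, name, trigger, action):
        self.name = name
        self.trigger = trigger  # predicate over the blackboard contents
        self.action = action    # callable that posts a new hypothesis

    def ready(self, blackboard):
        return self.trigger(blackboard.hypotheses)


def control_loop(blackboard, sources):
    """Fire any ready knowledge source until none can contribute."""
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.ready(blackboard):
                ks.action(blackboard)
                progress = True


# Toy speech-style pipeline: raw signal -> phonemes -> words.
bb = Blackboard()
bb.post("signal", "raw audio frames")
sources = [
    KnowledgeSource(
        "phoneme_ks",
        trigger=lambda h: "signal" in h and "phonemes" not in h,
        action=lambda b: b.post("phonemes", ["HH", "EH", "L", "OW"]),
    ),
    KnowledgeSource(
        "word_ks",
        trigger=lambda h: "phonemes" in h and "words" not in h,
        action=lambda b: b.post("words", ["hello"]),
    ),
]
control_loop(bb, sources)
print(bb.hypotheses["words"])  # -> ['hello']
```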

Reddy was also interested in using technology for the good of society, serving as Chief Scientist at the Centre Mondial Informatique et Ressources Humaines in France.

He aided the Indian government in the establishment of the Rajiv Gandhi University of Knowledge Technologies, which focuses on low-income rural youth.

He serves on the governing council of the International Institute of Information Technology (IIIT), Hyderabad.

IIIT is a not-for-profit public-private partnership (N-PPP) focused on technology research and its applications.

He served on the board of directors of the Emergency Management and Research Institute (EMRI), a nonprofit public-private partnership that provides public emergency medical services.

EMRI has also aided in the emergency management of its neighboring nation, Sri Lanka.

In addition, he was a member of the Health Management and Research Institute (HMRI).

HMRI provides non-emergency health-care consultation to rural populations, particularly in Andhra Pradesh, India.

In 1994, Reddy and Edward A. Feigenbaum shared the Turing Award, the most prestigious award in computer science, and Reddy became the first person of Indian/Asian descent to receive it.

He received the IBM Research Ralph Gomory Fellow Award in 1991, the Okawa Foundation's Okawa Prize in 2004, the Honda Foundation's Honda Prize in 2005, and the Vannevar Bush Award from the United States National Science Board in 2006.

Reddy has received fellowships from the Institute of Electrical and Electronics Engineers (IEEE), the Acoustical Society of America, and the American Association for Artificial Intelligence, among other prestigious organizations.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Autonomous and Semiautonomous Systems; Natural Language Processing and Speech Understanding.


References & Further Reading:


Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.” AI Magazine 9, no. 4 (Winter): 9–21.

Reddy, Raj. 1996. “To Dream the Possible Dream.” Communications of the ACM 39, no. 5 (May): 105–12.






Artificial Intelligence - How Do Autonomous Vehicles Leverage AI?




Driverless automobiles and trucks, also known as self-driving or autonomous vehicles, use a virtual driver system to move through their surroundings with little or no human control.

A virtual driver system is a set of features and capabilities that augment or replicate the actions of an absent driver, to the point that, at the highest degree of autonomy, the driver may not even be present.

Diverse technology uses, limiting conditions, and classification methods make it difficult to reach agreement on what defines a driverless car.

A semiautonomous system, in general, is one in which the human performs certain driving functions (such as lane keeping) while others (such as acceleration and deceleration) are performed autonomously.

In a conditionally autonomous system, all driving tasks are automated, but only under certain circumstances.

In a fully autonomous system, all driving tasks are automated.

Automobile manufacturers, technology businesses, automotive suppliers, and universities are all testing and developing applications.

Each builder's car or system, as well as the technical path that led to it, demonstrates one of a diverse range of technological answers to the challenge of developing a virtual driver system.

Ambiguities exist at the level of defining circumstances, so that the same technological system may be characterized in a variety of ways depending on factors such as location, speed, weather, traffic density, human attention, and infrastructure.

Further complexity arises when individual driving tasks are operationalized for feature development and when context (such as connected vehicles, smart cities, and the regulatory environment) plays a role in shaping solutions.

Because of this complexity, producing driverless cars often necessitates collaboration across several roles and disciplines of study, such as hardware and software engineering, ergonomics, user experience, legal and regulatory affairs, city planning, and ethics.

The development of self-driving automobiles is both a technical and a socio-cultural enterprise.

Among the technical problems of engineering a virtual driver system is the distribution of mobility tasks across an array of equipment that collectively performs activities such as assessing driver intent, sensing the environment, distinguishing objects, mapping and wayfinding, and managing safety.

LIDAR, radar, computer vision, global positioning, odometry, and sonar are among the hardware and software components of a virtual driving system.

There are many approaches to solving discrete autonomous movement problems.

Sensing and processing can be centralized in the vehicle, using cameras, maps, and onboard sensors, or distributed throughout the environment and across other vehicles, as with intelligent infrastructure and V2X (vehicle-to-everything) capability.
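
The division of labor inside such a system can be made concrete with a small sketch. The following minimal Python example shows one way a sense-perceive-plan-act loop might be organized over the sensor modalities named above; all class names, fields, and the trivial planning rule are illustrative assumptions, not any manufacturer's actual design.

```python
# Illustrative sketch only (not any manufacturer's real design or API):
# one way a virtual driver system might organize a sense -> perceive ->
# plan -> act loop over the sensor modalities named above.

from dataclasses import dataclass, field


@dataclass
class SensorFrame:
    lidar_points: list = field(default_factory=list)  # 3-D range returns
    radar_tracks: list = field(default_factory=list)  # detected objects
    camera_image: bytes = b""                         # computer-vision input
    gps_fix: tuple = (0.0, 0.0)                       # global position
    odometry_m: float = 0.0                           # distance traveled


class VirtualDriver:
    def perceive(self, frame: SensorFrame) -> dict:
        # A real system would fuse all modalities via object detection,
        # tracking, and localization; here we pass radar tracks through.
        return {"obstacles": frame.radar_tracks, "pose": frame.gps_fix}

    def plan(self, world: dict, destination: tuple) -> str:
        # Wayfinding and safety management reduced to a trivial rule;
        # the destination is unused in this toy planner.
        return "brake" if world["obstacles"] else "proceed"

    def act(self, decision: str) -> None:
        print(f"actuator command: {decision}")


driver = VirtualDriver()
frame = SensorFrame(radar_tracks=["vehicle ahead, closing"])
world = driver.perceive(frame)
driver.act(driver.plan(world, destination=(40.44, -79.94)))  # -> brake
```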

The burden and scope of this processing, and the scale of the problems to be solved, are closely related to the expected level of human attention and intervention. For this reason, the most frequently referenced classification of driverless capability, published by the Society of Automotive Engineers (SAE), is formally structured around human attentional demands and intervention requirements, and it has been widely adopted.

The SAE taxonomy defines six levels of driving automation, ranging from Level 0 to Level 5.

Level 0 refers to no automation, meaning the human driver is solely responsible for longitudinal control (acceleration and deceleration) and lateral control (steering).

At Level 0, the human driver is also in charge of monitoring the environment and reacting to any unexpected safety hazards.

Automated systems that take over either longitudinal or lateral control are classified as Level 1, or driver assistance.

The driver is in charge of observation and intervention.

Level 2 denotes partial automation, in which the virtual driver system is in charge of both lateral and longitudinal control.

The human driver is deemed to be in the loop, which means that they are in charge of monitoring the environment and acting in the event of a safety-related emergency.

Commercially available systems have not yet advanced beyond Level 2 capability.

The monitoring capability of the virtual driver system is what distinguishes Level 3, conditional autonomy, from Level 2.

At this stage, the human driver may disengage from the surroundings and rely on the autonomous system to monitor them.

The person is required to react to calls for assistance in a range of situations, such as during severe weather or in construction zones.

A navigation system (e.g., GPS) is not required at this level.

To operate at Level 2 or Level 3, a vehicle does not need a map or a specific destination.

At Level 4, often known as high automation, a human driver is not needed to respond to a request for intervention.

The virtual driver system is in charge of navigation, locomotion, and monitoring.

When a specific condition cannot be satisfied, such as when a navigation destination is obstructed, it may request that a driver intervene.

If the human driver chooses not to intervene, the system may safely stop or reroute, depending on the engineering approach.

The classification of such situations is based on standards of safe driving, which are established not only by technical competence and environmental circumstances but also by legal and regulatory agreements and tolerance for litigation.

Level 5 autonomy, often known as full automation, refers to a vehicle capable of performing all driving tasks in any situation that a human driver could handle.
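
These attentional and intervention requirements lend themselves to a simple tabular encoding. The following minimal Python sketch summarizes the six levels described above as a lookup structure; the field names and wording are illustrative assumptions rather than official SAE terminology.

```python
# Minimal sketch of the six levels described above: who handles control,
# monitoring, and fallback at each level. Field names and wording are
# illustrative, not official SAE J3016 terminology.

from dataclasses import dataclass


@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    control: str     # lateral and longitudinal control
    monitoring: str  # who watches the driving environment
    fallback: str    # who must respond when the system cannot proceed


SAE_LEVELS = [
    AutomationLevel(0, "No Automation",          "human",  "human",  "human"),
    AutomationLevel(1, "Driver Assistance",      "shared", "human",  "human"),
    AutomationLevel(2, "Partial Automation",     "system", "human",  "human"),
    AutomationLevel(3, "Conditional Automation", "system", "system", "human, on request"),
    AutomationLevel(4, "High Automation",        "system", "system", "system (may ask a human)"),
    AutomationLevel(5, "Full Automation",        "system", "system", "system"),
]


def fallback_party(level: int) -> str:
    """Return who is responsible for fallback at a given level."""
    return SAE_LEVELS[level].fallback


print(fallback_party(3))  # -> human, on request
```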

Although Level 4 and Level 5 systems do not need the presence of a person, they still necessitate substantial technological and social cooperation.

Although efforts to construct autonomous vehicles date back to the 1920s, the concept of a self-propelled cart is credited to Leonardo da Vinci.

In his 1939 New York World's Fair Futurama display, Norman Bel Geddes envisaged a smart metropolis of the future inhabited by self-driving automobiles.

Automobiles, Bel Geddes predicted, would be outfitted with "technology that would rectify the mistakes of human drivers" by 1960.

General Motors popularized the concept of smart infrastructure in the 1950s by building an "automated highway" with steering-assist circuits.

In 1960, the company tested a working prototype car, but owing to the high cost of infrastructure, it quickly shifted its focus from smart cities to smart autos.

An early prototype of an autonomous car was created by a team led by Sadayuki Tsugawa at Japan's Tsukuba Mechanical Engineering Laboratory.

Their 1977 vehicle operated under predefined environmental conditions dictated by lateral guide rails.

The vehicle used cameras to track the rails, and most of its processing equipment was onboard.

In the 1980s, the pan-European research initiative EUREKA pooled funding and expertise to advance the state of the art in cameras and processing for autonomous cars.

At the same time, researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, pursued research on autonomous navigation using GPS data.

Since then, automakers including General Motors, Tesla, and Ford Motor Company, as well as technology firms like Argo AI and Waymo, have been working on autonomous cars or their critical components.

The technology is becoming less dependent on very limited circumstances and more adaptable to real-world scenarios.

Manufacturers are currently producing Level 4 autonomous test cars, and tests are being conducted in real-world traffic and weather conditions.

Commercially available Level 4 self-driving cars, however, are still a long way off.

There are supporters and opponents of autonomous driving.

Supporters point to a number of benefits that address social problems, environmental concerns, efficiency, and safety.

One such social benefit is the provision of mobility services, and a degree of autonomy, to those who do not already have access, such as people with disabilities (e.g., blindness or motor function impairment) and those who are unable to drive, such as the elderly and children.

The capacity to improve fuel economy by managing acceleration and braking offers environmental benefits.

Because networked cars may go bumper to bumper and are routed according to traffic optimization algorithms, congestion is expected to be reduced.

Finally, self-driving vehicles have the potential to be safer.

They may be able to handle complicated information more quickly and thoroughly than human drivers, resulting in fewer collisions.

Negative repercussions of self-driving cars may arise in each of these same areas.

In terms of society, driverless cars may limit access to mobility and municipal services.

Autonomous mobility may be heavily regulated, costly, or limited to places that are inaccessible to low-income commuters.

Non-networked or manually operated cars might be kept out of intelligent geo-fenced municipal infrastructure.

Furthermore, if no adult or responsible human party is present during transport, autonomous automobiles may pose a safety concern for vulnerable passengers, such as children.

Greater convenience may have environmental consequences.

Occupants may sleep or work while the vehicle drives autonomously, which may have the unintended consequence of lengthening commutes and worsening traffic congestion.

Another security issue is widespread vehicle hacking, which could bring individual automobiles and trucks, or even a whole city, to a halt. 


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems; Autonomy and Complacency; Intelligent Transportation; Trolley Problem.


Further Reading:


Antsaklis, Panos J., Kevin M. Passino, and Shyh J. Wang. 1991. “An Introduction to Autonomous Control Systems.” IEEE Control Systems Magazine 11, no. 4: 5–13.

Bel Geddes, Norman. 1940. Magic Motorways. New York: Random House.

Bimbraw, Keshav. 2015. “Autonomous Cars: Past, Present, and Future—A Review of the Developments in the Last Century, the Present Scenario, and the Expected Future of Autonomous Vehicle Technology.” In ICINCO: 2015—12th International Conference on Informatics in Control, Automation and Robotics, vol. 1, 191–98. Piscataway, NJ: IEEE.

Kröger, Fabian. 2016. “Automated Driving in Its Social, Historical and Cultural Contexts.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 41–68. Berlin: Springer.

Lin, Patrick. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 69–85. Berlin: Springer.

Weber, Marc. 2014. “Where To? A History of Autonomous Vehicles.” Computer History Museum. https://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/.


Artificial Intelligence - Autonomy And Complacency In AI Systems

The concepts of machine autonomy and human autonomy and complacency are intertwined.

Artificial intelligences are undoubtedly getting more independent as they are trained to learn from their own experiences and data intake.

As machines gain skills that exceed human abilities, humans tend to become increasingly dependent on them to make judgments and to react correctly to unexpected events.

This dependence on AI systems' decision-making processes can lead to a loss of human agency and to complacency.

Such complacency may leave major faults in the AI system or its decision-making processes unnoticed and unaddressed.

Autonomous machines are those that can function in unsupervised settings, adapt to new situations and experiences, learn from previous errors, and determine the best possible outcomes in each case without fresh programming input.

To put it another way, these machines learn from their experiences and are capable of going beyond their original programming in certain respects.

The idea is that programmers cannot foresee every circumstance an AI-enabled machine might encounter in the course of its activities, so the machine must be able to adapt.

This view is not universally accepted; others argue that such adaptability is itself inherent in the systems' programming, since their programs are designed to be adaptable.

These debates are exacerbated by the deeper disagreement over whether any agent, including humans, can express free will and act autonomously.

With the advancement of technology, the autonomy of AI programs is not the only element of autonomy that is being explored.

Worries have also been raised concerning the influence of such systems on human autonomy, as well as concerns about human complacency.

As AI systems grow increasingly tuned to anticipate people's wishes and preferences, people may benefit even as their own choices become irrelevant, since they no longer have to make decisions for themselves.

The interaction of human employees with automated systems has received considerable attention.

According to studies, humans are prone to overlook flaws in these processes, particularly as they become routinized, which fosters a positive expectation of success rather than a vigilant expectation of failure.

This expectation of success leads the operators or supervisors of automated processes to place their confidence in inaccurate readouts or machine judgments, which may result in mistakes and accidents.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Accidents and Risk Assessment; Autonomous and Semiautonomous Systems.



Further Reading:


André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Frank Douglas, William Goldstein, Joel Huber, Leaf Van Boven, Bernd Weber, and Haiyang Yang. 2018. “Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data.” Customer Needs and Solutions 5, no. 1–2: 28–37.

Bahner, J. Elin, Anke-Dorothea Hüper, and Dietrich Manzey. 2008. “Misuse of Automated Decision Aids: Complacency, Automation Bias, and the Impact of Training Experience.” International Journal of Human-Computer Studies 66, no. 9: 688–99.

Lawless, W. F., Ranjeev Mittu, Donald Sofge, and Stephen Russell, eds. 2017. Autonomy and Intelligence: A Threat or Savior? Cham, Switzerland: Springer.

Parasuraman, Raja, and Dietrich H. Manzey. 2010. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Human Factors 52, no. 3: 381–410.




