Artificial Intelligence - What Are Robot Caregivers?

 


Personal support robots, or caregiver robots, are meant to help individuals who, for a number of reasons, need assistive technology for long-term care, disability, or monitoring.

Although not yet widely used, caregiver robots are seen as useful in countries with rapidly aging populations or in situations where many people fall seriously ill at the same time.


Caregiver robots have elicited a wide variety of reactions, from terror to comfort.


Some ethicists have claimed that, in attempting to eliminate the toil from caregiving rituals, robotics researchers misunderstand or underappreciate the role of compassionate caretakers.

The majority of caregiver robots are personal robots for use at home, but some are used in institutions such as hospitals, nursing homes, and schools.

Some of them are geriatric care robots.

Others, dubbed "robot nannies," are meant to do childcare tasks.

Many have been dubbed "social robots." Interest in caregiving robots has risen in tandem with the world's aging population.

Japan has one of the largest percentages of older people in the world and is a pioneer in the development of caregiver robots.

According to the United Nations, by 2050 one-third of the island nation's population will be 65 or older, far outnumbering the available supply of nursing-care workers.

Japan's Ministry of Health, Labor, and Welfare initiated a pilot demonstration project in 2013 to introduce bionic nursing robots into eldercare facilities.

By 2050, the number of eligible retirees in the United States will have doubled, and the number of people over the age of 85 will have tripled.

In the same year, there will be 1.5 billion people over the age of 65 worldwide (United Nations 2019).

For a number of reasons, people are becoming more interested in caregiver robot technology.


The physical difficulties of caring for the elderly, infirm, and children are often mentioned as a driving force for the creation of assistive robots.


The caregiver position may be challenging, especially when the client has a severe or long-term illness such as Alzheimer's disease, dementia, or schizoid disorder.

Caregiver robots have also been proposed as a partial answer to family economic hardship.

Robots may one day be able to take the place of human relatives who must work.

They've also been suggested as a possible solution to nursing home and other care facility staffing shortages.

In addition to technological advancements, societal and cultural factors are driving the creation of caregiver robots.

Because of unfavorable attitudes toward outsiders, robot caregivers are preferred in Japan over foreign health-care workers.

The elderly themselves often cite the desire for independence and the fear of losing behavioral, emotional, and cognitive autonomy.

In the literature, several robot caregiver functions have been recognized.

Some robots are thought to be capable of minimizing human carers' mundane work.

Others are better at more difficult jobs.

Intelligent service robots have been designed to help with feeding, cleaning of homes and bodies, and mobility support (including lifting and turning), all of which save time and effort.



Safety monitoring, data collecting, and surveillance are some of the other functions of these assistive technologies.


Clients with severe to profound impairments may benefit from robot carers for coaching and stimulation.

For patients who require frequent reminders to complete chores or take medication, these robots might serve as cognitive prostheses or mobile memory aids.

These caregiver robots may also include telemedicine capabilities, allowing them to call doctors or nurses for routine or emergency consultations.


Robot caretakers have been offered as a source of social connection and companionship, which has sparked debate.

Although some social robots have a human-like appearance, many are interactive smart toys or artificial pets.

In Japan, such robots are referred to as iyashi, a term that also describes a style of anime and manga focused on emotional healing.

Japanese children and adults can choose from a broad range of soft-tronic robots as huggable companions.

Matsushita Electric Industrial (MEI) created Wandakun, a fluffy koala bear-like robot, in the 1990s.

When petted, the robot wiggled, sang, and responded with a few Japanese phrases.


Babyloid is a plush mechanical baby beluga whale created by Masayoshi Kano at Chukyo University to help elderly patients cope with depression.


Babyloid is only seventeen inches long, yet its eyes blink and it "naps" when rocked.

When it is "glad," LED lights embedded in its cheeks glow.

When the robot is in a bad mood, it may also drop blue LED tears.

Babyloid can produce almost a hundred distinct noises.

It is hardly a toy, since each one costs more than $1,000.

Paro, a replica of an infant harp seal, was invented by Japan's National Institute of Advanced Industrial Science and Technology (AIST) to provide comfort to individuals suffering from dementia, anxiety, or depression.

The eighth-generation Paro includes thirteen surface and whisker sensors, three microphones, two vision sensors, and seven actuators for the neck, fins, and eyelids.

Paro's developer, Takanori Shibata of AIST's Intelligent Systems Research Institute, reports that patients with dementia who use the robot show less hostility and wandering as well as increased social interaction.

In the United States, Paro is classified as a Class II medical device, which puts it in the same risk category as electric wheelchairs and X-ray machines.


Taizou, a twenty-eight-inch robot that can replicate the motions of thirty different exercises, was developed by AIST.


In Japan, Taizou is utilized to encourage older adults to exercise and keep in shape.

Sony Corporation's well-known AIBO is a robotic therapy dog as well as a very expensive toy.

In 2018, Sony's Life Care Design division started introducing a new generation of dog robots into the company's retirement homes.

The humanoid QRIO robot, AIBO's successor, has been suggested as a platform for basic childcare activities including interactive games and sing-alongs.

Palro, a Fujisoft robot for eldercare therapy, is already in use in over 1,000 senior-care facilities.

Since its original release in 2010, its artificial intelligence software has been modified multiple times.

Both are used to alleviate dementia symptoms and provide enjoyment.

Japanese firms have also promoted so-called partner-type personal robots to a broader base of users.

These robots are designed to encourage human-machine interaction and to alleviate feelings of loneliness and mild depression.


In the late 1990s, NEC Corporation started developing the adorable PaPeRo (Partner-Type Personal Robot).


PaPeRo communication robots can see, listen, speak, and move in a variety of ways.

Current versions include twin camera eyes that can recognize faces and are intended to allow family members who live in different households to keep an eye on one another.

PaPeRo's Childcare Version interacts with youngsters and serves as a temporary babysitter.

In 2005, Toyota debuted its humanoid Partner Robots family.

The company's robots are intended for a broad range of applications, including human assistance and rehabilitation, as well as socializing and innovation.


In 2012, Toyota expanded the Partner Robots line with a specialized Human Support Robot (HSR).


HSR robots are designed to help older adults maintain their independence.

In Japan, prototypes are currently being used in eldercare facilities and handicapped people's homes.

HSR robots are capable of picking up and retrieving things as well as avoiding obstacles.

They may also be controlled remotely by a human caregiver and offer internet access and communication.

Japanese roboticists are also taking more narrowly focused approaches to automated caregiving.


The RI-MAN robot, developed by the RIKEN Collaboration Center for Human-Interactive Robot Research, is an autonomous humanoid patient-lifting robot.


The robot's forearms, upper arms, and torso are covered with a layer of soft silicone skin and are equipped with touch sensors for safe lifting.

RI-MAN has odor detectors and can follow human faces.

RIBA (Robot for Interactive Body Assistance) is a second-generation RIKEN lifting robot that securely moves patients from bed to wheelchair while responding to simple voice instructions.

In the RIBA-II, capacitance-type tactile sensors made entirely of rubber monitor patient weight.


Robear is RIKEN's current-generation hydraulic patient lift-and-transfer robot.

The robot, which has the look of an anthropomorphic robotic bear, is lighter than its predecessors.

Toshiharu Mukai, a RIKEN lab leader, invented the lifting robots.


SECOM's MySpoon, Cyberdyne's Hybrid Assistive Limb (HAL), and Panasonic's Resyone robotic care bed are examples of narrower approaches to caregiver robots in Japan.

MySpoon is a meal-assistance robot that allows users to feed themselves by guiding a robotic arm and eating utensil with a joystick.

People with physical limitations may use the Cyberdyne Hybrid Assistive Limb (HAL), a powered robotic exoskeleton suit.

For patients who would ordinarily need daily lifting assistance, the Panasonic Resyone robotic care bed combines a bed and a wheelchair.

Projects to develop caregiver robots are also ongoing in Australia and New Zealand.

The Australian Research Council's Centre of Excellence for Autonomous Systems (CAS) was established in the early 2000s as a collaboration between the University of Technology Sydney, the University of Sydney, and the University of New South Wales.

The center's mission was to better understand and develop robotics in order to promote the widespread and ubiquitous use of autonomous systems in society.

The work of CAS has since been split between two independent groups: the University of Technology Sydney's Centre for Autonomous Systems and the University of Sydney's Australian Centre for Field Robotics.

Bruce MacDonald of the University of Auckland is leading the development of Healthbot, a socially assistive robot.

Healthbot is a mobile health robot that reminds seniors to take their medications, checks vital signs, monitors their physical condition, and calls for help in an emergency.

In the European Union, a number of caregiver robots are being developed.

The GiraffPlus (Giraff+) project, recently completed at Örebro University in Sweden, aimed to develop an intelligent system for monitoring the blood pressure, temperature, and movements of elderly individuals at home (to detect falls and other health emergencies).

Giraff may also be utilized as a telepresence robot for virtual visits with family members and health care providers.

The robot is roughly five and a half feet tall and has basic controls as well as a night-vision camera.


The European Mobiserv project's interdisciplinary, collaborative goal is to develop a robot that reminds elderly users to take their medications, eat meals, and stay active.


Mobiserv is part of a smart-home ecosystem that includes optical and other sensors as well as automated devices.

The robot also works with smart clothing that collects health-related data.

Mobiserv is a collaboration between Systema Technologies and nine European partners that represent seven different nations.

The EU CompanionAble Project, which involves fifteen institutions and is led by the University of Reading, aims to develop a transportable robotic companion to illustrate the benefits of information and communication technology in aged care.

For people in the early stages of dementia, the CompanionAble robot addresses emergency and security issues, offers cognitive stimulation and reminders, and summons human caregiver support.

In a smart home scenario, CompanionAble also interacts with a range of sensors and devices.

The QuoVADis Project at Broca Hospital in Paris, a public university hospital specializing in geriatrics, has a similar goal: to develop a robot for at-home care of cognitively impaired older people.

The Fraunhofer Institute for Manufacturing Engineering and Automation continues to design and manufacture Care-O-Bots, which are modular robots.

They are designed for hospitals, hotels, and nursing homes.

With its long arms and rotating, bending hip joint, the Care-O-Bot 4 service robot can reach from the floor to a shelf.

The robot is intended to be regarded as friendly, helpful, courteous, and intelligent.


ROBOSWARM and IWARD, intelligent and programmable hospital robot swarms developed under European Union projects, offer a fresh approach.


ROBOSWARM is a distributed agent cleaning system for hospitals.

The more flexible IWARD covers cleaning, patient monitoring and guidance, environmental monitoring, medicine distribution, and patient surveillance.

Because the AI incorporated in these swarms displays adaptive and self-organizing behavior, the multi-institutional partners determined that certifying that the robots would operate adequately under real-world conditions would be challenging.

They also discovered that onlookers sometimes questioned the robots' motions, asking whether they were doing the proper tasks.


The Ludwig humanoid robot, developed at the University of Toronto, is intended to assist caretakers in dealing with aging-related issues in their clients.


The robot converses with elderly people suffering from dementia or Alzheimer's disease.

Goldie Nejat, AGE-WELL Investigator, Canada Research Chair in Robots for Society, and director of the University of Toronto's Institute for Robotics and Mechatronics, is using robotics technology to assist individuals by guiding them through ordinary everyday tasks.

Brian, the university's robot, is sociable and reacts to emotional human interaction.


At the Toronto Rehabilitation Institute (iDAPT), Canada's largest academic rehabilitation research facility, HomeLab is creating assistive robots for use in health-care delivery.


Ed the Robot, created by HomeLab, is a low-cost robot built using the iRobot Create toolset.

The robot, like Brian, is designed to remind dementia sufferers of the appropriate steps to take while doing everyday tasks.


In the United States, caregiver robot technology is also on the rise.

The Acrotek Actron MentorBot surveillance and security robot, which was created in the early 2000s, could follow a human client using visual and aural cues, offer food or medicine reminders, inform family members about concerns, and call emergency services.


Bandit is a socially assistive robot created by Maja Matarić of the Robotics and Autonomous Systems Center at the University of Southern California.


The robot is used in therapeutic settings with patients recovering from severe injuries or strokes, as well as those with age-related disorders, autism, or obesity.

According to the center, stroke survivors in rehabilitation sessions respond quickly to imitation exercise movements produced by the robots.

Robot-assisted rehabilitative exercises have also been effective in prompting and cueing tasks for children with autism spectrum disorders.

Through the company Embodied, Inc., Matarić is now working to bring affordable social robots to market.


Nursebots Flo and Pearl, assistive robots for the care of the elderly and infirm, were developed in collaboration between the University of Pittsburgh, Carnegie Mellon University, and the University of Michigan.


The National Science Foundation-funded Nursebot project created a platform for intelligent reminders, telepresence, data gathering and monitoring, mobile manipulation, and social engagement.

Today, Carnegie Mellon is home to the Quality of Life Technology (QoLT) Center, a National Science Foundation Engineering Research Center (ERC) whose objective is to use intelligent technologies to promote independence and improve the functional capabilities of the elderly and handicapped.

The transdisciplinary AgeLab at the Massachusetts Institute of Technology was founded in 1999 to aid in the development of marketable ideas and assistive technology for the aged.

Joe Coughlin, the creator and director of AgeLab, has concentrated on developing the technological requirements for conversational robots for senior care that have the difficult-to-define attribute of likeability.

Walter Dan Stiehl and associates at the MIT Media Lab created the Huggable, a robotic teddy bear companion.

The bear includes a video camera eye, 1,500 sensors, silent actuators, an inertial measurement unit, a speaker, and an internal personal computer with wireless networking capabilities.

Virtual agents are used in other forms of caregiving technology.

These agents are sometimes referred to as softbots.

The MIT Media Lab's CASPER affect-management agent, created by Jonathan Klein, Youngme Moon, and Rosalind Picard in the early 2000s, is an example of a virtual agent designed to relieve unpleasant emotional states, notably frustration.

To reply to a user who is sharing their ideas and emotions with the computer, the human-computer interaction (HCI) agent employs text-only social-affective feedback mechanisms.



The MIT FITrack exercise advisor agent uses a browser-based client with a relational database and text-to-speech engine on the backend.



The goal of FITrack is to create an interactive simulation of a professional fitness trainer called Laura working with a client.

Amanda Sharkey and Noel Sharkey, computer scientists at the University of Sheffield, are often mentioned in studies on the ethics of caregiver robot technology.

The Sharkeys are concerned about robotic carers and the loss of human dignity they may cause.

They claim that such technology has both advantages and disadvantages.

On the one hand, care provider robots have the potential to broaden the variety of options accessible to graying populations, and these features of technology should be promoted.

The technologies, on the other hand, might be used to mislead or deceive society's most vulnerable people, or to further isolate the elderly from frequent companionship and social engagement.

The Sharkeys point out that robotic caretakers may someday outperform humans in certain areas, such as when speed, power, or accuracy are required.


Robots might be designed to avoid or reduce eldercare abuse, impatience, or incompetence, all of which are common complaints among the elderly.


Indeed, if societal institutions for caregiver assistance are weak or defective, an ethical obligation to utilize caregiver robots may apply.

Robots, on the other hand, cannot comprehend complex human constructs like loyalty or adapt perfectly to the delicate, individualized needs of specific clients.

"The old may find themselves in a barren world of machines, a world of automated care: a factory for the aged," the Sharkeys wrote if they don't plan ahead (Sharkey and Sharkey 2012, 282).

In her groundbreaking book Alone Together: Why We Expect More from Technology and Less from Each Other (2011), Sherry Turkle devotes a chapter to caregiver robots.

She points out that researchers in robotics and artificial intelligence are driven by a desire to make the elderly feel wanted through their work, assuming that older people are often lonely or abandoned.

In aging populations, it is true that attention and labor are in short supply.


Robots are used as a kind of entertainment.


They make everyday living and household routines easier and safer.

Turkle concedes that robots never tire and can even maintain a neutral stance in interactions with clients.

Humans, on the other hand, can have reasons that go against even the most basic or traditional norms of caring.


"One may argue that individuals can act as though they care," Turkle observes.

"A robot is unconcerned. As a result, a robot cannot act since it can only act" (Turkle 2011, 124).


Turkle is, however, a sharp critic of caregiving technology.

Most importantly, she argues, caring behavior is too easily mistaken for caring feelings.

In her opinion, interactions between people and robots do not constitute true dialogues.

They may even cause confusion among vulnerable and dependent groups.

The risk of privacy invasion from caregiver robot monitoring is significant, and automated help might undermine human experience and the formation of memories.


The emergence of a generation of older adults and children who prefer machines to intimate human relationships poses a significant threat.


Several philosophers and ethicists have weighed in on appropriate behaviors and manufactured compassion.

According to Sparrow and Sparrow (2006), human touch is vitally important in healing rituals, robots may increase clients' sense of lost control, and robot caring is false caregiving because robots are incapable of genuine concern.

Borenstein and Pearson (2011) and Van Wynsberghe (2013) argue that caregiver robots may infringe on human dignity and seniors' rights, impeding freedom of choice.

Van Wynsberghe, in particular, advocates value-sensitive robot designs that align with the ethic of care articulated by University of Minnesota professor Joan Tronto, which includes attentiveness, responsibility, competence, and reciprocity, as well as broader concerns for respect, trust, empathy, and compassion.

Vallor (2011) challenged the underlying assumptions of robot care by questioning the premise that caring for others is merely a problem or a burden.

Excellent care may be tailored to the individual, something that personable but mass-produced robots could fail to provide.


Robot caregiving will almost certainly be frowned upon by many faiths and cultures.


By providing incorrect and unsuitable social connections, caregiver robots may potentially cause reactive attachment disorder in children.

The International Organization for Standardization (ISO) has defined standards for the design of personal robots, but who is to blame when a robot is negligent? The courts are undecided, and robot caregiver legislation is still in its early stages.

According to Sharkey and Sharkey (2010), caregiver robots raise the prospect of liability for breaches of privacy, injury caused by unlawful restraint, deceptive practices, psychological harm, and failures of accountability.

Future robot ethical frameworks must prioritize the needs of patients above the wishes of caretakers.

In interviews with the elderly, Wu et al. (2010) discovered six themes connected to patient requirements.

Thirty people in their sixties and seventies agreed that assistive technology should initially aid them with simple, daily chores.

Other important needs included maintaining good health, stimulating memory and concentration, living alone "for as long as I wish without worrying my family circle" (Wu et al. 2010, 36), maintaining curiosity and growing interest in new activities, and communicating with relatives on a regular basis.


In popular culture, robot maids, nannies, and caregiver technologies are all prominent clichés.


Several early instances may be seen in the television series The Twilight Zone.

In "The Lateness of the Hour," a man develops a whole family of robot slaves (1960).

In "I Sing the Body Electric," Grandma is a robot babysitter (1962).


Rosie the robotic maid is a notable character from the animated television series The Jetsons (1962–1963).

Caregiver robots are a central narrative element in the animated movies WALL-E (2008) and Big Hero 6 (2014), as well as the science fiction thriller I Am Mother (2019).

They're also commonly seen in manga and anime.

Roujin Z (1991), Kurogane Communication (1997), and The Umbrella Academy (2019) are just a few examples.


Jake Schreier's 2012 science fiction film Robot and Frank dramatizes the limits and potential of caregiver robot technology.

In the film, a gruff former jewel thief with deteriorating mental health tries to turn his robotic caregiver into a criminal accomplice.

The film delves into a number of ethical concerns including not just the care of the elderly, but also the rights of robots in slavery.

"We are psychologically evolved not merely to nurture what we love, but to love what we nurture," says MIT social scientist Sherry Turkle (Turkle 2011, 11).


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Ishiguro, Hiroshi; Robot Ethics; Turkle, Sherry.


Further Reading


Borenstein, Jason, and Yvette Pearson. 2011. “Robot Caregivers: Ethical Issues across the Human Lifespan.” In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, 251–65. Cambridge, MA: MIT Press.

Sharkey, Noel, and Amanda Sharkey. 2010. “The Crying Shame of Robot Nannies: An Ethical Appraisal.” Interaction Studies 11, no. 2 (January): 161–90.

Sharkey, Noel, and Amanda Sharkey. 2012. “The Eldercare Factory.” Gerontology 58, no. 3: 282–88.

Sparrow, Robert, and Linda Sparrow. 2006. “In the Hands of Machines? The Future of Aged Care.” Minds and Machines 16, no. 2 (May): 141–61.

Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

United Nations. 2019. World Population Ageing Highlights. New York: Department of Economic and Social Affairs. Population Division.

Vallor, Shannon. 2011. “Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century.” Philosophy & Technology 24, no. 3 (September): 251–68.

Van Wynsberghe, Aimee. 2013. “Designing Robots for Care: Care Centered Value Sensitive Design.” Science and Engineering Ethics 19, no. 2 (June): 407–33.

Wu, Ya-Huei, Véronique Faucounau, Mélodie Boulay, Marina Maestrutti, and Anne Sophie Rigaud. 2010. “Robotic Agents for Supporting Community-Dwelling Elderly People with Memory Complaints: Perceived Needs and Preferences.” Health Informatics Journal 17, no. 1: 33–40.


Artificial Intelligence - What Is The Stop Killer Robots Campaign?

 



The Campaign to Stop Killer Robots is a nonprofit organization devoted to mobilizing opposition to the development and deployment of lethal autonomous weapons systems (LAWS).

The campaign's main concern is that armed robots making life-or-death decisions undercut legal and ethical restraints on violence in human conflicts.

Advocates for LAWS argue that these technologies are consistent with existing weapons and regulations, such as cruise missiles that are programmed and launched by humans to seek out and destroy a specific target.

Advocates also say that robots are completely reliant on people, that they are bound by their design and must perform the behaviors assigned to them, and that with appropriate monitoring they may save lives by substituting for humans in hazardous situations.


The Campaign to Stop Killer Robots dismisses responsible use as a viable option, citing fears that the development of LAWS could result in a new arms race.


The campaign underlines the danger of losing human control over the use of lethal force in situations where armed robots identify and eliminate a threat before human intervention is feasible.

Human Rights Watch, an international nongovernmental organization (NGO) that promotes fundamental human rights and investigates violations of those rights, organized and managed the campaign, which was officially launched on April 22, 2013, in London, England.


Many member groups make up the Campaign to Stop Killer Robots, including the International Committee for Robot Arms Control and Amnesty International.


The campaign is led by a steering committee and a global coordinator.

As of 2018, the steering committee consists of eleven non-governmental organizations.

Mary Wareham, who formerly headed international efforts to ban land mines and cluster bombs, is the campaign's global coordinator.

Efforts to ban armed robots, like those to ban land mines and cluster bombs, concentrate on their potential to inflict needless suffering and indiscriminate damage to humans.


The United Nations Convention on Certain Conventional Weapons (CCW), which originally went into force in 1983, governs the international restriction of such weapons.




Because the CCW has yet to agree on a ban on armed robots, and because the CCW lacks any mechanism for enforcing agreed-upon restrictions, the Campaign to Stop Killer Robots calls for the inclusion of LAWS in the CCW.

The Campaign to Stop Killer Robots also promotes the adoption of new international treaties to implement more preemptive restrictions.

The Campaign to Stop Killer Robots offers tools for educating and mobilizing the public, including multimedia databases, campaign reports, and a mailing list, in addition to lobbying governing authorities for treaty and convention prohibitions.

The Campaign also seeks the participation of technology companies, asking them to voluntarily refuse to participate in the development of LAWS.

Through its @BanKillerRobots account on Twitter, the Campaign tracks and publicizes the names of companies that have pledged not to engage in the development or marketing of intelligent weapons.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics; Lethal Autonomous Weapons Systems.


Further Reading


Baum, Seth. 2015. “Stopping Killer Robots and Other Future Threats.” Bulletin of the Atomic Scientists, February 22, 2015. https://thebulletin.org/2015/02/stopping-killer-robots-and-other-future-threats/.

Campaign to Stop Killer Robots. 2020. https://www.stopkillerrobots.org/.

Carpenter, Charli. 2016. “Rethinking the Political / -Science- / Fiction Nexus: Global Policy Making and the Campaign to Stop Killer Robots.” Perspectives on Politics 14, no. 1 (March): 53–69.

Docherty, Bonnie. 2012. Losing Humanity: The Case Against Killer Robots. New York: Human Rights Watch.

Garcia, Denise. 2015. “Killer Robots: Why the US Should Lead the Ban.” Global Policy 6, no. 1 (February): 57–63.


Artificial Intelligence - Who Is Ryan Calo?

 



Michael Ryan Calo (1977–) is a thought leader on the legal and policy ramifications of artificial intelligence and robotics.

Calo was instrumental in establishing a network of legal experts dedicated to robots and AI; he foresaw the harm AI may pose to consumer privacy and autonomy, and he produced an early and widely distributed primer on AI law and policy.

Calo has forged methodological and practice innovations for early stage tech policy work, demonstrating the importance and efficacy of legal scholars working side by side with technologists and designers to anticipate futures and meaningful policy responses, in addition to these and other contributions.

Calo was born and raised in the cities of Syracuse, New York, and Florence, Italy.

His first encounter with robots came as a child, when his parents gave him a remote-controlled base coupled to an inflatable robot.

Calo studied philosophy as an undergraduate at Dartmouth College, where he studied ethics under computer ethics pioneer James Moor, among others.

Calo graduated from the University of Michigan with a law degree in 2005.

After law school, a federal appellate clerkship, and two years in private practice, he became a fellow and subsequently research director at Stanford's Center for Internet and Society (CIS).

Calo was a pioneer in bringing robotics law and policy into the mainstream at Stanford, co-founding the Legal Aspects of Autonomous Driving effort with Sven Beiker at Stanford's Center for Automotive Research (CARS).

Along the way, Calo met Ian Kerr, a Canadian law professor and philosopher of technology, and Michael Froomkin, a cyberlaw pioneer.

The We Robot conference was created by Froomkin, Kerr, and Calo in 2012.

Calo credits Kerr with inspiring him to explore robotics and artificial intelligence as a field of study.

Calo now codirects the University of Washington's Tech Policy Lab, an interdisciplinary research unit that spans computer science, information science, and law.

He and his codirectors Batya Friedman and Tadayoshi Kohno determine the Lab's research and practice agenda in this capacity.

Calo also cofounded the University of Washington Center for an Informed Public, which is dedicated to researching and combating digital and analog disinformation.

Calo has published several articles on the legal and policy implications of robots and artificial intelligence.

His key contributions include updating the behavioral economic theory of market manipulation in light of artificial intelligence and digital media, advocating a social systems approach to studying AI's effects, anticipating the privacy harms of robotics and AI, and rigorously examining how the affordances of robotics and AI challenge the American legal system.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 


Accidents and Risk Assessment; Product Liability and AI.


Further Reading

Calo, Ryan. 2011. “Peeping Hals.” Artificial Intelligence 175, no. 5–6 (April): 940–41.

Calo, Ryan. 2014. “Digital Market Manipulation.” George Washington Law Review 82, no. 4 (August): 995–1051.

Calo, Ryan. 2015. “Robotics and the Lessons of Cyberlaw.” California Law Review 103, no. 3: 513–63.

Calo, Ryan. 2017. “Artificial Intelligence Policy: A Primer and Roadmap.” University of California, Davis Law Review 51: 399–435.

Crawford, Kate, and Ryan Calo. 2016. “There Is a Blind Spot in AI Research.” Nature 538 (October): 311–13.


Quantum Computing - What Is Quantum Chromodynamics (QCD)?







Quantum Chromodynamics (QCD) is a physics theory that explains interactions mediated by the strong force, one of the four basic forces of nature. 


It was developed as an analogue of Quantum Electrodynamics (QED), which describes interactions due to the electromagnetic force carried by photons.



Quantum chromodynamics (QCD) is the theory of the strong interaction, mediated by gluons, between quarks, the fundamental particles that make up composite hadrons such as the proton, neutron, and pion.

QCD is a non-abelian gauge theory with the symmetry group SU(3), a form of quantum field theory.




The color attribute is the QCD equivalent of electric charge. 




Gluons are the theory's force carriers, exactly as photons are in quantum electrodynamics for the electromagnetic force. 

The theory is an essential part of the Standard Model of particle physics.
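
For readers who want the equation itself, the standard textbook form of the QCD Lagrangian density is sketched below (added here for illustration; it is not part of the original article):

\[
\mathcal{L}_{\mathrm{QCD}} = \sum_{q} \bar{\psi}_{q}\left(i\gamma^{\mu}D_{\mu} - m_{q}\right)\psi_{q} - \frac{1}{4}\,G^{a}_{\mu\nu}G^{a\,\mu\nu},
\qquad
G^{a}_{\mu\nu} = \partial_{\mu}A^{a}_{\nu} - \partial_{\nu}A^{a}_{\mu} + g_{s}f^{abc}A^{b}_{\mu}A^{c}_{\nu}
\]

Here ψ_q are the quark fields, A^a_μ the eight gluon fields, D_μ the covariant derivative, and f^{abc} the SU(3) structure constants; the last term in G^a_{μν} gives gluons their self-interactions, which is what makes the theory non-abelian.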

Over the years, a considerable amount of experimental data supporting QCD has accumulated. 



How does the QCD scale work? 


In quantum chromodynamics (QCD), the quantity Λ (Lambda) is known as the QCD scale.

The value usually quoted is for three "active" quark flavors, that is, when the energy-momentum involved in the process allows only the up, down, and strange quarks to be produced, but not the heavier quarks.

This corresponds to energies below about 1.275 GeV, the mass of the charm quark.
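
As an illustration of how this scale enters the theory, the one-loop running of the strong coupling can be written in terms of Λ (a standard textbook expression, added here for clarity rather than drawn from the article above):

\[
\alpha_{s}(Q^{2}) = \frac{1}{b_{0}\,\ln\!\left(Q^{2}/\Lambda^{2}\right)},
\qquad
b_{0} = \frac{33 - 2n_{f}}{12\pi}
\]

With n_f = 3 active flavors, α_s grows as the momentum transfer Q approaches Λ, which is why Λ roughly marks the point where perturbative QCD ceases to be reliable.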



Who was the first to discover quantum chromodynamics? 



Harald Fritzsch, one of the founders of quantum chromodynamics, recalls some of the background to the theory's development forty years ago.



What is the Quantum Electrodynamics (QED) Theory? 


Quantum electrodynamics (QED) is the quantum field theory of charged particles' interactions with electromagnetic fields. 

It mathematically describes not just light's interactions with matter but also the interactions of charged particles with one another.

Albert Einstein's theory of special relativity is integrated into each of QED's equations, making it a relativistic theory. 

Because atoms and molecules are mainly electromagnetic in nature, all of atomic physics may be thought of as a test bed for the theory.

Some of the most precise tests of QED have come from experiments on the behavior of subatomic particles known as muons.

The magnetic moment of the muon has been found to agree with theory to nine significant digits.

With such precision, QED is one of the most successful physics theories ever devised.



Recent Developments In The Investigation Of QCD


A new collection of papers, edited by Diogo Boito (Instituto de Física de São Carlos, Universidade de São Paulo, Brazil) and Irinel Caprini (Horia Hulubei National Institute for Physics and Nuclear Engineering, Bucharest, Romania) and published in The European Physical Journal Special Topics, brings together recent developments in the investigation of QCD.


In a special introduction to the collection, the editors explain that the divergence of perturbation expansions in the mathematical description of a system can have important physical consequences, because the strong force described by QCD (carried by gluons between quarks, the fundamental building blocks of matter) has a much stronger coupling than the electromagnetic force.


The editors point out that, owing to developments in so-called higher-order loop computations, this issue has become more significant in recent high-precision QCD calculations.


"The fact that perturbative expansions in QCD are divergent greatly influences the renormalization scheme and scale dependency of the truncated expansions," write Boito and Caprini, "which provides a major source of uncertainty in the theoretical predictions of the standard model."

"One of the primary problems for precision QCD to meet the needs of future accelerator facilities is to understand and tame this behavior.


In the special issue, a cadre of specialists discuss these and other themes pertaining to QCD, such as the mathematical theory of resurgence and the presence of infrared (IR) and ultraviolet (UV) renormalons.

These issues are approached from a range of perspectives, from a more fundamental viewpoint to a phenomenological approach, and in the context of related quantum field theories.



~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.



Further Reading


Diogo Boito et al. 2021. “Renormalons and Hyperasymptotics in QCD.” The European Physical Journal Special Topics. DOI: 10.1140/epjs/s11734-021-00276-w


Quantum Computing Error Correction - Improving Encoding Redundancy Exponentially Drops Net Error Rate.




Researchers at QuTech, a joint venture between TU Delft and TNO, have achieved a quantum error correction milestone. 

They've combined high-fidelity operations on encoded quantum data with a scalable data stabilization approach. 

The results are published in the December edition of Nature Physics. 


Physical quantum bits, also known as qubits, are prone to errors. Quantum decoherence, crosstalk, and imperfect calibration are some of the causes of these errors.



Fortunately, quantum error correction theory suggests that it is possible to compute while simultaneously safeguarding quantum data from such errors.


"An error corrected quantum computer will be distinguished from today's noisy intermediate-scale quantum (NISQ) processors by two characteristics," explains QuTech's Prof Leonardo DiCarlo. 


  • "To begin, it will handle quantum data stored in logical rather than physical qubits (each logical qubit consisting of many physical qubits). 

  • Second, quantum parity checks will be interspersed with computing stages to discover and fix defects in physical qubits, protecting the encoded information as it is processed." 


According to theory, if the rate of physical faults is below a threshold and the circuits for logical operations and stabilization are fault tolerant, the logical error rate can be suppressed exponentially.

The essential principle is that as redundancy is increased and more qubits are used to encode the data, the net error rate decreases, as the toy sketch below illustrates.
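
As a rough illustration of this principle, the sketch below uses a simple repetition code (a toy model, not the distance-2 surface code used in the QuTech experiment) to show how the logical error rate falls as the code distance d grows, assuming independent physical errors with probability p:

    # Toy model: a distance-d repetition code fails only when a majority of
    # its d physical qubits flip, so below threshold the logical error rate
    # drops roughly exponentially as d (the redundancy) increases.
    from math import comb

    def logical_error_rate(p, d):
        """Probability that at least d // 2 + 1 of d independent physical
        qubits flip, i.e. that majority voting fails."""
        return sum(comb(d, k) * p**k * (1 - p)**(d - k)
                   for k in range(d // 2 + 1, d + 1))

    p = 0.01  # assumed physical error rate, chosen only for illustration
    for d in (1, 3, 5, 7, 9):
        print(f"distance {d}: logical error rate ~ {logical_error_rate(p, d):.1e}")

With p = 0.01, the estimated logical error rate falls from about 1e-2 at d = 1 to roughly 1e-8 at d = 9, which is the qualitative behavior that larger surface codes are meant to deliver.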


Researchers from TU Delft and TNO have recently achieved a crucial milestone toward this aim, producing a logical qubit made up of seven physical qubits (superconducting transmons). 


"We demonstrate that the encoded data may be used to perform all calculation operations. 

A important milestone in quantum error correction is the combination of high-fidelity logical operations with a scalable approach for repetitive stabilization " Prof. Barbara Terhal, also of QuTech, agrees. 


Jorge Marques, the first author and Ph.D. candidate, goes on to say, 


"Researchers have encoded and stabilized till now. We've now shown that we can also calculate. 

This is what a fault-tolerant computer must finally do: handle data while also protecting it from faults. 

We do three sorts of logical-qubit operations: initializing it in any state, changing it using gates, and measuring it. We demonstrate that all operations may be performed directly on encoded data. 

We find that fault-tolerant variants perform better than non-fault-tolerant variants for each category."



Fault-tolerant processes are essential for preventing physical-qubit faults from becoming logical-qubit errors. 

DiCarlo underlines the work's interdisciplinary nature: it is a collaboration between experimental physics, Barbara Terhal's theoretical physics group, and TNO and external colleagues working on electronics.


IARPA and Intel Corporation are the primary funders of the project.


"Our ultimate aim is to demonstrate that as we improve encoding redundancy, the net error rate drops exponentially," DiCarlo says. 

"Our present concentration is on 17 physical qubits, and we'll move on to 49 in the near future. 

Our quantum computer's architecture was built from the ground up to allow for this scalability."


~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.



Further Reading:


J. F. Marques et al, Logical-qubit operations in an error-detecting surface code, Nature Physics (2021). DOI: 10.1038/s41567-021-01423-9


Abstract:

"Future fault-tolerant quantum computers will require storing and processing quantum data in logical qubits. 
Here we realize a suite of logical operations on a distance-2 surface code qubit built from seven physical qubits and stabilized using repeated error-detection cycles. 
Logical operations include initialization into arbitrary states, measurement in the cardinal bases of the Bloch sphere and a universal set of single-qubit gates. 
For each type of operation, we observe higher performance for fault-tolerant variants over non-fault-tolerant variants, and quantify the difference. 
In particular, we demonstrate process tomography of logical gates, using the notion of a logical Pauli transfer matrix. 
This integration of high-fidelity logical operations with a scalable scheme for repeated stabilization is a milestone on the road to quantum error correction with higher-distance superconducting surface codes."



What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...