
AI - Smart Hotels And Smart Hotel Rooms.



In a competitive tourism sector, luxury hotels are using high technology and artificial intelligence to deliver the best possible experience for their guests and grow their market share.


The experience economy, as it is known in the hospitality management business, is shaping artificial intelligence in hotels.



An experience is created by three major players: a product, a service, and a consumer.


Products are the tangible artifacts offered in the marketplace.

Services are the tangible and intangible benefits of a single product, or a collection of goods, delivered by frontline staff through a process.

The customer is the end user of these products or services.

Customers are looking for items and services that will meet their requirements.

Hoteliers, in turn, must stage extraordinary events that transform goods and services into genuine experiences in order to connect emotionally with their customers.


In this way, experiences become a distinct, marketable offering with the goal of retaining customers.



Robotics, data analysis, voice activation, facial recognition, virtual and augmented reality, chatbots, and the Internet of Things (IoT) are all examples of artificial intelligence applications in the luxury hotel business.

Smart rooms give hotel guests automated technology that unobtrusively meets their typical needs.


Guests may utilize IoT to control the lights, curtains, speakers, and television in their rooms through a connected tablet.


  • A nightlight system can detect when a person is awake and moving about.
  • Some rooms offer wellness gadgets that deliver sensory experiences for guests with disabilities.
  • Smart rooms can also capture personal information from guests and store it in customer profiles to provide better service on subsequent visits (a minimal sketch of such a room controller and profile follows this list).
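As a rough illustration of how a tablet-connected room controller and a stored guest profile might fit together, consider the sketch below. The device names, state fields, and preference fields are assumptions made for this example, not any hotel's actual system.

```python
# Illustrative sketch only: a simplified in-room controller and guest profile.
# Device names and preference fields are assumptions, not any hotel's real API.
from dataclasses import dataclass


@dataclass
class GuestProfile:
    name: str
    preferred_temperature_c: float = 21.0
    preferred_lighting: int = 60          # percent brightness
    streaming_service: str = "none"


class SmartRoom:
    def __init__(self) -> None:
        # Current state of the connected devices in the room.
        self.state = {"lights": 0, "curtains": "closed",
                      "temperature_c": 22.0, "tv_source": "off"}

    def apply_profile(self, profile: GuestProfile) -> None:
        # Restore a returning guest's stored preferences on check-in.
        self.state["lights"] = profile.preferred_lighting
        self.state["temperature_c"] = profile.preferred_temperature_c
        self.state["tv_source"] = profile.streaming_service

    def command(self, device: str, value) -> None:
        # A tablet or phone app would call this for one-off adjustments.
        if device not in self.state:
            raise ValueError(f"Unknown device: {device}")
        self.state[device] = value


if __name__ == "__main__":
    room = SmartRoom()
    room.apply_profile(GuestProfile("returning guest", 20.5, 40, "streaming"))
    room.command("curtains", "open")
    print(room.state)
```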



In terms of smart room technology, the Hilton and Marriott worldwide luxury hotel companies are industry leaders.


One of Hilton's initial goals is to provide guests the ability to operate their room's features using their smartphone.


  • In this way, guests can customize their stay according to their preferences using familiar technology.
  • Lights, TVs, the temperature, and the entertainment (streaming) service are all adjustable in typical Hilton smart rooms (Ting 2017).
  • A second goal is to provide services via mobile phone apps.
  • During their stay, guests can set their own preferences.
  • They may, for example, choose digital artwork or images from the room's display.
  • Voice activation services are presently being developed for Hilton smart rooms (Burge 2017).


Marriott's smart rooms were created in collaboration with Legrand's Eliot technology and Samsung's Artik guest experience platform.


Marriott has deployed cloud-based hotel IoT technologies (Ting 2017).

This partnership has produced two prototype rooms for testing new smart systems.



The first is a fully networked room with smart showers, mirrors, art frames, and speakers.

  • Guests may use voice commands to operate the lighting, air conditioning, curtains, paintings, and television.
  • A touchscreen shower is available, allowing visitors to write on the smart glass of the shower.
  • Shower notes can be converted into documents and sent to a specified address (Business Traveler 2018).
  • Sensors that monitor the number of people in the suite regulate the quantity of oxygen in this Marriott room.
  • These sensors also help guests who wake in the middle of the night by displaying the time and lighting the path to the restroom (Ting 2017); a rough sketch of this kind of automation follows this list.
  • A loyalty account allows guests to select their particular preferences ahead of arrival.
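The occupancy- and time-driven behavior described above could be sketched roughly as follows. The thresholds, sensor inputs, and actions are illustrative assumptions rather than Marriott's or Legrand's actual logic, and fresh-air ventilation stands in here for the oxygen control mentioned above.

```python
# Illustrative sketch: occupancy- and time-driven room automation.
# Thresholds and actions are assumptions, not actual Marriott/Legrand logic.
from datetime import time


def adjust_ventilation(occupant_count: int) -> str:
    # More occupants -> more fresh-air exchange (stand-in for "oxygen control").
    if occupant_count == 0:
        return "ventilation: standby"
    return f"ventilation: level {min(occupant_count, 3)}"


def night_assist(motion_detected: bool, now: time) -> list[str]:
    # If a guest gets up during the night, show the time and light a path.
    actions = []
    if motion_detected and (now >= time(23, 0) or now <= time(5, 0)):
        actions.append("display bedside clock")
        actions.append("light path to restroom at 10% brightness")
    return actions


if __name__ == "__main__":
    print(adjust_ventilation(2))
    print(night_assist(True, time(2, 30)))
```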



A second, lower-tech room is connected through a tablet and uses only the Amazon Echo Dot voice-controlled smart speaker.


  • The television remote may be used to adjust the room's settings.
  • The benefit of this room is that it has very few implementation requirements (Ting 2017).
  • Hoteliers point to a number of benefits of smart rooms in addition to convenience and customization.
  • Smart rooms help to protect the environment by lowering energy consumption expenses.
  • They may also save money on wages by reducing the amount of time housekeeping and management spend with visitors.



Smart rooms have their own set of constraints.


Some smart technology can be difficult to master.


  • For starters, overnight guests have only a short window in which to learn how the room works.
  • Second, the infrastructure and technology required for these rooms continues to be prohibitively costly.
  • Even if there are long-term cost and energy benefits, the initial investment expenses are significant.


Finally, there's the issue of data security.


Hotels must continue to evolve to meet the needs of new generations of paying customers.


Technology is deeply interwoven in the everyday behaviors of millennials and post-millennials.

Their smart phones, video games, and tablets are transforming the meaning of experience in a virtual world.


Luxury tourism already includes high-priced goods and services that are supported by cutting-edge technology.

The quality of future hotel smart room experiences will be influenced by visitor income levels and personal technological capabilities, creating new competitive marketplaces.



Customers expect high-tech comfort and service from hotels.


Hotel operators gain from smart rooms as well, since the rooms serve as a source of big data.

Companies are rapidly collecting, storing, and using all accessible information on their customers in order to provide unique goods and services.

This technique aids businesses in creating twenty-first-century markets in which technology is as important as hotel guests and management.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Smart Cities and Homes.


References & Further Reading:


Burge, Julia. 2017. “Hilton Announces ‘Connected Room,’ The First Mobile-Centric Hotel Room, To Begin Rollout in 2018.” Hilton Press Center, December 7, 2017. https://newsroom.hilton.com/corporate/news/hilton-announces-connected-room-the-first-mobilecentric-hotel-room-to-begin-rollout-in-2018.

Business Traveler. 2018. “Smart Rooms.” Business Traveler (Asia-Pacific Edition), 11.

Imbardelli, A. Patrick. 2019. “Smart Guestrooms Can Transform Hotel Brands.” Hotel Management 234, no. 3 (March): 40.

Pine, B. Joseph, II, and James H. Gilmore. 1998. “Welcome to the Experience Economy.” Harvard Business Review 76, no. 4 (July–August): 97–105.

Swaminathan, Sundar. 2017. Oracle Hospitality Hotel 2025 Industry Report. Palm Beach Gardens, FL: International Luxury Hotel Association.

Ting, Deanna. 2017. “Hilton and Marriott Turn to the Internet of Things to Transform the Hotel Room Experience.” Skift, November 14, 2017. https://skift.com/2017/11/14/hilton-and-marriott-turn-to-the-internet-of-things-to-transform-the-hotel-room-experience/.


AI Glossary - Ambler.

 


Ambler was an autonomous robot created for planetary exploration.

It was capable of traversing even the most difficult terrain.

It had many on-board computers and was capable of planning thousands of steps ahead of time.

It was never used because of its enormous size and weight.



Related Terms:

Sojourner. 


References:

https://www.nasa.gov/mission_pages/tdm/telerobotics/index.html


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


Be sure to refer to the complete & active AI Terms Glossary here.

You may also want to read more about Artificial Intelligence here.



NASA 4-Wheel DuAxel Rover To Explore Moon, Mars, And Asteroids.

 


A field test in California's Mojave Desert demonstrated the adaptability of a flexible rover that can travel long distances and rappel down hard-to-reach regions of scientific interest. 



DuAxel is a pair of Axel robots intended to investigate crater walls, pits, scarps, vents, and other severe environments on the moon, Mars, and beyond. 



  • This technology demonstration, developed at NASA's Jet Propulsion Laboratory in Southern California, shows the robot's ability to split in half and dispatch one of its halves, a two-wheeled Axel robot, down an otherwise impassable slope. 
  • The rappelling Axel can then seek out areas to study on its own, safely navigate slopes and rocky obstacles, and return to dock with its other half before traveling to a new location. 
  • Although the rover does not yet have a mission, key technologies are being developed that might one day help humanity explore the rocky planets and moons of the solar system.




DuAxel is a development of the Axel system, a flexible series of single-axle rovers meant to traverse high-risk terrain on planetary surfaces, such as steep slopes, boulder fields, and caverns — locations that existing rovers, such as Mars Curiosity, would find difficult or impossible to approach. 





DuAxel's Advantages:



To cover greater distances, two connected Axel Rovers are used: 


  • DuAxel travels large distances by connecting two Axel rovers. 
  • When they approach a steep slope or cliff, they split in two so that one tethered Axel can rappel down the hazard to reach otherwise inaccessible terrain while the other serves as an anchor at the top of the slope. 



Retractable Tether: 


  • The Axel rover can lower itself down practically any type of terrain by reeling and unreeling its built-in tether. 



Greater Maneuverability: 


  • To turn, the two-wheeled Axel simply spins one of its wheels faster than the other (a simple differential-drive sketch follows this list). 
  • The core cylinder between the wheels houses the sensors, actuators, electronics, power, and payload.
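The turning behavior described above is standard differential steering. The sketch below illustrates the idea with textbook differential-drive kinematics; the wheel radius, track width, and wheel speeds are made-up values for the example, not DuAxel's actual parameters or control code.

```python
# Illustrative sketch of differential steering on a two-wheeled axle.
# Wheel radius, track width, and speeds are made-up values, not DuAxel's.
import math

WHEEL_RADIUS_M = 0.5   # assumed wheel radius
TRACK_WIDTH_M = 1.2    # assumed distance between the two wheels


def body_rates(omega_left: float, omega_right: float) -> tuple[float, float]:
    """Convert wheel angular speeds (rad/s) into forward speed and turn rate."""
    v_left = omega_left * WHEEL_RADIUS_M
    v_right = omega_right * WHEEL_RADIUS_M
    forward_speed = (v_left + v_right) / 2.0           # m/s
    turn_rate = (v_right - v_left) / TRACK_WIDTH_M     # rad/s; spin one wheel faster to turn
    return forward_speed, turn_rate


if __name__ == "__main__":
    v, w = body_rates(omega_left=1.0, omega_right=1.4)
    print(f"forward speed: {v:.2f} m/s, turn rate: {math.degrees(w):.1f} deg/s")
```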



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Space Exploration and Space Systems here.



References & Further Reading:


JPL Robotics: The Axel Rover System


Educational Resources:


Student Project: Design a Robotic Insect.

Educator Guide: Design a Robotic Insect.





Artificial Intelligence - Who Was Raj Reddy Or Dabbala Rajagopal "Raj" Reddy?

 


 


Dabbala Rajagopal "Raj" Reddy (1937–) is an Indian American who has made important contributions to artificial intelligence and has won the Turing Award.

He holds the Moza Bint Nasser Chair and University Professor of Computer Science and Robotics at Carnegie Mellon University's School of Computer Science.

He worked on the faculties of Stanford and Carnegie Mellon universities, two of the world's leading colleges for artificial intelligence research.

In the United States and in India, he has received honors for his contributions to artificial intelligence.

In 2001, the Indian government bestowed upon him the Padma Bhushan Award (the third highest civilian honor).

In 1984, he was also given the Legion of Honor, France's highest honor, which was created in 1802 by Napoleon Bonaparte himself.

In 1958, Reddy obtained his bachelor's degree from the University of Madras' Guindy Engineering College, and in 1960, he received his master's degree from the University of New South Wales in Australia.

He came to the United States to pursue his doctorate in computer science at Stanford University, which he completed in 1966.

As is common in rural Indian households, he was the first in his family to earn a university degree.

After working in industry as an Applied Science Representative at IBM Australia from 1960 to 1963, he entered academia in 1966, joining the faculty of Stanford University as an Assistant Professor of Computer Science, where he remained until 1969.

He moved to Carnegie Mellon as an Associate Professor of Computer Science in 1969 and has remained there ever since.

He rose up the ranks at Carnegie Mellon, eventually becoming a full professor in 1973 and a university professor in 1984.

In 1991, he was appointed as the head of the School of Computer Science, a post he held until 1999.

Many schools and institutions were founded as a result of Reddy's efforts.

In 1979, he launched the Robotics Institute and served as its first director, a position he held until 1999.

He was a driving force behind the establishment of the Language Technologies Institute, the Human Computer Interaction Institute, the Center for Automated Learning and Discovery (now the Machine Learning Department), and the Institute for Software Research at CMU during his stint as dean.

From 1999 to 2001, Reddy was a cochair of the President's Information Technology Advisory Committee (PITAC).

The President's Council of Advisors on Science and Technology (PCAST) took over PITAC in 2005.

Reddy was the president of the American Association for Artificial Intelligence (AAAI) from 1987 to 1989.

The AAAI has been renamed the Association for the Advancement of Artificial Intelligence, recognizing the worldwide character of the research community, which began with pioneers like Reddy.

The former logo, acronym (AAAI), and purpose have been retained.

Artificial intelligence, or the study of giving intelligence to computers, was the subject of Reddy's research.

He worked on voice control for robots, speech recognition without relying on the speaker, and unlimited vocabulary dictation, which allowed for continuous speech dictation.

Reddy and his collaborators have made significant contributions to computer analysis of natural scenes, task-oriented computer architectures, universal access to information (a project supported by UNESCO), and autonomous robotic systems.

Reddy collaborated on Hearsay II, Dragon, Harpy, and Sphinx I/II with his coworkers.

The blackboard model, one of the fundamental concepts that sprang from this study, has been extensively implemented in many fields of AI.

Reddy was also interested in employing technology for the sake of society, and he worked as the Chief Scientist at the Centre Mondial Informatique et Ressource Humaine in France.

He aided the Indian government in the establishment of the Rajiv Gandhi University of Knowledge Technologies, which focuses on low-income rural youth.

He is a member of the governing council of the International Institute of Information Technology (IIIT), Hyderabad.

IIIT is a non-profit public-private partnership (N-PPP) that focuses on technological research and applied research.

He was on the board of directors of the Emergency Management and Research Institute, a nonprofit public-private partnership that offers public emergency medical services.

EMRI has also aided in the emergency management of its neighboring nation, Sri Lanka.

In addition, he was a member of the Health Care Management Research Institute (HMRI).

HMRI provides non-emergency health-care consultation to rural populations, particularly in Andhra Pradesh, India.

In 1994, Reddy and Edward A. Feigenbaum shared the Turing Award, the top honor in artificial intelligence, and Reddy became the first person of Indian/Asian descent to receive the award.

He received the IBM Research Ralph Gomory Fellow Award in 1991, the Okawa Foundation's Okawa Prize in 2004, the Honda Foundation's Honda Prize in 2005, and the Vannevar Bush Award from the United States National Science Board in 2006.

Reddy has received fellowships from the Institute of Electrical and Electronics Engineers (IEEE), the Acoustical Society of America, and the American Association for Artificial Intelligence, among other prestigious organizations.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Autonomous and Semiautonomous Systems; Natural Language Processing and Speech Understanding.


References & Further Reading:


Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.” AI Magazine 9, no. 4 (Winter): 9–21.

Reddy, Raj. 1996. “To Dream the Possible Dream.” Communications of the ACM 39, no. 5 (May): 105–12.






Artificial Intelligence - What Is The Advanced Soldier Sensor Information Systems and Technology (ASSIST)?

 



Soldiers are often required to carry out missions that last many hours and are highly stressful.

Once a mission is completed, soldiers are asked to write a report detailing the most significant events that occurred.

This report is designed to collect information about the environment and local/foreign people in order to better organize future operations.

Soldiers often compile this report based primarily on their memories, still photographs, and GPS data from portable equipment.

Because of the severe stress they face, there are likely many cases in which crucial information is missing and therefore unavailable for future mission planning.

The ASSIST (Advanced Soldier Sensor Information Systems and Technology) program addressed this problem by equipping soldiers with sensors worn directly on their uniforms.

Sensors continually recorded what was going on around the troops during the operation.

When the troops returned from their mission, the sensor data was indexed to create an electronic record of the events that occurred while the ASSIST system was recording.

With this record, soldiers could provide more accurate reports instead of relying solely on their memories.

Numerous functions were made possible by AI-based algorithms, including:

• "Capabilities for Image/Video Data Analysis"

• Object Detection/Image Classification—the capacity to detect and identify items (such as automobiles, persons, and license plates) using video, images, and/or other data sources.

• "Audio Data Analysis Capabilities"

• "Arabic Text Translation"—the ability to detect, recognize, and translate written Arabic text (e.g., in imagery data)

• "Change Detection"—the ability to detect changes in related data sources over time (e.g., identify differences in imagery of the same location at different times)

• Sound Recognition/Speech Recognition—the capacity to distinguish speech (e.g., keyword spotting and foreign language recognition) and identify sound events (e.g., explosions, gunfire, and cars) in audio data.

• Shooter Localization/Shooter Classification—the ability to recognize gunshots in the environment (e.g., via audio data processing), as well as the kind of weapon used and the shooter's position.

• "Capabilities for Soldier Activity Data Analysis"

  • Soldier state identification/soldier localization—the capacity to track a soldier's course of movement through an area and characterize the soldier's activities (e.g., running, walking, and climbing stairs).
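To illustrate the change detection capability listed above in the simplest possible terms, the following sketch compares two small grayscale "images" of the same location represented as nested lists. It is a toy pixel-difference approach written for this article, not the ASSIST implementation.

```python
# Illustrative sketch of change detection between two grayscale "images"
# of the same location, represented here as nested lists of pixel values.
# This is a toy pixel-difference approach, not the ASSIST algorithm.

def detect_changes(before, after, threshold=30):
    """Return (row, col) coordinates where pixel intensity changed notably."""
    changed = []
    for r, (row_before, row_after) in enumerate(zip(before, after)):
        for c, (p_before, p_after) in enumerate(zip(row_before, row_after)):
            if abs(p_before - p_after) > threshold:
                changed.append((r, c))
    return changed


if __name__ == "__main__":
    scene_before = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
    scene_after = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]  # e.g., a new object
    print(detect_changes(scene_before, scene_after))  # -> [(1, 1)]
```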

To be effective, AI systems like these (also known as autonomous or intelligent systems) must be thoroughly and statistically evaluated to verify that they will work correctly and as intended in a military setting.

The National Institute of Standards and Technology (NIST) was entrusted with assessing these AI systems using three criteria:

1. The precision with which objects, events, and activities are identified and labeled

2. The system's capacity to learn and improve its categorization performance.

3. The system's usefulness in improving operational efficiency

To create its performance measurements, NIST devised a two-part test technique.

Metrics 1 and 2 were assessed using component- and system-level technical performance evaluations, respectively, while metric 3 was assessed using system-level utility assessments.
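For the first metric, labeling accuracy can be scored against ground truth with standard precision and recall calculations. The sketch below shows a generic version of such a calculation over toy label sets; it is a stand-in illustration, not NIST's actual scoring procedure.

```python
# Illustrative sketch: scoring labeling precision and recall against ground truth.
# The label sets are toy data; this is not NIST's actual scoring procedure.

def precision_recall(predicted: set, ground_truth: set) -> tuple[float, float]:
    true_positives = len(predicted & ground_truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall


if __name__ == "__main__":
    detected = {"vehicle", "person", "license_plate", "weapon"}
    actual = {"vehicle", "person", "license_plate"}
    p, r = precision_recall(detected, actual)
    print(f"precision={p:.2f}, recall={r:.2f}")   # precision=0.75, recall=1.00
```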

The utility assessments were created to estimate the effect these technologies would have on warfighter performance in a range of missions and job tasks, while the technical performance evaluations were created to ensure the ongoing improvement of ASSIST system technical capabilities.

NIST endeavored to create assessment techniques that would give an appropriate degree of difficulty for system and soldier performance while defining the precise processes for each sort of evaluation.

At the component level, the ASSIST systems were broken down into components that implemented specific capabilities.

For example, to evaluate a system's Arabic translation capability, it was broken down into an Arabic text identification component, an Arabic text extraction component (to localize individual text characters), and a text translation component.

Each component was evaluated on its own to see how it affected the overall system.
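A component-level decomposition like the Arabic translation example above can be pictured as a simple pipeline in which each stage can be tested in isolation and the end-to-end output can be scored as a black box. The function names, stubbed return values, and interfaces below are hypothetical stand-ins invented for this sketch, not the evaluated systems' actual components.

```python
# Illustrative sketch of a component pipeline for the Arabic translation example.
# Each stage can be evaluated in isolation; the stubbed logic is hypothetical.

def identify_text_regions(image) -> list:
    """Component 1: find regions of the image that appear to contain text."""
    return [{"bbox": (10, 10, 80, 30)}]          # stubbed detection result


def extract_characters(image, regions) -> list[str]:
    """Component 2: localize and extract individual characters from each region."""
    return ["مرحبا"]                              # stubbed extraction result


def translate(text_segments: list[str]) -> list[str]:
    """Component 3: translate extracted text into English."""
    return ["hello" for _ in text_segments]       # stubbed translation result


def full_system(image) -> list[str]:
    # System-level ("black box") evaluation scores only this end-to-end output,
    # while component-level evaluation scores each stage above separately.
    regions = identify_text_regions(image)
    segments = extract_characters(image, regions)
    return translate(segments)


if __name__ == "__main__":
    print(full_system(image=None))
```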

Each ASSIST system was assessed as a black box at the system level, with the overall performance of the system being evaluated independently of the individual component performance.

The total system received a single score, which indicated the system's ability to complete the overall job.

A test was also conducted at the system level to determine the system's usefulness in improving operational effectiveness for after-mission reporting.

Because all of the systems reviewed as part of this initiative were in the early phases of development, a formative assessment technique was suitable.

NIST was especially interested in determining the system's value for warfighters.

As a result, the evaluators were concerned with the systems' influence on soldiers' processes and products.

User-centered metrics were used to represent this viewpoint.

NIST set out to find measures that could help answer questions such as: What information do infantry troops seek and/or require after completing a field mission? How well are those information needs met, from both the troops' and the S2's (Staff 2—Intelligence Officer) perspectives? What did ASSIST contribute to mission reporting in terms of user-stated information requirements? This examination was carried out at the Aberdeen Test Center Military Operations in Urban Terrain (MOUT) site in Aberdeen, Maryland.

The location was selected for a variety of reasons:

• Ground truth—Aberdeen was able to deliver ground truth to within two centimeters of chosen locations.

This provided a strong standard against which the system output could be compared, enabling the assessment team to get a good depiction of what really transpired in the environment.

• Realism—The MOUT site has around twenty structures that were built to resemble an Iraqi town.

• Testing infrastructure—The facility was outfitted with a number of cameras (both inside and outside) to help us better comprehend the surroundings during testing.

• Soldier availability—For the assessment, the location was able to offer a small squad of active-duty troops.

The MOUT site was enhanced with items, people, and background noises whose location and behavior were programmed to provide a more operationally meaningful test environment.

The goal was to provide an environment in which the various ASSIST systems could test their capabilities by detecting, identifying, and/or capturing various forms of data.

Foreign language speech detection and classification, Arabic text detection and recognition, detection of shots fired and vehicle sounds, classification of soldier states and tracking their locations (both inside and outside of buildings), and identifying objects of interest such as vehicles, buildings, people, and so on were all included in NIST's utility assessments.

Because the tests required the troops to respond according to their training and experience, the soldiers' actions were not scripted as they progressed through each exercise.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.




See also: Battlefield AI and Robotics; Cybernetics and AI.

References & Further Reading:

Schlenoff, Craig, Brian Weiss, Micky Steves, Ann Virts, Michael Shneier, and Michael Linegang. 2006. “Overview of the First Advanced Technology Evaluations for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 125–32. Gaithersburg, MA: National Institute of Standards and Technology.

Steves, Michelle P. 2006. “Utility Assessments of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 165–71. Gaithersburg, MA: National Institute of Standards and Technology.

Washington, Randolph, Christopher Manteuffel, and Christopher White. 2006. “Using an Ontology to Support Evaluation of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 172–78. Gaithersburg, MA: National Institute of Standards and Technology.

Weiss, Brian A., Craig I. Schlenoff, Michael O. Shneier, and Ann Virts. 2006. “Technology Evaluations and Performance Metrics for Soldier-Worn Sensors for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 157–64. Gaithersburg, MA: National Institute of Standards and Technology.



