Human Interaction with Robots: A Study on Risk Behavior

In the ever-evolving world of artificial intelligence and robotics, understanding how humans interact with these complex systems is of paramount importance. A team of mechanical engineers and computer scientists from the University of California San Diego recently conducted a study to explore how humans prefer to interact with robots, particularly in crowded environments.

The main objective of this research was to understand what algorithms should be used to program robots so they can effectively interact with humans. To this end, the researchers posed two main questions: How would people prefer to interact with robots when navigating crowded situations, and what algorithms should be employed to enhance such exchanges?

The study was first presented at the 2024 IEEE International Conference on Robotics and Automation (ICRA) in Japan. Aamodh Suresh, the study's lead author, stated that, to the team's knowledge, this was the first study of its kind to explore the human perception of risk and its use in intelligent decision-making involving robots in everyday scenarios.

Angelique Taylor, the study's second author, further explained that the team set out to create a framework for understanding whether humans tend to be risk-averse when interacting with robots. The team approached this using models from behavioral economics, though determining which models to use proved challenging. Because the research took place during the COVID-19 pandemic, the team had to adapt their methodology and conduct the investigation online.

In the study, subjects played a role-playing game where they had to choose from three different grocery store paths, each offering different risk levels. The primary goal was to reach the milk aisle as quickly as possible. To achieve this, they had to make decisions based on risk levels associated with each path.
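The trade-off the subjects faced can be sketched as a simple decision problem. The numbers and path names below are purely illustrative assumptions, not values from the study; they only show how a risk-neutral agent would compare paths by expected time to the goal.

```python
# Hypothetical grocery-store paths: (name, base_time, delay_probability, delay_penalty).
# These values are invented for illustration; the study's actual risk levels differ.
paths = [
    ("safe",   60, 0.0,  0),   # long but never blocked
    ("medium", 45, 0.3, 40),   # shorter, sometimes crowded
    ("risky",  30, 0.6, 60),   # shortest, often crowded
]

def expected_time(base_time, delay_probability, delay_penalty):
    """Expected traversal time when a delay occurs with the given probability."""
    return base_time + delay_probability * delay_penalty

# A risk-neutral chooser simply minimizes expected time.
best = min(paths, key=lambda p: expected_time(p[1], p[2], p[3]))
for name, base, prob, penalty in paths:
    print(f"{name}: expected time {expected_time(base, prob, penalty):.0f}")
print("risk-neutral choice:", best[0])
```

Under these made-up numbers, the "medium" path has the lowest expected time, yet as the study found, human choices often deviate from such risk-neutral calculations.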

The research team observed that people consistently underestimate the amount of risk they are willing to take, especially when a reward is at stake. As a result, when programming robots for interaction with humans, researchers are now drawing on Kahneman and Tversky's prospect theory, which weighs losses and gains relative to a reference point.
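The reference-point idea can be sketched with prospect theory's value function, in which gains are discounted and losses are weighted more heavily (loss aversion). This is a minimal illustration of the general theory, not the study's model; the parameter values are the canonical estimates from Tversky and Kahneman's 1992 paper, not values fitted in this research.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of outcome x relative to a reference point at 0.

    Gains (x >= 0) are concave in x; losses are convex and scaled by the
    loss-aversion coefficient lam, so losses loom larger than equal gains.
    Default parameters follow Tversky & Kahneman (1992).
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# A gain of 10 feels like roughly +7.6, but a loss of 10 feels like about -17,
# capturing why people weigh potential losses more heavily than equivalent gains.
print(prospect_value(10), prospect_value(-10))
```

An agent choosing among risky paths with this value function will shy away from options whose downside, measured from its reference point, outweighs an objectively equal upside.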

The team also found that respondents preferred robots to communicate their intentions using speech, gestures, and touch screens. As a next step, the research team plans to conduct an in-person study with a more diverse pool of subjects.

Disclaimer: The above article was written with the assistance of an AI. The original source can be found on ScienceDaily.