A recent study pairing AI bots with human participants has uncovered fascinating insights into how we empathize with and protect virtual beings when they are excluded. It offers a distinctive perspective on how humans engage with artificial intelligence (AI), and the results reinforce the need for careful consideration when designing AI bots, given our inherent tendency to perceive AI agents as social beings.
Imperial College London spearheaded the study, which centered on a virtual ball game designed to probe human behavior towards AI. The findings were detailed in the journal 'Human Behavior and Emerging Technologies.'
Jianan Zhou, the lead author and a researcher at Imperial's Dyson School of Design Engineering, commented on the study: "This is an unusual insight into how humans interact with AI, with stimulating implications for AI design and our psychology."
As AI becomes increasingly entwined with everyday services, human interaction with virtual AI agents is on the rise. The study suggests that developers of these agents should avoid making them excessively human-like. Participants did exhibit a tendency to treat the AI as a social being, especially when they felt it was being left out of the game, an inclination familiar from human-to-human interaction.
Humans are generally inclined toward empathy and toward correcting unfairness. Past research shows that people compensate for ostracized targets by interacting with them more often, frequently forming a sympathetic connection with them while disliking whoever does the ostracizing.
In the study, human participants and an AI bot played a virtual ball-tossing game called 'Cyberball.' The researchers recorded how the humans behaved when they saw the AI bot being deliberately left out of the play.
Interestingly, the study showed that the vast majority of the time, participants tried to correct the perceived unfairness towards the bot by throwing the ball its way more frequently. Older participants were more likely to perceive the exclusion as unfair.
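The compensating behavior described above can be illustrated with a toy simulation. The sketch below is purely hypothetical and is not the study's actual methodology: a player picks a toss recipient with probability weighted toward whoever has received the ball least, mimicking the observed tendency to "mend" the bot's exclusion. The function name, the bias parameter, and the player labels are all invented for illustration.

```python
import random

def choose_target(targets, toss_counts, fairness_bias=0.5):
    """Pick a toss recipient, weighted toward whoever has received
    the fewest tosses so far (a stand-in for compensating an
    excluded player). Purely illustrative, not the study's model."""
    max_count = max(toss_counts.values())
    # Players who have been thrown to less get proportionally more weight.
    weights = [1.0 + fairness_bias * (max_count - toss_counts[t])
               for t in targets]
    return random.choices(targets, weights=weights)[0]

# A bot that has been left out (0 tosses) versus a well-included player:
toss_counts = {"player2": 10, "bot": 0}
tosses_to_bot = sum(
    1 for _ in range(1000)
    if choose_target(["player2", "bot"], toss_counts) == "bot"
)
# With these counts the excluded bot receives most of the tosses.
```

Under these assumed weights the bot's selection probability is 6/7, so over 1,000 trials it receives roughly 850 tosses, a crude analogue of participants directing the ball toward the excluded bot.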
The researchers suggest that as virtual AI agents become more common in collaborative tasks, increased human engagement will build familiarity and trigger automatic social processing, leading people to treat virtual agents as genuine team members. The flip side is that such agents might come to replace human friends or be relied on for advice about physical or mental health, which could be concerning.
The study used the Cyberball game as a test case, which does not fully reflect real-life human-AI interaction conducted through written or spoken language. The research team therefore plans similar experiments using face-to-face conversations with agents in varied settings, from the lab to more casual contexts, to test how far the findings generalize.
Disclaimer: The above article was written with the assistance of AI. The original sources can be found on ScienceDaily.