Understanding the Functioning and Limitations of Large Language Models in Facilitating Health Behavior Change

AI tools, and more specifically chatbots built on large language models, are playing a growing role in healthcare, where they are used for patient education, assessment, and management. These AI-driven agents also appear promising for initiating and supporting behavior change. But do they actually recognize users' motivational states well enough to exert a meaningful influence? A recent research study looks more closely at this question.

Researchers in the ACTION Lab at the University of Illinois Urbana-Champaign found that these AI tools fail to recognize certain motivational states of users and therefore fall short of providing them with appropriate information. The finding comes from the work of Michelle Bak, a doctoral student, and Jessie Chin, a professor of information sciences, published in the Journal of the American Medical Informatics Association.

Bak and Chin reached these conclusions by posing a series of health-related scenarios to the large language models behind ChatGPT, Google Bard, and Llama 2. The scenarios covered a range of health needs, including physical activity, diet, mental health, detecting health problems, and addiction. The models were assessed on their ability to identify and respond appropriately to five stages of behavior change: resistance, awareness of the problem behavior, taking small steps toward action, commitment to maintaining the change, and sustaining the new behavior over time.

The results suggested that the models can effectively recognize users who have set goals and committed to change. They struggle, however, to understand the earlier motivational states of hesitancy or ambivalence toward a healthier change. In other words, users who are only thinking about a change, or just beginning to form an intention, may not receive the information best suited to moving them forward.

The study also found that when users were resistant to change, the large language models could not help them evaluate their problem behavior and its consequences. For someone unwilling to become physically active, for example, the models failed to create emotional engagement by highlighting the negative effects of a sedentary lifestyle. The chatbots thus fell short of building readiness and supplying the emotional motivation needed to begin changing behavior.

On a brighter note, once a user had committed to change, the models were useful in helping them work toward their goals. Even then, however, they failed to provide information about reward systems for staying motivated or strategies for reducing the environmental triggers that can lead to relapse.

In conclusion, the study found that large language models are only partially effective: while they struggle to recognize motivational states from natural language conversations, they show potential for supporting behavior change once an individual is strongly motivated and ready to act.

Chin and Bak also emphasized the need for future studies on fine-tuning these language models. They propose incorporating linguistic cues, information from search patterns, and social determinants of health to improve the models' understanding of users' motivational states, enabling them to offer more precise guidance to people working toward healthier behavior.

Disclaimer: The above article was written with the assistance of AI. The original sources can be found on ScienceDaily.