
From “try yoga” to “start journaling,” mental health advice often adds tasks to our already busy lives. Rarely does it suggest stopping harmful behaviors. New research from the University of Bath and the University of Hong Kong reveals that this “additive advice bias” is prevalent in personal conversations, social media, and even AI chatbot recommendations. The result? Well-intentioned tips that may leave individuals feeling more overwhelmed than supported.
With mental health issues on the rise globally and services under significant strain, friends, family, online communities, and AI often serve as the first line of support. Understanding how advice is given could be crucial in enhancing the effectiveness of this support.
The Study and Its Findings
A series of eight studies involving hundreds of participants, published in Communications Psychology, combined experimental data, real-world advice posted on Reddit, and responses from AI models such as ChatGPT. Participants gave advice to strangers, friends, and even themselves on scenarios involving both harmful habits, such as gambling, and missed beneficial activities, such as exercise.
- Additive dominates: Across all contexts, suggestions to add activities were far more common than those to remove harmful activities.
- Feasibility and benefit: Adding activities was perceived as easier and more beneficial than eliminating harmful ones.
- Advice varies by relationship: Removing harmful activities was seen as easier for close friends than for oneself.
- AI mirrors human bias: ChatGPT predominantly offered additive advice, mirroring the patterns found in human responses and on social media.
Dr. Tom Barry, senior author from the Department of Psychology at the University of Bath, England, stated, “In theory, good advice should balance doing more with doing less. But we found a consistent tilt towards piling more onto people’s plates, and even AI has learned to do it. While well-meaning, it can unintentionally make mental health feel like an endless list of chores.”
Implications of Additive Advice
The research highlights a significant issue in mental health support: the tendency to add rather than subtract. This bias could lead to increased stress and a sense of being overwhelmed, contrary to the intended effect of such advice. Dr. Nadia Adelina, co-author from the Department of Psychology at the University of Hong Kong, emphasized, “As AI chatbots become a major source of mental health guidance, they risk amplifying this bias. Building in prompts to explore what people might remove from their lives could make advice more balanced and less overwhelming.”
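Dr. Adelina's suggestion could, in principle, be prototyped as a lightweight check on a chatbot's output. The sketch below is purely illustrative and not part of the study: it uses hypothetical keyword lists to label each suggestion as additive or subtractive, so a system could flag responses that contain no subtractive suggestions and prompt the user about what they might remove.

```python
# Illustrative sketch only: a keyword-based check for "additive advice bias"
# in a chatbot's suggestions. The function names and keyword lists are
# hypothetical, not drawn from the study itself.

ADDITIVE_CUES = ("start", "try", "add", "take up", "begin")
SUBTRACTIVE_CUES = ("stop", "quit", "cut", "reduce", "limit", "remove")

def classify(suggestion: str) -> str:
    """Label a single suggestion as additive, subtractive, or unknown."""
    text = suggestion.lower()
    if any(cue in text for cue in SUBTRACTIVE_CUES):
        return "subtractive"
    if any(cue in text for cue in ADDITIVE_CUES):
        return "additive"
    return "unknown"

def needs_rebalancing(suggestions: list[str]) -> bool:
    """True when the advice contains no subtractive suggestions,
    signalling the chatbot should ask what the person might remove."""
    return "subtractive" not in [classify(s) for s in suggestions]

advice = ["Try yoga in the mornings", "Start a gratitude journal"]
print(needs_rebalancing(advice))  # no subtractive items, so True
```

A real system would need something far more robust than keyword matching, but even a crude audit like this illustrates how the bias the researchers describe could be detected and corrected automatically.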
This development follows a growing reliance on technology for mental health support, especially as traditional services struggle to meet demand. The findings amount to a call to action for developers and mental health professionals to reconsider how advice is structured and delivered.
Looking Forward: Balancing Advice
The findings suggest a need for a shift in how mental health advice is given. Balancing the addition of beneficial activities with the removal of harmful ones could reduce the burden on individuals seeking help and make mental health support more effective.
As mental health continues to be a pressing global issue, understanding and addressing the nuances of advice-giving will be crucial. The research was supported by the Research Promotion Fund of the Department of Psychology, University of Bath, England, and offers a foundation for future studies and improvements in mental health guidance.
Meanwhile, mental health professionals, AI developers, and individuals alike are encouraged to reflect on the nature of the advice they give and receive. By doing so, the path to better mental health support can be paved with both addition and subtraction, ultimately leading to a more manageable and supportive environment for all.