
From “try yoga” to “start journaling,” mental health advice often suggests adding new tasks rather than eliminating harmful ones. A recent study by the University of Bath and the University of Hong Kong reveals a pervasive “additive advice bias” in personal interactions, on social media, and even in AI chatbot recommendations. The researchers suggest this tilt may leave people feeling more overwhelmed than relieved.
With global mental health challenges on the rise and traditional services under pressure, informal advice from friends, family, online communities, and AI has become a primary source of support. Understanding the nature of this advice could be crucial in enhancing its effectiveness.
Insights from the Research
The study, published in Communications Psychology, comprises eight studies involving hundreds of participants. It analyzed experimental data, real-world advice from Reddit, and responses from ChatGPT. Participants were asked to advise strangers, friends, and themselves on scenarios involving harmful habits, such as gambling, and missing out on beneficial activities, such as exercise. Key findings:
- Additive dominates: Across all contexts, suggestions to add activities were far more common than those to remove harmful ones.
- Feasibility and benefit: Adding tasks was perceived as easier and more beneficial than cutting out harmful behaviors.
- Advice varies by relationship: Removing harmful activities was seen as more feasible for close friends than for oneself.
- AI mirrors human bias: ChatGPT predominantly offered additive advice, reflecting patterns seen in social media.
Dr. Tom Barry, senior author from the Department of Psychology at the University of Bath, commented, “In theory, good advice should balance doing more with doing less. But we found a consistent tilt towards piling more onto people’s plates, and even AI has learned to do it. While well-meaning, it can unintentionally make mental health feel like an endless list of chores.”
Implications for AI and Human Advisors
Dr. Nadia Adelina, co-author from the University of Hong Kong, emphasized the role of AI in perpetuating this bias. “As AI chatbots become a major source of mental health guidance, they risk amplifying this bias. Building in prompts to explore what people might remove from their lives could make advice more balanced and less overwhelming,” she noted.
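To make the kind of intervention Dr. Adelina describes more concrete, the sketch below shows one way an application could steer a chatbot toward pairing additive suggestions with subtractive ones. It is a minimal illustration only: the prompt wording, function name, and example query are assumptions, not part of the study or of any deployed system.

```python
# Illustrative sketch: a system prompt that nudges a chat model to balance
# "add this" suggestions with "remove or cut back on this" suggestions.
# The wording and names here are assumptions, not taken from the study.

ADVICE_SYSTEM_PROMPT = (
    "You are a supportive assistant giving everyday wellbeing suggestions. "
    "For every request, offer a balanced mix of advice: at least one thing "
    "the person could add to their routine and at least one thing they "
    "could remove or cut back on. Keep the total list short so it does not "
    "feel like an endless list of chores."
)


def balanced_advice_messages(user_query: str) -> list:
    """Build a chat-style message list that pairs additive and subtractive advice."""
    return [
        {"role": "system", "content": ADVICE_SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]


if __name__ == "__main__":
    # Compose the messages that would be sent to a chat model.
    messages = balanced_advice_messages(
        "I feel stressed and low on energy lately. What should I do?"
    )
    for message in messages:
        print(f"{message['role']}: {message['content']}\n")
```

In this sketch the balancing happens entirely in the system prompt, so the same approach could sit in front of any chat-style model without changing the model itself.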
The research highlights the need for a shift in how advice is structured, suggesting that a more balanced approach could ease the burden on individuals seeking mental health support. This is particularly relevant as AI plays a growing role in providing such guidance.
Historical Context and Future Directions
The concept of additive advice is not new. Historically, self-help movements and wellness trends have often focused on adding positive habits rather than eliminating negative ones. This approach is reflected in popular culture and media, which frequently promote new routines and practices as solutions to personal challenges.
However, the current study underscores the importance of reevaluating this approach, particularly in the context of mental health. By acknowledging the bias toward additive advice, both human and AI advisors can work towards offering more nuanced and effective support.
Looking forward, the researchers suggest further exploration into how advice is given and received across different cultures and communities. This could provide valuable insights into tailoring mental health support to diverse needs and preferences.
This research was supported by the Research Promotion Fund of the Department of Psychology, University of Bath, England. As the conversation around mental health continues to evolve, studies like this play a crucial role in shaping more effective support strategies for individuals worldwide.