Bias and Fairness in AI-Based Mental Health Models

The paper examines bias and fairness issues in AI-based mental health applications, including diagnostic tools, chatbots, and suicide risk prediction models. It reports how unrepresentative training datasets lead to misdiagnosis and unequal outcomes across socioeconomic, gender, and racial groups, disproportionately affecting women, ethnic minorities, and non-Western populations, and presents mitigation strategies such as diverse datasets, fairness metrics, and human-in-the-loop approaches.
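As a minimal illustration of the "fairness metrics" the paper mentions, the sketch below computes demographic parity difference, a common group-fairness measure: the gap in positive-prediction rates between two demographic groups. The data, group labels, and function name are all hypothetical examples, not taken from the paper.

```python
# Illustrative sketch of one fairness metric: demographic parity
# difference. All data here is synthetic; the paper does not prescribe
# this exact metric or implementation.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups A and B."""
    rate_a = sum(p for p, g in zip(y_pred, group) if g == "A") / group.count("A")
    rate_b = sum(p for p, g in zip(y_pred, group) if g == "B") / group.count("B")
    return abs(rate_a - rate_b)

# Hypothetical screening-model outputs (1 = flagged as at risk).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is flagged at a rate of 0.75, group B at 0.25, so the
# disparity is 0.5 -- a signal worth auditing before deployment.
print(demographic_parity_difference(preds, groups))
```

A value near 0 means both groups are flagged at similar rates; larger values indicate the kind of unequal outcomes the paper warns about.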

Learn more about this paper here: https://www.researchgate.net/publication/389214235_Bias_and_Fairness_in_AI-Based_Mental_Health_Models


Reference

Barnty, Barnabas & Joseph, Oloyede & Ok, Emmanuel. (2025). Bias and Fairness in AI-Based Mental Health Models.