Developing personalized algorithms for sensing mental health symptoms in daily life

This study investigates algorithmic bias in AI tools that predict depression risk using smartphone-sensed behavioral data.

It finds that these tools underperform in larger, more diverse populations because the behavioral patterns used to predict depression are inconsistent across demographic and socioeconomic subgroups.

Specifically, the AI models often misclassify individuals in certain subgroups, such as older adults or members of particular racial and gender groups, as being at lower risk than they actually are. The authors emphasize the need for tailored, subgroup-aware approaches to improve the reliability and fairness of mental health prediction tools. This work highlights the importance of addressing demographic bias to ensure equitable AI deployment in mental healthcare.
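
As an illustration of what a subgroup-aware evaluation might look like, the sketch below computes the false-negative rate (at-risk individuals the model labels as low risk) separately for each demographic group. This is a minimal, hypothetical Python/pandas example, not the authors' actual pipeline; the column names ('y_true', 'y_pred', 'age_group') and the toy data are assumptions made purely for illustration.

```python
# Minimal sketch of a subgroup-level fairness check (illustrative only, not the
# authors' method). Assumes a pandas DataFrame with hypothetical columns:
# 'y_true' (observed depression status), 'y_pred' (model prediction), and a
# demographic column such as 'age_group'.
import pandas as pd


def false_negative_rate(group: pd.DataFrame) -> float:
    """Fraction of truly at-risk individuals the model misses in this subgroup."""
    at_risk = group[group["y_true"] == 1]
    if at_risk.empty:
        return float("nan")
    return float((at_risk["y_pred"] == 0).mean())


def subgroup_report(df: pd.DataFrame, by: str) -> pd.Series:
    """False-negative rate per subgroup; large gaps between groups signal bias."""
    return pd.Series({name: false_negative_rate(g) for name, g in df.groupby(by)})


# Toy example: the model misses at-risk older adults more often than younger adults.
df = pd.DataFrame({
    "y_true":    [1, 1, 0, 1, 1, 0, 1, 0],
    "y_pred":    [1, 0, 0, 0, 0, 0, 1, 1],
    "age_group": ["18-39", "18-39", "18-39", "60+", "60+", "60+", "60+", "60+"],
})
print(subgroup_report(df, by="age_group"))
# 18-39    0.500000
# 60+      0.666667
```

A subgroup-aware approach would go beyond reporting these gaps, for example by fitting or calibrating models within subgroups rather than assuming one set of behavioral predictors holds for everyone.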

Learn more about this study here: https://doi.org/10.1038/s44184-025-00147-5


Reference

Timmons, A.C., Tutul, A.A., Avramidis, K. et al. Developing personalized algorithms for sensing mental health symptoms in daily life. npj Mental Health Res 4, 34 (2025).