Deconstructing demographic bias in speech-based machine learning models for digital health

This study investigates algorithmic bias in speech-based machine learning models used to predict depression risk in digital health applications.

It finds that the models underperform for several demographic subgroups defined by gender, race, age, and socioeconomic status, often misclassifying individuals with depression as low-risk. For example, older adults and Black or low-income individuals were frequently ranked as lower risk than healthier younger or White individuals.
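As a rough illustration of how such gaps can be quantified, the sketch below computes the false-negative rate (individuals with depression predicted as low-risk) separately for each demographic subgroup. The data frame layout, column names, and toy data are hypothetical placeholders, not taken from the study.

```python
# Minimal sketch: per-subgroup false-negative rate, i.e., the share of truly
# depressed individuals a model flags as low-risk within each group.
# All names and data here are illustrative assumptions.
import pandas as pd

def subgroup_false_negative_rates(df, group_col, label_col="depressed", pred_col="predicted_risk"):
    """Return, per subgroup, the fraction of depressed individuals predicted as low-risk."""
    rates = {}
    for group, sub in df.groupby(group_col):
        positives = sub[sub[label_col] == 1]       # individuals with depression
        if len(positives) == 0:
            continue
        missed = (positives[pred_col] == 0).sum()  # predicted low-risk despite depression
        rates[group] = missed / len(positives)
    return rates

# Toy example (purely illustrative, not the study's dataset):
toy = pd.DataFrame({
    "age_group":      ["older", "older", "younger", "younger", "older", "younger"],
    "depressed":      [1, 1, 1, 0, 1, 1],
    "predicted_risk": [0, 0, 1, 0, 1, 1],
})
print(subgroup_false_negative_rates(toy, "age_group"))
# Large gaps between subgroups indicate the kind of bias described above.
```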

These biases stem from relationships between the sensed behaviors and depression that are inconsistent across demographic groups. The authors emphasize the need for subgroup-specific modeling to improve fairness and reliability in mental health AI tools.
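A minimal sketch of what subgroup-specific modeling could look like, assuming scikit-learn and fully synthetic data: one classifier is fit per demographic group and compared against a single pooled model. The features, group labels, and simulated feature-label relationship are assumptions for illustration and do not reproduce the authors' method.

```python
# Sketch: subgroup-specific models vs. one pooled model, on synthetic data
# where the feature-label relationship differs by group (mirroring the
# "inconsistent relationships" described above).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                  # e.g., speech-derived features (synthetic)
groups = rng.choice(["A", "B"], size=300)      # demographic subgroup labels (synthetic)
# The sign of the first feature's effect flips between groups A and B.
y = ((X[:, 0] * np.where(groups == "A", 1.0, -1.0))
     + rng.normal(scale=0.5, size=300) > 0).astype(int)

pooled = LogisticRegression().fit(X, y)        # one model for everyone
per_group = {g: LogisticRegression().fit(X[groups == g], y[groups == g])
             for g in np.unique(groups)}       # one model per subgroup

for g in np.unique(groups):
    mask = groups == g
    print(g,
          "pooled acc:", round(pooled.score(X[mask], y[mask]), 2),
          "subgroup acc:", round(per_group[g].score(X[mask], y[mask]), 2))
```

When the underlying relationships genuinely differ by group, the per-subgroup models recover accuracy that the pooled model loses for at least one group.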

Learn more about this study here: https://doi.org/10.3389/fdgth.2024.1351637


Reference

Yang M, El-Attar AA and Chaspari T (2024) Deconstructing demographic bias in speech-based machine learning models for digital health. Front. Digit. Health 6: 1351637.