Tag: Aggregation Bias

  • Developing personalized algorithms for sensing mental health symptoms in daily life

    This study investigates algorithmic bias in AI tools that predict depression risk using smartphone-sensed behavioral data.

    It finds that these tools underperform in larger, more diverse populations because the behavioral patterns used to predict depression are inconsistent across demographic and socioeconomic subgroups.

    Specifically, the models often misclassify individuals from certain groups, such as older adults or members of particular racial or gender groups, as being at lower risk than they actually are. The authors emphasize the need for tailored, subgroup-aware approaches to improve the reliability and fairness of mental health prediction tools, and they highlight the importance of addressing demographic bias for equitable AI deployment in mental healthcare.
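
    As a rough illustration of the subgroup-aware evaluation the authors call for, below is a minimal Python sketch. The feature names, group labels, and data are all hypothetical and simulated; none of it comes from the study. It compares a depression-risk classifier's false-negative rate across demographic groups:

      # Minimal sketch of subgroup-aware evaluation (hypothetical data and
      # column names; not the authors' pipeline). It simulates a feature whose
      # link to depression differs by group, then measures how often truly
      # at-risk individuals in each group are classified as low risk.
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 2000
      df = pd.DataFrame({
          "screen_time": rng.normal(5, 2, n),   # hypothetical sensed feature
          "mobility": rng.normal(0, 1, n),      # hypothetical sensed feature
          "group": rng.choice(["A", "B"], n),   # hypothetical demographic label
      })
      # The feature-symptom relationship flips sign between groups, mimicking
      # the inconsistent behavioral patterns the study reports.
      slope = np.where(df["group"] == "A", 0.8, -0.4)
      logit = slope * df["mobility"] - 0.2 * df["screen_time"] + 1.0
      df["depressed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      features = ["screen_time", "mobility"]
      train, test = train_test_split(df, test_size=0.5, random_state=0)
      clf = LogisticRegression().fit(train[features], train["depressed"])
      test = test.assign(pred=clf.predict(test[features]))

      # False-negative rate per group: the error mode highlighted in the
      # study (truly at-risk individuals labeled low risk).
      for g, sub in test.groupby("group"):
          positives = sub[sub["depressed"] == 1]
          fnr = (positives["pred"] == 0).mean()
          print(f"group {g}: false-negative rate = {fnr:.2f}")

    A single pooled model fit this way tends to track the majority pattern, so the group whose feature-symptom relationship deviates ends up with the higher false-negative rate, which is the aggregation-bias failure mode the study describes.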

    Learn more about this study here: https://doi.org/10.1038/s44184-025-00147-5


    Reference

    Timmons, A.C., Tutul, A.A., Avramidis, K. et al. Developing personalized algorithms for sensing mental health symptoms in daily life. npj Mental Health Res 4, 34 (2025).

  • Deconstructing demographic bias in speech-based machine learning models for digital health

    This study investigates demographic bias in speech-based machine learning models used to detect depression in digital health applications.

    It finds that these models underperform across several demographic subgroups, including gender, race, age, and socioeconomic status, often misclassifying individuals with depression as low-risk. For example, older adults and Black or low-income individuals with depression were frequently ranked as lower risk than healthier younger or White individuals.

    These biases stem from inconsistent relationships between speech characteristics and depression across groups. The authors emphasize the need for subgroup-specific modeling to improve the fairness and reliability of mental health AI tools.
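
    One way to read "subgroup-specific modeling" is to fit a separate classifier per demographic group and route each individual to their own group's model. The sketch below uses hypothetical column names and is one plausible reading of the recommendation, not the authors' speech pipeline:

      # Sketch of subgroup-specific modeling (hypothetical column names; one
      # plausible reading of the authors' recommendation, not their method).
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      def fit_subgroup_models(df, feature_cols, label_col="depressed",
                              group_col="group"):
          """Fit one classifier per demographic subgroup."""
          return {
              g: LogisticRegression().fit(sub[feature_cols], sub[label_col])
              for g, sub in df.groupby(group_col)
          }

      def predict_by_subgroup(models, df, feature_cols, group_col="group"):
          """Score each row with the model trained on its own subgroup."""
          parts = [
              sub.assign(pred=models[g].predict(sub[feature_cols]))
              for g, sub in df.groupby(group_col)
          ]
          return pd.concat(parts).sort_index()

    The trade-off is sample size: per-group models only help when each subgroup has enough labeled data to fit reliably, which is exactly where small, homogeneous study samples become limiting.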

    Learn more about this study here: https://doi.org/10.3389/fdgth.2024.1351637


    Reference

    Yang M, El-Attar AA and Chaspari T (2024) Deconstructing demographic bias in speech-based machine learning models for digital health. Front. Digit. Health 6: 1351637. 

  • Digital health tools for the passive monitoring of depression: a systematic review of methods

    This systematic review examines studies linking passive data from smartphones and wearables to depression, identifying key methodological flaws and threats to reproducibility. It highlights representation, measurement, and evaluation biases stemming from small, homogeneous samples and inconsistent feature construction.

    Although gender and race are not explicitly discussed, the lack of diversity in study populations suggests potential demographic bias. The review calls for improved reporting standards and broader sample inclusion to enhance generalizability and clinical relevance. These improvements are essential for ensuring that digital mental health tools are equitable and reliable across diverse populations.
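
    To make the representation-bias concern concrete, here is a small sketch that flags demographic groups under- or over-represented in a study sample relative to a reference population. All counts and population shares are made up for illustration, not taken from the review:

      # Rough representation-bias check (hypothetical numbers, not from the
      # review): compare a study sample's demographic composition against a
      # reference population's shares.
      sample_counts = {"18-29": 120, "30-49": 60, "50+": 20}          # hypothetical
      population_share = {"18-29": 0.21, "30-49": 0.33, "50+": 0.46}  # hypothetical

      total = sum(sample_counts.values())
      for group, count in sample_counts.items():
          share = count / total
          ratio = share / population_share[group]
          print(f"{group}: sample {share:.0%} vs population "
                f"{population_share[group]:.0%} (ratio {ratio:.2f})")
      # Ratios far from 1.0 flag the kind of sampling skew the review
      # identifies as a threat to generalizability.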

    Learn more about this review here: https://doi.org/10.1038/s41746-021-00548-8


    Reference

    De Angel, V., Lewis, S., White, K., Oetzmann, C., Leightley, D., Oprea, E., Lavelle, G., Matcham, F., Pace, A., Mohr, D. C., Dobson, R., & Hotopf, M. (2022). Digital health tools for the passive monitoring of depression: a systematic review of methods. npj Digital Medicine, 5(1), 3.