Tag: Learning Bias

  • Assessing Algorithmic Bias in Language-Based Depression Detection: A Comparison of DNN and LLM Approaches

    A study found that large language models (LLMs) outperform traditional deep neural network (DNN) embeddings in automated depression detection and show reduced gender bias, though racial disparities remain. Among the DNN fairness-mitigation techniques evaluated, a worst-group loss provided the best balance between overall accuracy and demographic fairness, while a fairness-regularized loss underperformed (the worst-group idea is sketched below).

    The identified biases affect the fairness and diagnostic reliability of AI systems for mental health assessment, particularly by disadvantaging underrepresented racial and gender groups; in this study, Hispanic participants were the most affected. Such disparities risk perpetuating inequities in automated mental health screening and could undermine trust and validity in clinical or public health applications.
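
    For context, here is a minimal sketch of a worst-group (group-DRO-style) loss in a PyTorch-like setup: rather than averaging the loss over all samples, the model is optimized on the mean loss of the worst-performing demographic group in each batch. This is an illustration under assumptions, not the authors' implementation; the function name, group encoding, and training loop are placeholders.

    ```python
    # Sketch of a worst-group loss (group-DRO style): optimize the mean loss
    # of the worst-performing demographic group in each mini-batch.
    import torch
    import torch.nn.functional as F

    def worst_group_loss(logits, labels, group_ids, num_groups):
        """Return the highest per-group mean cross-entropy in the batch."""
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        group_losses = []
        for g in range(num_groups):
            mask = group_ids == g
            if mask.any():
                group_losses.append(per_sample[mask].mean())
        return torch.stack(group_losses).max()

    # Hypothetical usage in a training step (model, optimizer, batch assumed to exist):
    # loss = worst_group_loss(model(x), y, group, num_groups=4)
    # loss.backward(); optimizer.step()
    ```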

    Learn more about the study here: https://doi.org/10.48550/arXiv.2509.25795


    Reference

    Junias, O., Kini, P., & Chaspari, T. (2025). Assessing Algorithmic Bias in Language-Based Depression Detection: A Comparison of DNN and LLM Approaches. 2025 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), 1-7.

  • Minding the Gaps: Neuroethics, AI, and Depression

    In this article, the author highlights the benefits and potential issues of using AI in depression diagnosis and treatment, focusing on prevalent gender, racial, and ethnic biases.

    The author notes that, given the historical biases inherent in society generally and in healthcare specifically, AI-driven advancements will not serve minority groups as a matter of course: unless they are tailored to represent and serve all communities equally, they will exacerbate existing biases and disparities.

    Learn more about this article here: https://nonprofitquarterly.org/minding-the-gaps-neuroethics-ai-and-depression/


    Reference

    Boothroyd, G. (2024). Minding the Gaps: Neuroethics, AI, and Depression. Nonprofit Quarterly Magazine, Winter 2024 issue, “Health Justice in the Digital Age: Can We Harness AI for Good?”.

  • A Data-Centric Approach to Detecting and Mitigating Demographic Bias in Pediatric Mental Health Text: A Case Study in Anxiety Detection

    This study examines classification parity across sex and finds that mental health disorders in female adolescents are systematically under-diagnosed: the model’s accuracy was ~4% lower and its false-negative rate ~9% higher for female patients than for male patients. The source of the bias lies in the textual data itself: notes for male patients were on average about 500 words longer and showed distinct word usage. To mitigate this, the authors introduce a de-biasing method based on neutralizing biased terms (gendered words and pronouns) and reducing sentences to essential clinical information; the neutralization step is sketched below. After this correction, diagnostic bias is reduced by up to 27%.

    This emphasizes how linguistically transmitted bias, arising from word choice and gendered language, consistently leads to the under-diagnosis of mental health disorders among female adolescents and critically undermines the impartiality of medical diagnosis and treatment.
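
    As a rough illustration of the term-neutralization step (a simplified assumption, not the authors' full pipeline, which also condenses notes to essential clinical content), the sketch below replaces gendered pronouns and sex-specific words in a note with neutral substitutes; the word list and replacements are illustrative.

    ```python
    # Hypothetical gendered-term neutralization for clinical notes.
    # The substitution map and placeholder token are illustrative assumptions.
    import re

    GENDERED_TERMS = {
        "she": "they", "he": "they",
        "her": "their", "his": "their", "him": "them",
        "girl": "patient", "boy": "patient",
        "female": "[SEX]", "male": "[SEX]",
    }

    _PATTERN = re.compile(r"\b(" + "|".join(GENDERED_TERMS) + r")\b", re.IGNORECASE)

    def neutralize(note: str) -> str:
        return _PATTERN.sub(lambda m: GENDERED_TERMS[m.group(0).lower()], note)

    print(neutralize("She is a 14-year-old girl; her mother reports worry."))
    # -> "they is a 14-year-old patient; their mother reports worry."
    ```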

    Learn more about this study here: https://doi.org/10.48550/arXiv.2501.00129


    Reference

    Ive, J., Bondaronek, P., Yadav, V., Santel, D., Glauser, T., Cheng, T., Strawn, J.R., Agasthya, G., Tschida, J., Choo, S., Chandrashekar, M., Kapadia, A.J., & Pestian, J.P. (2024). A Data-Centric Approach to Detecting and Mitigating Demographic Bias in Pediatric Mental Health Text: A Case Study in Anxiety Detection. arXiv preprint arXiv:2501.00129.

  • Multimodal Fusion of EEG and Audio Spectrogram for Major Depressive Disorder Recognition Using Modified DenseNet121

    Depression and anxiety are common, often co-occurring mental health disorders that complicate diagnosis due to overlapping symptoms and reliance on subjective assessments.

    Standard diagnostic tools are widely used but can introduce bias, as they depend on self-reported symptoms and clinician interpretation, which vary across individuals. These methods also fail to account for neurobiological factors such as neurotransmitter imbalances and altered brain connectivity.

    Similarly, clinical AI/ML models used in healthcare often lack demographic diversity in their training data, with most studies failing to report race and gender, leading to biased outputs and reduced fairness. EEG offers a promising, objective approach to monitoring brain activity, potentially improving diagnostic accuracy and helping address biases in mental health assessment, as this study found.
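
    The fusion named in the title can be pictured as two image-style branches whose features are combined before classification. The sketch below is a hedged illustration of such a late-fusion design, assuming one DenseNet121 branch per modality; the layer sizes and fusion head are placeholders, not the paper's exact modified architecture.

    ```python
    # Hypothetical late-fusion sketch: one DenseNet121 branch for EEG
    # spectrogram images and one for audio spectrograms, with concatenated
    # features feeding a small classification head. Sizes are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import densenet121

    class FusionNet(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.eeg_branch = densenet121(weights=None)
            self.audio_branch = densenet121(weights=None)
            feat = self.eeg_branch.classifier.in_features  # 1024 features per branch
            self.eeg_branch.classifier = nn.Identity()
            self.audio_branch.classifier = nn.Identity()
            self.head = nn.Sequential(nn.Linear(2 * feat, 256), nn.ReLU(),
                                      nn.Linear(256, num_classes))

        def forward(self, eeg_img, audio_img):
            fused = torch.cat([self.eeg_branch(eeg_img),
                               self.audio_branch(audio_img)], dim=1)
            return self.head(fused)
    ```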

    Learn more about it here: https://doi.org/10.3390/brainsci14101018


    Reference

    Yousufi, M., Damaševičius, R., & Maskeliūnas, R. (2024). Multimodal Fusion of EEG and Audio Spectrogram for Major Depressive Disorder Recognition Using Modified DenseNet121. Brain Sciences, 14(10), 1018.

  • Fairness and bias correction in machine learning for depression prediction across four study populations

    A study found that standard machine learning approaches often exhibit biased behaviours in predicting depression across different populations. It also demonstrated that both standard and novel post-hoc bias-mitigation techniques can effectively reduce unfair bias, though no single model achieves equality of outcomes (a representative post-hoc correction is sketched below).

    The biases that were identified risk reinforcing structural inequalities in mental healthcare, particularly affecting underserved populations. This underscores the importance of analyzing fairness during model selection and transparently reporting the impact of debiasing interventions to ensure equitable healthcare applications.
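
    One representative family of post-hoc corrections (whether it matches the paper's specific techniques is an assumption) adjusts the decision threshold separately for each demographic group so that a chosen metric, such as the true-positive rate, is roughly equalized on held-out data. A minimal sketch:

    ```python
    # Hypothetical post-hoc mitigation: pick a per-group decision threshold
    # on held-out predicted probabilities so that true-positive rates are
    # roughly equalized across demographic groups.
    import numpy as np

    def group_thresholds(probs, labels, groups, target_tpr=0.8):
        """Per-group thresholds achieving roughly target_tpr recall on positives."""
        thresholds = {}
        for g in np.unique(groups):
            pos_scores = probs[(groups == g) & (labels == 1)]
            # The (1 - target_tpr) quantile of positive scores yields ~target_tpr recall.
            thresholds[g] = np.quantile(pos_scores, 1 - target_tpr)
        return thresholds

    def predict(probs, groups, thresholds):
        return np.array([p >= thresholds[g] for p, g in zip(probs, groups)], dtype=int)
    ```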

    Learn more about this study here: https://doi.org/10.1038/s41598-024-58427-7


    Reference

    Dang, V.N., Cascarano, A., Mulder, R.H., et al. (2024). Fairness and bias correction in machine learning for depression prediction across four study populations. Scientific Reports, 14, 7848.

  • Artificial Intelligence in mental health and the biases of language based models

    In this literature review of the uses of Natural Language Processing (NLP) models in psychiatry, the authors employ an approach that “systematically evaluates each stage of model development to explore how biases arise from a clinical, data science and linguistic perspective” in order to identify recurring patterns of bias.

    The review found significant biases with respect to religion, race, gender, nationality, sexuality, and age.
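
    A simplified illustration of the kind of embedding-level probe such audits often rely on (an assumed detail, not the review's exact protocol) compares how closely demographic terms sit to psychiatric terms in a pretrained word-embedding space; the model name and word lists below are placeholders.

    ```python
    # Hypothetical embedding-association probe: compare the average cosine
    # similarity between demographic terms and psychiatric terms.
    import gensim.downloader as api
    import numpy as np

    vectors = api.load("glove-wiki-gigaword-100")  # pretrained vectors (assumed choice)

    def mean_similarity(group_terms, target_terms):
        sims = [vectors.similarity(g, t)
                for g in group_terms for t in target_terms
                if g in vectors and t in vectors]
        return float(np.mean(sims))

    psychiatric = ["depression", "anxiety", "psychosis"]
    print(mean_similarity(["woman", "her", "female"], psychiatric))
    print(mean_similarity(["man", "his", "male"], psychiatric))
    ```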

    Learn more about this review here: https://doi.org/10.1371/journal.pone.0240376


    Reference

    Straw, I., & Callison-Burch, C. (2020). Artificial Intelligence in mental health and the biases of language based models. PLoS ONE, 15(12), e0240376.