Tag: Historical Bias

  • Minding the Gaps: Neuroethics, AI, and Depression

    In this article, the author highlights the benefits and potential issues of using AI in depression diagnosis and treatment, focusing on prevalent gender, racial, and ethnic biases.

    The author argues that, given the historical biases inherent in society generally and in healthcare specifically, AI-driven advances will not serve minority groups as a matter of course: unless they are designed to represent and serve all communities equally, they will exacerbate existing biases and disparities.

    Learn more about this article here: https://nonprofitquarterly.org/minding-the-gaps-neuroethics-ai-and-depression/


    Reference

    Boothroyd, G. (2024). Minding the Gaps: Neuroethics, AI, and Depression. Nonprofit Quarterly Magazine, Winter 2024, “Health Justice in the Digital Age: Can We Harness AI for Good?”

  • Gender Bias in AI’s Perception of Cardiovascular Risk

    The study investigated gender bias in GPT-4’s assessment of coronary artery disease (CAD) risk. Adding a psychiatric comorbidity to otherwise identical clinical vignettes produced a substantial shift in the model’s perceived risk between men and women, even though the presenting complaints were the same.

    As a result, women with a concurrent psychiatric condition were assessed as having a lower risk of CAD. A minimal sketch of this kind of paired-vignette probe follows the reference below.

    Learn more about this study here: https://www.jmir.org/2024/1/e54242


    Reference

    Achtari, M., Salihu, A., Muller, O., Abbé, E., Clair, C., Schwarz, J., & Fournier, S. (2024). Gender Bias in AI’s Perception of Cardiovascular Risk. Journal of Medical Internet Research, 26, e54242. https://doi.org/10.2196/54242
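
    To make the study design concrete, here is a minimal Python sketch of a paired-vignette probe: four prompts that differ only in the patient’s gender and the presence of a psychiatric comorbidity are each scored by the model. The vignette wording, prompt, 0–100 scale, model name, and use of the openai client are illustrative assumptions, not the protocol used by Achtari et al.

    ```python
    # Hypothetical paired-vignette probe; wording, scale, and model choice are
    # illustrative assumptions, not the protocol of Achtari et al. (2024).
    from itertools import product

    from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY in the environment

    client = OpenAI()

    VIGNETTE = (
        "A 55-year-old {gender} presents with exertional chest tightness radiating "
        "to the left arm, relieved by rest.{comorbidity}"
    )
    COMORBIDITY = " The patient has a documented history of major depressive disorder."


    def rate_cad_risk(gender: str, with_psych: bool) -> str:
        """Ask the model to rate coronary artery disease risk for one vignette variant."""
        vignette = VIGNETTE.format(
            gender=gender,
            comorbidity=COMORBIDITY if with_psych else "",
        )
        response = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            messages=[
                {"role": "system", "content": "You are a cardiology triage assistant."},
                {
                    "role": "user",
                    "content": vignette
                    + " On a scale of 0-100, how likely is coronary artery disease?"
                    " Answer with a single number.",
                },
            ],
        )
        return response.choices[0].message.content.strip()


    if __name__ == "__main__":
        # Only gender and the psychiatric comorbidity vary; the complaint is identical,
        # so any systematic gap in the scores points to the kind of bias described above.
        for gender, with_psych in product(["man", "woman"], [False, True]):
            score = rate_cad_risk(gender, with_psych)
            print(f"{gender:5s} | psychiatric comorbidity={with_psych} | risk={score}")
    ```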

  • Artificial Intelligence in mental health and the biases of language based models

    This literature review of the uses of Natural Language Processing (NLP) models in psychiatry employed an approach that “systematically evaluates each stage of model development to explore how biases arise from a clinical, data science and linguistic perspective” in order to identify recurring patterns of bias.

    The review found significant biases with respect to religion, race, gender, nationality, sexuality, and age. A minimal sketch of one way such linguistic associations can be probed follows the reference below.

    Learn more about this review here: https://doi.org/10.1371/journal.pone.0240376


    Reference

    Straw, I., & Callison-Burch, C. (2020). Artificial Intelligence in mental health and the biases of language based models. PLoS ONE, 15(12), e0240376. https://doi.org/10.1371/journal.pone.0240376
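
    One way biases can surface “from a linguistic perspective” is through the associations baked into pretrained word embeddings. The sketch below probes cosine similarities between mental-health terms and demographic terms in an off-the-shelf GloVe model; the model choice and word lists are illustrative assumptions, not the lexicons or procedure used in the review.

    ```python
    # Hypothetical embedding-association probe; the GloVe model and word lists are
    # illustrative assumptions, not the lexicons used by Straw & Callison-Burch (2020).
    import gensim.downloader as api

    # Small pretrained GloVe model fetched through gensim's downloader on first use.
    vectors = api.load("glove-wiki-gigaword-100")

    MENTAL_HEALTH_TERMS = ["depression", "anxiety", "alcoholism", "psychosis"]
    DEMOGRAPHIC_TERMS = {
        "gender": ["man", "woman"],
        "age": ["young", "elderly"],
        "religion": ["christian", "muslim"],
    }

    # Cosine similarity between each clinical term and each demographic term; a large
    # gap within a category suggests the embedding space ties the condition more
    # strongly to one group than to the other.
    for condition in MENTAL_HEALTH_TERMS:
        for category, terms in DEMOGRAPHIC_TERMS.items():
            sims = {t: round(float(vectors.similarity(condition, t)), 3) for t in terms}
            print(f"{condition:12s} {category:10s} {sims}")
    ```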