Given the limited research on fairness in automated decision-making systems in the clinical domain, and in mental health in particular, this study explores clinicians' perceptions of AI fairness through two distinct scenarios: violence risk assessment and depression phenotype recognition, both based on textual clinical notes.
Clinicians were interviewed in semi-structured sessions to understand their fairness perceptions and to identify appropriate quantitative fairness objectives for these scenarios. A set of bias mitigation strategies, each developed to improve at least one of the four selected fairness objectives, was then compared. The findings underscore the importance of carefully selecting fairness measures: prioritizing less relevant ones can harm rather than improve model behavior in real-world clinical use.
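To make these ideas concrete, the sketch below (not taken from the paper) illustrates, assuming binary labels and a binary protected attribute, two common group-fairness measures, the demographic parity and equal opportunity differences, along with one classic pre-processing mitigation, reweighing (Kamiran & Calders, 2012). All function names and data here are hypothetical; the four fairness objectives actually selected in the study may differ.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates (recall) between the two groups."""
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

def reweighing_weights(y_true, group):
    """Kamiran & Calders reweighing: weight each training instance by
    P(group) * P(label) / P(group, label), so the protected attribute and
    the label look statistically independent to the downstream model."""
    w = np.ones(len(y_true), dtype=float)
    for g in (0, 1):
        for lab in (0, 1):
            mask = (group == g) & (y_true == lab)
            if mask.any():
                w[mask] = (group == g).mean() * (y_true == lab).mean() / mask.mean()
    return w

# Hypothetical data: binary risk labels/predictions and a binary
# protected attribute for 200 patients.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)
group = rng.integers(0, 2, 200)

print("Demographic parity diff:", demographic_parity_difference(y_pred, group))
print("Equal opportunity diff:", equal_opportunity_difference(y_true, y_pred, group))
print("Reweighing weights (first 5):", reweighing_weights(y_true, group)[:5])
```

The resulting weights could then be passed, for example, as sample_weight to a scikit-learn classifier during training, nudging it away from associations between the protected attribute and the label.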
Learn more about the article here: https://doi.org/10.1609/aies.v7i1.31732
Reference
Sogancioglu, G., Mosteiro, P., Salah, A. A., Scheepers, F., & Kaya, H. (2024). Fairness in AI-Based Mental Health: Clinician Perspectives and Bias Mitigation. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1390–1400.
