Popular science communication
Politiken, 01.02.2023: "Forskere i kunstig intelligens: Kære minister, forestil dig et fremtidsscenario, hvor du går ned med stress" ("AI researchers: Dear minister, imagine a future scenario where you break down with stress"), A Feragen, M Ganz, S Holm
Aasa Feragen on Go' Morgen P3, 12.12.2022: "Er Google Translate sexistisk?" ("Is Google Translate sexist?")
Forskerzonen, 29.11.2020: "Kan kunstig intelligens give fair forudsigelser?" ("Can artificial intelligence make fair predictions?"), Sune Holm, Melanie Ganz, Aasa Feragen
Tech Management, November 2020: "Algoritmer kan også være sexistiske" ("Algorithms can also be sexist")
Sune Holm on DR Deadline, 22.8.2020 (see minute 18:35)
Politiken, 4.6.2020: "Algoritmer kopierer lægers fordomme om race og køn" ("Algorithms copy doctors' prejudices about race and gender")
Sune Holm on DR1's 21 Søndag (see minute 23:38)
Aasa Feragen on the project in Prosabladet
Publications
Are demographically invariant models and representations in medical imaging fair?
E Petersen, E Ferrante, M Ganz, A Feragen
[Preprint]
On (assessing) the fairness of risk score models
E Petersen, M Ganz, S Holm, A Feragen
ACM FAccT 2023
[Paper] [Presentation Video] [Layperson summary] [Code]
Feature robustness and sex differences in medical imaging: a case study in MRI-based Alzheimer's disease detection
E Petersen, A Feragen, L da Costa Zemsch, A Henriksen, OE Wiese Christensen, M Ganz
MICCAI 2022
[Paper] [arxiv] [Code]
Assessing Bias in Medical AI
M Ganz, SH Holm, A Feragen
Interpretable Machine Learning in Healthcare, workshop at ICML 2021
[Paper] [bibtex]
Supervised theses
Fairness-Oriented Interpretability of Predictive Algorithms
C A Fuglsang-Damgaard, E Zinck
MSc Eng Thesis, 2022
[PDF]
Assessing bias in large and public datasets for medical image analysis considering Alzheimer's disease detection with AI models
M L da Costa Zemsch
External MSc Thesis with FAU (Germany), 2021
[PDF]
Demographic bias in public neuroimaging databases, and its effect on AI systems for computer-aided diagnosis
C Kergel Pedersen
MSc Thesis, 2021
[PDF]
Identifying and mitigating bias in machine learning models
D J Vigild, L Johansson
MSc Eng Thesis, 2021
[PDF]