Bias and fairness in medicine

How can society trust AI algorithms to make fair healthcare decisions? AI algorithms will, by their very nature, reproduce the biases and discrimination present in the data used to train them. Because discrimination is illegal in applications such as hiring, fair machine learning has become an active research topic, in which training a "fair" predictive model is typically cast as an optimization problem restricted to "fair" solutions defined by hard constraints.
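To make the constrained-optimization framing concrete, here is a minimal, hypothetical sketch (not this project's method): a logistic-regression classifier trained on synthetic data with a demographic-parity penalty, a soft version of the hard-constraint formulation described above. All data, variable names, and parameter values are illustrative assumptions.

```python
# Sketch: fairness-aware training as penalised optimization (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature x, sensitive group a in {0, 1}, label y.
n = 2000
a = rng.integers(0, 2, size=n)                   # group membership
x = rng.normal(loc=a * 1.5, scale=1.0, size=n)   # feature correlated with group
y = (x + rng.normal(scale=1.0, size=n) > 0.75).astype(float)
X = np.column_stack([x, np.ones(n)])             # add intercept column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=2000, lr=0.1):
    """Minimise logistic loss + lam * (demographic-parity gap)^2."""
    w = np.zeros(2)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / n
        # Demographic-parity gap: difference in mean predicted score by group.
        gap = p[a == 1].mean() - p[a == 0].mean()
        dp = p * (1 - p)  # derivative of the sigmoid
        grad_gap = (X[a == 1].T @ dp[a == 1]) / (a == 1).sum() \
                 - (X[a == 0].T @ dp[a == 0]) / (a == 0).sum()
        w -= lr * (grad_loss + 2 * lam * gap * grad_gap)
    return w

def dp_gap(w):
    p = sigmoid(X @ w)
    return abs(p[a == 1].mean() - p[a == 0].mean())

w_plain = train(lam=0.0)   # unconstrained model
w_fair = train(lam=10.0)   # penalised model: trades accuracy for a smaller gap
```

The penalty shrinks the between-group gap in predicted scores at some cost to accuracy, which is exactly the trade-off the project questions for the medical domain.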

Healthcare, however, is constrained less by law than by ethics: we would likely not, in the name of fairness, accept degrading our diagnostic abilities for one part of the population simply because we cannot diagnose another part equally well.

This project will, via large-scale registry studies of major depressive disorder, statistically examine the nature of bias in diagnosis and treatment. Through an interplay between ethics/philosophy and mathematical modelling, we will redefine fairness for the medical domain and develop predictive AI algorithms that are fair in this new sense.

More information

The project officially started on 1 April 2020; stay tuned as we move along!

For more information, please contact Aasa Feragen or Eike Petersen.