Responsible AI Seminar

An interdisciplinary, inter-university seminar series

Responsible AI draws on widely different scientific disciplines, from the technical aspects of AI, through ethics, philosophy, and law, to the practical realities of different application domains. With this hybrid-format seminar series, we wish to turn the limitations imposed by Covid-19 into an opportunity to start an informal conversation about different aspects of Responsible AI across Denmark and abroad.

Please join us for exciting, important, and truly interdisciplinary conversations!

Initiators: Aasa Feragen, Melanie Ganz and Sune Hannibal Holm from the DFF-funded project Bias and Fairness in Medicine.

Current main organizer: Eike Petersen, also from the DFF-funded project Bias and Fairness in Medicine.

The seminar can always be attended via this fixed Zoom link.

If you want to receive information about upcoming seminars, please sign up for our newsletter (at the bottom of this page)!

Coming talks

June 24th, 2022, 2pm (CET): Wojciech Samek (TU Berlin, Fraunhofer HHI). From Attribution Maps to Concept-Level Explainable AI.

Abstract: The emerging field of Explainable AI (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying “where” important features occur (but not what they represent), global explanation techniques visualize “what” concepts a model has generally learned to encode. Both types of methods thus provide only partial insights and leave the burden of interpreting the model's reasoning to the user. Building on Layer-wise Relevance Propagation (LRP), one of the most popular local XAI techniques, this talk will connect the local and global lines of XAI research by introducing Concept Relevance Propagation (CRP), a next-generation XAI technique that explains individual predictions in terms of localized and human-understandable concepts. Unlike the related state of the art, CRP answers both the “where” and the “what” question, thereby providing deep insights into the model's reasoning process. In the talk we will demonstrate, on multiple datasets, model architectures, and application domains, that CRP-based analyses allow one to (1) gain insights into the representation and composition of concepts in the model and quantitatively investigate their role in prediction, (2) identify and counteract Clever Hans filters that focus on spurious correlations in the data, and (3) analyze whole concept subspaces and their contributions to fine-grained decision making. By lifting XAI to the concept level, CRP opens up a new way to analyze, debug, and interact with ML models, which is of particular interest in safety-critical applications and the sciences.

Wojciech Samek is a professor in the Department of Electrical Engineering and Computer Science at the Technical University of Berlin and is jointly heading the Department of Artificial Intelligence and the Explainable AI Group at Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany.

Attend this talk on Zoom or (physically) at the AI pioneer center.

📆 Add this event to your calendar.

Previous talks

Ben Glocker (Imperial College London, Kheiron Medical Technologies): Algorithmic encoding of protected characteristics. March 30, 2022. [Video]
Laure Wynants (Maastricht University): A journey through the disorderly world of diagnostic and prognostic models for covid-19: a living systematic review. March 2, 2022. [Video]
Timothy Miller (University of Melbourne): Explainable artificial intelligence: beware the inmates running the asylum. Or: How I learnt to stop worrying and love the social and behavioural sciences. February 9, 2022. [Video]
Enzo Ferrante (CONICET): Gender bias in X-ray classifiers for computer-assisted diagnosis. November 5, 2021. [Video]
Julia Amann (ETH): Reconciling medical AI & patient-centered care. October 1, 2021. [Video]
Anders Eklund (Linköping University): Sharing synthetic medical images — a way to circumvent GDPR? September 17, 2021. [Video]
Jens Christian Krarup Bjerring (AU): Black-box decision making in medicine: some thoughts and questions. June 18, 2021. [Video]
Veronika Cheplygina (ITU): How I failed machine learning in medical imaging — shortcomings and recommendations. May 28, 2021. [Slides] [Video]
Lars Kai Hansen (DTU): Values in AI. May 7, 2021. [Slides] [Video]

All previous talks can be found in our YouTube playlist.


Notice: Many mail providers flag our subscription confirmation mail as junk. (In particular, this includes the KU mail system!) Please check your junk mail folder, and if the confirmation mail still does not appear, try a different email address. Sorry for the inconvenience! There is, unfortunately, nothing we can do about this.

Subscribe to our newsletter and stay updated.

Subscribers receive information regarding events in our seminar series (at most two mails per event), and occasionally (at most once per month) information about other events organized by our team. We will never share your data with anyone beyond our team and our mail service provider, Sendinblue.

I agree to receive your newsletters and accept the data privacy statement.

We use Sendinblue as our marketing platform. By clicking below to submit this form, you acknowledge that the information you provide will be transferred to Sendinblue for processing in accordance with their terms of use.