EXPLAINABLE AI AND MACHINE LEARNING

DOCTORAL RESEARCH PROJECT

Dedja Klest

In particular, this PhD project will focus on interpretability towards the end user: trust and accountability are important issues in decision support systems, and especially in medical and industrial settings such systems should preferably be interpretable. We will focus on time-to-event problems (also known as survival analysis). Standard machine learning techniques cannot be applied directly to survival data, mainly because of censoring: for some data instances the event of interest is not measured, e.g. because of dropout or because the study ended before the event took place.

A first type of interpretability to consider in this context is providing actionable advice, suggesting to the end user which actions are expected to improve the outcome. A second type is to involve the end user or domain expert in providing event times for unlabeled or censored observations that are queried by the system (active learning). We will also consider the multi-target setting, where queries can take the form of (instance, event) pairs instead of just instances, allowing a finer-grained query specification, as sketched below.
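To make the notions of censoring and of (instance, event) queries concrete, here is a minimal sketch in Python with NumPy. The synthetic data, the variable names, and the variance-based selection criterion are illustrative assumptions for exposition only, not the project's actual method:

```python
import numpy as np

# Toy multi-event survival dataset: each row is an instance,
# each column an event type.
#   times[i, k]    : observed time for instance i and event type k
#   observed[i, k] : True if the event was observed, False if censored
rng = np.random.default_rng(0)
n_instances, n_events = 8, 2
times = rng.exponential(scale=5.0, size=(n_instances, n_events))
observed = rng.random((n_instances, n_events)) < 0.6  # roughly 40% censored

def query_instance_event(observed, model_variance):
    """Pick the censored (instance, event) pair whose predicted event time
    the model is least certain about; this is the pair the system would
    ask the domain expert to label (active learning)."""
    # Mask observed pairs so only censored ones can be queried.
    candidates = np.where(observed, -np.inf, model_variance)
    i, k = np.unravel_index(np.argmax(candidates), candidates.shape)
    return i, k

# Stand-in for per-pair predictive variance from some survival model.
model_variance = rng.random((n_instances, n_events))
i, k = query_instance_event(observed, model_variance)
print(f"Query expert for event {k} of instance {i} "
      f"(censored at t={times[i, k]:.2f})")
```

Each censored pair is a candidate query; selecting the pair with the highest predictive variance is one simple stand-in for the finer-grained (instance, event) query strategies described above.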

Person in charge of the project

Duration

  • 2019-2023

Affiliation

  • Faculty of Medicine
  • Doctoral Programme in Biomedical Sciences (Leuven)