Artificial Intelligence for Medicine and Health

We develop novel methods for medical computer vision, including image analysis tools, multi-modal models, and interactive models, with an emphasis on sustainability through data-efficient training.

We work on projects with industry partners (e.g., previously with Zeiss), within graduate school programs (e.g., HIDSS4HEALTH), and in other research contexts (e.g., KiKIT).

We are part of KITHealthTech, where we ask the question: How can we digitalize technology and processes in healthcare?

Computer-aided Diagnostic Systems Grounded in Medical Knowledge

Anatomical knowledge for use in medical models

We have published multiple medical datasets enriched with high-quality, automatically obtained human anatomy labels for X-ray images (BMVC, dataset) and CT scans (ICIP, dataset). Building on this auxiliary anatomical information, we develop methods that exploit it to improve disease segmentation performance (MICCAI). Our research also advances the evaluation of medical segmentation models: we propose CC-Metrics, an adaptation of commonly used metrics that better reflects a model's ability to discover individual instances rather than only large segments, which is important in contexts such as tumor segmentation (AAAI).
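To make the per-instance idea behind CC-Metrics concrete, below is a minimal sketch of a connected-component-wise Dice score: every voxel is assigned to its nearest ground-truth component, and the metric is averaged over components so that small lesions count as much as large ones. This is an illustrative simplification under our own naming, not the published implementation.

```python
import numpy as np
from scipy import ndimage

def cc_dice(gt, pred):
    """Mean per-component Dice over ground-truth instances (illustrative).

    gt, pred: binary arrays of equal shape. Each voxel is assigned to its
    nearest ground-truth component (a Voronoi-style partition), and Dice is
    computed within each region, so missing a small instance is fully
    penalized instead of vanishing in a single global score.
    """
    labeled, n = ndimage.label(gt)
    if n == 0:
        return 1.0 if pred.sum() == 0 else 0.0
    # for every voxel, the indices of the nearest ground-truth voxel
    _, idx = ndimage.distance_transform_edt(labeled == 0, return_indices=True)
    regions = labeled[tuple(idx)]  # nearest component id per voxel
    scores = []
    for c in range(1, n + 1):
        g = np.logical_and(gt > 0, regions == c)
        p = np.logical_and(pred > 0, regions == c)
        scores.append(2.0 * np.logical_and(g, p).sum() / (g.sum() + p.sum()))
    return float(np.mean(scores))
```

A single global Dice barely changes when one of several small tumors is missed entirely; the per-component average drops noticeably, which is exactly the behavior such an evaluation is meant to surface.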

Medical Practitioners in the Loop: Harnessing Interactivity between Doctors and AI

Taxonomy for Deep Medical Interactive Segmentation


AI systems can process medical data fully automatically and derive insights directly from it, or they can be built as interactive models in which a medical doctor collaborates directly with the system. Such an interactive design can accelerate how quickly medical knowledge is gathered from an expert to train better medical image analysis models.


In our work on interactive models, we analyzed the literature in a systematic review, deriving a taxonomy of deep medical interactive segmentation models (TPAMI). Furthermore, we explored how to best integrate cues given by medical doctors into interactive deep learning models (MICCAI) and investigated techniques to make interactive models faster (ISBI). We also regularly participate in (Nature Machine Intelligence) and organize medical interactive segmentation challenges.
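A common pattern for integrating expert cues, shared in some variant by many models in our taxonomy, is to rasterize clicks into additional input channels. The sketch below encodes positive and negative clicks as Gaussian heatmaps for a 2D image; the encoding, the sigma value, and the channel layout are illustrative assumptions rather than the method of any specific paper.

```python
import torch

def click_heatmap(clicks, shape, sigma=5.0):
    """Rasterize (row, col) clicks into a single Gaussian heatmap channel."""
    ys = torch.arange(shape[0]).float().view(-1, 1)
    xs = torch.arange(shape[1]).float().view(1, -1)
    heat = torch.zeros(shape)
    for cy, cx in clicks:
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        heat = torch.maximum(heat, torch.exp(-d2 / (2 * sigma ** 2)))
    return heat

# an image plus one channel each for positive and negative clicks;
# any standard segmentation backbone then simply takes in_channels = 3
image = torch.rand(1, 256, 256)
pos = click_heatmap([(120, 80)], (256, 256))   # "this is the structure"
neg = click_heatmap([(40, 200)], (256, 256))   # "this is background"
net_input = torch.cat([image, pos[None], neg[None]], dim=0)  # (3, 256, 256)
```

Because the cues enter as ordinary image channels, the doctor can refine the prediction iteratively: each new click updates the heatmaps and triggers another forward pass.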

Learning with Less: Medical AI in Scarce Data Scenarios

Increased flexibility: Experts may provide heterogeneous annotations

A central problem with artificial intelligence systems is that they must be trained on large datasets with expensive annotations. In the medical domain, moreover, doctors are needed to create these annotations, as only they have the expertise to interpret, e.g., medical images. Solutions are therefore needed that make it possible to train models with only very few annotations and that make the training process as flexible as possible to best accommodate the experts' time.


We propose strategies for data-efficient training (CVPR, AAAI) with which, from only a handful of annotations, we train semantic segmentation models with only a minor performance loss compared to models trained on hundreds of annotations. Furthermore, we research training techniques that add flexibility to the annotation process by accepting highly heterogeneous training signals for medical segmentation models (CVPR, ECCV). We also investigate adapting pre-trained models to new data distributions without collecting expensive pixel-wise annotations (ISBI).
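One simple building block for learning from heterogeneous annotations is a loss that is evaluated only where labels exist, so dense masks, scribbles, and partial annotations can be mixed in a single training run. A minimal PyTorch sketch, assuming an ignore label of 255; the actual methods in our papers combine this with further components.

```python
import torch
import torch.nn.functional as F

IGNORE = 255  # marker for pixels that carry no annotation

def sparse_ce_loss(logits, target):
    """Cross-entropy evaluated only on annotated pixels.

    logits: (B, C, H, W); target: (B, H, W) with IGNORE wherever the
    expert provided no label (e.g., outside scribbles).
    """
    return F.cross_entropy(logits, target, ignore_index=IGNORE)

# example: a scribble labels a handful of pixels, everything else is IGNORE
logits = torch.randn(1, 2, 64, 64, requires_grad=True)
target = torch.full((1, 64, 64), IGNORE, dtype=torch.long)
target[0, 30:34, 30:34] = 1   # a few foreground scribble pixels
target[0, 0:4, 0:4] = 0       # a few background scribble pixels
loss = sparse_ce_loss(logits, target)
loss.backward()
```

Gradients flow only from the annotated pixels, so the same training loop accepts whatever annotation density the expert had time to provide.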

Multi-modal AI in Medicine: From OCT, X-ray and CT to Natural Language

Flexible text prompts for open-set medical image classification

The medical domain comprises highly multi-modal data. A wide range of imaging modalities is used to gather insights into a patient's health, among them optical coherence tomography, computed tomography, magnetic resonance imaging, and X-ray scans. In addition, textual data in the form of reports accumulates in day-to-day medical routines.


In our research, we bring together different imaging modalities to benefit from the complementary information they offer and thereby train better deep learning models (ICCV-W). We also showed that medical images and radiological reports can be used to train classification models without providing additional explicit labels, while still enabling open-set recognition (MICCAI).
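Conceptually, open-set recognition with text prompts works by scoring an image embedding against one text embedding per candidate finding and rejecting low-similarity cases as unknown. The sketch below illustrates this scoring step with placeholder embeddings; the encoders, prompt wording, and threshold are assumptions for illustration, not the exact MICCAI model.

```python
import torch
import torch.nn.functional as F

def open_set_classify(image_emb, text_embs, class_names, threshold=0.3):
    """Score an image embedding against text-prompt embeddings.

    image_emb: (D,); text_embs: (K, D), one embedding per prompted class,
    e.g. "an X-ray showing {finding}". Returns a class name or 'unknown'.
    """
    sims = F.cosine_similarity(image_emb.unsqueeze(0), text_embs, dim=-1)
    best = sims.argmax().item()
    return class_names[best] if sims[best] > threshold else "unknown"

# toy usage with random vectors standing in for real encoder outputs
D = 512
image_emb = F.normalize(torch.randn(D), dim=0)
text_embs = F.normalize(torch.randn(3, D), dim=-1)
print(open_set_classify(image_emb, text_embs,
                        ["pneumonia", "effusion", "normal"]))
```

Because classes are defined purely by prompt text, new findings can be queried at inference time without retraining, which is what makes the setup open-set.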

Sleep Monitoring

Sleep monitoring setup

Within the VIPSAFE and SPHERE projects, we have worked on several sleep monitoring tasks:

  • Breath Analysis (see the rate-estimation sketch below)
  • Sleep Position
  • Agitation Quantification
  • Action Recognition

We aim to provide better and safer care in intensive care units, and to improve sleep quality for the elderly in nursing homes and ageing-at-home setups.
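As an illustration of the breath analysis task, the following sketch estimates a respiration rate from a one-dimensional chest-motion signal by picking the spectral peak inside a plausible breathing band. The signal source (e.g., extracted from video or a pressure mat) and the band limits are assumptions for illustration, not our deployed pipeline.

```python
import numpy as np

def breathing_rate_bpm(motion, fs):
    """Estimate respiration rate from a 1-D chest-motion signal.

    motion: samples of chest displacement; fs: sampling rate in Hz.
    Picks the spectral peak within a plausible breathing band.
    """
    motion = motion - motion.mean()            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(motion))
    freqs = np.fft.rfftfreq(len(motion), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)     # ~6-42 breaths per minute
    peak = freqs[band][spectrum[band].argmax()]
    return 60.0 * peak

# toy usage: synthetic 0.25 Hz breathing (15 bpm) sampled at 10 Hz for 60 s
t = np.arange(0, 60, 0.1)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
print(breathing_rate_bpm(signal, fs=10.0))     # approximately 15
```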

Dementia and Engagement

Older people who are not engaged in social and mental activities show faster cognitive decline. Sadly, limited human resources and heavy workloads in nursing homes tend to limit the time caretakers have for socially stimulating tasks. To assist in these efforts, we have worked on the AKTIV project, in which a virtual persona addresses residents of an elderly home by name and encourages them to play simple games and engage in conversations with fellow residents.

Publications List