DGM4MICCAI

Deep Generative Models workshop @ MICCAI 2023

Overview

Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are currently receiving widespread attention not only from the computer vision and machine learning communities, but also from the MIC and CAI community. These models combine advanced deep neural networks with classical density estimation (either explicit or implicit) to achieve state-of-the-art results. The DGM4MICCAI workshop at MICCAI 2023 will be all about Deep Generative Models in Medical Image Computing and Computer Assisted Interventions.

!!! Deadline extended to 1 July 2023, 11:59 PM Pacific Time !!!

Submit your work through the CMT submission system.

Submission Details

The online submission for DGM4MICCAI is open until 1 July 2023, 11:59 PM Pacific Time (extended from 28 June 2023). Contributions must be submitted online through the CMT submission system.

We seek contributions that include, but are not limited to:

  • Novel architectures, loss functions, and theoretical developments for:
    • GANs and Adversarial Learning
    • Variational Auto-Encoders
    • Disentanglement
    • Flows
    • Autoregressive models
  • Stable Diffusion Models
  • Causal generative models
  • Multi-Modality and Cross-Modality linking
  • Novel metrics and uncertainty estimates for performance assessment and interpretability of generative models
  • Generative models under limited, sparse and noisy image inputs
  • Supervised and Unsupervised Domain Adaptation, Transfer Learning and Multi-Task Learning
  • Segmentation, Detection, Synthesis, Reconstruction, Denoising, Supersampling, Registration
  • Image-to-Image translation for Synthetic Training Data Generation or Augmented Reality
  • Neural Rendering
  • Diffusion Models, Normalizing Flow Models, Invertible Networks

We particularly welcome papers driven by the theme "MIC meets CAI". Interesting novel applications of deep generative models in MIC and CAI beyond these topics are also welcome.

Workshop proceedings are published as part of Springer Nature's Lecture Notes in Computer Science (LNCS) series. Manuscripts will undergo double-blind peer review. Please prepare your workshop papers according to the MICCAI submission guidelines (LNCS template, 8 pages maximum).
Supplementary material: PDF documents or mp4 videos, 10 MB maximum file size.

Timeline

Event                          Original Date      Extended Date
Paper Submission Deadline      28 June 2023       1 July 2023
Final Decision                 16 July 2023       21 July 2023
Camera-ready papers due        30 July 2023       1 August 2023
DGM4MICCAI Workshop            8 October 2023

Program

Start      End        Topic                                                                        Session chair
8:00 AM    8:15 AM    Opening                                                                      Sandy Engelhardt
8:15 AM    9:45 AM    Oral Session (Papers 26, 30, 31, 9, 23, 32, 36, 37, 42) + Q&A
9:45 AM    10:00 AM   Coffee break
10:00 AM   10:45 AM   Keynote by Ke Li
10:45 AM   12:15 PM   Oral Session (Papers 1, 3, 7, 8, 10, 14, 17, 18, 19, 22, 28, 29, 33, 40) + Q&A
12:15 PM   12:30 PM   Closing remarks                                                              Sandy Engelhardt

Please note that times are subject to change, so keep an eye on the schedule or follow us on Twitter to stay up to date.

Keynote Speaker

Ke Li

Assistant Professor at Simon Fraser University

Canada

Ke Li is an Assistant Professor and Visual Computing Chair in the School of Computing Science at Simon Fraser University. He is interested in a broad range of topics in machine learning, computer vision, NLP and algorithms. He is particularly passionate about tackling long-standing fundamental problems that cannot be solved with a straightforward application of conventional techniques. He was previously a Member of the Institute for Advanced Study (IAS), and received his Ph.D. from UC Berkeley and B.Sc. from the University of Toronto.

Better Data Efficiency and Uncertainty Estimation with IMLE

Deep generative models have emerged as one of the most powerful tools in modern AI. While there have been substantial advances in generation quality in recent years, established methods are still lacking in data efficiency and uncertainty estimation. Both are essential in medical imaging, where the amount of data available is often orders of magnitude less than in the natural image domain, and how much influence the model should have on clinical decisions depends on the degree of model certainty. In this talk, I will provide an overview of a new generative modelling technique my lab developed, known as Implicit Maximum Likelihood Estimation (IMLE), and demonstrate applications to various problems in computer vision. IMLE outperforms both diffusion models and GANs in data efficiency and uncertainty estimation — it allows for training on as few as 100 examples and accurately quantifies the degree of uncertainty at the level of individual pixels.
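The abstract names Implicit Maximum Likelihood Estimation (IMLE) without describing its mechanics. As a rough, hedged illustration of the published idea behind IMLE (Li & Malik, 2018), namely pulling the nearest of several generated samples towards each training example rather than pushing every sample towards the data, here is a minimal PyTorch sketch. The generator architecture, latent and data dimensions, sample count and learning rate are placeholder assumptions for illustration only, not the speaker's implementation.

```python
# Minimal sketch of the core IMLE objective (Li & Malik, 2018), for illustration.
# Generator architecture, dimensions and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784              # assumed sizes (e.g. flattened 28x28 images)
generator = nn.Sequential(                  # toy generator; any network mapping z -> x works
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

def imle_step(data_batch: torch.Tensor, num_samples: int = 32) -> torch.Tensor:
    """One IMLE update: for every data point, pull the *nearest* generated
    sample towards it, instead of matching every sample to the data."""
    z = torch.randn(num_samples, latent_dim)
    fake = generator(z)                                  # (num_samples, data_dim)
    # Pairwise squared Euclidean distances between data points and generated samples.
    dists = torch.cdist(data_batch, fake) ** 2           # (batch, num_samples)
    nearest = dists.min(dim=1).values                    # distance to the nearest sample
    loss = nearest.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()

# Usage with random stand-in data (a real run would use an image dataset):
# for _ in range(1000):
#     batch = torch.rand(100, data_dim)   # e.g. as few as 100 training examples
#     imle_step(batch)
```

The per-example minimum is the distinguishing choice: every training example is covered by some generated sample, which is the property the talk connects to data efficiency and per-pixel uncertainty; the details of how this is achieved in practice are the subject of the keynote.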

Oral Sessions

Long Oral

Paper ID   Title
26         Towards generalised neural implicit representations for image registration
30         ViT-DAE: Transformer-driven Diffusion Autoencoder for Histopathology Image Analysis
31         Anomaly Guided Generalizable Super-Resolution of Chest X-Ray Images using Multi-Level Information Rendering
9          A 3D generative model of pathological multi-modal MR images and segmentations
23         Pre-Training with Diffusion models for Dental Radiography segmentation
32         Ultrasound Image Reconstruction with Denoising Diffusion Restoration Models
36         Diffusion Model Based Knee Cartilage Segmentation in MRI
37         Semantic Image Synthesis for Abdominal CT
42         Reference-Free Isotropic 3D EM Reconstruction using Diffusion Models

Short Oral

Paper ID   Title
1          Diffusion-based Data Augmentation for Skin Disease Classification: Impact Across Original Medical Datasets to Fully Synthetic Images
3          Unsupervised anomaly detection in 3D brain FDG PET: A benchmark of 17 VAE-based approaches
7          Privacy Distillation: Reducing Re-identification Risk of Diffusion Models
8          Characterizing the Features of Mitotic Figures Using a Conditional Diffusion Probabilistic Model
10         Rethinking A Unified Generative Adversarial Model for MRI Modality Completion
14         Federated Multimodal and Multiresolution Graph Integration for Connectional Brain Template Learning
17         Diffusion Models for Generative Histopathology
18         Shape-guided Conditional Latent Diffusion Models for Synthesising Brain Vasculature
19         Metrics to Quantify Global Consistency in Synthetic Medical Images
22         MIM-OOD: Generative Masked Image Modelling for Out-of-Distribution Detection in Medical Images
28         Investigating Data Memorization in 3D Latent Diffusion Models for Medical Image Synthesis
29         ICoNIK: Generating Respiratory-Resolved Abdominal MR Reconstructions Using Neural Implicit Representations in k-Space
33         Importance of Aligning Training Strategy with Evaluation for Diffusion Models in 3D Multiclass Segmentation
40         CT reconstruction from few planar X-rays with application towards low-resource radiotherapy

Organizing Committee

  • Sandy Engelhardt, Heidelberg University, Germany
  • Ilkay Oksuz, Istanbul Technical University, Turkey
  • Dajiang Zhu, University of Texas, USA
  • Yixuan Yuan, City University of Hong Kong, China
  • Anirban Mukhopadhyay, TU Darmstadt, Germany

Student organizers:
  • Lalith Sharan, University Hospital Heidelberg, Germany
  • Henry Krumb, TU Darmstadt, Germany
  • Moritz Fuchs, TU Darmstadt, Germany
  • Amin Ranem, TU Darmstadt, Germany
  • John Kalkhof, TU Darmstadt, Germany
  • Yannik Frisch, TU Darmstadt, Germany
  • Caner Özer, Istanbul Technical University, Turkey

Program Committee

  • Li Wang, University of Texas at Arlington, USA
  • Tong Zhang, Peng Cheng Laboratory, Shenzhen, China
  • Ping Lu, Oxford University, UK
  • Roxane Licandro, Medical University of Vienna, Austria
  • Veronika Zimmer, TU Muenchen, Germany
  • Dwarikanath Mahapatra, Inception Institute of AI, UAE
  • Michael Sdika, CREATIS Lyon, France
  • Jelmer Wolterink, Univ. of Twente, The Netherlands
  • Alejandro Granados, King's College London, UK
  • Jinglei Lv, The University of Sydney, Australia
  • Onat Dalmaz, Bilkent University, Turkey
  • Camila González, Stanford University, USA
  • Magda Paschali, Stanford University, USA

Contact:
anirban.mukhopadhyay[at]gris.informatik.tu-darmstadt.de