Our goal is to accelerate scientific discovery using machine learning and artificial intelligence: We develop computational tools that interpret complex empirical observations and numerical simulations to provide scientific insights.
Collaborating closely with researchers from various disciplines, we apply these tools for scientific discovery and to identify shared opportunities and challenges for AI in Science. We are particularly interested in neuroscience: We build data-driven mechanistic models to understand how neuronal networks process sensory information and control intelligent behaviour, and to identify underlying causes and potential treatments of neurological disorders.
Research Directions
Simulation-Based Inference for Scientific Discovery
Simulation-based inference (SBI) enables Bayesian inference in complex scientific models, i.e. to identify models and model parameters that are compatible with both empirical data and prior knowledge. Importantly, SBI can be applied to black-box simulators, as it only needs access to model simulations, not to associated likelihoods or gradients. Several SBI approaches also enable amortized inference: After an initial training phase, inference on additional observations can be performed rapidly, without the need for additional simulations. This makes it possible to scale Bayesian inference to time-critical or high-throughput applications.
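As a minimal illustration of amortization, the sketch below trains a toy neural posterior estimator on simulated parameter-data pairs and then reuses it to infer parameters for a new observation without further simulations. Everything in it (the Gaussian toy simulator, the diagonal-Gaussian density estimator) is an illustrative assumption, not one of our actual implementations:

```python
# Toy sketch of amortized neural posterior estimation (NPE).
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulator(theta):
    # Black-box toy simulator: noisy observation of the parameters.
    return theta + 0.1 * torch.randn_like(theta)

# Simulate training pairs (theta, x) from the prior.
prior = torch.distributions.Uniform(-2.0 * torch.ones(2), 2.0 * torch.ones(2))
theta = prior.sample((5000,))
x = simulator(theta)

# Conditional density estimator q_phi(theta | x): predicts mean and log-std.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(500):
    mean, log_std = net(x).chunk(2, dim=-1)
    q = torch.distributions.Normal(mean, log_std.exp())
    loss = -q.log_prob(theta).sum(-1).mean()  # maximize log q_phi(theta | x)
    opt.zero_grad(); loss.backward(); opt.step()

# Amortized inference: a new observation needs no additional simulations.
x_o = torch.tensor([[0.5, -1.0]])
mean, log_std = net(x_o).chunk(2, dim=-1)
posterior_samples = torch.distributions.Normal(mean, log_std.exp()).sample((1000,))
```

In practice, the density estimator is a far more expressive conditional model (e.g. a normalizing flow or transformer), but the training objective and the amortization property are the same.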
Our group has contributed a range of SBI tools, in particular variants of neural posterior estimation (NPE): SNPE-B and various embedding methods for amortized inference (Lueckmann et al NeurIPS 2017), SNPE-C/APT (Greenberg et al ICML 2019), truncated SNPE (Deistler et al NeurIPS 2021), methods for learning simulation-informed priors (Deistler et al PNAS 2022), joint estimation of likelihoods and posteriors (Gloeckler et al ICLR 2022), and adversarial approaches (Ramesh et al ICLR 2022). We contributed the standard benchmark for SBI algorithms (see this website, Lueckmann et al AISTATS 2021). We also explored the use of SBI for model discovery (Schröder et al ICML 2024), robust estimation (Gao et al NeurIPS 2023), source estimation (Moss, Vetter et al NeurIPS 2024), and the adversarial robustness of amortized inference (Gloeckler et al ICML 2023).
More recently, we have developed the Simformer (Gloeckler et al ICML 2024), a flexible transformer-based SBI method that enables efficient amortized inference on complex models and allows post-hoc changes of parameters, handling of missing data, and more, as well as specialized approaches for time-series data (Gloeckler, Toyota et al ICLR 2025). We have also shown how foundation models for tabular data can be used to accelerate SBI (Vetter, Gloeckler et al arXiv 2025).
Together with our collaborators, we have applied these tools across a range of scientific domains:
Neuroscience: SBI can be used to build and explore mechanistic models of neural dynamics (Lueckmann et al eLife 2020, blog post, Deistler et al PNAS 2022, Gao et al bioRxiv 2024), models of neural connectivity (Boelts et al PLOS Comp Biology 2023) and plasticity (Confavreux et al NeurIPS 2023), and to link ion-channel genes and biophysical models (Bernaerts et al bioRxiv 2025).
Astrophysics: SBI can be used to infer the parameters of gravitational wave models (Dax et al PRL 2021, Dax et al ICLR 2022, Dax et al PRL 2023). This collaboration is led by Bernhard Schölkopf (MPI for Intelligent Systems) and Alessandra Buonanno (MPI for Gravitational Physics). Dax and colleagues showed that a specialized SBI approach enables rapid identification of binary neutron star mergers (Dax et al Nature 2025, Nature briefing, Nature news and views, MPI press release).
Computational Imaging: Simulation-based machine learning can be used to enhance the performance and efficiency of algorithms for single-molecule localization microscopy (SMLM) (Speiser, Mueller et al Nature Methods 2021, blog post, press release), and to enable fast and probabilistic inference for diffusion MRI (Manzano-Patron et al Medical Image Analysis 2025).
Geoscience: SBI enabled inference of basal melting rates in the Antarctic ice sheet (Moss et al Journal of Glaciology 2025).
The sbi toolbox
We initiated the open-source Python package sbi (Boelts et al JOSS 2020, Deistler, Boelts et al JOSS 2025), which provides a user-friendly interface for simulation-based inference using a range of different SBI methods, including both amortized and non-amortized approaches. The toolbox is now maintained and extended by an active community of contributors both within and beyond the lab. Documentation and tutorials are available at sbi.readthedocs.io/en/latest/.
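As a flavour of the interface, here is a minimal sketch of the amortized workflow with sbi on a toy simulator; exact class names can differ between releases (e.g., recent versions expose NPE, older ones call it SNPE):

```python
import torch
from sbi.inference import NPE  # called SNPE in older sbi releases
from sbi.utils import BoxUniform

# Uniform prior over three parameters and a toy black-box simulator.
prior = BoxUniform(low=-2 * torch.ones(3), high=2 * torch.ones(3))

def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

# Simulate a training set and train the posterior network once (amortization).
theta = prior.sample((2000,))
x = simulator(theta)
inference = NPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# Draw posterior samples for an observation, with no further simulations.
samples = posterior.sample((1000,), x=torch.zeros(3))
```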
Building Mechanistic Models of Neural Computations
We develop mechanistic models that capture how biological neural circuits compute and drive behavior. A recent focus has been developing machine learning methods that make it possible to optimize mechanistic models of neural dynamics, either to perform behavioural tasks or to achieve a close match to experimental data.
Together with our long-term collaborator Srinivas Turaga and his group at HHMI Janelia, we have been building models of the fruit fly visual system using dense connectomic reconstructions to simulate and understand behaviorally relevant visual processing (Lappalainen et al Nature 2024, research briefing, press release, German press release, blog post, NPR, transmitter article, SWP, code package).
We also built Jaxley, a differentiable simulator written in JAX for biophysical neuron and circuit models. It supports GPU acceleration and gradient-based optimization, and enables efficient, large-scale optimization of mechanistic models (Deistler et al bioRxiv 2025).
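The sketch below illustrates the principle behind such differentiable simulation on a toy passive-membrane model written directly in JAX (it is a schematic illustration of the approach, not Jaxley's actual API): because the simulator is differentiable, biophysical parameters can be fitted to a voltage recording by plain gradient descent.

```python
# Toy differentiable neuron simulation and gradient-based parameter fitting.
import jax
import jax.numpy as jnp

def simulate(params, i_ext, dt=0.1):
    # Passive membrane: C_m dV/dt = -g_leak * (V - E_leak) + I_ext.
    g_leak, c_m = params
    def step(v, i_t):
        v = v + dt * (-g_leak * (v - (-65.0)) + i_t) / c_m
        return v, v
    _, vs = jax.lax.scan(step, -65.0, i_ext)
    return vs

# "Recorded" voltage trace, generated from known ground-truth parameters.
i_ext = jnp.concatenate([jnp.zeros(100), jnp.ones(200), jnp.zeros(100)])
v_target = simulate(jnp.array([0.3, 1.0]), i_ext)

def loss(params):
    # Mean squared error between simulated and recorded voltage.
    return jnp.mean((simulate(params, i_ext) - v_target) ** 2)

# Gradient descent on the biophysical parameters (g_leak, C_m).
params = jnp.array([0.1, 0.5])
for _ in range(200):
    params = params - 1e-2 * jax.grad(loss)(params)
```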
Machine Learning for interpreting neurophysiological recordings from the human brain
We develop machine learning methods to interpret high-dimensional neurophysiological data across spatial and temporal scales. We are particularly interested in tools that allow us to model both invasive and non-invasive recordings from the human brain.
Diffusion models: We have shown that (latent) diffusion models can be used to create highly realistic samples of neurophysiological data (Vetter et al Cell Patterns 2024), and provide low-dimensional interpretable embeddings of the data (Kapoor, Schulz et al NeurIPS 2024).
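As a rough illustration of the underlying objective, the toy sketch below trains a denoiser to predict the noise added to signal snippets; the actual models are latent diffusion architectures rather than this simplified MLP and noise schedule:

```python
# Toy denoising objective at the heart of diffusion models.
import torch
import torch.nn as nn

signals = torch.randn(1024, 64)  # stand-in for preprocessed recordings
denoiser = nn.Sequential(nn.Linear(65, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for _ in range(200):
    t = torch.rand(signals.shape[0], 1)                          # diffusion time in [0, 1]
    noise = torch.randn_like(signals)
    noisy = torch.sqrt(1 - t) * signals + torch.sqrt(t) * noise  # corrupt the data
    pred = denoiser(torch.cat([noisy, t], dim=-1))               # predict the added noise
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```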
Recurrent Neural Networks: RNNs can be used to model the temporal structure of single-neuron recordings in the human brain (Liebe et al Nature Neuroscience 2025, Pals et al PLOS Comp Biology 2024). We have also investigated efficient fitting approaches for stochastic low-rank RNNs (Pals et al NeurIPS 2024), whose connectivity structure is sketched below.
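To make the low-rank structure concrete, here is a toy sketch (not our fitting procedure) of a noisy rank-2 RNN whose recurrent connectivity is the outer product of two low-dimensional factor matrices, so its dynamics are effectively confined to a low-dimensional subspace:

```python
# Toy noisy low-rank RNN with rank-2 connectivity J = m n^T / N.
import torch

torch.manual_seed(0)
N, T, rank = 200, 500, 2
m = torch.randn(N, rank)
n = torch.randn(N, rank)
J = m @ n.T / N  # rank-2 recurrent connectivity

x = torch.zeros(N)
rates = []
for _ in range(T):
    # Euler step of dx = -x + J tanh(x) + noise.
    x = x + 0.1 * (-x + J @ torch.tanh(x) + 0.5 * torch.randn(N))
    rates.append(torch.tanh(x))

# Latent trajectory: projection onto the rank-dimensional subspace of m.
latents = torch.stack(rates) @ m / N
```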
We are, or have been, funded by the DFG (through the Excellence Cluster: Machine Learning for Science, Collaborative Research Centers SFB 1233, SFB 1089, SPP 2041), the ERC Grant “DeepCoMechTome”, the HFSP, the Else Kröner Fresenius Foundation (ClinBrAIn), the BMBF (Tuebingen AI Center, Projects Adimem, Simalesam, DeepHumanVision), and the Carl Zeiss Foundation.
More information about our research can be found on our publications page and on GitHub.
Simulation-based inference
Many domains in science use computer simulations to study an observed phenomenon. Consider, for example, physics models of particle movements, models of electrical activity in the brain, or models of the spread of a disease. As these simulations become more and more complex, it becomes increasingly difficult to fit them to data, i.e. to find parameters such that the simulation output reproduces experimentally observed data. We develop methods that efficiently solve this problem by using neural networks that perform Bayesian inference. Read the article in the ML for Science Blog.
Deep learning for realistic models of neural circuits
We aim to look at intelligent systems and their environments jointly to understand stimulus-evoked neural computation and behavior. One way to increase our understanding of stimulus-evoked neural computation is to use and build models of neural circuits and compare their representations to data measured in biological neural circuits. In particular, deep convolutional neural networks (DNNs) for image classification are compelling models of neural computation in the mammalian visual system, but they lack a one-to-one mapping of artificial to biological neurons. To remedy this lack of actual circuitry in DNNs, we ask how we can incorporate knowledge of the connectivity of neural circuits into models of neural computation. We investigate this question together with Dr. Srini Turaga and other scientists from the HHMI Janelia Research Campus. To do so, we develop connectome-constrained models of the Drosophila visual system, which learn to recognize, e.g., movement in naturalistic movie sequences.
Deep learning for microscopy
In single-molecule localization microscopy (SMLM), super-resolution images of biological structures are assembled from a large number of individually detected spots. Deep neural networks (DNNs) are well suited to the task of detecting and localizing patterns in images, but for this application, and many other similar tasks in modern microscopy, no ground-truth data is available for straightforward network training. Together with scientists from the HHMI Janelia Research Campus and EMBL Heidelberg, we develop methods to train DNNs on simulations that closely resemble real data. Networks trained in this way achieve superior performance in difficult conditions and allow for much faster imaging. For more information, read the article in the ML for Science Blog and the university press release.
Low-dimensional dynamics
How can we efficiently link natural behavior or cognitive functions with the underlying neural population dynamics? To gain insight into this question, we develop ML-based tools that can infer low-dimensional trajectories underlying both neural population activity and behavior. We use, for instance, diffusion models for efficient generation of realistic data (both continuous voltage traces and discrete spikes), and recurrent neural networks (RNNs) as interpretable models of neural dynamics. Additionally, we capitalize on recent developments in machine learning that have enabled real-time behavioral tracking of animals in unconstrained lab settings, and model neural activity during natural behavior.
Machine learning for medical research and clinical applications
We use probabilistic generative models to analyse data for clinical research and applications. Our aim is to obtain an interpretable probabilistic model of the data to facilitate downstream tasks in the clinical domain. To this end, we work on several projects: deep generative models for spatio-temporal modeling of neuroimaging data to study disease progression in neurodegenerative diseases such as Alzheimer's; dynamical systems for single-cell mRNA sequencing data to simulate different interventions and extract biological hypotheses; and interpretable probabilistic machine learning models for physiological time series to impute missing values and predict adverse events.
Inferring the properties and history of Antarctic ice sheets with machine learning
Glaciologists investigate the ice sheets and ice shelves of Antarctica using a variety of methods, key among them radio-echo sounding, which measures the reflections of emitted radar waves from the internal layers of the ice body and from the interface between ice and underlying material. How can we use these data to extract information about historical climate conditions, as well as the properties of the ice and what lies below it? We investigate this question together with collaborators in the glaciology and geophysics group led by Prof. Reinhard Drews at the University of Tübingen, developing approaches that combine physical modelling of processes within the ice sheets with machine learning-enabled simulation-based inference. This provides uncertainty-aware predictions about the state of the ice sheet. Photo from Glaciology & Geophysics Tübingen.