Our goal is to accelerate scientific discovery using machine learning and artificial intelligence: We develop computational tools to interpret complex empirical observations and numerical simulations and turn them into scientific insights.

Collaborating closely with researchers from various disciplines, we apply these tools for scientific discovery and identify shared opportunities and challenges for AI in Science. We are particularly interested in neuroscience: We build data-driven mechanistic models to understand how neuronal networks process sensory information and control intelligent behavior, and to identify underlying causes and potential treatments of neurological disorders.

Research Directions



Building Mechanistic Models of Neural Computations

We develop mechanistic models that capture how biological neural circuits compute and drive behavior. A recent focus has been to develop machine learning methods that make it possible to optimize mechanistic models of neural dynamics, either on behavioral tasks or to closely match experimental data.

Together with our long-term collaborator Srinivas Turaga and his group at HHMI Janelia, we have been building models of the fruit fly visual system using dense connectomic reconstructions to simulate and understand behaviorally relevant visual processing (Lappalainen et al Nature 2024, research briefing, press release, German press release, blog post, NPR, transmitter article, SWP, code package).

We also built Jaxley, a differentiable simulator for biophysical neuron and circuit models written in JAX. It supports GPU acceleration and gradient-based optimization, enabling efficient fitting of large-scale mechanistic models (Deistler et al bioRxiv 2025).
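To illustrate what differentiability buys, the sketch below fits the leak conductance of a toy single-compartment model to a target voltage trace by gradient descent, in plain JAX with optax. This is an illustrative example under simplified assumptions, not Jaxley's API; Jaxley provides this capability for detailed multi-compartment models.

import jax
import jax.numpy as jnp
import optax

def simulate(g_leak, i_ext):
    """Euler integration of c_m * dV/dt = -g_leak * (V - e_leak) + I."""
    e_leak, c_m, dt = -65.0, 1.0, 0.1

    def step(v, i):
        v = v + dt * (-g_leak * (v - e_leak) + i) / c_m
        return v, v

    _, trace = jax.lax.scan(step, jnp.array(-65.0), i_ext)
    return trace

# Step-current stimulus and a synthetic "recorded" voltage trace.
i_ext = jnp.concatenate([jnp.zeros(50), 2.0 * jnp.ones(100), jnp.zeros(50)])
target = simulate(0.3, i_ext)

def loss(g_leak):
    return jnp.mean((simulate(g_leak, i_ext) - target) ** 2)

# Gradients flow through the entire simulation, so the conductance
# can be recovered by standard gradient-based optimization.
g = jnp.array(0.1)
opt = optax.adam(1e-2)
opt_state = opt.init(g)
for _ in range(300):
    grads = jax.grad(loss)(g)
    updates, opt_state = opt.update(grads, opt_state)
    g = optax.apply_updates(g, updates)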




Simulation-Based Inference for Scientific Discovery

Simulation-based inference (SBI) enables Bayesian inference in complex scientific models, i.e., it identifies models and model parameters that are compatible with both empirical data and prior knowledge. Importantly, SBI can be applied to black-box simulators, as it only requires access to model simulations, not associated likelihoods or gradients. Several SBI approaches also enable amortized inference: after an initial training phase, inference on additional observations can be performed rapidly, without the need for further simulations. This makes it possible to scale Bayesian inference to time-critical or high-throughput applications.
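The idea behind amortized inference can be stated compactly. In neural posterior estimation, a conditional density estimator $q_\phi(\theta \mid x)$ is trained on simulated pairs $(\theta, x)$ by minimizing the standard objective

$$\mathcal{L}(\phi) = -\,\mathbb{E}_{\theta \sim p(\theta),\; x \sim p(x \mid \theta)}\big[\log q_\phi(\theta \mid x)\big],$$

which, for a sufficiently flexible estimator, is minimized when $q_\phi(\theta \mid x) = p(\theta \mid x)$. Amortization then simply means evaluating the trained estimator at a new observation $x_o$, with no further simulations or retraining.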

Our group has contributed a range of SBI tools, in particular variants of neural posterior estimation (NPE): SNPE-B and various embedding methods for amortized inference (Lueckmann et al NeurIPS 2017), SNPE-C/APT (Greenberg et al ICML 2019), Truncated-SNPE (Deistler et al NeurIPS 2021), methods for learning simulation-informed priors (Deistler et al PNAS 2022), joint estimation of likelihoods and posteriors (Gloeckler et al ICLR 2022), and adversarial approaches (Ramesh et al ICLR 2022). We contributed the standard benchmark for SBI algorithms (see this website, Lueckmann et al AISTATS 2021). We also explored the use of SBI for model discovery (Schröder et al ICML 2024), robust estimation (Gao et al NeurIPS 2023), source estimation (Moss, Vetter et al NeurIPS 2024), and the adversarial robustness of amortized inference (Gloeckler et al ICML 2023).

More recently, we have developed a flexible transformer-based SBI method, the Simformer (Gloeckler et al ICML 2024), which enables efficient amortized inference on complex models and supports post-hoc changes such as conditioning on subsets of parameters or handling missing data, as well as specialized approaches for time-series data (Gloeckler, Toyota et al ICLR 2025). We have also shown how foundation models for tabular data can be used to accelerate SBI (Vetter, Gloeckler et al arXiv 2025).

Together with our collaborators, we have applied these tools across a range of scientific domains:

Neuroscience: SBI can be used to build and explore mechanistic models of neural dynamics (Goncalves, Lueckmann, Deistler et al eLife 2020, blog post, Deistler et al PNAS 2022, Gao et al bioRxiv 2024), models of neural connectivity (Boelts et al PLOS Comp Biology 2023) and plasticity (Confavreux et al NeurIPS 2023), and to link ion-channel genes and biophysical models (Bernarts et al bioRxiv 2025).

Astrophysics: SBI can be used to infer the parameters of gravitational-wave models (Dax et al PRL 2021, Dax et al ICLR 2022, Dax et al PRL 2023). This is a collaboration led by Bernhard Schölkopf (MPI for Intelligent Systems) and Alessandra Buonanno (MPI for Gravitational Physics). Dax and colleagues showed that a specialized SBI approach enables rapid identification of binary neutron star mergers (Dax et al Nature 2025, Nature briefing, Nature news and views, MPI press release).

Computational Imaging: Simulation-based machine learning can be used to enhance the performance and efficiency of algorithms for single-molecule localization microscopy (SMLM) (Speiser, Mueller et al Nature Methods 2021, blog post, press release), and to enable fast and probabilistic inference for diffusion MRI (Manzano-Patron et al Medical Image Analysis 2025).

Geoscience: SBI enabled inference of basal melting rates in the Antarctic ice sheet (Moss et al Journal of Glaciology 2025).

The sbi toolbox

We initiated the open-source Python package sbi (Boelts et al JOSS 2020, Deistler, Boelts et al JOSS 2025), which provides a user-friendly interface for simulation-based inference using a range of different SBI methods, including both amortized and non-amortized approaches. The toolbox is now maintained and extended by an active community of contributors both within and beyond the lab. Documentation and tutorials are available at sbi.readthedocs.io/en/latest/.
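As a flavor of the interface, here is a minimal end-to-end sketch with a toy simulator. It assumes a recent sbi release, where neural posterior estimation is exposed as NPE; the simulator and observation are purely illustrative.

import torch
from sbi.inference import NPE
from sbi.utils import BoxUniform

# Toy simulator: data are noisy copies of the two parameters.
def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))

# Draw parameters from the prior and simulate a training set.
theta = prior.sample((2000,))
x = simulator(theta)

# Train a neural posterior estimator on the (parameter, data) pairs.
inference = NPE(prior=prior)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()

# Amortized inference: sample the posterior for a new observation.
x_o = torch.tensor([0.5, -0.3])
samples = posterior.sample((1000,), x=x_o)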




Machine Learning for Interpreting Neurophysiological Recordings from the Human Brain

We develop machine learning methods to interpret high-dimensional neurophysiological data across spatial and temporal scales. We have been particularly interested in tools that allow us to model both invasive and non-invasive recordings from the human brain.

Diffusion models: We have shown that (latent) diffusion models can be used to create highly realistic samples of neurophysiological data (Vetter et al Cell Patterns 2024), and provide low-dimensional interpretable embeddings of the data (Kapoor, Schulz et al NeurIPS 2024).
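Conceptually, such models are trained with the standard denoising objective: corrupt a data sample to a random diffusion step and train a network to predict the injected noise. A minimal sketch in PyTorch follows (toy fully-connected network and shapes for illustration, not the architectures used in the papers above).

import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    """Toy noise-prediction network for fixed-length 1D signals."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim)
        )

    def forward(self, x_t, t):
        # Condition on the (normalized) diffusion step.
        return self.net(torch.cat([x_t, t[:, None] / T], dim=-1))

def diffusion_loss(model, x0):
    """Standard DDPM objective: predict the noise added at a random step t."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t][:, None]
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return ((model(x_t, t.float()) - eps) ** 2).mean()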

Recurrent Neural Networks: RNNs can be used to model the temporal structure of single-neuron recordings in the human brain (Liebe et al Nature Neuroscience 2025, Pals et al PLOS Comp Biology 2024). We have also investigated fitting approaches for stochastic low-rank RNNs, which yield RNNs that are also generative models of neural data (Pals et al NeurIPS 2024).
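To make the low-rank idea concrete, here is a minimal rank-one RNN in NumPy: the recurrent connectivity is the outer product of two vectors, so the recurrent dynamics are driven by a single latent variable. Names and sizes are illustrative, and a stochastic variant would add noise to each update.

import numpy as np

rng = np.random.default_rng(0)
N, dt, tau = 200, 0.1, 1.0

# Rank-one connectivity J = m n^T / N: only one latent direction matters.
m = rng.normal(size=N)
n = rng.normal(size=N)

def step(h, u=0.0):
    """Euler step of tau * dh/dt = -h + J tanh(h) + u."""
    r = np.tanh(h)
    return h + (dt / tau) * (-h + m * (n @ r) / N + u)

h = rng.normal(size=N)
for _ in range(500):
    h = step(h)

# The scalar kappa = n @ tanh(h) / N summarizes the recurrent dynamics.
kappa = n @ np.tanh(h) / N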


Funding

We are, or have been, funded by the DFG (through the Excellence Cluster: Machine Learning for Science, Collaborative Research Centers SFB 1233, SFB 1089, SPP 2041), the ERC Grant “DeepCoMechTome”, the HFSP, the Else Kröner Fresenius Foundation (ClinBrAIn), the BMBF (Tuebingen AI Center, Projects Adimem, Simalesam, DeepHumanVision), and the Carl Zeiss Foundation.

More information about our research can be found on our publications page and on GitHub.