Our goal is to accelerate scientific discovery using machine learning and artificial intelligence: we want to develop computational methods that help scientists interpret empirical data and use them to discover and constrain theoretical models. To this end, we collaborate with experimental researchers from various disciplines. We are particularly interested in applications in the neurosciences: we want to understand how neuronal networks in the brain process sensory information and control intelligent behavior, and to develop methods for the diagnosis and therapy of neuronal dysfunction.

More information about our research can be found on our publications page and on GitHub.



Simulation-based inference

Many domains of science use computer simulations to study observed phenomena. Consider, for example, physics models of particle movements, models of electrical activity in the brain, or models of the spread of a disease. As these simulations become increasingly complex, it becomes harder and harder to fit them to data, i.e. to find parameters for which the simulation output reproduces experimentally observed data. We develop methods that solve this problem efficiently by using neural networks that perform Bayesian inference. Read the article in the ML for Science Blog.
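A minimal sketch of this workflow, using the open-source sbi toolbox (the simulator, prior, and "observation" below are toy placeholders, and class names may differ slightly between sbi versions):

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Toy simulator: the "data" are just noisy copies of the parameters.
def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

prior = BoxUniform(low=-2 * torch.ones(3), high=2 * torch.ones(3))

# Run the simulator on parameters drawn from the prior.
theta = prior.sample((2000,))
x = simulator(theta)

# Train a neural density estimator of p(theta | x).
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# Sample parameters consistent with an "observed" data point.
x_o = torch.tensor([0.3, -0.5, 1.0])
samples = posterior.sample((1000,), x=x_o)
```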




Deep learning for realistic models of neural circuits

We aim to study intelligent systems and their environments jointly in order to understand stimulus-evoked neural computation and behavior. One way to deepen this understanding is to build models of neural circuits and compare their representations to data measured in biological neural circuits. In particular, deep convolutional neural networks (DNNs) for image classification are compelling models of neural computation in the mammalian visual system, but they lack a one-to-one mapping between artificial and biological neurons. To bring this missing circuitry into DNNs, we ask how knowledge of the connectivity of neural circuits can be incorporated into models of neural computation. We investigate this question together with Dr. Srini Turaga and other scientists from the HHMI Janelia Research Campus. To do so, we develop connectome-constrained models of the Drosophila visual system, which learn to detect, for example, motion in naturalistic movie sequences.
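As a toy illustration of this idea (not the actual model of the fly visual system), one can constrain a network layer so that only connections present in a measured connectome are learnable; the mask, layer sizes, and nonlinearity below are made up for demonstration:

```python
import torch
import torch.nn as nn

class ConnectomeConstrainedLayer(nn.Module):
    """Linear layer whose weights are masked by a binary connectome.

    Only synapses present in the (hypothetical) connectivity matrix carry
    signal; their strengths are learned, absent connections stay at zero.
    """
    def __init__(self, connectome_mask: torch.Tensor):
        super().__init__()
        self.register_buffer("mask", connectome_mask.float())
        self.weight = nn.Parameter(torch.randn_like(self.mask) * 0.01)
        self.bias = nn.Parameter(torch.zeros(connectome_mask.shape[0]))

    def forward(self, x):
        # The element-wise mask zeroes out connections absent from the connectome.
        return torch.relu(x @ (self.weight * self.mask).T + self.bias)

# Toy usage: 50 presynaptic, 30 postsynaptic neurons, ~10% connectivity.
mask = torch.rand(30, 50) < 0.1
layer = ConnectomeConstrainedLayer(mask)
out = layer(torch.randn(8, 50))  # batch of 8 activity vectors
```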




Deep learning for microscopy

In single-molecule localization microscopy (SMLM), super-resolution images of biological structures are assembled from a large number of individually detected spots. Deep neural networks (DNNs) are well suited to detecting and localizing patterns in images, but for this application, as for many similar tasks in modern microscopy, no ground-truth data are available for straightforward network training. Together with scientists from the HHMI Janelia Research Campus and EMBL Heidelberg, we develop methods to train DNNs on simulations that closely resemble real data. Networks trained in this way achieve superior performance in difficult conditions and allow for much faster imaging. For more information, read the article in the ML for Science Blog and the university press release.
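The following is a minimal, self-contained sketch of this training strategy: a toy simulator generates frames with known emitter positions, and a small CNN is trained to recover them. The PSF model, noise levels, and network are illustrative placeholders, not the actual method:

```python
import torch
import torch.nn as nn

def simulate_frames(batch, size=32, n_spots=5, psf_sigma=1.5):
    """Simulate SMLM-like frames: sparse emitters blurred by a Gaussian PSF
    plus shot noise. Returns frames and binary ground-truth emitter maps."""
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    frames, targets = [], []
    for _ in range(batch):
        frame = torch.zeros(size, size)
        target = torch.zeros(size, size)
        pos = torch.rand(n_spots, 2) * (size - 1)
        for py, px in pos:
            target[int(py.round()), int(px.round())] = 1.0
            frame += torch.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * psf_sigma ** 2))
        frame = torch.poisson(frame * 50 + 10) / 50.0  # photon noise + background
        frames.append(frame)
        targets.append(target)
    return torch.stack(frames)[:, None], torch.stack(targets)[:, None]

# Small CNN predicting an emitter-probability map; ground truth is known
# exactly because the training data come from the simulator.
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):  # short demo training loop
    x, y = simulate_frames(batch=16)
    loss = nn.functional.binary_cross_entropy_with_logits(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```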




Low-dimensional dynamics

How can we efficiently link natural behavior or cognitive functions with the underlying neural population dynamics? To gain insight into this question, we develop ML-based tools that infer low-dimensional trajectories underlying both neural population activity and behavior. We use, for instance, diffusion models for the efficient generation of realistic data (both continuous voltages and discrete spikes), and recurrent neural networks (RNNs) as interpretable models of neural dynamics. Additionally, we capitalize on recent developments in machine learning that have enabled real-time behavioral tracking of animals in unconstrained lab settings, and we model neural activity during natural behavior.
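As one concrete example of an interpretable model class, the sketch below implements a rank-constrained RNN whose recurrent weights are a product of two low-rank factors, so population activity evolves on a low-dimensional manifold. Sizes, dynamics, and the readout are illustrative assumptions, not a specific model from our work:

```python
import torch
import torch.nn as nn

class LowRankRNN(nn.Module):
    """RNN whose recurrent weight matrix is constrained to rank r, so that
    the population dynamics are confined to an r-dimensional subspace."""
    def __init__(self, n_neurons=100, rank=2, dt=0.1):
        super().__init__()
        self.m = nn.Parameter(torch.randn(n_neurons, rank) / n_neurons ** 0.5)
        self.n = nn.Parameter(torch.randn(n_neurons, rank) / n_neurons ** 0.5)
        self.dt = dt

    def forward(self, x0, n_steps=200):
        x, traj = x0, []
        for _ in range(n_steps):
            # Recurrent drive through the rank-r matrix m @ n.T
            rec = torch.tanh(x) @ self.n @ self.m.T
            x = x + self.dt * (-x + rec)
            traj.append(x)
        return torch.stack(traj, dim=1)

    def latents(self, traj):
        # Project population activity onto the columns of n to obtain
        # the low-dimensional latent trajectories.
        return torch.tanh(traj) @ self.n

rnn = LowRankRNN()
traj = rnn(torch.randn(4, 100))  # 4 trials, 100 neurons, 200 time steps
z = rnn.latents(traj)            # 4 x 200 x 2 latent trajectories
```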




Machine learning for medical research and clinical applications

We use probabilistic generative models to analyse data for clinical research and applications. Our aim is to obtain interpretable probabilistic models of the data that facilitate downstream tasks in the clinical domain. To this end, we work on several projects: deep generative models for spatio-temporal modelling of neuroimaging data to study disease progression in neurodegenerative diseases such as Alzheimer's; dynamical systems for single-cell mRNA sequencing data to simulate different interventions and extract biological hypotheses; and interpretable probabilistic machine learning models for physiological time series to impute missing values and predict adverse events.
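To give a flavour of the imputation task, the sketch below fills gaps in a physiological trace with a simple random-walk state-space model and a Kalman filter, which yields both imputed values and calibrated uncertainty. The model, noise levels, and data are toy placeholders, far simpler than the models used in these projects:

```python
import numpy as np

def impute_random_walk(y, q=0.05, r=0.2):
    """Kalman-filter imputation under a random-walk state-space model.
    y: 1-D array with np.nan at missing time points.
    Returns posterior means and standard deviations at every step."""
    m, p = y[~np.isnan(y)][0], 1.0           # initialise at first observation
    means, stds = [], []
    for obs in y:
        p = p + q                             # predict: random-walk dynamics
        if not np.isnan(obs):                 # update only where data exist
            k = p / (p + r)                   # Kalman gain
            m = m + k * (obs - m)
            p = (1 - k) * p
        means.append(m)
        stds.append(np.sqrt(p))
    return np.array(means), np.array(stds)

# Toy physiological trace with a block of missing samples.
t = np.linspace(0, 10, 200)
hr = 60 + 5 * np.sin(t) + 0.5 * np.random.randn(200)
hr[80:120] = np.nan
mean, std = impute_random_walk(hr)  # imputed values and their uncertainty
```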




Inferring the properties and history of Antarctic ice sheets with machine learning

Glaciologists investigate the ice sheets and ice shelves of Antarctica using a variety of methods, key among them radio-echo sounding. This method measures the reflections of emitted radar waves from the internal layers of the ice body and from the interface between the ice and the underlying material. How can we use these data to extract information about historical climate conditions, as well as about the properties of the ice and of what lies below it? We investigate this question together with collaborators in the glaciology and geophysics group led by Prof. Reinhard Drews at the University of Tübingen. To tackle it, we develop approaches that combine physical modelling of processes within the ice sheet with machine-learning-enabled simulation-based inference, providing uncertainty-aware predictions about the state of the ice sheet. Photo: Glaciology & Geophysics Tübingen.