Our goal is to accelerate scientific discovery using machine learning and artificial intelligence: We want to develop computational methods that help scientists interpret empirical data and use them to discover and constrain theoretical models. To this end, we collaborate with experimental researchers from various disciplines. We are particularly interested in applications in the neurosciences: We want to understand how neuronal networks in the brain process sensory information and control intelligent behaviour, and to develop methods for the diagnosis and therapy of neuronal dysfunction.

More information about our research can be found on our publications page and on GitHub.

Simulation-based inference

Many domains of science use computer simulations to study observed phenomena. Consider, for example, physics models of particle movements, models of electrical activity in the brain, or models of the spread of a disease. As these simulations become more and more complex, it becomes increasingly difficult to fit them to data, i.e. to find parameters such that the simulation output reproduces experimentally observed data. We develop methods that solve this problem efficiently by using neural networks to perform Bayesian inference. Read more.
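
To give a flavour of what this looks like in code, here is a minimal sketch of neural posterior estimation using the open-source sbi toolbox; the toy simulator, the prior ranges, and all sample sizes are placeholders, and the exact API names may differ between package versions.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Hypothetical toy simulator: adds observation noise to the parameters.
def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

# Uniform prior over two parameters (placeholder ranges).
prior = BoxUniform(low=-2.0 * torch.ones(2), high=2.0 * torch.ones(2))

# Simulate a training set of (parameter, data) pairs.
theta = prior.sample((1000,))
x = simulator(theta)

# Train a neural density estimator to approximate the posterior p(theta | x).
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# Condition on an "observed" data point and draw posterior samples.
x_o = torch.zeros(2)
samples = posterior.sample((500,), x=x_o)
```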

Deep learning for microscopy

In single-molecule localization microscopy (SMLM), super-resolution images of biological structures are assembled from a large number of individually detected spots. Deep neural networks (DNNs) are well suited to detecting and localizing such patterns in images, but for this application, as for many similar tasks in modern microscopy, no ground-truth data are available for straightforward network training. Together with scientists from the HHMI Janelia Research Campus and EMBL Heidelberg, we develop methods to train DNNs on simulations that closely resemble real data. Networks trained in this way achieve superior performance in difficult conditions and allow for much faster imaging. Read more here and here.
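
The following toy sketch illustrates the underlying idea of training on simulations with known ground truth; it is not our actual pipeline. It renders noisy images, each containing a single fluorescent spot at a known position, and trains a small convolutional network to regress that position. All image sizes, noise levels, and architectural choices are placeholders.

```python
import torch
import torch.nn as nn

def simulate_spots(n, size=32):
    """Render n noisy images, each with one Gaussian spot at a random position."""
    xy = torch.rand(n, 2) * (size - 1)
    grid = torch.arange(size, dtype=torch.float32)
    yy, xx = torch.meshgrid(grid, grid, indexing="ij")
    imgs = torch.exp(-((xx - xy[:, 0, None, None]) ** 2 +
                       (yy - xy[:, 1, None, None]) ** 2) / (2 * 1.5 ** 2))
    imgs = imgs + 0.05 * torch.randn_like(imgs)   # placeholder camera noise
    return imgs.unsqueeze(1), xy / size           # targets normalized to [0, 1]

# Small CNN that maps an image to a predicted (x, y) spot position.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    imgs, xy = simulate_spots(64)                 # fresh simulated batch each step
    loss = nn.functional.mse_loss(net(imgs), xy)
    opt.zero_grad(); loss.backward(); opt.step()
```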

Low-dimensional dynamics

We use sequential variational autoencoders to model population activity and behavior of freely moving mice. More details coming soon.
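
As a rough, purely illustrative sketch of the ingredients involved (not our actual model), the example below implements a bare-bones sequential VAE in PyTorch: a GRU encoder maps a multivariate time series to per-timestep latent variables, and a GRU decoder reconstructs the observations from the latent trajectory. All dimensions are placeholders.

```python
import torch
import torch.nn as nn

class SeqVAE(nn.Module):
    def __init__(self, n_obs, n_latent=8, n_hidden=64):
        super().__init__()
        self.enc = nn.GRU(n_obs, n_hidden, batch_first=True)
        self.to_mu = nn.Linear(n_hidden, n_latent)
        self.to_logvar = nn.Linear(n_hidden, n_latent)
        self.dec = nn.GRU(n_latent, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_obs)

    def forward(self, x):                         # x: (batch, time, n_obs)
        h, _ = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        x_hat = self.readout(self.dec(z)[0])
        recon = ((x - x_hat) ** 2).mean()         # Gaussian reconstruction term
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
        return recon + kl                         # negative ELBO (up to constants)

model = SeqVAE(n_obs=30)
loss = model(torch.randn(16, 100, 30))            # 16 trials, 100 time bins, 30 channels
loss.backward()
```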

Deep learning and the brain

We aim to study intelligent systems and their environments jointly in order to understand stimulus-evoked neural computation and behavior. One way to deepen this understanding is to build and use models of neural circuits and compare their representations to data measured in biological circuits. In particular, deep convolutional neural networks for image classification are compelling models of neural computation in the mammalian visual system, but they lack a one-to-one mapping between artificial and biological neurons. To bring this missing circuitry into such models, we ask how knowledge of the connectivity of neural circuits can be incorporated into models of neural computation. We investigate this question together with scientists from the HHMI Janelia Research Campus, developing connectome-constrained models of the Drosophila visual system that learn, for example, to estimate motion in naturalistic movie sequences.
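
To make the idea of a connectome constraint concrete, here is a schematic sketch rather than our published model: a recurrent network in which a binary adjacency mask fixes which connections may exist, while only the strengths of those connections are learned. The random "connectome", the input dimensionality, and all other sizes are placeholders.

```python
import torch
import torch.nn as nn

class ConnectomeConstrainedRNN(nn.Module):
    def __init__(self, adjacency: torch.Tensor, n_inputs: int):
        super().__init__()
        n = adjacency.shape[0]
        self.register_buffer("mask", adjacency.float())    # 1 where a synapse exists
        self.w = nn.Parameter(0.1 * torch.randn(n, n))      # learnable synaptic strengths
        self.w_in = nn.Parameter(0.1 * torch.randn(n, n_inputs))

    def forward(self, inputs):                              # inputs: (time, batch, n_inputs)
        n = self.mask.shape[0]
        r = torch.zeros(inputs.shape[1], n)
        rates = []
        for x_t in inputs:
            # Recurrent weights are zeroed wherever the connectome has no edge.
            r = torch.relu(r @ (self.w * self.mask).T + x_t @ self.w_in.T)
            rates.append(r)
        return torch.stack(rates)

# Example: a random 100-neuron "connectome" driven by 10-dimensional input.
adj = torch.rand(100, 100) < 0.05
model = ConnectomeConstrainedRNN(adj, n_inputs=10)
out = model(torch.randn(50, 4, 10))                         # (time=50, batch=4, neurons=100)
```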

Simulation-based inference illustration: Franz-Georg Stämmele, taken from this blog.