You can also check out Google Scholar for a more up-to-date publication list.
Research Articles and Reviews
2025
Research Articles
Dax, Maximilian, Green, Stephen R, Gair, Jonathan, Gupte, Nihar, Pürrer, Michael, Raymond, Vivien, Wildberger, Jonas, Macke, Jakob H, Buonanno, Alessandra, Schölkopf, Bernhard
Real-time gravitational-wave inference for binary neutron stars using machine learning Nature, 2025 url |
preprint |
news and views |
briefing
Zucca, Stefano, Schulz, Auguste, Gonçalves, Pedro J, Macke, Jakob H, Aman, Saleem B, Solomon, Sam G
Visual loom caused by self-movement or object-movement elicits distinct responses in mouse superior colliculus Current Biology, 2025 url
Haxel, Lisa, Ahola, Oskari, Belardinelli, Paolo, Ermolova, Maria, Humaidan, Dania, Macke, Jakob H, Ziemann, Ulf
Decoding Motor Excitability in TMS using EEG-Features: An Exploratory Machine Learning Approach IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2025 url
Schulz, Auguste, Vetter, Julius, Gao, Richard, Morales, Daniel, Lobato-Rios, Victor, Ramdya, Pavan, Gonçalves, Pedro J, Macke, Jakob H
Modeling conditional distributions of neural and behavioral data with masked variational autoencoders Cell Reports, 2025 url
Gloeckler, Manuel, Toyota, Shoji, Fukumizu, Kenji, Macke, Jakob H
Compositional simulation-based inference for time series ICLR, 2025 url
Moss, Guy, Višnjević, Vjeran, Eisen, Olaf, Oraschewski, Falk M, Schröder, Cornelius, Macke, Jakob H, Drews, Reinhard
Simulation-Based Inference of Surface Accumulation and Basal Melt Rates of an Antarctic Ice Shelf from Isochronal Layers Journal of Glaciology, 2025 url |
preprint
Vetter, Julius, Gloeckler, Manuel, Gedon, Daniel, Macke, Jakob H
Effortless, Simulation-Efficient Bayesian Inference using Tabular Foundation Models NeurIPS, 2025 url
Deistler, Michael, Kadhim, Kyra L, Pals, Matthijs, Beck, Jonas, Huang, Ziwei, Gloeckler, Manuel, Lappalainen, Janne K, Schröder, Cornelius, Berens, Philipp, Gonçalves, Pedro J, Macke, Jakob H
Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics Nature Methods, 2025 url |
News and Views |
Uni Tübingen press release |
ML for science blog post |
code
Moss, Guy, Muhle, Leah Sophie, Drews, Reinhard, Macke, Jakob H, Schröder, Cornelius
FNOPE: Simulation-based inference on function spaces with Fourier Neural Operators NeurIPS, 2025 url
Bernaerts, Yves, Deistler, Michael, Gonçalves, Pedro J, Beck, Jonas, Stimberg, Marcel, Scala, Federico, Tolias, Andreas S, Macke, Jakob H, Kobak, Dmitry, Berens, Philipp
Combined statistical-biophysical modeling links ion channel genes to physiology of cortical neuron types Patterns, 2025 url
Boelts, Jan, Deistler, Michael, Gloeckler, Manuel, Tejero-Cantero, Álvaro, Lueckmann, Jan-Matthis, Moss, Guy, Steinbach, Peter, Moreau, Thomas, Muratore, Fabio, Linhart, Julia, Durkan, Conor, Vetter, Julius, Miller, Benjamin Kurt, Herold, Maternus, Ziaeemehr, Abolfazl, Pals, Matthijs, Gruner, Theo, Bischoff, Sebastian, Krouglova, Nastya, Gao, Richard, Lappalainen, Janne K, Mucsányi, Bálint, Pei, Felix, Schulz, Auguste, Stefanidi, Zinovia, Rodrigues, Pedro, Schröder, Cornelius, Abu Zaid, Faried, Beck, Jonas, Kapoor, Jaivardhan, Greenberg, David S, Gonçalves, Pedro J, Macke, Jakob H
sbi reloaded: a toolkit for simulation-based inference workflows Journal of Open Source Software, 2025 url
Gupte, Nihar, Ramos-Buades, Antoni, Buonanno, Alessandra, Gair, Jonathan, Miller, M Coleman, Dax, Maximilian, Green, Stephen R, Pürrer, Michael, Wildberger, Jonas, Macke, Jakob H, Romero-Shaw, Isobel M, Schölkopf, Bernhard
Evidence for eccentricity in the population of binary black holes observed by LIGO-Virgo-KAGRA Physical Review D, 2025 url
Manzano-Patrón, JP, Deistler, Michael, Schröder, Cornelius, Kypraios, Theodore, Gonçalves, Pedro J, Macke, Jakob H, Sotiropoulos, Stamatios SN
Uncertainty mapping and probabilistic tractography using Simulation-Based Inference in diffusion MRI: A comparison with classical Bayes Medical Image Analysis, 2025 url
Motallebzadeh, Hamid, Deistler, Michael, Schönleitner, Florian M, Macke, Jakob H, Puria, Sunil
Simulation-based inference for subject-specific tuning of middle ear finite-element models towards personalized objective diagnosis Scientific Reports, 2025 url
Tanoh, Iris C, Deistler, Michael, Macke, Jakob H, Linderman, Scott W
Identifying multi-compartment Hodgkin-Huxley models with high-density extracellular voltage recordings NeurIPS, 2025 url
Haxel, Lisa, Ahola, Otto, Kapoor, Jaivardhan, Ziemann, Ulf, Macke, Jakob H
Personalized real-time inference of momentary excitability from human EEG NeuroImage, 2025 url
Kadhim, Kyra L, Beck, Jonas, Huang, Ziwei, Macke, Jakob H, Rieke, Fred, Euler, Thomas, Deistler, Michael, Berens, Philipp
A data and task-constrained mechanistic model of the mouse outer retina shows robustness to contrast variations NeurIPS, 2025 url
Preprints and Technical Reports
Deistler, Michael, Boelts, Jan, Steinbach, Peter, Moss, Guy, Moreau, Thomas, Gloeckler, Manuel, Rodrigues, Pedro LC, Linhart, Julia, Lappalainen, Janne K, Miller, Benjamin Kurt, Gonçalves, Pedro J, Lueckmann, Jan-Matthis, Schröder, Cornelius, Macke, Jakob H
Simulation-based inference: A practical guide arXiv, 2025 url
Haxel, Lisa, Kapoor, Jaivardhan, Ziemann, Ulf, Macke, Jakob H
EDAPT: Towards Calibration-Free BCIs with Continual Online Adaptation arXiv, 2025 url
Kapoor, Jaivardhan, Macke, Jakob H, Baumgartner, Christian F
MRExtrap: Longitudinal Aging of Brain MRIs using Linear Modeling in Latent Space arXiv, 2025 url
Confavreux, Basile, Harrington, Zoe, Kania, Maciej, Ramesh, Poornima, Krouglova, Anastasia N, Bozelos, Panos A, Macke, Jakob H, Saxe, Andrew M, Gonçalves, Pedro J, Vogels, Tim P
Memory by a thousand rules: Automated discovery of functional multi-type plasticity rules reveals variety and degeneracy at the heart of learning bioRxiv, 2025 url
Kofler, Annalena, Dax, Maximilian, Green, Stephen R, Wildberger, Jonas, Gupte, Nihar, Macke, Jakob H, Gair, Jonathan, Buonanno, Alessandra, Schölkopf, Bernhard
Flexible Gravitational-Wave Parameter Estimation with Transformers arXiv, 2025 url
Ciganda, Daniel, Campón, Ignacio, Permanyer, Iñaki, Macke, Jakob H
Learning Individual Reproductive Behavior from Aggregate Fertility Rates via Neural Posterior Estimation arXiv, 2025 url
Chintaluri, Chaitanya, Podlaski, William, Bozelos, Panos A, Gonçalves, Pedro J, Lueckmann, Jan-Matthis, Macke, Jakob H, Vogels, Tim P
An ion channel omnimodel for standardized biophysical neuron modelling bioRxiv, 2025 url
Bischoff, Sebastian, Poličar, Pavlin G, Mukherjee, Sayak, Macke, Jakob H, Claassen, Manfred, Schröder, Cornelius
velotest: Statistical assessment of RNA velocity embeddings reveals quality differences for reliable trajectory visualizations bioRxiv, 2025 url
2024
Research Articles
Pals, Matthijs, Macke, Jakob H, Barak, Omri
Trained recurrent neural networks develop phase-locked limit cycles in a working memory task PLOS Computational Biology, 2024 url
Vetter, Julius, Macke, Jakob H, Gao, Richard
Generating realistic neurophysiological time series with denoising diffusion probabilistic models Patterns, 2024 url
Vetter, Julius, Moss, Guy, Schröder, Cornelius, Gao, Richard, Macke, Jakob H
Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation NeurIPS, 2024 url
Beck, Jonas, Bosch, Nathanael, Deistler, Michael, Kadhim, Kyra L, Macke, Jakob H, Hennig, Philipp, Berens, Philipp
Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Ordinary Differential Equations arXiv, 2024 url
Haxel, Lisa, Belardinelli, Paolo, Ermolova, Maria, Humaidan, Dania, Macke, Jakob H, Ziemann, Ulf
Decoding Motor Excitability in TMS using EEG-Features: An Exploratory Machine Learning Approach bioRxiv, 2024 url
Bischoff, Sebastian, Darcher, Alana, Deistler, Michael, Gao, Richard, Gerken, Franziska, Gloeckler, Manuel, Haxel, Lisa, Kapoor, Jaivardhan, Lappalainen, Janne K, Macke, Jakob H, Moss, Guy, Pals, Matthijs, Pei, Felix, Rapp, Rachel, Sağtekin, A Erdem, Schröder, Cornelius, Schulz, Auguste, Stefanidi, Zinovia, Toyota, Shoji, Ulmer, Linda, Vetter, Julius
A Practical Guide to Sample-based Statistical Distances for Evaluating Generative Models in Science Transactions on Machine Learning Research, 2024 url
Gloeckler, Manuel, Deistler, Michael, Weilbach, Christian, Wood, Frank, Macke, Jakob H
All-in-one simulation-based inference ICML, 2024 url
Kapoor, Jaivardhan, Schulz, Auguste, Vetter, Julius, Pei, Felix, Gao, Richard, Macke, Jakob H
Latent Diffusion for Neural Spiking Data NeurIPS, 2024 url
Pals, Matthijs, Sağtekin, A Erdem, Pei, Felix, Gloeckler, Manuel, Macke, Jakob H
Inferring stochastic low-rank recurrent neural networks from neural data NeurIPS, 2024 url
Gao, Richard, Deistler, Michael, Schulz, Auguste, Gonçalves, Pedro J, Macke, Jakob H
Deep inverse modeling reveals dynamic-dependent invariances in neural circuit mechanisms bioRxiv, 2024 preprint
Lappalainen, Janne K, Tschopp, Fabian D, Prakhya, Sridhama, McGill, Mason, Nern, Aljoscha, Shinomiya, Kazunori, Takemura, Shin-ya, Gruntman, Eyal, Macke, Jakob H, Turaga, Srinivas C
Connectome-constrained networks predict neural activity across the fly visual system Nature, 2024 url |
code |
briefing |
press release |
blog
Krouglova, Anastasia N, Johnson, Hayden R, Confavreux, Basile, Deistler, Michael, Gonçalves, Pedro J
Multifidelity Simulation-based Inference for Computationally Expensive Simulators arXiv, 2024 url
Schröder, Cornelius, Macke, Jakob H
Simultaneous identification of models and parameters of scientific simulators ICML, 2024 url
Vetter, Julius, Lim, Kathleen, Dijkstra, Tjeerd MH, Dargaville, Peter A, Kohlbacher, Oliver, Macke, Jakob H, Poets, Christian F
Neonatal apnea and hypopnea prediction in infants with Robin sequence with neural additive models for time series PLOS Digital Health, 2024 url
Preprints and Technical Reports
Zeraati, Roxana, Levina, Anna, Macke, Jakob H, Gao, Richard
Neural timescales from a computational perspective arXiv, 2024 url
Gerken, Franziska, Darcher, Alana, Gonçalves, Pedro J, Rapp, Rachel, Elezi, Ismail, Niediek, Johannes, Kehl, Marcel S, Reber, Thomas P, Liebe, Stefanie, Macke, Jakob H, Mormann, Florian, Leal-Taixé, Laura
Decoding movie content from neuronal population activity in the human medial temporal lobe bioRxiv, 2024 url
Harth, Philipp, Udvary, Daniel, Boelts, Jan, Macke, Jakob H, Baum, Daniel, Hege, Hans-Christian, Oberlaender, Marcel
Dissecting origins of wiring specificity in dense reconstructions of neural tissue bioRxiv, 2024 url
2023
Research Articles
Dax, Maximilian, Green, Stephen R, Gair, Jonathan, Pürrer, Michael, Wildberger, Jonas, Macke, Jakob H, Buonanno, Alessandra, Schölkopf, Bernhard
Neural Importance Sampling for Rapid and Reliable Gravitational-Wave Inference PRL, 2023 url |
preprint
Gloeckler, Manuel, Deistler, Michael, Macke, Jakob H
Adversarial robustness of amortized Bayesian inference ICML, 2023 url
Gao, Richard, Deistler, Michael, Macke, Jakob H
Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation NeurIPS, 2023 url
Boelts, Jan, Harth, Philipp, Gao, Richard, Udvary, Daniel, Yanez, Felipe, Baum, Daniel, Hege, Hans-Christian, Oberlaender, Marcel, Macke, Jakob H
Simulation-based inference for efficient identification of generative models in connectomics PLOS Computational Biology, 2023 url
Wildberger, Jonas, Dax, Maximilian, Green, Stephen R, Gair, Jonathan, Pürrer, Michael, Macke, Jakob H, Buonanno, Alessandra, Schölkopf, Bernhard
Adapting to noise distribution shifts in flow-based gravitational-wave inference Physical Review D, 2023 url
Dax, Maximilian, Wildberger, Jonas, Buchholz, Simon, Green, Stephen R, Macke, Jakob H, Schölkopf, Bernhard
Flow Matching for Scalable Simulation-Based Inference NeurIPS, 2023 url
Gorecki, Mila, Macke, Jakob H, Deistler, Michael
Amortized Bayesian Decision Making for simulation-based models arXiv, 2023 url
Moss, Guy, Višnjević, Vjeran, Eisen, Olaf, Oraschewski, Falk M, Schröder, Cornelius, Macke, Jakob H, Drews, Reinhard
Simulation-Based Inference of Surface Accumulation and Basal Melt Rates of an Antarctic Ice Shelf from Isochronal Layers arXiv, 2023 url
Confavreux, Basile, Ramesh, Poornima, Gonçalves, Pedro J, Macke, Jakob H, Vogels, Tim P
Meta-learning families of plasticity rules in recurrent spiking networks using simulation-based inference NeurIPS, 2023 url
Preprints and Technical Reports
Kapoor, Jaivardhan, Macke, Jakob H, Baumgartner, Christian F
Multiscale Metamorphic VAE for 3D Brain MRI Synthesis NeurIPS 2022 Workshop on Medical Imaging meets NeurIPS, 2023 url
Ramesh, Poornima, Confavreux, Basile, Gonçalves, Pedro J, Vogels, Tim P, Macke, Jakob H
Indistinguishable network dynamics can emerge from unalike plasticity rules bioRxiv, 2023 url
2022
Research Articles
Deistler, Michael, Macke, Jakob H, Gonçalves, Pedro J
Energy efficient network activity from disparate circuit parameters PNAS, 2022 url
Ramesh, Poornima, Lueckmann, Jan-Matthis, Boelts, Jan, Tejero-Cantero, Álvaro, Greenberg, David S, Gonçalves, Pedro J, Macke, Jakob H
GATSBI: Generative Adversarial Training for Simulation-Based Inference ICLR, 2022 url
Dax, Maximilian, Green, Stephen R, Gair, Jonathan, Deistler, Michael, Schölkopf, Bernhard, Macke, Jakob H
Group equivariant neural posterior estimation ICLR, 2022 url
Glöckler, Manuel, Deistler, Michael, Macke, Jakob H
Variational methods for simulation-based inference ICLR, 2022 url
Udvary, Daniel, Harth, Philipp, Macke, Jakob H, Hege, Hans-Christian, de Kock, Christiaan PJ, Sakmann, Bert, Oberlaender, Marcel
The impact of neuronal structure on cortical network architecture Cell Reports, 39 (2), 2022 url
Boelts, Jan, Lueckmann, Jan-Matthis, Gao, Richard, Macke, Jakob H
Flexible and efficient simulation-based inference for models of decision-making eLife, 2022 url
Liebe, Stefanie, Niediek, Johannes, Pals, Matthijs, Reber, Thomas P, Faber, Jennifer, Bostroem, Jan, Elger, Christian E, Macke, Jakob H, Mormann, Florian
Phase of firing does not reflect temporal order in sequence memory of humans and recurrent neural networks bioRxiv, 2022 url
Deistler, Michael, Gonçalves, Pedro J, Macke, Jakob H
Truncated proposals for scalable and hassle-free simulation-based inference NeurIPS, 2022 url
Blum, Corinna, Baur, David, Achauer, Lars-Christian, Berens, Philipp, Biergans, Stephanie, Erb, Michael, Hömberg, Volker, Huang, Ziwei, Kohlbacher, Oliver, Liepert, Joachim, Lindig, Tobias, Lohmann, Gabriele, Macke, Jakob H, Römhild, Jörg, Rösinger-Hein, Christine, Zrenner, Brigitte, Ziemann, Ulf
Personalized neurorehabilitative precision medicine: from data to therapies (MWKNeuroReha) - a multi-centre prospective observational clinical trial to predict long-term outcome of patients with acute motor stroke BMC Neurology, 22 (1), pp. 1-15, 2022 url
Monsees, Arne, Voit, Kay-Michael, Wallace, Damian J, Sawinski, Juergen, Charyasz, Edyta, Scheffler, Klaus, Macke, Jakob H, Kerr, Jason ND
Estimation of skeletal kinematics in freely moving rodents Nature Methods, 2022 url
Beck, Jonas, Deistler, Michael, Bernaerts, Yves, Macke, Jakob H, Berens, Philipp
Efficient identification of informative features in simulation-based inference NeurIPS, 2022 url
2021
Research Articles
Dax, Maximilian, Green, Stephen R, Gair, Jonathan, Macke, Jakob H, Buonanno, Alessandra, Schölkopf, Bernhard
Amortized Bayesian inference of gravitational waves with normalizing flows Fourth Workshop on Machine Learning and the Physical Sciences at NeurIPS, 2021 pdf
Pofahl, Martin, Nikbakht, Negar, Haubrich, André N, Nguyen, Theresa, Masala, Nicola, Distler, Fabian, Braganza, Oliver, Macke, Jakob H, Ewell, Laura A, Golcuk, Kurtulus, Beck, Heinz
Synchronous activity patterns in the dentate gyrus during immobility eLife, 10, p. e65786, 2021 pdf
Lueckmann, Jan-Matthis, Boelts, Jan, Greenberg, David S, Gonçalves, Pedro J, Macke, Jakob H
- Benchmarking Simulation-Based Inference
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods. However, a public benchmark with appropriate performance metrics for such 'likelihood-free' algorithms has been lacking. This has made it difficult to compare algorithms and identify their strengths and weaknesses. We set out to fill this gap: We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms including recent approaches employing neural networks and classical Approximate Bayesian Computation methods. We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency. Neural network-based approaches generally exhibit better performance, but there is no uniformly best algorithm. We provide practical advice and highlight the potential of the benchmark to diagnose problems and improve algorithms. The results can be explored interactively on a companion website. All code is open source, making it possible to contribute further benchmark tasks and inference algorithms.
AISTATS, 2021 url
Dehnen, Gert, Kehl, Marcel S, Darcher, Alana, Müller, Tamara T, Macke, Jakob H, Borger, Valeri, Surges, Rainer, Mormann, Florian
Duplicate Detection of Spike Events: A Relevant Problem in Human Single-Unit Recordings Brain Sciences, 11 (6), p. 761, 2021 url
Corna, Andrea, Ramesh, Poornima, Jetter, Florian, Lee, Meng-Jung, Macke, Jakob H, Zeck, Günther
Discrimination of simple objects decoded from the output of retinal ganglion cells upon sinusoidal electrical stimulation Journal of Neural Engineering, 18 (4), p. 046086, 2021 url
Speiser, Artur, Muller, Lucas-Raphael, Hoess, Philipp, Matti, Ulf, Obara, Christopher J, Legant, Wesley R, Kreshuk, Anna, Macke, Jakob H, Ries, Jonas, Turaga, Srinivas C
Deep learning enables fast and dense single-molecule localization with high accuracy Nature Methods, 18, pp. 1082–1090, 2021 url
Lavin, Alexander, Zenil, Hector, Paige, Brooks, Krakauer, David, Gottschlich, Justin, Mattson, Tim, Anandkumar, Anima, Choudry, Sanjay, Rocki, Kamil, Baydin, Atilim Günes, Prunkl, Carina, Isayev, Olexandr, Peterson, Erik, McMahon, Peter L, Macke, Jakob H, Cranmer, Kyle, Zhang, Jiaxin, Wainwright, Haruko, Hanuka, Adi, Veloso, Manuela, Assefa, Samuel, Zheng, Stephan, Pfeffer, Avi
Simulation Intelligence: Towards a New Generation of Scientific Methods arXiv, 2021 url
Dax, Maximilian, Green, Stephen R, Gair, Jonathan, Macke, Jakob H, Buonanno, Alessandra, Schölkopf, Bernhard
Real-time gravitational wave science with neural posterior estimation Physical Review Letters, 127 (24), p. 241103, 2021 url
2020
Research Articles
Sekhar, Sudarshan, Ramesh, Poornima, Bassetto, Giacomo, Zrenner, Eberhart, Macke, Jakob H, Rathbun, Daniel L
Characterizing retinal ganglion cell responses to electrical stimulation using generalized linear models Frontiers in Neuroscience, 14, p. 378, 2020 url
Rene, Alexandre, Longtin, Andre, Macke, Jakob H
Inference of a mesoscopic population model from population spike trains Neural Computation, 32 (8), pp. 1448-1498, 2020
Tejero-Cantero, Alvaro, Boelts, Jan, Deistler, Michael, Lueckmann, Jan-Matthis, Durkan, Conor, Gonçalves, Pedro J, Greenberg, David S, Macke, Jakob H
sbi: A toolkit for simulation-based inference Journal of Open Source Software, 5 (52), p. 2505, 2020 url
Gonçalves, Pedro J, Lueckmann, Jan-Matthis, Deistler, Michael, Nonnenmacher, Marcel, Öcal, Kaan, Bassetto, Giacomo, Chintaluri, Chaitanya, Podlaski, William F, Haddad, Sara A, Vogels, Tim P, Greenberg, David S, Macke, Jakob H
Training deep neural density estimators to identify mechanistic models of neural dynamics eLife, 2020 url
2019
Research Articles
Barrett DGT, Morcos AS, Macke JH
- Analyzing biological and artificial neural networks: challenges with opportunities for synergy?
Deep neural networks (DNNs) transform stimuli across multiple processing stages to produce representations that can be used to solve complex tasks, such as object recognition in images. However, a full understanding of how they achieve this remains elusive. The complexity of biological neural networks substantially exceeds the complexity of DNNs, making it even more challenging to understand the representations they learn. Thus, both machine learning and computational neuroscience are faced with a shared challenge: how can we analyze their representations in order to understand how they solve complex tasks? We review how data-analysis concepts and techniques developed by computational neuroscientists can be useful for analyzing representations in DNNs, and in turn, how recently developed techniques for analysis of DNNs can be useful for understanding representations in biological neural networks. We explore opportunities for synergy between the two fields, such as the use of DNNs as in silico model systems for neuroscience, and how this synergy can lead to new hypotheses about the operating principles of biological neural networks.
Current Opinion in Neurobiology, 55, pp. 55-64, 2019 url |
pdf
Greenberg, D.S., Nonnenmacher, M., Macke, J.H.
Automatic Posterior Transformation for Likelihood-Free Inference Proceedings of the 36th International Conference on Machine Learning, 97, pp. 2404-2414, 2019 url
Ansuini, A, Laio, A, Macke, JH, Zoccolan, D
Intrinsic dimension of data representations in deep neural networks Advances in Neural Information Processing Systems 32, 2019 pdf
Boelts, Jan, Harth, Philipp, Yanez, Felipe, Hege, Hans-Christian, Oberlaender, Marcel, Macke, Jakob H
Bayesian inference for synaptic connectivity rules in anatomically realistic cortical connectomes Bernstein Conference 2019, 2019 pdf
Lueckmann J, Bassetto G, Karaletsos T, Macke JH
- Likelihood-free inference with emulator networks
Approximate Bayesian Computation (ABC) provides methods for Bayesian inference in simulation-based stochastic models which do not permit tractable likelihoods. We present a new ABC method which uses probabilistic neural emulator networks to learn synthetic likelihoods on simulated data -- both local emulators which approximate the likelihood for specific observed data, as well as global ones which are applicable to a range of data. Simulations are chosen adaptively using an acquisition function which takes into account uncertainty about either the posterior distribution of interest, or the parameters of the emulator. Our approach does not rely on user-defined rejection thresholds or distance functions. We illustrate inference with emulator networks on synthetic examples and on a biophysical neuron model, and show that emulators allow accurate and efficient inference even on high-dimensional problems which are challenging for conventional ABC approaches.
Proceedings of Machine Learning Research, 96, pp. 32-53, 2019 url |
pdf
Speiser, Artur, Turaga, Srinivas C, Macke, Jakob H
Teaching deep neural networks to localize sources in super-resolution microscopy by combining simulation-based learning and unsupervised learning arXiv, 2019
Ramesh, Poornima, Atayi, Mohamad, Macke, Jakob H
Adversarial training of neural encoding models on population spike trains Workshop, 2019
Macke, Jakob H, Nienborg, Hendrikje
Choice (-history) correlations in sensory cortex: cause or consequence? Current Opinion in Neurobiology, 58, pp. 148-154, 2019
2018
Research Articles
Berens P, Freeman J, Deneux T, Chenkov N, McColgan T, Speiser A, Macke JH, Turaga S, Mineault P, Rupprecht P, Gerhard S, Friedrich RW, Friedrich J, Paninski L, Pachitariu M, Harris KD, Bolte B, Machado TA, Ringach D, Reimer J, Froudarakis E, Euler T, Roman-Roson M, Theis L, Tolias AS, Bethge M
- Community-based benchmarking improves spike inference from two-photon calcium imaging data
In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike trains from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
PLoS Computational Biology, 14, 2018 bioRxiv |
URL |
pdf
Lueckmann J, Macke JH*, Nienborg H*
- Can serial dependencies in choices and neural activity explain choice probabilities?
During perceptual decisions the activity of sensory neurons co-varies with choice, a co-variation often quantified as “choice-probability”. Moreover, choices are influenced by a subject’s previous choice (serial dependence) and neuronal activity often shows temporal correlations on long (seconds) timescales. Here, we test whether these findings are linked. Using generalized linear models we analyze simultaneous measurements of behavior and V2 neural activity in macaques performing a visual discrimination task. Both, decisions and spiking activity show substantial temporal correlations and cross-correlations but seem to reflect two mostly separate processes. Indeed, removing history effects using semi-partial correlation analysis leaves choice probabilities largely unchanged. The serial dependencies in choices and neural activity therefore cannot explain the observed choice probability. Rather, serial dependencies in choices and spiking activity reflect two predominantly separate but parallel processes, which are coupled on each trial by co-variations between choices and activity. These findings provide important constraints for computational models of perceptual decision-making that include feedback signals.
Journal of Neuroscience, 38, pp. 3495-3506, 2018 url |
pdf
Djurdjevic V, Ansuini A, Bertolini D, Macke JH, Zoccolan D
- Accuracy of Rats in Discriminating Visual Objects Is Explained by the Complexity of Their Perceptual Strategy
Despite their growing popularity as models of visual functions, it remains unclear whether rodents are capable of deploying advanced shape-processing strategies when engaged in visual object recognition. In rats, for instance, pattern vision has been reported to range from mere detection of overall object luminance to view-invariant processing of discriminative shape features. Here we sought to clarify how refined object vision is in rodents, and how variable the complexity of their visual processing strategy is across individuals. To this aim, we measured how well rats could discriminate a reference object from 11 distractors, which spanned a spectrum of image-level similarity to the reference. We also presented the animals with random variations of the reference, and processed their responses to these stimuli to derive subject-specific models of rat perceptual choices. Our models successfully captured the highly variable discrimination performance observed across subjects and object conditions. In particular, they revealed that the animals that succeeded with the most challenging distractors were those that integrated the wider variety of discriminative features into their perceptual strategies. Critically, these strategies were largely preserved when the rats were required to discriminate outlined and scaled versions of the stimuli, thus showing that rat object vision can be characterized as a transformation-tolerant, feature-based filtering process. Overall, these findings indicate that rats are capable of advanced processing of shape information, and point to the rodents as powerful models for investigating the neuronal underpinnings of visual object recognition and other high-level visual functions.
Current Biology, 28, pp. 1005–1015, 2018 url |
dispatch |
pdf |
press
Nonnenmacher M, Turaga SC, Macke JH
- Extracting low-dimensional dynamics from multiple large-scale neural population recordings by learning to predict correlations
A powerful approach for understanding neural population dynamics is to extract low-dimensional trajectories from population recordings using dimensionality reduction methods. Current approaches for dimensionality reduction on neural data are limited to single population recordings, and can not identify dynamics embedded across multiple measurements. We propose an approach for extracting low-dimensional dynamics from multiple, sequential recordings. Our algorithm scales to data comprising millions of observed dimensions, making it possible to access dynamics distributed across large populations or multiple brain areas. Building on subspace-identification approaches for dynamical systems, we perform parameter estimation by minimizing a moment-matching objective using a scalable stochastic gradient descent algorithm: The model is optimized to predict temporal covariations across neurons and across time. We show how this approach naturally handles missing data and multiple partial recordings, and can identify dynamics and predict correlations even in the presence of severe subsampling and small overlap between recordings. We demonstrate the effectiveness of the approach both on simulated data and a whole-brain larval zebrafish imaging dataset.
Advances in Neural Information Processing Systems 30: 31st Conference on Neural Information Processing Systems (NeurIPS 2017), 2018 pdf |
URL |
arXiv |
code
Lueckmann J*, Gonçalves P*, Bassetto G, Oecal K, Nonnenmacher M, Macke JH
- Flexible statistical inference for mechanistic models of neural dynamics
Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails -- both being prevalent issues in models of neural dynamics -- as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling.
Advances in Neural Information Processing Systems 30: 31st Conference on Neural Information Processing Systems (NeurIPS 2017), 2018 pdf |
URL |
arXiv |
code
Speiser A, Jinyao Y, Archer E, Buesing L, Turaga SC, Macke JH
- Fast amortized inference of neural activity from calcium imaging data with variational autoencoders
Calcium imaging permits optical measurement of neural activity. Since intracellular calcium concentration is an indirect measurement of neural activity, computational tools are necessary to infer the true underlying spiking activity from fluorescence measurements. Bayesian model inversion can be used to solve this problem, but typically requires either computationally expensive MCMC sampling, or faster but approximate maximum-a-posteriori optimization. Here, we introduce a flexible algorithmic framework for fast, efficient and accurate extraction of neural spikes from imaging data. Using the framework of variational autoencoders, we propose to amortize inference by training a deep neural network to perform model inversion efficiently. The recognition network is trained to produce samples from the posterior distribution over spike trains. Once trained, performing inference amounts to a fast single forward pass through the network, without the need for iterative optimization or sampling. We show that amortization can be applied flexibly to a wide range of nonlinear generative models and significantly improves upon the state of the art in computation time, while achieving competitive accuracy. Our framework is also able to represent posterior distributions over spike-trains. We demonstrate the generality of our method by proposing the first probabilistic approach for separating backpropagating action potentials from putative synaptic inputs in calcium imaging of dendritic spines.
Advances in Neural Information Processing Systems 30: 31st Conference on Neural Information Processing Systems (NeurIPS 2017), 2018 pdf |
URL |
arXiv |
code
Preprints and Technical Reports
David GT Barrett, Ari S Morcos, Jakob H Macke
- Analyzing biological and artificial neural networks: challenges with opportunities for synergy?
Deep neural networks (DNNs) transform stimuli across multiple processing stages to produce representations that can be used to solve complex tasks, such as object recognition in images. However, a full understanding of how they achieve this remains elusive. The complexity of biological neural networks substantially exceeds the complexity of DNNs, making it even more challenging to understand the representations that they learn. Thus, both machine learning and computational neuroscience are faced with a shared challenge: how can we analyze their representations in order to understand how they solve complex tasks? We review how data-analysis concepts and techniques developed by computational neuroscientists can be useful for analyzing representations in DNNs, and in turn, how recently developed techniques for analysis of DNNs can be useful for understanding representations in biological neural networks. We explore opportunities for synergy between the two fields, such as the use of DNNs as in-silico model systems for neuroscience, and how this synergy can lead to new hypotheses about the operating principles of biological neural networks.
arXiv Preprint, 2018 URL |
pdf
2017
Research Articles
Nonnenmacher M, Behrens C, Berens P, Bethge M, Macke JH
- Signatures of criticality arise from random subsampling in simple population models
The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat—a measure of population statistics derived from thermodynamics—has been used to suggest that neural populations are optimized to operate at a “critical point”. However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect “signatures of criticality”, and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when data with correlations is randomly subsampled during the analysis process, irrespective of the detailed structure or origin of correlations. We also show how the characteristic shape of specific heat capacity curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlations directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.
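The key premise, that random subsampling leaves the average correlation strength independent of population size, is easy to verify numerically with a simple common-input population. This is a hypothetical sketch with invented parameters, not the paper's feed-forward model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Population of 200 binary neurons with common input: each neuron fires when
# a shared Gaussian signal plus private noise crosses a threshold.
N, T = 200, 20_000
common = rng.normal(size=T)
spikes = ((0.6 * common[:, None] + rng.normal(size=(T, N))) > 1.0).astype(float)

def mean_corr(subset):
    # Average pairwise correlation within a randomly chosen subset.
    c = np.corrcoef(spikes[:, subset].T)
    return c[np.triu_indices(len(subset), k=1)].mean()

# Under random subsampling, the average pairwise correlation does not shrink
# with subsample size -- the condition under which the specific heat diverges.
for n in (20, 50, 100):
    print(n, round(mean_corr(rng.choice(N, n, replace=False)), 3))
```

All three subsample sizes report essentially the same average correlation, because a random pair from a random subset is statistically the same as a random pair from the full population.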
PLoS Comput Biol 13(10):e1005718, 2017 URL |
pdf |
supplement |
arXiv |
code
2016
Research Articles
Schuett HH, Harmeling S, Macke JH, Wichmann FA
- Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data
The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and in the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model which is typically used for psychometric function estimation to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion—goodness-of-fit—which can detect overdispersion but provide no method to do correct inference for overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods our software implementation—psignifit 4—performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods typically requiring expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available (https://github.com/wichmann-lab/psignifit) and a python implementation will follow soon.
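The effect of overdispersion on block-wise response counts can be sketched numerically. Assuming the per-block success probability drifts according to a Beta distribution (a non-stationary observer), the count variance follows the beta-binomial formula rather than the binomial one. Parameters here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Non-stationary observer: per-block success probability p ~ Beta(a, b),
# mean 0.8; each block has n = 50 trials.
n, a, b = 50, 8.0, 2.0
p_mean = a / (a + b)

# Simulate many blocks: draw p per block, then binomial counts given p.
p = rng.beta(a, b, size=200_000)
counts = rng.binomial(n, p)

var_binomial = n * p_mean * (1 - p_mean)                # fixed-p model: 8.0
var_betabin = var_binomial * (a + b + n) / (a + b + 1)  # inflated: 8 * 60/11
print(counts.var(), var_betabin)  # empirical variance matches beta-binomial
```

A binomial model fitted to such data would report credible intervals that are far too narrow, which is the failure mode the beta-binomial extension corrects.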
Vision Research, 122, pp. 105-123, 2016 code |
code direct |
pdf |
URL |
Wichmann lab
Park M, Bohner G, Macke JH
- Unlocking neural population non-stationarities using hierarchical dynamics models
Neural population activity often exhibits rich variability. This variability can arise from single-neuron stochasticity, neural dynamics on short time-scales, as well as from modulations of neural firing properties on long time-scales, often referred to as neural non-stationarity. To better understand the nature of co-variability in neural circuits and their impact on cortical information processing, we introduce a hierarchical dynamics model that is able to capture both slow inter-trial modulations in firing rates as well as neural population dynamics. We derive a Bayesian Laplace propagation algorithm for joint inference of parameters and population states. On neural population recordings from primary visual cortex, we demonstrate that our model provides a better account of the structure of neural firing than stationary dynamics models.
Advances in Neural Information Processing Systems 28: 29th Conference on Neural Information Processing Systems (NeurIPS 2015), 2016 pdf |
supplement |
arXiv |
code
2015
Research Articles
Archer EW, Koster U, Pillow JW, Macke JH
- Low-dimensional models of neural population activity in sensory cortical circuits
Neural responses in visual cortex are influenced by visual stimuli and by ongoing spiking activity in local circuits. An important challenge in computational neuroscience is to develop models that can account for both of these features in large multi-neuron recordings and to reveal how stimulus representations interact with and depend on cortical dynamics. Here we introduce a statistical model of neural population activity that integrates a nonlinear receptive field model with a latent dynamical model of ongoing cortical activity. This model captures the temporal dynamics, effective network connectivity in large population recordings, and correlations due to shared stimulus drive as well as common noise. Moreover, because the nonlinear stimulus inputs are mixed by the ongoing dynamics, the model can account for a relatively large number of idiosyncratic receptive field shapes with a small number of nonlinear inputs to a low-dimensional latent dynamical model. We introduce a fast estimation method using online expectation maximization with Laplace approximations. Inference scales linearly in both population size and recording duration. We apply this model to multi-channel recordings from primary visual cortex and show that it accounts for a large number of individual neural receptive fields using a small number of nonlinear inputs and a low-dimensional dynamical model.
Advances in Neural Information Processing Systems 27: 28th Conference on Neural Information Processing Systems (NeurIPS 2014), pp. 343-351, 2015 URL |
pdf
Putzky P, Franzen F, Bassetto G, Macke JH
- A Bayesian model for identifying hierarchically organised states in neural population activity
Neural population activity in cortical circuits is not solely driven by external inputs, but is also modulated by endogenous states. These cortical states vary on multiple time-scales and also across areas and layers of the neocortex. To understand information processing in cortical circuits, we need to understand the statistical structure of internal states and their interaction with sensory inputs. Here, we present a statistical model for extracting hierarchically organized neural population states from multi-channel recordings of neural spiking activity. We model population states using a hidden Markov decision tree with state-dependent tuning parameters and a generalized linear observation model. Using variational Bayesian inference, we estimate the posterior distribution over parameters from population recordings of neural spike trains. On simulated data, we show that we can identify the underlying sequence of population states over time and reconstruct the ground truth parameters. Using extracellular population recordings from visual cortex, we find that a model with two levels of population states outperforms a generalized linear model which does not include state-dependence, as well as models which only include a binary state. Finally, modelling of state-dependence via our model also improves the accuracy with which sensory stimuli can be decoded from the population response.
Advances in Neural Information Processing Systems 27: 28th Annual Conference on Neural Information Processing Systems (NeurIPS 2014), pp. 3095-3103, 2015 URL |
pdf |
supplement |
spotlight |
code
Küffner R, Zach N, Norel R, Hawe J, Schoenfeld D, Wang L, Li G, Fang L, Mackey L, Hardiman O, Cudkowicz M, Sherman A, Ertaylan G, Grosse-Wentrup M, Hothorn T, van Ligtenberg J, Macke JH, Meyer T, Schölkopf B, Tran L, Vaughan R, Stolovitzky G, Leitner ML
- Crowdsourced analysis of clinical trial data to predict amyotrophic lateral sclerosis progression
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease with substantial heterogeneity in its clinical presentation. This makes diagnosis and effective treatment difficult, so better tools for estimating disease progression are needed. Here, we report results from the DREAM-Phil Bowen ALS Prediction Prize4Life challenge. In this crowdsourcing competition, competitors developed algorithms for the prediction of disease progression of 1,822 ALS patients from standardized, anonymized phase 2/3 clinical trials. The two best algorithms outperformed a method designed by the challenge organizers as well as predictions by ALS clinicians. We estimate that using both winning algorithms in future trial designs could reduce the required number of patients by at least 20%. The DREAM-Phil Bowen ALS Prediction Prize4Life challenge also identified several potential nonstandard predictors of disease progression including uric acid, creatinine and surprisingly, blood pressure, shedding light on ALS pathobiology. This analysis reveals the potential of a crowdsourcing competition that uses clinical trial data for accelerating ALS research and development.
Nature Biotechnology, 33 (1), pp. 51-57, 2015 URL |
DOI
Reviews and Book Chapters
Panzeri S, Macke JH, Gross J, Kayser C
- Neural population coding: combining insights from microscopic and mass signals
Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact on local activity and perception. To obtain an integrated perspective on neural information processing we need to combine knowledge from both levels of investigation. We review recent progress of how neural recordings, neuroimaging, and computational approaches begin to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.
Trends in Cognitive Sciences, 19 (3), pp. 162–172, 2015 URL |
DOI |
pdf
Macke JH, Buesing L, Sahani M
Estimating State and Parameters in State Space Models of Spike Trains Advanced State Space Methods for Neural and Clinical Data, 2015 pdf |
code
2014
Research Articles
Fründ I, Wichmann FA, Macke JH
- Quantifying the effect of intertrial dependence on perceptual decisions
In the perceptual sciences, experimenters study the causal mechanisms of perceptual systems by probing observers with carefully constructed stimuli. It has long been known, however, that perceptual decisions are not only determined by the stimulus, but also by internal factors. Internal factors could lead to a statistical influence of previous stimuli and responses on the current trial, resulting in serial dependencies, which complicate the causal inference between stimulus and response. However, the majority of studies do not take serial dependencies into account, and it has been unclear how strongly they influence perceptual decisions. We hypothesize that one reason for this neglect is that there has been no reliable tool to quantify them and to correct for their effects. Here we develop a statistical method to detect, estimate, and correct for serial dependencies in behavioral data. We show that even trained psychophysical observers suffer from strong history dependence. A substantial fraction of the decision variance on difficult stimuli was independent of the stimulus but dependent on experimental history. We discuss the strong dependence of perceptual decisions on internal factors and its implications for correct data interpretation.
Journal of Vision, 14 (7), p. 9, 2014 URL |
DOI |
pdf |
supplement |
code |
news
Turaga SC, Buesing L, Packer AM, Dalgleish H, Pettit N, Hausser M, Macke JH
- Inferring neural population dynamics from multiple partial recordings of the same neural circuit
Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for "stitching" together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population sizes for which population dynamics can be characterized, beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs.
Advances in Neural Information Processing Systems 26: 27th Conference on Neural Information Processing Systems (NeurIPS 2013), pp. 539-547, 2014 URL |
pdf
Reviews and Book Chapters
Macke JH
- Electrophysiology Analysis, Bayesian
Bayesian analysis of electrophysiological data refers to the statistical processing of data obtained in electrophysiological experiments (i.e., recordings of action potentials or voltage measurements with electrodes or imaging devices) which utilize methods from Bayesian statistics. Bayesian statistics is a framework for describing and modelling empirical data using the mathematical language of probability to model uncertainty. Bayesian statistics provides a principled and flexible framework for combining empirical observations with prior knowledge and for quantifying uncertainty. These features are especially useful for analysis questions in which the dataset sizes are small in comparison to the complexity of the model, which is often the case in neurophysiological data analysis.
Encyclopedia of Computational Neuroscience, pp. 1-5, 2014 URL |
DOI |
preprint
2013
Research Articles
Watanabe M, Bartels A, Macke JH, Murayama Y, Logothetis NK
- Temporal Jitter of the BOLD Signal Reveals a Reliable Initial Dip and Improved Spatial Resolution
fMRI, one of the most important noninvasive brain imaging methods, relies on the blood oxygen level-dependent (BOLD) signal, whose precise underpinnings are still not fully understood [1]. It is a widespread assumption that the components of the hemodynamic response function (HRF) are fixed relative to each other in time, leading most studies as well as analysis tools to focus on trial-averaged responses, thus using or estimating a condition- or location-specific “canonical HRF” [2, 3 and 4]. In the current study, we examined the nature of the variability of the BOLD response and asked in particular whether the positive BOLD peak is subject to trial-to-trial temporal jitter. Our results show that the positive peak of the stimulus-evoked BOLD signal exhibits a trial-to-trial temporal jitter on the order of seconds. Moreover, the trial-to-trial variability can be exploited to uncover the initial dip in the majority of voxels by pooling trial responses with large peak latencies. Initial dips exposed by this procedure possess higher spatial resolution compared to the positive BOLD signal in the human visual cortex. These findings allow for the reliable observation of fMRI signals that are physiologically closer to neural activity, leading to improvements in both temporal and spatial resolution.
Current Biology, 23 (21), pp. 2146–2150, 2013 URL |
DOI
Macke JH, Murray I, Latham PE
- Estimation bias in maximum entropy models
Maximum entropy models have become popular statistical models in neuroscience and other areas in biology and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated. Here, we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We focus on pairwise binary models, which are used extensively to model neural population activity. We show that if the data is well described by a pairwise model, the bias is equal to the number of parameters divided by twice the number of observations. If, however, the higher order correlations in the data deviate from those predicted by the model, the bias can be larger. Using a phenomenological model of neural population recordings, we find that this additional bias is highest for small firing probabilities, strong correlations and large population sizes—for the parameters we tested, a factor of about four higher. We derive guidelines for how long a neurophysiological experiment needs to be in order to ensure that the bias is less than a specified criterion. Finally, we show how a modified plug-in estimate of the entropy can be used for bias correction.
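The headline result, a bias equal to the number of parameters divided by twice the number of observations, can be checked with the simplest maximum entropy model: an independent (first-order) model with one rate parameter per neuron. This is an illustrative sketch, not the paper's pairwise analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

def bernoulli_entropy(p):
    # Entropy of a Bernoulli variable, in nats.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log1p(-p))

# n independent neurons: a maximum entropy model with k = n parameters
# (the firing rates), fit to datasets of N samples each.
n, N, reps = 10, 200, 5000
p_true = np.full(n, 0.3)
true_H = bernoulli_entropy(p_true).sum()

# Plug-in entropy of the fitted model, averaged over many simulated datasets.
est = np.empty(reps)
for r in range(reps):
    spikes = rng.random((N, n)) < p_true
    est[r] = bernoulli_entropy(spikes.mean(axis=0)).sum()

bias = true_H - est.mean()
print(bias, n / (2 * N))  # both ≈ 0.025 nats
```

The Monte Carlo bias agrees with the k/(2N) rule because the data here are, by construction, well described by the model; model misspecification would inflate the bias beyond this value, as the abstract notes.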
Entropy, 15 (8), pp. 3109-3219, 2013 URL |
DOI |
pdf |
code
Haefner RM, Gerwinn S, Macke JH, Bethge M
- Inferring decoding strategies from choice probabilities in the presence of correlated variability
The activity of cortical neurons in sensory areas covaries with perceptual decisions, a relationship that is often quantified by choice probabilities. Although choice probabilities have been measured extensively, their interpretation has remained fraught with difficulty. We derive the mathematical relationship between choice probabilities, read-out weights and correlated variability in the standard neural decision-making model. Our solution allowed us to prove and generalize earlier observations on the basis of numerical simulations and to derive new predictions. Notably, our results indicate how the read-out weight profile, or decoding strategy, can be inferred from experimentally measurable quantities. Furthermore, we developed a test to decide whether the decoding weights of individual neurons are optimal for the task, even without knowing the underlying correlations. We confirmed the practicality of our approach using simulated data from a realistic population model. Thus, our findings provide a theoretical foundation for a growing body of experimental results on choice probabilities and correlations.
Nature Neuroscience, 16 (2), pp. 235–242, 2013 URL |
DOI |
news |
code
Buesing L, Macke JH, Sahani M
- Spectral learning of linear dynamics from generalised-linear observations with application to neural population data
Latent linear dynamical systems with generalised-linear observation models arise in a variety of applications, for example when modelling the spiking activity of populations of neurons. Here, we show how spectral learning methods for linear systems with Gaussian observations (usually called subspace identification in this context) can be extended to estimate the parameters of dynamical system models observed through non-Gaussian noise models. We use this approach to obtain estimates of parameters for a dynamical model of neural population data, where the observed spike-counts are Poisson-distributed with log-rates determined by the latent dynamical process, possibly driven by external inputs. We show that the extended system identification algorithm is consistent and accurately recovers the correct parameters on large simulated data sets with much smaller computational cost than approximate expectation-maximisation (EM) due to the non-iterative nature of subspace identification. Even on smaller data sets, it provides an effective initialization for EM, leading to more robust performance and faster convergence. These benefits are shown to extend to real neural data.
Advances in Neural Information Processing Systems 25: 26th Conference on Neural Information Processing Systems (NeurIPS 2012), pp. 1691-1699, 2013 URL |
pdf
2012
Research Articles
Schwartz G, Macke JH, Amodei D, Tang H, Berry MJ
- Low Error Discrimination using a Correlated Population Code
We explored the manner in which spatial information is encoded by retinal ganglion cell populations. We flashed a set of 36 shape stimuli onto the tiger salamander retina and used different decoding algorithms to read out information from a population of 162 ganglion cells. We compared the discrimination performance of linear decoders, which ignore correlation induced by common stimulation, against nonlinear decoders, which can accurately model these correlations. Similar to previous studies, decoders that ignored correlation suffered only a modest drop in discrimination performance for groups of up to ∼30 cells. However, for more realistic groups of 100+ cells, we found order-of-magnitude differences in the error rate. We also compared decoders that used only the presence of a single spike from each cell against more complex decoders that included information from multiple spike counts and multiple time bins. More complex decoders substantially outperformed simpler decoders, showing the importance of spike timing information. Particularly effective was the first spike latency representation, which allowed zero discrimination errors for the majority of shape stimuli. Furthermore, the performance of nonlinear decoders showed even greater enhancement compared to linear decoders for these complex representations. Finally, decoders that approximated the correlation structure in the population by matching all pairwise correlations with a maximum entropy model fit to all 162 neurons were quite successful, especially for the spike latency representation. Together, these results suggest a picture in which linear decoders allow a coarse categorization of shape stimuli, while nonlinear decoders, which take advantage of both correlation and spike timing, are needed to achieve high-fidelity discrimination.
Journal of Neurophysiology, 108 (4), pp. 1069-1088, 2012 URL |
DOI |
code
Buesing L, Macke JH, Sahani M
- Learning stable, regularised latent models of neural population dynamics
Ongoing advances in experimental technique are making commonplace simultaneous recordings of the activity of tens to hundreds of cortical neurons at high temporal resolution. Latent population models, including Gaussian-process factor analysis and hidden linear dynamical system (LDS) models, have proven effective at capturing the statistical structure of such data sets. They can be estimated efficiently, yield useful visualisations of population activity, and are also integral building-blocks of decoding algorithms for brain-machine interfaces (BMI). One practical challenge, particularly to LDS models, is that when parameters are learned using realistic volumes of data the resulting models often fail to reflect the true temporal continuity of the dynamics; and indeed may describe biologically implausible, unstable population dynamics, predicting neural activity that grows without bound. We propose a method for learning LDS models based on expectation maximisation that constrains parameters to yield stable systems and at the same time promotes capture of temporal structure by appropriate regularisation. We show that when only limited training data is available our method yields LDS parameter estimates which provide a substantially better statistical description of the data than alternatives, whilst guaranteeing stable dynamics. We demonstrate our methods using both synthetic data and extracellular multi-electrode recordings from motor cortex.
Network, 23 (1-2), pp. 24-47, 2012 URL |
DOI |
pdf
Macke JH, Murray I, Latham P
- How biased are maximum entropy models?
Maximum entropy models have become popular statistical models in neuroscience and other areas in biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e. the true entropy of the data can be severely underestimated. Here we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We show that if the data is generated by a distribution that lies in the model class, the bias is equal to the number of parameters divided by twice the number of observations. However, in practice, the true distribution is usually outside the model class, and we show here that this misspecification can lead to much larger bias. We provide a perturbative approximation of the maximally expected bias when the true model is out of model class, and we illustrate our results using numerical simulations of an Ising model; i.e. the second-order maximum entropy distribution on binary data.
Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems (NeurIPS 2011), pp. 2034-2042, 2012 URL |
pdf |
code
Macke JH, Büsing L, Cunningham JP, Yu BM, Shenoy KV, Sahani M
- Empirical models of spiking in neural populations
Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare Gaussian and point-process observation models, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.
Advances in Neural Information Processing Systems 24: 25th Conference on Neural Information Processing Systems (NeurIPS 2011), pp. 1350-1358, 2012 URL |
pdf |
code
2011
Research Articles
Macke JH, Gerwinn S, White LW, Kaschube M, Bethge M
- Gaussian process methods for estimating cortical maps
A striking feature of cortical organization is that the encoding of many stimulus features, for example orientation or direction selectivity, is arranged into topographic maps. Functional imaging methods such as optical imaging of intrinsic signals, voltage sensitive dye imaging or functional magnetic resonance imaging are important tools for studying the structure of cortical maps. As functional imaging measurements are usually noisy, statistical processing of the data is necessary to extract maps from the imaging data. We here present a probabilistic model of functional imaging data based on Gaussian processes. In comparison to conventional approaches, our model yields superior estimates of cortical maps from smaller amounts of data. In addition, we obtain quantitative uncertainty estimates, i.e. error bars on properties of the estimated map. We use our probabilistic model to study the coding properties of the map and the role of noise-correlations by decoding the stimulus from single trials of an imaging experiment.
NeuroImage, 56 (2), pp. 570-581, 2011 URL |
DOI |
code Macke JH, Opper M, Bethge M
- Common Input Explains Higher-Order Correlations and Entropy in a Simple Model of Neural Population Activity
Simultaneously recorded neurons exhibit correlations whose underlying causes are not known. Here, we use a population of threshold neurons receiving correlated inputs to model neural population recordings. We show analytically that small changes in second-order correlations can lead to large changes in higher-order redundancies, and that the resulting interactions have a strong impact on the entropy, sparsity, and statistical heat capacity of the population. Our findings for this simple model may explain some surprising effects recently observed in neural population recordings.
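The qualitative effect described above, small pairwise input correlations producing large changes in population statistics, is easy to reproduce with a toy version of the threshold-neuron model (illustrative population size, rate, and correlation only):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, trials, rate = 50, 20000, 0.1
thresh = norm.ppf(1 - rate)  # each neuron spikes with probability `rate`

def pop_counts(rho):
    """Population spike counts from thresholded, equicorrelated Gaussian inputs."""
    C = np.full((n, n), rho)
    np.fill_diagonal(C, 1.0)
    z = rng.multivariate_normal(np.zeros(n), C, size=trials)
    return (z > thresh).sum(axis=1)

var_indep = pop_counts(0.0).var()
var_corr = pop_counts(0.1).var()
print(var_indep, var_corr)
```

Even a latent correlation of 0.1 broadens the population spike-count distribution far beyond the independent case, the kind of disproportionate higher-order effect the paper analyzes analytically.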
Physical Review Letters, 106 (20), p. 208102, 2011 URL |
DOI |
code |
supplement
Reviews and Book Chapters
Macke J, Berens P, Bethge M
- Statistical analysis of multi-cell recordings: linking population coding models to experimental data
Modern recording techniques such as multi-electrode arrays and two-photon imaging methods are capable of simultaneously monitoring the activity of large neuronal ensembles at single cell resolution. These methods finally give us the means to address some of the most crucial questions in systems neuroscience: what are the dynamics of neural population activity? How do populations of neurons perform computations? What is the functional organization of neural ensembles? While the wealth of new experimental data generated by these techniques provides exciting opportunities to test ideas about how neural ensembles operate, it also provides major challenges: multi-cell recordings necessarily yield data which is high-dimensional in nature. Understanding this kind of data requires powerful statistical techniques for capturing the structure of the neural population responses, as well as their relationship with external stimuli or behavioral observations. Furthermore, linking recorded neural population activity to the predictions of theoretical models of population coding has turned out not to be straightforward. These challenges motivated us to organize a workshop at the 2009 Computational Neuroscience Meeting in Berlin to discuss these issues. In order to collect some of the recent progress in this field, and to foster discussion on the most important directions and most pressing questions, we issued a call for papers for this Research Topic. We asked authors to address the following four questions: 1. What classes of statistical methods are most useful for modeling population activity? 2. What are the main limitations of current approaches, and what can be done to overcome them? 3. How can statistical methods be used to empirically test existing models of (probabilistic) population coding? 4. What role can statistical methods play in formulating novel hypotheses about the principles of information processing in neural populations? 
A total of 15 papers addressing questions related to these themes are now collected in this Research Topic. Three of these articles have resulted in "Focused reviews" in Frontiers in Neuroscience (Crumiller et al., 2011; Rosenbaum et al., 2011; Tchumatchenko et al., 2011), illustrating the great interest in the topic. Many of the articles are devoted to a better understanding of how correlations arise in neural circuits, and how they can be detected, modeled, and interpreted. For example, by modeling how pairwise correlations are transformed by spiking non-linearities in simple neural circuits, Tchumatchenko et al. (2010) show that pairwise correlation coefficients have to be interpreted with care, since their magnitude can depend strongly on the temporal statistics of their input-correlations. In a similar spirit, Rosenbaum et al. (2010) study how correlations can arise and accumulate in feed-forward circuits as a result of pooling of correlated inputs. Lyamzin et al. (2010) and Krumin et al. (2010) present methods for simulating correlated population activity and extend previous work to more general settings. The method of Lyamzin et al. (2010) allows one to generate synthetic spike trains which match commonly reported statistical properties, such as time-varying firing rates as well as signal and noise correlations. The Hawkes framework presented by Krumin et al. (2010) allows one to fit models of recurrent population activity to the correlation-structure of experimental data. Louis et al. (2010) present a novel method for generating surrogate spike trains which can be useful when trying to assess the significance and time-scale of correlations in neural spike trains. Finally, Pipa and Munk (2011) study spike synchronization in prefrontal cortex during working memory.
A number of studies are also devoted to advancing our methodological toolkit for analyzing various aspects of population activity (Gerwinn et al., 2010; Machens, 2010; Staude et al., 2010; Yu et al., 2010). For example, Gerwinn et al. (2010) explain how full probabilistic inference can be performed in the popular model class of generalized linear models (GLMs), and study the effect of using prior distributions on the parameters of the stimulus and coupling filters. Staude et al. (2010) extend a method for detecting higher-order correlations between neurons via population spike counts to non-stationary settings. Yu et al. (2010) describe a new technique for estimating the information rate of a population of neurons using frequency-domain methods. Machens (2010) introduces a novel extension of principal component analysis for separating the variability of a neural response into different sources. Focusing less on the spike responses of neural populations but on aggregate signals of population activity, Boatman-Reich et al. (2010) and Hoerzer et al. (2010) describe methods for a quantitative analysis of field potential recordings. While Boatman-Reich et al. (2010) discuss a number of existing techniques in a unified framework and highlight the potential pitfalls associated with such approaches, Hoerzer et al. (2010) demonstrate how multivariate autoregressive models and the concept of Granger causality can be used to infer local functional connectivity in area V4 of behaving macaques. A final group of studies is devoted to understanding experimental data in light of computational models (Galán et al., 2010; Pandarinath et al., 2010; Shteingart et al., 2010). Pandarinath et al. (2010) present a novel mechanism that may explain how neural networks in the retina switch from one state to another by a change in gap junction coupling, and conjecture that this mechanism might also be found in other neural circuits. Galán et al. 
(2010) present a model of how hypoxia may change the network structure in the respiratory networks in the brainstem, and analyze neural correlations in multi-electrode recordings in light of this model. Finally, Shteingart et al. (2010) show that the spontaneous activation sequences they find in cultured networks cannot be explained by Zipf’s law, but rather require a wrestling model. The papers of this Research Topic thus span a wide range of topics in the statistical modeling of multi-cell recordings. Together with other recent advances, they provide us with a useful toolkit to tackle the challenges presented by the vast amount of data collected with modern recording techniques. The impact of novel statistical methods on the field and their potential to generate scientific progress, however, depends critically on how readily they can be adopted and applied by laboratories and researchers working with experimental data. An important step toward this goal is to also publish computer code along with the articles (Barnes, 2010) as a successful implementation of advanced methods also relies on many details which are hard to communicate in the article itself. In this way it becomes much more likely that other researchers can actually use the methods, and unnecessary re-implementations can be avoided. Some of the papers in this Research Topic already follow this goal (Gerwinn et al., 2010; Louis et al., 2010; Lyamzin et al., 2010). We hope that this practice becomes more and more common in the future and encourage authors and editors of Research Topics to make as much code available as possible, ideally in a format that can be easily integrated with existing software sharing initiatives (Herz et al., 2008; Goldberg et al., 2009).
Frontiers in Computational Neuroscience, 5 (35), pp. 1-2, 2011 URL |
DOI |
pdf-book(big!) Gerwinn S, Macke JH, Bethge M
- Reconstructing stimuli from the spike-times of leaky integrate and fire neurons
Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals which are varying continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time-points at which spikes were observed, especially if these time-points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate and fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem, by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
Frontiers in Neuroscience, 5 (1), pp. 1-16, 2011 URL |
DOI
2010
Research Articles
Lyamzin DR, Macke JH, Lesica NA
- Modeling population spike trains with specified time-varying spike rates, trial-to-trial variability, and pairwise signal and noise correlations
As multi-electrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modeling such data must also be developed. Here, we present a model for the type of data commonly recorded in early sensory pathways: responses to repeated trials of a sensory stimulus in which each neuron has its own time-varying spike rate (as described by its PSTH) and the dependencies between cells are characterized by both signal and noise correlations. This model is an extension of previous attempts to model population spike trains designed to control only the total correlation between cells. In our model, the response of each cell is represented as a binary vector given by the dichotomized sum of a deterministic "signal" that is repeated on each trial and a Gaussian random "noise" that is different on each trial. This model allows the simulation of population spike trains with PSTHs, trial-to-trial variability, and pairwise correlations that match those measured experimentally. Furthermore, the model also allows the noise correlations in the spike trains to be manipulated independently of the signal correlations and single-cell properties. To demonstrate the utility of the model, we use it to simulate and manipulate experimental responses from the mammalian auditory and visual systems. We also present a general form of the model in which both the signal and noise are Gaussian random processes, allowing the mean spike rate, trial-to-trial variability, and pairwise signal and noise correlations to be specified independently. Together, these methods for modeling spike trains comprise a potentially powerful set of tools for both theorists and experimentalists studying population responses in sensory systems.
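The construction described above, a fixed "signal" shared across trials plus fresh correlated Gaussian "noise", dichotomized into spikes, can be sketched for two cells (a simplified toy with hypothetical parameters, not the paper's full procedure for matching measured PSTHs and correlations):

```python
import numpy as np

rng = np.random.default_rng(5)
T, trials, rho_noise = 500, 200, 0.4
s = rng.normal(size=T)  # shared "signal", identical on every trial

spikes = np.empty((2, trials, T), dtype=int)
for tr in range(trials):
    # Correlated Gaussian "noise", redrawn on each trial
    e1 = rng.normal(size=T)
    e2 = rho_noise * e1 + np.sqrt(1 - rho_noise**2) * rng.normal(size=T)
    spikes[0, tr] = s + e1 > 1.0   # dichotomize: signal + noise vs threshold
    spikes[1, tr] = s + e2 > 1.0

psth = spikes.mean(axis=1)                    # time-varying rate estimates
resid = spikes - psth[:, None, :]             # subtract the signal component
signal_corr = np.corrcoef(psth[0], psth[1])[0, 1]
noise_corr = np.corrcoef(resid[0].ravel(), resid[1].ravel())[0, 1]
print(signal_corr, noise_corr)
```

Because the two cells share the signal but receive only partially correlated noise, the trial-averaged PSTHs are nearly identical (high signal correlation) while the trial-to-trial residuals show a weaker noise correlation that could be tuned via `rho_noise` without touching the PSTHs.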
Frontiers in Computational Neuroscience, 4 (144), pp. 1-11, 2010 URL |
DOI |
pdf Macke JH, Wichmann FA
- Estimating predictive stimulus features from psychophysical data: The decision image technique applied to human faces
One major challenge in the sensory sciences is to identify the stimulus features on which sensory systems base their computations, and which are predictive of a behavioral decision: they are a prerequisite for computational models of perception. We describe a technique (decision images) for extracting predictive stimulus features using logistic regression. A decision image not only defines a region of interest within a stimulus but is a quantitative template which defines a direction in stimulus space. Decision images thus enable the development of predictive models, as well as the generation of optimized stimuli for subsequent psychophysical investigations. Here we describe our method and apply it to data from a human face classification experiment. We show that decision images are able to predict human responses not only in terms of overall percent correct but also in terms of the probabilities with which individual faces are (mis-)classified by individual observers. We show that the most predictive dimension for gender categorization is neither aligned with the axis defined by the two class-means, nor with the first principal component of all faces, two hypotheses frequently entertained in the literature. Our method can be applied to a wide range of binary classification tasks in vision or other psychophysical contexts.
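The core of the decision-image technique is a logistic regression whose fitted weight vector defines a direction in stimulus space. A toy sketch on synthetic "stimuli" (all dimensions and parameters hypothetical) shows the fitted weights recovering the generative discriminative direction:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 20, 2000
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)          # true discriminative direction

X = rng.normal(size=(n, d))               # synthetic "stimuli"
p = 1 / (1 + np.exp(-2 * (X @ w_true)))   # choice probabilities
y = rng.random(n) < p                     # simulated binary responses

# Logistic regression by plain gradient descent; the fitted weight
# vector plays the role of the "decision image".
w = np.zeros(d)
for _ in range(500):
    grad = X.T @ (1 / (1 + np.exp(-X @ w)) - y) / n
    w -= 1.0 * grad

alignment = (w @ w_true) / np.linalg.norm(w)
print(alignment)
```

The cosine between the fitted and generative directions approaches 1, which is exactly the sense in which a decision image is a "quantitative template" rather than merely a region of interest.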
Journal of Vision, 10 (5), p. 22, 2010 URL |
DOI |
pdf Gerwinn S, Macke J, Bethge M
- Bayesian inference for generalized linear models for spiking neurons
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
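Expectation Propagation itself is too involved for a short sketch; as a stand-in with the same kind of output, a Gaussian posterior yielding a point estimate plus error bars, here is a Laplace approximation for a Poisson GLM under a Gaussian prior (illustrative only; this is neither the authors' EP algorithm nor their Laplace-prior setup):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 5, 500
w_true = 0.3 * rng.normal(size=d)
X = rng.normal(size=(n, d))
y = rng.poisson(np.exp(X @ w_true))       # simulated spike counts

# Newton ascent to the MAP of a Poisson GLM with a Gaussian prior,
# then a Laplace (Gaussian) approximation to the posterior.
prior_prec = 1.0
w = np.zeros(d)
for _ in range(50):
    rate = np.exp(X @ w)
    grad = X.T @ (y - rate) - prior_prec * w
    H = X.T @ (rate[:, None] * X) + prior_prec * np.eye(d)
    w += np.linalg.solve(H, grad)

post_cov = np.linalg.inv(H)               # approximate posterior covariance
se = np.sqrt(np.diag(post_cov))           # Bayesian error bars per weight
print(w, se)
```

The diagonal of `post_cov` gives per-parameter uncertainty estimates of the kind the abstract describes as Bayesian confidence intervals.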
Frontiers in Computational Neuroscience, 4 (12), pp. 1-17, 2010 URL |
DOI |
pdf Macke JH, Gerwinn S, Kaschube M, White LE, Bethge M
- Bayesian estimation of orientation preference maps
Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium imaging and voltage sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial and temporal scales. Here, we present Bayesian methods based on Gaussian processes for extracting topographic maps from functional imaging data. In particular, we focus on the estimation of orientation preference maps (OPMs) from intrinsic signal imaging data. We model the underlying map as a bivariate Gaussian process, with a prior covariance function that reflects known properties of OPMs, and a noise covariance adjusted to the data. The posterior mean can be interpreted as an optimally smoothed estimate of the map, and can be used for model based interpolations of the map from sparse measurements. By sampling from the posterior distribution, we can get error bars on statistical properties such as preferred orientations, pinwheel locations or pinwheel counts. Finally, the use of an explicit probabilistic model facilitates interpretation of parameters and quantitative model comparisons. We demonstrate our model both on simulated data and on intrinsic signal imaging data from ferret visual cortex.
Advances in Neural Information Processing Systems 22: 23rd Conference on Neural Information Processing Systems (NeurIPS 2009), pp. 1195-1203, 2010 URL |
pdf |
code
2009
Research Articles
Gerwinn S, Macke JH, Bethge M
- Bayesian population decoding of spiking neurons
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
Frontiers in Computational Neuroscience, 3 (21), pp. 1-14, 2009 URL |
DOI |
pdf Macke JH, Berens P, Ecker AS, Tolias AS, Bethge M
- Generating Spike Trains with Specified Correlation Coefficients
Spike trains recorded from populations of neurons can exhibit substantial pairwise correlations between neurons and rich temporal structure. Thus, for the realistic simulation and analysis of neural systems, it is essential to have efficient methods for generating artificial spike trains with specified correlation structure. Here we show how correlated binary spike trains can be simulated by means of a latent multivariate Gaussian model. Sampling from the model is computationally very efficient and, in particular, feasible even for large populations of neurons. The entropy of the model is close to the theoretical maximum for a wide range of parameters. In addition, this framework naturally extends to correlations over time and offers an elegant way to model correlated neural spike counts with arbitrary marginal distributions.
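The latent-Gaussian (dichotomized Gaussian) construction can be sketched directly: choose the latent correlation numerically so that the thresholded binary variables hit a target spike probability and correlation coefficient (toy two-neuron version with hypothetical targets, not the paper's code):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

rng = np.random.default_rng(0)
p, r_target = 0.2, 0.1        # desired spike probability and binary correlation
t = norm.ppf(1 - p)           # firing threshold on the latent Gaussian

def binary_corr(rho):
    """Pearson correlation of the thresholded pair for latent correlation rho."""
    # P(both spike) = P(z1 > t, z2 > t), by symmetry the CDF at (-t, -t)
    p11 = multivariate_normal(cov=[[1, rho], [rho, 1]]).cdf([-t, -t])
    return (p11 - p**2) / (p * (1 - p))

# Invert the relation numerically: which latent rho hits the target?
rho = brentq(lambda r: binary_corr(r) - r_target, -0.99, 0.99)

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000)
spikes = (z > t).astype(int)
print(spikes.mean(axis=0), np.corrcoef(spikes.T)[0, 1])
```

The sampled spike trains match the requested rates and correlation; the same thresholding trick extends to larger populations and, via extra latent dimensions, to correlations over time.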
Neural Computation, 21 (2), pp. 397-423, 2009 URL |
DOI |
pdf |
code
Preprints and Technical Reports
Macke JH, Opper M, Bethge M
- The effect of pairwise neural correlations on global population statistics
Simultaneously recorded neurons often exhibit correlations in their spiking activity. These correlations shape the statistical structure of the population activity, and can lead to substantial redundancy across neurons. Here, we study the effect of pairwise correlations on the population spike count statistics and redundancy in populations of threshold-neurons in which response-correlations arise from correlated Gaussian inputs. We investigate the scaling of the redundancy as the population size is increased, and compare the asymptotic redundancy in our models to the corresponding maximum- and minimum entropy models.
MPG Technical Report, (183), 2009 PDF
2008
Research Articles
Ku S-P, Gretton A, Macke J, Logothetis NK
- Comparison of Pattern Recognition Methods in Classifying High-resolution BOLD Signals Obtained at High Magnetic Field in Monkeys
Pattern recognition methods have shown that functional magnetic resonance imaging (fMRI) data can reveal significant information about brain activity. For example, in the debate of how object categories are represented in the brain, multivariate analysis has been used to provide evidence of a distributed encoding scheme [Science 293:5539 (2001) 2425-2430]. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success [Nature Reviews 7:7 (2006) 523-534]. In this study, we compare four popular pattern recognition methods: correlation analysis, support-vector machines (SVM), linear discriminant analysis (LDA) and Gaussian naïve Bayes (GNB), using data collected at high field (7 Tesla) with higher resolution than usual fMRI studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no method performs above chance level. An important factor in overall classification performance is careful preprocessing of the data, including dimensionality reduction, voxel selection and outlier elimination.
Magnetic Resonance Imaging, 26 (7), pp. 1007-1014, 2008 URL |
DOI |
pdf Macke JH, Zeck G, Bethge M
- Receptive Fields without Spike-Triggering
Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods.
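The reverse-correlation-by-CCA idea can be sketched with plain numpy: whiten the stimulus and response covariances, take the SVD of the cross-covariance, and read off the leading stimulus-side canonical vector as the "population receptive field" (synthetic data and illustrative dimensions, not the paper's recordings):

```python
import numpy as np

rng = np.random.default_rng(6)
n, ds, dr = 5000, 10, 8
stim = rng.normal(size=(n, ds))
feat = rng.normal(size=ds)
feat /= np.linalg.norm(feat)              # the one informative stimulus feature
readout = rng.normal(size=dr)             # how the population expresses it
resp = np.outer(stim @ feat, readout) + 0.5 * rng.normal(size=(n, dr))

def inv_sqrt(C):
    """Inverse matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals**-0.5) @ vecs.T

Xc, Yc = stim - stim.mean(0), resp - resp.mean(0)
Cxx, Cyy, Cxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
U, S, Vt = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy))
w_stim = inv_sqrt(Cxx) @ U[:, 0]          # stimulus-side canonical vector

align = abs(w_stim @ feat) / np.linalg.norm(w_stim)
print(S[0], align)
```

The leading canonical pair couples the informative stimulus direction to the population's readout pattern, so `w_stim` recovers `feat` without ever triggering on individual spikes.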
Advances in Neural Information Processing Systems 20: 21st Annual Conference on Neural Information Processing Systems (NeurIPS 2007), pp. 969-976, 2008 URL |
pdf Gerwinn S, Macke J, Seeger M, Bethge M
- Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
Advances in Neural Information Processing Systems 20: 21st Conference on Neural Information Processing Systems (NeurIPS 2007), pp. 529-536, 2008 URL |
pdf Macke JH, Maack N, Gupta R, Denk W, Schölkopf B, Borst A
- Contour-propagation Algorithms for Semi-automated Reconstruction of Neural Processes
A new technique, Serial Block Face Scanning Electron Microscopy (SBFSEM), allows for automatic sectioning and imaging of biological tissue with a scanning electron microscope. Image stacks generated with this technology have a resolution sufficient to distinguish different cellular compartments, including synaptic structures, which should make it possible to obtain detailed anatomical knowledge of complete neuronal circuits. Such an image stack contains several thousands of images and is recorded with a minimal voxel size of 10-20 nm in the x- and y-directions and 30 nm in the z-direction. Consequently, a tissue block of 1 mm³ (the approximate volume of the Calliphora vicina brain) will produce several hundred terabytes of data. Therefore, highly automated 3D reconstruction algorithms are needed. As a first step in this direction we have developed semiautomated segmentation algorithms for a precise contour tracing of cell membranes. These algorithms were embedded into an easy-to-operate user interface, which allows direct 3D observation of the extracted objects during the segmentation of image stacks. Compared to purely manual tracing, processing time is greatly accelerated.
Journal of Neuroscience Methods, 167 (2), pp. 349-357, 2008 URL |
DOI |
pdf
2007
Research Articles
Bethge M, Gerwinn S, Macke JH
- Unsupervised learning of a steerable basis for invariant image representations
There are two aspects to unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect, and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of steerability and Lie groups, which have been developed in the context of filter design. In particular, we show how an anti-symmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibits a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. complex cells) from sequences of natural images.
Human Vision and Electronic Imaging XII: Proceedings of the SPIE Human Vision and Electronic Imaging Conference 2007, pp. 1-12, 2007 URL |
DOI |
pdf Laub J, Macke JH, Müller K-R, Wichmann FA
- Inducing Metric Violations in Human Similarity Judgements
Attempting to model human categorization and similarity judgements is both a very interesting and an exceedingly difficult challenge. Some of the difficulty arises because of conflicting evidence whether human categorization and similarity judgements should or should not be modelled as operating on a mental representation that is essentially metric. Intuitively, this has a strong appeal as it would allow (dis)similarity to be represented geometrically as distance in some internal space. Here we show how a single stimulus, carefully constructed in a psychophysical experiment, introduces l2 violations in what used to be an internal similarity space that could be adequately modelled as Euclidean. We term this one influential data point a conflictual judgement. We present an algorithm of how to analyse such data and how to identify the crucial point. Thus there may not be a strict dichotomy between either a metric or a non-metric internal space but rather degrees to which potentially large subsets of stimuli are represented metrically with a small subset causing a global violation of metricity.
Advances in Neural Information Processing Systems 19: 20th Conference on Neural Information Processing Systems (NeurIPS 2006), pp. 777-784, 2007 URL |
pdf