Resources

We have started to share code accompanying our publications, most of it in Matlab or Python. This listing also includes packages for which a substantial portion of the code was developed by collaborators of the group. Details and references can be found in the respective repositories.

Code

Linear dynamical system models with Poisson observations for modelling neural population spike trains

Code | CodeSSID

Neural population spike trains can exhibit substantial correlations both across neurons and across time. For data obtained from cortical multi-electrode recordings, these correlations are unlikely to arise solely from direct synaptic interactions, but are more likely to arise from cortical dynamics or shared modulatory influences. A simple model for capturing the structure of correlations in cortical recordings is to assume that they arise from a common coupling to an underlying dynamical system. A common simplification is to assume that the underlying dynamics are low-dimensional and linear. As neural spike trains consist of multivariate point processes (or, in discrete time, count processes), such low-dimensional dynamics need to be combined with a suitable observation model, e.g. a Poisson model. We call the resulting model a Poisson-observation Linear Dynamical System (PLDS). We developed a number of different estimation procedures for PLDS models, including both Laplace and variational approximations for state inference, expectation maximization (EM) for parameter learning, and nonlinear subspace identification for fast parameter estimation (or as initialization for EM).
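
To make the model concrete, here is a minimal generative sketch of a PLDS in Python/NumPy. All parameter values and dimensions are made up for illustration; this is a sketch of the model class, not the repository code.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_latent, n_neurons = 200, 3, 30

# stable latent dynamics: orthogonal matrix scaled to have spectral radius < 1
A = 0.95 * np.linalg.qr(rng.standard_normal((n_latent, n_latent)))[0]
Q = 0.1 * np.eye(n_latent)                              # innovation covariance
C = 0.5 * rng.standard_normal((n_neurons, n_latent))    # loading matrix
d = np.log(0.05) * np.ones(n_neurons)                   # baseline log firing rates

x = np.zeros((T, n_latent))                             # latent trajectory
y = np.zeros((T, n_neurons), dtype=int)                 # spike counts
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(n_latent), Q)
    y[t] = rng.poisson(np.exp(C @ x[t] + d))            # Poisson counts with log link
```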

The methods are described in Macke et al 2011, Buesing et al 2012 and reviewed in Macke et al 2015, and were developed in collaboration with Lars Buesing, Maneesh Sahani, John Cunningham, Byron Yu and Krishna Shenoy.

Much of this code was written by Lars Buesing (Gatsby Unit & Columbia University, bitbucket). He also provided implementations of nuclear-norm minimisation (as developed in Pfau et al 2013, github) and exponential family PCA.

Yuan Gao contributed code for fitting generalized count linear models, as described in Gao, Buesing, Shenoy & Cunningham, NIPS 2015.

Code for [Subspace Identification](https://bitbucket.org/mackelab/pop_spike_dyn/downloads/Buesing_Macke_2013_PLSID.pdf) can be found in a separate repository.

Modelling multivariate binary data with correlations

Code

Multivariate, correlated binary data arises in a variety of applications. In neuroscience, spike trains recorded from populations of neurons are often analyzed after discretizing the data into binary multivariate ‘words’, which often exhibit correlations across neurons. The statistical structure of these ‘spike words’ has attracted substantial interest: What models are well suited for capturing these correlations? What insights can we gain from these statistics about putative underlying mechanisms? What are the functional consequences of these correlations? We have tackled these questions both by developing statistical models for discretized neural population spike trains, and by theoretically analyzing the properties of these models. The repository pop_spike provides a number of statistical methods for multivariate binary data, including

  • Dichotomized Gaussian models for multivariate binary and count data, as described in Macke et al 2009. The Dichotomized Gaussian provides a very efficient method for sampling multivariate binary data with specified second-order correlations, and with higher-order correlations which are realistic for neural population data (a minimal sampling sketch is given after this list).

  • Second-order maximum entropy models for multivariate binary data, also known as Ising models in statistical physics (Schwartz et al 2013), and for calculating the entropy bias in these models, as described in Macke et al 2013 and Macke et al 2012.

  • Dichotomized Gaussian and maximum entropy models for homogeneous populations, in which many theoretical quantities of interest (including entropy and specific heat) can be calculated easily, as described in Macke et al 2011.
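
As an illustration of the Dichotomized Gaussian idea above, here is a minimal sampling sketch in Python/NumPy. The latent correlation matrix is chosen by hand for illustration; the actual fitting code instead solves for the latent Gaussian parameters that reproduce the measured means and pairwise correlations.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_neurons, n_samples = 5, 10000

p = 0.2 * np.ones(n_neurons)            # target spike probabilities per neuron
gamma = norm.ppf(p)                     # thresholds on a zero-mean latent Gaussian
# hand-picked latent correlations (compound symmetry with correlation 0.3)
Lambda = 0.3 * np.ones((n_neurons, n_neurons)) + 0.7 * np.eye(n_neurons)

z = rng.multivariate_normal(np.zeros(n_neurons), Lambda, size=n_samples)
words = (z < gamma).astype(int)         # thresholding gives correlated binary 'words'

print(words.mean(axis=0))               # empirical rates, close to p
print(np.corrcoef(words.T)[0, 1])       # pairwise correlation induced by Lambda
```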

The code in these repositories was developed in collaboration with Philipp Berens, Alexander Ecker, Andreas Tolias, Matthias Bethge, Manfred Opper, Iain Murray and Peter Latham. The MCMC methods for fitting maximum entropy models to large neural populations are based on code developed by Tamara Broderick and Michael Berry. Marcel Nonnenmacher is currently working on the ‘next generation’ of these methods, which is not online yet but which we are happy to share on request.

Psignifit 4 - Bayesian psychometric function fitting without a pain in the neck

Code

Psychometric functions play a central role in psychology and behavioural neuroscience, and are used extensively to model detection or discrimination behaviour as a function of an independent variable, e.g. the contrast of a visual stimulus. After data collection, researchers frequently fit a psychometric function to their data, relating the independent variable on the abscissa to the observer’s behaviour on the ordinate. Previous Bayesian inference methods for psychometric function estimation are based on MCMC methods, which can be fiddly in practice and require manual interaction and expert knowledge. Together with Heiko Schuett, Stefan Harmeling and Felix Wichmann, we overcame these limitations using numerical integration methods. We also included a beta-binomial observation model, which is important for modelling non-stationary observers. The repository implementing Heiko’s method (Psignifit 4) is available at the github page of the Wichmann lab.
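
The toy example below illustrates the general idea of grid-based Bayesian inference for a psychometric function (numerical integration instead of MCMC). The parameterisation, priors and data are made up for illustration and do not correspond to the Psignifit 4 interface.

```python
import numpy as np
from scipy.stats import norm, binom

def psych_fun(x, m, w, gamma=0.5, lam=0.02):
    """Cumulative-Gaussian psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1 - gamma - lam) * norm.cdf(x, loc=m, scale=w)

# stimulus levels, numbers of trials and of correct responses (made-up data)
x = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
n = np.array([40, 40, 40, 40, 40])
k = np.array([22, 25, 31, 38, 40])

# evaluate the posterior on a grid over threshold m and width w (flat priors)
m_grid = np.linspace(0.05, 2.0, 200)
w_grid = np.linspace(0.05, 2.0, 200)
M, W = np.meshgrid(m_grid, w_grid, indexing="ij")

log_post = np.zeros_like(M)
for xi, ni, ki in zip(x, n, k):
    log_post += binom.logpmf(ki, ni, psych_fun(xi, M, W))

post = np.exp(log_post - log_post.max())
post /= post.sum()                                    # normalise by summing over the grid
print("posterior mean threshold:", (post * M).sum())  # expectation of m under the posterior
```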

Gaussian Process methods for modelling cortical maps

Code and data | paper | paper 2

The primary visual cortex of primates and carnivores exhibits a remarkable organization: neurons are organized into topographic maps according to their tuning properties. Most strikingly, neurons are arranged according to their preferred orientation into ‘orientation preference maps’ (OPMs), which include both ‘iso-orientation domains’ and ‘pinwheels’, i.e. singularities around which multiple orientation columns are arranged radially.

Estimation of OPMs from neural imaging measurements is a challenging statistical problem. We developed Gaussian process methods for modelling orientation preference maps which make it possible to encode prior knowledge about the statistical structure of OPMs as well as about the correlation structure of the noise, with the goal of estimating maps more efficiently and of quantifying the uncertainty about the estimated map.
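
The sketch below shows the generic Gaussian process regression computation underlying this kind of map estimation (posterior mean and pointwise variance under a smoothness prior). It uses a plain squared-exponential kernel and i.i.d. noise on a small pixel grid for illustration; the papers use an OPM-specific prior and a structured noise model.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=2.0, variance=1.0):
    """Squared-exponential kernel between two sets of 2-D pixel locations."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(2)
# pixel grid and noisy observations of a smooth underlying map
grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2).astype(float)
true_map = np.sin(grid[:, 0] / 3.0) * np.cos(grid[:, 1] / 4.0)
y = true_map + 0.3 * rng.standard_normal(len(grid))

K = se_kernel(grid, grid)
noise_var = 0.3 ** 2
Kinv = np.linalg.inv(K + noise_var * np.eye(len(grid)))
post_mean = K @ Kinv @ y                                      # posterior mean of the map
post_var = np.diag(K) - np.einsum("ij,jk,ki->i", K, Kinv, K)  # pointwise posterior variance
print(np.abs(post_mean - true_map).mean(), post_var.mean())
```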

The method was developed with Matthias Bethge, Matthias Kaschube and Leonard White, and is described in Macke et al 2011 and Macke et al 2010.

Gaussian process factor analysis with Poisson observations

Code

Advances in multi-cell recording techniques allow neuroscientists to record neural activity from large populations of neurons. A popular approach to analysing neural population activity is to identify a smooth, low-dimensional summary of the population activity. Gaussian process factor analysis (GPFA) is a powerful technique for neural dimensionality reduction. However, it is based on a simplifying assumption that is not appropriate for neural spike trains: spike counts conditioned on the intensity are Gaussian distributed, which fails to capture the discrete nature of neural spike counts or the mean-variance relationships typically observed in neural spike trains. Here we propose to extend GPFA using a more realistic assumption for spike generation: spike counts conditioned on the intensity are Poisson distributed, and the intensity is a non-linear function of the latent state. In his master's thesis, Hooram developed and implemented methods for fitting Gaussian process factor analysis models with Poisson observations.
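
A minimal generative sketch of the model class (GPFA with Poisson observations) is given below. The kernel, the exponential link and all parameter values are assumptions chosen for illustration, not the settings used in the repository.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_latent, n_neurons = 100, 2, 25
times = np.arange(T, dtype=float)[:, None]

def rbf(t1, t2, tau=10.0):
    """Smoothness-inducing RBF kernel over time points."""
    return np.exp(-0.5 * (t1 - t2.T) ** 2 / tau ** 2)

# draw smooth latent trajectories from a GP prior over time
K = rbf(times, times) + 1e-6 * np.eye(T)
X = rng.multivariate_normal(np.zeros(T), K, size=n_latent).T    # shape (T, n_latent)

C = 0.4 * rng.standard_normal((n_neurons, n_latent))            # loading matrix
d = np.log(0.1) * np.ones(n_neurons)                            # baseline log firing rates
rates = np.exp(X @ C.T + d)                                     # non-linear (exponential) link
spikes = rng.poisson(rates)                                     # Poisson counts, shape (T, n_neurons)
```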

Modelling inter-trial dependence in psychophysics

Code

Psychophysical experiments are used extensively in psychology and behavioural neuroscience. While it is often assumed that different trials of an experiment are statistically independent, it is well known that there are many violations of this assumption, and both human observers and animals often exhibit so-called ‘sequential dependencies’ across multiple trials. Together with Ingo Fruend and Felix Wichmann, we developed statistical methods for modelling psychophysical data with sequential dependencies. The method can be used to estimate psychometric functions in the presence of serial dependence, and also to gain insight into the structure of the dependencies (e.g. how do errors on previous trials influence the decision on the next trial?). Methods are described in detail in Fruend et al 2014.
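
The toy sketch below conveys the basic idea of modelling serial dependence by adding history regressors (here just the previous response) to a logistic response model. The simulated data, features and weights are invented for illustration; the actual model of Fruend et al 2014 is richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials = 2000
stim = rng.uniform(-1, 1, n_trials)                  # signed stimulus strength

# simulate an observer whose choices are biased towards repeating the last response
resp = np.zeros(n_trials, dtype=int)
for t in range(n_trials):
    hist = 0.5 * (2 * resp[t - 1] - 1) if t > 0 else 0.0
    p = 1.0 / (1.0 + np.exp(-(3.0 * stim[t] + hist)))
    resp[t] = rng.random() < p

# history feature: previous response coded as -1/+1 (0 on the first trial)
prev_resp = np.concatenate([[0], 2 * resp[:-1] - 1])
X = np.column_stack([stim, prev_resp])
model = LogisticRegression().fit(X, resp)
print("stimulus and history weights:", model.coef_)  # non-zero history weight = serial dependence
```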

Poisson dynamical system models for nonstationary population spike trains

Code | Data (Tolias lab, BCM) | Paper

Neural population activity often exhibits rich variability. This variability can arise from single-neuron stochasticity, neural dynamics on short time-scales, as well as from modulations of neural firing properties on long time-scales, often referred to as neural non-stationarity. To better understand the nature of co-variability in neural circuits and their impact on cortical information processing, we introduced a hierarchical dynamics model that is able to capture both slow inter-trial modulations in firing rates as well as neural population dynamics, and derived a Bayesian Laplace propagation algorithm for joint inference of parameters and population states. This repository includes a Matlab implementation of the method described in our NIPS paper. The method was developed and implemented by Mijung Park and Gergo Bohner (Gatsby Unit, UCL).
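
To illustrate the hierarchical structure (slow across-trial modulation on top of fast within-trial latent dynamics), here is a small generative sketch. The random-walk gain and all parameter values are assumptions made for illustration only and do not reproduce the model in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, T, n_latent, n_neurons = 50, 100, 2, 20

A = 0.9 * np.eye(n_latent)                                # fast within-trial dynamics
C = 0.5 * rng.standard_normal((n_neurons, n_latent))      # loading matrix
d = np.log(0.05)                                          # baseline log firing rate

# slow inter-trial modulation of firing rates (random walk across trials)
gain = np.cumsum(0.05 * rng.standard_normal(n_trials))

spikes = np.zeros((n_trials, T, n_neurons), dtype=int)
for k in range(n_trials):
    x = np.zeros(n_latent)
    for t in range(T):
        x = A @ x + 0.3 * rng.standard_normal(n_latent)
        spikes[k, t] = rng.poisson(np.exp(C @ x + d + gain[k]))
```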

Variational Inference on Tree Structured Hidden Markov Models

Code | Paper

Methods for fitting tree-structured hidden Markov models with binary observations to identify hierarchical states.

 

Courses

Machine Learning I, GS Neural Information Processing

Tuebingen, 2012

repository

Slides, lecture notes and exercises are available for the course ‘Machine Learning I’ which Jakob taught together with Matthias Bethge at the Graduate School of Neural Information Processing in WS 2012.

 

Meetings

Journal Clubs

If you are interested in joining one of our reading groups, please send an email to Jakob.

Reading group history

Date   Paper
09.02.17 “Gaussian Processes for Machine Learning”, by C. E. Rasmussen and C. Williams  
26.01.17 “Rényi Divergence Variational Inference”, by Y. Li and R. E. Turner  
12.01.17 “MCMC using Hamiltonian dynamics”, R. M. Neal  
08.12.16 “Automatic Variational ABC”, by A. Moreno et al.  
17.11.16 “Fast epsilon-free inference of simulation models with Bayesian conditional density estimation” by G. Papamakarios and I. Murray  
18.08.16 “Reinforcement learning”, tutorial by M. E. Harmon and S. S. Harmon  
04.08.16 “Bayesian methods for hackers”, by C. Davidson-Pilon. We talked about probabilistic programming in PyMC. The session was based on chapters 1 to 3 of the above-mentioned book.  
21.07.16 Chapter 7 of “Theoretical Neuroscience”, by P. Dayan and L. F. Abbott  
16.06.16 “The Levenberg-Marquardt Algorithm” by A. Ranganathan  
02.06.16 “A tutorial on approximate Bayesian computation” by B. M. Turner, T. Van Zandt  
12.05.16 “Information Theory, Inference, and Learning Algorithms” by D. J. C. MacKay  
28.04.16 “Information Theory, Inference, and Learning Algorithms” by D. J. C. MacKay  
14.04.16 “Structured Sparsity through Convex Optimization” by F. Bach et al.  
31.03.16 “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers” by Boyd et al.  
11.03.16 “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers” by Boyd et al.  
26.02.16 “MuProp: Unbiased Backpropagation for Stochastic Neural Networks” by S. Gu et al.  
10.02.16 “A Recurrent Latent Variable Model for Sequential Data” by J. Chung et al.  
28.01.16 “Black box variational inference for state space models” by E. Archer et al.  
14.01.16 “Auto-Encoding Variational Bayes” by D. P. Kingma, M. Welling  
07.01.16 “A Critical Review of Recurrent Neural Networks for Sequence Learning” by Z. C. Lipton et al.  
10.12.15 “ImageNet Classification with Deep Convolutional Neural Networks” by A. Krizhevsky et al.  
26.11.15 “Practical Bayesian Optimization of Machine Learning Algorithms” by J. Snoek et al.  
05.11.15 “Slice sampling covariance hyperparameters of latent Gaussian models” by I. Murray, R. P. Adams  
08.10.15 “Firefly Monte Carlo: Exact MCMC with Subsets of Data” by D. Maclaurin, R. P. Adams