We share code accompanying most of our publications, most of it in Matlab or Python. This listing also includes packages for which a substantial portion of the code was developed by collaborators of the group. Details and references can be found in the respective repositories. Please refer to the publications page for further resources and code, or to our GitHub page github.com/mackelab.
Dimensionality reduction across multiple partial recordings | Code
A powerful approach for understanding neural population dynamics is to extract low-dimensional trajectories from population recordings using dimensionality reduction methods. However, many neural activity recordings do not capture activity in all relevant parts of the neural population at the same time. Most methods for dimensionality reduction on neural data are limited to single population recordings and cannot identify dynamics embedded across multiple measurements.
We developed a method for extracting low-dimensional dynamics from multiple, sequential recordings. Building on subspace-identification approaches for dynamical systems, the algorithm scales to millions of observed variables. It naturally handles missing data and multiple partial recordings, and can identify dynamics in relevant subspaces and predict correlations even under severe subsampling, small overlap between recordings, and substantial blocks of missing data.
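The flavour of this prediction problem can be illustrated with a small numpy sketch (not the actual algorithm; all parameter values here are made up): neurons are driven by a shared latent linear dynamical system, and the model-implied covariance C Σ Cᵀ predicts correlations even for pairs of neurons that were never recorded in the same session.

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, n_neurons, T = 2, 6, 20000

A = 0.95 * np.array([[0.9, 0.1], [-0.1, 0.9]])  # stable latent dynamics
C = rng.normal(size=(n_neurons, d_latent))      # observation (loading) matrix
Q = 0.1 * np.eye(d_latent)                      # innovation noise covariance

# stationary latent covariance: fixed point of Sigma = A Sigma A^T + Q
Sigma = np.eye(d_latent)
for _ in range(500):
    Sigma = A @ Sigma @ A.T + Q

# simulate observations driven by the shared latent state
noise = rng.multivariate_normal(np.zeros(d_latent), Q, size=T)
x = np.zeros(d_latent)
Y = np.empty((T, n_neurons))
for t in range(T):
    x = A @ x + noise[t]
    Y[t] = C @ x + 0.05 * rng.normal(size=n_neurons)

# suppose neurons 0-3 and 2-5 were recorded in separate sessions: the pair
# (0, 5) is never observed jointly, yet the model predicts its covariance
emp = np.cov(Y.T)
pred = C @ Sigma @ C.T
err = abs(emp[0, 5] - pred[0, 5])
```

In the actual method the parameters are of course estimated from the partial recordings themselves, rather than assumed known as in this toy demonstration.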
Modelling multivariate binary data with correlations | Website | Code
Multivariate, correlated binary data arises in a variety of applications. In neuroscience, spike trains recorded from populations of neurons are often analyzed after discretizing the data into binary multivariate ‘words’, which often exhibit correlations across neurons. The statistical structure of these ‘spike words’ has attracted substantial interest: What models are well suited for capturing these correlations? What insights can we gain from these statistics about putative underlying mechanisms? What are the functional consequences of these correlations? We have tackled these questions both by developing statistical models for discretized neural population spike trains and by theoretically analyzing the properties of these models. The toolbox CorBinian includes a number of statistical methods for multivariate binary data.
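For illustration, correlated binary ‘spike words’ can be generated with a dichotomized-Gaussian construction, one of the model classes in this line of work (a minimal numpy sketch with made-up correlation values; thresholding at zero fixes all firing probabilities at 0.5, other rates would require Gaussian quantiles):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 50000

# latent Gaussian correlations control the correlations of the binary words
corr = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.5],
                 [0.2, 0.5, 1.0]])
z = rng.multivariate_normal(np.zeros(3), corr, size=n_samples)
words = (z > 0.0).astype(int)    # threshold the latent Gaussian -> binary words

rates = words.mean(axis=0)       # marginal firing probabilities (0.5 by construction)
pairwise = np.corrcoef(words.T)  # binary correlations, ordered like the latent ones
```

Note that the binary correlations come out weaker than the latent Gaussian ones, one of the properties that make fitting such models to data non-trivial.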
CorBinian is the successor to our pop_spike repository, which was developed in collaboration with Philipp Berens, Alexander Ecker, Andreas Tolias, Matthias Bethge, Manfred Opper, Iain Murray and Peter Latham. The MCMC methods for fitting maximum entropy models to large neural populations are based on code developed by Tamara Broderick and Michael Berry.
Density estimation likelihood-free inference | GitHub | Docs | Paper
delfi is a Python package for density estimation likelihood-free inference.
Several different inference algorithms are implemented.
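The likelihood-free setting that delfi addresses can be illustrated with a toy rejection sampler (this is not the delfi API, and the simulator here is made up): the likelihood of the simulator is never evaluated, only samples from it are used.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, n=100):
    """Black-box simulator: data whose likelihood we pretend not to know."""
    return rng.normal(theta, 1.0, size=n)

x_obs = simulator(1.5)    # 'observed' data, generated with theta = 1.5
s_obs = x_obs.mean()      # summary statistic

# toy rejection ABC: keep parameters whose simulated summary matches the data
theta_prior = rng.uniform(-5, 5, size=20000)
accepted = [th for th in theta_prior
            if abs(simulator(th).mean() - s_obs) < 0.05]
posterior_mean = np.mean(accepted)
```

delfi replaces this wasteful rejection step with conditional density estimation: a neural network is trained on (parameter, simulation) pairs to directly approximate the posterior.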
Linear dynamical system models with Poisson observations for modelling neural population spike trains | Code | Code (SSID)
Neural population spike trains can exhibit substantial correlations both across neurons and across time. For data obtained from cortical multi-electrode recordings, these correlations are unlikely to arise solely from direct synaptic interactions, but more likely reflect cortical dynamics or shared modulatory influences. A simple way of capturing the structure of correlations in cortical recordings is to assume that they arise from a common coupling to an underlying dynamical system, and that the underlying dynamics are low-dimensional and linear. As neural spike trains are multivariate point processes (or, in discrete time, count processes), such low-dimensional dynamics need to be combined with a suitable observation model, e.g. a Poisson model. We call the resulting model a Poisson-observation Linear Dynamical System (PLDS). We developed a number of different estimation procedures for PLDS models, including Laplace and variational approximations for state inference, expectation maximization (EM) for parameter learning, and nonlinear subspace identification for fast parameter estimation (or as an initialization for EM).
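The generative side of a PLDS is compact; a numpy sketch (made-up parameter values, exponential link function) of sampling latent trajectories and Poisson spike counts:

```python
import numpy as np

rng = np.random.default_rng(3)
d_latent, n_neurons, T = 2, 10, 1000

theta = 0.1
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # slowly rotating dynamics
C = 0.5 * rng.normal(size=(n_neurons, d_latent))        # loading matrix
d = np.log(0.05) * np.ones(n_neurons)                   # baseline log-rate per bin

x = np.zeros((T, d_latent))
y = np.zeros((T, n_neurons), dtype=int)
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.1 * rng.normal(size=d_latent)  # linear latent dynamics
for t in range(T):
    rate = np.exp(C @ x[t] + d)   # log-rate is linear in the latent state
    y[t] = rng.poisson(rate)      # Poisson observation model
```

Inference has to invert this process: recover the latent trajectories x and the parameters (A, C, d) from the observed counts y, which is what the estimation procedures above provide.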
The methods are described in Macke et al 2011, Buesing et al 2012 and reviewed in Macke et al 2015, and were developed in collaboration with Lars Buesing, Maneesh Sahani, John Cunningham, Byron Yu and Krishna Shenoy.
Much of this code was written by Lars Buesing (Gatsby Unit & Columbia University, bitbucket). He also provided implementations of nuclear-norm minimisation (as developed in Pfau et al 2013, github) and exponential family PCA.
Yuan Gao contributed code for fitting generalized count linear models, as described in Gao, Buesing, Shenoy & Cunningham, NIPS 2015.
Psignifit 4: Bayesian psychometric function fitting without a pain in the neck | Code
Psychometric functions play a central role in psychology and behavioural neuroscience, and are used extensively to model detection or discrimination behaviour as a function of an independent variable, e.g. the contrast of a visual stimulus. After data collection, researchers frequently fit a psychometric function to their data, relating the independent variable on the abscissa to the observer’s behaviour on the ordinate. Previous Bayesian methods for psychometric function estimation are based on MCMC, which can be fiddly in practice and requires manual interaction and expert knowledge. Together with Heiko Schuett, Stefan Harmeling and Felix Wichmann, we overcame these limitations using numerical integration methods. We also included a beta-binomial observation model, which is important for modelling non-stationary observers. The repository implementing Heiko’s method (Psignifit 4) is available at the GitHub page of the Wichmann lab.
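The core numerical idea, computing the posterior by brute-force integration on a parameter grid rather than by MCMC, fits in a few lines; a toy sketch with a cumulative-Gaussian psychometric function and made-up data (guess and lapse rates, which the full method also infers, are fixed to zero here):

```python
import numpy as np
from math import erf, sqrt

def psych(x, m, w):
    """Cumulative-Gaussian psychometric function: threshold m, width w."""
    return 0.5 * (1.0 + erf((x - m) / (w * sqrt(2.0))))

# stimulus levels, trials per level, correct responses (made-up data)
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
n = np.array([40, 40, 40, 40, 40])
k = np.array([8, 15, 24, 33, 38])

# posterior over (m, w) by numerical integration on a grid, flat prior
ms = np.linspace(0.0, 3.0, 121)
ws = np.linspace(0.1, 2.0, 96)
logpost = np.zeros((len(ms), len(ws)))
for i, m in enumerate(ms):
    for j, w in enumerate(ws):
        p = np.clip([psych(xi, m, w) for xi in x], 1e-9, 1 - 1e-9)
        logpost[i, j] = np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
post = np.exp(logpost - logpost.max())
post /= post.sum()

m_mean = (post.sum(axis=1) * ms).sum()  # posterior mean of the threshold
```

Because the grid is evaluated deterministically, there are no convergence diagnostics or tuning parameters to worry about, which is exactly the practical advantage over MCMC.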
Gaussian Process methods for modelling cortical maps | Code and data | Python code | paper | paper 2
The primary visual cortex of primates and carnivores exhibits a remarkable organization: neurons are organized into topographic maps according to their tuning properties. Most strikingly, neurons are arranged according to their preferred orientation into ‘orientation preference maps’ (OPMs), which include both ‘iso-orientation domains’ and ‘pinwheels’, i.e. singularities around which multiple orientation columns are arranged radially.
Estimating OPMs from neural imaging measurements is a challenging statistical problem. We developed Gaussian process methods for modelling orientation preference maps which make it possible to encode prior knowledge about the statistical structure of OPMs as well as about the correlation structure of the noise, with the goal of estimating maps more efficiently and of obtaining estimates of the uncertainty about the estimated map.
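A common prior of this kind treats the map as a smooth complex-valued Gaussian random field whose phase angle gives the preferred orientation; a numpy sketch of sampling a synthetic pinwheel-rich map from such a prior (the bandpass filter parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64  # pixels per side

# complex Gaussian white noise, smoothed by a bandpass (annulus) filter,
# gives a random field with a typical spatial wavelength
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
r = np.sqrt(fx**2 + fy**2)
bandpass = np.exp(-((r - 0.1) ** 2) / (2 * 0.02**2))
field = np.fft.ifft2(np.fft.fft2(z) * bandpass)

# preferred orientation: half the phase of the complex field, in (-pi/2, pi/2]
opm = 0.5 * np.angle(field)
```

Zeros of the complex field become pinwheels of the orientation map, which is why a complex-valued Gaussian prior is a natural fit for OPM structure.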
Gaussian process factor analysis with Poisson observations | Code
Advances in multi-cell recording techniques allow neuroscientists to record neural activity from large populations of neurons. A popular approach to analysing neural population activity is to identify a smooth, low-dimensional summary of the population activity. Gaussian process factor analysis (GPFA) is a powerful technique for neural dimensionality reduction. However, it is based on a simplifying assumption that is not appropriate for neural spike trains: spike counts conditioned on the intensity are assumed to be Gaussian distributed, which fails to capture the discrete nature of neural spike counts or the mean-variance relationships typically observed in neural spike trains. We therefore extended GPFA using a more realistic assumption for spike generation: spike counts conditioned on the intensity are Poisson distributed, and the intensity is a non-linear function of the latent state. In his master's thesis, Hooram developed and implemented methods for fitting Gaussian process factor analysis models with Poisson observations.
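The generative model can be sketched in numpy (one latent dimension and made-up parameters): a smooth latent trajectory drawn from a squared-exponential GP, an exponential link, and Poisson counts.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_neurons = 200, 8
t = np.arange(T, dtype=float)

# squared-exponential GP prior over a one-dimensional latent trajectory
tau = 20.0  # timescale of the latent process, in bins
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / tau**2) + 1e-6 * np.eye(T)
latent = rng.multivariate_normal(np.zeros(T), K)

c = 0.5 * rng.normal(size=n_neurons)      # loading weights
d = -1.5                                  # baseline log-rate
rates = np.exp(np.outer(latent, c) + d)   # non-linear (exponential) link
counts = rng.poisson(rates)               # Poisson-distributed spike counts
```

Unlike in standard GPFA, the Poisson likelihood makes the posterior over the latent trajectory non-Gaussian, which is what makes inference in this model harder and is the problem the fitting methods address.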
Modelling inter-trial dependence in psychophysics | Code
Psychophysical experiments are used extensively in psychology and behavioural neuroscience. While it is often assumed that different trials of an experiment are statistically independent, this assumption is frequently violated: both human observers and animals often exhibit so-called ‘sequential dependencies’ across multiple trials. Together with Ingo Fruend and Felix Wichmann, we developed statistical methods for modelling psychophysical data with sequential dependencies. The method can be used to estimate psychometric functions in the presence of serial dependence, and also to gain insight into the structure of the dependencies (e.g. how do errors on previous trials influence the decision on the next trial?). The methods are described in detail in Fruend et al 2014.
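The core idea, augmenting the decision model with regressors built from previous trials, can be sketched with a toy history-dependent observer and a plain logistic regression (made-up weights; the published model and fitting procedure are richer than this):

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials = 5000

stim = rng.choice([-1.0, 1.0], size=n_trials)
beta_stim, beta_hist = 2.0, 0.8   # positive history weight: tendency to repeat
resp = np.zeros(n_trials)
for t in range(n_trials):
    drive = beta_stim * stim[t] + (beta_hist * resp[t - 1] if t > 0 else 0.0)
    p = 1.0 / (1.0 + np.exp(-drive))
    resp[t] = 1.0 if rng.random() < p else -1.0

# recover both weights by logistic regression (plain gradient ascent)
X = np.stack([stim[1:], resp[:-1]], axis=1)  # current stimulus, previous response
y = (resp[1:] + 1) / 2
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p) / len(y)
```

A significantly non-zero history weight in such a fit is exactly the kind of sequential dependence the method is designed to detect and correct for.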
Poisson dynamical system models for nonstationary population spike trains | Code | Data (Tolias lab, BCM) | Paper
Neural population activity often exhibits rich variability. This variability can arise from single-neuron stochasticity, neural dynamics on short time-scales, as well as from modulations of neural firing properties on long time-scales, often referred to as neural non-stationarity. To better understand the nature of co-variability in neural circuits and its impact on cortical information processing, we introduced a hierarchical dynamics model that captures both slow inter-trial modulations in firing rates and fast neural population dynamics, and derived a Bayesian Laplace-propagation algorithm for joint inference of parameters and population states. This repository includes a Matlab implementation of the method from our NIPS paper. The method was developed and implemented by Mijung Park and Gergo Bohner (Gatsby Unit, UCL).
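The hierarchical structure can be sketched generatively in numpy (made-up parameters): a slow per-trial gain modulates firing on top of fast within-trial dynamics, and both feed into Poisson observations.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials, T, n_neurons = 50, 100, 5

# slow non-stationarity: per-trial log-gain follows a random walk across trials
gain = np.cumsum(0.1 * rng.normal(size=n_trials))

C = 0.3 * rng.normal(size=n_neurons)
counts = np.zeros((n_trials, T, n_neurons), dtype=int)
for tr in range(n_trials):
    x = 0.0
    for t in range(T):
        x = 0.9 * x + 0.2 * rng.normal()       # fast within-trial dynamics
        rate = np.exp(C * x + gain[tr] - 2.0)  # slow gain shifts all log-rates
        counts[tr, t] = rng.poisson(rate)

# mean counts per trial track the slow gain
trial_means = counts.mean(axis=(1, 2))
```

Joint inference has to disentangle these two sources of co-variability, which is why the model treats them at separate levels of the hierarchy.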
Variational Inference on Tree Structured Hidden Markov Models | Code | Paper
Methods for fitting tree-structured hidden Markov models with binary observations, used to identify hierarchical states.
Machine Learning I, GS Neural Information Processing
Slides, lecture notes and exercises are available for the course ‘Machine Learning I’, which Jakob taught together with Matthias Bethge at the Graduate School for Neural Information Processing in WS 2012.