11 research outputs found

    Non-reversible Gaussian processes for identifying latent dynamical structure in neural data

    A common goal in the analysis of neural data is to compress large population recordings into sets of interpretable, low-dimensional latent trajectories. This problem can be approached using Gaussian process (GP)-based methods which provide uncertainty quantification and principled model selection. However, standard GP priors do not distinguish between underlying dynamical processes and other forms of temporal autocorrelation. Here, we propose a new family of “dynamical” priors over trajectories, in the form of GP covariance functions that express a property shared by most dynamical systems: temporal non-reversibility. Non-reversibility is a universal signature of autonomous dynamical systems whose state trajectories follow consistent flow fields, such that any observed trajectory could not occur in reverse. Our new multi-output GP kernels can be used as drop-in replacements for standard kernels in multivariate regression, but also in latent variable models such as Gaussian process factor analysis (GPFA). We therefore introduce GPFADS (Gaussian Process Factor Analysis with Dynamical Structure), which models single-trial neural population activity using low-dimensional, non-reversible latent processes. Unlike previously proposed non-reversible multi-output kernels, ours admits a Kronecker factorization enabling fast and memory-efficient learning and inference. We apply GPFADS to synthetic data and show that it correctly recovers ground truth phase portraits. GPFADS also provides a probabilistic generalization of jPCA, a method originally developed for identifying latent rotational dynamics in neural data. When applied to monkey M1 neural recordings, GPFADS discovers latent trajectories with strong dynamical structure in the form of rotations.
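
    To make non-reversibility concrete: a stationary multi-output GP is temporally reversible exactly when its matrix-valued covariance K(τ) is symmetric at every lag, so a valid kernel with an antisymmetric cross-covariance component defines a prior under which trajectories and their time-reversals are not exchangeable. The sketch below is illustrative only (it is not the paper's kernel and has no Kronecker structure): it builds a two-output kernel from the real and imaginary parts of a complex GP with covariance exp(-τ²/(2ℓ²))·exp(iωτ), and checks that sampled trajectories sweep a consistently signed area, as a rotational flow field would.

    ```python
    import numpy as np

    def nonreversible_kernel(tau, ell=0.5, omega=2.0 * np.pi):
        """Illustrative 2-output stationary kernel: real/imaginary parts of a
        complex GP with covariance exp(-tau^2 / (2 ell^2)) * exp(i omega tau).
        The antisymmetric sin component makes K(tau) != K(tau).T (non-reversible)."""
        env = 0.5 * np.exp(-tau**2 / (2.0 * ell**2))
        return env * np.array([[np.cos(omega * tau), -np.sin(omega * tau)],
                               [np.sin(omega * tau),  np.cos(omega * tau)]])

    ts = np.linspace(0.0, 2.0, 100)
    N = len(ts)

    # Assemble the full 2N x 2N covariance over the time grid.
    K = np.zeros((2 * N, 2 * N))
    for a in range(N):
        for b in range(N):
            K[2 * a:2 * a + 2, 2 * b:2 * b + 2] = nonreversible_kernel(ts[a] - ts[b])

    L = np.linalg.cholesky(K + 1e-6 * np.eye(2 * N))
    rng = np.random.default_rng(0)
    samples = (L @ rng.standard_normal((2 * N, 50))).T.reshape(50, N, 2)

    # Signed area swept by each 2D trajectory: consistently signed under this
    # prior, but zero-mean under any reversible (symmetric-kernel) prior.
    areas = 0.5 * np.sum(samples[:, :-1, 0] * samples[:, 1:, 1]
                         - samples[:, 1:, 0] * samples[:, :-1, 1], axis=1)
    print("mean signed area across samples:", areas.mean().round(2))
    ```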

    Building population models for large-scale neural recordings: opportunities and pitfalls

    Modern recording technologies now enable simultaneous recording from large numbers of neurons. This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here we provide a broad overview of recent developments in this area. We compare and contrast different approaches, highlight strengths and limitations, and discuss biological and mechanistic insights that these methods provide.

    Temporal alignment and latent Gaussian process factor inference in population spike trains

    We introduce a novel scalable approach to identifying common latent structure in neural population spike-trains, which allows for variability both in the trajectory and in the rate of progression of the underlying computation. Our approach is based on shared latent Gaussian processes (GPs) which are combined linearly, as in the Gaussian Process Factor Analysis (GPFA) algorithm. We extend GPFA to handle unbinned spike-train data by incorporating a continuous time point-process likelihood model, achieving scalability with a sparse variational approximation. Shared variability is separated into terms that express condition dependence, as well as trial-to-trial variation in trajectories. Finally, we introduce a nested GP formulation to capture variability in the rate of evolution along the trajectory. We show that the new method learns to recover latent trajectories in synthetic data, and can accurately identify the trial-to-trial timing of movement-related parameters from motor cortical data without any supervision.
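
    A minimal generative sketch of the model class just described, with illustrative parameters (the sparse variational approximation, condition/trial decomposition, and nested time-warping GP are omitted): shared squared-exponential latent GPs are combined linearly as in GPFA, passed through an exponential link, and spikes are drawn from the resulting inhomogeneous Poisson point process.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-3, 2.0
    ts = np.arange(0.0, T, dt)          # fine grid standing in for continuous time

    def sample_gp(grid, ell=0.2):
        """One draw from a zero-mean GP with a squared-exponential kernel."""
        K = np.exp(-0.5 * (grid[:, None] - grid[None, :])**2 / ell**2)
        return np.linalg.cholesky(K + 1e-6 * np.eye(len(grid))) @ rng.standard_normal(len(grid))

    # Shared low-dimensional latents (sampled coarsely, then interpolated).
    n_latents, n_neurons = 2, 30
    coarse = np.linspace(0.0, T, 200)
    X = np.stack([np.interp(ts, coarse, sample_gp(coarse)) for _ in range(n_latents)])

    # Linear mixing into per-neuron intensities, as in GPFA, with a log link.
    C = 0.5 * rng.standard_normal((n_neurons, n_latents))   # loading matrix
    d = np.log(10.0)                                        # baseline ~10 Hz
    rates = np.exp(C @ X + d)                               # (neurons, time), in Hz

    # Inhomogeneous Poisson point process: P(spike in bin) ~ rate * dt.
    spikes = rng.random(rates.shape) < rates * dt
    spike_times = [ts[row] for row in spikes]               # unbinned times per neuron
    print("population mean rate (Hz):", spikes.sum() / (n_neurons * T))
    ```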

    Neural states in parietal areas during arm reaching

    Since the first subdivisions of the brain into macro regions, it has long been assumed a priori that, given the heterogeneity of neurons, different areas host specific functions and process distinct information in order to generate a behaviour. Moreover, the various sensory inputs coming from different sources (eye, skin, proprioception) flow from one macro area to another, being constantly computed and updated. Therefore, especially for non-contiguous cortical areas, one would not expect to find the same information. From this point of view, it would be inconceivable that the motor and the parietal cortices, diversified by the information they encode and by their anatomical position in the brain, could show very similar neural dynamics. In the present thesis, by analyzing the population activity of parietal areas V6A and PEc with machine learning methods, we argue that such a simplified view of brain organization does not reflect the actual neural processes. We reliably detected a number of neural states that were tightly linked to distinct periods of the task sequence, i.e., the planning and execution of movement and the holding of the target, as already observed in motor cortices. The states before and after the movement could be further segmented into two states related to different stages of movement planning and arm posture processing. Rather unexpectedly, we found that activity during the movement could be parsed into two states of equal duration temporally linked to the acceleration and deceleration phases of the arm. Our findings suggest that, at least during arm reaching in 3D space, the posterior parietal cortex (PPC) shows low-level population neural dynamics remarkably similar to those found in the motor cortices. In addition, the present findings suggest that computational processes in PPC could be better understood if studied using a dynamical systems approach rather than as a mosaic of single units.
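
    The summary above does not name the estimator behind these neural states, but a hidden Markov model over binned population activity is a standard choice for this kind of state segmentation. A hypothetical sketch on synthetic data (the hmmlearn dependency, state count, and all parameters are assumptions, not the thesis's pipeline):

    ```python
    import numpy as np
    from hmmlearn import hmm  # assumed available: pip install hmmlearn

    rng = np.random.default_rng(1)

    # Synthetic stand-in for binned population activity on one trial:
    # three task epochs (plan / move / hold), each with its own mean pattern.
    n_neurons, bins_per_epoch = 40, 50
    means = 2.0 * rng.standard_normal((3, n_neurons))
    X = np.concatenate([m + rng.standard_normal((bins_per_epoch, n_neurons))
                        for m in means])                 # (time bins, neurons)

    # Fit an HMM and read out the most likely state sequence.
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=100, random_state=0)
    model.fit(X)
    states = model.predict(X)
    print("inferred state boundaries at bins:",
          np.flatnonzero(np.diff(states)) + 1)
    ```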

    Dynamical structure in neural population activity

    The question of how the collective activity of neural populations in the brain gives rise to complex behaviour is fundamental to neuroscience. At the core of this question lie considerations about how neural circuits can perform computations that enable sensory perception, motor control, and decision making. It is thought that such computations are implemented by the dynamical evolution of distributed activity in recurrent circuits. Thus, identifying and interpreting dynamical structure in neural population activity is a key challenge towards a better understanding of neural computation. In this thesis, I make several contributions in addressing this challenge. First, I develop two novel methods for neural data analysis. Both methods aim to extract trajectories of low-dimensional computational state variables directly from the unbinned spike-times of simultaneously recorded neurons on single trials. The first method separates inter-trial variability in the low-dimensional trajectory from variability in the timing of progression along its path, and thus offers a quantification of inter-trial variability in the underlying computational process. The second method simultaneously learns a low-dimensional portrait of the underlying nonlinear dynamics of the circuit, as well as the system's fixed points and locally linearised dynamics around them. This approach facilitates extracting interpretable low-dimensional hypotheses about computation directly from data. Second, I turn to the question of how low-dimensional dynamical structure may be embedded within a high-dimensional neurobiological circuit with excitatory and inhibitory cell-types. I analyse how such circuit-level features shape population activity, with particular focus on responses to targeted optogenetic perturbations of the circuit. Third, I consider the problem of implementing multiple computations in a single dynamical system. I address this in the framework of multi-task learning in recurrently connected networks and demonstrate that a careful organisation of low-dimensional, activity-defined subspaces within the network can help to avoid interference across tasks.
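
    The summary leaves the fixed-point analysis of the second method at a high level; a common recipe in this literature (cf. Sussillo & Barak, 2013) is to minimise the squared speed ½‖F(x)‖² of a learned vector field and then eigendecompose the Jacobian at each optimum. A toy sketch, with a hand-written F standing in for dynamics that would in practice be learned from data:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy nonlinear dynamics x_dot = F(x): a 1D double-well plus decay,
    # standing in for a vector field learned from spike times.
    def F(x):
        return np.array([x[0] - x[0]**3, -x[1]])

    def speed_sq(x):
        return 0.5 * np.dot(F(x), F(x))

    # Locate fixed points by minimising 0.5 * |F(x)|^2 from many initial states.
    rng = np.random.default_rng(0)
    fixed_points = []
    for x0 in rng.uniform(-2.0, 2.0, size=(20, 2)):
        res = minimize(speed_sq, x0, method="BFGS")
        if res.fun < 1e-8 and not any(np.allclose(res.x, fp, atol=1e-3)
                                      for fp in fixed_points):
            fixed_points.append(res.x)

    # Locally linearise: eigenvalues of the Jacobian at each fixed point
    # classify the nearby dynamics (stable node, saddle, rotation, ...).
    def jacobian(x, eps=1e-6):
        return np.column_stack([(F(x + eps * e) - F(x - eps * e)) / (2.0 * eps)
                                for e in np.eye(len(x))])

    for fp in fixed_points:
        print(fp.round(3), "eigenvalues:", np.linalg.eigvals(jacobian(fp)).round(3))
    ```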

    Population analysis of neural data -- developments in statistical methods and related computational models

    A key goal of neuroscience is to understand how the remarkable computational abilities of our brain emerge as a result of interconnected neuronal populations. Recently, advances in technologies for recording neural activity have increased the number of simultaneously recorded neurons by orders of magnitude, and these technologies are becoming more widely adopted. At the same time, massive increases in computational power and improved algorithms have enabled advanced statistical analyses of neural population activity and promoted our understanding of population coding. Nevertheless, there are many unanswered emerging questions, when it comes to analyzing and interpreting neural recordings. There are two major parts to this study. First, we consider an issue of increasing importance: that many in vivo recordings are now made by calcium-dependent fluorescent imaging, which only indirectly reports neural activity. We compare measurements of extracellular single units with fluorescence changes extracted from single neurons (often used as a proxy for spike rates), both recorded from cortical neural populations of behaving mice. We perform identical analyses at the single cell level and population level, and compare the results, uncovering a number of differences, or biases. We propose a phenomenological model to transform spike trains into synthetic imaging data and test whether the transformation explains the biases found. We discover that the slow temporal dynamics of calcium imaging obscure rapid changes in neuronal selectivity and disperse dynamic features in time. As a result, spike rate modulation that is locked to temporally localized events can appear as a more sequence-like pattern of activity in the imaging data. In addition, calcium imaging is more sensitive to increases rather than decreases in spike rate, leading to biased estimates of neural selectivity. These biases need to be considered when interpreting calcium imaging data. The second part of this work embarks on a challenging yet fruitful study of latent variable analysis of simultaneously recorded neural activity in a decision-making task. To connect the neural dynamics in different stages of a decision-making task, we developed a time-varying latent dynamical system model that uncovers neural dynamics shared by neurons in a local decision-making circuit. The shared neural activity supports the dynamics of choice generation and memory in a fashion akin to drift diffusion models, and robustly maintains a decision signal in the post-decision period. Importantly, we find that error trials follow similar dynamics to those of correct trials, but their dynamics are separated in shared neural activity space, providing a more accurate early estimate of an animal's success or failure on a given trial. Overall, the shared neural activity dynamics can predict multiple measures of behavioral variability including performance, reaction time, and trial correctness, and therefore are a useful summary of the neural representation. Such an approach can be readily applied to study complex dynamics in other neural systems. In summary, this dissertation represents an important step towards developing model-based analysis of neuronal dynamics and understanding population codes in large-scale neural data.
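
    The abstract names only the ingredients of the spikes-to-imaging forward model (slow calcium dynamics, saturation, asymmetric sensitivity to rate changes); one minimal phenomenological sketch consistent with that description, with purely illustrative parameters, is to convolve spikes with a fast-rise, slow-decay transient, apply a saturating nonlinearity, and add noise:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Spike train on a fine grid: brief 50 Hz events every 2 s over a 2 Hz baseline.
    dt, T = 1e-3, 10.0
    n = int(T / dt)
    rate = np.where((np.arange(n) * dt) % 2.0 < 0.2, 50.0, 2.0)
    spikes = rng.random(n) < rate * dt

    # Spikes -> fluorescence: each spike adds a transient with a fast rise and a
    # slow (~1 s) decay; the summed signal saturates and picks up additive noise.
    tau_rise, tau_decay = 0.05, 1.0
    t_k = np.arange(0.0, 5.0, dt)
    kernel = np.exp(-t_k / tau_decay) - np.exp(-t_k / tau_rise)
    ca = np.convolve(spikes.astype(float), kernel)[:n]
    f = ca / (ca + 1.0) + 0.03 * rng.standard_normal(n)

    # The slow decay smears temporally localised spiking across ~1 s, illustrating
    # how event-locked rate changes can look sequence-like in imaging data.
    print("spikes:", int(spikes.sum()), " peak fluorescence:", round(float(f.max()), 2))
    ```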

    Linear Dynamics of Evidence Integration in Contextual Decision Making

    Individual neurons in prefrontal cortex (PFC) exhibit vastly complex responses. A central goal in neuroscience is to understand how their collective activity underlies the powerful computations responsible for higher-order cognitive processes. In a recent study (Mante et al., 2013), two monkeys were trained to perform a contextual decision-making task, which required them to selectively integrate the relevant evidence (either the color or the motion coherence of a random-dots stimulus) and disregard the irrelevant one. A non-linear RNN trained to solve the same task found a solution that accounted for the selective integration computation, which could be understood by linearizing the dynamics of the network in each context. In this study, we took a different approach by explicitly fitting a Linear Dynamical System (LDS) model to the data from each context. We also fitted a novel jointly-factored linear model (JF), equivalent to the LDS but with no dynamical constraints and able to capture arbitrary patterns in time. Both models performed comparably, indicating that PFC data display systematic dynamics consistent with the LDS prior. Motion and color input signals were inferred and spanned independent subspaces. The input subspaces largely overlapped across contexts along dimensions that captured coherence and coherence-magnitude related variance. The dynamics changed in each context so that relevant stimuli were strongly amplified. In one of the monkeys, however, the integrated color signal emerged via direct input modulation. The integration took place within subspaces spanned by multiple slow modes. These strongly overlapped along a single dimension across contexts, consistent with a globally identified decision axis. Interestingly, irrelevant inputs were not dynamically discarded, but were also integrated, although to a much lesser extent. Finally, the model reproduced the main dynamical features of the population trajectories and accurately captured individual PSTHs. Our study suggests that a whole space of sensory-related input signals invariantly modulates PFC responses, and that decision signals emerge as the inputs are shaped by changing circuit dynamics. Our findings imply a novel mechanism by which sensory-related information is selected and integrated for contextual computations.
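
    A two-dimensional caricature of the mechanism described above (illustrative, not the fitted LDS): each context aligns the relevant input dimension with a slow, near-integrating eigenmode, so relevant evidence accumulates strongly while irrelevant evidence is still integrated, only far more leakily.

    ```python
    import numpy as np

    # Each context makes one input dimension align with a slow, near-integrating
    # eigenmode (eigenvalue ~1); the other dimension decays quickly (leaky).
    def make_A(relevant):
        lam = np.array([0.6, 0.6])
        lam[relevant] = 0.99
        return np.diag(lam)

    def run(A, u, T=200, rng=np.random.default_rng(0)):
        x, xs = np.zeros(2), []
        for _ in range(T):
            x = A @ x + u + 0.01 * rng.standard_normal(2)
            xs.append(x.copy())
        return np.array(xs)

    u = np.array([0.05, 0.05])  # equal motion (dim 0) and colour (dim 1) evidence
    for ctx, name in [(0, "motion context"), (1, "colour context")]:
        xs = run(make_A(ctx), u)
        print(f"{name}: integrated motion = {xs[-1, 0]:.2f}, colour = {xs[-1, 1]:.2f}")
    ```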