97 research outputs found

    Integrating Statistical and Machine Learning Approaches to Identify Receptive Field Structure in Neural Populations

    Neurons can code for multiple variables simultaneously, and neuroscientists are often interested in classifying neurons based on their receptive field properties. Statistical models provide powerful tools for determining the factors influencing neural spiking activity and for classifying individual neurons. However, as neural recording technologies have advanced to produce simultaneous spiking data from massive populations, classical statistical methods often lack the computational efficiency required to handle such data. Machine learning (ML) approaches are known for enabling efficient large-scale data analyses; however, they typically require massive, balanced training sets with accurate labels to fit well. Additionally, model assessment and interpretation are often more challenging for ML than for classical statistical methods. To address these challenges, we develop an integrated framework combining statistical modeling and machine learning approaches to identify the coding properties of neurons from large populations. To demonstrate this framework, we apply these methods to data from a population of neurons recorded from rat hippocampus to characterize the distribution of spatial receptive fields in this region.

    Point process modeling and estimation: advances in the analysis of dynamic neural spiking data

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such a system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical system are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize rhythmic spiking activity using history-dependent structure, 2) to model population spiking activity using marked point process models, 3) to allow for real-time decision making, and 4) to account for the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to the novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients, with the goal of optimizing the placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, whether or not to stimulate the neurons) based on various sources of information present in population spiking data. Lastly, we proposed a general three-step paradigm that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas, a step towards closed-loop therapies for psychiatric disorders using real-time neural stimulation. These methods are suitable for real-time implementation in content-based feedback experiments.
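    To make the state-space-with-point-process-observations idea concrete, here is a minimal sketch (not the thesis's implementation) of Bayesian filtering of a one-dimensional latent position from spiking data. The grid discretization, random-walk transition, and single Gaussian place field are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1D latent position on a grid, random-walk dynamics,
# and one neuron with a Gaussian place field (illustrative values only).
grid = np.linspace(0.0, 1.0, 200)
dt = 0.001  # 1 ms bins

def rate(x):
    # conditional intensity lambda(x), in spikes/s
    return 2.0 + 50.0 * np.exp(-((x - 0.5) ** 2) / (2 * 0.05 ** 2))

# Simulate a random-walk trajectory and Bernoulli-approximated spiking
T = 2000
x = np.empty(T)
x[0] = 0.2
for t in range(1, T):
    x[t] = np.clip(x[t - 1] + 0.005 * rng.standard_normal(), 0.0, 1.0)
spikes = (rng.random(T) < rate(x) * dt).astype(int)

# Point-process filter: predict with a Gaussian random-walk transition,
# then update with the Poisson spiking likelihood for each time bin.
trans = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * 0.005 ** 2))
trans /= trans.sum(axis=0, keepdims=True)        # columns are P(x_t | x_{t-1})
lam = rate(grid)
post = np.full(grid.size, 1.0 / grid.size)
est = np.empty(T)
for t in range(T):
    prior = trans @ post                                  # one-step prediction
    like = (lam * dt) ** spikes[t] * np.exp(-lam * dt)    # Poisson likelihood
    post = prior * like
    post /= post.sum()
    est[t] = grid[np.argmax(post)]                        # MAP position estimate
```

    With many simultaneously recorded neurons, the likelihood step simply multiplies one such term per neuron, which is what makes the framework scale to population data.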

    A common goodness-of-fit framework for neural population models using marked point process time-rescaling

    A critical component of any statistical modeling procedure is the ability to assess the goodness-of-fit between a model and observed data. For spike train models of individual neurons, many goodness-of-fit measures rely on the time-rescaling theorem and assess model quality using rescaled spike times. Recently, there has been increasing interest in statistical models that describe the simultaneous spiking activity of neuron populations, either in a single brain region or across brain regions. Classically, such models have used spike-sorted data to describe relationships between the identified neurons, but more recently clusterless modeling methods have been used to describe population activity using a single model. Here we develop a generalization of the time-rescaling theorem that enables comprehensive goodness-of-fit analysis for either of these classes of population models. We use the theory of marked point processes to model population spiking activity, and show that under the correct model, each spike can be rescaled individually to generate a uniformly distributed set of events in time and the space of spike marks. After rescaling, multiple well-established goodness-of-fit procedures and statistical tests are available. We demonstrate the application of these methods both to simulated data and to real population spiking in rat hippocampus. We have made the MATLAB and Python code used for the analyses in this paper publicly available through our GitHub repository at https://github.com/Eden-Kramer-Lab/popTRT. This work was supported by grants from the NIH (MH105174, NS094288) and the Simons Foundation (542971).
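    The classical (unmarked) time-rescaling procedure that this paper generalizes can be sketched in a few lines; the marked version additionally rescales each spike in the space of spike marks. The intensity function below is an illustrative choice, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate an inhomogeneous Poisson process with a known (illustrative)
# intensity, then apply the time-rescaling theorem to check goodness-of-fit.
dt = 0.001
t = np.arange(0.0, 10.0, dt)
lam = 20.0 + 15.0 * np.sin(2 * np.pi * t)          # intensity in spikes/s
spike_idx = np.flatnonzero(rng.random(t.size) < lam * dt)

# Under the true model, the integrated intensity between successive spikes
# is i.i.d. Exponential(1); mapping through 1 - exp(-tau) gives Uniform(0,1).
Lambda = np.cumsum(lam) * dt                       # \int_0^t lambda(s) ds
taus = np.diff(Lambda[spike_idx])                  # rescaled inter-spike intervals
u = 1.0 - np.exp(-taus)

# Kolmogorov-Smirnov test of the rescaled intervals against Uniform(0,1);
# a small p-value would indicate lack of fit.
ks = stats.kstest(u, "uniform")
```

    Fitting a deliberately wrong intensity (e.g. a constant rate) and repeating the rescaling makes the KS statistic grow, which is the basic diagnostic the paper extends to marked population models.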

    Integrating statistical and machine learning approaches to identify receptive field structure in neural populations

    Neural coding is essential for understanding how the activity of individual neurons or ensembles of neurons relates to cognitive processing of the world. Neurons can code for multiple variables simultaneously, and neuroscientists are interested in classifying neurons based on the variables they represent. Building a model identification paradigm to identify neurons in terms of their coding properties is essential to understanding how the brain processes information. Statistical paradigms are capable of methodically determining the factors influencing neural observations and assessing the quality of the resulting models to characterize and classify individual neurons. However, as neural recording technologies develop to produce data from massive populations, classical statistical methods often lack the computational efficiency required to handle such data. Machine learning (ML) approaches are known for enabling efficient large-scale data analysis; however, they require huge training data sets, and model assessment and interpretation are more challenging than for classical statistical methods. To address these challenges, we develop an integrated framework combining statistical modeling and machine learning approaches to identify the coding properties of neurons from large populations. To evaluate our approaches, we apply them to data from a population of neurons in rat hippocampus and prefrontal cortex (PFC) to characterize how spatial learning and memory processes are represented in these areas. The data consist of local field potentials (LFP) and spiking data simultaneously recorded from the CA1 region of hippocampus and the PFC of a male Long Evans rat performing a spatial alternation task on a W-shaped track. We have examined these data in three separate but related projects. In the first project, we build an improved class of statistical models for neural activity by expanding a common set of basis functions to increase the statistical power of the resulting models. In the second project, we identify the individual neurons in hippocampus and PFC and classify them based on their coding properties using statistical model identification methods. We found that a substantial proportion of hippocampal and PFC cells are spatially selective, with position and velocity coding and rhythmic firing properties. These methods identified clear differences between the hippocampal and prefrontal populations and allowed us to classify the coding properties of the full population of neurons in these two regions. In the third project, we develop a supervised machine learning classifier based on convolutional neural networks (CNNs), which uses classification results from statistical models and additional simulated data as ground-truth signals for training. This integration of statistical and ML approaches allows for statistically principled and computationally efficient classification of the coding properties of general neural populations.

    The orbitofrontal cortex maps future navigational goals

    Accurate navigation to a desired goal requires consecutive estimates of the spatial relationship between the current position and the future destination throughout the journey. Although neurons in the hippocampal formation can represent the position of an animal as well as its nearby trajectories, their role in determining the destination of the animal has been questioned. It is thus unclear whether the brain can possess a precise estimate of target location during active environmental exploration. Here we describe neurons in the rat orbitofrontal cortex (OFC) that form spatial representations persistently pointing to the subsequent goal destination of an animal throughout navigation. This destination coding emerges before the onset of navigation, without direct sensory access to a distal goal, and even predicts the incorrect destination of an animal at the beginning of an error trial. Goal representations in the OFC are maintained by destination-specific neural ensemble dynamics, and their brief perturbation at the onset of a journey led to a navigational error. These findings suggest that the OFC is part of the internal goal map of the brain, enabling animals to navigate precisely to a chosen destination that is beyond the range of sensory perception.

    Contributions to statistical analysis methods for neural spiking activity

    With the technical advances in neuroscience experiments in the past few decades, we have seen a massive expansion in our ability to record neural activity. These advances enable neuroscientists to analyze more complex neural coding and communication properties, and at the same time raise new challenges for analyzing neural spiking data, which keeps growing in scale, dimension, and complexity. This thesis proposes several new statistical methods that advance statistical analysis approaches for neural spiking data, including sequential Monte Carlo (SMC) methods for efficient estimation of neural dynamics from membrane potential threshold crossings, state-space models using multimodal observation processes, and goodness-of-fit analysis methods for neural marked point process models. In the first project, we derive a set of iterative formulas that enable us to simulate trajectories from stochastic, dynamic neural spiking models that are consistent with a set of spike time observations. We develop an SMC method to simultaneously estimate the parameters of the model and the unobserved dynamic variables from spike train data. We investigate the performance of this approach on a leaky integrate-and-fire model. In another project, we define a semi-latent state-space model to estimate information related to the phenomenon of hippocampal replay. Replay is a recently discovered phenomenon where patterns of hippocampal spiking activity that typically occur during exploration of an environment are reactivated when an animal is at rest. This reactivation is accompanied by high frequency oscillations in hippocampal local field potentials. However, methods to define replay mathematically remain undeveloped. In this project, we construct a novel state-space model that enables us to identify whether replay is occurring, and if so, to estimate the movement trajectories consistent with the observed neural activity and to categorize the content of each event. The state-space model integrates information from the spiking activity of the hippocampal population, the rhythms in the local field potential, and the rat's movement behavior. Finally, we develop a new, general time-rescaling theorem for marked point processes, and use this to develop a general goodness-of-fit framework for neural population spiking models. We investigate this approach through simulation and a real data application.

    Particle-filtering approaches for nonlinear Bayesian decoding of neuronal spike trains

    The number of neurons that can be simultaneously recorded doubles every seven years. This ever-increasing number of recorded neurons opens up the possibility to address new questions and extract higher dimensional stimuli from the recordings. Modeling neural spike trains as point processes, this task of extracting dynamical signals from spike trains is commonly set in the context of nonlinear filtering theory. Particle filter methods relying on importance weights are generic algorithms that solve the filtering task numerically, but exhibit a serious drawback when the problem dimensionality is high: they are known to suffer from the 'curse of dimensionality' (COD), i.e., the number of particles required for a certain performance scales exponentially with the observable dimensions. Here, we first briefly review the theory on filtering with point process observations in continuous time. Based on this theory, we investigate both analytically and numerically the reason for the COD of weighted particle filtering approaches: similarly to particle filtering with continuous-time observations, the COD with point-process observations is due to the decay of the effective number of particles, an effect that is stronger when the number of observable dimensions increases. Given the success of unweighted particle filtering approaches in overcoming the COD for continuous-time observations, we introduce an unweighted particle filter for point-process observations, the spike-based Neural Particle Filter (sNPF), and show that it exhibits a similarly favorable scaling as the number of dimensions grows. Further, we derive rules for the parameters of the sNPF from a maximum-likelihood learning approach. We finally employ a simple decoding task to illustrate the capabilities of the sNPF and to highlight one possible future application of our inference and learning algorithm.
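    The weighted particle filter whose weight degeneracy the abstract analyzes can be sketched as follows. This is the standard bootstrap filter with a Poisson point-process likelihood and effective-sample-size-triggered resampling; the AR(1) dynamics and exponential rate link are illustrative assumptions, not the sNPF itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy weighted (bootstrap) particle filter with point-process observations.
n_particles, T, dt = 500, 500, 0.001

def rate(x):
    # intensity as a function of the latent state (hypothetical link function)
    return 10.0 * np.exp(x)

# Simulate an AR(1) latent state and Poisson spike counts per time bin
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.99 * x_true[t - 1] + 0.1 * rng.standard_normal()
spikes = rng.poisson(rate(x_true) * dt)

particles = rng.standard_normal(n_particles)
est = np.empty(T)
for t in range(T):
    # propagate particles through the (assumed known) dynamics
    particles = 0.99 * particles + 0.1 * rng.standard_normal(n_particles)
    # importance weights from the Poisson point-process likelihood
    lam = rate(particles) * dt
    w = lam ** spikes[t] * np.exp(-lam)
    w /= w.sum()
    est[t] = float(w @ particles)                 # posterior-mean estimate
    # monitor the effective sample size; resample when it collapses
    if 1.0 / np.sum(w ** 2) < n_particles / 2:
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
```

    The quantity 1 / sum(w**2) is the effective sample size; with many observed dimensions it decays rapidly between resampling steps, which is the curse-of-dimensionality effect the unweighted sNPF is designed to avoid.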

    Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network.

    Grid cells fire in sequences that represent rapid trajectories in space. During locomotion, theta sequences encode sweeps in position starting slightly behind the animal and ending ahead of it. During quiescence and slow wave sleep, bouts of synchronized activity represent long trajectories called replays, which are well established in place cells and have recently been reported in grid cells. Theta sequences and replay are hypothesized to facilitate many cognitive functions, but their underlying mechanisms are unknown. One mechanism proposed for grid cell formation is the continuous attractor network. We demonstrate that this established architecture naturally produces theta sequences and replay as distinct consequences of modulating external input. Driving inhibitory interneurons at the theta frequency causes attractor bumps to oscillate in speed and size, which gives rise to theta sequences and phase precession, respectively. Decreasing input drive to all neurons produces traveling wavefronts of activity that are decoded as replays.
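    The basic ingredient of the continuous attractor mechanism, a self-sustaining activity bump shaped by local excitation and broad inhibition, can be sketched with a minimal 1D ring network. All parameters (kernel widths, gains, saturating rate dynamics) are illustrative choices, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal 1D ring attractor: a Mexican-hat-like kernel (local excitation,
# global inhibition) lets a localized activity bump form from noisy input.
n = 128
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # circular distance
W = 10.0 * np.exp(-d ** 2 / (2 * 0.5 ** 2)) - 4.0             # recurrent kernel

r = 0.1 * rng.random(n)          # firing rates, random initial condition
drive = 1.0                      # uniform external input
for _ in range(1000):
    inp = W @ r / n + drive
    r += 0.1 * (-r + np.tanh(np.maximum(inp, 0.0)))  # saturating rate dynamics

# After settling, r is strongly non-uniform: a single bump of activity.
```

    In this picture, modulating `drive` at the theta frequency makes the bump oscillate (theta sequences), while lowering it changes the dynamical regime, which is the kind of input manipulation the paper links to replaying wavefronts.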

    Spatial learning and navigation in the rat: a biomimetic model

    Animals behave in different ways depending on the specific task they are required to solve. In certain cases, if a cue marks the goal location, they can rely on simple stimulus-response associations. In contrast, other tasks require the animal to be endowed with a representation of space. Such a representation (i.e. a cognitive map) allows the animal to locate itself within a known environment and perform complex target-directed behaviour. To perform efficiently, the animal should not only be able to exhibit these types of behaviour, but also to select which behaviour is most appropriate under the given task conditions. Neurophysiological and behavioural experiments provide important information on how such processes may take place in the rodent's brain. Specifically, place- and orientation-sensitive cells in the rat hippocampus have been interpreted as a neural substrate for spatial abilities related to the theory of the cognitive map proposed in the late 1940s by Tolman. Moreover, recent dissociation experiments using selectively located lesions, as well as pharmacological studies, have shown that different brain regions may be involved in different types of behaviour. Accordingly, one memory system involving the hippocampus and the ventral striatum would be responsible for cognitive navigation, while navigation based on stimulus-response associations would be mediated by the dorsolateral striatum. Based on these studies, the aim of this work is to develop a neural network model of the spatial abilities of the rat. The model, based on the functional properties and anatomical interconnections of the brain areas involved in spatial learning, should be able to establish a distributed representation of space composed of place-sensitive units. Such a representation takes into account both internal and external sensory information, and the model reproduces physiological properties of place cells such as changes in their directional dependence. Moreover, the spatial representation may be used to perform cognitive navigation. Modelled place cells drive an extra-hippocampal population of action-coding cells, allowing the establishment of place-response associations. These associations, encoded in synaptic connections between place and action cells, are modified by means of reinforcement learning. In a similar way, simple sensory input can be used to establish stimulus-response associations. These associations are encoded in a different set of action cells, corresponding to a different neural substrate that encodes non-cognitive navigation strategies (i.e. taxon or praxic). Both cognitive and non-cognitive navigation strategies compete for action control to determine the actual behaviour of the agent. Tests of the performance of the model show that it is able to establish a representation of space, and modelled place cells reproduce some physiological properties of their biological counterparts. Furthermore, the model reproduces goal-directed behaviour based on both cognitive and non-cognitive strategies, as well as behaviour in conflicting situations reported in experimental studies in animals.
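    The place-to-action association scheme described above, with connection strengths modified by reinforcement learning, can be caricatured with tabular Q-learning on a hypothetical 1D corridor task (not the model of the thesis): each state stands in for a place-cell ensemble, and each Q-value for the strength of a place-to-action connection.

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny 1D corridor: states 0..9, goal at state 9; actions 0 = left, 1 = right.
# Q[s, a] plays the role of the place->action synaptic weight.
n_states, n_actions = 10, 2
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

for _ in range(300):
    s = int(rng.integers(n_states - 1))      # random starting place
    for _ in range(50):
        a = int(rng.integers(n_actions))     # random exploratory behaviour
        s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        reward = 1.0 if s2 == goal else 0.0
        # reward-driven update of the place->action association
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if reward > 0.0:
            break

policy = np.argmax(Q, axis=1)   # learned greedy action at each place
```

    After training, the greedy action at every non-goal place points toward the goal; a second, independent table trained on raw sensory cues would play the role of the competing stimulus-response (taxon/praxic) system.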