
    Pooling and Correlated Neural Activity

    Correlations between spike trains can strongly modulate neuronal activity and affect the ability of neurons to encode information. Neurons integrate inputs from thousands of afferents. Similarly, a number of experimental techniques are designed to record pooled cell activity. We review and generalize a number of previous results that show how correlations between cells in a population can be amplified and distorted in signals that reflect their collective activity. The structure of the underlying neuronal response can significantly impact correlations between such pooled signals. Therefore, care needs to be taken when interpreting pooled recordings, or when modeling networks of cells that receive inputs from large presynaptic populations. We also show that the frequently observed runaway synchrony in feedforward chains is primarily due to the pooling of correlated inputs.
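    To make the pooling effect concrete, the following minimal Python sketch (a toy model, not taken from the paper) pools two non-overlapping groups of weakly correlated Gaussian "cells" and compares the simulated correlation between the two pooled signals with the analytic value N·c / (1 + (N − 1)·c) for this toy model; the pooled correlation grows toward 1 as the pool size N increases even though the pairwise correlation c stays small.

```python
# Toy illustration of correlation amplification by pooling (not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
c = 0.05           # assumed weak pairwise correlation between any two cells
T = 10000          # number of time samples

for N in (1, 10, 50, 200):
    # Covariance of 2*N cells: unit variance, correlation c between every pair.
    cov = np.full((2 * N, 2 * N), c)
    np.fill_diagonal(cov, 1.0)
    x = rng.multivariate_normal(np.zeros(2 * N), cov, size=T)

    pool_a = x[:, :N].sum(axis=1)   # pooled signal of the first N cells
    pool_b = x[:, N:].sum(axis=1)   # pooled signal of the second N cells

    rho = np.corrcoef(pool_a, pool_b)[0, 1]
    predicted = N * c / (1 + (N - 1) * c)   # analytic value for this toy model
    print(f"N = {N:3d}   simulated rho = {rho:.3f}   predicted = {predicted:.3f}")
```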

    Decoding the activity of neuronal populations in macaque primary visual cortex

    Visual function depends on the accuracy of signals carried by visual cortical neurons. Combining information across neurons should improve this accuracy because single neuron activity is variable. We examined the reliability of information inferred from populations of simultaneously recorded neurons in macaque primary visual cortex. We considered a decoding framework that computes the likelihood of visual stimuli from a pattern of population activity by linearly combining neuronal responses and tested this framework for orientation estimation and discrimination. We derived a simple parametric decoder assuming neuronal independence and a more sophisticated empirical decoder that learned the structure of the measured neuronal response distributions, including their correlated variability. The empirical decoder used the structure of these response distributions to perform better than its parametric variant, indicating that their structure contains critical information for sensory decoding. These results show how neuronal responses can best be used to inform perceptual decision-making.
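    The likelihood-based framework can be illustrated with a minimal sketch that assumes independent Poisson spike counts and toy tuning curves; this is a simplification of the paper's parametric decoder (which was fit to measured responses) and ignores correlated variability entirely.

```python
# Minimal sketch of an independent-Poisson likelihood decoder for orientation.
# Tuning-curve shape, neuron count, and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_neurons = 50
thetas = np.linspace(0, 180, 180, endpoint=False)    # candidate orientations (deg)
preferred = rng.uniform(0, 180, n_neurons)           # assumed preferred orientations

def tuning(theta, pref, base=2.0, gain=20.0, kappa=2.0):
    """Toy von Mises-like orientation tuning curve (expected spike count)."""
    return base + gain * np.exp(kappa * (np.cos(np.deg2rad(2 * (theta - pref))) - 1))

# Expected counts for every neuron at every candidate orientation.
rates = tuning(thetas[:, None], preferred[None, :])   # shape (n_orientations, n_neurons)

# Simulate one trial at a true orientation of 45 degrees.
counts = rng.poisson(tuning(45.0, preferred))

# Log-likelihood of each orientation under independence: sum of Poisson log-pmfs
# (the log(k!) term is constant across orientations and is dropped).
log_lik = (counts[None, :] * np.log(rates) - rates).sum(axis=1)
estimate = thetas[np.argmax(log_lik)]
print(f"true = 45.0 deg, maximum-likelihood estimate = {estimate:.1f} deg")
```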

    Single-Trial Phase Precession in the Hippocampus

    During the crossing of the place field of a pyramidal cell in the rat hippocampus, the firing phase of the cell decreases with respect to the local theta rhythm. This phase precession is usually studied on the basis of data in which many place field traversals are pooled together. Here we study properties of phase precession in single trials. We found that single-trial and pooled-trial phase precession were different with respect to phase-position correlation, phase-time correlation, and phase range. Whereas pooled-trial phase precession may span 360°, the most frequent single-trial phase range was only ∼180°. In pooled trials, the correlation between phase and position (r = −0.58) was stronger than the correlation between phase and time (r = −0.27), whereas in single trials these correlations (r = −0.61 for both) were not significantly different. Next, we demonstrated that phase precession exhibited a large trial-to-trial variability. Overall, only a small fraction of the trial-to-trial variability in measures of phase precession (e.g., slope or offset) could be explained by other single-trial properties (such as running speed or firing rate), whereas the larger part of the variability remains to be explained. Finally, we found that surrogate single trials, created by randomly drawing spikes from the pooled data, are not equivalent to experimental single trials: pooling over trials therefore changes basic measures of phase precession. These findings indicate that single trials may be better suited for encoding temporally structured events than is suggested by the pooled data.
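    The surrogate-trial comparison can be sketched with synthetic data as follows; this toy example uses plain Pearson correlation instead of the circular-linear statistics used in the paper, and only illustrates why spikes drawn at random from pooled data need not reproduce single-trial phase precession when the phase onset varies from trial to trial.

```python
# Toy surrogate-trial comparison for phase precession (not the authors' analysis).
import numpy as np

rng = np.random.default_rng(2)

def fake_trial(n_spikes=30, phase_onset=300.0, phase_range=180.0, noise=20.0):
    """Toy single trial: phase decreases linearly with normalized position."""
    pos = np.sort(rng.uniform(0, 1, n_spikes))            # position within the place field
    phase = phase_onset - phase_range * pos + rng.normal(0, noise, n_spikes)
    return pos, phase

# Real single trials with trial-to-trial jitter in the phase onset.
trials = [fake_trial(phase_onset=rng.uniform(250, 350)) for _ in range(40)]
pooled_pos = np.concatenate([p for p, _ in trials])
pooled_phase = np.concatenate([ph for _, ph in trials])

single_r = np.mean([np.corrcoef(p, ph)[0, 1] for p, ph in trials])

# Surrogate single trial: spikes drawn at random from the pooled data.
idx = rng.choice(len(pooled_pos), size=30, replace=False)
surrogate_r = np.corrcoef(pooled_pos[idx], pooled_phase[idx])[0, 1]

print(f"mean single-trial r = {single_r:.2f}")
print(f"pooled-data r       = {np.corrcoef(pooled_pos, pooled_phase)[0, 1]:.2f}")
print(f"surrogate-trial r   = {surrogate_r:.2f}")
```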

    Convolutional Drift Networks for Video Classification

    Analyzing spatio-temporal data like video is a challenging task that requires processing visual and temporal information effectively. Convolutional Neural Networks have shown promise as baseline fixed feature extractors through transfer learning, a technique that helps minimize the training cost on visual information. Temporal information is often handled using hand-crafted features or Recurrent Neural Networks, but these can be overly specific or prohibitively complex. Building a fully trainable system that can efficiently analyze spatio-temporal data without hand-crafted features or complex training is an open challenge. We present a new neural network architecture to address this challenge, the Convolutional Drift Network (CDN). Our CDN architecture combines the visual feature extraction power of deep Convolutional Neural Networks with the intrinsically efficient temporal processing provided by Reservoir Computing. In this introductory paper on the CDN, we provide a very simple baseline implementation tested on two egocentric (first-person) video activity datasets. We achieve video-level activity classification results on par with state-of-the-art methods. Notably, performance on this complex spatio-temporal task was produced by training only a single feed-forward layer in the CDN.
    Comment: Published in IEEE Rebooting Computing
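    A minimal sketch of the pipeline described above: per-frame features from a fixed CNN (stubbed with random vectors here) drive an untrained echo state reservoir, and only a single readout layer is trained, here by ridge regression. All sizes and parameters are illustrative assumptions rather than the paper's configuration.

```python
# Sketch of a CNN-features -> reservoir -> single trained readout pipeline.
# The CNN stage is replaced by random per-frame feature vectors for brevity.
import numpy as np

rng = np.random.default_rng(3)

feat_dim, res_dim, n_classes = 512, 500, 10
W_in = rng.uniform(-0.1, 0.1, (res_dim, feat_dim))        # fixed, untrained input weights
W_res = rng.normal(0, 1, (res_dim, res_dim))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # scale spectral radius below 1

def reservoir_state(frame_features, leak=0.3):
    """Run frame-level features through the untrained reservoir;
    the final state serves as a video-level representation."""
    x = np.zeros(res_dim)
    for f in frame_features:                               # one feature vector per frame
        x = (1 - leak) * x + leak * np.tanh(W_in @ f + W_res @ x)
    return x

# Stand-in for CNN features of 50 videos with 30 frames each.
videos = rng.normal(0, 1, (50, 30, feat_dim))
labels = rng.integers(0, n_classes, 50)

states = np.stack([reservoir_state(v) for v in videos])
targets = np.eye(n_classes)[labels]

# Train only the readout layer (ridge regression on reservoir states).
ridge = 1e-2
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(res_dim), states.T @ targets)
pred = np.argmax(states @ W_out, axis=1)
print("training accuracy:", (pred == labels).mean())
```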