
    Nonlinear decoding of a complex movie from the mammalian retina

    The retina is a paradigmatic system for studying sensory encoding: the transformation of light into spiking activity of ganglion cells. The inverse problem, where the stimulus is reconstructed from spikes, has received less attention, especially for complex stimuli that should be reconstructed “pixel-by-pixel”. We recorded around a hundred neurons from a dense patch in a rat retina and decoded movies of multiple small randomly-moving discs. We constructed nonlinear (kernelized and neural network) decoders that improved significantly over linear results. An important contribution to this was the ability of nonlinear decoders to reliably distinguish between neural responses driven by locally fluctuating light signals and responses at locally constant light driven by spontaneous-like activity. This improvement crucially depended on the precise, non-Poisson temporal structure of individual spike trains, which originated in the spike-history dependence of neural responses. We propose a general principle by which downstream circuitry could discriminate between spontaneous and stimulus-driven activity based solely on higher-order statistical structure in the incoming spike trains.
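
    As a rough illustration of the linear-versus-kernelized comparison described in this abstract, the sketch below decodes per-pixel luminance from a sliding window of population spiking with ridge regression and kernel ridge regression. All names, shapes, and the synthetic data are illustrative assumptions, not the authors' code or dataset.

```python
# Minimal sketch: compare a linear decoder to a kernelized decoder on
# synthetic data. `spikes` (timebins x neurons) and `luminance`
# (timebins x pixels) are random placeholders for a recorded raster
# and per-site light traces.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
T, N, P, LAG = 2000, 91, 4, 8            # timebins, neurons, pixels, window length
spikes = rng.binomial(1, 0.05, (T, N)).astype(float)
luminance = rng.normal(size=(T, P))      # placeholder decoding targets

def windowed(X, lag):
    """Stack the last `lag` timebins of population activity per sample."""
    return np.stack([X[t - lag:t].ravel() for t in range(lag, len(X))])

X, Y = windowed(spikes, LAG), luminance[LAG:]
split = int(0.8 * len(X))

linear = Ridge(alpha=1.0).fit(X[:split], Y[:split])
kernel = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3).fit(X[:split], Y[:split])

for name, model in [("linear", linear), ("kernelized", kernel)]:
    r = np.corrcoef(model.predict(X[split:]).ravel(), Y[split:].ravel())[0, 1]
    print(f"{name} decoder correlation: {r:.3f}")
```

    On real data the targets would be the filtered luminance traces rather than noise, and a neural network decoder could replace the kernel model while keeping the same windowing.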

    Robust Decoding of Rich Dynamical Visual Scenes With Retinal Spikes

    Sensory information transmitted to the brain activates neurons and drives a series of behaviors. Understanding the mechanisms of neural computation, and reverse engineering the brain to build intelligent machines, requires establishing a robust relationship between stimuli and neural responses. Neural decoding aims to reconstruct the original stimuli that trigger neural responses. With the recent upsurge of artificial intelligence, neural decoding provides an insightful perspective for designing novel brain-machine interface algorithms. For humans, vision is the dominant contributor to the interaction between the external environment and the brain. In this study, using retinal neural spike data collected over multiple trials with visual stimuli of two movies with different levels of scene complexity, we used a neural network decoder to quantify the decoded visual stimuli with six different image quality assessment metrics, establishing a comprehensive inspection of decoding. Through a detailed and systematic study of the effects of single versus multiple trials of data, different types of noise in the spikes, and blurred images, our results provide an in-depth investigation of decoding dynamical visual scenes using retinal spikes. These results provide insights into the neural coding of visual scenes and serve as a guideline for designing next-generation decoding algorithms for neuroprostheses and other brain-machine interface devices.
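
    The abstract does not name the six image quality assessment metrics, so the sketch below scores a reconstructed frame against its ground truth with three common stand-ins (MSE, PSNR, SSIM); the two frames are hypothetical placeholders.

```python
# Score a hypothetical reconstruction against its target with common
# image-quality metrics; `target` and `recon` are toy grayscale frames in [0, 1].
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(1)
target = rng.random((64, 64))
recon = np.clip(target + 0.05 * rng.normal(size=target.shape), 0.0, 1.0)

print("MSE :", mean_squared_error(target, recon))
print("PSNR:", peak_signal_noise_ratio(target, recon, data_range=1.0))
print("SSIM:", structural_similarity(target, recon, data_range=1.0))
```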

    Simultaneous Neural Spike Encoding and Decoding Based on Cross-modal Dual Deep Generative Model

    Neural encoding and decoding of retinal ganglion cells (RGCs) have attracted great attention in brain-machine interface research. Much effort has been invested in mimicking RGCs and gaining insight into RGC signals in order to reconstruct stimuli. However, two challenges remain. On the one hand, complex nonlinear processes in retinal neural circuits prevent encoding models from fitting natural stimuli well and modelling RGCs accurately. On the other hand, current research treats the decoding process separately from the encoding process, neglecting the mutual promotion between them. To alleviate these problems, we propose a cross-modal dual deep generative model (CDDG) in this paper. CDDG treats the RGC spike signals and the stimuli as two modalities, learning a shared latent representation for the concatenated modality and two modality-specific latent representations. It then imposes distribution-consistency restrictions on the different latent spaces, and cross-consistency and cycle-consistency constraints on the generated variables. Thus, our model ensures cross-modal generation from RGC spike signals to stimuli and vice versa. In our framework, the generation from stimuli to RGC spike signals is equivalent to neural encoding, while the inverse process is equivalent to neural decoding. Hence, the proposed method integrates neural encoding and decoding and exploits the reciprocity between them. Experimental results demonstrate that our method achieves excellent encoding and decoding performance compared with state-of-the-art methods on three salamander RGC spike datasets with natural stimuli.
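
    A minimal sketch of the consistency structure described above: two per-modality encoders map spikes and stimuli into a shared latent space, two decoders generate each modality back, and cross-, cycle-, and latent-consistency losses tie the two directions together. The linear modules and dimensions are placeholders, not the CDDG architecture.

```python
# Sketch of the cross-/cycle-consistency loss structure with toy modules.
import torch
import torch.nn as nn

D_SPIKE, D_STIM, D_LAT = 91, 400, 32     # illustrative dimensions

enc_spike = nn.Linear(D_SPIKE, D_LAT)    # spikes -> shared latent
enc_stim  = nn.Linear(D_STIM, D_LAT)     # stimulus -> shared latent
dec_spike = nn.Linear(D_LAT, D_SPIKE)    # latent -> spikes   (encoding direction)
dec_stim  = nn.Linear(D_LAT, D_STIM)     # latent -> stimulus (decoding direction)

spikes = torch.randn(8, D_SPIKE)
stim   = torch.randn(8, D_STIM)
mse = nn.MSELoss()

# Cross-consistency: each modality reconstructed from the *other* one.
stim_from_spikes = dec_stim(enc_spike(spikes))    # neural decoding
spikes_from_stim = dec_spike(enc_stim(stim))      # neural encoding
loss_cross = mse(stim_from_spikes, stim) + mse(spikes_from_stim, spikes)

# Cycle-consistency: spikes -> stimulus -> spikes should return to the start.
loss_cycle = mse(dec_spike(enc_stim(stim_from_spikes)), spikes)

# Shared-latent consistency: paired samples should map to nearby latents.
loss_latent = mse(enc_spike(spikes), enc_stim(stim))

loss = loss_cross + loss_cycle + loss_latent
loss.backward()
print(float(loss))
```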

    Linear response for spiking neuronal networks with unbounded memory

    We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of a weak-amplitude, time-dependent external stimulus on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory in the spike dynamics can extend arbitrarily far into the past. Using this approach, we show how the linear response is explicitly related to neuronal dynamics with an example, the gIF model introduced by M. Rudolph and A. Destexhe. This example illustrates the collective effect of the stimuli, intrinsic neuronal dynamics, and network connectivity on spike statistics. We illustrate our results with numerical simulations.
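
    Schematically, and not in the paper's exact notation, such a linear response relation has the generic fluctuation-dissipation form below: the first-order change in the expectation of a spike observable under a weak stimulus is a convolution of the stimulus with a kernel computed entirely from the spontaneous statistics.

```latex
% Schematic first-order (linear) response of an observable f to a weak
% time-dependent stimulus S; the kernel K_f is determined by the
% spontaneous (unperturbed) spatio-temporal spike correlations.
\[
  \delta \langle f \rangle (t)
  \;=\; \langle f \rangle_{S}(t) - \langle f \rangle_{0}
  \;\approx\; \int_{-\infty}^{t} K_{f}(t-s)\, S(s)\, \mathrm{d}s
\]
```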

    Robust Transcoding Sensory Information With Neural Spikes

    Neural coding, including encoding and decoding, is one of the key problems in neuroscience for understanding how the brain uses neural signals to relate sensory perception and motor behaviors with neural systems. However, most existing studies deal only with the continuous signals of neural systems, neglecting a unique feature of biological neurons, the spike, which is the fundamental information unit for neural computation as well as a building block for brain-machine interfaces. To address these limitations, we propose a transcoding framework that encodes multi-modal sensory information into neural spikes and then reconstructs stimuli from those spikes. Sensory information can be compressed to 10% of its original size in terms of neural spikes, yet 100% of the information can be re-extracted by reconstruction. Our framework can not only feasibly and accurately reconstruct dynamical visual and auditory scenes, but also rebuild stimulus patterns from functional magnetic resonance imaging (fMRI) brain activity. More importantly, it exhibits superb noise immunity to various types of artificial noise and background signals. The proposed framework provides efficient ways to perform multimodal feature representation and reconstruction in a high-throughput fashion, with potential uses in efficient neuromorphic computing in noisy environments.
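
    To make the "compress to spikes, then reconstruct" idea concrete, the sketch below uses simple delta-modulation spike coding: a +1/-1 spike is emitted only when the signal drifts a threshold away from the running estimate, and integrating the sparse spike train recovers the signal. This is an illustrative stand-in, not the paper's transcoding framework.

```python
# Delta-modulation spike coding: sparse spikes in, near-lossless signal out.
import numpy as np

def encode_to_spikes(signal, threshold=0.1):
    """Emit a +1/-1 spike whenever the signal moves `threshold` away from
    the last reconstructed level; stay silent otherwise."""
    spikes, level = np.zeros(len(signal), dtype=int), signal[0]
    for t, x in enumerate(signal):
        if x - level >= threshold:
            spikes[t], level = 1, level + threshold
        elif level - x >= threshold:
            spikes[t], level = -1, level - threshold
    return spikes

def decode_from_spikes(spikes, start, threshold=0.1):
    """Integrate the spike train back into a piecewise-constant estimate."""
    return start + threshold * np.cumsum(spikes)

t = np.linspace(0, 4 * np.pi, 1000)
signal = np.sin(t)
spikes = encode_to_spikes(signal)
recon = decode_from_spikes(spikes, signal[0])

print("fraction of bins carrying a spike:", np.mean(spikes != 0))
print("reconstruction RMSE:", np.sqrt(np.mean((signal - recon) ** 2)))
```

    With these toy settings only a small fraction of bins carry a spike, yet the reconstruction error stays bounded by the threshold, which is the flavor of compression-with-recovery the abstract describes.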

    Multiplexed computations in retinal ganglion cells of a single type

    In the early visual system, cells of the same type perform the same computation in different places of the visual field. How these cells jointly encode a complex visual scene is unclear. A common assumption is that cells of a single type extract a single stimulus feature to form a feature map, but this has rarely been observed directly. Using large-scale recordings in the rat retina, we show that a homogeneous population of fast OFF ganglion cells simultaneously encodes two radically different features of a visual scene. Cells close to a moving object code quasilinearly for its position, while distant cells remain largely invariant to the object's position and, instead, respond nonlinearly to changes in the object's speed. We develop a quantitative model that accounts for this effect and identify a disinhibitory circuit that mediates it. Ganglion cells of a single type thus do not code for one feature, but for two simultaneously. This richer, flexible neural map might also be present in other sensory systems.

    Neural Encoding and Decoding with a Flow-based Invertible Generative Model

    Recent studies on visual neural encoding and decoding have made significant progress, benefiting from the latest advances in deep neural networks with powerful representations. However, two challenges remain. First, current decoding algorithms based on deep generative models often struggle with information losses, which may cause blurry reconstructions. Second, most studies model the neural encoding and decoding processes separately, neglecting the inherent dual relationship between the two tasks. In this paper, we propose a novel neural encoding and decoding method with a two-stage flow-based invertible generative model to tackle these issues. First, a convolutional auto-encoder is trained to bridge the stimulus space and the feature space. Second, an adversarial cross-modal normalizing flow is trained to build a bijective transformation between image features and neural signals, with local and global constraints imposed on the latent space to enforce cross-modal alignment. The method ultimately achieves bi-directional generation of visual stimuli and neural responses with a combination of the flow-based generator and the auto-encoder. The flow-based invertible generative model can minimize information losses and unify neural encoding and decoding into a single framework. Experimental results on different neural signals, including spike signals and functional magnetic resonance imaging, demonstrate that our model achieves the best comprehensive performance among the comparison models.
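
    The lossless, bijective property that motivates the flow-based design can be seen in a single RealNVP-style affine coupling layer, sketched below with toy scale/shift networks; this is illustrative, not the paper's architecture.

```python
# One affine coupling layer: forward() and inverse() are exact inverses,
# so no information is lost passing between the two spaces.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.scale = nn.Sequential(nn.Linear(half, half), nn.Tanh())
        self.shift = nn.Linear(half, half)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        y2 = x2 * torch.exp(self.scale(x1)) + self.shift(x1)
        return torch.cat([x1, y2], dim=-1)   # x1 passes through unchanged

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        x2 = (y2 - self.shift(y1)) * torch.exp(-self.scale(y1))
        return torch.cat([y1, x2], dim=-1)

layer = AffineCoupling(32)
x = torch.randn(4, 32)
print(torch.allclose(layer.inverse(layer(x)), x, atol=1e-5))  # True: lossless round trip
```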

    Nonlinear decoding of a complex movie from the mammalian retina

    This package contains data for the publication "Nonlinear decoding of a complex movie from the mammalian retina" by Deny S. et al., PLOS Comput Biol (2018). The data consist of (i) 91 spike-sorted, isolated rat retinal ganglion cells that pass stability and quality criteria, recorded on a multi-electrode array in response to the presentation of a complex movie with many randomly moving dark discs. The responses are represented as a 648000 x 91 binary matrix, where the first index indicates the timebin of duration 12.5 ms and the second index the neural identity; an entry is 1 if the neuron spiked in that time bin and 0 otherwise. (ii) A README file and a graphical illustration of the structure of the experiment, specifying how the 648000 timebins are split into epochs where 1, 2, 4, or 10 discs were displayed, and which stimulus segments are exact repeats or unique ball trajectories. (iii) A 648000 x 400 matrix of luminance traces for each of the 20 x 20 positions ("sites") in the movie frame, with time locked to the recorded raster. The luminance traces are produced, as described in the manuscript, by filtering the raw disc movie with a small Gaussian spatial kernel.
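
    A short sketch of how the two matrices line up, using the shapes stated above; the arrays here are zero-filled stand-ins, and the actual file names and loading details would come from the package's README.

```python
# Align the spike raster with the luminance traces bin-by-bin.
import numpy as np

BIN_MS = 12.5
spikes = np.zeros((648000, 91), dtype=np.uint8)        # stand-in for the binary raster
luminance = np.zeros((648000, 400), dtype=np.float32)  # stand-in for the 20x20 site traces

# Row t of both matrices covers the same 12.5 ms timebin, so decoding pairs
# a window of population activity with the luminance at each of the 400 sites.
t = 1000
frame = luminance[t].reshape(20, 20)   # luminance over the 20x20 grid at bin t
active = np.nonzero(spikes[t])[0]      # neurons that spiked in that bin
print(f"bin {t} starts at {t * BIN_MS:.1f} ms; {active.size} neurons spiked")
```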
