
    Representation of Time-Varying Stimuli by a Network Exhibiting Oscillations on a Faster Time Scale

    Sensory processing is associated with gamma frequency oscillations (30–80 Hz) in sensory cortices. This raises the question of whether gamma oscillations can be directly involved in the representation of time-varying stimuli, including stimuli whose time scale is longer than a gamma cycle. We are interested in the ability of the system to reliably distinguish different stimuli while being robust to stimulus variations such as uniform time-warp. We address this issue with a dynamical model of spiking neurons and study the response to an asymmetric sawtooth input current over a range of shape parameters. These parameters describe how fast the input current rises and falls in time. Our network consists of inhibitory and excitatory populations that are sufficient for generating oscillations in the gamma range. The period of these oscillations is about one-third of the stimulus duration. Embedded in this network is a subpopulation of excitatory cells that responds to the sawtooth stimulus and a subpopulation of cells that responds to an onset cue. The intrinsic gamma oscillations generate a temporally sparse code for the external stimuli. In this code, an excitatory cell may fire a single spike during a gamma cycle, depending on its tuning properties and on the temporal structure of the specific input; the identity of the stimulus is coded by the list of excitatory cells that fire during each cycle. We quantify the properties of this representation in a series of simulations and show that the sparseness of the code makes it robust to uniform warping of the time scale. We find that resetting of the oscillation phase at stimulus onset is important for a reliable representation of the stimulus and that there is a tradeoff between the resolution of the neural representation of the stimulus and robustness to time-warp.
Author Summary: Sensory processing of time-varying stimuli, such as speech, is associated with high-frequency oscillatory cortical activity, the functional significance of which is still unknown. One possibility is that the oscillations are part of a stimulus-encoding mechanism. Here, we investigate a computational model of such a mechanism, a spiking neuronal network whose intrinsic oscillations interact with external input (waveforms simulating short speech segments in a single acoustic frequency band) to encode stimuli that extend over a time interval longer than the oscillation's period. The network implements a temporally sparse encoding, whose robustness to time warping and neuronal noise we quantify. To our knowledge, this study is the first to demonstrate that a biophysically plausible model of oscillations occurring in the processing of auditory input may generate a representation of signals that span multiple oscillation cycles.
Funding: National Science Foundation (DMS-0211505); Burroughs Wellcome Fund; U.S. Air Force Office of Scientific Research
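The temporally sparse code described above can be illustrated with a toy Python sketch. This is not the paper's biophysical spiking model: the cell count, the amplitude-threshold tuning rule, and all constants are our own simplifying assumptions. Each model cell fires at most once per gamma cycle when the stimulus reaches its tuned amplitude, and the stimulus identity is read out as the set of cells active in each cycle (three cycles per stimulus, matching the paper's ratio of oscillation period to stimulus duration).

```python
import numpy as np

def sawtooth(t, period=1.0, skew=0.8):
    """Asymmetric sawtooth input current: rises for skew*period, then falls.
    An illustrative stand-in for the paper's stimulus; parameters are ours."""
    phase = (t % period) / period
    return np.where(phase < skew, phase / skew, (1 - phase) / (1 - skew))

def cycle_code(stimulus, n_cells=20, gamma_period=0.333, duration=1.0, dt=0.001):
    """Toy temporally sparse code: within each gamma cycle, the cells whose
    amplitude threshold is reached by the stimulus fire a single spike; the
    code is the set of firing cells per cycle."""
    t = np.arange(0, duration, dt)
    s = stimulus(t)
    thresholds = np.linspace(0.05, 0.95, n_cells)  # each cell tuned to an amplitude
    n_cycles = int(round(duration / gamma_period))
    code = []
    for c in range(n_cycles):
        mask = (t >= c * gamma_period) & (t < (c + 1) * gamma_period)
        peak = s[mask].max()
        # a cell fires at most one spike per cycle, if its threshold is reached
        code.append(frozenset(np.flatnonzero(thresholds <= peak)))
    return code

code = cycle_code(sawtooth)  # three gamma cycles spanning one stimulus
```

Because each cell contributes at most one spike per cycle, the readout depends only on which cells cross threshold within a cycle, which is what makes this style of code tolerant of a uniform stretch of the stimulus time axis.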

    Book reports


    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One



    Parallel computing for brain simulation

    [Abstract] Background: The human brain is the most complex system in the known universe, and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities arise is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information in a way similar to the brain. Important technological developments and vast multidisciplinary projects have enabled the creation of the first simulations with a number of neurons comparable to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog, and hybrid models. The review covers the current applications of these works as well as future trends. It focuses both on works that pursue advanced progress in neuroscience and on others that seek new discoveries in computer science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, the review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
Funding: Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; GRC2014/049. Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; R2014/039. Instituto de Salud Carlos III; PI13/0028
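The digital models surveyed in this review rest on data-parallel updates of large neuron populations. A minimal sketch of that idea, assuming a toy leaky integrate-and-fire (LIF) population with invented constants and NumPy vectorization standing in for the true parallel hardware the review discusses:

```python
import numpy as np

def simulate_lif(n_neurons=1000, steps=500, dt=1e-3, tau=0.02,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, seed=0):
    """Data-parallel leaky integrate-and-fire update: all neurons advance one
    time step in a single vectorized operation (a toy stand-in for large-scale
    parallel digital brain simulations). Constants are illustrative only."""
    rng = np.random.default_rng(seed)
    v = np.full(n_neurons, v_rest)            # membrane potentials (mV)
    spike_counts = np.zeros(n_neurons, dtype=int)
    for _ in range(steps):
        i_ext = rng.normal(1.5, 0.5, n_neurons)          # noisy external drive
        v = v + (-(v - v_rest) + 20.0 * i_ext) * (dt / tau)
        fired = v >= v_thresh                 # boolean mask, computed in parallel
        spike_counts += fired
        v[fired] = v_reset                    # reset only the neurons that spiked
    return spike_counts
```

The same update pattern, one state vector advanced by identical per-neuron rules, is what maps naturally onto the MPI clusters, GPUs, and neuromorphic chips covered by the projects reviewed here.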

    Learning spatio-temporal spike train encodings with ReSuMe, DelReSuMe, and Reward-modulated Spike-timing Dependent Plasticity in Spiking Neural Networks

    Spiking neural networks (SNNs) are referred to as the third generation of artificial neural networks (ANNs). Inspired by biological observations and recent advances in neuroscience, the proposed methods increase the power of SNNs. Today, the main challenge is to discover efficient plasticity rules for SNNs. Our research aims to explore and extend computational models of plasticity. We make several contributions using ReSuMe, DelReSuMe, and R-STDP, all built on the fundamental plasticity rule of STDP. Information in SNNs is encoded in the patterns of firing activity. For biological plausibility, it is necessary to use multi-spike learning instead of single-spike learning; we therefore focus on encoding inputs and outputs using multiple spikes. ReSuMe is capable of generating desired patterns with multiple spikes: the trained neuron can fire at desired times in response to spatio-temporal inputs. We propose an alternative architecture for ReSuMe that deals with heterogeneous synapses, and demonstrate that the proposed topology exactly mimics ReSuMe. A novel extension of ReSuMe, called DelReSuMe, achieves better accuracy with fewer iterations by using multi-delay plasticity in addition to weight learning, under both noiseless and noisy conditions. The proposed heterogeneous topology is also used for DelReSuMe. Another plasticity extension based on STDP, named R-STDP, takes reward into account to modulate synaptic strength. We use dopamine-inspired STDP in SNNs to demonstrate improvements in mapping spatio-temporal patterns of spike trains with the multi-delay mechanism versus a single connection. From the viewpoint of machine learning, reinforcement learning is explored through a maze task in order to investigate the mechanisms of reward and eligibility trace, which are fundamental to R-STDP. To develop the approach, we implement temporal-difference learning and novel knowledge-based RL techniques on the maze task. We develop rule extractions that are combined with RL and wall-follower algorithms, and demonstrate improvements in the exploration efficiency of TD learning for maze navigation tasks.
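The reward and eligibility-trace mechanisms central to R-STDP can be sketched as follows. This is an illustrative pair-based implementation with invented time constants and learning rates, not the thesis's exact rule: STDP pairings accumulate in an eligibility trace rather than changing the weight directly, and a scalar dopamine-like reward signal then decides how much of the accumulated trace is consolidated into the synaptic weight.

```python
import numpy as np

def rstdp_update(pre_spikes, post_spikes, reward, w=0.5,
                 a_plus=0.01, a_minus=0.012, tau=20.0, tau_e=200.0, dt=1.0):
    """Reward-modulated STDP sketch. Spike trains are 0/1 sequences sampled
    every dt ms; all constants are illustrative. Pair-based STDP writes into
    an eligibility trace e; the reward (dopamine) signal gates how much of
    the trace becomes a lasting change to the weight w."""
    x_pre = x_post = e = 0.0
    for pre, post in zip(pre_spikes, post_spikes):
        # exponential decay of the pairing traces and the eligibility trace
        x_pre *= np.exp(-dt / tau)
        x_post *= np.exp(-dt / tau)
        e *= np.exp(-dt / tau_e)
        if pre:
            x_pre += 1.0
            e -= a_minus * x_post   # post-before-pre pairing: depression
        if post:
            x_post += 1.0
            e += a_plus * x_pre     # pre-before-post pairing: potentiation
    return w + reward * e           # dopamine gates the accumulated trace
```

With reward set to zero the weight is left untouched, which is the defining difference from plain STDP: correlations alone mark a synapse as eligible, but only a subsequent reward converts that eligibility into learning.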