
    Neuronal Synchronization Can Control the Energy Efficiency of Inter-Spike Interval Coding

    The role of synchronous firing in sensory coding and cognition remains controversial. While some studies, focusing on its mechanistic consequences in attentional tasks, suggest that synchronization dynamically boosts sensory processing, others have failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter and study how a postsynaptic neuron transmits the average firing activity of a presynaptic population in the presence of synchronization. We apply the Berger-Levy theory of energy-efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model in which the firing rate and synchronization level of the presynaptic population are varied independently. We find that, for a fixed presynaptic firing rate, the simulated postsynaptic interspike interval distribution depends on the synchronization level and is well described by a generalized extreme value distribution. For synchronization levels between 15% and 50%, we compute the optimal distribution of presynaptic firing rates and find that the mutual information per unit cost is maximized at a synchronization level of ~30%. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.

    Comment: 47 pages, 14 figures, 2 tables
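    The GEV fit reported above is straightforward to reproduce on sample data. The sketch below, a minimal Python illustration, fits a generalized extreme value distribution to a set of interspike intervals with scipy; the ISI samples are synthetic placeholders, not the paper's Hodgkin-Huxley simulations, and all parameter values are illustrative assumptions.

    ```python
    # Fit a generalized extreme value (GEV) distribution to interspike
    # intervals (ISIs). The ISIs here are synthetic placeholders; in the
    # study they come from a Hodgkin-Huxley model driven at a fixed
    # presynaptic rate and synchronization level.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    isis = rng.gamma(shape=2.0, scale=0.01, size=5000)  # placeholder ISIs (s)

    # Maximum-likelihood fit of the GEV shape, location, and scale.
    c, loc, scale = stats.genextreme.fit(isis)
    print(f"GEV fit: shape={c:.3f}, loc={loc:.4f}, scale={scale:.4f}")

    # Quantify goodness of fit with a Kolmogorov-Smirnov test.
    ks_stat, p_value = stats.kstest(isis, "genextreme", args=(c, loc, scale))
    print(f"KS statistic={ks_stat:.3f}, p={p_value:.3f}")
    ```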

    Synchrony in Neuronal Communications: An Energy Efficient Scheme

    We are interested in understanding the neural correlates of attentional processes from first principles. Here we apply a recently developed first-principles approach that uses transmitted information in bits per joule to quantify the energy efficiency of information transmission for an inter-spike-interval (ISI) code that can be modulated by the synchrony in the presynaptic population. We simulate a single-compartment conductance-based model neuron driven by excitatory and inhibitory spikes from a presynaptic population, where the rate and synchrony of the presynaptic excitatory population may be varied independently of each other. We find that, for a fixed input rate, the ISI distribution of the postsynaptic neuron depends on the level of synchrony and is well described by a Gamma distribution for synchrony levels below 50%. For levels of synchrony between 15% and 50% (a range restricted for technical reasons), we compute the optimum input distribution that maximizes the mutual information per unit energy. This optimum distribution shows that an increased level of synchrony, as has been reported experimentally in attention-demanding conditions, reduces the mode of the input distribution and the excitability threshold of the postsynaptic neuron, facilitating more energy-efficient neuronal communication.

    Comment: 6 pages, 5 figures, accepted for publication at IWCIT 201
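    The optimization this abstract turns on, maximizing mutual information per unit energy over input distributions, can be sketched on a toy discrete channel. Everything below (the channel matrix, the per-symbol energy costs, and the naive grid search) is an illustrative assumption, not the paper's channel model or optimization method.

    ```python
    # Maximize bits per joule over input distributions on a toy channel.
    import numpy as np

    # p(y|x): rows are presynaptic rate levels, columns are ISI bins (toy values).
    P = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])
    cost = np.array([1.0, 2.0, 4.0])  # energy per input symbol (hypothetical)

    def bits_per_joule(q):
        """Mutual information I(X;Y) divided by expected energy, for input dist q."""
        py = q @ P                                     # output marginal p(y)
        mi = np.sum(q[:, None] * P * np.log2(P / py))  # I(X;Y) in bits
        return mi / (q @ cost)

    # Naive grid search over the probability simplex.
    best_q, best_val = None, -np.inf
    grid = np.linspace(0, 1, 101)
    for a in grid:
        for b in grid[grid <= 1 - a]:
            q = np.array([a, b, max(1 - a - b, 0.0)])
            val = bits_per_joule(q)
            if val > best_val:
                best_q, best_val = q, val
    print(f"optimal input distribution {best_q}, {best_val:.3f} bits/joule")
    ```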

    Minimum-error, energy-constrained source coding by sensory neurons

    Neural coding, the process by which neurons represent, transmit, and manipulate physical signals, is critical to the function of the nervous system. Despite years of study, neural coding is still not fully understood. Efforts to model neural coding could improve both the understanding of the nervous system and the design of artificial devices which interact with neurons. Sensory receptors and neurons transduce physical signals into a sequence of action potentials, called a spike train. The principles which underlie the translation from signal to spike train are still under investigation. From the perspective of an organism, neural codes which maximize the fidelity of the encoded signal (minimize encoding error) provide a competitive advantage. Selective pressure over evolutionary timescales has likely encouraged neural codes which minimize encoding error. At the same time, neural coding is metabolically expensive, which suggests that selective pressure would also encourage neural codes which minimize energy. Based on these assumptions, this work proposes a principle of neural coding which captures the trade-off between error and energy as a constrained optimization problem: minimize encoding error subject to a constraint on energy.

    A solution to the proposed optimization problem is derived in the limit of high spike rates. The solution is to track the instantaneous reconstruction error and to time spikes when the error crosses a threshold value. In the limit of large signals the threshold level is a constant, but in general it is signal-dependent. This coding model, called the neural source coder, implies that neurons should be able to track reconstruction error internally, using the error signal to precisely time spikes. Mathematically, this model is similar to existing adaptive-threshold models, but it provides a new way to understand coding by sensory neurons.

    Comparing the predictions of the neural source coder to experimental data recorded from a peripheral neuron, the coder is able to predict spike times with considerable accuracy. Intriguingly, this is also true for a cortical neuron with a low spike rate. Reconstructions using the neural source coder show lower error than other spiking neuron models. The neural source coder also predicts the asymmetric spike-rate adaptation seen in sensory neurons (the primary-like response). An alternative expression for the neural source coder is as an instantaneous-rate coder of a rate function which depends on the signal, the signal derivative, and the encoding parameters. The instantaneous rate closely predicts experimental peri-stimulus time histograms. The addition of a stochastic threshold to the neural source coder accounts for the spike-time jitter observed in experimental datasets. Jittered spike trains from the neural source coder show long-term interval statistics which closely match experimental recordings from a peripheral neuron. Moreover, the spike trains have strongly anti-correlated intervals, a feature observed in experimental data. Interestingly, jittered spike trains do not improve reconstruction error for an individual neuron, but reconstruction error is reduced in simulations of small populations of independent neurons. This suggests that jittered spike trains provide a method for small populations of sensory neurons to improve encoding error.

    Finally, a sound-coding method which applies the neural source coder to timing stimulation pulses for cochlear implants is proposed. For each channel of the cochlear implant, a neural source coder can be used to time pulses to follow the patterns expected by peripheral neurons. Simulations show reduced reconstruction error compared to standard approaches using the signal envelope. Initial experiments with normal-hearing subjects show that a vocoder simulating this cochlear implant sound-coding approach results in better speech perception thresholds than a standard noise vocoder. Although further experiments with cochlear implant users are critical, initial results encourage further study of the proposed sound-coding method.

    Overall, the proposed principle of minimum-error, energy-constrained encoding for sensory neural coding can be implemented by a spike-timing model with a feedback loop which computes reconstruction error. This model of neural source coding predicts a wide range of experimental observations from both peripheral and cortical neurons. The close agreement between experimental data and the predictions of the neural source coder suggests a fundamental principle underlying neural coding.
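    The core encoding loop described above (track the instantaneous reconstruction error, spike when it crosses a threshold) is compact enough to sketch directly. The exponential reconstruction kernel, time constant, and threshold below are illustrative assumptions, not the thesis's fitted model.

    ```python
    # Error-threshold spike timing: reconstruct the signal from past spikes
    # with a leaky (exponentially decaying) kernel and emit a spike whenever
    # the instantaneous reconstruction error reaches the threshold.
    import numpy as np

    def neural_source_coder(signal, dt=1e-4, tau=0.02, threshold=0.1):
        """Return spike times (s) for `signal`, sampled at interval `dt`."""
        recon = 0.0
        decay = np.exp(-dt / tau)
        spikes = []
        for i, s in enumerate(signal):
            recon *= decay                 # reconstruction leaks between spikes
            if s - recon >= threshold:     # error crossed threshold: spike now
                spikes.append(i * dt)
                recon += threshold         # spike kernel knocks the error back
        return np.array(spikes)

    # Usage: encode a slow sinusoid and count the resulting spikes.
    t = np.arange(0.0, 1.0, 1e-4)
    stim = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)
    print(f"{neural_source_coder(stim).size} spikes in 1 s")
    ```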

    Functional Brain Networks Develop from a “Local to Distributed” Organization

    The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward ‘segregation’ (a general decrease in correlation strength) between regions close in anatomical space and ‘integration’ (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that should extend to graph-theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more “distributed” architecture in young adults. We argue that this “local to distributed” developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing “small-world”-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.
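    The graph measures named in this abstract are easy to reproduce on a stand-in network. The sketch below uses networkx with a small-world random graph as a placeholder for a thresholded rs-fcMRI correlation network; the greedy modularity routine is one common choice for community detection and may differ from the algorithm used in the study.

    ```python
    # Clustering, path length, and modularity communities on a placeholder graph.
    import networkx as nx
    from networkx.algorithms import community

    # Stand-in for a thresholded functional connectivity network (90 "regions").
    G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=1)

    # "Small-world"-like graphs pair high clustering with short path lengths.
    print(f"clustering coefficient: {nx.average_clustering(G):.3f}")
    print(f"average path length:    {nx.average_shortest_path_length(G):.3f}")

    # Community detection by (greedy) modularity optimization.
    communities = community.greedy_modularity_communities(G)
    print(f"{len(communities)} communities, sizes {[len(c) for c in communities]}")
    ```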

    High speed event-based visual processing in the presence of noise

    Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel, application-focused event-based algorithms developed here are designed primarily for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals, and processing speed in embedded systems.
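    The abstract does not name the algorithms themselves, so purely as a generic illustration the sketch below implements a standard spatiotemporal-correlation ("background activity") filter, a common baseline for event-camera denoising: an event is kept only if a nearby pixel fired recently.

    ```python
    # Background-activity filtering for event streams: an event counts as
    # signal if any pixel in its 3x3 neighbourhood fired within dt_us.
    import numpy as np

    def filter_events(events, width, height, dt_us=5000):
        """Filter time-sorted (t_us, x, y, polarity) events, keeping those
        with recent support in their 3x3 spatial neighbourhood."""
        last_ts = np.full((height, width), -np.inf)  # last event time per pixel
        kept = []
        for t, x, y, pol in events:
            x0, x1 = max(x - 1, 0), min(x + 2, width)
            y0, y1 = max(y - 1, 0), min(y + 2, height)
            if (t - last_ts[y0:y1, x0:x1] <= dt_us).any():
                kept.append((t, x, y, pol))
            last_ts[y, x] = t  # update after the test: no self-support
        return kept

    # Two nearby events pass; the isolated one is rejected as noise.
    events = [(100, 10, 10, 1), (150, 11, 10, 1), (99000, 50, 50, 0)]
    print(filter_events(events, width=128, height=128))
    ```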

    Advances in Neural Signal Processing

    Neural signal processing is a specialized area of signal processing aimed at extracting information or decoding intent from neural signals recorded from the central or peripheral nervous system. It has significant applications in neuroscience and neural engineering, most famously in brain–machine interfaces. This book presents recent advances in this flourishing field of neural signal processing, with demonstrative applications.

    The role of modelling and computer simulations at various levels of brain organisation

    Computational modelling and simulations are critical analytical tools in contemporary neuroscience. Models at various levels of abstraction, corresponding to levels of organisation of the brain, attempt to capture different neuronal or cognitive phenomena. This thesis discusses several models and applies them to behavioural and electrophysiological data. First, we model a voluntary decision process in a task where the two available options carry the same probability of reward. Trial-by-trial accumulation rates are modulated by single-trial EEG features. Hierarchical Bayesian parameter estimation shows that the probability of reward is associated with changes in the speed of evidence accumulation. Second, we use a pairwise maximum entropy model (pMEM) to quantify irregularities in MEG resting-state networks in juvenile myoclonic epilepsy (JME) patients relative to healthy controls. The JME group exhibited on average fewer local minima of the pMEM energy landscape than controls in the fronto-parietal network. Our results show the pMEM to be a descriptive, generative model for characterising atypical functional network properties in brain disorders. Next, we use a hierarchical drift-diffusion model (HDDM) to study the integration of information from multiple sources. We observe imperfect integration in the accumulation of both congruent and incongruent evidence. Based on the fitted HDDM parameters, we hypothesise about the neuronal implementation by extending a biologically plausible neural mass model of decision making. Finally, we propose a spiking neuron model that unifies various components of inferential decision-making systems. The model includes populations corresponding to anatomical regions, e.g. the dorsolateral prefrontal cortex, orbitofrontal cortex, and basal ganglia. It consists of 8000 neurons and realises dedicated cognitive operations such as weighted valuation of inputs, competition between potential actions, and urgency-mediated modulation. Overall, this work paves the way for closer integration of theoretical models with behavioural and neuroimaging data.
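    Of the models listed, the pMEM energy landscape is the most compact to illustrate. The sketch below evaluates the pairwise maximum entropy energy E(s) = -sum_i h_i*s_i - sum_{i<j} J_ij*s_i*s_j over all binary states of a small toy system and counts local minima, i.e. states that no single-element flip can lower; h and J are random placeholders rather than parameters fitted to MEG data.

    ```python
    # Count local minima of a pairwise maximum entropy model (pMEM) energy
    # landscape by exhaustive enumeration of a small binary system.
    import itertools
    import numpy as np

    rng = np.random.default_rng(2)
    n = 8                                # toy number of regions
    h = rng.normal(0.0, 1.0, n)          # placeholder biases
    J = np.triu(rng.normal(0.0, 0.3, (n, n)), k=1)  # placeholder couplings, i<j

    def energy(s):
        """pMEM energy E(s) = -h.s - sum_{i<j} J_ij s_i s_j for s in {0,1}^n."""
        return -h @ s - s @ J @ s

    def is_local_minimum(s):
        e = energy(s)
        for i in range(n):               # test every single-element flip
            t = s.copy()
            t[i] = 1 - t[i]
            if energy(t) < e:
                return False
        return True

    states = (np.array(bits) for bits in itertools.product([0, 1], repeat=n))
    minima = sum(is_local_minimum(s) for s in states)
    print(f"{minima} local minima out of {2 ** n} states")
    ```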