
    Bits from Biology for Computational Intelligence

    Computational intelligence is broadly defined as biologically-inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information-theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment uniquely, redundantly, or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
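The core quantities the article introduces, entropy and mutual information, can be estimated from discretized neural data with a simple plug-in estimator. A minimal sketch in Python; the stimulus and spike vectors are invented toy data, not from the article:

```python
import math
from collections import Counter

def entropy(xs):
    """Plug-in Shannon entropy (bits) of a discrete sample."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

stim   = [0, 0, 1, 1, 0, 0, 1, 1]   # binary stimulus per time bin
spikes = [0, 0, 1, 1, 0, 1, 1, 1]   # response tracks the stimulus imperfectly
print(round(mutual_information(stim, spikes), 3))  # 0.549 bits
```

Plug-in estimates like this are biased upward for small samples, which is why the estimation techniques the article covers matter in practice.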

    Neural signals encoding shifts in beliefs

    Dopamine is implicated in a diverse range of cognitive functions including cognitive flexibility, task switching, and signalling novel or unexpected stimuli as well as advance information. There is also a longstanding line of thought that links dopamine with belief formation and, crucially, aberrant belief formation in psychosis. Integrating these strands of evidence suggests that dopamine plays a central role in belief updating and, more specifically, in the encoding of meaningful information content in observations. The precise nature of this relationship has remained unclear. To directly address this question, we developed a paradigm that allowed us to decompose two distinct types of information content: information-theoretic surprise, which reflects the unexpectedness of an observation, and epistemic value, which induces shifts in beliefs or, more formally, Bayesian surprise. Using functional magnetic resonance imaging in humans, we show that dopamine-rich midbrain regions encode shifts in beliefs, whereas surprise is encoded in prefrontal regions, including the pre-supplementary motor area and dorsal cingulate cortex. By linking putative dopaminergic activity to belief updating, these data provide a link to the false belief formation that characterises hyperdopaminergic states associated with idiopathic and drug-induced psychosis.
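The two measures the paradigm dissociates can be made concrete with a discrete belief over two hypotheses (a toy coin example, not the authors' task): Shannon surprise is the negative log-probability of the observation, while Bayesian surprise is the KL divergence from prior to posterior.

```python
import math

def bayes_update(prior, likelihoods):
    """Posterior over hypotheses and the marginal probability of the observation."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(post)
    return [p / z for p in post], z

def kl_bits(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [0.5, 0.5]           # hypotheses: fair coin vs. heads-biased coin
lik_heads = [0.5, 0.9]       # P(heads | hypothesis)
posterior, p_heads = bayes_update(prior, lik_heads)

shannon_surprise  = -math.log2(p_heads)        # unexpectedness: ~0.515 bits
bayesian_surprise = kl_bits(posterior, prior)  # belief shift:   ~0.060 bits
```

An observation can be fairly unsurprising yet still shift beliefs appreciably (and vice versa), which is what lets the two signals be decomposed experimentally.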

    Adaptive Filtering Enhances Information Transmission in Visual Cortex

    Sensory neuroscience seeks to understand how the brain encodes natural environments. However, neural coding has largely been studied using simplified stimuli. In order to assess whether the brain's coding strategy depends on the stimulus ensemble, we apply a new information-theoretic method that allows unbiased calculation of neural filters (receptive fields) from responses to natural scenes or other complex signals with strong multipoint correlations. In the cat primary visual cortex we compare responses to natural inputs with those to noise inputs matched for luminance and contrast. We find that neural filters adaptively change with the input ensemble so as to increase the information carried by the neural response about the filtered stimulus. Adaptation affects the spatial frequency composition of the filter, enhancing sensitivity to under-represented frequencies in agreement with optimal encoding arguments. Adaptation occurs over 40 s to many minutes, longer than most previously reported forms of adaptation. Comment: 20 pages, 11 figures, includes supplementary information
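For uncorrelated noise inputs, the simplest receptive-field estimate is the spike-triggered average; the paper's information-theoretic method generalizes this to natural stimuli with strong multipoint correlations, where the plain STA is biased. A toy STA sketch with invented data:

```python
def spike_triggered_average(stimulus, spikes, lags=3):
    """Average the stimulus window preceding each spike: a linear filter estimate."""
    windows = [stimulus[t - lags:t] for t, s in enumerate(spikes) if s and t >= lags]
    n = len(windows)
    return [sum(w[i] for w in windows) / n for i in range(lags)]

stimulus = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1]
# a cell that fires whenever the two preceding bins are both active
spikes = [0, 0] + [int(stimulus[t - 1] + stimulus[t - 2] >= 2)
                   for t in range(2, len(stimulus))]
sta = spike_triggered_average(stimulus, spikes)
print(sta)  # the two most recent lags dominate, recovering the cell's preference
```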

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. In particular, the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscientific data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. Comment: 27 pages, 7 figures, submitted to PLOS ONE
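A plug-in transfer entropy estimator with ensemble pooling can be sketched in a few lines (binary toy data; the estimator combined with the ensemble method in the paper is a nearest-neighbour method, and the GPU implementation is far more involved):

```python
import math
from collections import Counter

def transfer_entropy(trials):
    """TE (bits) from Y to X, pooling samples over an ensemble of (x, y) trials."""
    triples = Counter()                      # counts of (x_next, x_now, y_now)
    for x, y in trials:
        for t in range(len(x) - 1):
            triples[(x[t + 1], x[t], y[t])] += 1
    n = sum(triples.values())
    pair_xy, pair_xx, single = Counter(), Counter(), Counter()
    for (xn, xt, yt), c in triples.items():
        pair_xy[(xt, yt)] += c
        pair_xx[(xn, xt)] += c
        single[xt] += c
    te = 0.0
    for (xn, xt, yt), c in triples.items():
        p_joint = c / pair_xy[(xt, yt)]            # p(x_next | x_now, y_now)
        p_marg  = pair_xx[(xn, xt)] / single[xt]   # p(x_next | x_now)
        te += (c / n) * math.log2(p_joint / p_marg)
    return te

# two short trials in which x copies y with a one-step lag
trials = [([0, 0, 1, 0], [0, 1, 0, 0]),
          ([0, 1, 1, 0], [1, 1, 0, 1])]
print(round(transfer_entropy(trials), 4))  # 0.9183
```

Pooling counts across trials at matched time points is what lets the ensemble approach drop the stationarity assumption.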

    EEG-fMRI Based Information Theoretic Characterization of the Human Perceptual Decision System

    The modern metaphor of the brain is that of a dynamic information processing device. In the current study we investigate how a core cognitive network of the human brain, the perceptual decision system, can be characterized regarding its spatiotemporal representation of task-relevant information. We capitalize on a recently developed information-theoretic framework for the analysis of simultaneously acquired electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data (Ostwald et al. (2010), NeuroImage 49: 498–516). We show how this framework naturally extends from previous validations in the sensory to the cognitive domain and how it enables an economical description of neural spatiotemporal information encoding. Specifically, based on simultaneous EEG-fMRI data features from n = 13 observers performing a visual perceptual decision task, we demonstrate how the information-theoretic framework is able to reproduce earlier findings on the neurobiological underpinnings of perceptual decisions from the response signal features' marginal distributions. Furthermore, using the joint EEG-fMRI feature distribution, we provide novel evidence for a highly distributed and dynamic encoding of task-relevant information in the human brain.
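Why a joint feature distribution can reveal encoding that neither marginal shows is illustrated by a synergistic toy case: each feature alone carries zero bits about the task variable, but the pair determines it completely. (Invented binary data; the study's features are real EEG and fMRI signals.)

```python
import math
from collections import Counter

def entropy(xs):
    """Plug-in Shannon entropy (bits) of a discrete sample."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mi(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

eeg  = [0, 0, 1, 1]
fmri = [0, 1, 0, 1]
task = [0, 1, 1, 0]                        # XOR of the two features

print(mi(eeg, task), mi(fmri, task))       # each marginal: 0.0 bits
print(mi(list(zip(eeg, fmri)), task))      # joint:         1.0 bits
```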

    Representation of acoustic communication signals by insect auditory receptor neurons

    Despite their simple auditory systems, some insect species recognize certain temporal aspects of acoustic stimuli with an acuity equal to that of vertebrates; however, the underlying neural mechanisms and coding schemes are only partially understood. In this study, we analyze the response characteristics of the peripheral auditory system of grasshoppers with special emphasis on the representation of species-specific communication signals. We use both natural calling songs and artificial random stimuli designed to focus on two low-order statistical properties of the songs: their typical time scales and the distribution of their modulation amplitudes. Based on stimulus reconstruction techniques and quantified within an information-theoretic framework, our data show that artificial stimuli with typical time scales of >40 msec can be read from single spike trains with high accuracy. Faster stimulus variations can be reconstructed only for behaviorally relevant amplitude distributions. The highest rates of information transmission (180 bits/sec) and the highest coding efficiencies (40%) are obtained for stimuli that capture both the time scales and amplitude distributions of natural songs. Use of multiple spike trains significantly improves the reconstruction of stimuli that vary on time scales <40 msec or feature amplitude distributions as occur when several grasshopper songs overlap. Signal-to-noise ratios obtained from the reconstructions of natural songs do not exceed those obtained from artificial stimuli with the same low-order statistical properties. We conclude that auditory receptor neurons are optimized to extract both the time scales and the amplitude distribution of natural songs. They are not optimized, however, to extract higher-order statistical properties of the song-specific rhythmic patterns.
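The information rates and coding efficiencies quoted above come from stimulus reconstruction, where a standard lower bound on the information rate integrates log2(1 + SNR) over frequency bands, and coding efficiency divides that rate by the spike train's entropy rate. A sketch with entirely hypothetical SNR and entropy-rate values:

```python
import math

def info_rate_lower_bound(snr_by_band, band_width_hz):
    """Lower bound on information rate (bits/s): sum of log2(1 + SNR) * df over bands."""
    return band_width_hz * sum(math.log2(1.0 + snr) for snr in snr_by_band)

# hypothetical reconstruction SNRs in three 20-Hz frequency bands
rate = info_rate_lower_bound([3.0, 1.0, 0.5], band_width_hz=20.0)
efficiency = rate / 160.0   # hypothetical spike-train entropy rate of 160 bits/s
print(round(rate, 1))       # 71.7 bits/s
```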

    Neuronal Spike Train Analysis in Likelihood Space

    Conventional methods for spike train analysis are predominantly based on the rate function. Additionally, many experiments have utilized a temporal coding mechanism. Several techniques have been used for analyzing these two sources of information separately, but using both sources in a single framework remains a challenging problem. Here, an innovative technique is proposed for spike train analysis that considers both rate and temporal information. A point process modeling approach is used to estimate the stimulus conditional distribution, based on observation of repeated trials. The extended Kalman filter is applied for estimation of the parameters in a parametric model. The marked point process strategy is used in order to extend this model from a single neuron to an entire neuronal population. Each spike train is transformed into a binary vector and then projected from the observation space onto the likelihood space. This projection generates a newly structured space that integrates temporal and rate information, thus improving performance of distribution-based classifiers. In this space, the stimulus-specific information is used as a distance metric between two stimuli. To illustrate the advantages of the proposed technique, spiking activity of inferior temporal cortex neurons in the macaque monkey is analyzed in both the observation and likelihood spaces. Based on goodness-of-fit, performance of the estimation method is demonstrated and the results are subsequently compared with the firing rate-based framework. From both the integration of rate and temporal information and the improvement in neural discrimination of stimuli, it may be concluded that the likelihood space generates a more accurate representation of stimulus space. Further, an understanding of the neuronal mechanism devoted to visual object categorization may be addressed in this framework as well.
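The projection into likelihood space can be sketched with independent Bernoulli-per-bin rate models (a simplification of the paper's point-process models fitted with the extended Kalman filter): each binary spike vector becomes a vector of log-likelihoods, one coordinate per stimulus model, and classification picks the largest coordinate. All rates below are invented.

```python
import math

def log_likelihood(spikes, rates):
    """Log-likelihood of a binary spike vector under per-bin firing probabilities."""
    return sum(math.log(r if s else 1.0 - r) for s, r in zip(spikes, rates))

def to_likelihood_space(spikes, models):
    """Project one spike train onto the likelihood axes, one per stimulus model."""
    return [log_likelihood(spikes, m) for m in models]

# hypothetical per-bin firing probabilities under two stimuli
models = [[0.8, 0.1, 0.1],   # stimulus A: early firing
          [0.1, 0.1, 0.8]]   # stimulus B: late firing
point = to_likelihood_space([1, 0, 0], models)
best = max(range(len(point)), key=lambda i: point[i])
print(best)  # 0 -> classified as stimulus A
```

Because both when and how often a neuron fires move the point in this space, rate and temporal codes are integrated in a single representation.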