
    Applications of Information Theory to Analysis of Neural Data

    Information theory is a practical and theoretical framework developed for the study of communication over noisy channels. Its probabilistic basis and its capacity to relate statistical structure to function make it ideally suited to studying information flow in the nervous system. It has a number of useful properties: it is a general measure, sensitive to any relationship rather than only linear effects; it has meaningful units, which in many cases allow direct comparison between different experiments; and it can be used to study how much information can be gained by observing neural responses in single trials, rather than in averages over multiple trials. A variety of information-theoretic quantities are commonly used in neuroscience (see the entry "Definitions of Information-Theoretic Quantities"). In this entry we review some applications of information theory in neuroscience to the study of information encoding in both single neurons and neuronal populations. Comment: 8 pages, 2 figures
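The mutual information the abstract refers to can be illustrated with a minimal plug-in estimator for discrete stimulus/response sequences. This is only a sketch under our own assumptions: the function name and the toy binary data are ours, not from the entry, and real analyses would need the bias corrections discussed later in this listing.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for two discrete sequences.

    Empirical probabilities are taken directly from the sample, so the
    estimate is biased upward for small samples (the limited-sampling bias).
    """
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                py = np.mean(y == yv)
                mi += pxy * np.log2(pxy / (px * py))
    return mi
```

With a perfectly dependent binary pair this returns 1 bit; with an independent pair it returns 0, matching the idea of information as a general (not merely linear) dependence measure.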

    Time series causality analysis and EEG data analysis on music improvisation

    This thesis describes a PhD project on time series causality analysis and its applications. The project is motivated by two EEG measurements from music improvisation experiments, where we aim to use causality measures to construct neural networks that identify the neural differences between improvisation and non-improvisation. The research builds on the mathematical background of time series analysis, information theory and network theory. We first studied a series of popular causality measures, namely the Granger causality, partial directed coherence (PDC), directed transfer function (DTF), transfer entropy (TE), conditional mutual information from mixed embedding (MIME) and partial MIME (PMIME), from which we proposed our own new measures: the direct transfer entropy (DTE) and wavelet-based extensions of MIME and PMIME. The new measures improve on the properties and applications of their parent measures, as verified by simulations and examples. Comparing the measures we studied, MIME was found to be the most useful causality measure for our EEG analysis. We therefore used MIME to construct both intra-brain and cross-brain neural networks for musicians and listeners during the music performances. Neural differences were identified in terms of the direction and distribution of neural information flows and the activity of large brain regions. Furthermore, we applied MIME to other EEG and financial data applications, where reasonable causality results were obtained.
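Of the causality measures listed, transfer entropy is the easiest to sketch. The following plug-in estimator with history length 1 is our own illustrative simplification (the thesis's measures handle longer embeddings and conditioning), and the toy sequences below are ours:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in transfer entropy (bits) from source to target, history length 1.

    TE = sum over (x_{t+1}, x_t, y_t) of
         p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]
    where x is the target series and y the source series.
    """
    s, t = np.asarray(source), np.asarray(target)
    triples = list(zip(t[1:], t[:-1], s[:-1]))   # (x_{t+1}, x_t, y_t)
    n = len(triples)
    c_abc = Counter(triples)
    c_bc = Counter((b, c) for _, b, c in triples)
    c_ab = Counter((a, b) for a, b, _ in triples)
    c_b = Counter(b for _, b, _ in triples)
    te = 0.0
    for (a, b, c), k in c_abc.items():
        te += (k / n) * np.log2(k * c_b[b] / (c_bc[(b, c)] * c_ab[(a, b)]))
    return te
```

When the target simply copies the source with a one-step delay, the estimate is close to 1 bit; for a constant target it is exactly zero, as no information flows.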

    A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings

    Background: Information theory is an increasingly popular framework for studying how the brain encodes sensory information. Despite its widespread use for the analysis of spike trains of single neurons and of small neural populations, its application to the analysis of other types of neurophysiological signals (EEGs, LFPs, BOLD) has so far remained relatively limited. This is due to the limited-sampling bias which affects calculation of information, to the complexity of the techniques needed to eliminate the bias, and to the lack of publicly available fast routines for the information analysis of multi-dimensional responses.
    Results: Here we introduce a new C- and Matlab-based information theoretic toolbox, specifically developed for neuroscience data. The toolbox implements a novel computationally optimized algorithm for estimating many of the main information theoretic quantities and bias correction techniques used in neuroscience applications. We illustrate and test the toolbox in several ways. First, we verify that these algorithms provide accurate and unbiased estimates of the information carried by analog brain signals (i.e. LFPs, EEGs, or BOLD) even when using limited amounts of experimental data. This test is important since existing algorithms had so far been tested primarily on spike trains. Second, we apply the toolbox to the analysis of EEGs recorded from a subject watching natural movies, and we characterize the electrode locations, frequencies and signal features carrying the most visual information. Third, we explain how the toolbox can be used to break down the information carried by different features of the neural signal into distinct components, reflecting the different ways in which correlations between parts of the neural signal contribute to coding. We illustrate this breakdown by analyzing LFPs recorded from primary visual cortex during the presentation of naturalistic movies.
    Conclusion: The new toolbox implements fast and data-robust computations of the most relevant quantities used in the information theoretic analysis of neural data. It can be used easily within Matlab, the environment used by most neuroscience laboratories for the acquisition, preprocessing and plotting of neural data. It can therefore significantly enlarge the domain of application of information theory in neuroscience, and lead to new discoveries about the neural code.
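To make the limited-sampling bias concrete, here is one of the simplest corrections, the Miller-Madow first-order term for entropy. This is our own illustration of the general idea; it is not necessarily among the correction techniques this particular toolbox implements:

```python
import numpy as np

def entropy_plugin(counts):
    """Naive (plug-in) entropy estimate in bits from a histogram of counts."""
    n = counts.sum()
    p = counts[counts > 0] / n
    return float(-(p * np.log2(p)).sum())

def entropy_miller_madow(counts):
    """Miller-Madow bias-corrected entropy estimate in bits.

    The plug-in estimator underestimates entropy by roughly (m - 1) / (2 n ln 2)
    bits, where m is the number of occupied bins and n the number of samples;
    adding that term back gives a first-order correction.
    """
    n = counts.sum()
    m = np.count_nonzero(counts)
    return entropy_plugin(counts) + (m - 1) / (2 * n * np.log(2))
```

Since information is a difference of entropies, correcting each entropy term this way reduces the bias of the information estimate when trial counts are small.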

    Mathematical Foundations of Equivariant Neural Networks

    Deep learning has revolutionized industry and academic research. Over the past decade, neural networks have been used to solve a multitude of previously unsolved problems and to significantly improve the state of the art on other tasks. However, training a neural network typically requires large amounts of data and computational resources. This is not only costly; it also prevents deep learning from being used for applications in which data is scarce. It is therefore important to simplify the learning task by incorporating inductive biases - prior knowledge and assumptions - into the neural network design. Geometric deep learning aims to reduce the amount of information that neural networks have to learn by taking advantage of geometric properties of data. In particular, equivariant neural networks use symmetries to reduce the complexity of a learning task. Symmetries are transformations that leave relevant properties of the data unchanged. For example, rotation-equivariant neural networks trained to identify tumors in medical images are not sensitive to the orientation of a tumor within an image. Another example is graph neural networks, i.e., permutation-equivariant neural networks that operate on graphs such as molecules or social networks. Permuting the ordering of vertices and edges either transforms the output of a graph neural network in a predictable way (equivariance) or has no effect on the output (invariance). In this thesis we study a fiber bundle theoretic framework for equivariant neural networks. Fiber bundles are often used in mathematics and theoretical physics to model nontrivial geometries, and they offer a geometric approach to symmetry. This framework connects to many different areas of mathematics, including Fourier analysis, representation theory, and gauge theory, and thus provides a large set of tools for analyzing equivariant neural networks.
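The equivariance/invariance distinction for graph networks can be checked numerically. In this sketch (our own toy construction, not from the thesis), a shared-weight node layer is permutation-equivariant, while a sum-pooling readout is permutation-invariant:

```python
import numpy as np

def node_layer(X, W):
    """Per-node transform with shared weights: permuting the node rows of X
    permutes the output rows the same way (equivariance)."""
    return np.tanh(X @ W)

def readout(X):
    """Sum-pool over nodes, then a fixed nonlinearity: the node ordering
    has no effect on the result (invariance)."""
    return np.tanh(X.sum(axis=0))

X = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.0]])  # 3 nodes, 2 features
W = np.array([[0.5], [-0.25]])
perm = [2, 0, 1]  # a reordering of the 3 nodes
```

Here `readout(X)` equals `readout(X[perm])`, while `node_layer(X[perm], W)` equals `node_layer(X, W)[perm]`, illustrating the two notions the abstract distinguishes.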

    Rough feature selection for intelligent classifiers

    Abstract. The last two decades have seen many powerful classification systems built for large-scale real-world applications. However, for all their accuracy, one of the persistent obstacles facing these systems is data dimensionality. To be effective, such systems usually require a redundancy-removing step to pre-process the given data. Rough set theory offers a useful, and formal, methodology that can be employed to reduce the dimensionality of datasets. It helps select the most information-rich features in a dataset without transforming the data, all the while attempting to minimise information loss during the selection process. Based on this observation, this paper discusses an approach to semantics-preserving dimensionality reduction, or feature selection, that simplifies domains to aid the development of fuzzy or neural classifiers. Computationally, the approach is highly efficient, relying on simple set operations only. The success of this work is illustrated by applying it to two real-world problems: industrial plant monitoring and medical image analysis.
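The set-operation flavour of rough-set feature selection can be sketched with the dependency degree and a greedy QUICKREDUCT-style search. This is a common textbook variant under our own assumptions (the data layout as a list of dicts and the function names are ours; the paper's exact algorithm may differ):

```python
from collections import defaultdict

def dependency(data, features, decision):
    """Rough-set dependency degree: the fraction of rows whose equivalence
    class under the chosen features is consistent in the decision attribute
    (the size of the positive region divided by the dataset size)."""
    decisions = defaultdict(set)
    sizes = defaultdict(int)
    for row in data:
        key = tuple(row[f] for f in features)
        decisions[key].add(row[decision])
        sizes[key] += 1
    pos = sum(n for key, n in sizes.items() if len(decisions[key]) == 1)
    return pos / len(data)

def quickreduct(data, all_features, decision):
    """Greedy reduct search: repeatedly add the feature that most increases
    the dependency degree until it matches that of the full feature set."""
    full = dependency(data, all_features, decision)
    reduct, best = [], 0.0
    while best < full:
        gains = {f: dependency(data, reduct + [f], decision)
                 for f in all_features if f not in reduct}
        f = max(gains, key=gains.get)
        reduct.append(f)
        best = gains[f]
    return reduct
```

On a toy table where attribute "a" alone determines the decision, the search returns just ["a"], discarding the redundant attribute without transforming any values.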