
    Blind source separation using temporal predictability

    A measure of temporal predictability is defined and used to separate linear mixtures of signals. Given any set of statistically independent source signals, it is conjectured here that a linear mixture of those signals has the following property: the temporal predictability of any signal mixture is less than (or equal to) that of any of its component source signals. It is shown that this property can be used to recover source signals from a set of linear mixtures by finding an un-mixing matrix that maximizes a measure of temporal predictability for each recovered signal. This matrix is obtained as the solution to a generalized eigenvalue problem; such problems scale as O(N³), where N is the number of signal mixtures. In contrast to independent component analysis, the temporal predictability method requires minimal assumptions regarding the probability density functions of the source signals. It is demonstrated that the method can separate signal mixtures in which each mixture is a linear combination of source signals with super-Gaussian, sub-Gaussian, and Gaussian probability density functions, as well as mixtures of voices and music.
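The recipe in the abstract — maximize the ratio of long-term to short-term prediction-error variance and solve a generalized eigenvalue problem — can be sketched as follows. This is a minimal reading of the method, not the authors' code: the exponential-moving-average predictors and the half-life values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def ema(x, halflife):
    """Exponential moving average along the time axis (axis=1)."""
    lam = 2.0 ** (-1.0 / halflife)
    y = np.empty_like(x)
    y[:, 0] = x[:, 0]
    for t in range(1, x.shape[1]):
        y[:, t] = lam * y[:, t - 1] + (1 - lam) * x[:, t]
    return y

def temporal_predictability_unmix(X, short_hl=1.0, long_hl=100.0):
    """Recover an un-mixing matrix (rows = un-mixing vectors) for the
    mixtures X (n_mixtures, n_samples) by maximizing the ratio of
    long-term to short-term prediction-error variance."""
    e_short = X - ema(X, short_hl)   # short-term prediction errors
    e_long = X - ema(X, long_hl)     # long-term prediction errors
    C_short = e_short @ e_short.T    # must be positive definite
    C_long = e_long @ e_long.T
    # Generalized eigenproblem: C_long w = lambda * C_short w
    vals, vecs = eigh(C_long, C_short)
    return vecs.T[::-1]              # rows sorted by decreasing predictability
```

Because the un-mixing matrix comes from a single eigendecomposition rather than an iterative fit to the source densities, the cost is dominated by the O(N³) eigensolve the abstract mentions.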

    Dynamic Decomposition of Spatiotemporal Neural Signals

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals.
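As a toy version of the linearized dynamic decomposition described above, a single rhythmic component can be modeled as a noise-driven damped oscillator and extracted with a Kalman filter. This is a sketch under assumed parameters (oscillator frequency, damping `rho`, noise variances `q` and `r`), not the paper's actual SDE/Gaussian-process framework.

```python
import numpy as np

def oscillator_kalman(y, freq, fs, rho=0.99, q=0.1, r=1.0):
    """Kalman-filter a noisy signal y with a single damped-oscillator
    state (a linearized rhythmic component); returns the filtered
    oscillatory component."""
    theta = 2 * np.pi * freq / fs
    # Damped rotation: the state spirals at the component frequency.
    F = rho * np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
    Q = q * np.eye(2)                  # process-noise covariance
    H = np.array([[1.0, 0.0]])         # observe the first state coordinate
    x, P = np.zeros(2), np.eye(2)
    I2 = np.eye(2)
    out = np.empty(len(y))
    for t, yt in enumerate(y):
        # Predict: rotate the state forward and inflate the covariance.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend in the new observation.
        S = float(H @ P @ H.T) + r
        K = (P @ H.T)[:, 0] / S
        x = x + K * (yt - float(H @ x))
        P = (I2 - np.outer(K, H[0])) @ P
        out[t] = x[0]
    return out
```

Stacking several such oscillator states (plus, say, a random-walk state for non-rhythmic activity) in one state vector gives the kind of component-wise decomposition the abstract describes.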

    Extensions of independent component analysis for natural image data

    An understanding of the statistical properties of natural images is useful for any kind of processing to be performed on them. Natural image statistics are, however, in many ways as complex as the world which they depict. Fortunately, the dominant low-level statistics of images are sufficient for many different image processing goals. A lot of research has been devoted to the second-order statistics of natural images over the years. Independent component analysis is a statistical tool for analyzing statistics of data sets beyond second order. It attempts to describe the observed data as a linear combination of independent, latent sources. Despite its simplicity, it has provided valuable insights into many types of natural data. With natural image data, it gives a sparse basis useful for efficient description of the data. Connections between this description and early mammalian visual processing have been noticed. The main focus of this work is to extend the known results of applying independent component analysis to natural images. We explore different imaging techniques, develop algorithms for overcomplete cases, and study the dependencies between the components, both by using a model that finds a topographic ordering for the components and by conditioning the statistics of a component on the activity of another. An overview is provided of the associated problem field, and it is discussed how these relatively small results may eventually be a part of a more complete solution to the problem of vision.
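The "sparse basis" role of ICA mentioned above can be illustrated without image data: a linear mixture of super-Gaussian (sparse) sources is unmixed by ICA back into a sparse code. The sketch below uses synthetic Laplacian sources as a stand-in for natural image patches, with scikit-learn's `FastICA` as the estimation algorithm.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical stand-in for image patches: a sparse linear latent model
# x = A s with Laplacian (super-Gaussian, i.e. sparse) sources.
rng = np.random.default_rng(0)
n_sources, n_samples = 8, 5000
S = rng.laplace(size=(n_samples, n_sources))   # sparse latent sources
A = rng.normal(size=(n_sources, n_sources))    # random "feature" basis
X = S @ A.T                                    # observed mixtures

ica = FastICA(n_components=n_sources, whiten="unit-variance", random_state=0)
S_hat = ica.fit_transform(X)                   # estimated sources
# A sparse code shows up as strongly positive excess kurtosis.
from scipy.stats import kurtosis
print(kurtosis(S_hat, axis=0).mean())
```

On real image patches, the columns of the estimated mixing matrix become localized, oriented, band-pass features — the Gabor-like basis that the text connects to early mammalian visual processing.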

    Signal-to-noise ratio of the MEG signal after preprocessing

    Background Magnetoencephalography (MEG) provides a direct measure of brain activity with high combined spatiotemporal resolution. Preprocessing is necessary to reduce contributions from environmental interference and biological noise. New method The effect of different preprocessing techniques on the signal-to-noise ratio is evaluated. The signal-to-noise ratio (SNR) was defined as the ratio between the mean signal amplitude (evoked field) and the standard error of the mean over trials. Results Recordings from 26 subjects obtained during an event-related visual paradigm with an Elekta MEG scanner were employed. Two methods were considered as first-step noise reduction: Signal Space Separation (SSS) and temporal Signal Space Separation (tSSS), which decompose the signal into components with origin inside and outside the head. Both algorithms increased the SNR by approximately 100%. Epoch-based methods, aimed at identifying and rejecting epochs containing eye blinks, muscular artifacts, and sensor jumps, provided an SNR improvement of 5–10%. The decomposition methods evaluated were independent component analysis (ICA) and second-order blind identification (SOBI). The increase in SNR was about 36% with ICA and 33% with SOBI. Comparison with existing methods No previous systematic evaluation of the effect of the typical preprocessing steps on the SNR of the MEG signal has been performed. Conclusions The application of either SSS or tSSS is mandatory in Elekta systems. No significant differences were found between the two. While epoch-based methods have been routinely applied, the less often considered decomposition methods were clearly superior and therefore their use seems advisable.
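The SNR figure of merit used above is simple to state: mean evoked amplitude divided by the standard error of the mean across trials. A minimal sketch, assuming single-channel epoched data and averaging over the whole epoch (the paper's exact windowing is not specified here):

```python
import numpy as np

def evoked_snr(epochs):
    """SNR = mean absolute evoked amplitude / mean standard error of the
    mean over trials. `epochs` has shape (n_trials, n_times)."""
    evoked = epochs.mean(axis=0)                              # evoked field
    sem = epochs.std(axis=0, ddof=1) / np.sqrt(epochs.shape[0])
    return float(np.abs(evoked).mean() / sem.mean())
```

Under this definition, a preprocessing step that halves the single-trial noise standard deviation roughly doubles the SNR — the scale on which the reported improvements (≈100% for SSS/tSSS, 5–10% for epoch rejection, ≈33–36% for SOBI/ICA) should be read.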

    Independent EEG Sources Are Dipolar

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’, defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual-information-based ICA methods. Though these and other commonly used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and the PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as volume-conducted projections of partially synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison).
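The pairwise-mutual-information bookkeeping above needs an MI estimator. A crude histogram-based sketch (the study's own estimator is not reproduced here, and the bin count is an arbitrary choice):

```python
import numpy as np

def pairwise_mi(x, y, bins=32):
    """Histogram estimate of the mutual information (in nats) between two
    signals. Mutual information reduction (MIR) is then the mean PMI over
    channel pairs minus the mean PMI over component pairs."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                        # joint distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())
```

A decomposition that leaves less PMI between its component time courses has achieved a larger MIR, the quantity the study found to vary linearly with dipolarity.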

    Methods and Approaches for Characterizing Learning Related Changes Observed in functional MRI Data — A Review

    Brain imaging data have so far revealed a wealth of information about the neuronal circuits involved in higher mental functions such as memory, attention, emotion, and language. Our efforts are toward understanding the learning-related effects in brain activity during the acquisition of visuo-motor sequential skills. The aim of this paper is to survey various methods and approaches of analysis that allow the characterization of learning-related changes in fMRI data. Traditional imaging analysis using the Statistical Parametric Mapping (SPM) approach averages out temporal changes and presents overall differences between different stages of learning. We outline other potential approaches for revealing learning effects, such as statistical time series analysis, modelling of the haemodynamic response function, and independent component analysis. We present example case studies from our visuo-motor sequence learning experiments to describe the application of SPM and statistical time series analyses. Our review highlights that the problem of characterizing learning-induced changes in fMRI data remains an interesting and challenging open research problem.

    Independent Component Analysis in a convoluted world


    Computational models relating properties of visual neurons to natural stimulus statistics

    The topic of this thesis is mathematical modeling of computations taking place in the visual system, the largest sensory system in the primate brain. While a great deal is known about how certain visual neurons respond to stimuli, a very profound question is why they respond as they do. Here this question is approached by formulating models of computation which might underlie the observed response properties. The main motivation is to improve our understanding of how the brain functions. A better understanding of the computational underpinnings of the visual system may also yield advances in medical technology or computer vision, such as the development of visual prostheses or the design of computer vision algorithms. In this thesis several models of computation are examined. An underlying assumption in this work is that the statistical properties of visual stimuli are related to the structure of the visual system. The relationship has formed through the mechanisms of evolution and development. A model of computation specifies this relationship between the visual system and stimulus statistics. Such a model also contains free parameters which correspond to properties of visual neurons. The experimental evaluation of a model consists of estimating these parameters from a large amount of natural visual data, and comparing the resulting parameter values against neurophysiological knowledge of the properties of the neurons, or results obtained with other models. The main contribution of this thesis is the introduction of new models of computation in the primary visual cortex. The results obtained with these models suggest that one defining feature of the computations performed by a class of neurons called simple cells is that the output of a neuron consists of periods of intense neuronal activity. It also seems that the activity levels of nearby simple cells are positively correlated over short time intervals. In addition, the probability of the occurrence of such regions of intense activity in the joint space of time and cortical area seems to be small. Another contribution of the thesis is the examination of the relationship between two previous computational models, namely independent component analysis and local spatial frequency analysis. This examination suggests that results obtained with independent component analysis share some important properties with wavelets, in the way their localization in space and frequency depends on their average spatial frequency.