
    Review of analytical instruments for EEG analysis

    Since it was first used in 1926, EEG has been one of the most useful instruments of neuroscience. To start working with EEG data we need not only an EEG apparatus, but also analytical tools and the skills to understand what the data mean. This article describes several classical analytical tools as well as a new one that appeared only a few years ago. We hope it will be useful for researchers who have only recently started working in the field of cognitive EEG.

    Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet mathematical constraints such as sparse coding and positivity both provide alternate biologically plausible frameworks for generating brain networks. Non-negative Matrix Factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks for different constraints are used as basis functions to encode the observed functional activity at a given time point. These encodings are decoded using machine learning to compare both the algorithms and their assumptions, using the time series weights to predict whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. For classifying cognitive activity, the sparse coding algorithm of L1 Regularized Learning consistently outperformed 4 variations of ICA across different numbers of networks and noise levels (p < 0.001). The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy. Within each algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p < 0.001). The success of sparse coding algorithms may suggest that algorithms which enforce sparse coding, discourage multitasking, and promote local specialization may better capture the underlying source processes than those which allow inexhaustible local processes, such as ICA.
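As a minimal illustration of the positivity constraint the abstract describes, the sketch below factorizes a toy "activity" matrix with the classic Lee–Seung multiplicative updates for NMF. The data shapes, rank, and iteration count are arbitrary choices for the demo, not parameters from the study, and the paper's own NMF variants may differ in details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fMRI" data: 200 time points x 50 voxels, built from 3 nonnegative networks.
true_W = rng.random((200, 3))          # time-series weights
true_H = rng.random((3, 50))           # spatial networks
V = true_W @ true_H                    # observed (noise-free) activity

# NMF via Lee-Seung multiplicative updates: V ~ W @ H with W, H >= 0.
# Positivity is enforced by construction, which is how NMF suppresses
# negative signal.
k = 3
W = rng.random((200, k)) + 1e-3
H = rng.random((k, 50)) + 1e-3
eps = 1e-10
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

The columns of `H` play the role of the spatial networks, and the rows of `W` are the per-time-point encodings that the study feeds into a classifier.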

    Efficient fetal-maternal ECG signal separation from two channel maternal abdominal ECG via diffusion-based channel selection

    There is a need for affordable, widely deployable maternal-fetal ECG monitors to improve maternal and fetal health during pregnancy and delivery. Based on diffusion-based channel selection, we present here the mathematical formalism and clinical validation of an algorithm capable of accurately separating maternal and fetal ECG from a two-channel signal acquired over the maternal abdomen.

    An Extension of Slow Feature Analysis for Nonlinear Blind Source Separation

    We present and test an extension of slow feature analysis as a novel approach to nonlinear blind source separation. The algorithm relies on temporal correlations and iteratively reconstructs a set of statistically independent sources from arbitrary nonlinear instantaneous mixtures. Simulations show that it is able to invert a complicated nonlinear mixture of two audio signals with a reliability of more than 90%. The algorithm is based on a mathematical analysis of slow feature analysis for the case of input data that are generated from statistically independent sources.
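The paper extends slow feature analysis to the nonlinear case; as background, the sketch below implements only the standard linear SFA baseline on a synthetic two-source mixture (the toy signals and mixing matrix are assumptions for the demo). Linear SFA whitens the input and then finds the direction whose temporal derivative has the smallest variance.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 2000)

# Two latent sources with very different timescales, linearly mixed.
slow = np.sin(t)                 # slow source
fast = np.sin(37 * t)            # fast source
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])

# Linear slow feature analysis:
# 1) center and whiten the input,
# 2) diagonalize the covariance of the temporal derivative,
# 3) the eigenvector with the SMALLEST eigenvalue gives the slowest feature.
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(Xc.T @ Xc / len(Xc))
white = Xc @ E / np.sqrt(d)      # whitened signals, unit variance

dwhite = np.diff(white, axis=0)  # temporal derivative
dd, U = np.linalg.eigh(dwhite.T @ dwhite / len(dwhite))
slowest = white @ U[:, 0]        # the slowest extracted feature

# The recovered slowest feature should match the slow source up to sign/scale.
corr = abs(np.corrcoef(slowest, slow)[0, 1])
print(f"|correlation| with slow source: {corr:.3f}")
```

The nonlinear extension in the paper applies this kind of slowness objective iteratively to invert nonlinear mixtures, which the linear version above cannot do.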

    Unmixing Binocular Signals

    Incompatible images presented to the two eyes lead to perceptual oscillations in which one image at a time is visible. Early models portrayed this binocular rivalry as involving reciprocal inhibition between monocular representations of images, occurring at an early visual stage prior to binocular mixing. However, psychophysical experiments found conditions where rivalry could also occur at a higher, more abstract level of representation. In those cases, the rivalry was between image representations dissociated from eye-of-origin information, rather than between monocular representations from the two eyes. Moreover, neurophysiological recordings found the strongest rivalry correlate in inferotemporal cortex, a high-level, predominantly binocular visual area involved in object recognition, rather than in early visual structures. An unresolved issue is how the separate identities of the two images can be maintained after binocular mixing in order for rivalry to be possible at higher levels. Here we demonstrate that after the two images are mixed, they can be unmixed at any subsequent stage using a physiologically plausible non-linear signal-processing algorithm, non-negative matrix factorization, previously proposed for parsing object parts during object recognition. The possibility that unmixed left and right images can be regenerated at late stages within the visual system provides a mechanism for creating various binocular representations and interactions de novo in different cortical areas for different purposes, rather than inheriting them from early areas. This is a clear example of how non-linear algorithms can lead to highly non-intuitive behavior in neural information processing.

    Finding optimal frequency and spatial filters accompanying blind signal separation of EEG data for SSVEP-based BCI

    A brain-computer interface (BCI) is a device that allows paralyzed people to navigate a robot, prosthesis or wheelchair using only their own brain's reactions. By creating a direct communication pathway between the human brain and a machine, without muscle contractions or activity from within the peripheral nervous system, BCI makes it possible to map a person's intentions onto directive signals. One of the phenomena most commonly utilized in BCI is the steady-state visually evoked potential (SSVEP). If a subject focuses attention on a flashing stimulus (with a specified frequency) presented on the computer screen, a signal of the same frequency will appear in his or her visual cortex, where it can be measured. When there is more than one stimulus on the screen (each flashing with a different frequency), then based on the outcomes of the signal analysis we can predict which of these objects (e.g., rectangles) the subject was/is looking at at that particular moment. Proper preprocessing steps were taken in order to obtain maximally accurate stimulus recognition (as the specific frequency). In the current article, we compare various preprocessing and processing methods for BCI purposes. Combinations of spatial and temporal filtration methods and the subsequent blind source separation (BSS) were evaluated in terms of the resulting decoding accuracy. Canonical correlation analysis (CCA) was used for signal classification.
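The CCA classification step the abstract mentions can be sketched as follows: correlate a multichannel EEG segment with sine/cosine reference templates at each candidate stimulus frequency and pick the frequency with the largest canonical correlation. The sampling rate, segment length, candidate frequencies, and synthetic "EEG" below are assumptions for the demo, not the study's actual settings.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

fs = 250.0                         # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)      # a 2-second EEG segment

rng = np.random.default_rng(2)
f_target = 12.0                    # the stimulus the subject is attending
eeg = np.column_stack([
    np.sin(2 * np.pi * f_target * t + p) + 0.8 * rng.standard_normal(len(t))
    for p in (0.0, 0.7, 1.4)       # three noisy channels, arbitrary phases
])

# CCA-based SSVEP detection: score each candidate flashing frequency
# with sine/cosine templates at the fundamental and second harmonic.
candidates = [8.0, 10.0, 12.0, 15.0]
scores = []
for f in candidates:
    ref = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                           np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])
    scores.append(max_canonical_corr(eeg, ref))

detected = candidates[int(np.argmax(scores))]
print(f"detected stimulus frequency: {detected} Hz")
```

In a real pipeline this scoring would follow the spatial/temporal filtering and BSS preprocessing the article compares.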

    Kurtosis-Based Blind Source Extraction of Complex Non-Circular Signals with Application in EEG Artifact Removal in Real-Time

    A new class of complex-domain blind source extraction algorithms suitable for the extraction of both circular and non-circular complex signals is proposed. This is achieved through sequential extraction based on the degree of kurtosis, in the presence of non-circular measurement noise. The existence and uniqueness analysis of the solution is followed by a study of fast-converging variants of the algorithm. The performance is first assessed through simulations on well-understood benchmark signals, followed by a case study on real-time artifact removal from EEG signals, verified using both qualitative and quantitative metrics. The results illustrate the power of the proposed approach in real-time blind extraction of general complex-valued sources.
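The core idea of extracting sources in order of kurtosis can be illustrated with a much simpler real-valued sketch: a FastICA-style fixed-point iteration that, after whitening, converges to the most kurtotic direction first (e.g., a spiky artifact). This is a simplified stand-in, not the paper's complex-domain algorithm, and the Laplacian/Gaussian toy sources and mixing matrix are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# One super-Gaussian (high-kurtosis) source mixed with a Gaussian one.
spiky = rng.laplace(size=n)            # kurtotic source (e.g. an artifact)
gauss = rng.standard_normal(n)
A = np.array([[1.0, 0.6], [0.4, 1.0]]) # mixing matrix
X = np.column_stack([spiky, gauss]) @ A.T

# Whiten the mixture.
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(Xc.T @ Xc / n)
Z = Xc @ E / np.sqrt(d)

# Kurtosis-based extraction (real-valued fixed point):
#   w <- E[z (w^T z)^3] - 3 w, then renormalize.
# The iteration converges to the most kurtotic direction, so the
# spiky source is extracted first; deflation would then remove it
# and repeat for the next-most-kurtotic source.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    y = Z @ w
    w = (Z * (y ** 3)[:, None]).mean(axis=0) - 3 * w
    w /= np.linalg.norm(w)

extracted = Z @ w
corr = abs(np.corrcoef(extracted, spiky)[0, 1])
print(f"|correlation| with kurtotic source: {corr:.3f}")
```

The complex, non-circular case treated in the paper additionally has to account for the pseudo-covariance of the signals, which this real-valued sketch ignores.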