    Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data

    Electroencephalography (EEG), magnetoencephalography (MEG) and related techniques are prone to glitches, slow drift, steps, etc., that contaminate the data and interfere with the analysis and interpretation. These artifacts are usually addressed in a preprocessing phase that attempts to remove them or minimize their impact. This paper offers a set of useful techniques for this purpose: robust detrending, robust rereferencing, outlier detection, data interpolation (inpainting), step removal, and filter ringing artifact removal. These techniques provide a less wasteful alternative to discarding corrupted trials or channels, and they are relatively immune to artifacts that disrupt alternative approaches such as filtering. Robust detrending allows slow drifts and common mode signals to be factored out while avoiding the deleterious effects of glitches. Robust rereferencing reduces the impact of artifacts on the reference. Inpainting allows corrupt data to be interpolated from intact parts based on the correlation structure estimated over the intact parts. Outlier detection allows the corrupt parts to be identified. Step removal fixes the high-amplitude flux jump artifacts that are common with some MEG systems. Ringing removal allows the ringing response of the antialiasing filter to glitches (steps, pulses) to be suppressed. The performance of the methods is illustrated and evaluated using synthetic data and data from real EEG and MEG systems. These methods, which are mainly automatic and require little tuning, can greatly improve the quality of the data.
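
    The robust detrending idea can be sketched in a few lines: fit a polynomial trend with per-sample weights, down-weight samples that deviate strongly from the fit (glitches), and iterate. This is a minimal illustration of the principle, not the authors' published implementation; the function name, polynomial order, and threshold are illustrative.

```python
import numpy as np

def robust_detrend(x, order=3, n_iter=3, thresh=3.0):
    """Fit and subtract a polynomial trend, iteratively down-weighting
    samples that deviate strongly from the fit (e.g. glitches)."""
    t = np.linspace(-1.0, 1.0, len(x))
    w = np.ones_like(x)
    for _ in range(n_iter):
        coefs = np.polyfit(t, x, order, w=w)        # weighted LS fit
        resid = x - np.polyval(coefs, t)
        sigma = np.std(resid[w > 0]) + 1e-12
        w = (np.abs(resid) < thresh * sigma).astype(float)  # mask outliers
    return x - np.polyval(coefs, t)

# Toy data: slow cubic drift, small noise, and one large glitch
rng = np.random.default_rng(0)
ts = np.linspace(0.0, 1.0, 1000)
x = 0.1 * rng.standard_normal(1000) + 5 * ts**3 - 2 * ts
x[500] += 50.0                                      # glitch
clean = robust_detrend(x)
```

    Because the glitch is masked after the first pass, the polynomial fit tracks only the slow drift, so the drift is removed while the glitch itself survives in the residual where outlier detection can find it.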

    Neural correlates of auditory pattern learning in the auditory cortex

    Learning of new auditory stimuli often requires repetitive exposure to the stimulus. Fast and implicit learning of sounds presented at random times enables efficient auditory perception. However, it is unclear how such sensory encoding is processed on a neural level. We investigated neural responses that develop from passive, repetitive exposure to a specific sound in the auditory cortex of anesthetized rats, using electrocorticography. We presented a series of random sequences that were generated afresh each time, except for a specific reference sequence that remained constant and re-appeared at random times across trials. We compared induced activity amplitudes between reference and fresh sequences. Neural responses from both primary and non-primary auditory cortical regions showed significantly decreased induced activity amplitudes for reference sequences compared to fresh sequences, especially in the beta band. This is the first study showing that neural correlates of auditory pattern learning can be evoked even in anesthetized, passively listening animal models.

    Decoding the auditory brain with canonical component analysis

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated “decoding” strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.
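
    The CCA computation itself reduces to plain linear algebra: whiten each dataset with the inverse square root of its covariance, then take an SVD of the whitened cross-covariance. Below is a minimal sketch on toy data with a single shared latent component, not the paper's stimulus-response pipeline; the regularization constant and toy dimensions are assumptions.

```python
import numpy as np

def cca(X, Y, n_comp=1, reg=1e-6):
    """CCA via whitening + SVD.  X: (samples, dx), Y: (samples, dy).
    Returns transforms A, B and the canonical correlations."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def isqrt(C):                       # inverse matrix square root
        d, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(d)) @ V.T

    Wx, Wy = isqrt(Cxx), isqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :n_comp], Wy @ Vt[:n_comp].T, s[:n_comp]

# Toy data: one latent source z seen (noisily) by both "stimulus" and "response"
rng = np.random.default_rng(1)
z = rng.standard_normal(2000)
X = np.outer(z, [1.0, -0.5]) + 0.5 * rng.standard_normal((2000, 2))
Y = np.outer(z, [0.3, 1.0, 0.2]) + 0.5 * rng.standard_normal((2000, 3))
A, B, corrs = cca(X, Y)
emp = np.corrcoef((X @ A)[:, 0], (Y @ B)[:, 0])[0, 1]  # matches corrs[0]
```

    The first singular value of the whitened cross-covariance is the first canonical correlation, which by construction equals the empirical correlation of the two canonical variates.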

    A simulation study: comparing independent component analysis and signal-space projection – source-informed reconstruction for rejecting muscle artifacts evoked by transcranial magnetic stimulation

    Introduction: The combination of transcranial magnetic stimulation (TMS) and electroencephalography (EEG) allows researchers to explore cortico-cortical connections. To study effective connections, the first few tens of milliseconds of the TMS-evoked potentials are the most critical. Yet, TMS-evoked artifacts complicate the interpretation of early-latency data. Data-processing strategies like independent component analysis (ICA) and the combined signal-space projection–source-informed reconstruction approach (SSP–SIR) are designed to mitigate artifacts, but their objective assessment is challenging because the true neuronal EEG responses under large-amplitude artifacts are generally unknown. Through simulations, we quantified how the spatiotemporal properties of the artifacts affect the cleaning performance of ICA and SSP–SIR.
    Methods: We simulated TMS-induced muscle artifacts and superposed them on pre-processed TMS–EEG data, which served as the ground truth. The simulated muscle artifacts were varied in both their topographies and temporal profiles. The signals were then cleaned using ICA and SSP–SIR and compared with the ground-truth data.
    Results: ICA performed better when the artifact time courses were highly variable across trials, whereas the effectiveness of SSP–SIR depended on the congruence between the artifact and neuronal topographies, with SSP–SIR performing better when the difference between topographies was larger. Overall, SSP–SIR outperformed ICA across the tested conditions and thus appears more effective at suppressing TMS-evoked muscle artifacts, which are highly time-locked to the TMS pulse and manifest in topographies that differ substantially from the patterns of neuronal potentials.
    Discussion: Selecting between ICA and SSP–SIR should be guided by the characteristics of the artifacts. SSP–SIR may be better equipped to suppress time-locked artifacts, provided that their topographies differ sufficiently from the neuronal potential patterns of interest and that the SSP–SIR algorithm can find those artifact topographies in the high-pass-filtered data. ICA remains a powerful tool for rejecting artifacts that are not strongly time-locked to the TMS pulse.
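
    The projection step at the heart of SSP can be sketched compactly: build an orthonormal basis for the artifact topographies and project the data onto its orthogonal complement. This shows only the SSP projection, not the full SSP–SIR algorithm (which additionally performs source-informed reconstruction); the toy channel counts and topographies are assumptions.

```python
import numpy as np

def ssp_project(data, artifact_topos):
    """Signal-space projection: remove the subspace spanned by the
    artifact topographies.  data: (channels, samples),
    artifact_topos: (channels, k)."""
    U, _ = np.linalg.qr(artifact_topos)     # orthonormal artifact basis
    P = np.eye(data.shape[0]) - U @ U.T     # projector onto complement
    return P @ data

# Toy data: 8 channels carrying a neural pattern plus a large muscle artifact
rng = np.random.default_rng(2)
n_ch, n_t = 8, 500
neural_topo = rng.standard_normal((n_ch, 1))
artifact_topo = rng.standard_normal((n_ch, 1))
data = neural_topo @ rng.standard_normal((1, n_t)) \
     + 20 * artifact_topo @ rng.standard_normal((1, n_t))
clean = ssp_project(data, artifact_topo)
# content along the artifact topography is removed (to numerical precision)
resid = artifact_topo.T @ clean / np.linalg.norm(artifact_topo)
```

    The projection also removes whatever part of the neural pattern overlaps the artifact topography, which is exactly why the paper finds SSP–SIR's performance to depend on how different the two topographies are.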

    Interference suppression techniques for OPM-based MEG: Opportunities and challenges

    One of the primary technical challenges facing magnetoencephalography (MEG) is that the magnitude of neuromagnetic fields is several orders of magnitude lower than interfering signals. Recently, a new type of sensor has been developed – the optically pumped magnetometer (OPM). These sensors can be placed directly on the scalp and move with the head during participant movement, making them wearable. This opens up a range of exciting experimental and clinical opportunities for OPM-based MEG experiments, including paediatric studies, and the incorporation of naturalistic movements into neuroimaging paradigms. However, OPMs face some unique challenges in terms of interference suppression, especially in situations involving mobile participants, and when OPMs are integrated with electrical equipment required for naturalistic paradigms, such as motion capture systems. Here we briefly review various hardware solutions for OPM interference suppression. We then outline several signal processing strategies aimed at increasing the signal from neuromagnetic sources. These include regression-based strategies, temporal filtering and spatial filtering approaches. The focus is on the practical application of these signal processing algorithms to OPM data. In a similar vein, we include two worked-through experiments using OPM data collected from a whole-head sensor array. These tutorial-style examples illustrate how the steps for suppressing external interference can be implemented, including the associated data and code so that researchers can try the pipelines for themselves. With the popularity of OPM-based MEG rising, there will be an increasing need to deal with interference suppression. We hope this practical paper provides a resource for OPM-based MEG researchers to build upon.
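
    The simplest of the regression-based strategies mentioned above can be sketched as least-squares regression of the sensor data onto reference sensors that see only the environmental interference, subtracting the fitted part. This is an illustrative toy, not the paper's worked examples; the sensor counts and noise levels are assumptions.

```python
import numpy as np

def regress_out(data, refs):
    """Remove interference captured by reference sensors via
    least-squares regression.  data: (samples, channels),
    refs: (samples, n_refs)."""
    beta, *_ = np.linalg.lstsq(refs, data, rcond=None)
    return data - refs @ beta

# Toy data: one environmental interference source, seen strongly by the
# OPM array and (with small sensor noise) by three reference sensors
rng = np.random.default_rng(3)
n = 5000
interference = rng.standard_normal((n, 1))
brain = 0.1 * rng.standard_normal((n, 4))
data = brain + interference @ rng.standard_normal((1, 4)) * 10
refs = interference @ rng.standard_normal((1, 3)) + 0.01 * rng.standard_normal((n, 3))
clean = regress_out(data, refs)
```

    Because the references contain (almost) no brain signal, subtracting the fitted interference leaves the neuromagnetic component largely intact; with references that leak brain activity, this simple scheme would also remove signal of interest.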

    A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding

    The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models is dependent on how the model parameters are estimated. There exist a number of model estimation methods that have been published, along with a variety of datasets. It is currently unclear if any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
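
    The family of regularized estimators compared in such studies can be illustrated with closed-form ridge regression for a backward model: decode a stimulus "envelope" from many noisy, correlated channels, where shrinkage stabilizes the ill-conditioned channel covariance. A minimal sketch on toy data, not the paper's benchmark; the lambda value, dimensions, and data are assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy backward model: 50 correlated channels, few training samples,
# so the covariance estimate benefits from shrinkage
rng = np.random.default_rng(4)
n, d = 100, 50
env = rng.standard_normal(n)                 # stimulus envelope
mix = rng.standard_normal(d)                 # how it projects to channels
X = np.outer(env, mix) + rng.standard_normal((n, d))
X_tr, y_tr, X_te, y_te = X[:60], env[:60], X[60:], env[60:]

w = ridge_fit(X_tr, y_tr, lam=100.0)
r = np.corrcoef(X_te @ w, y_te)[0, 1]        # held-out reconstruction accuracy
```

    In an attention-decoding setting, two such decoders (one per talker) would be fit and the talker whose reconstruction correlates best with the EEG-decoded envelope is classified as attended.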

    Phase separation of competing memories along the human hippocampal theta rhythm

    Competition between overlapping memories is considered one of the major causes of forgetting, and it is still unknown how the human brain resolves such mnemonic conflict. In the present magnetoencephalography (MEG) study, we empirically tested a computational model that leverages an oscillating inhibition algorithm to minimise overlap between memories. We used a proactive interference task, where a reminder word could be associated with either a single image (non-competitive condition) or two competing images, and participants were asked to always recall the most recently learned word–image association. Time-resolved pattern classifiers were trained to detect the reactivated content of target and competitor memories from MEG sensor patterns, and the timing of these neural reactivations was analysed relative to the phase of the dominant hippocampal 3 Hz theta oscillation. In line with our pre-registered hypotheses, target and competitor reactivations locked to different phases of the hippocampal theta rhythm after several repeated recalls. Participants who behaviourally experienced lower levels of interference also showed larger phase separation between the two overlapping memories. The findings provide evidence that the temporal segregation of memories, orchestrated by slow oscillations, plays a functional role in resolving mnemonic competition by separating and prioritising relevant memories under conditions of high interference.
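
    Analyses like this depend on extracting the instantaneous phase of the ~3 Hz theta component, typically by band-pass filtering and taking the angle of the analytic signal. A minimal FFT-based sketch of that step on a synthetic oscillation, not the authors' pipeline; the band edges and sampling rate are assumptions.

```python
import numpy as np

def analytic_phase(x, fs, band=(2.0, 4.0)):
    """Instantaneous phase of the theta component: band-pass in the
    frequency domain, keep only positive frequencies (doubled) to build
    the analytic signal, then take its angle."""
    n = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, 1.0 / fs)
    keep = (f >= band[0]) & (f <= band[1])   # positive-frequency band
    A = np.zeros_like(X)
    A[keep] = 2.0 * X[keep]                  # one-sided spectrum -> analytic
    return np.angle(np.fft.ifft(A))

# Synthetic check: a pure 3 Hz oscillation (plus a fast nuisance rhythm)
# should advance in phase by 2*pi*3/fs per sample
fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.sin(2 * np.pi * 3 * t) + 0.1 * np.sin(2 * np.pi * 30 * t)
phase = analytic_phase(x, fs)
slope = np.mean(np.diff(np.unwrap(phase[200:1800])))
```

    Given such a phase trace, each classifier-detected reactivation event can be assigned the theta phase at its time point, and phase separation between target and competitor events tested with circular statistics.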