14 research outputs found

    Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data

    Electroencephalography (EEG), magnetoencephalography (MEG) and related techniques are prone to glitches, slow drift, steps, etc., that contaminate the data and interfere with the analysis and interpretation. These artifacts are usually addressed in a preprocessing phase that attempts to remove them or minimize their impact. This paper offers a set of useful techniques for this purpose: robust detrending, robust rereferencing, outlier detection, data interpolation (inpainting), step removal, and filter ringing artifact removal. These techniques provide a less wasteful alternative to discarding corrupted trials or channels, and they are relatively immune to artifacts that disrupt alternative approaches such as filtering. Robust detrending allows slow drifts and common mode signals to be factored out while avoiding the deleterious effects of glitches. Robust rereferencing reduces the impact of artifacts on the reference. Inpainting allows corrupt data to be interpolated from intact parts based on the correlation structure estimated over the intact parts. Outlier detection allows the corrupt parts to be identified. Step removal fixes the high-amplitude flux jump artifacts that are common with some MEG systems. Ringing removal allows the ringing response of the antialiasing filter to glitches (steps, pulses) to be suppressed. The performance of the methods is illustrated and evaluated using synthetic data and data from real EEG and MEG systems. These methods, which are mainly automatic and require little tuning, can greatly improve the quality of the data.
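The core idea of robust detrending, fitting a slow trend while iteratively downweighting samples the fit cannot explain so that glitches do not bias it, can be sketched in a few lines. This is a simplified single-channel illustration under assumed defaults (polynomial basis, 3-sigma outlier threshold, function name invented here), not the authors' implementation:

```python
import numpy as np

def robust_detrend(x, order=3, n_iter=3, thresh=3.0):
    """Fit and remove a polynomial trend, iteratively zeroing the
    weights of outlier samples so glitches do not bias the fit."""
    t = np.linspace(-1.0, 1.0, len(x))        # normalized time axis
    basis = np.vander(t, order + 1)           # polynomial basis functions
    w = np.ones(len(x))                       # per-sample weights (1 = keep)
    for _ in range(n_iter):
        # weighted least-squares fit of the trend
        coef, *_ = np.linalg.lstsq(basis * w[:, None], x * w, rcond=None)
        resid = x - basis @ coef
        sigma = np.std(resid[w > 0])          # spread of the trusted samples
        w = (np.abs(resid) < thresh * sigma).astype(float)
    return x - basis @ coef, w

# Synthetic channel: noise + slow quadratic drift + a large glitch
rng = np.random.default_rng(0)
x = rng.standard_normal(1000) * 0.1
x += 5 * np.linspace(0, 1, 1000) ** 2         # slow drift
x[500:510] += 50                              # glitch
detrended, weights = robust_detrend(x)
```

The glitch samples end up with zero weight, so the polynomial tracks the drift rather than the glitch; an ordinary least-squares fit would be pulled toward the glitch and distort the data around it.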

    Neural correlates of auditory pattern learning in the auditory cortex

    Learning of new auditory stimuli often requires repetitive exposure to the stimulus. Fast and implicit learning of sounds presented at random times enables efficient auditory perception. However, it is unclear how such sensory encoding is processed on a neural level. We investigated neural responses that develop from passive, repetitive exposure to a specific sound in the auditory cortex of anesthetized rats, using electrocorticography. We presented a series of random sequences that were generated afresh each time, except for a specific reference sequence that remained constant and re-appeared at random times across trials. We compared induced activity amplitudes between reference and fresh sequences. Neural responses from both primary and non-primary auditory cortical regions showed significantly decreased induced activity amplitudes for reference sequences compared to fresh sequences, especially in the beta band. This is the first study to show that neural correlates of auditory pattern learning can be evoked even in anesthetized, passively listening animal models.

    Decoding the auditory brain with canonical component analysis

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated “decoding” strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.
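The core CCA transform can be sketched as whitening both datasets and taking an SVD of the whitened cross-covariance; the singular values are the canonical correlations. This toy sketch (two-dimensional data sharing one latent source) omits the time-lagged stimulus and response representations a real decoding pipeline would work with:

```python
import numpy as np

def cca(X, Y, n_comp=2, reg=1e-6):
    """Canonical correlation analysis: find transforms A, B such that
    corresponding columns of X @ A and Y @ B are maximally correlated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def inv_sqrt(C):                      # inverse matrix square root
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :n_comp], Wy @ Vt.T[:, :n_comp], s[:n_comp]

# Toy "stimulus" X and "response" Y sharing one latent component z
rng = np.random.default_rng(1)
z = rng.standard_normal(5000)
X = np.c_[z, rng.standard_normal(5000)] + 0.1 * rng.standard_normal((5000, 2))
Y = np.c_[rng.standard_normal(5000), z] + 0.1 * rng.standard_normal((5000, 2))
A, B, corrs = cca(X, Y)
```

Here the first canonical correlation is close to 1 (the shared latent source) while the second is close to 0, illustrating how CCA isolates the stimulus-related variance from everything else.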

    A simulation study: comparing independent component analysis and signal-space projection – source-informed reconstruction for rejecting muscle artifacts evoked by transcranial magnetic stimulation

    Introduction: The combination of transcranial magnetic stimulation (TMS) and electroencephalography (EEG) allows researchers to explore cortico-cortical connections. To study effective connections, the first few tens of milliseconds of the TMS-evoked potentials are the most critical. Yet, TMS-evoked artifacts complicate the interpretation of early-latency data. Data-processing strategies like independent component analysis (ICA) and the combined signal-space projection–source-informed reconstruction approach (SSP–SIR) are designed to mitigate artifacts, but their objective assessment is challenging because the true neuronal EEG responses under large-amplitude artifacts are generally unknown. Through simulations, we quantified how the spatiotemporal properties of the artifacts affect the cleaning performances of ICA and SSP–SIR.
    Methods: We simulated TMS-induced muscle artifacts and superposed them on pre-processed TMS–EEG data, serving as the ground truth. The simulated muscle artifacts were varied both in terms of their topography and temporal profiles. The signals were then cleaned using ICA and SSP–SIR, and subsequent comparisons were made with the ground-truth data.
    Results: ICA performed better when the artifact time courses were highly variable across trials, whereas the effectiveness of SSP–SIR depended on the congruence between the artifact and neuronal topographies: SSP–SIR performed better when the difference between the topographies was larger. Overall, SSP–SIR performed better than ICA across the tested conditions. Based on these simulations, SSP–SIR appears to be more effective at suppressing TMS-evoked muscle artifacts, which are highly time-locked to the TMS pulse and manifest in topographies that differ substantially from the patterns of neuronal potentials.
    Discussion: Selecting between ICA and SSP–SIR should be guided by the characteristics of the artifacts. SSP–SIR may be better equipped for suppressing time-locked artifacts, provided that their topographies are sufficiently different from the neuronal potential patterns of interest and that the SSP–SIR algorithm can successfully find those artifact topographies from the high-pass-filtered data. ICA remains a powerful tool for rejecting artifacts that are not strongly time-locked to the TMS pulse.
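The projection step of SSP–SIR can be sketched as removing the spatial subspace spanned by estimated artifact topographies. This shows only plain SSP on synthetic data; the SIR step, which compensates the spatial distortion the projection introduces, and the estimation of artifact topographies from high-pass-filtered data are omitted, and the function name is illustrative:

```python
import numpy as np

def ssp_project(data, artifact_topos):
    """Project multichannel data onto the orthogonal complement of the
    subspace spanned by the artifact topographies.

    data: (n_channels, n_samples); artifact_topos: (n_channels, k).
    """
    U, _ = np.linalg.qr(artifact_topos)        # orthonormal artifact basis
    P = np.eye(data.shape[0]) - U @ U.T        # out-projection operator
    return P @ data

# Synthetic example: weak neural activity plus a strong artifact with a
# fixed, muscle-like topography
rng = np.random.default_rng(2)
n_ch, n_t = 8, 1000
topo = rng.standard_normal((n_ch, 1))          # artifact topography
neural = 0.5 * rng.standard_normal((n_ch, n_t))
artifact = topo @ (10 * rng.standard_normal((1, n_t)))
cleaned = ssp_project(neural + artifact, topo)
```

The projection removes the artifact exactly when its topography is known, but it also removes whatever part of the neural activity lies in that direction; this is why the congruence between artifact and neuronal topographies governs performance, and why the SIR step is needed to compensate.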

    A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding

    The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models is dependent on how the model parameters are estimated. There exist a number of model estimation methods that have been published, along with a variety of datasets. It is currently unclear if any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
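A backward (decoding) model with ridge regularization can be sketched as regularized least squares from EEG channels to the attended-speech envelope. This toy version assumes an instantaneous mapping and skips the window of time lags a real decoder would use; the function name and `lam` value are illustrative, with `lam` standing in for the regularization parameter that the compared estimation methods set in different ways:

```python
import numpy as np

def fit_backward_model(eeg, envelope, lam=1.0):
    """Ridge regression decoder: weights w such that eeg @ w
    reconstructs the attended speech envelope."""
    X = eeg - eeg.mean(axis=0)
    y = envelope - envelope.mean()
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Toy data: the envelope leaks weakly into a few of many noisy channels
rng = np.random.default_rng(3)
n, n_ch = 2000, 32
env = rng.standard_normal(n)
eeg = rng.standard_normal((n, n_ch))
eeg[:, :4] += 0.5 * env[:, None]               # 4 envelope-carrying channels
w = fit_backward_model(eeg, env, lam=10.0)
recon = (eeg - eeg.mean(axis=0)) @ w
r = float(np.corrcoef(recon, env - env.mean())[0, 1])
```

Classification of the attended talker then reduces to comparing such reconstruction correlations against the envelopes of the competing streams; regularization matters in the backward direction because the decoder must invert a high-dimensional, noisy channel covariance.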

    Phase separation of competing memories along the human hippocampal theta rhythm

    Competition between overlapping memories is considered one of the major causes of forgetting, and it is still unknown how the human brain resolves such mnemonic conflict. In the present magnetoencephalography (MEG) study, we empirically tested a computational model that leverages an oscillating inhibition algorithm to minimise overlap between memories. We used a proactive interference task, where a reminder word could be associated with either a single image (non-competitive condition) or two competing images, and participants were asked to always recall the most recently learned word–image association. Time-resolved pattern classifiers were trained to detect the reactivated content of target and competitor memories from MEG sensor patterns, and the timing of these neural reactivations was analysed relative to the phase of the dominant hippocampal 3 Hz theta oscillation. In line with our pre-registered hypotheses, target and competitor reactivations locked to different phases of the hippocampal theta rhythm after several repeated recalls. Participants who behaviourally experienced lower levels of interference also showed larger phase separation between the two overlapping memories. The findings provide evidence that the temporal segregation of memories, orchestrated by slow oscillations, plays a functional role in resolving mnemonic competition by separating and prioritising relevant memories under conditions of high interference.
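The phase analysis rests on reading out the instantaneous phase of the ~3 Hz theta rhythm at the times of classifier-detected reactivations. A toy sketch with a synthetic oscillation and artificial event times; an FFT-based analytic signal stands in for whatever phase estimator the study used, and all names and values here are illustrative:

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase from the FFT-based analytic signal
    (the Hilbert-transform phase, for even-length real input)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0                      # keep positive frequencies, doubled
    return np.angle(np.fft.ifft(spec * h))

def circ_mean(phases):
    """Circular mean of a set of phase angles (radians)."""
    return np.angle(np.mean(np.exp(1j * phases)))

# Synthetic 3 Hz "theta"; "target" events on the peaks, "competitor"
# events half a cycle later, mimicking phase-separated reactivations
fs = 120
t = np.arange(0, 10, 1 / fs)
phase = analytic_phase(np.sin(2 * np.pi * 3 * t))
target_idx = np.arange(10, len(t), fs // 3)    # one event per theta cycle
compet_idx = target_idx + fs // 6              # half a cycle later
sep = abs(circ_mean(phase[target_idx]) - circ_mean(phase[compet_idx]))
```

Events locked half a cycle apart yield circular means separated by about pi radians; the study's analysis asks whether target and competitor reactivations show such a separation and whether its size tracks behavioural interference.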

    An Electroencephalographic Investigation of the Encoding of Sound Source Elevation in the Human Cortex

    Sound localization is of great ecological importance because it provides spatial perception outside the visual field. However, unlike other sensory systems, the auditory system does not represent the location of a stimulus on the level of the sensory epithelium in the cochlea. Instead, the position of a sound source has to be computed from different localization cues. Different cues are informative of a sound source's azimuth and elevation, which, taken together, describe the source's location in a polar coordinate system. There is a body of knowledge regarding the acoustical cues and the neural circuits in the brainstem required to perceive sound source azimuth and elevation. However, our understanding of the encoding of sound source location at the level of the cortex is lacking, especially concerning elevation. Within the scope of this thesis, we established an experimental setup to study auditory spatial perception while recording the listener's brain activity using electroencephalography. We conducted two experiments on the encoding of sound source elevation in the human cortex. The results of both experiments are compatible with the hypothesis that the cortex represents sound source elevation in a population rate code in which the response amplitude decreases linearly with increasing elevation. Decoding of the recorded brain activity revealed that a distinct neural representation of differently elevated sound sources was predictive of behavioral performance. An exploratory analysis indicated an increase in the amplitude of oscillations in visual areas when the subject localized sounds during eccentric eye positions. More research in this direction could help shed light on the interactions between the visual and auditory systems regarding spatial perception.
    The experiments presented in this dissertation are, to our knowledge, the first studies that demonstrate the encoding of sound source elevation in the human cortex by using a direct measure of neural activity (i.e., electroencephalography).
    Contents: Abstract; Zusammenfassung; 1 Electroencephalography (1.1 Event Related Potentials and Oscillations; 1.2 Comparison to other Methods; 1.3 EEG Apparatus; 1.4 Preprocessing: Filtering, Referencing, Eye Blinks, Epoch Rejection, Evaluation; 1.5 Analysis: Decoding, Nonparametric Permutation Testing, Source Separation); 2 Sound Localization in the Brain (2.1 The Spatial Perception of Sound: Interaural Cues, Spectral Cues; 2.2 Brain Mechanisms for Sound Localization: Auditory Pathway, Extracting Localization Cues, Neural Representation of Auditory Space, The Dual Pathway Model, A Dominant Hemisphere for Sound Localization?; 2.3 Summary); 3 A Free Field Setup for Psychoacoustics (3.1 Design of the Experimental Setup: Loudspeakers, Processors, Cameras, Coordinate Systems; 3.2 Operating the Setup: Workflow, Loudspeaker Equalization; 3.3 Head Pose Estimation: Landmark Detection, Perspective-n-Point Problem, Camera-to-World Conversion; 3.4 A Toolbox for Psychoacoustics); 4 A Linear Population Rate Code for Elevation (4.1 Methods: Participants, Experimental Protocol; 4.2 Results: Behavioral Performance, ERP Components, Elevation Encoding, Effect of Eye-Position; 4.3 Discussion); 5 Decoding of Brain Responses Predicts Localization Accuracy (5.1 Methods: Participants, Experimental Protocol; 5.2 Results: Behavioral Performance, ERP Components, Decoding Brain Activity, Topography of Elevation Encoding, Elevation Tuning, Hemispheric Lateralization; 5.3 Discussion); 6 Conclusions; A Tables; B Publication; Bibliography
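The hypothesized linear population rate code, in which response amplitude decreases linearly with elevation, implies a simple decoder: fit the amplitude-versus-elevation line, then invert it. A toy sketch in which the elevation values, slope, and noise level are all invented for illustration:

```python
import numpy as np

# Simulated trials: amplitude falls off linearly with elevation
# (elevation grid and slope are hypothetical, not the thesis values)
rng = np.random.default_rng(4)
elevations = np.repeat([-37.5, -12.5, 12.5, 37.5], 50)   # degrees
amps = 10.0 - 0.05 * elevations + rng.standard_normal(len(elevations))

# Fit amplitude = b0 + b1 * elevation, then invert the line to decode
b1, b0 = np.polyfit(elevations, amps, 1)
decoded = (amps - b0) / b1
```

Under this scheme, decoding accuracy is limited by trial-to-trial amplitude noise, which is consistent with the finding that the distinctness of the neural representation predicts behavioral localization performance.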