
    Validating and improving the correction of ocular artifacts in electro-encephalography

    For modern applications of electro-encephalography, including brain-computer interfaces and single-trial event-related potential (ERP) detection, it is becoming increasingly important that artifacts are accurately removed from a recorded electro-encephalogram (EEG) without affecting the part of the EEG that reflects cerebral activity. Ocular artifacts are caused by movements of the eyes and the eyelids. They occur frequently in the raw EEG and are often its most prominent artifacts, so their accurate removal is an important procedure in nearly all electro-encephalographic research. As a result, a considerable number of ocular artifact correction methods have been introduced over the past decades; a selection of the most frequently used methods is given in Section 1.5. When two different correction methods are applied to the same raw EEG, the result is usually two different corrected EEGs. A measure of correction accuracy should indicate how well each of these corrected EEGs recovers the part of the raw EEG that truly reflects cerebral activity. That this accuracy cannot be determined directly from a raw EEG is intrinsic to the need for artifact removal: if it were possible to derive from a raw EEG an exact reference for what the corrected EEG should be, there would be no need for correction methods at all. The accuracy of correction methods is therefore mostly estimated either by using models to simulate EEGs and artifacts, or by manipulating experimental data so that the effects of artifacts on the raw EEG can be isolated. In this thesis, modeling of EEG and artifact is used to validate correction methods on simulated data. A new correction method is introduced which, unlike all existing methods, uses a camera to monitor eye(lid) movements as a basis for ocular artifact correction. The simulated data are used to estimate the accuracy of this new method and to compare it against the estimated accuracy of existing correction methods. The results of this comparison suggest that the new method significantly increases correction accuracy compared to the other methods. Next, an experiment is performed from which the accuracy of correction can be estimated on raw EEGs. Results on these experimental data agree very well with the results on the simulated data. It is therefore concluded that using a camera during EEG recordings provides valuable extra information that can be used in the process of ocular artifact correction. In Chapter 2, a model is introduced that assists in estimating the accuracy of eye movement artifact correction on simulated EEG recordings. This model simulates EEG and eye movement artifacts simultaneously, using a realistic representation of the head, multiple dipoles to model cerebral and ocular electrical activity, and the boundary element method to calculate changes in electrical potential at different positions on the scalp. With the model, it is possible to simulate data sets as if they were recorded using different electrode configurations. Signal-to-noise ratios (SNRs), computed before and after correction, are used to assess the accuracy of six different correction methods for various electrode configurations.
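    Because the simulations provide the artifact-free EEG directly, the SNR of any corrected EEG can be computed against that known reference. The snippet below is a minimal sketch of such a computation in Python; the function and array names are illustrative assumptions, and the exact SNR formulation used in the thesis may differ.

```python
import numpy as np

def snr_db(clean_eeg: np.ndarray, corrected_eeg: np.ndarray) -> float:
    """SNR (in dB) of a corrected EEG channel against the known
    artifact-free simulated EEG; the residual after correction is
    treated as the remaining noise."""
    noise = corrected_eeg - clean_eeg
    signal_power = np.mean(clean_eeg ** 2)
    noise_power = np.mean(noise ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

# For reference: 9 dB corresponds to a power ratio of 10 ** 0.9 ≈ 7.9,
# i.e. roughly eight times more signal power than residual noise.
```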
    Results show that, out of the six methods, second-order blind identification (SOBI) and multiple linear regression (MLR) correct most accurately overall, as they achieve the highest rise in signal-to-noise ratio. The occurrence of ocular artifacts is linked to changes in eyeball orientation. In Chapter 2 an eye tracker is used to record pupil position, which is closely linked to eyeball orientation, and this pupil position information is used in the model to simulate eye movements. Recognizing the potential benefit of using an eye tracker not only for simulation but also for correction, Chapter 3 introduces an eye movement artifact correction method that exploits the pupil position information provided by an eye tracker. Other correction methods use the electrooculogram (EOG) and/or the EEG to estimate ocular artifacts. Because both the EEG and the EOG are susceptible to cerebral as well as ocular activity, these methods are at risk of overcorrecting the raw EEG. Pupil position provides a reference that is linked to the ocular artifact in the EEG but cannot be affected by cerebral activity; as a result, the new correction method avoids traditionally problematic issues such as forward/backward propagation and evaluating the accuracy of component extraction. Using both simulated and experimental data, it is determined how pupil position influences the raw EEG, and this relation is found to be linear or quadratic. A Kalman filter is used to tune the parameters that specify the relation. On simulated data, the new method performs very well, resulting in an SNR after correction of over 10 dB for various patterns of eye movements. When compared to the three methods that performed best in the evaluation of Chapter 2, only SOBI, the best performer in that evaluation, shows similar results for some of the eye movement patterns. However, a serious limitation of the new correction method is its inability to correct blink artifacts. To broaden the range of applications for which it can be used, the method should be extended so that it can also correct the raw EEG for blink artifacts. Chapter 4 implements such improvements, based on the idea that a more advanced eye tracker should be able to detect both pupil position and eyelid position. The improved eye tracker-based ocular artifact correction method is named EYE. Driven by practical limitations of the eye tracking device currently available to us, an alternative way to estimate eyelid position is suggested, based on an EOG recorded above one eye. The EYE method can be used with either the eye tracker information or this EOG substitute. On simulated data, the accuracy of the EYE method is estimated using the EOG-based eyelid reference and is again compared against the six other correction methods. Two different SNR-based measures of accuracy are proposed: one quantifies the correction of the entire simulated data set, and the other focuses on the segments containing simulated blink artifacts. After applying EYE, an average SNR of at least 9 dB is achieved on both measures, implying that the power of the corrected signal is at least eight times the power of the remaining noise. The simulated data sets contain a wide range of eye movements and blink frequencies.
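    The eye tracker-based correction described above models the ocular artifact in an EEG channel as a linear or quadratic function of pupil position and subtracts the estimate. A minimal illustration of that regression idea in Python follows; the thesis tunes the coefficients with a Kalman filter so they can drift over a recording, whereas this sketch uses a simple batch least-squares fit, and all names are illustrative assumptions.

```python
import numpy as np

def correct_with_pupil(raw_eeg: np.ndarray, pupil_pos: np.ndarray) -> np.ndarray:
    """Remove an ocular artifact modeled as a linear + quadratic function
    of pupil position from one EEG channel (1-D arrays of equal length)."""
    # Design matrix: intercept, pupil position, squared pupil position.
    X = np.column_stack([np.ones_like(pupil_pos), pupil_pos, pupil_pos ** 2])
    # Batch least-squares estimate of the artifact coefficients
    # (the thesis uses a Kalman filter for time-varying estimates instead).
    coeffs, *_ = np.linalg.lstsq(X, raw_eeg, rcond=None)
    estimated_artifact = X @ coeffs
    return raw_eeg - estimated_artifact
```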
    For almost all of these data sets (16 out of 20), the correction results for EYE are better than for any of the other evaluated correction methods. On experimental data, the EYE method also appears to correct adequately for ocular artifacts. As the detection of eyelid position from the EOG is in principle inferior to detection with an eye tracker, these results should also be seen as an indication of the even higher accuracy that could be obtained with a more advanced eye tracker. Considering its simplicity, the MLR method also performs remarkably well, which may explain why EOG-based regression is still often used for correction. In Chapter 5, the simulation model of Chapter 2 is set aside and experimentally recorded data are instead manipulated in a way that highlights correction inaccuracies. The correction accuracies of eight methods, including EYE, are estimated on data recorded during stop-signal tasks. In the analysis of these tasks it is essential that ocular artifacts are adequately removed, because the task-related ERPs are located mostly at frontal electrode positions and have low amplitudes. These data are corrected and subsequently evaluated. For the eight methods, the overall ranking of estimated accuracy (Figure 5.3) corresponds very well with the ranking of correction accuracy found on simulated data in Chapter 4. In a single-trial correction comparison, the results suggest that the EYE-corrected EEG is not susceptible to overcorrection, whereas the other corrected EEGs are.

    Precision is in the Eye of the Beholder: Application of Eye Fixation-Related Potentials to Information Systems Research

    This paper introduces the eye fixation-related potential (EFRP) method to IS research. The EFRP method allows one to synchronize eye tracking with electroencephalographic (EEG) recording to precisely capture users’ neural activity at the exact time at which they start to cognitively process a stimulus (e.g., an event on the screen). This complements and overcomes some of the shortcomings of the traditional event-related potential (ERP) method, which can only time-stamp the moment at which a stimulus is presented to a user. Thus, we propose a method conjecture: EFRP is superior to ERP for capturing the cognitive processing of a stimulus when that processing is not necessarily synchronized with the time at which the stimulus appears. We illustrate the EFRP method with an experiment in a natural IS use context in which we asked users to read an industry report while email pop-up notifications arrived on their screen. The results support our proposed hypotheses and show three distinct neural processes associated with 1) the attentional reaction to the email pop-up notification, 2) the cognitive processing of the email pop-up notification, and 3) the motor planning activity involved in opening, or not opening, the email. Furthermore, additional analyses of the data gathered in the experiment serve to validate our method conjecture about the superiority of the EFRP method over the ERP method in natural IS use contexts. In addition to the experiment, our study discusses important IS research questions that could be pursued with the aid of EFRP, and describes a set of guidelines to help IS researchers use this method.
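    In practical terms, the EFRP approach amounts to epoching the continuous EEG around fixation-onset times delivered by the eye tracker rather than around stimulus-onset markers. The sketch below shows such fixation-locked epoching in Python, assuming the eye tracker and EEG have already been synchronized to a common clock; the window limits and variable names are illustrative, not the authors' pipeline.

```python
import numpy as np

def fixation_locked_epochs(eeg, fixation_onsets_s, sfreq, tmin=-0.2, tmax=0.6):
    """Cut fixation-locked epochs from continuous EEG.

    eeg:               array of shape (n_channels, n_samples)
    fixation_onsets_s: fixation onset times in seconds on the EEG clock
    sfreq:             EEG sampling rate in Hz
    Returns an array of shape (n_epochs, n_channels, n_epoch_samples).
    """
    start_off, stop_off = int(round(tmin * sfreq)), int(round(tmax * sfreq))
    epochs = []
    for t in fixation_onsets_s:
        onset = int(round(t * sfreq))
        start, stop = onset + start_off, onset + stop_off
        if start < 0 or stop > eeg.shape[1]:
            continue  # skip fixations too close to the recording edges
        epochs.append(eeg[:, start:stop])
    return np.stack(epochs)

# The EFRP is the average over these epochs (optionally after baseline
# correction over the pre-fixation interval), in contrast to ERPs, which
# are averaged around stimulus-onset markers.
```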

    Multidimensional en-face OCT imaging of the retina.

    Fast T-scanning (transverse scanning, en-face) was used to build B-scan or C-scan optical coherence tomography (OCT) images of the retina. Several unique signature patterns of en-face (coronal) OCT images are reviewed in conjunction with associated confocal images of the fundus and B-scan OCT images. The benefits of combining T-scan OCT with confocal imaging to generate pairs of OCT and confocal images, similar to those generated by scanning laser ophthalmoscopy (SLO), are discussed in comparison with spectral OCT systems. The multichannel potential of the OCT/SLO system is demonstrated by the addition of a third hardware channel which acquires and generates indocyanine green (ICG) fluorescence images. The OCT, confocal SLO, and ICG fluorescence images are presented simultaneously in a two- or three-screen format. A fourth channel, which displays a live mix of ICG-sequence frames superimposed on the corresponding coronal OCT slices for immediate multidimensional comparison, is also included. OSA ISP software is employed to illustrate the synergy between the simultaneously provided perspectives. This synergy promotes the interpretation of information by enhancing diagnostic comparisons, and it facilitates internal correction of movement artifacts within C-scan and B-scan OCT images using information provided by the SLO channel.
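    The abstract does not specify how the SLO channel drives the movement-artifact correction. A common generic approach, shown as a sketch below rather than as the authors' implementation, is to estimate frame-to-frame eye motion by cross-correlating each SLO frame against a reference frame and to apply the same translation to the simultaneously acquired OCT slice, since the two channels are pixel-synchronous in such systems.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def correct_oct_motion(slo_frames, oct_slices):
    """Align OCT slices using eye motion estimated from the SLO channel.

    slo_frames, oct_slices: arrays of shape (n_frames, rows, cols),
    acquired pixel-synchronously so one shift applies to both.
    Only translational motion is handled in this sketch.
    """
    reference = slo_frames[0]
    corrected = []
    for slo, oct_im in zip(slo_frames, oct_slices):
        # Sub-pixel shift that registers this SLO frame onto the reference.
        offset, _, _ = phase_cross_correlation(reference, slo, upsample_factor=10)
        # Apply the same shift to the matching OCT slice.
        corrected.append(nd_shift(oct_im, offset))
    return np.stack(corrected)
```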

    Comparison on performance of adaptive algorithms for eye blinks removal in electroencephalogram

    The interference of eye blink artifacts can cause serious distortion of the electroencephalogram (EEG), which could bias signal interpretation and reduce classification accuracy in a brain-computer interface (BCI) application. To overcome this problem, an algorithm that automatically detects and removes the artifacts from EEG signals is highly desirable. One of the methods that can be applied for automatic artifact removal is adaptive filtering through an adaptive noise cancellation (ANC) system. In this paper, we compare the performance of three adaptive algorithms, namely LMS, RLS, and ANFIS, in removing eye blinks from EEG signals. To evaluate the results, SNR, MSE, and correlation coefficient values are calculated with reference to the results obtained using one of the most widely used methods for blink removal, independent component analysis (ICA). The results show that the RLS algorithm provides the best performance when compared with the ICA method.
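    For context, an adaptive noise canceller passes a reference signal that is correlated with the artifact (typically a vertical EOG channel) through an adaptive filter and subtracts the filter output from the contaminated EEG; the weights are updated so that the subtraction error, which is the cleaned EEG, is minimized. The sketch below implements the LMS variant of this structure in Python; the filter length, step size, and channel names are illustrative assumptions, and the RLS and ANFIS variants compared in the paper replace only the weight-update rule.

```python
import numpy as np

def lms_anc(primary_eeg, reference_eog, n_taps=5, mu=0.01):
    """LMS adaptive noise cancellation for eye blink removal.

    primary_eeg:   EEG channel contaminated by blink artifacts
    reference_eog: reference channel dominated by the artifact (e.g., VEOG)
    Returns (cleaned_eeg, estimated_artifact).
    """
    n = len(primary_eeg)
    w = np.zeros(n_taps)                          # adaptive filter weights
    cleaned = np.zeros(n)
    artifact = np.zeros(n)
    for k in range(n_taps, n):
        x = reference_eog[k - n_taps:k][::-1]     # most recent samples first
        artifact[k] = w @ x                       # filter output = artifact estimate
        cleaned[k] = primary_eeg[k] - artifact[k] # error signal = cleaned EEG
        w += 2.0 * mu * cleaned[k] * x            # LMS weight update
    return cleaned, artifact
```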

    Predictive learning, prediction errors, and attention: evidence from event-related potentials and eye tracking

    Prediction error ("surprise") affects the rate of learning: we learn more rapidly about cues for which we initially make incorrect predictions than about cues for which our initial predictions are correct. The current studies employ electrophysiological measures to reveal early attentional differentiation of events that differ in their previous involvement in errors of predictive judgment. Error-related events attract more attention, as evidenced by features of event-related scalp potentials previously implicated in selective visual attention (selection negativity, augmented anterior N1). The earliest differences detected occurred around 120 msec after stimulus onset, and distributed source localization (LORETA) indicated that the inferior temporal regions were one source of these earliest differences. In addition, stimuli associated with the production of prediction errors show higher dwell times in an eye-tracking procedure. Our data support the view that early attentional processes play a role in human associative learning.

    The Measurement of Eye Movements in Mild Traumatic Brain Injury: A Structured Review of an Emerging Area

    Mild traumatic brain injury (mTBI), or concussion, occurs following a direct or indirect force to the head that causes a change in brain function. Many neurological signs and symptoms of mTBI can be subtle and transient, and some, such as balance, cognitive, or sensory disturbances that may predispose to further injury, can persist beyond the usual recovery timeframe. There are currently no accepted definition or diagnostic criteria for mTBI, and therefore no single assessment has been developed or accepted as being able to identify those with an mTBI. Eye-movement assessment may be useful, as specific eye movements and their metrics can be attributed to specific brain regions or functions, and eye movement involves a multitude of brain regions. Recently, research has focused on quantitative eye-movement assessment using eye-tracking technology for diagnosing and monitoring symptoms of an mTBI. However, the approaches taken to objectively measure eye movements vary with respect to instrumentation, protocols, and recognition of factors that may influence results, such as cognitive function or basic visual function. This review aimed to examine previous work that has measured eye movements in those with mTBI, in order to inform the development of robust or standardized testing protocols. The Medline/PubMed, CINAHL, PsychInfo, and Scopus databases were searched. Twenty-two articles, examining saccades, smooth pursuit, fixations, and nystagmus in mTBI compared to controls, met the inclusion/exclusion criteria and were reviewed. Current methodologies for data collection, analysis, and interpretation from eye-tracking technology in individuals following an mTBI are discussed. In brief, a wide range of eye-movement instruments and outcome measures were reported, but the validity and reliability of devices and metrics were insufficiently reported across studies. Interpretation of outcomes was complicated by poor reporting of demographics and mTBI-related features (e.g., time since injury), and few studies considered the influence that cognitive or visual functions may have on eye movements. The reviewed evidence suggests that eye movements are impaired in mTBI, but future research is required to establish these findings accurately and robustly. Standardization and reporting of eye-movement instruments, data collection procedures, processing algorithms, and analysis methods are required. Recommendations also include comprehensive reporting of demographics, mTBI-related features, and confounding variables.

    Image quality evaluation of projection- and depth dose-based approaches to integrating proton radiography using a monolithic scintillator detector

    The purpose of this study is to compare the image quality of an integrating proton radiography (PR) system, composed of a monolithic scintillator and two digital cameras, using integral lateral-dose and integral depth-dose image reconstruction techniques. Monte Carlo simulations were used to obtain the energy deposition in a 3D monolithic scintillator detector (a 30 × 30 × 30 cm³ polyvinyl toluene organic scintillator) to create radiographs of various phantoms: a slanted aluminum cube for spatial resolution analysis and a Las Vegas phantom for contrast analysis. The light emission of the scintillator was corrected using the Birks scintillation model. We compared two integrating PR methods with the expected results from an idealized proton tracking radiography system. Four different image reconstruction methods were used in this study: integral scintillation light projected from the beam's-eye view, depth-dose-based reconstruction both with and without optimization, and single-particle-tracking PR, which provided the reference data. Results showed that a heterogeneity artifact due to medium-interface mismatch was identified in the Las Vegas phantom simulated in air. Spatial resolution was found to be highest for single-event reconstruction. Contrast levels, ranked from best to worst, corresponded to particle tracking, optimized depth-dose, depth-dose, and projection-based image reconstructions. The image quality of a monolithic scintillator integrating PR system was sufficient to warrant further exploration. These results show promise for potential clinical use as radiographic techniques for visualizing internal patient anatomy during proton radiotherapy.
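    The Birks model mentioned above accounts for quenching of scintillation light at high stopping power: the light produced per unit path length is dL/dx = S * (dE/dx) / (1 + kB * (dE/dx)). A minimal sketch of applying this correction to simulated energy-deposition steps is given below; the kB value and array layout are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def birks_light_yield(dedx, S=1.0, kB=0.0126):
    """Scintillation light per unit path length from the Birks model.

    dedx: stopping power dE/dx along particle steps (MeV/cm)
    S:    absolute scintillation efficiency (arbitrary units)
    kB:   Birks constant (cm/MeV); 0.0126 cm/MeV is a commonly quoted
          value for polyvinyl toluene and is only illustrative here.
    """
    return S * dedx / (1.0 + kB * dedx)

# Example: light yield vs. deposited energy for a few simulated steps.
step_length = 0.1                       # cm, illustrative step size
dedx = np.array([5.0, 20.0, 80.0])      # MeV/cm: entrance region to Bragg peak
light = birks_light_yield(dedx) * step_length
energy = dedx * step_length             # MeV deposited in each step
print(light / energy)                   # quenching increases with dE/dx
```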