
    Event Fixation Related Potential During Visual Emotion Stimulation

    This diploma thesis is part of an ongoing research project, carried out in collaboration with Gipsa-lab, on a joint technique combining eye fixations and EEG. The goal of this work is to find and describe the relationship between eye fixations on an emotionally charged stimulus, a face expressing an emotion in a static image or a video, and the EEG signal. For this study, software tools need to be developed in Matlab to adjust and process the fixation data obtained from an eye tracker and to connect them to the EEG signals through newly created markers. Based on the obtained information about the fixations, the EEG data are processed in BrainVision Analyzer and then segmented and averaged as evoked potentials for each stimulus (ERPs and EfRPs).
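The marker-based segmentation and averaging step described above can be sketched in a few lines. This is an illustrative outline only (the thesis uses Matlab and BrainVision Analyzer, not Python); the function name, window lengths, and baseline-correction choice are assumptions:

```python
import numpy as np

def efrp_average(eeg, fixation_onsets, fs, pre=0.2, post=0.6):
    """Average fixation-locked EEG epochs into an EfRP estimate.

    eeg             : 1-D array of one EEG channel
    fixation_onsets : sample indices of fixation-onset markers
    fs              : sampling rate in Hz
    pre, post       : epoch window around each marker, in seconds
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in fixation_onsets:
        # Keep only epochs that fit entirely inside the recording
        if onset - n_pre >= 0 and onset + n_post <= len(eeg):
            seg = eeg[onset - n_pre:onset + n_post]
            # Baseline-correct using the pre-fixation interval
            epochs.append(seg - seg[:n_pre].mean())
    return np.mean(epochs, axis=0)
```

The same routine serves for stimulus-locked ERPs by passing stimulus-onset markers instead of fixation onsets.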

    A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    EEG signals contain a large number of ocular artifacts with different time-frequency properties mixed into the EEG of interest. Artifact removal has been substantially addressed by existing decomposition methods, such as PCA and ICA, which rely on the orthogonality of signal vectors or the statistical independence of signal components. We focused on signal morphology and proposed a systematic decomposition method that identifies the type of each signal component on the basis of its sparsity in the time-frequency domain, using Morphological Component Analysis (MCA), which provides accurate reconstruction by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals, and the best combination of dictionaries for this purpose was clarified. In our proposed semirealistic biological signal analysis with iEEGs recorded intracranially from the brain, the signals were successfully decomposed into their original types by a linear expansion of waveforms over redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our results demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities, respectively, as the representative types of EEG morphology.
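The core idea, representing each morphological component sparsely in its own dictionary and separating the components by alternating thresholding, can be illustrated with a minimal two-dictionary sketch. This is not the paper's implementation: a Fourier basis stands in for the smooth dictionaries (UDWT/DST), DIRAC is the identity, and the threshold schedule and function name are assumptions:

```python
import numpy as np

def mca_split(x, n_iter=30, floor_frac=0.05):
    """MCA-style split of x into a 'smooth' part (sparse under a Fourier
    dictionary) and a 'spike' part (sparse under the DIRAC/identity
    dictionary), via alternating hard thresholding with a linearly
    decreasing threshold."""
    n = len(x)
    spikes = np.zeros(n)
    lam_f_max = np.abs(np.fft.rfft(x)).max()  # Fourier-domain scale
    lam_t_max = np.abs(x).max()               # time-domain scale
    for k in range(n_iter):
        frac = max(1.0 - k / n_iter, floor_frac)
        # Update the smooth component: threshold Fourier coefficients
        C = np.fft.rfft(x - spikes)
        C[np.abs(C) < frac * lam_f_max] = 0.0
        smooth = np.fft.irfft(C, n)
        # Update the spike component: threshold in the time domain
        r = x - smooth
        spikes = np.where(np.abs(r) > frac * lam_t_max, r, 0.0)
    return smooth, spikes
```

On a sinusoid with an added spike, the two morphologies end up in separate outputs, which is the behaviour the paper exploits to isolate baseline envelopes from spiking activity.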

    Identification of audio evoked response potentials in ambulatory EEG data

    Electroencephalography (EEG) is commonly used for observing brain function over a period of time. It employs a set of non-invasive electrodes on the scalp to measure the electrical activity of the brain. EEG is mainly used by researchers and clinicians to study the brain's responses to a specific stimulus: the event-related potentials (ERPs). Different types of undesirable signals, known as artefacts, contaminate the EEG signal. EEG and ERP signals are very small (on the order of microvolts); they are often obscured by artefacts with much larger amplitudes, on the order of millivolts. This greatly increases the difficulty of interpreting EEG and ERP signals. Typically, ERPs are observed by averaging EEG measurements made over many repetitions of the stimulus. The average may require many tens of repetitions before the ERP signal can be observed with any confidence, which greatly limits the study and use of ERPs. This project explores more sophisticated methods of ERP estimation from measured EEGs. An Optimal Weighted Mean (OWM) method is developed that forms a weighted average to maximise the signal-to-noise ratio of the mean. This is developed further into a Bayesian Optimal Combining (BOC) method, where the information in repeated ERP measurements is combined to provide a sequence of ERP estimates with monotonically decreasing uncertainty. A Principal Component Analysis (PCA) is performed to identify the basis of signals that explains the greatest amount of ERP variation. Projecting measured EEG signals onto this basis greatly reduces the noise in measured ERPs. The PCA filtering can be followed by OWM or BOC. Finally, cross-channel information can be used: the ERP signal is measured on many electrodes simultaneously, and an improved estimate can be formed by combining electrode measurements.
A MAP estimate, phrased in terms of Kalman filtering, is developed using all electrode measurements. The methods developed in this project have been evaluated using both synthetic and measured EEG data. A synthetic, multi-channel ERP simulator has been developed specifically for this project. Numerical experiments on synthetic ERP data showed that Bayesian Optimal Combining of trial data, filtered using a combination of PCA projection and Kalman filtering, yielded the best estimates of the underlying ERP signal. This method has been applied to subsets of real Ambulatory Electroencephalography (AEEG) data, recorded while participants performed a range of activities in different environments. From this analysis, the number of trials that need to be collected to observe the P300 amplitude and delay has been calculated for a range of scenarios.
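The idea behind SNR-maximising trial weighting can be sketched as inverse-variance weighting of the repetitions. This is a generic illustration, not the thesis' exact OWM formulation; the per-trial noise-variance estimate and function name are assumptions:

```python
import numpy as np

def optimal_weighted_mean(trials):
    """Combine repeated ERP trials with inverse-noise-variance weights.

    trials : (n_trials, n_samples) array of epoched EEG.
    Each trial's noise variance is estimated from its deviation around
    the plain mean; weights w_i proportional to 1/var_i are then
    normalised to sum to one.
    """
    trials = np.asarray(trials, dtype=float)
    plain = trials.mean(axis=0)
    # Per-trial noise variance, estimated against the ensemble mean
    noise_var = ((trials - plain) ** 2).mean(axis=1)
    w = 1.0 / (noise_var + 1e-12)
    w /= w.sum()
    return w @ trials
```

When some trials are much noisier than others (as in ambulatory recordings), down-weighting them yields a cleaner ERP estimate than the plain average.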

    Toward the real time estimation of the attentional state through ocular activity analysis

    The analysis of aerospace incidents and laboratory experiments has shown that attentional tunneling leads pilots to neglect critical alarms. One interesting avenue for dealing with this issue is adaptive systems that would assist the operator in real time (for instance, by switching the autopilot mode). Such adaptive systems require the operator's state as an input; therefore, both attentional tunneling metrics and state-inference techniques have to be proposed. The goal of this PhD thesis is to show that attentional tunneling can be detected in real time. To this end, an adaptive neuro-fuzzy inference method using attentional tunneling metrics is proposed, together with new attentional tunneling metrics that are computable in real time and independent of the operator's context. The Eye State Identification Algorithm (ESIA), which analyses ocular activity, is proposed for this purpose. Metrics are then derived from it and tested on a robotic experiment designed to favour attentional tunneling. We also propose a new definition of the information explore/exploit ratio, which is statistically shown to be a relevant marker of attentional tunneling. This work is then discussed and applied to different case studies in aviation and robotics.
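The exploitation/exploration distinction can be illustrated by a toy velocity-based labelling of gaze samples. This is a hypothetical sketch, deliberately not the thesis' new definition of the ratio (which the abstract does not reproduce); the function name and velocity threshold are assumptions:

```python
import numpy as np

def explore_exploit_ratio(gaze_xy, fs, sacc_thresh_deg_s=30.0):
    """Illustrative explore/exploit ratio from gaze samples.

    gaze_xy : (n, 2) gaze positions in degrees of visual angle
    fs      : sampling rate in Hz

    Samples moving faster than the threshold are labelled 'explore'
    (saccadic search for information); slower samples are labelled
    'exploit' (fixational processing). Returns the count ratio.
    """
    # Point-to-point angular velocity in deg/s
    v = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs
    explore = np.count_nonzero(v > sacc_thresh_deg_s)
    exploit = np.count_nonzero(v <= sacc_thresh_deg_s)
    return explore / max(exploit, 1)
```

Under attentional tunneling, long dwells on one area of interest would drive such a ratio down, which is the intuition behind using it as a marker.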

    Work, aging, mental fatigue, and eye movement dynamics


    ON THE INTERPLAY BETWEEN BRAIN-COMPUTER INTERFACES AND MACHINE LEARNING ALGORITHMS: A SYSTEMS PERSPECTIVE

    Today, computer algorithms use traditional human-computer interfaces (e.g., keyboard, mouse, gestures) to interact with and extend human capabilities across all knowledge domains, allowing humans to make complex decisions underpinned by massive datasets and machine learning. Machine learning has seen remarkable success in the past decade in obtaining deep insights and recognizing unknown patterns in complex data sets, in part by emulating how the brain performs certain computations. As we increase our understanding of the human brain, brain-computer interfaces can benefit from the power of machine learning, both as an underlying model of how the brain performs computations and as a tool for processing high-dimensional brain recordings. The technology (machine learning) has come full circle and is being applied back to understanding the brain and the electrical traces of brain activity over the scalp (EEG). At the same time, domains such as natural language processing, machine translation, and scene understanding remain beyond the reach of fully autonomous machine learning algorithms and require human participation to be solved. In this work, we investigate the interplay between brain-computer interfaces and machine learning through the lens of end-user usability. Specifically, we propose systems and algorithms to enable synergistic and user-friendly integration between computers (machine learning) and the human brain (brain-computer interfaces). In this context, we provide our research contributions in two interrelated aspects: (i) applying machine learning to solve challenges with EEG-based BCIs, and (ii) enabling human-assisted machine learning with EEG-based human input and implicit feedback.

    AUTOMATED ARTIFACT REMOVAL AND DETECTION OF MILD COGNITIVE IMPAIRMENT FROM SINGLE CHANNEL ELECTROENCEPHALOGRAPHY SIGNALS FOR REAL-TIME IMPLEMENTATIONS ON WEARABLES

    Electroencephalography (EEG) is a technique for recording the asynchronous activation of neuronal firing inside the brain with non-invasive scalp electrodes. The EEG signal is well studied for evaluating cognitive state and for detecting brain diseases such as epilepsy, dementia, coma, and autism spectrum disorder (ASD). In this dissertation, the EEG signal is studied for the early detection of Mild Cognitive Impairment (MCI). MCI is the preliminary stage of dementia that may ultimately lead to Alzheimer's disease (AD) in elderly people. Our goal is to develop a minimalistic MCI detection system that could be integrated into wearable sensors. This contribution has three major aspects: 1) cleaning the EEG signal, 2) detecting MCI, and 3) predicting the severity of MCI, using data obtained from a single-channel EEG electrode. Artifacts such as eye-blink activity can corrupt EEG signals. We investigate unsupervised and effective removal of ocular artifacts (OA) from single-channel streaming raw EEG data. Wavelet transform (WT) decomposition was systematically evaluated for the effectiveness of OA removal in a single-channel EEG system. The Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) are studied with four WT basis functions: haar, coif3, sym3, and bior4.4. The performance of the artifact removal algorithm was evaluated by correlation coefficients (CC), mutual information (MI), signal-to-artifact ratio (SAR), normalized mean square error (NMSE), and time-frequency analysis. It is demonstrated that the WT can be an effective tool for unsupervised OA removal from single-channel EEG data in real-time applications. For MCI detection from the cleaned EEG data, we collected scalp EEG data while the subjects were stimulated with five auditory speech signals. We extracted 590 features from the Event-Related Potential (ERP) of the collected EEG signals, including time- and spectral-domain characteristics of the response.
The top 25 features, ranked by the random forest method, were used in classification models to identify subjects with MCI. The robustness of our model was tested using leave-one-out cross-validation while training the classifiers. The best results (leave-one-out cross-validation accuracy 87.9%, sensitivity 84.8%, specificity 95%, and F score 85%) were obtained using the support vector machine (SVM) method with a Radial Basis Function (RBF) kernel (sigma = 10, cost = 102). Similar performance was also observed with logistic regression (LR), further validating the results. Our results suggest that single-channel EEG could provide a robust biomarker for the early detection of MCI. We also developed a single-channel EEG-based MCI severity monitoring algorithm that generates Montreal Cognitive Assessment (MoCA) scores from the features extracted from EEG. We performed multi-trial and single-trial analyses for the development of the MCI severity monitoring algorithm. We studied Multivariate Regression (MR), Ensemble Regression (ER), Support Vector Regression (SVR), and Ridge Regression (RR) for the multi-trial analysis, and deep neural regression for the single-trial analysis. In the multi-trial case, the best result was obtained with ER. In our single-trial analysis, we constructed a time-frequency image from each trial and fed it to a convolutional neural network (CNN). The performance of the regression models was evaluated by the RMSE and residual analysis. We obtained the best accuracy with the deep neural regression method.
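A simplified version of wavelet-based ocular-artifact suppression can be sketched with a hand-rolled Haar DWT: decompose, clip unusually large coefficients against a robust threshold, and reconstruct. This is a generic illustration, not the dissertation's DWT/SWT pipeline; the clipping rule, threshold factor, and function names are assumptions, and the input length must be divisible by 2**levels:

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar transform
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail
    return a, d

def haar_idwt(a, d):
    # Exact inverse of haar_dwt
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def remove_ocular_artifact(eeg, levels=4, k=3.0):
    """Suppress large-amplitude wavelet coefficients (blink-like
    transients) with a robust threshold, then reconstruct."""
    details = []
    a = np.asarray(eeg, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)

    def clip(c):
        # Robust sigma from the median absolute coefficient
        sigma = np.median(np.abs(c)) / 0.6745 + 1e-12
        return np.clip(c, -k * sigma, k * sigma)

    a = clip(a)
    for d in reversed(details):
        a = haar_idwt(a, clip(d))
    return a
```

With an effectively infinite threshold the routine reduces to the identity (perfect reconstruction), which makes the decomposition easy to sanity-check.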

    Event Detection in Eye-Tracking Data for Use in Applications with Dynamic Stimuli

    This doctoral thesis has signal processing of eye-tracking data as its main theme. An eye-tracker is a tool used to estimate the point where one is looking. Automatic algorithms for classification of the different types of eye movements, so-called events, form the basis for relating eye-tracking data to cognitive processes during, e.g., reading a text or watching a movie. The problems with the algorithms available today are that few of them can handle event detection during dynamic stimuli and that there is no standardized procedure for evaluating the algorithms. This thesis comprises an introduction and four papers describing methods for detecting the most common types of eye movements in eye-tracking data and strategies for evaluating such methods. The most common types of eye movements are fixations, saccades, and smooth pursuit movements. In addition to these eye movements, post-saccadic oscillations (PSO) are considered. The eye-tracking data in this thesis are recorded using both high- and low-speed eye-trackers. The first paper presents a method for detection of saccades and PSO. The saccades are detected using the acceleration signal and three specialized criteria based on directional information. In order to detect PSO, the interval after each saccade is modeled, and the parameters of the model are used to determine whether PSO are present or not. The algorithm was evaluated by comparing its detection results to manual annotations and to the detection results of the most recent PSO detection algorithm. The results show that the algorithm is in good agreement with the annotations and performs better than the compared algorithm. In the second paper, a method for separating fixations and smooth pursuit movements is proposed.
In the intervals between the detected saccades/PSO, the algorithm uses different spatial scales of the position signal in order to separate the two types of eye movements. The algorithm is evaluated by computing five different performance measures, showing both general and detailed aspects of the discrimination performance. The performance of the algorithm is compared to that of a velocity- and dispersion-based algorithm (I-VDT), to that of an algorithm based on principal component analysis (I-PCA), and to manual annotations by two experts. The results show that the proposed algorithm performs considerably better than the compared algorithms. In the third paper, a method based on eye-tracking signals from both eyes is proposed for improved separation of fixations and smooth pursuit movements. The method utilizes directional clustering of the eye-tracking signals in combination with binary filters that take both temporal and spatial aspects of the eye-tracking signal into account. The performance of the method is evaluated using a novel evaluation strategy based on automatically detected moving objects in the video stimuli. The results show that using binocular information for the separation of fixations and smooth pursuit movements is advantageous in static stimuli, without impairing the algorithm's ability to detect smooth pursuit movements in video and moving-dot stimuli. The first three papers in this thesis are based on eye-tracking signals recorded with a stationary eye-tracker, while the fourth paper uses eye-tracking signals recorded with a mobile eye-tracker. In mobile eye-tracking, the user is allowed to move the head and the body, which affects the recorded data. In the fourth paper, a method for compensating for head movements using an inertial measurement unit (IMU), combined with an event detector for lower-sampling-rate data, is proposed.
The event detection is performed by combining information from the eye-tracking signals with information about objects extracted from the scene video of the mobile eye-tracker. The results show that introducing head-movement compensation and information about detected objects in the scene video into the event detector improves classification. In summary, this thesis proposes an entire methodological framework for robust event detection which performs better than previous methods when analyzing eye-tracking signals recorded during dynamic stimuli, and also provides a methodology for performance evaluation of event detection algorithms.
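A minimal velocity-threshold (I-VT style) detector illustrates the basic shape of such event-detection algorithms. It is a simplified stand-in for the thesis' acceleration-based, direction-aware method; the function name and threshold value are assumptions:

```python
import numpy as np

def detect_saccades(pos, fs, v_thresh=100.0):
    """Velocity-threshold saccade detection (I-VT style sketch).

    pos      : (n, 2) gaze positions in degrees of visual angle
    fs       : sampling rate in Hz
    v_thresh : saccade velocity threshold in deg/s

    Returns a list of (start, end) sample-index intervals in which the
    point-to-point velocity exceeds the threshold.
    """
    v = np.linalg.norm(np.diff(pos, axis=0), axis=1) * fs
    fast = v > v_thresh
    events, start = [], None
    for i, f in enumerate(fast):
        if f and start is None:
            start = i                    # saccade onset
        elif not f and start is not None:
            events.append((start, i))    # saccade offset
            start = None
    if start is not None:
        events.append((start, len(fast)))
    return events
```

The intervals between detected saccades are then the candidate fixation/smooth-pursuit segments that the later papers go on to classify.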

    Electroencephalography (EEG) as a Research Tool in the Information Systems Discipline: Foundations, Measurement, and Applications

    The concept of neuro-information systems (neuroIS) has emerged in the IS discipline recently. Since the field's genesis, several neuroIS papers have been published. Investigating empirical papers published in scientific journals and conference proceedings reveals that electroencephalography (EEG) is a widely used tool. Thus, considering its relevance in contemporary research and the fact that it will also play a major role in future neuroIS research, we describe EEG from a layman's perspective. Because previous EEG descriptions in the neuroIS literature have only scantily outlined the theoretical and methodological aspects of this tool, a more thorough treatment is needed. As such, we inform IS scholars about the fundamentals of EEG in a compact way and discuss EEG's potential for IS research. Based on the knowledge base provided in this paper, IS researchers can make an informed decision about whether EEG could, or should, become part of their toolbox.

    Optimizations and applications in head-mounted video-based eye tracking

    Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This difference vector is mapped to determine an observer's point of regard (POR). Current head-mounted video-based eye trackers are limited in several respects, such as an inadequate measurement range and misdetection of eye features (pupil and CR). This research first proposes a new 'structured illumination' configuration, using multiple IREDs to illuminate the eye, to ensure that eye positions can still be tracked even during extreme eye movements (up to ±45° horizontally and ±25° vertically). Eye features are then detected by a two-stage processing approach. First, potential CRs and the pupil are isolated based on statistical information in an eye image. Second, genuine CRs are distinguished by a novel CR location prediction technique based on the well-correlated relationship between the offset of the pupil and that of the CR. The optical relationship between the pupil and CR offsets derived in this thesis can be applied to two typical illumination configurations, collimated and near-source, in video-based eye tracking systems. The relationship from the optical derivation and that from an experimental measurement match well. Two application studies of smooth pursuit dynamics, in a controlled static (laboratory) environment and in an unconstrained vibrating (car) environment, were conducted.
In the first study, the extended stimuli (color photographs subtending 2° and 17°, respectively) were found to enhance smooth pursuit movements induced by realistic images; the eye velocity for tracking a small dot (subtending <0.1°) saturated at about 64 deg/sec, while saturation occurred at higher velocities for the extended images. The difference in gain due to target size was significant between the dot and the two extended stimuli, while no statistical difference existed between the two extended stimuli. In the second study, two visual stimuli, the same as in the first study, were used. Visual performance was impaired dramatically by the whole-body motion in the car, even when tracking a slowly moving target (2 deg/sec); the eye was found to be unable to perform a pursuit task as smoothly as in the static environment, even though the unconstrained head motion in the unstable condition was expected to enhance visual performance.
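The mapping from pupil-CR difference vectors to the point of regard can be illustrated with an affine calibration fitted by least squares. Practical systems often use higher-order polynomial mappings; the function names and the calibration layout here are assumptions:

```python
import numpy as np

def fit_por_mapping(pupil_cr, screen_xy):
    """Fit an affine map from pupil-CR difference vectors to points of
    regard, using calibration data.

    pupil_cr  : (n, 2) pupil-CR difference vectors
    screen_xy : (n, 2) known screen positions fixated during calibration
    Returns a (3, 2) coefficient matrix for [dx, dy, 1] -> [x, y].
    """
    A = np.column_stack([pupil_cr, np.ones(len(pupil_cr))])
    coef, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coef

def map_por(coef, pupil_cr):
    """Apply a fitted mapping to new pupil-CR difference vectors."""
    A = np.column_stack([pupil_cr, np.ones(len(pupil_cr))])
    return A @ coef
```

A 3x3 grid of calibration targets is enough to determine the six affine parameters with redundancy; with the richer structured-illumination data above, a higher-order fit over the enlarged measurement range would follow the same pattern.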