40 research outputs found

    An EEG-based brain-computer interface for dual task driving detection

    The development of brain-computer interfaces (BCI) for multiple applications has grown extensively in recent years. Since distracted driving is a significant cause of traffic accidents, this study proposes an electroencephalography (EEG)-based BCI system for detecting distracted driving. Removing artifacts and selecting useful brain sources are essential, critical steps in any EEG-based BCI application. In the first model, artifacts are removed and useful brain sources are selected based on independent component analysis (ICA). In the second model, all distracted and concentrated EEG epochs are recognized with a self-organizing map (SOM). The BCI system automatically identified independent components containing artifacts for removal and detected distracted driving through specific brain sources, which were also selected automatically. Based on the selected frontal and left motor components, the accuracy of the proposed system approached approximately 90% for recognizing EEG epochs of distracted and concentrated driving. © 2013
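The ICA stage of the two-model pipeline described above can be sketched roughly as follows. This is an illustrative reconstruction, not the study's code: the mixing matrix, simulated signals, and kurtosis-based flagging of the artifact component are assumptions, and the SOM classification stage is omitted.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Two simulated sources: an ongoing 10 Hz rhythm and a spiky ocular artifact.
t = np.linspace(0, 2, 512)
brain = np.sin(2 * np.pi * 10 * t)
blink = (np.abs(t - 1.0) < 0.05).astype(float)

# Mix the sources into three hypothetical scalp channels.
mixing = np.array([[1.0, 0.4], [0.6, 1.0], [0.8, 0.2]])
eeg = (mixing @ np.vstack([brain, blink])).T          # (samples, channels)

# Unmix with ICA and flag the artifact component automatically:
# blink-like sources are spiky, hence highly kurtotic.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(eeg)                      # (samples, components)
k = kurtosis(sources, axis=0)
artifact = int(np.argmax(k))

# Zero the flagged component and project back to channel space.
sources[:, artifact] = 0.0
cleaned = ica.inverse_transform(sources)              # artifact-suppressed channels
```

In a real pipeline the cleaned channel data, or the retained source activations, would then feed the SOM-based epoch classifier.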

    Removing Ocular Movement Artefacts by a Joint Smoothened Subspace Estimator

    To cope with the severe masking of background cerebral activity in the electroencephalogram (EEG) by ocular movement artefacts, we present a method which combines lower-order, short-term and higher-order, long-term statistics. The joint smoothened subspace estimator (JSSE) calculates the joint information in both statistical models, subject to the constraint that the resulting estimated source should be sufficiently smooth in the time domain (i.e., have a large autocorrelation or self-predictive power). It is shown that the JSSE is able to estimate a component from simulated data whose artefact suppression is superior to that of the FastICA, SOBI, pSVD, or JADE/COM1 algorithms used for blind source separation (BSS). Interference and distortion suppression are of comparable order when compared with the above-mentioned methods. Results on patient data demonstrate that the method is able to suppress blinking and saccade artefacts in a fully automated way.
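The smoothness constraint at the heart of the JSSE (a large autocorrelation, or self-predictive power) can be illustrated with a minimal sketch. The candidate signals and the lag-1 score below are assumptions for illustration, not the actual estimator:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation: near 1 for smooth, slowly varying signals,
    near 0 for white noise."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
candidates = [
    np.sin(2 * np.pi * 2 * t),      # smooth, slowly varying estimate
    rng.standard_normal(1000),      # noisy, unsmooth estimate
]

scores = [lag1_autocorr(c) for c in candidates]
best = int(np.argmax(scores))       # the smooth candidate scores highest
```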

    Validating and improving the correction of ocular artifacts in electro-encephalography

    For modern applications of electro-encephalography, including brain-computer interfaces and single-trial event-related potential detection, it is becoming increasingly important that artifacts are accurately removed from a recorded electro-encephalogram (EEG) without affecting the part of the EEG that reflects cerebral activity. Ocular artifacts are caused by movement of the eyes and the eyelids. They occur frequently in the raw EEG and are often the most prominent artifacts in EEG recordings. Their accurate removal is therefore an important procedure in nearly all electro-encephalographic research. As a result, a considerable number of ocular artifact correction methods have been introduced over the past decades. A selection of these methods, containing some of the most frequently used correction methods, is given in Section 1.5. When two different correction methods are applied to the same raw EEG, the result is usually two different corrected EEGs. A measure for the accuracy of correction should indicate how well each of these corrected EEGs recovers the part of the raw EEG that truly reflects cerebral activity. The fact that this accuracy cannot be determined directly from a raw EEG is intrinsic to the need for artifact removal: if it were possible to derive from a raw EEG an exact reference for what the corrected EEG should be, there would be no need for artifact correction methods at all. Estimating the accuracy of correction methods is mostly done either by using models to simulate EEGs and artifacts, or by manipulating the experimental data in such a way that the effects of artifacts on the raw EEG can be isolated. In this thesis, modeling of EEG and artifact is used to validate correction methods on simulated data. A new correction method is introduced which, unlike all existing methods, uses a camera to monitor eye(lid) movements as a basis for ocular artifact correction.
The simulated data are used to estimate the accuracy of this new correction method and to compare it against the estimated accuracy of existing correction methods. The results of this comparison suggest that the new method significantly increases correction accuracy compared to the other methods. Next, an experiment is performed based on which the accuracy of correction can be estimated on raw EEGs. Results on these experimental data agree very well with the results on the simulated data. It is therefore concluded that using a camera during EEG recordings provides valuable extra information that can be used in the process of ocular artifact correction. In Chapter 2, a model is introduced that assists in estimating the accuracy of eye movement artifact correction for simulated EEG recordings. This model simulates EEG and eye movement artifacts simultaneously. For this, the model uses a realistic representation of the head, multiple dipoles to model cerebral and ocular electrical activity, and the boundary element method to calculate changes in electrical potential at different positions on the scalp. With the model, it is possible to simulate different data sets as if they were recorded using different electrode configurations. Signal-to-noise ratios, determined before and after correction, are used to assess the accuracy of six different correction methods for various electrode configurations. Results show that out of the six methods, second-order blind identification (SOBI) and multiple linear regression (MLR) correct most accurately overall, as they achieve the highest rise in signal-to-noise ratio. The occurrence of ocular artifacts is linked to changes in eyeball orientation. In Chapter 2 an eye tracker is used to record pupil position, which is closely linked to eyeball orientation. The pupil position information is used in the model to simulate eye movements.
Recognizing the potential benefit of using an eye tracker not only for simulations but also for correction, Chapter 3 introduces an eye movement artifact correction method that exploits the pupil position information provided by an eye tracker. Other correction methods use the electrooculogram (EOG) and/or the EEG to estimate ocular artifacts. Because both the EEG and the EOG recordings are susceptible to cerebral as well as ocular activity, these other methods risk overcorrecting the raw EEG. Pupil position information provides a reference that is linked to the ocular artifact in the EEG but cannot be affected by cerebral activity; as a result, the new correction method avoids traditionally problematic issues such as forward/backward propagation and evaluating the accuracy of component extraction. Using both simulated and experimental data, it is determined how pupil position influences the raw EEG, and this relation is found to be either linear or quadratic. A Kalman filter is used to tune the parameters that specify the relation. On simulated data, the new method performs very well, resulting in an SNR after correction of over 10 dB for various patterns of eye movements. Compared to the three methods that performed best in the evaluation of Chapter 2, only SOBI shows similar results, and only for some of the eye movement patterns. However, a serious limitation of the correction method is its inability to correct blink artifacts. To increase the variety of applications for which the new method can be used, it should be improved in a way that enables it to correct the raw EEG for blinking artifacts as well. Chapter 4 deals with implementing such improvements, based on the idea that a more advanced eye tracker should be able to detect both the pupil position and the eyelid position.
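The linear-or-quadratic relation between pupil position and the ocular artifact can be sketched as follows. This toy example fits the relation by ordinary least squares rather than the Kalman filter used in the thesis, and all signals and coefficients are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
pupil = rng.uniform(-1, 1, 2000)                   # horizontal pupil position
artifact = 3.0 * pupil + 1.5 * pupil ** 2          # assumed true relation
eeg = artifact + 0.1 * rng.standard_normal(2000)   # raw EEG = artifact + cerebral background

# Fit the linear-plus-quadratic relation by ordinary least squares.
X = np.column_stack([np.ones_like(pupil), pupil, pupil ** 2])
coef, *_ = np.linalg.lstsq(X, eeg, rcond=None)

# Subtract the estimated ocular contribution from the raw EEG.
corrected = eeg - X @ coef
```

Because the pupil reference contains no cerebral activity, the subtraction cannot remove genuine EEG, which is the key advantage claimed for the camera-based approach.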
The improved eye tracker-based ocular artifact correction method is named EYE. Driven by some practical limitations of the eye tracking device currently available to us, an alternative way to estimate eyelid position is suggested, based on an EOG recorded above one eye. The EYE method can be used with either the eye tracker information or the EOG substitute. On simulated data, the accuracy of the EYE method is estimated using the EOG-based eyelid reference. This accuracy is again compared against the six other correction methods. Two different SNR-based measures of accuracy are proposed: one quantifies the correction of the entire simulated data set, and the other focuses on those segments containing simulated blinking artifacts. After applying EYE, an average SNR of at least 9 dB is achieved for both measures. This implies that the power of the corrected signal is at least eight times the power of the remaining noise. The simulated data sets contain a wide range of eye movements and blink frequencies. For almost all of these data sets, 16 out of 20, the correction results for EYE are better than for any of the other evaluated correction methods. On experimental data, the EYE method appears to adequately correct for ocular artifacts as well. As the detection of eyelid position from the EOG is in principle inferior to detection with an eye tracker, these results should also be considered an indicator of the even higher accuracies that could be obtained with a more advanced eye tracker. Considering the simplicity of the MLR method, it also performs remarkably well, which may explain why EOG-based regression is still often used for correction. In Chapter 5, the simulation model of Chapter 2 is put aside and, alternatively, experimentally recorded data are manipulated in a way that highlights correction inaccuracies.
Correction accuracies of eight correction methods, including EYE, are estimated based on data recorded during stop-signal tasks. In the analysis of these tasks it is essential that ocular artifacts are adequately removed, because the task-related ERPs are low-amplitude and located mostly at frontal electrode positions. These data are corrected and subsequently evaluated. For the eight methods, the overall ranking of estimated accuracy in Figure 5.3 corresponds very well with the correction accuracy of these methods on simulated data as found in Chapter 4. In a single-trial correction comparison, results suggest that the EYE-corrected EEG is not susceptible to overcorrection, whereas the other corrected EEGs are.
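The SNR-based accuracy measure used throughout these comparisons can be sketched for simulated data, where the clean EEG is known and the residual after correction can be treated as noise; the signals and noise levels below are assumptions:

```python
import numpy as np

def snr_db(clean, corrected):
    """SNR in dB: power of the known clean EEG over power of the residual."""
    noise = corrected - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(3)
clean = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 500))  # known simulated EEG
good = clean + 0.05 * rng.standard_normal(500)           # accurate correction
poor = clean + 0.50 * rng.standard_normal(500)           # residual artifact left in

snr_good = snr_db(clean, good)   # high: correction recovered the clean EEG
snr_poor = snr_db(clean, poor)   # low: much artifact power remains
```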

    Recent Applications in Graph Theory

    Graph theory, a rigorously investigated field of combinatorial mathematics, is adopted by a wide variety of disciplines addressing a plethora of real-world applications. Advances in graph algorithms and software implementations have made graph theory accessible to a larger community of interest. Ever-increasing interest in machine learning and model deployments for network data demands a coherent selection of topics, rewarded here with a fresh, up-to-date summary of the theory and fruitful applications to probe further. This volume is a small yet unique contribution to graph theory applications and modeling with graphs. The subjects discussed include information hiding using graphs, dynamic graph-based systems to model and control cyber-physical systems, graph reconstruction, average distance neighborhood graphs, and pure and mixed-integer linear programming formulations to cluster networks.

    Brain-Computer Interface

    Brain-computer interfacing (BCI) with the use of advanced artificial intelligence identification is a rapidly growing new technology that allows a silently commanding brain to manipulate devices, ranging from smartphones to advanced articulated robotic arms, when physical control is not possible. BCI can be viewed as a collaboration between the brain and a device via the direct passage of electrical signals from neurons to an external system. The book provides a comprehensive summary of conventional and novel methods for processing brain signals. The chapters cover a range of topics including noninvasive and invasive signal acquisition, signal processing methods, deep learning approaches, and implementation of BCI in experimental problems.

    Real-time noise cancellation with deep learning

    Biological measurements are often contaminated with large amounts of non-stationary noise, which requires effective noise reduction techniques. We present a new real-time deep learning algorithm which adaptively produces a signal opposing the noise so that destructive interference occurs. As a proof of concept, we demonstrate the algorithm's performance by reducing electromyogram noise in electroencephalograms using a custom, flexible, 3D-printed compound electrode. With this setup, an average of 4 dB and a maximum of 10 dB improvement in the signal-to-noise ratio of the EEG was achieved by removing wide-band muscle noise. This concept has the potential not only to adaptively improve the signal-to-noise ratio of EEG, but can also be applied to a wide range of biological, industrial and consumer applications, such as industrial sensing or noise-cancelling headphones.
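The idea of adaptively producing a signal that opposes the noise can be illustrated with a classical LMS adaptive noise canceller standing in for the paper's deep learning algorithm; the noise reference channel, filter length, and step size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, fs = 5000, 250
eeg = np.sin(2 * np.pi * 10 * np.arange(n) / fs)     # clean 10 Hz EEG
ref = rng.standard_normal(n)                         # noise reference channel
noise = np.convolve(ref, [0.8, 0.3], mode="same")    # noise as seen at the electrode
measured = eeg + noise

taps, mu = 4, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    x = ref[i - taps + 1:i + 1][::-1]   # most recent reference samples
    anti = w @ x                        # adaptively generated anti-noise
    out[i] = measured[i] - anti         # destructive interference
    w += mu * out[i] * x                # LMS weight update

# After convergence, the output is much closer to the clean EEG.
err_before = np.mean((measured[-1000:] - eeg[-1000:]) ** 2)
err_after = np.mean((out[-1000:] - eeg[-1000:]) ** 2)
```

A deep network replaces the linear filter when the noise path is non-linear and non-stationary, which is the regime the paper targets.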

    Development of an Electroencephalography-Based Brain-Computer Interface Supporting Two-Dimensional Cursor Control

    This study aims to explore whether human intentions to move or cease to move the right and left hands can be decoded from spatiotemporal features in non-invasive electroencephalography (EEG), in order to control a discrete two-dimensional cursor movement for a potential multi-dimensional brain-computer interface (BCI). Five naïve subjects performed either sustained or ceased motor tasks, time-locked to a predefined time window, using motor execution with physical movement or motor imagery. Spatial filtering, temporal filtering, feature selection and classification methods were explored. The performance of the proposed BCI was evaluated by both offline classification and online two-dimensional cursor control. Event-related desynchronization (ERD) and post-movement event-related synchronization (ERS) were observed on the hemisphere contralateral to the moved hand for both motor execution and motor imagery. Feature analysis showed that EEG beta-band activity over the motor cortex of the contralateral hemisphere provided the best detection of either sustained or ceased movement of the right or left hand. The offline classification of four motor tasks (sustain or cease to move the right or left hand) provided 10-fold cross-validation accuracy as high as 88% for motor execution and 73% for motor imagery. The subjects participating in experiments with physical movement were able to complete the online game with motor execution at an average accuracy of 85.5 ± 4.65%; subjects participating in the motor imagery study also completed the game successfully. The proposed BCI provides a new practical multi-dimensional method driven by noninvasive EEG signals associated with natural human behavior, and does not require long-term training.
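The beta-band power feature that the analysis found most informative can be sketched as follows; the sampling rate, filter design, and simulated ERD signals are assumptions, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                   # assumed sampling rate (Hz)
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")

def beta_log_power(trial):
    """Log power of one single-channel trial in the 13-30 Hz beta band."""
    beta = filtfilt(b, a, trial)
    return float(np.log(np.mean(beta ** 2)))

rng = np.random.default_rng(5)
t = np.arange(2 * fs) / fs
# Simulated ERD: movement suppresses the 20 Hz rhythm relative to rest.
rest = np.sin(2 * np.pi * 20 * t) + 0.2 * rng.standard_normal(t.size)
move = 0.3 * np.sin(2 * np.pi * 20 * t) + 0.2 * rng.standard_normal(t.size)

features = [beta_log_power(rest), beta_log_power(move)]  # rest > move
```

A classifier over such per-channel features, computed on contralateral motor-cortex electrodes, is one plausible way to detect sustained versus ceased movement.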

    C-Trend parameters and possibilities of federated learning

    Abstract. In this observational study, federated learning, a cutting-edge approach to machine learning, was applied to one of the parameters provided by the C-Trend Technology developed by Cerenion Oy. The aim was to compare the performance of federated learning to that of conventional machine learning. Additionally, the potential of federated learning for resolving the privacy concerns that prevent machine learning from realizing its full potential in the medical field was explored. Federated learning was applied to the machine learning of the burst-suppression ratio and compared to conventional machine learning of the burst-suppression ratio calculated on the same dataset. A suitable aggregation method was developed and used in the updating of the global model. The performance metrics were compared, and a descriptive analysis including box plots and histograms was conducted. As anticipated, towards the end of the training, the performance of federated learning was able to approach that of conventional machine learning. The strategy can be regarded as valid because the performance metric values remained below the set test criterion levels. With this strategy, we will potentially be able to make use of data that would normally be kept confidential and, as we gain access to more data, eventually develop machine learning models that perform better. Federated learning has some great advantages, and utilizing it in the context of machine learning on qEEGs could potentially lead to models which reach better performance by receiving data from multiple institutions without the difficulties of privacy restrictions. Some possible future directions include implementation on heterogeneous data and on larger data volumes.
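The global-model update can be sketched with generic FedAvg-style weighted averaging; the thesis develops its own aggregation method, so the function, client weights, and dataset sizes below are illustrative assumptions:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors, weighted by local data size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)               # (clients, params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Three institutions train locally on private data of different sizes;
# only the resulting weights, never the data, are sent for aggregation.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]

global_weights = fed_avg(clients, sizes)   # → array([3., 4.])
```

Because only parameters cross institutional boundaries, the raw qEEG data can remain confidential, which is the privacy advantage the study explores.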

    Magnetoencephalography

    This is a practical book on MEG that covers a wide range of topics. The book begins with a series of reviews on the use of MEG for clinical applications, the study of cognitive functions in various diseases, and one chapter focusing specifically on studies of memory with MEG. There are sections with chapters that describe source localization issues, the use of beamformers and dipole source methods, as well as phase-based analyses, and a step-by-step guide to using dipoles for epilepsy spike analyses. The book ends with a section describing new innovations in MEG systems, namely an on-line real-time MEG data acquisition system, novel applications for MEG research, and a proposal for a helium re-circulation system. With such breadth of topics, there will be a chapter that is of interest to every MEG researcher or clinician.