
    Methods of pattern classification for the design of a NIRS-based brain computer interface.

    Brain-Computer Interface (BCI) is a communication system that offers the possibility to act upon the surrounding environment without using the nervous system's efferent pathways. One of the most important parts of a BCI is the pattern classification system, which translates mental activities into commands for an external device. This work aims at providing new pattern classification methods for the development of a Brain-Computer Interface based on Near-Infrared Spectroscopy (NIRS). To do so, a thorough study of machine learning techniques used for developing BCIs has been conducted

    Mental State Prediction Using Machine Learning and EEG Signal

    One of the most exciting areas of computer science right now is brain-computer interface (BCI) research. A BCI is a conduit for data flow between the brain and an electronic device. Researchers in several disciplines have benefited from the advancements made possible by brain-computer interfaces; primary fields of study include healthcare and neuroergonomics. Brain signals could be used in a variety of ways to improve healthcare at every stage, from diagnosis to rehabilitation to eventual restoration. In this research, we demonstrate how to classify EEG brain-wave signals with machine learning algorithms in order to predict mental states. The XGBoost classifier achieves an accuracy of 99.62%, higher than that of any other study of its kind and the best result to date for diagnosing people's mental states from their EEG signals. This finding will help take efforts [1] to predict mental state from EEG signals to the next level
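
    The pipeline this abstract describes (extract features from raw EEG, then feed them to a classifier) can be sketched compactly. The snippet below is an illustration, not the paper's code: it uses synthetic two-second epochs, classic band-power features, and a simple nearest-centroid rule standing in for the XGBoost classifier; the sampling rate, band edges, and class names are all assumptions.

```python
import numpy as np

FS = 128  # sampling rate in Hz (hypothetical)

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def features(signal, fs=FS):
    # Classic EEG bands: theta (4-8), alpha (8-13), beta (13-30)
    return np.array([band_power(signal, fs, lo, hi)
                     for lo, hi in [(4, 8), (8, 13), (13, 30)]])

rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS  # two-second epochs

def epoch(freq):
    # Synthetic epoch: one dominant oscillation plus noise
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

# "Relaxed" epochs dominated by alpha (10 Hz), "focused" by beta (20 Hz)
train = {lab: np.mean([features(epoch(f)) for _ in range(20)], axis=0)
         for lab, f in [("relaxed", 10.0), ("focused", 20.0)]}

def classify(signal):
    # Nearest centroid in band-power feature space
    f = features(signal)
    return min(train, key=lambda lab: np.linalg.norm(f - train[lab]))

print(classify(epoch(10.0)))  # expected: relaxed
print(classify(epoch(20.0)))  # expected: focused
```

    In a real system the nearest-centroid step would be replaced by the trained gradient-boosted trees, but the feature-extraction stage looks much the same.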

    EEG Based Eye State Classification using Deep Belief Network and Stacked AutoEncoder

    A Brain-Computer Interface (BCI) provides an alternative communication interface between the human brain and a computer. The Electroencephalogram (EEG) signals are acquired and processed, and machine learning algorithms are then applied to extract useful information. During EEG acquisition, artifacts induced by involuntary eye movements and eye blinks cast adverse effects on system performance. The aim of this research is to predict eye states from EEG signals using deep learning architectures and to present improved classifier models. Recent studies show that deep neural networks are the state-of-the-art machine learning approach. Therefore, the current work presents implementations of a Deep Belief Network (DBN) and Stacked AutoEncoders (SAE) as classifiers with encouraging performance. One of the designed SAE models outperforms the DBN and the models presented in existing research, with an error rate of 1.1% on the test set (an accuracy of 98.9%). The findings in this study may contribute toward state-of-the-art performance on the problem of EEG-based eye state classification
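
    The idea behind autoencoder-based classification is to learn a low-dimensional code for the EEG features and classify in that code space. As a self-contained stand-in for a trained stacked autoencoder, the sketch below exploits a known equivalence: a linear autoencoder with squared loss has the top principal components as its optimal bottleneck, so the code can be computed with an SVD instead of iterative training. The data, channel count, and labels are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "EEG feature" data: two eye states, 10 channels
def sample(state, n):
    base = np.where(np.arange(10) < 5, state * 2.0, -state * 1.0)
    return base + 0.5 * rng.standard_normal((n, 10))

X_open, X_closed = sample(1.0, 50), sample(-1.0, 50)
X = np.vstack([X_open, X_closed])
mu = X.mean(axis=0)

# "Encoder": the optimal linear-autoencoder bottleneck spans the top
# principal components, so take the 2-D code from an SVD directly.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:2]

def encode(x):
    return (x - mu) @ W.T

c_open = encode(X_open).mean(axis=0)
c_closed = encode(X_closed).mean(axis=0)

def predict(x):
    # Classify by the nearer class centroid in code space
    z = encode(x)
    return ("open" if np.linalg.norm(z - c_open) < np.linalg.norm(z - c_closed)
            else "closed")

print(predict(sample(1.0, 1)[0]))  # expected: open
```

    A real SAE adds nonlinearities and layer-wise pretraining followed by supervised fine-tuning, but the encode-then-classify structure is the same.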

    Brain-Computer Interfaces using Machine Learning

    This thesis explores machine learning models for the analysis and classification of electroencephalographic (EEG) signals used in Brain-Computer Interface (BCI) systems. The goal is 1) to develop a system that allows users to control home-automation devices using their mind, and 2) to investigate whether it is possible to achieve this using low-cost EEG equipment. The thesis includes both a theoretical and a practical part. In the theoretical part, we overview the underlying principles of Brain-Computer Interface systems, as well as different approaches for the interpretation and classification of brain signals. We also discuss the emergent launch of low-cost EEG equipment on the market and its use beyond clinical research. We then dive into more technical details that involve signal processing and classification of EEG patterns using machine learning. The purpose of the practical part is to create a brain-computer interface that is able to control a smart home environment. As a first step, we investigate the generalizability of different classification methods, conducting a preliminary study on two public datasets of electroencephalographic data. The obtained classification accuracy on 9 different subjects was similar and, in some cases, superior to the reported state of the art. Having achieved relatively good offline classification results during our study, we move on to the last part, designing and implementing an online BCI system in Python. Our system consists of three modules. The first module communicates with the MUSE (a low-cost EEG device) to acquire the EEG signals in real time; the second module processes those signals using machine learning techniques and trains a learning model. The model is used by the third module, which controls cloud-based home-automation devices. Experiments using the MUSE resulted in significantly lower classification results and revealed the limitations of the low-cost EEG signal acquisition device for online BCIs
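
    The three-module architecture described above (acquire, decode, actuate) can be sketched as plain functions. This is a hypothetical skeleton, not the thesis code: the fake acquisition stands in for an LSL stream from the MUSE, a thresholded mean stands in for the trained model, and the action names are invented.

```python
import random

random.seed(0)

# Module 1: acquisition (stand-in for the MUSE/LSL stream)
def acquire_epoch(state):
    # Returns a fake 4-channel sample; a real system would pull
    # timestamped samples from the headset's stream instead.
    noise = [random.gauss(0, 0.1) for _ in range(4)]
    bias = 1.0 if state == "focus" else -1.0
    return [bias + n for n in noise]

# Module 2: decoding (a thresholded mean stands in for the trained model)
def decode(epoch):
    return "focus" if sum(epoch) / len(epoch) > 0 else "relax"

# Module 3: actuation (stand-in for the cloud home-automation call)
ACTIONS = {"focus": "lights_on", "relax": "lights_off"}

def step(state):
    # One pass through the pipeline: acquire -> decode -> actuate
    return ACTIONS[decode(acquire_epoch(state))]

print(step("focus"))  # expected: lights_on
print(step("relax"))  # expected: lights_off
```

    Keeping the three stages behind narrow interfaces like this is what makes it possible to swap the low-cost headset or the classifier without touching the rest of the system.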

    Classification of motor imagery EEG signals using machine learning

    Brain Computer Interface (BCI) is a term first introduced by Jacques Vidal in the 1970s, when he created a system that could determine the direction of a person's eye gaze, and thereby the direction the person wanted to go or move something toward, using scalp-recorded visual evoked potentials (VEP) over the visual cortex. Ever since, many researchers have been captivated by the huge potential and the list of possibilities that could be achieved if a digital machine could simply interpret human thoughts. In this work, we explore electroencephalography (EEG) signal classification, specifically for motor imagery (MI) tasks. Classification of MI tasks can be carried out using machine learning and deep learning models, yet there is a trade-off between accuracy and computation time that needs to be managed. The objective is to create a machine learning model that can be optimized for real-time classification while retaining an acceptable classification accuracy. The proposed model relies on common spatial patterns (CSP) for feature extraction and linear discriminant analysis (LDA) for classification. With a simple pre-processing stage and a proper selection of training data, the model achieved a balanced accuracy above 80% while maintaining a small run time (milliseconds), making it suitable for real-time classification
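
    The CSP-plus-LDA pipeline named in this abstract is standard enough to sketch end to end. The snippet below is an illustrative reimplementation on synthetic trials, not the paper's code: CSP is computed by whitening the composite covariance and diagonalizing one class covariance, features are the log-variances of the two most discriminative components, and a two-class Fisher LDA makes the decision. Channel counts, trial lengths, and class names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def trials(var, n=30, ch=4, T=200):
    # Each trial is (channels, samples); `var` sets per-channel variance
    return rng.standard_normal((n, ch, T)) * np.sqrt(var)[None, :, None]

# Motor-imagery stand-in: the two classes differ in which channel is active
Xa = trials(np.array([4.0, 1.0, 1.0, 1.0]))  # class "left"
Xb = trials(np.array([1.0, 1.0, 1.0, 4.0]))  # class "right"

def mean_cov(X):
    return np.mean([x @ x.T / x.shape[1] for x in X], axis=0)

Ca, Cb = mean_cov(Xa), mean_cov(Xb)

# CSP: whiten the composite covariance, then diagonalize class "a"
d, U = np.linalg.eigh(Ca + Cb)
P = (U / np.sqrt(d)).T               # whitening matrix
_, V = np.linalg.eigh(P @ Ca @ P.T)
W = V.T @ P                          # CSP filters, rows sorted by eigenvalue

def feats(x):
    # Log-variance of the two most discriminative CSP components
    y = W[[0, -1]] @ x
    return np.log(y.var(axis=1))

Fa = np.array([feats(x) for x in Xa])
Fb = np.array([feats(x) for x in Xb])

# Fisher LDA on the 2-D features: w = Sw^-1 (m_a - m_b), midpoint threshold
Sw = np.cov(Fa.T) + np.cov(Fb.T)
w = np.linalg.solve(Sw, Fa.mean(0) - Fb.mean(0))
b = w @ (Fa.mean(0) + Fb.mean(0)) / 2

def predict(x):
    return "left" if w @ feats(x) > b else "right"

print(predict(trials(np.array([4.0, 1.0, 1.0, 1.0]), n=1)[0]))  # expected: left
```

    Both feature extraction and the LDA decision are a handful of matrix products per trial, which is why this combination runs in milliseconds and suits real-time use.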

    An embedding for EEG signals learned using a triplet loss

    Neurophysiological time series recordings like the electroencephalogram (EEG) or local field potentials are obtained from multiple sensors. They can be decoded by machine learning models in order to estimate the ongoing brain state of a patient or healthy user. In a brain-computer interface (BCI), this decoded brain state information can be used with minimal time delay to either control an application, e.g., for communication or for rehabilitation after stroke, or to passively monitor the ongoing brain state of the subject, e.g., in a demanding work environment. A specific challenge in such decoding tasks is posed by the small dataset sizes in BCI compared to other domains of machine learning like computer vision or natural language processing. A possibility to tackle classification or regression problems in BCI despite small training datasets is through transfer learning, which utilizes data from other sessions, subjects or even datasets to train a model. In this exploratory study, we propose novel domain-specific embeddings for neurophysiological data. Our approach is based on metric learning and builds upon the recently proposed ladder loss. Using embeddings allowed us to benefit both from the good generalisation abilities and robustness of deep learning and from the fast training of classical machine learning models for subject-specific calibration. In offline analyses using EEG data of 14 subjects, we tested the embeddings' feasibility and compared their efficiency with state-of-the-art deep learning models and conventional machine learning pipelines. 
    In summary, we propose the use of metric learning to obtain pre-trained embeddings of EEG-BCI data as a means to incorporate domain knowledge and to reach competitive performance on novel subjects with minimal calibration requirements.
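
    Metric-learning objectives of this family are built from terms like the triplet loss, which penalizes an embedding whenever an anchor sits closer to a sample from another class (the negative) than to one from its own class (the positive), by at least a margin. A minimal sketch, with invented 2-D embeddings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, ||a-p|| - ||a-n|| + margin): zero once the anchor is
    closer to the positive than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same-class embedding, nearby
n = np.array([3.0, 0.0])   # other-class embedding, far away

print(triplet_loss(a, p, n))  # 0.0: the margin is already satisfied
print(triplet_loss(a, n, p))  # positive: roles violated, so penalized
```

    The ladder loss the paper builds on generalizes this idea to several ordered margin levels rather than a single positive/negative split.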

    Brain Machine Interface Using Electroencephalograph Data as Control Signals for a Robotic Arm

    Brain machine interface (BMI), also known as brain computer interface (BCI), is a field of research that has been explored to varying degrees throughout the last few decades. Initial research used invasive technology in order to read signals from the human brain; these systems required surgery to connect the subjects to the sensors. Recent trends have moved toward systems that make use of non-invasive physiological sensors such as electroencephalographs (EEG). EEG systems use a number of electrodes to read electrical signals on the scalp caused by brain activity. The patterns generated by certain thoughts can be classified and recognized by a BMI system using machine learning algorithms. These classified patterns can then be encoded as commands to prompt a certain response from a computer or machine. The completed system allows for control of the connected device using thought as the only input. The possible uses for a BMI system are as varied as the designs of computer programs and computer-controlled devices. One of the most noteworthy applications of BMIs is in the field of medicine. BMIs offer tools for the disabled to interact with the world, even if they suffer from severe nerve damage between their brain and limbs. In the case of a lost or paralyzed limb, BMIs offer the potential for patients to use a robotic limb, controlled with their natural thought patterns, to interact with the world. BMIs also offer potential modes of communication for patients who have no other way to convey their thoughts. With these applications in mind, this research focuses on control of a robotic arm using a 14-electrode EEG headset. Both pure EEG signals and electromyography (EMG) signals are encoded as controls for six possible actions performed by the robotic arm
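
    The final step described here, encoding classified patterns as commands, is usually a small dispatch layer between the classifier output and the actuator. The mapping below is purely hypothetical (the poster does not list its six actions); it also illustrates a common safety choice of holding still on low-confidence classifications.

```python
# Hypothetical mapping from decoded EEG/EMG classes to arm actions;
# the class names and action set are illustrative, not from this work.
COMMANDS = {
    "push": "arm_forward",
    "pull": "arm_back",
    "lift": "arm_up",
    "drop": "arm_down",
    "clench": "gripper_close",   # EMG-derived
    "release": "gripper_open",   # EMG-derived
}

def dispatch(label, confidence, threshold=0.7):
    # Refuse to move the arm on uncertain or unknown classifications
    if confidence < threshold:
        return "hold"
    return COMMANDS.get(label, "hold")

print(dispatch("lift", 0.9))  # arm_up
print(dispatch("lift", 0.4))  # hold: below the confidence threshold
```

    Defaulting to "hold" for anything unexpected matters in assistive settings, where a misread thought pattern should not produce an unintended arm movement.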

    Brain informed transfer learning for categorizing construction hazards

    A transfer learning paradigm is proposed for "knowledge" transfer between the human brain and a convolutional neural network (CNN) for a construction hazard categorization task. Participants' brain activities are recorded using electroencephalogram (EEG) measurements while they view the same images (target dataset) as the CNN. The CNN is pretrained on the EEG data and then fine-tuned on the construction scene images. The results reveal that the EEG-pretrained CNN achieves 9% higher accuracy on a three-class classification task than a network with the same architecture but randomly initialized parameters. Brain activity from the left frontal cortex exhibits the highest performance gains, indicating high-level cognitive processing during hazard recognition. This work is a step toward improving machine learning algorithms by learning from human-brain signals recorded via a commercially available brain-computer interface. More generalized visual recognition systems can be effectively developed based on this approach of "keeping the human in the loop"
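
    Mechanically, "pretrain then fine-tune" means training on the source task and reusing the resulting weights as the initialization for the target task. The paper does this with a CNN on EEG data; as a self-contained sketch of the same mechanism, the snippet below warm-starts a tiny logistic model, with all data and dimensions invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_linear(X, y, W=None, lr=0.1, epochs=200):
    """Logistic regression by gradient descent; passing a pretrained
    `W` as the starting point implements warm-start transfer."""
    if W is None:
        W = rng.standard_normal(X.shape[1]) * 0.01  # random init
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ W)))
        W -= lr * X.T @ (p - y) / len(y)
    return W

v = np.array([2.0, -1.0, 0.0, 0.0, 0.0])  # shared latent direction

# Source task: plentiful data (stand-in for the EEG pretraining stage)
Xs = rng.standard_normal((200, 5))
W_pre = train_linear(Xs, (Xs @ v > 0).astype(float))

# Target task: only 10 labelled examples, warm-started from W_pre
Xt = rng.standard_normal((10, 5))
yt = (Xt @ v > 0).astype(float)
W_ft = train_linear(Xt, yt, W=W_pre.copy(), epochs=100)  # fine-tune

acc = ((1 / (1 + np.exp(-(Xt @ W_ft))) > 0.5) == yt).mean()
print(round(acc, 2))
```

    The transfer pays off exactly when the source and target tasks share structure, which is the paper's premise: human EEG responses to the hazard images encode features useful for categorizing those same images.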