
    SSL for Auditory ERP-Based BCI

    A brain–computer interface (BCI) is a communication tool that analyzes neural activity and relays the translated commands to carry out actions. In recent years, semi-supervised learning (SSL) has attracted attention for visual event-related potential (ERP)-based BCIs and motor-imagery BCIs as an effective technique that can adapt to variations in patterns across subjects and trials. SSL techniques are expected to improve the performance of auditory ERP-based BCIs as well; however, there is as yet no conclusive evidence of their positive effect in this setting, and such evidence would be valuable to the BCI community. In this study, we assessed the effects of SSL techniques on two public auditory BCI datasets—AMUSE and PASS2D—using the following machine learning algorithms: step-wise linear discriminant analysis, shrinkage linear discriminant analysis, spatial-temporal discriminant analysis, and least-squares support vector machine. These backbone classifiers were first trained on labeled data and then incrementally updated with unlabeled data after every trial of the test data, following the SSL approach. Although a small portion of the data was negatively affected, most of the data clearly benefited from SSL in all cases. Overall accuracy increased logarithmically with each additional unlabeled trial. This study supports the positive effect of SSL techniques and encourages future researchers to apply them to auditory ERP-based BCIs.
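The incremental update described above — train a backbone classifier on labeled calibration data, then fold in each unlabeled test trial using the classifier's own prediction as a pseudo-label — can be sketched with scikit-learn's shrinkage LDA. The toy data, shapes, and feature dimensionality below are illustrative, not taken from the AMUSE or PASS2D datasets.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy ERP-like features: labeled calibration trials plus unlabeled test trials.
X_lab = rng.normal(size=(60, 8))
y_lab = rng.integers(0, 2, 60)          # binary target/non-target labels
X_unlab = rng.normal(size=(30, 8))

# Shrinkage LDA, one of the backbone classifiers named in the abstract.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X_lab, y_lab)

# Self-training loop: after each unlabeled trial, adopt the classifier's own
# prediction as a pseudo-label and retrain on the enlarged training pool.
X_pool, y_pool = X_lab.copy(), y_lab.copy()
for x in X_unlab:
    pseudo = clf.predict(x.reshape(1, -1))
    X_pool = np.vstack([X_pool, x])
    y_pool = np.concatenate([y_pool, pseudo])
    clf.fit(X_pool, y_pool)
```

With real ERP features the pool grows trial by trial, which is why accuracy in the study improves as more unlabeled data accumulate.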

    CLASSIFICATION BASED ON SEMI-SUPERVISED LEARNING: A REVIEW

    Semi-supervised learning is the class of machine learning that combines supervised and unsupervised learning, conceptually sitting between learning from labelled data and learning from unlabeled data. In many cases, it enables large quantities of unlabeled data to be exploited alongside the usually limited collections of labeled data. Standard classification methods in machine learning use only a labeled collection to train the classifier, yet labelled instances are difficult to acquire because they require annotation by human experts. Fully unsupervised learning, by contrast, dispenses with annotation entirely, but offers little control over what is learned. Semi-supervised learning addresses this issue by utilizing a large number of unlabeled inputs together with the supervised inputs to create a good training sample. Since semi-supervised learning requires less human effort and can achieve greater precision, both in theory and in practice, it is of significant interest.

    Bio-signals compression using auto-encoder

    Recent developments in wearable devices permit a non-invasive and inexpensive way of gathering medical data such as bio-signals, including ECG, respiration, and blood pressure. Gathering and analysing various biomarkers can provide anticipatory healthcare through customized medical applications. Because wearable devices are constrained by size, resources, and battery capacity, a novel algorithm is needed to robustly manage the device's memory and energy. Rapid technological growth has produced numerous auto-encoders that deliver reliable results by extracting features from the time and frequency domains in an efficient way. The main aim is to train the hidden layer to reconstruct data similar to the input. Previous works required all features to accomplish the compression, whereas our proposed framework, bio-signals compression using auto-encoder (BCAE), performs the task by taking only the important features and compressing them. This reduces power consumption at the source end and hence increases battery life. Performance is compared on three parameters: compression ratio, reconstruction error, and power consumption. Our proposed work outperforms the SURF method.
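As a rough illustration of the idea — not the authors' BCAE implementation — a minimal linear auto-encoder in NumPy compresses fixed-length signal windows into a low-dimensional code that could be transmitted in place of the raw samples. The toy signal, the 32-sample window, and the 4-dimensional code (an 8:1 compression ratio) are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "bio-signal": a noisy sine wave, cut into 32 windows of 32 samples.
t = np.linspace(0, 8 * np.pi, 1024)
sig = np.sin(t) + 0.05 * rng.normal(size=t.size)
X = sig.reshape(-1, 32)

n_hidden = 4                                 # code size: 32 -> 4 (8:1 ratio)
W1 = 0.1 * rng.normal(size=(32, n_hidden))   # encoder weights
W2 = 0.1 * rng.normal(size=(n_hidden, 32))   # decoder weights
lr = 0.01
for _ in range(2000):                        # plain gradient descent on MSE
    H = X @ W1                               # latent code (linear encoder)
    R = H @ W2                               # reconstruction (linear decoder)
    E = R - X
    W2 -= lr * H.T @ E / len(X)              # gradient of mean squared error
    W1 -= lr * X.T @ (E @ W2.T) / len(X)

code = X @ W1                                # transmit only the 4-dim code
recon = code @ W2                            # receiver reconstructs the window
mse = float(np.mean((recon - X) ** 2))
```

A real BCAE would use nonlinear hidden layers and be trained on actual bio-signals, but the energy argument is the same: only `code` leaves the device, so transmission cost scales with the code size rather than the window size.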

    TEMPORAL DATA EXTRACTION AND QUERY SYSTEM FOR EPILEPSY SIGNAL ANALYSIS

    The 2016 Epilepsy Innovation Institute (Ei2) community survey reported that unpredictability is the most challenging aspect of seizure management. Effective and precise detection, prediction, and localization of epileptic seizures is a fundamental computational challenge. Utilizing epilepsy data from multiple epilepsy monitoring units can enhance the quantity and diversity of datasets, leading to more robust epilepsy data analysis tools. The contributions of this dissertation are two-fold: the implementation of a temporal query system for epilepsy data, and a machine learning approach for seizure detection, prediction, and localization. The three key components of our temporal query interface are: 1) a pipeline for automatically extracting European Data Format (EDF) information and epilepsy annotation data from cross-site sources; 2) data quantity monitoring for epilepsy temporal data; and 3) a web-based annotation query interface for preliminary research and for building customized epilepsy datasets. The system extracted and stored about 450,000 epilepsy-related events from more than 2,497 subjects across seven institutes up to September 2019. Leveraging the epilepsy temporal events query system, we developed machine learning models for seizure detection, prediction, and localization. Using 135 features extracted from EEG signals, we trained a channel-based eXtreme Gradient Boosting model to detect seizures in 8-second EEG segments. A long-term EEG recording evaluation shows that the model can detect about 90.34% of seizures in an existing EEG dataset with 961 hours of data; on the segment-level evaluation, it achieved 89.88% accuracy, 92.32% sensitivity, and 84.76% AUC. We also introduced a transfer learning approach, consisting of 1) a base deep learning model pre-trained on the ImageNet dataset and 2) customized fully connected layers, to train on the patient-specific pre-ictal and inter-ictal data from our database. Two convolutional neural network architectures were evaluated using 53 pre-ictal segments and 265 continuous hours of inter-ictal EEG data. The evaluation shows that our model reached 86.79% sensitivity and a 3.38% false-positive rate. Another transfer learning model, for seizure localization, uses a pre-trained ResNeXt50 architecture and was trained on an image-augmented dataset labeled by fingerprint. This model achieved 88.22% accuracy, 34.99% sensitivity, a 1.02% false-positive rate, and a 34.3% positive likelihood rate.
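The channel-based detection pipeline — extract features per 8-second EEG segment, then classify with a boosted-tree model — can be sketched as below. Scikit-learn's GradientBoostingClassifier stands in for the XGBoost model named in the abstract, and the three toy features and synthetic segments are illustrative, not the dissertation's 135 features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
FS = 256                                   # assumed sampling rate (Hz)
SEG = 8 * FS                               # 8-second segments, as in the text

def segment_features(seg):
    """Three example per-channel features (mean, variance, line length);
    the dissertation extracts 135 features per segment."""
    return np.array([seg.mean(), seg.var(), np.abs(np.diff(seg)).sum()])

# Toy single-channel data: "ictal" segments have higher amplitude.
normal = [rng.normal(0, 1, SEG) for _ in range(40)]
ictal = [rng.normal(0, 3, SEG) for _ in range(40)]
X = np.array([segment_features(s) for s in normal + ictal])
y = np.array([0] * 40 + [1] * 40)          # 0 = inter-ictal, 1 = ictal

# Boosted trees stand in here for the channel-based XGBoost model.
clf = GradientBoostingClassifier(n_estimators=50).fit(X, y)
acc = clf.score(X, y)                      # training accuracy on the toy set
```

On real recordings the per-segment predictions are then aggregated across channels and time to produce the recording-level detections evaluated in the abstract.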

    Exploring machine learning techniques in epileptic seizure detection and prediction

    Epilepsy is the most common neurological disorder, affecting between 0.6% and 0.8% of the global population. Among those whose primary method of seizure management is anti-epileptic drug (AED) therapy, 30% go on to develop drug resistance, which ultimately leads to poor seizure management. Currently, alternative therapeutic methods with successful outcomes and wide applicability to the various types of epilepsy are limited. During an epileptic seizure, the onset of which tends to be sudden and without prior warning, sufferers are highly vulnerable to injury, so methods that can accurately predict seizure episodes in advance are clearly of value, particularly to those who are resistant to other forms of therapy. In this thesis, we draw from the body of work on automatic seizure prediction from digitised electroencephalography (EEG) data and use a selection of machine learning and data mining algorithms and techniques to explore potential directions of improvement for automatic prediction of epileptic seizures. We start by adopting a set of EEG features from previous work in the field (Costa et al. 2008) and exploring these via seizure classification and feature selection studies on a large dataset. Guided by the results of these feature selection studies, we then build on Costa et al.'s work by presenting an expanded feature set for EEG studies in this area. Next, we study the predictability of epileptic seizures several minutes (up to 25 minutes) in advance of the physiological onset, and we examine the role of the various feature compositions in predicting epileptic seizures well before they occur. We focus on how predictability varies as a function of how far in advance we are trying to predict the seizure episode, and on whether the predictive patterns generalise across the entire dataset. Finally, we study epileptic seizure detection from a multiple-patient perspective. This entails a comprehensive analysis of machine learning models trained on multiple patients, observing how generalisation is affected by the number of patients and the underlying learning algorithm. Moreover, we improve multiple-patient performance by applying two state-of-the-art machine learning algorithms.
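Cross-patient generalisation of the kind studied in the final part of the thesis is commonly measured with leave-one-patient-out evaluation, where each fold tests on a patient unseen during training. A hypothetical scikit-learn sketch, with toy data and illustrative patient counts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
# Toy features for 6 "patients", 30 EEG segments each; the label here is a
# simple deterministic function of the features, purely for illustration.
X = rng.normal(size=(180, 5))
y = (X.sum(axis=1) > 0).astype(int)
groups = np.repeat(np.arange(6), 30)       # patient identity per segment

# Leave-one-patient-out: every fold trains on 5 patients and tests on the
# held-out one, so the score reflects generalisation to unseen patients.
scores = cross_val_score(LogisticRegression(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
mean_acc = scores.mean()
```

With real EEG data, the spread of the per-patient scores is as informative as the mean: a model that generalises poorly shows high variance across held-out patients even when its average accuracy looks acceptable.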

    Learning Sensory Representations with Minimal Supervision


    Systems engineering approaches to safety in transport systems

    During driving, driver behavior monitoring may provide useful information to prevent road traffic accidents caused by driver distraction. It has been shown that 90% of road traffic accidents are due to human error, and in 75% of these cases human error is the only cause. Car manufacturers have been interested in driver monitoring research for several years, aiming to enhance general knowledge of driver behavior and to evaluate the driver's functional state, since distraction, fatigue, mental workload, and attention may drastically influence driving safety. Fatigue and sleepiness at the wheel are well-known risk factors for traffic accidents. The Human Factor (HF) plays a fundamental role in modern transport systems: drivers and transport operators steer a vehicle towards its destination according to their own senses, physical condition, experience, and ability, and safety strongly relies on the HF making the right decisions. On the other hand, we are experiencing a gradual shift towards increasingly autonomous vehicles, where the HF still constitutes an important component but may in fact become the "weakest link in the chain", requiring strong and effective training feedback. Studies investigating the use of biometric or biophysical signals as data sources to evaluate the interaction between human brain activity and an electronic machine belong to the Human-Machine Interface (HMI) framework. The HMI can acquire human signals, analyse their embedded structures, and recognize the behavior of subjects during their interaction with the machine or with virtual interfaces such as PCs or other communication systems. Based on my previous experience in planning and monitoring hazardous material transport, this work aims to create control models focused on driver behavior and changes in the driver's physiological parameters. Three case studies have been considered, each using the interaction between an EEG system and an external device, such as a driving simulator or electronic components. The first case study concerns the detection of the driver's behavior during a test drive. The second concerns the detection of the driver's arm movements from EEG data during a driving test. The third is the setting up of a Brain-Computer Interface (BCI) model able to detect head movements in human participants from the EEG signal and to control an electronic component according to the electrical brain activity produced by head-turning movements. Some videos showing the experimental results are available at https://www.youtube.com/channel/UCj55jjBwMTptBd2wcQMT2tg.
    XXXIV CICLO - INFORMATICA E INGEGNERIA DEI SISTEMI / COMPUTER SCIENCE AND SYSTEMS ENGINEERING - Ingegneria dei sistemi. Zero, Enric

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution that exploits a non-muscular channel for communication and for control of assistive robotic devices such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as lack of precision, robustness, and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments that advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths towards the design of effective control strategies were considered in this project. The first is the design of hybrid systems that combine the BMI with gaze control, a long-lasting motor function in many paralyzed patients; this approach increases the degrees of freedom available for control. The second is the inclusion of adaptive techniques in the BMI design, which transforms robotic tools and devices into active assistants able to co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized by the type of mental signal exploited for control. These include: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user's intention and psychophysiological state, in a reinforcement learning framework; and 3) the integration of active and passive brain control signals, to achieve adaptation within the BMI architecture at the level of feature extraction and classification.

    Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing

    Emotions play a critical role in our daily lives, so the understanding and recognition of emotional responses is crucial for human research. Affective computing research has mostly used non-immersive two-dimensional (2D) images or videos to elicit emotional states. However, immersive virtual reality, which allows researchers to simulate environments under controlled laboratory conditions with high levels of sense of presence and interactivity, is becoming more popular in emotion research. Moreover, its synergy with implicit measurements and machine-learning techniques has the potential for transversal impact in many research areas, opening new opportunities for the scientific community. This paper presents a systematic review of the emotion recognition research undertaken with physiological and behavioural measures using head-mounted displays as elicitation devices. The results highlight the evolution of the field, give a clear perspective using aggregated analysis, reveal the current open issues, and provide guidelines for future research. This research was funded by the European Commission, grant number H2020-825585 HELIOS.
    Marín-Morales, J.; Llinares Millán, M. D. C.; Guixeres Provinciale, J.; Alcañiz Raya, M. L. (2020). Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors, 20(18), 1-26. https://doi.org/10.3390/s20185163