Classification of Movement Intention Using Independent Components of Premovement EEG
Many previous studies on brain-machine interfaces (BMIs) have focused on electroencephalography (EEG) signals elicited during motor-command execution to generate device commands. However, exploiting pre-execution brain activity related to movement intention could improve the practical applicability of BMIs. Therefore, in this study we investigated whether EEG signals occurring before movement execution could be used to classify movement intention. Six subjects performed reaching tasks that required them to move a cursor to one of four targets distributed horizontally and vertically from the center. Using independent components of EEG acquired during a premovement phase, two-class classifications were performed for left vs. right trials and top vs. bottom trials using a support vector machine. Instructions were presented visually (test condition) and aurally (control condition). In the test condition, accuracy for a single window was about 75%, and it increased to 85% in classification using two windows. In the control condition, accuracy for a single window was about 73%, and it increased to 80% in classification using two windows. Classification results showed that a combination of two windows from different time intervals during the premovement phase improved classification performance in both conditions compared to single-window classification. By categorizing the independent components according to spatial pattern, we found that modality-dependent information can improve classification performance. We confirmed that EEG signals occurring during movement preparation can be used to control a BMI.
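The pipeline described above (independent components of premovement EEG fed to a support vector machine) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the channel count, window length, number of components, and the log-variance feature choice are all assumptions.

```python
# Hypothetical sketch: two-class SVM classification of premovement EEG
# using ICA-derived features. All data are synthetic; dimensions and the
# log-variance feature are illustrative assumptions, not the study's setup.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 120, 16, 128  # assumed dimensions

# Synthetic "premovement" EEG: one class gets an added oscillatory source.
labels = rng.integers(0, 2, n_trials)            # e.g., 0 = left, 1 = right
mixing = rng.normal(size=(n_channels, 1))
eeg = rng.normal(size=(n_trials, n_channels, n_samples))
source = np.sin(2 * np.pi * 10 * np.arange(n_samples) / 128)  # 10 Hz burst
eeg[labels == 1] += 0.5 * mixing * source        # broadcast over trials

# Unmix with ICA fit on concatenated trials, then use per-trial component
# log-variance within the window as features for a linear SVM.
ica = FastICA(n_components=8, random_state=0)
ica.fit(eeg.transpose(0, 2, 1).reshape(-1, n_channels))
features = np.stack([
    np.log(np.var(ica.transform(trial.T), axis=0) + 1e-12)
    for trial in eeg
])
scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Combining features from two premovement windows, as in the study, would simply concatenate two such feature vectors per trial before classification.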
Decoding Neural Correlates of Cognitive States to Enhance Driving Experience
Modern cars can support their drivers by assessing and autonomously performing different driving maneuvers based on information gathered by in-car sensors. We propose that brain–machine interfaces (BMIs) can provide complementary information that can ease the interaction with intelligent cars in order to enhance the driving experience. In our approach, the human remains in control, while a BMI monitors the driver's cognitive state and uses that information to modulate the assistance provided by the intelligent car. In this paper, we gather our proof-of-concept studies demonstrating the feasibility of decoding electroencephalography correlates of upcoming actions and those reflecting whether the decisions of driving assistant systems are in line with the drivers' intentions. Experimental results while driving both simulated and real cars consistently showed neural signatures of anticipation, movement preparation, and error processing. Remarkably, despite the increased noise inherent to real scenarios, these signals can be decoded on a single-trial basis, reflecting some of the cognitive processes that take place while driving. However, moderate decoding performance compared to controlled experimental BMI paradigms indicates that there is room for improvement of the machine learning methods typically used in state-of-the-art BMIs. We foresee that fusing neural correlates with information extracted from other physiological measures (e.g., eye movements or electromyography), as well as contextual information gathered by in-car sensors, will allow intelligent cars to provide timely and tailored assistance only when it is required, thus keeping the user in the loop and allowing them to fully enjoy the driving experience.
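The single-trial decoding of error processing mentioned above can be illustrated with a minimal sketch. The signals here are synthetic surrogates (an error-related deflection added to noise), and the epoch length, deflection shape, and use of a plain LDA classifier are assumptions for illustration only.

```python
# Hypothetical sketch: single-trial decoding of an error-related EEG
# response with LDA. Epochs are synthetic; amplitude, latency, and SNR
# are illustrative assumptions, not measurements from the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_samples = 200, 64                  # assumed epoch length
t = np.arange(n_samples)
errp = np.exp(-0.5 * ((t - 32) / 6.0) ** 2)    # toy error-related deflection

labels = rng.integers(0, 2, n_trials)          # 1 = erroneous assistance decision
epochs = rng.normal(size=(n_trials, n_samples))
epochs[labels == 1] += 0.8 * errp              # add the deflection to error trials

scores = cross_val_score(LinearDiscriminantAnalysis(), epochs, labels, cv=5)
print(f"single-trial decoding accuracy: {scores.mean():.2f}")
```

In a driving application, the decoded probability of an error response could then gate or modulate the assistance level, rather than issue commands directly.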
Multi-Classifier Fusion Strategy for Activity and Intent Recognition of Torso Movements
As assistive, wearable robotic devices are being developed to physically assist their users, it has become crucial to develop safe, reliable methods to coordinate the device with the intentions and motions of the wearer. This dissertation investigates the recognition of user intent during flexion and extension of the human torso in the sagittal plane to be used for control of an assistive exoskeleton for the human torso. A multi-sensor intent recognition approach is developed that combines information from surface electromyogram (sEMG) signals from the user's muscles and inertial sensors mounted on the user's body. Intent recognition is implemented by following a pattern classification approach, wherein a linear discriminant analysis (LDA) based method of pattern classification is utilized. This method builds on a traditional LDA by utilizing multiple classifiers from multiple sensors that are combined using a majority-voting-based classifier fusion scheme to deliver improved classification performance. Additionally, there is a focus on identification of suitable features for classification. Extraction of features in the time, frequency, and time-frequency domains is discussed. Wavelet transform methods are employed for targeted extraction of nonlinear time-frequency domain features, and the effectiveness of these features in improving classification performance is emphasized. Experimental results using sEMG and inertial signals recorded from human subjects are presented to evaluate the pattern classification and feature extraction methods. Results show that a combined sensor approach that utilizes both inertial and sEMG data leads to a 70% improvement in classification performance.
Results also show that the use of multiple time-frequency domain features in conjunction with majority-voting-based classifier fusion leads to an additional 75% improvement in classification performance, with a best case of up to 97% accuracy in recognizing user intent. This research has provided an effective demonstration of leveraging nonlinear time-frequency domain features with linear methods of classification to deliver accurate and computationally efficient intent recognition. In addition, the research effort has also developed a library of features that can serve as a starting point for future efforts in classifying torso motions.
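The multi-classifier fusion strategy above (one LDA per sensor, fused by majority vote) can be sketched as follows. This is a minimal illustration on synthetic signals: the wavelet features of the dissertation are replaced here with simple time-domain features (RMS and mean absolute value) for brevity, and the sensor count and class structure are assumptions.

```python
# Hypothetical sketch of majority-voting classifier fusion: one LDA per
# "sensor" (synthetic sEMG/inertial-like channels), fused by majority
# vote. Time-domain features stand in for the dissertation's wavelet
# features; all data and dimensions are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_trials, n_sensors, n_samples = 150, 3, 100
labels = rng.integers(0, 2, n_trials)   # e.g., 0 = flexion, 1 = extension

# Synthetic signals: one class has higher amplitude on every sensor.
signals = rng.normal(size=(n_trials, n_sensors, n_samples))
signals[labels == 1] *= 1.6

def features(x):
    """Per-sensor time-domain features: RMS and mean absolute value."""
    return np.stack([np.sqrt(np.mean(x**2, axis=-1)),
                     np.mean(np.abs(x), axis=-1)], axis=-1)

feats = features(signals)               # shape: (trials, sensors, 2)
train, test = train_test_split(np.arange(n_trials), random_state=0)

# Train one LDA per sensor, then fuse test predictions by majority vote.
votes = []
for s in range(n_sensors):
    clf = LinearDiscriminantAnalysis().fit(feats[train, s], labels[train])
    votes.append(clf.predict(feats[test, s]))
fused = (np.mean(votes, axis=0) > 0.5).astype(int)
accuracy = np.mean(fused == labels[test])
print(f"fused accuracy: {accuracy:.2f}")
```

An odd number of per-sensor classifiers keeps the majority vote unambiguous; with an even count, ties would need a tie-breaking rule such as falling back to the most reliable sensor.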