Entropy-based EEG Time Interval Selection for Improving Motor Imagery Classification
Classification of different motor imagery tasks using electroencephalogram (EEG) signals is challenging, since EEG presents individualized temporal and spatial characteristics that are contaminated by noise, artifacts and irrelevant mental activities. In most applications, the EEG time interval on which feature extraction algorithms operate is fixed for all subjects, whereas the start time and the duration of motor imagery-based brain activities can vary from subject to subject. To improve the classification accuracy, this paper proposes a novel entropy-based algorithm to accurately identify the time interval during which motor imagery has been performed. The proposed algorithm searches through different time intervals across trials and finds the one with minimum irregularity. The hypothesis behind the proposed algorithm is that when motor imagery is performed, the activities of the neurons in the motor cortex tend to become more synchronized and less irregular. We evaluate the proposed algorithm using a publicly available motor imagery-based BCI dataset. The experimental results show that the proposed algorithm selects EEG intervals leading to superior BCI performance compared to the fixed EEG intervals that are commonly used for all subjects.
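As a rough illustration of the interval-selection idea described above, the following Python sketch searches candidate windows across trials and keeps the one with the lowest average entropy. Spectral entropy is used here as the irregularity measure and the data are synthetic; the paper's exact entropy definition, window grid and dataset are not reproduced.

```python
# A minimal sketch of entropy-based interval selection, assuming spectral
# entropy as the irregularity measure (the paper's exact entropy definition
# may differ). Synthetic data stands in for real motor imagery trials.
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs):
    """Shannon entropy of the normalized power spectrum of a 1-D signal."""
    freqs, psd = welch(x, fs=fs, nperseg=min(256, len(x)))
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12))

def select_interval(trials, fs, win_len, step):
    """Search candidate windows and return the start sample with minimum
    average entropy across trials and channels.

    trials : array, shape (n_trials, n_channels, n_samples)
    """
    n_trials, n_channels, n_samples = trials.shape
    best_start, best_entropy = 0, np.inf
    for s in range(0, n_samples - win_len + 1, step):
        window = trials[:, :, s:s + win_len]
        h = np.mean([spectral_entropy(window[i, c], fs)
                     for i in range(n_trials) for c in range(n_channels)])
        if h < best_entropy:
            best_start, best_entropy = s, h
    return best_start, best_entropy

if __name__ == "__main__":
    fs = 250                                          # sampling rate (Hz)
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((20, 3, 4 * fs))     # 20 trials, 3 channels, 4 s
    start, h = select_interval(trials, fs, win_len=2 * fs, step=fs // 4)
    print(f"selected interval starts at {start / fs:.2f} s (entropy {h:.2f})")
```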
Facial Expression Classification Using EEG and Gyroscope Signals
In this paper, muscle and gyroscope signals provided by a low-cost EEG headset were used to classify six different facial expressions. Muscle activity generated by facial expressions is visible in EEG data recorded from the scalp. Using the already present EEG device to classify facial expressions allows for a new hybrid brain-computer interface (BCI) system without introducing new hardware such as separate electromyography (EMG) electrodes. To classify facial expressions, time-domain and frequency-domain EEG data with different sampling rates were used as inputs to the classifiers. The experimental results showed that, with sampling rates and classification methods optimized for each participant and feature set, high-accuracy classification of facial expressions was achieved. Moreover, adding information extracted from a gyroscope embedded in the EEG headset increased performance by an average of 9 to 16%.
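A minimal sketch of the kind of feature fusion described above is shown below, assuming simple time-domain and band-power EEG features concatenated with gyroscope statistics and fed to an SVM; the original feature sets, sampling rates and classifiers are assumptions here, and the data are synthetic.

```python
# A hedged sketch of EEG + gyroscope feature fusion for expression
# classification; feature definitions and the SVM classifier are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def eeg_features(trial, fs):
    """Concatenate simple time-domain stats and total band power per channel."""
    feats = []
    for ch in trial:                      # trial: (n_channels, n_samples)
        freqs, psd = welch(ch, fs=fs, nperseg=min(128, len(ch)))
        feats += [ch.mean(), ch.std(), psd.sum()]
    return np.array(feats)

def gyro_features(gyro):
    """Mean and standard deviation of each gyroscope axis."""
    return np.concatenate([gyro.mean(axis=1), gyro.std(axis=1)])

rng = np.random.default_rng(1)
fs = 128
y = np.repeat(np.arange(6), 10)           # six expressions, 10 trials each
X = np.array([np.concatenate([eeg_features(rng.standard_normal((14, fs)), fs),
                              gyro_features(rng.standard_normal((2, fs)))])
              for _ in y])
print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```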
Estimation of Joint Angle Based on Surface Electromyogram Signals Recorded at Different Load Levels
To control upper-limb exoskeletons and prostheses, surface electromyogram (sEMG) signals are widely used to estimate joint angles. However, variations in the load carried by the user can substantially change the recorded sEMG and consequently degrade the accuracy of joint angle estimation. In this paper, we aim to deal with this problem by training classification models using a pool of sEMG data recorded across all the different loads. The classification models are trained as either subject-specific or subject-independent, and their results are compared with the performance of classification models that have information about the carried load. To evaluate the proposed system, sEMG signals were recorded during elbow flexion and extension from three participants at four different loads (1, 2, 4 and 6 kg) and six different angles (0, 30, 60, 90, 120 and 150 degrees). The results show that, although the loads were assumed to be unknown and the available training data were relatively small, the proposed joint angle estimation model performed significantly above chance level in both the subject-specific and subject-independent settings. However, transferring from known to unknown loads in the subject-specific classifiers leads to a 20-32% loss in average accuracy.
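The pooled-load training idea can be sketched as follows, assuming RMS-style sEMG features and an LDA classifier over the six angle classes; the actual features and models used in the paper are not specified in the abstract, and the data below are synthetic.

```python
# A minimal sketch of load-agnostic joint-angle classification from sEMG
# features; features, classifier and data are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
angles = [0, 30, 60, 90, 120, 150]    # degrees
loads = [1, 2, 4, 6]                  # kg
X, y = [], []
for load in loads:
    for k, angle in enumerate(angles):
        # synthetic RMS-like features for 20 windows per (load, angle) condition;
        # the load shifts the features to mimic load-dependent sEMG changes
        feats = rng.normal(loc=k + 0.1 * load, scale=1.0, size=(20, 8))
        X.append(feats)
        y += [k] * 20
X, y = np.vstack(X), np.array(y)

# Pool data from all loads (load treated as unknown) and cross-validate.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"pooled-load accuracy: {acc:.2f} (chance = {1 / len(angles):.2f})")
```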
Subject-to-subject adaptation to reduce calibration time in motor imagery-based brain-computer interface
In order to enhance the usability of a motor imagery-based brain-computer interface (BCI), it is highly desirable to reduce the calibration time. Due to inter-subject variability, a new subject typically has to undergo a 20-30 minute calibration session to collect sufficient data for training a BCI model based on his/her brain patterns. This paper proposes a new subject-to-subject adaptation algorithm to reliably reduce the calibration time of a new subject to only 3-4 minutes. Unlike several past studies, the proposed algorithm does not require a large pool of historic sessions to reduce the calibration time. In the proposed algorithm, using only a few trials from the new subject, the new subject's data is first adapted to each available historic session separately. This is done by a linear transformation that minimizes the distribution difference between the two groups of EEG data. Thereafter, among the available historic sessions, the one that best matches the new subject's adapted data is selected as the calibration session. Consequently, the model previously trained on the selected historic session is used, without retraining, to classify the new subject's adapted data. The proposed algorithm is evaluated on a publicly available dataset with 9 subjects. For each subject, the calibration session is selected only from the calibration sessions of the eight other subjects. The experimental results showed that the proposed algorithm not only reduced the calibration time by 85%, but also performed on average only 1.7% less accurately than subject-dependent calibration.
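The following sketch illustrates the adaptation step in spirit: a linear transformation maps the new subject's few trials onto each historic session by matching average covariance matrices, and the best-matching session is selected. The specific transformation and distance measure are assumptions, not the paper's exact formulation.

```python
# A hedged sketch of subject-to-subject adaptation via covariance alignment
# and historic-session selection; synthetic EEG stands in for real data.
import numpy as np
from scipy.linalg import sqrtm

def avg_cov(trials):
    """Average spatial covariance over trials of shape (channels, samples)."""
    return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)

def align(new_trials, historic_trials):
    """Whiten the new subject's trials with their own covariance and
    re-colour them with the historic session's covariance."""
    T = np.real(sqrtm(avg_cov(historic_trials))) @ \
        np.linalg.inv(np.real(sqrtm(avg_cov(new_trials))))
    return np.array([T @ t for t in new_trials])

def distribution_distance(a_trials, b_trials):
    """Frobenius distance between the average covariance matrices."""
    return np.linalg.norm(avg_cov(a_trials) - avg_cov(b_trials), ord="fro")

rng = np.random.default_rng(3)
new = rng.standard_normal((10, 8, 500))                  # a few trials, 8 channels
sessions = [rng.standard_normal((80, 8, 500)) for _ in range(8)]
scores = [distribution_distance(align(new, s), s) for s in sessions]
print("selected historic session:", int(np.argmin(scores)))
```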
Weighted multi-task learning in classification domain for improving brain-computer interface
One of the major limitations of brain-computer interfaces (BCI) is their long calibration time. Due to between-session and between-subject nonstationarity, a large amount of training data typically needs to be collected at the beginning of each session in order to tune the parameters of the system for the target user. In this paper, a number of novel weighted multi-task transfer learning algorithms are proposed in the classification domain to reduce the calibration time without sacrificing the classification accuracy of the BCI system. The proposed algorithms use data from other subjects and combine them to estimate the classifier parameters for the target subject. This combination is based on how similar the data from each subject is to the few trials available from the target subject. The proposed algorithms are evaluated using dataset 2a from BCI competition IV. According to the results, the proposed algorithms reduce the calibration time by 75% while at the same time enhancing the average classification accuracy.
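A simplified sketch of the weighting idea follows: classifiers trained on other subjects are combined with weights reflecting how well each one fits the target subject's few calibration trials. The classifier (LDA) and the accuracy-based weighting below are illustrative assumptions rather than the paper's formulation.

```python
# A hedged sketch of similarity-weighted combination of other subjects'
# classifiers for a new target subject; data and weighting are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

def make_subject(shift):
    """Synthetic two-class data with a subject-specific shift."""
    X = rng.standard_normal((100, 10)) + shift
    y = rng.integers(0, 2, 100)
    X[y == 1] += 1.0                              # make classes separable
    return X, y

source_subjects = [make_subject(s * 0.3) for s in range(5)]
X_cal, y_cal = make_subject(0.2)
X_cal, y_cal = X_cal[:20], y_cal[:20]             # few target calibration trials
X_test, y_test = make_subject(0.2)

clfs, weights = [], []
for Xs, ys in source_subjects:
    clf = LinearDiscriminantAnalysis().fit(Xs, ys)
    clfs.append(clf)
    weights.append(clf.score(X_cal, y_cal))       # similarity proxy
weights = np.array(weights) / np.sum(weights)

# Weighted combination of source classifiers applied to the target subject.
proba = sum(w * c.predict_proba(X_test) for w, c in zip(weights, clfs))
print("weighted transfer accuracy:", np.mean(proba.argmax(axis=1) == y_test))
```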
A novel three dimensional probability-based classifier for improving motor imagery-based BCI
Objective: Motor imagery BCI-based assistive robotics solutions have the potential to give disabled users greater independence in upper-limb mobility. The objective of this work was to compare the classification performance of well-established classifiers with a novel prototype classifier.
Approach: We developed an adaptive decision surface (ADS) classifier, with the future objective of enabling an assistive robotic prosthetic hand to open and close to grasp an object in cooperation with LIDAR sensors. The ADS was trained on the BCI competition IV dataset 2a from Graz University of Technology.
Main results: The classification accuracy in the offline tests reached 76.06% for class 1 and 81.50% for class 2 using a non-adaptive ADS, and 79.55% for class 1 and 99.69% for class 2 using an adaptive ADS classifier. We demonstrate a prototype adaptive decision surface classifier applied to motor imagery datasets.
Domain-specific and domain-general processes underlying metacognitive judgments
Metacognition and self-awareness are commonly assumed to operate as global capacities. However, there have been few attempts to test this assumption across multiple cognitive domains and metacognitive evaluations. Here, we assessed the covariance between “online” metacognitive processes, as measured by decision confidence judgments in the domains of perception and memory, and error awareness in the domain of attention to action. Previous research investigating metacognition across task domains has not matched stimulus characteristics across tasks, raising the possibility that any differences in metacognitive accuracy may be influenced by local task properties. The current experiment measured metacognition in perceptual, memory and attention tasks that were closely matched for stimulus characteristics. We found that metacognitive accuracy across the three tasks was dissociated, suggesting that domain-specific networks support an individual's capacity for accurate metacognition. This finding was independent of objective performance, which was controlled using a staircase procedure. However, response times for metacognitive judgments and error awareness were associated, suggesting that shared mechanisms determining how these meta-level evaluations unfold in time may underlie these different types of decision. In addition, the relationship between these laboratory measures of metacognition and reports of everyday functioning from participants and their significant others (informants) was investigated. We found that informant reports, but not self-reports, predicted metacognitive accuracy on the perceptual task, and participants who underreported cognitive difficulties relative to their informants also showed poorer metacognitive accuracy on the perceptual task. These results are discussed in the context of models of metacognitive regulation and neuropsychological evidence for dissociable metacognitive systems. The potential for the refinement of metacognitive assessment in clinical populations is also discussed.
"You have reached your destination" : a single trial EEG classification study
Studies have established that it is possible to differentiate between the brain's responses to observing correct and incorrect movements in navigation tasks. Furthermore, these classifications can be used as feedback for a learning-based BCI, to allow real or virtual robots to find quasi-optimal routes to a target. However, when navigating it is important not only to know that we are moving in the right direction toward a target, but also to know when we have reached it. We asked participants to observe a virtual robot performing a 1-dimensional navigation task. We recorded EEG and then performed neurophysiological analysis on the responses to two classes of correct movements: those that moved closer to the target but did not reach it, and those that did reach the target. Further, we used a stepwise linear classifier on time-domain features to differentiate the classes on a single-trial basis. A second data set was also used to further test this single-trial classification. We found that the amplitude of the P300 was significantly greater in cases where the movement reached the target. Interestingly, we were able to classify the EEG signals evoked when observing the two classes of correct movements against each other, with mean overall accuracies of 66.5% and 68.0% for the two data sets, and greater-than-chance accuracy achieved for all participants. As a proof of concept, we have shown that it is possible to classify the EEG responses to observing these different correct movements against each other using single-trial EEG. This could be used as part of a learning-based BCI and opens a new door toward a more autonomous BCI navigation system.
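The single-trial classification step could look roughly like the sketch below, which approximates the stepwise linear classifier with forward feature selection followed by LDA on time-domain features; the epochs are synthetic and the feature definitions are assumptions.

```python
# A hedged approximation of stepwise linear classification of single-trial
# time-domain features (forward selection + LDA stands in for SWLDA).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_epochs, n_features = 200, 40            # e.g. downsampled post-movement samples
X = rng.standard_normal((n_epochs, n_features))
y = rng.integers(0, 2, n_epochs)          # reached target vs. moved closer
X[y == 1, 15:20] += 0.8                   # larger "P300-like" deflection

clf = make_pipeline(
    SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                              n_features_to_select=10, direction="forward"),
    LinearDiscriminantAnalysis(),
)
print("single-trial accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```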
Robust common spatial pattern estimation using dynamic time warping to improve BCI systems
Common spatial patterns (CSP) is one of the most popular feature extraction algorithms for brain-computer interfaces (BCI). However, CSP is known to be very sensitive to artifacts and prone to overfitting. This paper proposes a novel dynamic time warping (DTW)-based approach to improve CSP covariance matrix estimation and hence improve feature extraction. Dynamic time warping is widely used for finding an optimal alignment between two time-dependent signals under predefined conditions. The proposed approach reduces within-class temporal variation and non-stationarity by aligning the training trials to the average of the trials from the same class. The proposed DTW-based CSP approach is combined with a support vector machine (SVM) classifier and evaluated using a publicly available motor imagery dataset. The results showed that the proposed approach, when compared to classical CSP, improved the average classification accuracy from 78% to 83%. Importantly, for some subjects, the improvement was around 10%.
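The alignment idea can be sketched as follows: each training trial is warped towards its class average with DTW before the CSP covariance matrices are estimated. The per-channel 1-D warping and the simple covariance normalisation below are assumptions rather than the paper's exact procedure, and the data are synthetic.

```python
# A hedged sketch of DTW-aligned trials feeding CSP covariance estimation.
import numpy as np
from scipy.linalg import eigh

def dtw_path(x, y):
    """Optimal DTW alignment path between two 1-D sequences."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def warp_to_reference(trial, reference):
    """Warp each channel of a trial (channels x samples) onto the reference."""
    warped = np.zeros_like(reference)
    for ch in range(trial.shape[0]):
        acc = np.zeros(reference.shape[1])
        counts = np.zeros(reference.shape[1])
        for i, j in dtw_path(trial[ch], reference[ch]):
            acc[j] += trial[ch, i]
            counts[j] += 1
        warped[ch] = acc / np.maximum(counts, 1)
    return warped

def csp_filters(trials_a, trials_b):
    """CSP spatial filters from two lists of (channels x samples) trials."""
    def avg_cov(ts):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in ts], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    _, W = eigh(Ca, Ca + Cb)          # generalized eigenvalue problem
    return W

rng = np.random.default_rng(6)
class_a = [rng.standard_normal((4, 100)) for _ in range(15)]
class_b = [rng.standard_normal((4, 100)) for _ in range(15)]
ref_a, ref_b = np.mean(class_a, axis=0), np.mean(class_b, axis=0)
aligned_a = [warp_to_reference(t, ref_a) for t in class_a]
aligned_b = [warp_to_reference(t, ref_b) for t in class_b]
print("CSP filter matrix shape:", csp_filters(aligned_a, aligned_b).shape)
```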
Brain-computer interface technology for speech recognition: A review
This paper presents an overview of studies that have been conducted with the purpose of understanding the use of brain signals as input to a speech recogniser. The studies are categorised based on the type of technology used, with a summary of the methodologies employed and the results achieved. In addition, the paper gives insight into studies that examined the effect of the chosen stimuli on brain activity, an important factor in the recognition process. The remainder of the paper lists the limitations of the available studies and the challenges for future work in this area.