
    A real time classification algorithm for EEG-based BCI driven by self-induced emotions

    Background and objective: The aim of this paper is to provide an efficient, parametric, general, and completely automatic real-time classification method for electroencephalography (EEG) signals obtained from self-induced emotions. The particular characteristics of the considered low-amplitude signals (a self-induced emotion produces a signal whose amplitude is about 15% of that of a genuinely experienced emotion) require exploring and adapting strategies such as the Wavelet Transform, Principal Component Analysis (PCA), and the Support Vector Machine (SVM) for signal processing, analysis, and classification. Moreover, the method is intended for use in a multi-emotion Brain Computer Interface (BCI), and for this reason ad hoc precautions are taken. Method: The peculiarity of the brain activation requires ad hoc signal processing by wavelet decomposition and the definition of a set of features for signal characterization in order to discriminate different self-induced emotions. The proposed method is a fully parameterized two-stage algorithm aimed at multi-class classification and may be considered within the framework of machine learning. The first stage, the calibration, is offline and is devoted to signal processing, determination of the features, and training of a classifier. The second stage, the real-time one, is the test on new data. PCA is applied to avoid redundancy in the set of features, whereas the classification of the selected features, and therefore of the signals, is obtained by the SVM. Results: Experimental tests have been conducted on EEG signals for a binary BCI based on the self-induced disgust produced by remembering an unpleasant odor. Since the literature shows that this emotion mainly involves the right hemisphere, and in particular the T8 channel, the classification procedure is tested using T8 alone, although the average accuracy is also calculated and reported for the whole set of measured channels. Conclusions: The obtained classification results are encouraging, with an average success rate above 90% across the examined subjects. Ongoing work applies the proposed procedure to map a larger set of emotions with EEG and to establish the EEG headset with the minimal number of channels that allows recognition of a significant range of emotions, both in the field of affective computing and in the development of auxiliary communication tools for subjects affected by severe disabilities.
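    The described pipeline (wavelet decomposition for features, PCA to remove redundancy, SVM for classification, split into an offline calibration stage and a real-time test stage) can be sketched as follows. This is only an illustrative outline under assumed settings: the wavelet family, decomposition level, per-band statistics, and classifier parameters are placeholders, not those used in the paper.

        # Minimal sketch of a two-stage wavelet + PCA + SVM pipeline (assumed parameters).
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def wavelet_features(trial, wavelet="db4", level=4):
            """Decompose one single-channel EEG trial (1-D array) and summarize each sub-band."""
            coeffs = pywt.wavedec(trial, wavelet, level=level)
            feats = []
            for c in coeffs:
                feats.extend([np.mean(c), np.std(c), np.sum(c ** 2)])  # simple band statistics
            return np.array(feats)

        def calibrate(X_trials, y):
            """Offline calibration stage: X_trials is (n_trials, n_samples), y holds emotion labels."""
            F = np.vstack([wavelet_features(t) for t in X_trials])
            clf = make_pipeline(PCA(n_components=0.95), SVC(kernel="rbf"))
            return clf.fit(F, y)

        def classify(clf, trial):
            """Real-time stage: classify one new trial with the trained pipeline."""
            return clf.predict(wavelet_features(trial).reshape(1, -1))[0]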

    Multi-Person Brain Activity Recognition via Comprehensive EEG Signal Analysis

    Electroencephalography (EEG) based brain activity recognition is a fundamental field of study for a number of significant applications such as intention prediction, appliance control, and neurological disease diagnosis in smart home and smart healthcare domains. Existing techniques mostly focus on binary brain activity recognition for a single person, which limits their deployment in wider and more complex practical scenarios. Therefore, multi-person and multi-class brain activity recognition has recently gained popularity. Another challenge faced by brain activity recognition is the low recognition accuracy due to the massive noise and the low signal-to-noise ratio in EEG signals. Moreover, feature engineering in EEG processing is time-consuming and relies heavily on expert experience. In this paper, we attempt to solve the above challenges by proposing an approach with better EEG interpretation ability via raw EEG signal analysis for multi-person and multi-class brain activity recognition. Specifically, we analyze inter-class and inter-person EEG signal characteristics and use them to capture the discrepancies between inter-class EEG data. Then, we adopt an Autoencoder layer to automatically refine the raw EEG signals by eliminating various artifacts. We evaluate our approach on both a public and a local EEG dataset and conduct extensive experiments to explore the effect of several factors (such as normalization methods, training data size, and Autoencoder hidden neuron size) on the recognition results. The experimental results show that our approach achieves high accuracy compared to competitive state-of-the-art methods, indicating its potential to promote future research on multi-person EEG recognition.
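    The Autoencoder refinement step can be sketched as below, assuming fixed-length EEG windows flattened to vectors; the hidden size, optimizer, and training loop are illustrative placeholders, not the architecture reported in the paper.

        # Sketch of an Autoencoder layer that reconstructs (refines) raw EEG windows.
        import torch
        import torch.nn as nn

        class EEGAutoencoder(nn.Module):
            def __init__(self, n_inputs, n_hidden=64):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
                self.decoder = nn.Linear(n_hidden, n_inputs)

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def refine(raw_windows, n_hidden=64, epochs=50, lr=1e-3):
            """Train on raw EEG windows (n_windows, n_inputs) and return their reconstruction."""
            x = torch.as_tensor(raw_windows, dtype=torch.float32)
            model = EEGAutoencoder(x.shape[1], n_hidden)
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(epochs):
                opt.zero_grad()
                loss = loss_fn(model(x), x)   # reconstruction error drives artifact removal
                loss.backward()
                opt.step()
            return model(x).detach().numpy()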

    P300 detection and characterization for brain computer interface

    Advances in cognitive neuroscience and brain imaging technologies have enabled the brain to interface directly with the computer. This technique is called a Brain Computer Interface (BCI). This ability is made possible through the use of sensors that can monitor some of the physical processes that occur inside the brain. Researchers have used these kinds of technologies to build brain-computer interfaces (BCIs). Computers or communication devices can be controlled using the signals produced in the brain. This can be a real boon for all those who are not able to communicate with the outside world directly; they can express their emotions or feelings using this technology. In BCI we use oddball paradigms to generate event-related potentials (ERPs), like the P300 wave, on targets that have been selected by the user. The basic principle of a P300 speller is the detection of P300 waves, which allows the user to write characters. Two classification problems are encountered in the P300 speller. The first is to detect the presence of a P300 in the electroencephalogram (EEG). The second refers to the combination of different P300 signals for determining the right character to spell. In this thesis both parts, i.e., the classification as well as the characterization, are presented in a simple and lucid way. First, data are obtained using data set 2 of the third BCI competition. The raw data were processed in MATLAB and the corresponding feature matrices were obtained. Several techniques such as normalization, feature extraction, and feature reduction of the data are explained in this thesis. Then an artificial neural network (ANN) algorithm is used to classify the data into P300 and non-P300 waves. Finally, character recognition is carried out through the use of multiclass classifiers that enable the user to determine the right character to spell.
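    The two classification problems described above can be sketched as follows, using a generic ANN classifier and a hypothetical 6x6 speller matrix; the epoch features, network size, and row/column coding are assumptions for illustration only.

        # Sketch of the two P300-speller steps: detect P300 epochs, then pick the character.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        MATRIX = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                           list("STUVWX"), list("YZ1234"), list("56789_")])

        def train_p300_detector(X, y):
            """X: (n_epochs, n_features) post-stimulus feature vectors; y: 1 if a P300 is present."""
            return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

        def spell_character(clf, epochs, codes):
            """epochs: features for each flash; codes: flashed row (0-5) or column (6-11)."""
            scores = clf.predict_proba(epochs)[:, 1]   # P300 probability per flash
            totals = np.zeros(12)
            for s, c in zip(scores, codes):
                totals[c] += s                          # accumulate evidence per row/column
            row = int(np.argmax(totals[:6]))
            col = int(np.argmax(totals[6:]))
            return MATRIX[row, col]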

    Classification of functional brain data for multimedia retrieval

    This study introduces new signal processing methods for extracting meaningful information from brain signals (functional magnetic resonance imaging (fMRI) and single-unit recordings) and proposes a content-based retrieval system for functional brain data. First, a new method that combines the maximal overlap discrete wavelet transform (MODWT) and dynamic time warping (DTW) is presented as a solution for dynamically detecting the hemodynamic response from fMRI data. Second, a new method for neuron spike sorting is presented that uses the MODWT and rotated principal component analysis. Third, a procedure to characterize firing patterns of neuron spikes from the human brain, in both the temporal domain and the frequency domain, is presented. The combination of multitaper spectral estimation and a polynomial curve-fitting method is employed to transform the firing patterns to the frequency domain. To generate temporal shapes, eight local maxima are smoothly connected by a cubic spline interpolation. A rotated principal component analysis is used to extract common firing patterns as templates from a training set of 4100 neuron spike signals. Dynamic time warping is then used to assign each neuron firing to the closest template without shift error. These techniques are utilized in the development of a content-based retrieval system for human brain data.
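    As one small illustration of the template-matching step described above, the sketch below implements a plain dynamic time warping distance and assigns a firing pattern to the closest template. The local cost, template set, and data shapes are assumptions for illustration, not the study's actual configuration.

        # Classic DTW distance and nearest-template assignment for 1-D firing patterns.
        import numpy as np

        def dtw_distance(a, b):
            """O(len(a)*len(b)) dynamic time warping with absolute-difference local cost."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def assign_to_template(pattern, templates):
            """Return the index of the template with the smallest DTW distance to the pattern."""
            dists = [dtw_distance(pattern, t) for t in templates]
            return int(np.argmin(dists))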

    Current Source Density Estimation Enhances the Performance of Motor-Imagery Related Brain-Computer Interface

    The objective is to evaluate the impact of EEG referencing schemes and spherical surface Laplacian (SSL) methods on the classification performance of motor-imagery (MI) related brain-computer interface systems. Two EEG referencing schemes, common referencing (CR) and common average referencing, and three surface Laplacian methods, current source density (CSD), the finite difference method, and SSL using a realistic head model, were implemented separately for pre-processing of the EEG signals recorded at the scalp. A combination of filter bank common spatial filtering for feature extraction and a support vector machine for classification was used for both the pairwise binary classifications and the four-class classification of MI tasks. The study provides three major outcomes: 1) the CSD method performs better than CR, providing a significant improvement of 3.02% and 5.59% across the six binary classification tasks and the four-class classification task, respectively; 2) using a greater number of channels at the pre-processing stage than at the feature extraction stage yields better classification accuracies for all the Laplacian methods; and 3) the efficiency of all the surface Laplacian methods decreases significantly when fewer channels are considered during pre-processing.
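    A hedged sketch of this kind of pipeline is shown below, using MNE-Python and scikit-learn as one possible toolchain: the epochs object, frequency bands, number of CSP components, and classifier settings are placeholders rather than the paper's actual setup.

        # Sketch: CSD pre-processing, filter-bank CSP features, and an SVM classifier.
        import numpy as np
        import mne
        from mne.decoding import CSP
        from sklearn.svm import SVC

        def fbcsp_features(epochs, y, bands=((4, 8), (8, 12), (12, 16), (16, 24), (24, 30))):
            """Apply CSD, band-pass each sub-band, and stack CSP features per band."""
            # epochs must be preloaded and carry a montage with channel positions.
            csd_epochs = mne.preprocessing.compute_current_source_density(epochs)
            feats = []
            for lo, hi in bands:
                band = csd_epochs.copy().filter(lo, hi, verbose=False)
                csp = CSP(n_components=4)
                feats.append(csp.fit_transform(band.get_data(), y))
            return np.hstack(feats)

        # Usage sketch, given an mne.Epochs object with MI trials and labels y:
        # X = fbcsp_features(epochs, y)
        # clf = SVC(kernel="rbf").fit(X, y)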

    Single-trial extraction of event-related potentials (ERPs) and classification of visual stimuli by ensemble use of discrete wavelet transform with Huffman coding and machine learning techniques

    Background: Presentation of visual stimuli can induce changes in EEG signals that are typically detectable by averaging together data from multiple trials, for individual participant analysis as well as for analysis of groups or conditions across multiple participants. This study proposes a new method based on the discrete wavelet transform with Huffman coding and machine learning for single-trial analysis of event-related potentials (ERPs) and classification of different visual events in a visual object detection task. Methods: EEG single trials are decomposed with the discrete wavelet transform (DWT) up to a chosen level of decomposition using a biorthogonal B-spline wavelet. The DWT coefficients in each trial are thresholded to discard sparse wavelet coefficients while the quality of the signal is well maintained. The remaining optimum coefficients in each trial are encoded into bitstreams using Huffman coding, and the codewords are used as a feature of the ERP signal. The performance of this method is tested with real visual ERPs of sixty-eight subjects. Results: The proposed method significantly discards the spontaneous EEG activity, extracts the single-trial visual ERPs, represents the ERP waveform as a compact bitstream feature, and achieves promising results in classifying the visual objects, with classification performance metrics of accuracy 93.60, sensitivity 93.55, specificity 94.85, precision 92.50, and area under the curve (AUC) 0.93 using SVM and k-NN machine learning classifiers. Conclusion: The proposed method suggests that the joint use of the discrete wavelet transform (DWT) with Huffman coding has the potential to efficiently extract ERPs from background EEG for studying evoked responses in single-trial ERPs and classifying visual stimuli. The proposed approach has O(N) time complexity and could be implemented in real-time systems, such as a brain-computer interface (BCI), where fast detection of mental events is desired to operate a machine smoothly with the mind.
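    The feature-construction idea (DWT decomposition, thresholding of small coefficients, quantization, and Huffman coding of the survivors into a bitstream) could be prototyped roughly as below; the wavelet, decomposition level, threshold, and quantization step are illustrative assumptions, not the parameters used in the study.

        # Rough sketch: DWT + thresholding + quantization + Huffman-coded bitstream per trial.
        import heapq
        from collections import Counter
        import numpy as np
        import pywt

        def huffman_codes(symbols):
            """Build a Huffman code table {symbol: bitstring} from a symbol sequence."""
            heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(Counter(symbols).items())]
            heapq.heapify(heap)
            i = len(heap)
            while len(heap) > 1:
                lo, hi = heapq.heappop(heap), heapq.heappop(heap)
                for pair in lo[2:]:
                    pair[1] = "0" + pair[1]
                for pair in hi[2:]:
                    pair[1] = "1" + pair[1]
                heapq.heappush(heap, [lo[0] + hi[0], i] + lo[2:] + hi[2:])
                i += 1
            return dict(heap[0][2:])

        def trial_bitstream(trial, wavelet="bior3.3", level=4, thresh=0.1, step=0.05):
            """Decompose one trial, discard small coefficients, quantize, and Huffman-encode."""
            coeffs = np.concatenate(pywt.wavedec(trial, wavelet, level=level))
            kept = coeffs[np.abs(coeffs) > thresh * np.max(np.abs(coeffs))]
            symbols = np.round(kept / step).astype(int).tolist()
            table = huffman_codes(symbols)
            return "".join(table[s] for s in symbols)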