    Task sensitivity in EEG biometric recognition

    This work explores the sensitivity of electroencephalography-based biometric recognition to the type of task subjects are required to perform while their brain activity is being recorded. A novel wavelet-based feature is used to extract identity information from a database of 109 subjects who performed four different motor movement/imagery tasks while their data were recorded. Training and testing of the system were performed under a number of experimental protocols to establish whether training with one type of task and testing with another would significantly affect recognition performance. Experiments were also conducted to evaluate the performance when a mixture of data from different tasks was used for training. The results suggest that performance is not significantly affected when there is a mismatch between training and test tasks. Furthermore, performance can be improved as the amount of training data is increased by combining data from several tasks. These results indicate that a more flexible approach may be adopted in data collection for EEG-based biometric systems, which could facilitate their deployment and improve their performance.
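
    As an illustration of this kind of pipeline, the sketch below extracts simple wavelet-domain statistics per channel and follows the cross-task protocol described above; the wavelet family, decomposition level and summary statistics are assumptions, not the paper's exact feature.

        # Minimal sketch, assuming a generic DWT-statistics feature (the paper's novel
        # wavelet feature is not reproduced here).
        import numpy as np
        import pywt

        def wavelet_features(eeg, wavelet="db4", level=4):
            """eeg: (n_channels, n_samples) array -> one feature vector."""
            feats = []
            for channel in eeg:
                coeffs = pywt.wavedec(channel, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
                for c in coeffs:
                    # Per-subband log-energy and spread as simple identity descriptors
                    feats.extend([np.log(np.sum(c ** 2) + 1e-12), np.std(c)])
            return np.asarray(feats)

        # Cross-task protocol (hypothetical variables): train on one task, test on another
        # X_train = np.stack([wavelet_features(t) for t in task_A_trials])
        # X_test  = np.stack([wavelet_features(t) for t in task_B_trials])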

    The Use of EEG Signals For Biometric Person Recognition

    This work is devoted to investigating EEG-based biometric recognition systems. One potential advantage of using EEG signals for person recognition is the difficulty of generating artificial signals with biometric characteristics, which makes spoofing an EEG-based biometric system a challenging task. However, more work needs to be done to overcome certain drawbacks that currently prevent the adoption of EEG biometrics in real-life scenarios: 1) the usually large number of sensors required, 2) still relatively low recognition rates (compared with some other biometric modalities), and 3) the template ageing effect. The existing shortcomings of EEG biometrics and their possible solutions are addressed in the thesis from three main perspectives: pre-processing, feature extraction and pattern classification. In pre-processing, task (stimuli) sensitivity and noise removal are investigated and discussed in separate chapters. For feature extraction, four novel features are proposed; for pattern classification, a new quality filtering method and a novel instance-based learning algorithm are described in respective chapters. A self-collected database (Mobile Sensor Database) is employed to investigate some important biometric-specific effects (e.g. the template ageing effect and the use of a low-cost sensor for recognition). In the research on pre-processing, a training data accumulation scheme is developed, which improves recognition performance by combining the data of different mental tasks for training, and a new wavelet-based de-noising method is developed, whose effectiveness in person identification is found to be considerable. Two novel features based on Empirical Mode Decomposition and the Hilbert Transform are developed, which provide the best biometric performance amongst all the newly proposed features and the other state-of-the-art features reported in the thesis; the other two newly developed wavelet-based features, while having slightly lower recognition accuracies, are computationally more efficient. The quality filtering algorithm is designed to employ the most informative EEG signal segments: experimental results indicate that using a small subset of the available data for training can yield a reasonable improvement in identification rate. The proposed instance-based template reconstruction learning algorithm shows significant effectiveness when tested on both the publicly available and the self-collected databases.
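
    The sketch below illustrates one plausible form of an Empirical Mode Decomposition + Hilbert-transform feature of the kind mentioned above; the number of IMFs retained, the sampling rate and the summary statistics are assumptions, not the thesis' exact definition.

        # Hedged sketch of an EMD + Hilbert feature for a single EEG channel.
        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD  # pip install EMD-signal

        def emd_hilbert_features(signal, fs=160.0, n_imfs=4):
            """Instantaneous amplitude/frequency statistics per IMF."""
            imfs = EMD().emd(np.asarray(signal, dtype=float))[:n_imfs]
            feats = []
            for imf in imfs:
                analytic = hilbert(imf)
                inst_amp = np.abs(analytic)
                inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
                feats.extend([inst_amp.mean(), inst_amp.std(),
                              inst_freq.mean(), inst_freq.std()])
            return np.asarray(feats)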

    In-ear EEG biometrics for feasible and readily collectable real-world person authentication

    The use of EEG as a biometric modality has been investigated for about a decade; however, its feasibility in real-world applications is not yet conclusively established, mainly owing to issues with collectability and reproducibility. To this end, we propose a readily deployable EEG biometrics system based on a 'one-fits-all' viscoelastic generic in-ear EEG sensor (collectability), which does not require skilled assistance or cumbersome preparation. Unlike most existing studies, we consider data recorded over multiple recording days and for multiple subjects (reproducibility) while, for rigour, the training and test segments are not taken from the same recording days. A robust approach is adopted based on the resting-state-with-eyes-closed paradigm and the use of both parametric (autoregressive model) and non-parametric (spectral) features, supported by simple and fast cosine-distance, linear discriminant analysis and support vector machine classifiers. Both the verification and identification forensics scenarios are considered, and the achieved results are on par with studies based on impractical on-scalp recordings. Comprehensive analysis over a number of subjects, setups, and analysis features demonstrates the feasibility of the proposed ear-EEG biometrics and its potential to resolve the critical collectability, robustness, and reproducibility issues associated with current EEG biometrics.
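
    A minimal sketch of the parametric (autoregressive) plus non-parametric (spectral) feature set with a cosine-distance matcher is given below; the AR order, Welch settings, frequency band and enrolment details are assumptions rather than the authors' exact configuration.

        import numpy as np
        from scipy.signal import welch
        from scipy.spatial.distance import cosine

        def ar_coeffs(x, order=10):
            """AR coefficients from the Yule-Walker equations."""
            x = x - x.mean()
            r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
            R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
            return np.linalg.solve(R, r[1:order + 1])

        def ear_eeg_features(x, fs=250.0):
            f, psd = welch(x, fs=fs, nperseg=int(fs))
            band = psd[(f >= 1) & (f <= 40)]          # keep the EEG-relevant band
            return np.concatenate([ar_coeffs(x), np.log(band + 1e-12)])

        def cosine_score(probe, template):
            return 1.0 - cosine(probe, template)      # higher = more similar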

    Overcoming Inter-Subject Variability in BCI Using EEG-Based Identification

    The strong dependence of Brain-Computer Interface (BCI) system performance on the BCI user is a well-known issue of many BCI devices. This contribution presents a new way to overcome this problem using a synergy between a BCI device and an EEG-based biometric algorithm. Using the biometric algorithm, the BCI device automatically identifies its current user and adapts the parameters of the classification process and of the BCI protocol to maximize BCI performance. In addition, we present an algorithm for EEG-based identification designed to be resistant to variations in EEG recordings between sessions, which is demonstrated by an experiment with an EEG database containing two sessions recorded one year apart. Furthermore, our algorithm is designed to be compatible with our movement-related BCI device, and the evaluation of its performance took place under the conditions of a standard BCI experiment. Estimation of the mu-rhythm fundamental frequency using Frequency Zooming AR modeling is used for EEG feature extraction, followed by a classifier based on the regularized Mahalanobis distance. An average subject identification score of 96% is achieved.
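
    The classifier stage can be pictured with the regularized-Mahalanobis-distance sketch below; it takes the per-user feature vectors (e.g. mu-rhythm frequency estimates) as given, and the shrinkage value is an assumption.

        import numpy as np

        def fit_user_models(features_by_user, shrinkage=0.1):
            """features_by_user: dict user_id -> (n_trials, n_features) array."""
            models = {}
            for user, X in features_by_user.items():
                mu = X.mean(axis=0)
                cov = np.cov(X, rowvar=False)
                # Shrink towards a scaled identity so the inverse stays well conditioned
                cov = (1 - shrinkage) * cov \
                      + shrinkage * (np.trace(cov) / cov.shape[0]) * np.eye(cov.shape[0])
                models[user] = (mu, np.linalg.inv(cov))
            return models

        def identify(x, models):
            d = {u: float((x - mu) @ icov @ (x - mu)) for u, (mu, icov) in models.items()}
            return min(d, key=d.get)  # smallest Mahalanobis distance wins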

    EEG Classification based on Image Configuration in Social Anxiety Disorder

    The problem of detecting the presence of Social Anxiety Disorder (SAD) using Electroencephalography (EEG) for classification has seen limited study and is addressed here with a new approach that exploits knowledge of the EEG sensor spatial configuration. Two classification models, one which ignores the configuration (model 1) and one that exploits it with different interpolation methods (model 2), are studied. The performance of these two models is examined on 34 EEG data channels, each decomposed into five frequency bands with a filter bank. The data are collected from 64 subjects consisting of healthy controls and patients with SAD. Our hypothesis that model 2 would significantly outperform model 1 is borne out in the results, with accuracy 6-7% higher for model 2 for each machine learning algorithm we investigated. Convolutional Neural Networks (CNNs) were found to provide much better performance than SVM and kNN classifiers.
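
    The spatially-aware "model 2" can be illustrated by interpolating per-channel band-power values onto a 2-D grid defined by the electrode layout, as sketched below; the electrode coordinates, grid size and interpolation method are assumptions.

        import numpy as np
        from scipy.interpolate import griddata

        def channels_to_image(values, electrode_xy, grid_size=32, method="cubic"):
            """values: (n_channels,) for one band; electrode_xy: (n_channels, 2)."""
            xs = np.linspace(electrode_xy[:, 0].min(), electrode_xy[:, 0].max(), grid_size)
            ys = np.linspace(electrode_xy[:, 1].min(), electrode_xy[:, 1].max(), grid_size)
            gx, gy = np.meshgrid(xs, ys)
            img = griddata(electrode_xy, values, (gx, gy), method=method)
            return np.nan_to_num(img)  # points outside the convex hull become zero

        # Stacking the five bands gives a (5, grid_size, grid_size) input for the CNN,
        # whereas "model 1" would feed the flat per-channel vector to the classifier.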

    Unified Framework for Identity and Imagined Action Recognition from EEG patterns

    We present a unified deep learning framework for the recognition of user identity and of imagined actions, based on electroencephalography (EEG) signals, for application as a brain-computer interface. Our solution exploits a novel shifted-subsampling preprocessing step as a form of data augmentation, and a matrix representation to encode the inherent local spatial relationships of multi-electrode EEG signals. The resulting image-like data are then fed to a convolutional neural network to process the local spatial dependencies, and eventually analyzed through a bidirectional long short-term memory module to focus on temporal relationships. Our solution is compared against several state-of-the-art methods, showing comparable or superior performance on different tasks. Specifically, we achieve accuracy levels above 90% for both action and user classification tasks. In terms of user identification, we reach a 0.39% equal error rate in the case of known users and gestures, and 6.16% in the more challenging case of unknown users and gestures. Preliminary experiments are also conducted to direct future work towards everyday applications relying on a reduced set of EEG electrodes.
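
    The shifted-subsampling augmentation can be pictured as decimating each trial by a factor k from every possible starting offset, as in the sketch below; the factor and the subsequent electrode-grid mapping are assumptions.

        import numpy as np

        def shifted_subsample(trial, k=4):
            """trial: (n_channels, n_samples) -> k subsampled trials of equal length."""
            n = (trial.shape[1] // k) * k
            return [trial[:, offset:n:k] for offset in range(k)]

        # Each subsampled trial is then rearranged into an image-like matrix preserving
        # electrode adjacency before being fed to the CNN + bidirectional LSTM network.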

    Non-invasive multi-modal human identification system combining ECG, GSR, and airflow biosignals

    A huge amount of data can be collected through a wide variety of sensor technologies. Data mining techniques are often useful for the analysis of the gathered data. This paper studies the use of three wearable sensors that monitor the electrocardiogram, airflow, and galvanic skin response of a subject, with the purpose of designing an efficient multi-modal human identification system. The proposed system, based on the rotation forest ensemble algorithm, offers high accuracy (99.6% true acceptance rate and just 0.1% false positive rate). For its evaluation, the proposed system was tested against the characteristics commonly demanded of a biometric system, including universality, uniqueness, permanence, and acceptance. Finally, a proof-of-concept implementation of the system is demonstrated on a smartphone and its performance is evaluated in terms of processing speed and power consumption. The identification of a sample is extremely efficient, taking around 200 ms and consuming just a few millijoules; it is thus feasible to use the proposed system on a regular smartphone for user identification. This work was supported by MINECO grant TIN2013-46469-R (SPINY: Security and Privacy in the Internet of You) and CAM grant S2013/ICE-3095 (CIBERDINE: Cybersecurity, Data, and Risks).
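
    As a rough illustration of the rotation-forest idea used above (random feature groups, a PCA rotation per group, then a decision tree per ensemble member), a simplified sketch follows; the group size, tree count and integer class labels are assumptions, and this is not the evaluated implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeClassifier

        class SimpleRotationForest:
            def __init__(self, n_trees=10, group_size=3, seed=0):
                self.n_trees, self.group_size = n_trees, group_size
                self.rng = np.random.default_rng(seed)
                self.members = []  # (feature_order, fitted PCAs, tree)

            def _rotate(self, X, order, pcas=None):
                groups = [order[i:i + self.group_size]
                          for i in range(0, len(order), self.group_size)]
                if pcas is None:
                    pcas = [PCA().fit(X[:, g]) for g in groups]
                return np.hstack([p.transform(X[:, g]) for p, g in zip(pcas, groups)]), pcas

            def fit(self, X, y):
                for _ in range(self.n_trees):
                    order = self.rng.permutation(X.shape[1])
                    Xr, pcas = self._rotate(X, order)
                    self.members.append((order, pcas, DecisionTreeClassifier().fit(Xr, y)))
                return self

            def predict(self, X):
                votes = np.stack([t.predict(self._rotate(X, o, p)[0])
                                  for o, p, t in self.members])
                # Majority vote across trees (assumes integer class labels)
                return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)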