
    Influencing brain waves by evoked potentials as biometric approach: taking stock of the last six years of research

    The scientific advances of recent years have made affordable hardware devices available to anyone, capable of something unthinkable until a few years ago: reading brain waves. Through small wearable devices it is now possible to perform electroencephalography (EEG), albeit with less capability than high-cost professional devices offer. Such devices enable researchers to run a huge number of experiments that were once impossible in many areas due to the high cost of the necessary hardware. Many studies in the literature explore the use of EEG data as a biometric approach for person identification, but this approach unfortunately suffers from problems mainly related to the difficulty of extracting unique and stable patterns from users, despite the adoption of sophisticated techniques. One approach to this problem is based on evoked potentials (EPs), responses to external stimuli applied during the EEG recording; this noninvasive technique has been used for many years in clinical routine, in combination with other diagnostic tests, to evaluate the electrical activity of certain areas of the brain and spinal cord and to diagnose neurological disorders. In view of the growing number of works in the literature that combine the EEG and EP approaches for biometric purposes, this work evaluates the practical feasibility of such approaches as reliable biometric instruments for user identification by surveying the state of the art of the last six years, and also provides an overview of the elements and concepts related to this research area.

    Evidence of Task-Independent Person-Specific Signatures in EEG using Subspace Techniques

    Electroencephalography (EEG) signals are promising alternatives to other biometrics owing to their protection against spoofing. Previous studies have focused on capturing individual variability by analyzing task/condition-specific EEG. This work attempts to model biometric signatures independent of task/condition by normalizing the associated variance. Toward this goal, the paper extends ideas from subspace-based text-independent speaker recognition and proposes novel modifications for modeling multi-channel EEG data. The proposed techniques assume that biometric information is present in the entire EEG signal and accumulate statistics across time in a high-dimensional space. These high-dimensional statistics are then projected to a lower-dimensional space where the biometric information is preserved. The lower-dimensional embeddings obtained using the proposed approach are shown to be task-independent. The best subspace system identifies individuals with accuracies of 86.4% and 35.9% on datasets with 30 and 920 subjects, respectively, using just nine EEG channels. The paper also provides insights into the subspace model's scalability to unseen tasks and individuals during training and the number of channels needed for subspace modeling.
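
    As an illustration of the ideas above, here is a minimal Python sketch, not the paper's exact pipeline, of accumulating per-window statistics of multi-channel EEG into a high-dimensional supervector and projecting it to a low-dimensional subspace for cosine-scored identification. The function names, the window size, and the PCA stand-in for the learned projection are all assumptions.

```python
# Minimal sketch (not the paper's exact pipeline): accumulate per-window
# statistics of multi-channel EEG into a high-dimensional supervector,
# then project to a low-dimensional subspace where identity is compared.
import numpy as np

def supervector(eeg, win=128):
    """eeg: (channels, samples). Stack per-window mean + covariance stats."""
    C, T = eeg.shape
    stats = []
    for start in range(0, T - win + 1, win):
        w = eeg[:, start:start + win]
        cov = np.cov(w)                              # second-order statistics
        stats.append(np.concatenate([w.mean(axis=1), cov[np.triu_indices(C)]]))
    return np.mean(stats, axis=0)                    # accumulate across time

def fit_subspace(supervectors, dim=32):
    """PCA as a simple stand-in for the learned low-dimensional projection."""
    mean = supervectors.mean(axis=0)
    _, _, Vt = np.linalg.svd(supervectors - mean, full_matrices=False)
    return Vt[:dim], mean                            # (dim, D) projection

def embed(sv, P, mean):
    return P @ (sv - mean)

def identify(probe, enrolled):
    """Cosine similarity against per-subject enrollment embeddings."""
    scores = {s: probe @ e / (np.linalg.norm(probe) * np.linalg.norm(e))
              for s, e in enrolled.items()}
    return max(scores, key=scores.get)
```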

    Learning EEG Biometrics for Person Identification and Authentication

    EEG provides appealing biometrics by presenting some unique attributes not possessed by common biometric modalities like fingerprints, retina, and face scans, in terms of robustness against forgery, secrecy and privacy compliance, liveness detection, and the potential for continuous authentication. Meanwhile, the use of EEG to provide cognitive indicators for human workload, fatigue, and emotions has created an environment where EEG is well integrated into systems, making it readily available for biometric purposes. Yet many challenges still need to be properly addressed before any actual deployment of EEG-based biometric systems in real-life scenarios: 1) subjects' inconvenience during the signal acquisition process, 2) relatively low recognition rates, and 3) the lack of robustness against diverse human states. To address these issues, this thesis is devoted to learning biometric traits from EEG signals for stable person identification and authentication.

    State-of-the-art studies of EEG biometrics are mainly divided into two categories: the event-related potential (ERP) category, which relies on tight control of the subjects' cognitive states, and the ongoing EEG category, which uses continuous EEG signals (mainly in the resting state) naturally produced by the brain without any particular sensory stimulation. Studies in the ERP category focus more on the design of proper signal elicitation protocols or paradigms, which usually require repetitive sensory stimulation. Ongoing EEG, on the contrary, is more flexible in terms of signal acquisition but needs more advanced computational methods for feature extraction and classification. This study focuses on EEG biometrics using ongoing signals under diverse human states, without strict task-specific controls on brain response elicitation during signal acquisition; such a flexible system could lead to effective deployment in the real world. This is in contrast to previous studies that rely on specific sensory stimulation and synthetic cognitive tasks to tightly control the cognitive state reflected in the resulting EEG activity, or that use resting-state EEG signals. Relaxing the reliance on the user's cognitive state streamlines the signal acquisition process, which in turn facilitates the actual deployment of EEG biometric systems. Furthermore, not relying on sensory stimulation and cognitive tasks also allows for flexible and unobtrusive biometric systems that work in the background without interrupting users, which is especially important in continuous scenarios.

    However, relaxing the system's reliance on the human state also means losing control of the EEG activity produced. As a result, EEG signals captured from the scalp may be contaminated by active task involvement and cognitive states such as workload and emotion. It therefore becomes a challenge to learn identity-bearing information from such complicated signals to support highly stable EEG biometrics. Possible solutions are proposed and investigated from two main perspectives, feature extraction and pattern classification. Specifically, graph features and learning models are proposed based on brain connectivity, graph theory, and deep learning algorithms. A comprehensive investigation is conducted to assess the performance of the proposed and existing methods in biometric identification and authentication, including in continuous scenarios. The methods and experiments are reported and detailed in the corresponding chapters, together with the results obtained from the data analysis.
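
    To make the graph-based direction concrete, the following is a hedged sketch, assuming plain Pearson correlation as the connectivity measure, of turning ongoing multi-channel EEG into graph features; the thesis itself builds learning models on top of such connectivity, and every name and parameter below is illustrative.

```python
# Hedged sketch: derive a functional-connectivity graph from ongoing EEG
# and use simple graph statistics as identity features. The thesis pairs
# such features with learning models; here we stop at feature extraction.
import numpy as np

def connectivity_graph(eeg):
    """eeg: (channels, samples) -> absolute Pearson correlation adjacency."""
    A = np.abs(np.corrcoef(eeg))
    np.fill_diagonal(A, 0.0)                 # no self-connections
    return A

def graph_features(A):
    strength = A.sum(axis=1)                 # node strength per channel
    # upper-triangle edge weights capture pairwise coupling patterns
    edges = A[np.triu_indices_from(A, k=1)]
    return np.concatenate([strength, edges])

# Features from many windows of one subject can then train any classifier
# (the thesis explores graph-based and deep models instead).
```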

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, handprint, hand vein, speech, and gait recognition, have become commonplace as a means of identity management for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.
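
    As a rough illustration of that end-to-end paradigm, the sketch below maps raw multi-channel signals directly to subject identities with a small 1-D CNN; the architecture, channel count, and class count are assumptions for illustration, not drawn from any of the collected papers.

```python
# Illustrative sketch of the end-to-end paradigm: one network maps raw
# biometric signals (e.g., EEG/ECG) straight to identities, folding
# preprocessing and feature extraction into the learned model.
import torch
import torch.nn as nn

class EndToEndBiometricNet(nn.Module):
    def __init__(self, channels=8, n_subjects=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),  # learned filtering
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),        # learned features
            nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_subjects)

    def forward(self, x):          # x: (batch, channels, samples), raw signal
        return self.classifier(self.features(x).squeeze(-1))
```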

    Improving classification of error related potentials using novel feature extraction and classification algorithms for an assistive robotic device

    Advances in technology are required to improve the quality of life of people with severe disabilities who have lost independence of movement in their daily lives. A brain-computer interface (BCI) is one technology able to re-establish a sense of independence for a person with a severe disability through direct communication between the brain and an electronic device. To enhance the symbiotic interface between the person and the BCI, its accuracy and robustness should be improved across all age groups. This thesis addresses these issues by developing a novel feature extraction algorithm and a novel classification algorithm for detecting erroneous actions made by either the human or the BCI. The research first evaluated a state-of-the-art error detection classifier using data from two different age groups, young and elderly; the performance showed a statistically significant difference between the groups, indicating that error detection and classification need improvement. The results showed that the proposed relative peak feature (RPF) and adaptive decision surface (ADS) classifier outperformed state-of-the-art algorithms in detecting errors from EEG for both the elderly and young groups. In addition, the novel classification algorithm was applied to motor imagery to improve the detection of when a person imagines moving a limb. Finally, the thesis takes a brief look at object recognition for a shared-control task of identifying utensils in cooperation with a prosthetic robotic hand.
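
    The abstract names the relative peak feature (RPF) and adaptive decision surface (ADS) without defining them, so the following sketch is only one plausible reading: peaks measured relative to a pre-event baseline, and a decision threshold that adapts to a user's running score statistics. All names, parameters, and details are assumptions.

```python
# Speculative sketch of RPF/ADS as described above (not the thesis's
# actual definitions): baseline-relative peak features per EEG epoch,
# and a per-user threshold that drifts toward the running score mean.
import numpy as np

def relative_peak_feature(epoch, baseline_len=50):
    """epoch: 1-D EEG epoch; peaks expressed relative to the baseline mean."""
    baseline = epoch[:baseline_len].mean()
    post = epoch[baseline_len:]
    return np.array([post.max() - baseline,           # positive peak amplitude
                     post.min() - baseline,           # negative peak amplitude
                     post.argmax() - post.argmin()])  # peak latency gap

class AdaptiveThreshold:
    """Toy adaptive decision surface: the threshold tracks the score mean."""
    def __init__(self, lr=0.05):
        self.theta, self.lr = 0.0, lr
    def classify(self, score):
        return score > self.theta                     # True -> error detected
    def update(self, score):
        self.theta += self.lr * (score - self.theta)
```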

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim of assisting patients who suffer from neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol, and the challenge of accurate signal decoding, especially in making a system work in real time. Hence, the core purpose of this work is to improve the applicability and usability of BCI systems while preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNNs) make it possible for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Accordingly, this thesis focuses on novel DNN-based approaches for tackling some of the key underlying constraints in the area of BCI. For example, recent improvements in acquisition hardware make it possible to eliminate the pre-existing rigid experimentation procedure, albeit at the cost of noisier signal capture; through the use of a DNN-based model, however, the accuracy of predictions from the decoded signals can be preserved. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to support real-time BCI applications in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution for reducing the requirement for costly data collection. Work is also performed on addressing the well-known issue of subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage, and sustaining accurate signal decoding in real time. These limitations lead to a fragile BCI system that is demanding to use and suited only for deployment in a controlled laboratory. The contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.
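
    One of the ideas above, reducing subject-specific features, can be sketched with adversarial training via a gradient-reversal layer, as below; this is a generic stand-in under assumed shapes and layer sizes, not the thesis's actual architecture.

```python
# Sketch of subject-bias reduction: train an encoder adversarially against
# a subject classifier with a gradient-reversal layer, so the learned
# features serve the task while encoding little subject identity.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None           # reversed gradient for encoder

class SubjectInvariantDecoder(nn.Module):
    def __init__(self, channels=22, n_classes=4, n_subjects=9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 40, kernel_size=25, padding=12),
            nn.ELU(), nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.task_head = nn.Linear(40, n_classes)      # e.g., motor imagery
        self.subject_head = nn.Linear(40, n_subjects)  # adversary

    def forward(self, x, lam=1.0):             # x: (batch, channels, samples)
        z = self.encoder(x)
        return self.task_head(z), self.subject_head(GradReverse.apply(z, lam))
```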

    Deep Learning in EEG: Advance of the Last Ten-Year Critical Period

    Deep learning has achieved excellent performance in a wide range of domains, especially speech recognition and computer vision. Relatively less work has been done for EEG, yet significant progress has still been attained in the last decade. Given the lack of a comprehensive survey with wide topical coverage for deep learning in EEG, we attempt to summarize recent progress to provide an overview, as well as perspectives for future developments. We first briefly discuss artifact removal for EEG signals and then introduce the deep learning models that have been utilized in EEG processing and classification. Subsequently, applications of deep learning in EEG are reviewed by categorizing them into groups such as brain-computer interfaces, disease detection, and emotion recognition. This is followed by a discussion in which the pros and cons of deep learning are presented, and future directions and challenges for deep learning in EEG are proposed. We hope that this paper can serve as a summary of past work on deep learning in EEG and as a starting point for further developments and achievements in EEG studies based on deep learning.
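
    For the artifact-removal step the survey opens with, a common minimal recipe is band-pass filtering plus amplitude-based epoch rejection, sketched here; the cutoffs and rejection threshold are conventional choices rather than the survey's prescription.

```python
# Minimal EEG cleaning sketch: zero-phase band-pass filtering and
# amplitude-based rejection of contaminated epochs.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, low=1.0, high=40.0, order=4):
    """eeg: (channels, samples). Zero-phase Butterworth band-pass."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def reject_artifacts(epochs, thresh_uv=100.0):
    """epochs: (n, channels, samples). Drop epochs with extreme amplitudes."""
    keep = np.abs(epochs).max(axis=(1, 2)) < thresh_uv
    return epochs[keep]
```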

    REPRESENTATION LEARNING FOR ACTION RECOGNITION

    The objective of this research work is to develop discriminative representations for human actions. The motivation stems from the many issues encountered while capturing actions in videos, such as intra-action variations (due to actors, viewpoints, and duration), inter-action similarity, background motion, and occlusion of actors. Hence, obtaining a representation that can address all the variations within an action while maintaining discrimination against other actions is a challenging task. In the literature, actions have been represented using either low-level or high-level features. Low-level features describe the motion and appearance in small spatio-temporal volumes extracted from a video; due to the limited space-time volume used to extract them, they cannot account for viewpoint and actor variations or variable-length actions. High-level features, on the other hand, handle variations in actors, viewpoints, and duration, but the resulting representation is often high-dimensional, which introduces the curse of dimensionality. In this thesis, we propose new representations for describing actions by combining the advantages of both low-level and high-level features. Specifically, we investigate various linear and non-linear decomposition techniques to extract meaningful attributes from both high-level and low-level features.

    In the first approach, the sparsity of high-level feature descriptors is leveraged to build action-specific dictionaries. Each dictionary retains only the discriminative information for a particular action and hence reduces inter-action similarity. A sparsity-based classification method is then proposed to classify the low-rank representation of clips obtained using these dictionaries. We show that this representation based on dictionary learning improves classification performance across actions. Some actions also involve rapid body deformations that hinder the extraction of local features from body movements. Hence, we propose to use a dictionary trained on convolutional neural network (CNN) features of the human body in various poses to reliably distinguish actors from the background. In particular, we demonstrate the efficacy of sparse representation in identifying the human body under rapid and substantial deformation.

    In the first two approaches, sparsity-based representations are developed to improve discriminability using class-specific dictionaries that utilize action labels. However, an unsupervised representation of actions is more beneficial, as it can be used both to recognize similar actions and to localize actions. We propose to exploit inter-action similarity to train a universal attribute model (UAM) that implicitly learns action attributes (common and distinct) across all actions. Using maximum a posteriori (MAP) adaptation, a high-dimensional super action-vector (SAV) is extracted for each clip. As this SAV contains redundant attributes of all other actions, we use factor analysis to extract a novel low-dimensional action-vector representation for each clip. Action-vectors are shown to suppress background motion and highlight actions of interest in both trimmed and untrimmed clips, which contributes to action recognition without the help of any classifier. We observed in our experiments, however, that action-vectors cannot effectively discriminate between actions that are visually similar to each other. Hence, we subject action-vectors to supervised linear embedding using linear discriminant analysis (LDA) and probabilistic LDA (PLDA) to enforce discrimination. In particular, we show that leveraging complementary information across action-vectors computed from different local features, followed by discriminative embedding, provides the best classification performance. Further, we explore non-linear embedding of action-vectors using Siamese networks, especially for fine-grained action recognition. A visualization of the hidden-layer output in Siamese networks shows their ability to effectively separate visually similar actions, leading to better classification performance than linear embedding on fine-grained action recognition.

    All of the above approaches are presented on large unconstrained datasets with hundreds of examples per action. However, actions in surveillance videos, such as snatch thefts, are difficult to model because of the diverse variety of scenarios in which they occur and the very few labeled examples. Hence, we propose to utilize the universal attribute model (UAM) trained on large action datasets to represent such actions. Specifically, we show that similarities between certain actions in the large datasets and snatch thefts help in extracting a representation for snatch thefts from the attributes of the UAM. This representation is shown to be effective in distinguishing snatch thefts from regular actions with high accuracy.

    In summary, this thesis proposes both supervised and unsupervised approaches for representing actions that provide better discrimination than existing representations. The first approach presents a dictionary-learning-based sparse representation for effective discrimination of actions, and we also propose a dictionary-based sparse representation of the human body in order to recognize actions with rapid body deformations. Next, a low-dimensional representation called the action-vector is presented for unsupervised action recognition, and linear and non-linear embeddings of action-vectors are proposed to address inter-action similarity and fine-grained action recognition, respectively. Finally, we propose a representation for locating snatch thefts among thousands of regular interactions in surveillance videos.
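
    To ground the dictionary idea, here is a hedged sketch of residual-based classification with per-action dictionaries; the thesis learns sparse dictionaries, whereas this stand-in uses an SVD basis and least squares to stay short, and all names are illustrative.

```python
# Hedged sketch: encode a clip's feature against each action's dictionary
# and assign the action whose dictionary reconstructs it best. A dense
# SVD/least-squares stand-in replaces the thesis's learned sparse coding.
import numpy as np

def build_dictionaries(features_by_action, atoms=50):
    """features_by_action: {action: (n_clips, dim)} -> {action: (dim, atoms)}."""
    dictionaries = {}
    for action, X in features_by_action.items():
        # SVD basis of the action's clips as a simple class dictionary
        U, _, _ = np.linalg.svd(X.T, full_matrices=False)
        dictionaries[action] = U[:, :atoms]
    return dictionaries

def classify(feature, dictionaries):
    """Assign the action with the smallest reconstruction residual."""
    residuals = {}
    for action, D in dictionaries.items():
        code, *_ = np.linalg.lstsq(D, feature, rcond=None)
        residuals[action] = np.linalg.norm(feature - D @ code)
    return min(residuals, key=residuals.get)
```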

    Biomedical Signal and Image Processing

    Written for senior-level and first-year graduate students in biomedical signal and image processing, this book describes the fundamental signal and image processing techniques used to process biomedical information. The book also discusses the application of these techniques to some of the main biomedical signals and images, such as EEG, ECG, MRI, and CT. New features of this edition include technical updates to each chapter along with the addition of many more examples, the majority of which are MATLAB-based.