
    Discriminative Tandem Features for HMM-based EEG Classification

    Abstract—We investigate the use of discriminative feature extractors in tandem configuration with a generative EEG classification system. Existing studies on dynamic EEG classification typically use hidden Markov models (HMMs), which lack discriminative capability. In this paper, a linear and a non-linear classifier are discriminatively trained to produce complementary input features for the conventional HMM system. Two sets of tandem features are derived from the linear discriminant analysis (LDA) projection output and the multilayer perceptron (MLP) class-posterior probability before being appended to the standard autoregressive (AR) features. Evaluation on a two-class motor-imagery classification task shows that both proposed tandem features yield consistent gains over the AR baseline, resulting in significant relative improvements of 6.2% and 11.2% for the LDA and MLP features, respectively. We also explore the portability of these features across different subjects. Index Terms—Artificial neural network-hidden Markov models, EEG classification, brain-computer interface (BCI)
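    A minimal sketch of the tandem-feature idea described above, assuming scikit-learn-style LDA and MLP models; the AR feature matrix, dimensions, and hyperparameters are placeholders rather than the values used in the paper:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier

    # X_ar: per-window autoregressive (AR) feature vectors, y: motor-imagery labels.
    # Shapes and data here are illustrative placeholders.
    X_ar = np.random.randn(200, 12)
    y = np.random.randint(0, 2, 200)

    lda = LinearDiscriminantAnalysis().fit(X_ar, y)
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_ar, y)

    # Tandem features: standard AR features with discriminative outputs appended;
    # these augmented vectors would then be modeled by the conventional HMM system.
    tandem = np.hstack([X_ar,
                        lda.transform(X_ar),        # LDA projection output
                        mlp.predict_proba(X_ar)])   # MLP class-posterior probabilities
    ```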

    Investigating the use of pretrained convolutional neural network on cross-subject and cross-dataset EEG emotion recognition

    The electroencephalogram (EEG) is highly attractive in emotion recognition studies due to its resistance to deceptive actions by humans. This is one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions for different people as well as for the same person at different time instances. This nonstationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase the subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band power features from the EEG readings, raw EEG data is used in our study after windowing, pre-adjustments, and normalization. Removing manual feature extraction from the training system avoids the risk of eliminating hidden features in the raw data and helps leverage the deep neural network’s power in uncovering unknown features. To improve the classification accuracy further, a median filter is used to eliminate false detections along a prediction interval of emotions. This method yields a mean cross-subject accuracy of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields a mean cross-subject accuracy of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested with the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. The results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and has limited complexity due to the elimination of the need for feature extraction.
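    A small illustrative sketch of the median-filter post-processing step mentioned above, assuming per-window class-label predictions; the filter length and label sequence are placeholders, not the paper's settings:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    # Per-window emotion predictions over a prediction interval (placeholder labels).
    preds = np.array([1, 1, 0, 1, 1, 2, 1, 1, 1, 0, 0, 0])

    # Median filtering over a sliding window replaces isolated outlier labels with
    # the local median, suppressing brief false detections in the prediction stream.
    smoothed = median_filter(preds, size=5, mode='nearest')
    print(smoothed)
    ```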

    Real-time Hybrid Locomotion Mode Recognition for Lower-limb Wearable Robots

    Real-time recognition of locomotion-related activities is a fundamental skill that the controller of lower-limb wearable robots should possess. Subject-specific training and reliance on electromyographic interfaces are the main limitations of existing approaches. This study presents a novel methodology for real-time locomotion mode recognition in lower-limb wearable robotics. A hybrid classifier distinguishes among seven locomotion-related activities. First, a time-based approach classifies between static and dynamic states based on gait kinematics data. Second, an event-based fuzzy logic method, triggered by foot pressure sensors, operates in a subject-independent fashion on a minimal set of relevant biomechanical features to classify among dynamic modes. The locomotion mode recognition algorithm is implemented on the controller of a portable powered orthosis for hip assistance. An experimental protocol is designed to evaluate the controller performance in an out-of-lab scenario without the need for subject-specific training. Experiments are conducted on six healthy volunteers performing locomotion-related activities at slow, normal, and fast speeds under the zero-torque and assistive modes of the orthosis. The overall accuracy of the controller is 99.4% over more than 10,000 steps, including seamless transitions between different modes. The experimental results show successful subject-independent performance of the controller for wearable robots assisting locomotion-related activities.
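    A highly simplified sketch of the two-stage recognition structure described above (time-based static/dynamic split, then an event-triggered decision on biomechanical features); the thresholds, feature names, and modes are invented placeholders and not the paper's actual fuzzy rules:

    ```python
    def classify_state(hip_angular_velocity, threshold=0.1):
        """Stage 1: time-based split between static and dynamic states
        from gait kinematics (placeholder threshold, rad/s)."""
        return "dynamic" if abs(hip_angular_velocity) > threshold else "static"

    def classify_dynamic_mode(stride_duration, hip_range_of_motion, foot_contact_event):
        """Stage 2: event-based decision, evaluated only when a foot-pressure
        event (e.g., heel strike) triggers it. Crisp rules stand in here for
        the paper's fuzzy logic; values are illustrative."""
        if not foot_contact_event:
            return None
        if hip_range_of_motion > 40.0:      # degrees, placeholder threshold
            return "stair_ascent"
        if stride_duration > 1.4:           # seconds, placeholder threshold
            return "slow_walking"
        return "level_walking"

    state = classify_state(hip_angular_velocity=0.35)
    if state == "dynamic":
        mode = classify_dynamic_mode(stride_duration=1.1,
                                     hip_range_of_motion=28.0,
                                     foot_contact_event=True)
    ```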

    Can appliances understand the behavior of elderly via machine learning? A feasibility study

    Development of an equipment to detect and quantify muscular spasticity

    Dissertation for obtaining the Master's degree in Biomedical Engineering.
    Spasticity consists of a muscular tonus alteration caused by a flawed central nervous system, which results in a hypertonic phenomenon. The presence of spasticity is normally noticeable through the appearance of a marked velocity-dependent “rigidity” during passive mobilization of an affected limb, which can constrain a subject's independence by negatively affecting the accomplishment of basic daily tasks. Spasticity treatment usually involves high-cost methods and materials, and there is a strict relation between the spasticity grade and the dose that has to be applied to attain the desired effect. These two facts justify the need for more precise equipment to detect and quantify muscular spasticity. At present, three main groups of spasticity quantification methods coexist: clinical scales, electrophysiological measurements, and biomechanical measurements. The most widely used are the clinical scales, especially the Modified Ashworth Scale, which quantify spasticity based on an operator's perception of the muscular response. In a different field of approach, many instruments have been built to quantify biomechanical magnitudes that have shown a direct relation with spasticity. Unfortunately, most of these instruments have either an inappropriate size for clinical use, weak inter- and intra-subject result correlation, or a noticeable dependence of the results on the operator. The objective of this project was to create a reliable method for spasticity detection and quantification that is easy and fast to apply, requires no specialized operator, is portable, and produces results with good repeatability and independence from the operator. The resulting prototype, named SpastiMed, is a motorized, electronically controlled device which, through analysis of the produced signal, demonstrated its capacity to detect and possibly quantify spasticity while gathering the important characteristics mentioned above.

    Physiological signal-based emotion recognition from wearable devices

    Interest in computers recognizing human emotions has been increasing recently. Many studies on recognizing emotions from physical signals, such as facial expressions or written text, have been done with good results. However, recognizing emotions from physiological signals such as heart rate, captured by wearable devices without physical signals, has been challenging, although some studies have given good, or at least promising, results. The challenge for emotion recognition is to understand how the human body actually reacts to different emotional triggers and to find common factors among people. The aim of this study is to find out whether it is possible to accurately recognize human emotions and stress from physiological signals using supervised machine learning. Further, we consider the question of which types of biosignals are most informative for making such predictions. The performance of Support Vector Machine and Random Forest classifiers is experimentally evaluated on the task of separating stress and no-stress signals from three different biosignals: ECG, PPG, and EDA. The challenges with these biosignals, from acquisition to pre-processing, are addressed, and their connection to emotional experience is discussed. In addition, the challenges and problems of the experimental setups used in previous studies are addressed, especially the usability problems of the dataset. The models implemented in this thesis were not able to accurately classify emotions from the dataset used with supervised machine learning; they did not perform remarkably better than randomly choosing labels. The PPG signal, however, performed slightly better than ECG or EDA for stress detection.
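    A minimal sketch of the evaluation setup described above, assuming scikit-learn classifiers; the feature matrix, labels, and hyperparameters are placeholders for the thesis' pre-processed ECG/PPG/EDA features:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Placeholder features per signal window (e.g., heart-rate and EDA statistics)
    # and binary stress / no-stress labels.
    X = np.random.randn(300, 20)
    y = np.random.randint(0, 2, 300)

    for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                      ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
        scores = cross_val_score(clf, X, y, cv=5)   # cross-validated stress detection accuracy
        print(f"{name}: mean accuracy {scores.mean():.3f}")
    ```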

    Intention Understanding in Human-Robot Interaction Based on Visual-NLP Semantics

    With the rapid development of robotics and AI technology in recent years, human-robot interaction has made great advances with practical social impact. Verbal commands are one of the most direct and frequently used means of human-robot interaction. Currently, such technology enables robots to execute pre-defined tasks based on simple, direct, and explicit language instructions, e.g., certain keywords must be used and detected. However, that is not the natural way for humans to communicate. In this paper, we propose a novel task-based framework that enables the robot to comprehend human intentions using visual semantic information, such that the robot can satisfy human intentions based on natural language instructions (three types in total, namely clear, vague, and feeling, are defined and tested). The proposed framework includes a language semantics module to extract the keywords regardless of how explicit the command instruction is, a visual object recognition module to identify the objects in front of the robot, and a similarity computation algorithm to infer the intention based on the given task. The task is then translated into commands for the robot accordingly. Experiments are performed and validated on a humanoid robot with a defined task: to pick the desired item out of multiple objects on the table and hand it over to the desired user out of multiple human participants. The results show that our algorithm can handle different types of instructions, even with unseen sentence structures.
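    A toy sketch of the similarity-computation step described above: keywords extracted from the instruction are matched against detected object labels and the best-matching object is selected. The embedding table, words, and scores are illustrative stand-ins for whatever word-vector model the full framework uses:

    ```python
    import numpy as np

    # Toy word vectors (placeholders for a real embedding model).
    toy_embeddings = {
        "thirsty": np.array([0.9, 0.1, 0.0]),
        "bottle":  np.array([0.7, 0.3, 0.1]),
        "book":    np.array([0.1, 0.9, 0.2]),
        "cup":     np.array([0.6, 0.4, 0.2]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def infer_target(keywords, detected_objects):
        """Return the detected object most similar to any instruction keyword."""
        scores = {obj: max(cosine(toy_embeddings[k], toy_embeddings[obj]) for k in keywords)
                  for obj in detected_objects}
        return max(scores, key=scores.get)

    # A "feeling"-type instruction such as "I am thirsty" yields the keyword
    # "thirsty"; the robot then picks the closest matching object it can see.
    print(infer_target(["thirsty"], ["bottle", "book", "cup"]))
    ```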