7,398 research outputs found

    Personalized Automatic Estimation of Self-reported Pain Intensity from Facial Expressions

    Full text link
    Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce pain score acquisition efforts in large-scale studies by estimating it directly from the participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into personalized Hidden Conditional Random Fields (HCRFs), used to estimate the VAS provided by each person. Personalization of the model is performed using a newly introduced facial expressiveness score, unique to each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We show the benefits of the proposed personalized approach over a traditional non-personalized approach on a benchmark dataset for pain analysis from face images. Comment: Computer Vision and Pattern Recognition Conference, The 1st International Workshop on Deep Affective Learning and Context Modeling
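
The two-stage pipeline described above can be sketched in miniature: a recurrent pass turns per-frame facial features into frame-level PSPI estimates, and a second, person-specific stage maps the sequence to a single VAS score. The following is a minimal numpy sketch with random, untrained weights; the personalized stage is a toy rescaling by the expressiveness score, standing in for the paper's HCRF, and all dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pspi(frames, Wx, Wh, wo):
    """Minimal Elman-style RNN: one PSPI estimate per video frame."""
    h = np.zeros(Wh.shape[0])
    pspi = []
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h)   # recurrent hidden state
        pspi.append(float(wo @ h))     # scalar PSPI readout
    return np.array(pspi)

def personalized_vas(pspi_seq, expressiveness):
    """Toy stand-in for the personalized HCRF stage: rescale the
    sequence-level pain summary by a per-person expressiveness score."""
    return float(np.clip(pspi_seq.mean() / max(expressiveness, 1e-6), 0.0, 10.0))

# Hypothetical dimensions: 30 frames of 16-d facial features, 8 hidden units.
frames = rng.normal(size=(30, 16))
Wx, Wh, wo = rng.normal(size=(8, 16)), rng.normal(size=(8, 8)), rng.normal(size=8)
pspi = rnn_pspi(frames, Wx, Wh, wo)
vas = personalized_vas(pspi, expressiveness=0.8)  # VAS on a 0-10 scale
```

A less expressive person (lower score) has their average PSPI scaled up, which is the intuition behind personalization: the same facial activity maps to different self-reported pain depending on how expressive the person is.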

    A latent discriminative model-based approach for classification of imaginary motor tasks from EEG data

    Get PDF
    We consider the problem of classification of imaginary motor tasks from electroencephalography (EEG) data for brain-computer interfaces (BCIs) and propose a new approach based on hidden conditional random fields (HCRFs). HCRFs are discriminative graphical models that are attractive for this problem because they (1) exploit the temporal structure of EEG; (2) include latent variables that can be used to model different brain states in the signal; and (3) involve learned statistical models matched to the classification task, avoiding some of the limitations of generative models. Our approach involves spatial filtering of the EEG signals and estimation of power spectra based on auto-regressive modeling of temporal segments of the EEG signals. Given this time-frequency representation, we select certain frequency bands that are known to be associated with execution of motor tasks. These selected features constitute the data that are fed to the HCRF, whose parameters are learned from training data. Inference algorithms on the HCRFs are used for classification of motor tasks. We experimentally compare this approach to the best performing methods in BCI competition IV as well as a number of more recent methods, and observe that our proposed method yields better classification accuracy.
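
The feature-extraction step shared by this and the related HCRF paper below — auto-regressive modeling of an EEG segment followed by computation of its power spectrum — can be sketched as follows. This is a minimal numpy implementation of Yule-Walker AR fitting on a single synthetic segment; the model order, sampling rate, and segment length are illustrative assumptions, not the papers' settings.

```python
import numpy as np

def ar_power_spectrum(x, order=6, n_freqs=64):
    """Fit AR coefficients by Yule-Walker, then evaluate the AR model's
    power spectrum - the time-frequency feature fed to the HCRF."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]            # innovation variance
    freqs = np.linspace(0, 0.5, n_freqs)          # normalized frequency
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    psd = sigma2 / np.abs(1 - z @ a) ** 2         # AR spectral density
    return freqs, psd

rng = np.random.default_rng(1)
# Synthetic 10 Hz "mu-rhythm-like" segment, 5 s at 100 Hz sampling.
t = np.arange(500) / 100.0
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
freqs, psd = ar_power_spectrum(x)
peak_hz = freqs[np.argmax(psd)] * 100.0           # back to Hz
```

Band selection would then keep only the spectral values in task-relevant bands (e.g. mu and beta) as the per-segment feature vector.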

    Biosignal Generation and Latent Variable Analysis with Recurrent Generative Adversarial Networks

    Full text link
    The effectiveness of biosignal generation and data augmentation with biosignal generative models based on generative adversarial networks (GANs), which are a type of deep learning technique, was demonstrated in our previous paper. GAN-based generative models learn only a mapping from a random input distribution to the distribution of the training data. Therefore, the relationship between input and generated data is unclear, and the characteristics of the data generated from this model cannot be controlled. This study proposes a method for generating time-series data based on GANs and explores its ability to generate biosignals with certain classes and characteristics. Moreover, in the proposed method, latent variables are analyzed using canonical correlation analysis (CCA) to represent the relationship between input and generated data as canonical loadings. Using these loadings, we can control the characteristics of the data generated by the proposed method. The influence of class labels on generated data is analyzed by feeding the data interpolated between two class labels into the generator of the proposed GANs. The CCA of the latent variables is shown to be an effective method of controlling the generated data characteristics. We are able to model the distribution of the time-series data without requiring domain-dependent knowledge using the proposed method. Furthermore, it is possible to control the characteristics of these data by analyzing the model trained using the proposed method. To the best of our knowledge, this work is the first to generate biosignals using GANs while controlling the characteristics of the generated data.
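
The CCA step — relating the GAN's latent inputs to features of the generated signals via canonical loadings — can be illustrated with a small numpy-only sketch. Here a toy "generator" that ties output amplitude to one latent dimension stands in for the trained GAN; the feature choice and all dimensions are assumptions for illustration.

```python
import numpy as np

def cca_loadings(Z, F, k=2):
    """Canonical correlation between latent inputs Z and features F of the
    generated signals; returns the canonical correlations and the loadings
    on Z that indicate which latent directions steer which characteristics."""
    Zc, Fc = Z - Z.mean(0), F - F.mean(0)
    Szz, Sff = Zc.T @ Zc / len(Z), Fc.T @ Fc / len(F)
    Szf = Zc.T @ Fc / len(Z)
    # Whiten each block (small ridge for numerical safety), then SVD
    # of the whitened cross-covariance gives the canonical structure.
    Wz = np.linalg.inv(np.linalg.cholesky(Szz + 1e-6 * np.eye(Szz.shape[0])))
    Wf = np.linalg.inv(np.linalg.cholesky(Sff + 1e-6 * np.eye(Sff.shape[0])))
    U, s, Vt = np.linalg.svd(Wz @ Szf @ Wf.T)
    return s[:k], (Wz.T @ U)[:, :k]

rng = np.random.default_rng(2)
Z = rng.normal(size=(200, 4))                     # latent inputs to the "generator"
amp = 1.0 + 0.9 * Z[:, 0]                         # toy generator: amplitude follows Z[:, 0]
F = np.column_stack([amp + 0.1 * rng.normal(size=200),
                     rng.normal(size=200)])       # features of generated signals
corrs, loadings = cca_loadings(Z, F)
```

A high first canonical correlation with loadings concentrated on `Z[:, 0]` identifies that latent direction as the "amplitude knob" — moving along it controls the corresponding characteristic of the generated data.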

    Automated sleep stage classification in sleep apnoea using convolutional neural networks

    Get PDF
    A sleep disorder is a condition that adversely impacts one's ability to sleep well on a regular schedule. It also occurs as a consequence of numerous neurological sicknesses. These types of disorders can be investigated using laboratory-based polysomnography (PSG) signals. Automated monitoring of sleep stages makes the detection of such neurological disorders accurate and efficient. This work presents a flexible deep learning model and machine learning approach utilizing raw electroencephalogram (EEG) signals. The deep learning model is a Deep Convolutional Neural Network (CNN) that analyzes time-invariant features and frequency information, and captures short- and long-term context dependencies between epochs for sleep stage classification. The method uses a novel function to account for data loss and misclassification errors while training the network for sleep staging, considering the restrictions found in the publicly available sleep datasets. It is used in conjunction with machine learning techniques to select the best-performing approach for the process. Its effectiveness is evaluated using two open-source, public databases available from PhysioNet: two recordings with 5,402 epochs in total. The technique used in this approach achieves an accuracy of 90.70%, precision of 90.50%, recall of 92.70%, and F-measure of 90.60%. The proposed method outperforms existing models like AlexNet, ResNet, VGGNet, and LeNet. The comparative study suggests the models could be adopted for clinical use and modified based on the requirements.
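
The core architecture — a 1-D CNN that maps a raw EEG epoch to scores over sleep stages — can be sketched as a tiny forward pass. This is a minimal numpy illustration with random, untrained weights; the filter sizes, epoch length, and sampling rate are assumptions, not the paper's configuration.

```python
import numpy as np

def conv1d(x, kernels, stride=4):
    """Valid-mode strided 1-D convolution: one output row per filter."""
    k = kernels.shape[1]
    starts = range(0, x.size - k + 1, stride)
    return np.array([[kern @ x[s:s + k] for s in starts] for kern in kernels])

def sleep_stage_scores(epoch, kernels, W):
    """Toy forward pass: conv -> ReLU -> global average pool -> linear
    scores over the five sleep stages."""
    h = np.maximum(conv1d(epoch, kernels), 0.0)   # ReLU feature maps
    pooled = h.mean(axis=1)                       # global average pooling
    return W @ pooled                             # one score per stage

rng = np.random.default_rng(3)
epoch = rng.normal(size=3000)                     # one 30 s epoch at 100 Hz
kernels = rng.normal(size=(8, 50))                # 8 filters, 0.5 s wide
W = rng.normal(size=(5, 8))                       # Wake, N1, N2, N3, REM
scores = sleep_stage_scores(epoch, kernels, W)
predicted = int(np.argmax(scores))                # index of the argmax stage
```

A trained model would stack several such conv/pool layers and learn the weights from labeled epochs; the argmax over stage scores gives the predicted sleep stage for the epoch.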

    Hidden conditional random fields for classification of imaginary motor tasks from EEG data

    Get PDF
    Brain-computer interfaces (BCIs) are systems that allow the control of external devices using information extracted from brain signals. Such systems find application in rehabilitation of patients with limited or no muscular control. One mechanism used in BCIs is the imagination of motor activity, which produces variations on the power of the electroencephalography (EEG) signals recorded over the motor cortex. In this paper, we propose a new approach for classification of imaginary motor tasks based on hidden conditional random fields (HCRFs). HCRFs are discriminative graphical models that are attractive for this problem because they involve learned statistical models matched to the classification problem; they do not suffer from some of the limitations of generative models; and they include latent variables that can be used to model different brain states in the signal. Our approach involves auto-regressive modeling of the EEG signals, followed by the computation of the power spectrum. Frequency band selection is performed on the resulting time-frequency representation through feature selection methods. These selected features constitute the data that are fed to the HCRF, whose parameters are learned from training data. Inference algorithms on the HCRFs are used for classification of motor tasks. We experimentally compare this approach to the best performing methods in BCI competition IV, and the results show that our approach outperforms all methods proposed in the competition. In addition, we present a comparison with an HMM-based method, and observe that the proposed method produces better classification accuracy.

    SleepXAI: An explainable deep learning approach for multi-class sleep stage identification

    Get PDF
    Extensive research has been conducted on the automatic classification of sleep stages utilizing deep neural networks and other neurophysiological markers. However, for sleep specialists to employ models as an assistive solution, it is necessary to comprehend how the models arrive at a particular outcome, necessitating the explainability of these models. This work proposes an explainable unified CNN-CRF approach (SleepXAI) for multi-class sleep stage classification designed explicitly for univariate time-series signals using modified gradient-weighted class activation mapping (Grad-CAM). The proposed approach significantly increases the overall accuracy of sleep stage classification while demonstrating the explainability of the multi-class labeling of univariate EEG signals, highlighting the parts of the signals emphasized most in predicting sleep stages. We extensively evaluated our approach on the Sleep-EDF dataset, where it demonstrates the highest overall accuracy of 86.8% in identifying five sleep stage classes. More importantly, we achieved the highest accuracy when classifying the crucial sleep stage N1 with the lowest number of instances, outperforming the state-of-the-art machine learning approaches by 16.3%. These results motivate us to adopt the proposed approach in clinical practice as an aid to sleep experts.
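
The Grad-CAM idea — weighting each convolutional feature map by the gradient of the class score with respect to it, then taking the ReLU of the weighted sum as a heat map over the signal — simplifies nicely for a conv → ReLU → global-average-pool → linear model, where that gradient is constant per channel. This is a minimal 1-D numpy sketch with random stand-in feature maps, not the paper's modified Grad-CAM.

```python
import numpy as np

def grad_cam_1d(maps, W, cls):
    """1-D Grad-CAM for a conv -> ReLU -> GAP -> linear head: the gradient
    of the class score w.r.t. feature map i is constant (W[cls, i] / T
    through the average pool), so the channel weights reduce to the linear
    weights; the heat map is the ReLU of the weighted map sum, normalized."""
    alpha = W[cls] / maps.shape[1]                # mean gradients per channel
    cam = np.maximum(alpha @ maps, 0.0)           # ReLU of weighted sum
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(4)
maps = np.maximum(rng.normal(size=(8, 738)), 0)   # post-ReLU feature maps (toy)
W = rng.normal(size=(5, 8))                       # linear head over 5 stages
cam = grad_cam_1d(maps, W, cls=1)                 # explanation for stage N1
```

Upsampling `cam` back to the epoch's time axis highlights which stretches of the raw EEG drove the stage prediction — the kind of per-sample evidence a sleep specialist can inspect.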