    Personalized Automatic Estimation of Self-reported Pain Intensity from Facial Expressions

    Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce pain score acquisition efforts in large-scale studies by estimating it directly from the participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into personalized Hidden Conditional Random Fields (HCRFs), which estimate the VAS reported by each person. Personalization of the model is performed using a newly introduced facial expressiveness score, unique to each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We show the benefits of the proposed personalized approach over a traditional non-personalized approach on a benchmark dataset for pain analysis from face images. Comment: Computer Vision and Pattern Recognition Conference, The 1st International Workshop on Deep Affective Learning and Context Modeling
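The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the RNN is replaced by a placeholder per-frame scorer, the personalized HCRF by a simple expressiveness-weighted mapping, and all names (`estimate_pspi`, `expressiveness`, `estimate_vas`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pspi(frames):
    """Stage 1 stand-in: map each face frame to a PSPI score in [0, 16].
    The paper uses an RNN over face images; here a toy intensity proxy."""
    return np.clip([f.mean() * 16 for f in frames], 0, 16)

def expressiveness(pspi_seq):
    """Per-person facial expressiveness score: here, simply the spread of
    that person's PSPI scores (a stand-in for the paper's measure)."""
    return float(np.std(pspi_seq))

def estimate_vas(pspi_seq, expr):
    """Stage 2 stand-in: map a PSPI sequence to a 0-10 VAS value,
    weighting the peak by expressiveness (the paper uses personalized
    HCRFs for this step)."""
    return float(np.clip(np.max(pspi_seq) * (10 / 16) * (0.5 + expr), 0, 10))

# toy "video": 30 random grayscale frames for one person
frames = [rng.random((48, 48)) for _ in range(30)]
pspi = estimate_pspi(frames)
vas = estimate_vas(pspi, expressiveness(pspi))
```

The point is the wiring: frame-level PSPI estimates are produced first, then collapsed into a single self-report-scale (VAS) prediction using a per-person statistic.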

    Estimating Carotid Pulse and Breathing Rate from Near-infrared Video of the Neck

    Objective: Non-contact physiological measurement is a growing research area that allows capturing vital signs such as heart rate (HR) and breathing rate (BR) comfortably and unobtrusively with remote devices. However, most of the approaches work only in bright environments in which subtle photoplethysmographic and ballistocardiographic signals can be easily analyzed, and/or require expensive and custom hardware to perform the measurements. Approach: This work introduces a low-cost method to measure subtle motions associated with the carotid pulse and breathing movement from the neck using near-infrared (NIR) video imaging. A skin reflection model of the neck was established to provide a theoretical foundation for the method. In particular, the method relies on template matching for neck detection, Principal Component Analysis for feature extraction, and Hidden Markov Models for data smoothing. Main Results: We compared the estimated HR and BR measures with those provided by an FDA-cleared device in a 12-participant laboratory study: the estimates achieved a mean absolute error of 0.36 beats per minute and 0.24 breaths per minute under both bright and dark lighting. Significance: This work advances the possibilities of non-contact physiological measurement in real-life conditions in which environmental illumination is limited and in which the face of the person is not readily available or needs to be protected. Due to the increasing availability of NIR imaging devices, the described methods are readily scalable. Comment: 21 pages, 15 figures
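The PCA feature-extraction step of this pipeline can be sketched on synthetic data. This is a toy assumption-laden sketch: the neck-motion traces are simulated sinusoids, the HMM smoothing step is replaced by simple band-limited peak picking, and the 30 Hz frame rate is assumed, not taken from the paper.

```python
import numpy as np

fs = 30.0                      # assumed camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)   # 20 s of "video"
rng = np.random.default_rng(1)

# toy neck-ROI motion traces: carotid pulse (~1.2 Hz) + breathing (~0.25 Hz) + noise
traces = np.stack([
    np.sin(2 * np.pi * 1.2 * t)
    + 0.5 * np.sin(2 * np.pi * 0.25 * t)
    + 0.1 * rng.standard_normal(t.size)
    for _ in range(8)
])

# PCA for feature extraction: leading right-singular vector of the zero-mean traces
x = traces - traces.mean(axis=1, keepdims=True)
_, _, vt = np.linalg.svd(x.T, full_matrices=False)
component = x.T @ vt[0]        # dominant shared motion signal over time

# pick the strongest spectral peak in a plausible heart-rate band (0.7-3 Hz)
spectrum = np.abs(np.fft.rfft(component))
freqs = np.fft.rfftfreq(component.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
```

With the simulated 1.2 Hz pulse, the recovered rate lands near 72 beats per minute; the same band-pass idea with a 0.1-0.5 Hz band would target breathing rate.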

    Emotion research by the people, for the people

    Emotion research will leap forward when its focus changes from comparing averaged statistics of self-report data across people experiencing emotion in laboratories to characterizing patterns of data from individuals and clusters of similar individuals experiencing emotion in real life. Such an advance will come about through engineers and psychologists collaborating to create new ways for people to measure, share, analyze, and learn from objective emotional responses in situations that truly matter to people. This approach has the power to greatly advance the science of emotion while also providing personalized help to participants in the research.

    Understanding Ambulatory and Wearable Data for Health and Wellness

    In our research, we aim (1) to recognize human internal states and behaviors (e.g., stress level, mood, and sleep behaviors), (2) to reveal which features in which data can work as predictors, and (3) to use them for intervention. We collect multi-modal (physiological, behavioral, environmental, and social) ambulatory data using wearable sensors and mobile phones, combined with standardized questionnaires and data measured in the laboratory. In this paper, we introduce our approach and some of our projects.

    Recognition of Sleep Dependent Memory Consolidation with Multi-modal Sensor Data

    This paper presents the possibility of recognizing sleep-dependent memory consolidation using multi-modal sensor data. We collected visual discrimination task (VDT) performance before and after sleep in laboratory, hospital, and home settings for N=24 participants while recording EEG (electroencephalogram), EDA (electrodermal activity), and ACC (accelerometer) or actigraphy data during sleep. We extracted features from the sleep data and applied machine learning techniques (discriminant analysis, support vector machines, and k-nearest neighbors) to classify whether the participants showed improvement in the memory task. Our results showed 60–70% accuracy in a binary classification of task performance using EDA or EDA+ACC features, which provided an improvement over the more traditional use of sleep stages (the percentages of slow wave sleep (SWS) in the 1st quarter and rapid eye movement (REM) in the 4th quarter of the night) to predict VDT improvement.
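The binary classification setup described above can be sketched with one of the classifiers the paper compares, k-nearest neighbors, evaluated leave-one-out. Everything here is a toy stand-in: the two EDA-derived features and their class means are invented for illustration and do not come from the study's data.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# hypothetical per-night EDA features: (peak count, mean amplitude) per participant
improved     = rng.normal([8.0, 0.6], 0.8, size=(12, 2))  # "improved on VDT"
not_improved = rng.normal([4.0, 0.3], 0.8, size=(12, 2))  # "no improvement"
X = np.vstack([improved, not_improved])
y = np.array([1] * 12 + [0] * 12)

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return Counter(nearest).most_common(1)[0][0]

# leave-one-out accuracy: hold each participant out, train on the rest
hits = sum(
    knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
    for i in range(len(y))
)
acc = hits / len(y)
```

Leave-one-out is a natural choice at N=24, since every participant contributes to both training and evaluation without leaking their own night into the training fold.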