
    Home-based physical therapy with an interactive computer vision system

    In this paper, we present ExerciseCheck, an interactive computer vision system that is sufficiently modular to work with different sources of human pose estimates, i.e., estimates from deep or traditional models that interpret RGB or RGB-D camera input. In a pilot study, we first compare the pose estimates produced by four deep models based on RGB input with those of the MS Kinect based on RGB-D data. The results indicate a performance gap that required us to choose the MS Kinect when we tested ExerciseCheck with Parkinson’s disease patients in their homes. ExerciseCheck is capable of customizing exercises, capturing exercise information, evaluating patient performance, providing therapeutic feedback to the patient and the therapist, checking the progress of the user over the course of the physical therapy, and supporting the patient throughout this period. We conclude that ExerciseCheck is a user-friendly computer vision application that can assist patients by providing motivation and guidance to ensure correct execution of the required exercises. Our results also suggest that while there has been considerable progress in the field of pose estimation using deep learning, current deep learning models are not fully ready to replace RGB-D sensors, especially when the exercises involved are complex and the patient population must be carefully tracked for its “active range of motion.”
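    The “active range of motion” mentioned above is typically derived from joint angles computed over estimated keypoint positions. Below is a minimal sketch of that computation in Python, assuming 3D joint positions such as those the MS Kinect provides; the function names and example coordinates are illustrative, not ExerciseCheck’s actual implementation.

        import numpy as np

        def joint_angle(a, b, c):
            """Angle at joint b (degrees), given 3D positions of joints a, b, c."""
            ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
            bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
            cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
            return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

        def active_range_of_motion(angle_series):
            """Range of motion over a repetition: max minus min joint angle."""
            return float(np.max(angle_series) - np.min(angle_series))

        # Example: elbow angle from (shoulder, elbow, wrist) positions per frame.
        frames = [
            ((0.0, 1.5, 0.0), (0.0, 1.2, 0.1), (0.0, 1.0, 0.4)),   # arm bent
            ((0.0, 1.5, 0.0), (0.0, 1.2, 0.1), (0.0, 0.9, 0.15)),  # arm extended
        ]
        angles = [joint_angle(s, e, w) for s, e, w in frames]
        print(f"Active range of motion: {active_range_of_motion(angles):.1f} degrees")

    Tracking the per-repetition angle range this way is one plausible means of checking whether a patient reaches a prescribed motion range, which is where noisy RGB-only pose estimates would hurt most.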

    Emotion Recognition from Facial Expressions using Images with Pose, Illumination and Age Variation for Human-Computer/Robot Interaction

    A technique for real-time emotion recognition from facial expressions in images with simultaneous pose, illumination, and age variation is proposed in this paper. The basic emotions considered are anger, disgust, happiness, surprise, and neutral. Feature vectors formed from CMU-MultiPIE database images with pose and illumination variation were used to train the classifier. For real-time implementation, a Raspberry Pi II was used, which can be placed on a robot to recognize emotions in interactive real-time applications. The proposed method comprises face detection using the Viola-Jones Haar cascade, feature extraction with an Active Shape Model (ASM), and classification with AdaBoost, all in real time. Performance of the proposed method was validated in real time by testing with subjects from different age groups expressing basic emotions under varying pose and illumination. A recognition accuracy of 96% at an average processing time of 120 ms was obtained. The results are encouraging, as the proposed method achieves better accuracy at higher speed than existing methods from the literature. The major contributions and strengths of the proposed method lie in its marking of suitable feature points on the face, its speed, and its invariance to pose, illumination, and age in real time.
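    As a rough illustration of the described pipeline (Viola-Jones face detection followed by a boosted classifier), here is a minimal sketch using OpenCV and scikit-learn. The ASM landmark extraction from the paper is replaced by a placeholder pixel-intensity feature, and the training data is synthetic rather than CMU-MultiPIE, so this is a structural sketch only.

        import cv2
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        EMOTIONS = ["anger", "disgust", "happiness", "surprise", "neutral"]

        # Viola-Jones face detector shipped with OpenCV.
        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def extract_features(face_roi, size=(24, 24)):
            """Placeholder for ASM landmark features: downscaled pixel
            intensities. The paper uses Active Shape Model feature points."""
            return cv2.resize(face_roi, size).flatten() / 255.0

        # Train AdaBoost on stand-in labeled feature vectors (synthetic here;
        # the paper trains on features from CMU-MultiPIE images).
        rng = np.random.default_rng(0)
        X_train = rng.random((100, 24 * 24))
        y_train = rng.integers(0, len(EMOTIONS), 100)
        clf = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)

        def recognize(frame_gray):
            """Detect faces in a grayscale frame and predict an emotion each."""
            faces = face_cascade.detectMultiScale(
                frame_gray, scaleFactor=1.1, minNeighbors=5)
            results = []
            for (x, y, w, h) in faces:
                feats = extract_features(frame_gray[y:y + h, x:x + w])
                results.append(EMOTIONS[clf.predict([feats])[0]])
            return results

    On a device like the Raspberry Pi, the cascade detection step typically dominates the latency budget, which makes the reported 120 ms average per frame a plausible end-to-end figure for a pipeline of this shape.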