    An integrated platform for hip joint osteoarthritis analysis: design, implementation and results

    Purpose: We present a software platform designed to improve the understanding of hip joint osteoarthritis (OA) using 3D anatomical models, magnetic resonance imaging (MRI) and motion capture. Methods: In addition to a standard static clinical evaluation (anamnesis, medical image examination), the software provides a dynamic assessment of the hip joint. The operator can compute the hip joint kinematics automatically and in real time from optical motion capture data. From the estimated motion, the software allows for the calculation of the active range of motion, the congruency and the center of rotation of the hip joint, and the detection and localization of the femoroacetabular impingement (FAI) region. None of these measurements can be obtained through standard clinical examination. Moreover, to improve on the subjective reading of medical images, the software provides a set of 3D measurement tools based on MRI and 3D anatomical models to assist and improve the analysis of hip morphological abnormalities. Finally, the software is driven by a medical ontology that supports data storage, processing and analysis. Results: We performed an in vivo assessment of the software in a clinical study conducted with 30 professional ballet dancers, a population at high risk of developing OA, and studied the causes of OA in this selected population. Our results show that extreme motion exposes the morphologically "normal" dancer's hip to recurrent superior or posterosuperior FAI and to joint subluxation. Conclusion: Our new hip software includes all the required materials and knowledge (image data, 3D models, motion, morphological measurements, etc.) to improve orthopedists' performance in hip joint OA analysis.
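
    The functional hip joint center mentioned above is commonly estimated by fitting a sphere to the trajectory of a thigh marker expressed in the pelvis coordinate frame. The following is a minimal sketch of such a least-squares sphere fit; the file name, array shape, and coordinate convention are illustrative assumptions, not details taken from the paper.

        # Sketch: estimating a functional hip joint center from optical motion
        # capture, assuming a thigh marker trajectory already expressed in the
        # pelvis frame. File name and shapes are hypothetical.
        import numpy as np

        def fit_sphere_center(points):
            """Least-squares sphere fit: returns the center about which the
            given 3D points rotate (the functional joint center)."""
            A = np.hstack([2.0 * points, np.ones((len(points), 1))])
            b = (points ** 2).sum(axis=1)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x[:3]  # center (x, y, z); x[3] encodes the radius term

        # (n_frames, 3) trajectory of one thigh marker in the pelvis frame.
        thigh_in_pelvis = np.load("thigh_marker_pelvis_frame.npy")  # hypothetical
        hip_center = fit_sphere_center(thigh_in_pelvis)
        print("Estimated hip joint center (pelvis frame):", hip_center)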

    A computational study of expressive facial dynamics in children with autism

    Several studies have established that facial expressions of children with autism are often perceived as atypical, awkward or less engaging by typical adult observers. Despite this clear deficit in the quality of facial expression production, very little is understood about its underlying mechanisms and characteristics. This paper takes a computational approach to studying details of the facial expressions of children with high functioning autism (HFA). The objective is to uncover characteristics of facial expressions that are notably distinct from those of typically developing children and that are otherwise difficult to detect by visual inspection. We use motion capture data obtained from subjects with HFA and typically developing subjects while they produced various facial expressions. These data are analyzed to investigate how the overall and local facial dynamics of children with HFA differ from those of their typically developing peers. Our major observations include reduced complexity in the dynamic facial behavior of the HFA group, arising primarily from the eye region.
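
    One simple way to quantify the "complexity" of facial motion, in the spirit of the analysis above, is to count how many principal components are needed to explain most of the variance in the marker trajectories. The sketch below assumes a frames-by-coordinates marker matrix and a 95% variance threshold; both are illustrative choices rather than the paper's actual measure.

        # Sketch: PCA-based complexity of facial motion capture data.
        # The input file and the 95% threshold are hypothetical choices.
        import numpy as np
        from sklearn.decomposition import PCA

        markers = np.load("face_markers.npy")     # hypothetical (n_frames, n_markers * 3)
        markers = markers - markers.mean(axis=0)  # remove the mean (neutral) pose

        pca = PCA().fit(markers)
        cumulative = np.cumsum(pca.explained_variance_ratio_)
        n_components = int(np.searchsorted(cumulative, 0.95) + 1)
        print("Components needed for 95% of the motion variance:", n_components)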

    Learning Inverse Rig Mappings by Nonlinear Regression

    From motions to emotions: Classification of Affect from Dance Movements using Deep Learning

    This work investigates the classification of emotions from full-body MoCap data using Convolutional Neural Networks (CNN). Rather than addressing regular day-to-day activities, we focus on a more complex type of full-body movement: dance. For this purpose, a new dataset was created which contains short excerpts of performances by professional dancers who interpreted four emotional states: anger, happiness, sadness, and insecurity. Fourteen minutes of motion capture data are used to explore different CNN architectures and data representations. On the four-class classification task, the models reach an F1 score of up to 0.79 on test data from other performances by the same dancers. Hence, through deep learning, this paper proposes a novel and effective method of emotion classification which can be exploited in affective interfaces.
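
    As a rough illustration of how a CNN can classify short windows of motion capture data into the four emotions, here is a minimal 1D convolutional model in PyTorch; the joint count, window length, and layer sizes are assumptions and not the architecture used in the paper.

        # Sketch: a small 1D CNN over mocap windows for four-class emotion
        # classification. Channel count (21 joints * xyz), window length, and
        # layer sizes are illustrative assumptions.
        import torch
        import torch.nn as nn

        class MocapEmotionCNN(nn.Module):
            def __init__(self, n_channels=63, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(128, n_classes)

            def forward(self, x):  # x: (batch, channels, frames)
                return self.classifier(self.features(x).squeeze(-1))

        model = MocapEmotionCNN()
        dummy = torch.randn(8, 63, 120)  # 8 windows of 120 frames each
        print(model(dummy).shape)        # torch.Size([8, 4])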

    A Data-Driven Appearance Model for Human Fatigue

    Humans become visibly tired during physical activity. After a set of squats, jumping jacks or walking up a flight of stairs, individuals start to pant, sweat, lose their balance, and flush. Simulating these physiological changes due to exertion and exhaustion on an animated character greatly enhances a motion’s realism. These fatigue factors depend on the mechanical, physical, and biochemical functional states of the human body. The difficulty of simulating fatigue for character animation is due in part to the complex anatomy of the human body. We present a multi-modal capture technique for acquiring synchronized biosignal data and motion capture data to enhance character animation. The fatigue model utilizes an anatomically derived model of the human body that includes a torso, organs, face, and rigged body, and is driven by biosignal output. Our animations show the wide range of exhaustion behaviors synthesized from real biological data. We demonstrate the fatigue model by augmenting standard motion capture with exhaustion effects to produce more realistic appearance changes during three exercise examples, and we compare the fatigue model with both simple procedural methods and a dense-marker-set data capture of exercise motions.
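
    As a rough sketch of how synchronized biosignals could drive appearance changes on a character, the snippet below maps heart rate and respiration onto normalized "flush" and "sweat" weights. The signal names, ranges, and linear mapping are illustrative assumptions, not the paper's anatomically derived model.

        # Sketch: mapping biosignals onto appearance parameters.
        # Files, signal ranges, and the mapping itself are hypothetical.
        import numpy as np

        def normalize(signal, rest, max_exertion):
            """Map a biosignal onto [0, 1] given rest and exertion bounds."""
            return np.clip((signal - rest) / (max_exertion - rest), 0.0, 1.0)

        heart_rate = np.load("heart_rate.npy")    # hypothetical, beats per minute
        respiration = np.load("respiration.npy")  # hypothetical, breaths per minute

        flush_weight = normalize(heart_rate, 60, 180)       # drives skin redness
        sweat_weight = normalize(respiration, 12, 45) ** 2  # drives a sweat/specular map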

    Measuring Affect for the Study and Enhancement of Co-Present Creative Collaboration


    Facial Capture Lip-Sync

    Facial model lip-sync is a large field of research within the animation industry. The mouth is a complex facial feature to animate, so multiple techniques have arisen to simplify this process. These techniques, however, can lead to unappealing, flat animations that lack full facial expression, or to eerie, over-expressive animations that make the viewer uneasy. This thesis proposes an animation system that produces natural speech movements while conveying facial expression, and compares it to previous techniques. The system uses a text input of the dialogue to generate a phoneme-to-blend-shape map that drives the facial model. An actor was motion captured to record the audio, provide speech motion data, and directly control the facial expression in the regions of the face other than the mouth. The actor's speech motion and the phoneme-to-blend-shape map worked in conjunction to create a final lip-synced animation, which viewers compared to phonetically driven animation and to animation created with motion capture alone. In this comparison, the proposed system's animation was the least preferred, while the dampened motion capture animation was the most preferred.
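
    The core data structure described above is a phoneme-to-blend-shape map. A minimal sketch follows; the phoneme symbols, blend shape names, and weights are placeholders, and a real pipeline would derive timed phonemes from the dialogue text rather than hard-coding them.

        # Sketch: a phoneme-to-blend-shape map for lip-sync.
        # Phonemes, blend shape names, and weights are placeholders.
        PHONEME_TO_BLENDSHAPES = {
            "AA": {"jaw_open": 0.8, "lips_wide": 0.3},
            "M":  {"lips_closed": 1.0},
            "F":  {"lower_lip_under_teeth": 0.7},
            "OW": {"lips_pucker": 0.9, "jaw_open": 0.4},
        }

        def blend_weights_for(phoneme):
            """Return blend shape weights for a phoneme (neutral pose if unknown)."""
            return PHONEME_TO_BLENDSHAPES.get(phoneme, {})

        for phoneme in ["M", "OW", "F"]:  # e.g. the timed output of parsing the dialogue
            print(phoneme, blend_weights_for(phoneme))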

    A new dataset for smartphone gesture-based authentication

    In this paper, we consider the problem of gesture-based authentication on a smartphone. Specifically, the gestures consist of users holding a smartphone while writing their initials in the air. Accelerometer data from 80 subjects were collected, and we provide a preliminary analysis of these data using machine learning techniques, including principal component analysis (PCA) and support vector machines (SVM). The results presented here are intended to provide a baseline for additional research based on our dataset.
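
    A baseline along these lines can be assembled from scikit-learn's PCA and SVM implementations. The sketch below assumes each gesture has already been reduced to a fixed-length feature vector and that the label is the subject identity; the file names, split, and number of components are assumptions.

        # Sketch: PCA + SVM baseline for gesture-based authentication.
        # Feature files, component count, and split are hypothetical.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        X = np.load("gesture_features.npy")  # hypothetical (n_samples, n_features)
        y = np.load("subject_ids.npy")       # hypothetical (n_samples,)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
        clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
        clf.fit(X_train, y_train)
        print("Held-out accuracy:", clf.score(X_test, y_test))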

    Motion capture and human pose reconstruction from a single-view video sequence

    We propose a framework to reconstruct the 3D pose of a human for animation from a sequence of single-view video frames. The framework starts with background estimation; the performer's silhouette is then extracted for each frame using image subtraction. The body silhouettes are automatically labeled using a model-based approach. Finally, the 3D pose is constructed from the labeled human silhouette by assuming orthographic projection. The proposed approach does not require camera calibration and needs only minimal user interaction. It assumes that the input video has a static background and no significant perspective effects, and that the performer is in an upright position.
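
    The first two stages, background estimation and silhouette extraction by image subtraction, can be sketched as follows; the median-of-frames background estimate, threshold value, and file names are assumptions rather than the paper's exact procedure.

        # Sketch: silhouette extraction by background subtraction for a
        # static-background, single-view video. Input/output names and the
        # threshold are hypothetical.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("performer.avi")  # hypothetical input video
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cap.release()

        background = np.median(frames, axis=0).astype(np.uint8)  # static background estimate
        for i, frame in enumerate(frames):
            diff = cv2.absdiff(frame, background)
            _, silhouette = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            cv2.imwrite(f"silhouette_{i:04d}.png", silhouette)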