60 research outputs found

    What a Feeling: Learning Facial Expressions and Emotions.

    Get PDF
    People with Autism Spectrum Disorders (ASD) find it difficult to understand facial expressions. We present a new approach that targets one of the core symptomatic deficits in ASD: the ability to recognize the feeling states of others. What a Feeling is a videogame that aims to improve, in a playful way, the ability of socially and emotionally impaired individuals to recognize and respond to emotions conveyed by the face. It enables people of all ages to interact with 3D avatars and learn facial expressions through a set of exercises. The game engine is based on real-time facial synthesis. This paper describes the core mechanics of our learning methodology and discusses future evaluation directions.

    A semantic feature for human motion retrieval

    Get PDF
    With the explosive growth of motion capture data, it becomes imperative in animation production to have an efficient search engine that retrieves motions from a large motion repository. However, because of the high dimensionality of the data space and the complexity of matching methods, most existing approaches cannot return results in real time. This paper proposes a high-level semantic feature in a low-dimensional space to represent the essential characteristics of different motion classes. Based on statistical training of a Gaussian Mixture Model, this feature can effectively achieve motion matching at both the global clip level and the local frame level. Experimental results show that our approach can retrieve and rank similar motions from a large motion database in real time, and can also annotate motions automatically on the fly. Copyright © 2013 John Wiley & Sons, Ltd.
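
    A minimal sketch of the core idea above, assuming poses are already encoded as fixed-length vectors: fit a Gaussian Mixture Model over the pose database and use each pose's posterior over the components as a low-dimensional semantic feature, averaged per clip for retrieval. The random data, dimensions, and component count below are placeholders, not the paper's settings.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        poses = rng.standard_normal((5000, 60))   # stand-in for a mocap pose database

        # Fit a GMM; each component loosely corresponds to a "pose class".
        gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
        gmm.fit(poses)

        def pose_feature(pose):
            """Frame-level feature: posterior over components."""
            return gmm.predict_proba(pose.reshape(1, -1))[0]

        def clip_feature(clip):
            """Clip-level feature: average of the frame-level posteriors."""
            return gmm.predict_proba(clip).mean(axis=0)

        # Retrieval: rank database clips by distance to the query's clip feature.
        query = rng.standard_normal((120, 60))    # a 120-frame query clip
        db = [rng.standard_normal((100, 60)) for _ in range(50)]
        q = clip_feature(query)
        ranking = sorted(range(len(db)),
                         key=lambda i: np.linalg.norm(clip_feature(db[i]) - q))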

    Kiip - A Skeletal Motion Capture Library

    Get PDF
    This report details the design and implementation of the Computer Science Major Qualifying Project to create Kiip. Kiip is a library for the real-time conversion of optical motion capture data into hierarchical skeleton data. Unlike popular commercial products, this library was designed to be free and easily integrated into external applications where online conversion of motion capture data is needed. This report discusses the research and mathematics involved in Kiip's design, the current state of the library, and potential future work.
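
    The abstract does not expose Kiip's internals, so the following is a hedged illustration of one standard step in optical-to-skeletal conversion: deriving a rigid segment's local frame from three markers and expressing the child segment's rotation in the parent's coordinates. All names and marker layouts below are hypothetical, not Kiip's API.

        import numpy as np

        def segment_frame(m0, m1, m2):
            """Orthonormal frame from three non-collinear markers on one segment."""
            x = m1 - m0
            x /= np.linalg.norm(x)
            z = np.cross(x, m2 - m0)
            z /= np.linalg.norm(z)
            y = np.cross(z, x)
            return np.column_stack([x, y, z])     # 3x3 rotation, world <- local

        def joint_rotation(parent_frame, child_frame):
            """Child rotation expressed in the parent's local coordinates."""
            return parent_frame.T @ child_frame

        # Example: markers on an upper arm (parent) and forearm (child).
        upper = segment_frame(np.array([0., 0, 0]), np.array([1., 0, 0]),
                              np.array([0., 1, 0]))
        fore = segment_frame(np.array([1., 0, 0]), np.array([1.7, 0.7, 0]),
                             np.array([1., 1, 0]))
        R_elbow = joint_rotation(upper, fore)     # feeds the hierarchical skeleton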

    Real-time motion data annotation via action string

    Get PDF
    Even though motion capture data is growing explosively, there is still a lack of efficient and reliable methods to automatically annotate all the motions in a database. Moreover, because of the popularity of mocap devices in home entertainment systems, real-time human motion annotation or recognition is becoming increasingly imperative. This paper presents a new motion annotation method that achieves both of these targets at the same time. It uses a probabilistic pose feature based on the Gaussian Mixture Model to represent each pose. After training a clustered pose feature model, a motion clip can be represented as an action string. A dynamic programming-based string matching method is then introduced to compare the differences between action strings. Finally, to meet the real-time target, we construct a hierarchical action string structure to quickly label each given action string. The experimental results demonstrate the efficacy and efficiency of our method.
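
    As an illustration of the dynamic-programming comparison described above, here is a minimal Levenshtein-style edit distance between action strings, plus nearest-exemplar labeling; the paper's actual cost model and hierarchical speed-up are not reproduced.

        def edit_distance(a, b):
            """DP edit distance between two action strings, e.g. 'AABBC' vs 'ABBC'."""
            m, n = len(a), len(b)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                dp[i][0] = i
            for j in range(n + 1):
                dp[0][j] = j
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if a[i - 1] == b[j - 1] else 1
                    dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                                   dp[i][j - 1] + 1,        # insertion
                                   dp[i - 1][j - 1] + cost) # substitution / match
            return dp[m][n]

        # Annotation: label a query string with the nearest labeled exemplar.
        exemplars = {"walk": "AAABBB", "run": "CCDDCC"}
        query = "AABBB"
        label = min(exemplars, key=lambda k: edit_distance(exemplars[k], query))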

    Everybody Dance Now

    Full text link
    This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer. Comment: In ICCV 2019
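
    A hypothetical sketch of the inference loop implied by the abstract: a pose is extracted per source frame, and a learned pose-to-appearance generator renders the target, conditioned on the previous output for temporal coherence. The stub pose extractor and one-layer generator below stand in for the paper's actual detector and video-to-video GAN; they are placeholders, not the authors' code.

        import torch
        import torch.nn as nn

        class Generator(nn.Module):
            """Stand-in pose-to-appearance mapping (the real model is a trained GAN)."""
            def __init__(self):
                super().__init__()
                self.net = nn.Conv2d(3, 3, 3, padding=1)  # placeholder network
            def forward(self, pose_img, prev_frame):
                # Conditioning on the previous output encourages temporal
                # coherence, echoing the two-consecutive-frame prediction.
                return torch.tanh(self.net(pose_img) + 0.1 * prev_frame)

        def extract_pose(frame):
            return frame  # placeholder: a real system runs a pose detector here

        G = Generator()
        prev = torch.zeros(1, 3, 256, 256)
        source_video = [torch.rand(1, 3, 256, 256) for _ in range(4)]
        target_video = []
        for frame in source_video:
            out = G(extract_pose(frame), prev)   # transfer source motion to target
            target_video.append(out)
            prev = out.detach()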

    Human Motion Transfer with 3D Constraints and Detail Enhancement

    Full text link
    We propose a new method for realistic human motion transfer using a generative adversarial network (GAN), which generates a motion video of a target character imitating the actions of a source character while maintaining high authenticity of the generated results. We tackle the problem by decoupling and recombining the posture information and appearance information of both the source and target characters. The innovation of our approach lies in using the projection of a reconstructed 3D human model as the condition of the GAN, to better maintain the structural integrity of transfer results in different poses. We further introduce a detail enhancement network that sharpens the transfer results by exploiting the details in real source frames. Extensive experiments show that our approach yields better results, both qualitatively and quantitatively, than state-of-the-art methods.
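
    A hypothetical sketch of the detail-enhancement idea in isolation: refine a coarse transfer result by predicting a residual from the coarse frame plus a real reference frame, so fine texture can be drawn from real data rather than hallucinated. The module below is a placeholder, not the authors' network, and the GAN conditioning on the projected 3D model is not shown.

        import torch
        import torch.nn as nn

        class DetailEnhancer(nn.Module):
            def __init__(self):
                super().__init__()
                self.refine = nn.Conv2d(6, 3, 3, padding=1)  # placeholder network
            def forward(self, coarse, real_ref):
                # Predict a residual from the coarse result concatenated with
                # a real source frame, then add it back to the coarse result.
                residual = self.refine(torch.cat([coarse, real_ref], dim=1))
                return coarse + residual

        enhancer = DetailEnhancer()
        coarse = torch.rand(1, 3, 256, 256)      # coarse GAN transfer result
        real_ref = torch.rand(1, 3, 256, 256)    # real source frame with detail
        refined = enhancer(coarse, real_ref)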

    MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

    Get PDF
    Remote collaborative work has become pervasive in many settings, from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example, to assist in designing new devices such as customized prosthetics, vehicles, or buildings. However, discussing shared 3D content face-to-face poses various challenges, such as ambiguities, occlusions, and differing viewpoints, all of which decrease mutual awareness, leading to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. We introduce a novel metric called pointing agreement to measure what two users perceive in common when using pointing gestures in a shared 3D space. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness. Comment: Presented at IEEE VR 202
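
    One plausible way to operationalize a pointing-agreement style measurement, sketched under the assumption that agreement can be read off from where two users' pointing rays hit a shared plane; the paper's actual metric definition may differ.

        import numpy as np

        def ray_plane_hit(origin, direction, plane_point, plane_normal):
            """Intersection of a pointing ray with a plane in the shared space."""
            d = np.asarray(direction, float)
            d /= np.linalg.norm(d)
            t = np.dot(plane_normal, np.asarray(plane_point) - origin) / \
                np.dot(plane_normal, d)
            return origin + t * d

        plane_pt, plane_n = np.array([0., 0, 2]), np.array([0., 0, 1])
        hit_local = ray_plane_hit(np.array([0., 0, 0]), [0.1, 0.0, 1.0],
                                  plane_pt, plane_n)
        hit_remote = ray_plane_hit(np.array([0.5, 0, 0]), [-0.1, 0.0, 1.0],
                                   plane_pt, plane_n)

        # Agreement: do both rays indicate the same region of the shared model?
        agreement = np.linalg.norm(hit_local - hit_remote) < 0.05  # 5 cm threshold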

    Cyclic animation using Partial Differential Equations

    Get PDF
    This work presents an efficient and fast method for achieving cyclic animation using Partial Differential Equations (PDEs). The boundary-value nature associated with elliptic PDEs offers a fast analytic solution technique for setting up a framework for this type of animation. The surface of a given character is thus created from a set of pre-determined curves, which are used as boundary conditions so that a number of PDEs can be solved. Two different approaches to cyclic animation are presented here. The first consists of attaching the set of curves to a skeletal system that holds the animation for cyclic motions linked to a set of mathematical expressions; the second exploits the spine associated with the analytic solution of the PDE as a driving mechanism to achieve cyclic animation, which is also manipulated mathematically. The first of these approaches is implemented within a framework related to cyclic motions inherent to human-like characters, whereas the spine-based approach focuses on modelling the undulatory movement observed in fish when swimming. The proposed method is fast and accurate. Additionally, the animation can either be used in the PDE-based surface representation of the model or transferred to the original mesh model by means of a point-to-point map. Thus, the user is offered the choice of using either of these two animation representations of the same object; the selection depends on computing resources, such as the storage and memory capacity associated with each particular application.
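
    A simplified numerical sketch of the boundary-value idea: each coordinate of the surface solves an elliptic PDE (here the Laplace equation, via Jacobi relaxation) on a (u, v) grid whose edges carry the pre-determined boundary curves. The paper uses analytic solutions of a fourth-order PDE; this discrete second-order version, with placeholder circle curves, only illustrates how boundary curves determine the interior surface.

        import numpy as np

        n = 32
        u = np.linspace(0, 2 * np.pi, n)
        X = np.zeros((n, n, 3))                    # surface points S(v, u)

        # Boundary curves at v = 0 and v = 1: two circles of different radii.
        X[0] = np.stack([np.cos(u), np.sin(u), np.zeros(n)], axis=1)
        X[-1] = np.stack([0.5 * np.cos(u), 0.5 * np.sin(u), np.ones(n)], axis=1)

        # Side boundaries (u = 0 and u = 2*pi): linear blend between the curves.
        t = np.linspace(0, 1, n)[:, None]
        X[:, 0] = (1 - t) * X[0, 0] + t * X[-1, 0]
        X[:, -1] = (1 - t) * X[0, -1] + t * X[-1, -1]

        # Jacobi relaxation: the interior satisfies the discrete Laplace equation.
        for _ in range(2000):
            X[1:-1, 1:-1] = 0.25 * (X[:-2, 1:-1] + X[2:, 1:-1] +
                                    X[1:-1, :-2] + X[1:-1, 2:])

        # X now blends smoothly between the boundary curves; driving the curves
        # with a cyclic parameter over time would animate the whole surface.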

    A human motion feature based on semi-supervised learning of GMM

    Get PDF
    Using motion capture to create natural-looking motion sequences for virtual character animation has become a standard procedure in the games and visual effects industry. With the fast growth of motion data, the task of automatically annotating new motions is gaining importance. In this paper, we present a novel statistical feature that represents each motion according to pre-labeled categories of key-poses. A probabilistic model is trained with semi-supervised learning of the Gaussian mixture model (GMM). Each pose in a given motion can then be described by a feature vector of a series of probabilities given by the GMM. A motion feature descriptor is proposed based on the statistics of all pose features. The experimental results and comparison with existing work show that our method performs more accurately and efficiently in motion retrieval and annotation.
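
    A hedged sketch of the semi-supervised flavour described above: labeled key-poses seed the GMM components, EM then refines the model on the full, mostly unlabeled pose set, and per-pose posteriors are pooled into a motion descriptor. The data shapes and the seeding strategy are illustrative assumptions, shown here with scikit-learn rather than the paper's own training procedure.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        unlabeled = rng.standard_normal((4000, 60))   # bulk of the mocap poses
        key_poses = rng.standard_normal((8, 60))      # one labeled key-pose per category

        # Seed one component per key-pose category, then let EM refine on all data.
        gmm = GaussianMixture(n_components=8, covariance_type="diag",
                              means_init=key_poses, random_state=1)
        gmm.fit(np.vstack([unlabeled, key_poses]))

        def motion_descriptor(clip):
            """Motion feature: statistics (mean, std) of per-pose posteriors."""
            p = gmm.predict_proba(clip)
            return np.concatenate([p.mean(axis=0), p.std(axis=0)])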