
    What a Feeling: Learning Facial Expressions and Emotions.

    People with Autism Spectrum Disorders (ASD) find it difficult to understand facial expressions. We present a new approach that targets one of the core symptomatic deficits in ASD: the ability to recognize the feeling states of others. What a Feeling is a videogame that aims to improve, in a playful way, the ability of socially and emotionally impaired individuals to recognize and respond to emotions conveyed by the face. It enables people of all ages to interact with 3D avatars and learn facial expressions through a set of exercises. The game engine is based on real-time facial synthesis. This paper describes the core mechanics of our learning methodology and discusses future evaluation directions.

    Lessons from digital puppetry - Updating a design framework for a perceptual user interface

    While digital puppeteering is currently used largely to augment full-body motion capture in digital production, its technology and traditional concepts could inform a more naturalized multi-modal human-computer interaction than is currently achieved with new perceptual systems such as Kinect. Emerging immersive social media networks, with their fully live virtual or augmented environments and largely inexperienced users, would benefit the most from this strategy. This paper defines digital puppeteering as it is currently understood and summarizes its broad shortcomings based on expert evaluation. Based on this evaluation, it suggests updates and experiments using current perceptual technology and concepts in cognitive processing for the existing human-computer interaction taxonomy. This updated framework may be more intuitive and suitable for developing extensions to an emerging perceptual user interface for the general public.

    Video-driven Neural Physically-based Facial Asset for Production

    Production-level workflows for producing convincing 3D dynamic human faces have long relied on an assortment of labor-intensive tools for geometry and texture generation, motion capture and rigging, and expression synthesis. Recent neural approaches automate individual components, but the corresponding latent representations cannot provide artists with explicit controls as in conventional tools. In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets. For data collection, we construct a hybrid multiview-photometric capture stage, coupled with ultra-fast video cameras, to obtain raw 3D facial assets. We then model the facial expression, geometry and physically-based textures using separate VAEs, imposing a global MLP-based expression mapping across the latent spaces of the respective networks to preserve characteristics across attributes. We also model the delta information as wrinkle maps for the physically-based textures, achieving high-quality 4K dynamic textures. We demonstrate our approach in high-fidelity performer-specific facial capture and cross-identity facial motion retargeting. In addition, our multi-VAE-based neural asset, along with the fast adaptation schemes, can be deployed to handle in-the-wild videos. We further motivate the utility of our explicit facial disentangling strategy by providing various promising physically-based editing results with high realism. Comprehensive experiments show that our technique provides higher accuracy and visual fidelity than previous video-driven facial reconstruction and animation methods. Project page: https://sites.google.com/view/npfa/
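    To make the multi-VAE idea above more concrete, here is a minimal sketch, assuming PyTorch: separate attribute VAEs (geometry and texture) whose latent codes are predicted from a shared expression code by a global MLP mapper. The class names, layer sizes and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code) of separate per-attribute VAEs with a
# global MLP mapping an expression code into each attribute's latent space.
# All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return self.dec(z)

class ExpressionMapper(nn.Module):
    """Global MLP mapping a shared expression code into each attribute's latent space."""
    def __init__(self, expr_dim, latent_dims):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(expr_dim, 128), nn.ReLU(), nn.Linear(128, d))
            for d in latent_dims])

    def forward(self, expr_code):
        return [head(expr_code) for head in self.heads]

# Usage: a geometry VAE (vertex offsets) and a texture-feature VAE share one expression code.
geo_vae, tex_vae = TinyVAE(3 * 5000, 64), TinyVAE(1024, 64)
mapper = ExpressionMapper(expr_dim=32, latent_dims=[64, 64])
expr = torch.randn(1, 32)                    # expression code for one frame
z_geo, z_tex = mapper(expr)                  # predicted latents per attribute
geometry, texture = geo_vae.decode(z_geo), tex_vae.decode(z_tex)
```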

    A multi-resolution approach for adapting close character interaction

    Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multi-resolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems, so that the user can observe the adapted motion while playing back the movements at run-time. Our approach is highly parallelizable and achieves high performance by making use of multi-core architectures. The method can be applied to a wide range of applications, including motion editing systems for animators and motion retargeting systems for humanoid robots.
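    The following is a minimal sketch of the general idea of splitting one large temporal optimization into small windows that can be solved concurrently; it is an illustrative toy objective, not the Interaction Mesh solver, and the function names and weights are assumptions.

```python
# Minimal sketch: divide a long motion edit into temporal windows and solve
# each window's small problem concurrently, so edited frames become available
# for playback as soon as their window finishes. The per-window objective here
# is a toy quadratic (stay close to the original, pull edited frames to targets),
# not the paper's Interaction Mesh energy. A real implementation would use a
# process pool or native threads for true multi-core speedup.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def solve_window(original, targets, weight=10.0):
    """original: (F, D) frames; targets: {frame_index: (D,) desired pose}.
    Closed-form minimizer of ||x - original||^2 + weight * sum_f ||x_f - t_f||^2."""
    adapted = original.copy()
    for f, t in targets.items():
        adapted[f] = (original[f] + weight * t) / (1.0 + weight)
    return adapted

def adapt_motion(motion, edits, window=30):
    """Split motion (F, D) into fixed-size windows and adapt them on a pool."""
    spans = [(s, min(s + window, len(motion))) for s in range(0, len(motion), window)]
    def task(span):
        s, e = span
        local = {f - s: t for f, t in edits.items() if s <= f < e}
        return solve_window(motion[s:e], local)
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(task, spans))
    return np.concatenate(parts, axis=0)

# Usage: 300 frames of a 60-DOF pose vector with two edited keyframes.
motion = np.random.randn(300, 60)
edits = {50: np.zeros(60), 220: np.ones(60)}
adapted = adapt_motion(motion, edits)
```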

    Example based retargeting human motion to arbitrary mesh models

    M.S. thesis by İlker Yaz, Department of Computer Engineering and Graduate School of Engineering and Science, Bilkent University, Ankara, 2013. Includes bibliographical references (leaves 51-55). Animation of mesh models can be accomplished in many ways, including character animation with skinned skeletons, deformable models, or physics-based simulation. Generating animations with all of these techniques is time consuming and laborious for novice users; however, adapting the wide range of already available human motion capture data can simplify the process significantly. This thesis presents a method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion retargeting systems try to preserve the original motion as is while satisfying several motion constraints. In our approach, we use a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process, without limiting the transfer to be only literal. Hence, mesh models that have different structures and/or motion semantics from a humanoid skeleton become possible targets. Since mesh models are widely available without any additional structure (e.g. a skeleton), our method does not require such a structure; it instead provides a built-in surface-based deformation system. Since deformation for animation purposes can require more than rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and cartoon-like deformation. To demonstrate the results of our approach, we retarget several motion capture sequences to three well-known models, and also investigate how automatic retargeting methods developed for humanoid models work on our models.
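    As a rough illustration of example-based retargeting in general, here is a minimal sketch, assuming the user supplies a few (skeleton pose, target mesh shape) pairs and new poses are mapped by distance-weighted blending of the example meshes; the blending scheme, shapes and names are assumptions for illustration and not the thesis's method.

```python
# Minimal sketch of example-based retargeting: blend user-provided example mesh
# shapes according to how close the current skeleton pose is to each example pose.
import numpy as np

def retarget(pose, example_poses, example_meshes, eps=1e-8):
    """pose: (P,) joint-angle vector; example_poses: (K, P);
    example_meshes: (K, V, 3) vertex positions of the target mesh examples."""
    d = np.linalg.norm(example_poses - pose, axis=1)       # (K,) pose distances
    w = 1.0 / (d + eps)                                    # inverse-distance weights
    w /= w.sum()
    return np.tensordot(w, example_meshes, axes=1)         # (V, 3) blended mesh

# Usage: 3 user-provided example pairs for a 40-DOF skeleton and a 500-vertex mesh.
K, P, V = 3, 40, 500
example_poses = np.random.randn(K, P)
example_meshes = np.random.randn(K, V, 3)
frame_pose = example_poses[0] + 0.1 * np.random.randn(P)   # a captured frame
mesh = retarget(frame_pose, example_poses, example_meshes)
```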

    A framework for automatic and perceptually valid facial expression generation

    Facial expressions are facial movements reflecting the internal emotional states of a character or responding to social communications. Realistic facial animation should consider at least two factors: believable visual effect and valid facial movements. However, most research tends to separate these two issues. In this paper, we present a framework for generating 3D facial expressions that considers both the visual effect and the dynamics of the movements. A facial expression mapping approach based on local geometry encoding is proposed, which encodes deformation within the 1-ring neighborhood. This method is capable of mapping subtle facial movements without imposing shape or topological constraints. Facial expression mapping is achieved through three steps: correspondence establishment, deviation transfer and movement mapping. Deviation is transferred to the conformal face space by minimizing an error function formed by the source neutral face and the deformed face model, related through the transformation matrices of the 1-ring neighborhood. The transformation matrix of the 1-ring neighborhood is independent of the face shape and the mesh topology. After facial expression mapping, dynamic parameters are integrated with facial expressions to generate valid facial expressions. The dynamic parameters were generated based on psychophysical methods. The efficiency and effectiveness of the proposed methods have been tested using various face models with different shapes and topological representations.
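    To illustrate the flavor of a 1-ring local encoding, here is a minimal sketch, under simplified assumptions, that fits a per-vertex 3x3 transform from the neutral to the deformed 1-ring edge vectors and reuses it on a corresponding target vertex; this is a generic least-squares formulation, not the paper's exact error function.

```python
# Minimal sketch of per-vertex 1-ring deformation encoding: fit a 3x3 transform
# T with T @ e_neutral ~= e_deformed over the 1-ring edges, then apply T to the
# corresponding 1-ring of a target face. Simplified, illustrative formulation.
import numpy as np

def one_ring_transform(neutral, deformed, vid, ring):
    """neutral/deformed: (N, 3) vertices; vid: center vertex; ring: neighbor indices."""
    E_n = (neutral[ring] - neutral[vid]).T    # 3 x k neutral edge vectors
    E_d = (deformed[ring] - deformed[vid]).T  # 3 x k deformed edge vectors
    return E_d @ np.linalg.pinv(E_n)          # least-squares 3x3 transform

def apply_to_target(target_neutral, T, vid, ring):
    """Deform the target vertex's 1-ring with the fitted local transform."""
    edges = target_neutral[ring] - target_neutral[vid]
    return target_neutral[vid] + edges @ T.T

# Usage with toy data: 6 neighbors around vertex 0.
rng = np.random.default_rng(0)
src_neutral = rng.standard_normal((7, 3))
src_deformed = src_neutral + 0.05 * rng.standard_normal((7, 3))
tgt_neutral = rng.standard_normal((7, 3))
ring = [1, 2, 3, 4, 5, 6]
T = one_ring_transform(src_neutral, src_deformed, 0, ring)
new_ring_positions = apply_to_target(tgt_neutral, T, 0, ring)
```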

    Video-Based Character Animation


    A tutorial on motion capture driven character animation

    Motion capture (MoCap) is an increasingly important technique for creating realistic human motion for animation. However, MoCap data are noisy, and without elaborate manual processing of the data the resulting animation is often inaccurate and unrealistic. In this paper, we discuss practical issues for MoCap-driven character animation, particularly when using commercial toolkits, and highlight open topics in this field for future research. MoCap animations created in this project will be demonstrated at the conference.
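    As a small example of the kind of cleanup such data typically needs, here is a minimal sketch of smoothing noisy per-joint channels with a moving average; this is a generic post-process under assumed data shapes, not the tutorial's specific toolkit workflow.

```python
# Minimal sketch of one common MoCap cleanup step: moving-average smoothing of
# noisy per-joint channels before retargeting to a character rig.
import numpy as np

def smooth_channels(channels, window=5):
    """channels: (frames, dofs) array of joint values (e.g. Euler angles in degrees).
    Returns a smoothed copy; a production pipeline would filter rotations on
    quaternions and handle foot-skate and gap filling separately."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(channels, ((pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(channels, dtype=float)
    for d in range(channels.shape[1]):
        out[:, d] = np.convolve(padded[:, d], kernel, mode="valid")
    return out

# Usage: 240 frames of a 63-DOF skeleton with additive jitter.
noisy = np.cumsum(np.random.randn(240, 63) * 0.5, axis=0)
clean = smooth_channels(noisy, window=7)
```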

    A Survey on Video-based Graphics and Video Visualization
