
    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos. Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1

    Animating Virtual Human for Virtual Batik Modeling

    This research paper describes the development of an animated virtual human for a virtual batik modeling project. The objectives of the project are to animate the virtual human, to map the cloth onto the virtual human body, to present the batik cloth, and to evaluate the application in terms of realism of the virtual human's look, realism of the virtual human's movement, realism of the 3D scene, application suitability, application usability, fashion suitability, and user acceptance. The final goal is to accomplish an animated virtual human for virtual batik modeling. There are three essential phases: research and analysis (data collection on modeling and animating techniques), development (modeling and animating the virtual human, mapping cloth to the body, and adding music), and evaluation (of realism of the virtual human's look, realism of the virtual human's movement, realism of props, application suitability, application usability, fashion suitability, and user acceptance). Application usability received the highest score, at 90%, showing that the application is useful to people. In conclusion, this project has met its objectives, and the desired realism was achieved by using suitable modeling and animating techniques.

    Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image

    We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art. Comment: To appear in ECCV 201
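
The core of the fitting step described above is an objective that penalizes the distance between projected 3D model joints and detected 2D joints. A minimal, hypothetical sketch of such a reprojection term, assuming a simple pinhole camera and per-joint confidence weights (the function names and weighting are illustrative, not the authors' implementation):

```python
import numpy as np

def project(joints_3d, focal, center):
    """Pinhole projection of (N, 3) camera-space joints to 2D pixel coordinates."""
    xy = joints_3d[:, :2] / joints_3d[:, 2:3]  # perspective divide by depth
    return focal * xy + center

def reprojection_loss(joints_3d, joints_2d, conf, focal, center):
    """Confidence-weighted squared error between projected and detected joints."""
    proj = project(joints_3d, focal, center)
    return float(np.sum(conf[:, None] * (proj - joints_2d) ** 2))
```

In the full method this term is minimized over the pose and shape parameters of the body model, alongside priors and an interpenetration penalty.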

    Skin deformation and animation of character models based on static and dynamic ordinary differential equations.

    Animated characters play an important role in the fields of computer animation, simulation, and games. The basic criterion of good character animation is that the animated characters should appear realistic. This can be achieved through proper skin deformations for the characters. Although various skin deformation approaches (joint-based, example-based, physics-based, curve-based, and PDE-based) have been developed, generating realistic skin deformations efficiently from a small data set remains a significant challenge. To address these limitations, this thesis presents a workflow consisting of three main steps. First, the research has developed a new statistical method to determine the positions of joints based on available X-ray images. Second, an effective method for transferring the deformations of curves to the polygonal model with high accuracy has been developed. Lastly, the research has produced a simple and efficient method to animate skin deformations by introducing a curve-based surface manipulation method combined with physics- and data-driven approaches. The novelty of this method lies in a new model of dynamic deformations and an efficient finite difference solution of the model. The application examples indicate that the curve-based dynamic method developed in this thesis can achieve good realism and high computational efficiency with small data sets in the creation of skin deformations.
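
The dynamic deformation model described above is an ordinary differential equation solved with finite differences. A toy sketch of that general idea, assuming a damped second-order ODE that pulls curve points toward a rest shape and an explicit central-difference scheme (the equation, coefficients, and scheme are illustrative assumptions, not the thesis's actual model):

```python
import numpy as np

def step_curve(x, x_prev, rest, dt=0.01, mass=1.0, damp=0.5, stiff=50.0):
    """One explicit finite-difference step of m x'' + c x' + k (x - rest) = 0.

    Velocity is approximated by (x - x_prev) / dt and the position update
    follows the standard central-difference formula.
    """
    accel = (-damp * (x - x_prev) / dt - stiff * (x - rest)) / mass
    return 2.0 * x - x_prev + accel * dt * dt
```

Iterating this update makes the curve points oscillate around and settle into the rest shape, giving a cheap dynamic effect on top of a static deformation.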

    Data-driven techniques for animating virtual characters

    One of the key goals of current research in data-driven computer animation is the synthesis of new motion sequences from existing motion data. This thesis presents three novel techniques for synthesising the motion of a virtual character from existing motion data and develops a framework of solutions to key character animation problems. The first motion synthesis technique is based on the character's locomotion composition process. This technique examines the ability to synthesise a variety of the character's locomotion behaviours while easily specified constraints (footprints) are placed in three-dimensional space. This is achieved by analysing existing motion data and by assigning the locomotion behaviour transition process to transition graphs that provide information about this process. However, virtual characters should also be able to animate according to different style variations. Therefore, a second technique is presented for synthesising real-time style variations of a character's motion. A novel technique is developed that uses the correlation between two different motion styles and, by assigning the motion synthesis process to a parameterised maximum a posteriori (MAP) framework, retrieves the desired style content of the input motion in real time, enhancing the realism of the newly synthesised motion sequence. The third technique synthesises the motion of the character's fingers either off-line or in real time during the performance capture process. The advantage of both techniques is their ability to assign the motion searching process to motion features. The presented technique is able to estimate and synthesise a valid motion of the character's fingers, enhancing the realism of the input motion.
To conclude, this thesis demonstrates that these three novel techniques combine into a framework that enables the realistic synthesis of virtual character movements, eliminating post-processing as well as enabling fast synthesis of the required motion.
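
The locomotion composition step above routes behaviour changes through transition graphs. A minimal, hypothetical sketch of such a graph, with illustrative behaviour names (not taken from the thesis) and a breadth-first search for a valid transition sequence between two behaviours:

```python
from collections import deque

# Edges mark behaviour transitions judged valid from the analysed
# motion data; the names and connectivity here are made up.
TRANSITIONS = {
    "walk": ["jog", "turn"],
    "jog": ["walk", "run"],
    "run": ["jog"],
    "turn": ["walk"],
}

def transition_path(start, goal):
    """Breadth-first search for a shortest valid behaviour sequence."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TRANSITIONS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no valid transition sequence exists
```

A query such as `transition_path("walk", "run")` then yields the intermediate behaviours the synthesiser must pass through.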

    Expressive movement generation with machine learning

    Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing: developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute the datasets, tools, and libraries developed during our research. We start by reviewing work on building automatic movement generation systems using machine learning techniques and motion capture data. Our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on our literature review, we present WalkNet, an interactive agent walking movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability. Therefore, WalkNet integrates control of the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent. It allows us to control the generation in real time based on the valence and arousal levels of affect, the movement's walking direction, and the mover's movement signature.
Following WalkNet, we look at controlling movement generation using more complex stimuli such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (i.e., the audio signal) and movements, which makes it a more challenging movement modelling problem. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
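
The control interface described for WalkNet, conditioning generation on valence, arousal, and walking direction alongside the previous pose, can be sketched as follows. This is an illustrative stand-in with random, untrained weights and made-up dimensions, not the published network:

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, CTRL_DIM, HIDDEN = 12, 3, 32  # placeholder sizes, not WalkNet's

# Random weights stand in for a trained model.
W1 = rng.normal(0.0, 0.1, (POSE_DIM + CTRL_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, POSE_DIM))

def next_pose(pose, valence, arousal, direction):
    """One controller step: previous pose plus control vector -> next pose."""
    x = np.concatenate([pose, [valence, arousal, direction]])
    return np.tanh(x @ W1) @ W2
```

The key design point, which the sketch preserves, is that the affect and direction controls enter the network at every step, so generation can be steered in real time.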

    Muscle activation mapping of skeletal hand motion: an evolutionary approach.

    Creating controlled dynamic character animation consists of mathematical modelling of muscles and solving the activation dynamics that form the key to coordination. But biomechanical simulation and control is computationally expensive, involving complex differential equations, and is not suitable for real-time platforms like games. Performing such computations at every time-step reduces the frame rate. Modern games use generic software packages called physics engines to perform a wide variety of in-game physical effects. The physics engines are optimized for gaming platforms. Therefore, a physics-engine-compatible model of anatomical muscles and an alternative control architecture are essential to create biomechanical characters in games. This thesis presents a system that generates muscle activations from captured motion by borrowing principles from biomechanics and neural control. A generic physics-engine-compliant muscle model primitive is also developed. The muscle model primitive forms the motion actuator and is an integral part of the physical model used in the simulation. This thesis investigates a stochastic solution to create a controller that mimics the neural control system employed in the human body. The control system uses evolutionary neural networks that evolve their weights using genetic algorithms. Examples and guidance often act as templates in muscle training during all stages of human life. Similarly, the neural controller attempts to learn muscle coordination through input motion samples. The thesis also explores the objective functions developed to aid in the genetic evolution of the neural network. Character interaction with the game world is still a pre-animated behaviour in most current games. Physically-based procedural hand animation is a step towards autonomous interaction of game characters with the game world.
    The neural controller and the muscle primitive developed are used to animate a dynamic model of a human hand within a real-time physics engine environment.
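
The evolutionary control scheme above, network weights evolved by a genetic algorithm against an objective function, can be illustrated with a toy example. The fitness used here (matching a fixed target vector) merely stands in for comparing simulated muscle-driven motion against captured samples; the population size, selection, and mutation settings are all assumptions:

```python
import random

TARGET = [0.2, -0.5, 0.8, 0.1]  # stand-in for the captured-motion objective

def fitness(weights):
    """Negative squared distance to the target; higher is better."""
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def evolve(pop_size=30, generations=200, sigma=0.1, seed=1):
    """Evolve a flat weight vector with truncation selection and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half (elitist)
        children = [
            [w + rng.gauss(0.0, sigma) for w in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]                                         # mutate copies of parents
        pop = parents + children
    return max(pop, key=fitness)
```

In the thesis's setting the evolved vector would encode neural-network weights and the fitness would be evaluated by running the muscle-actuated hand in the physics engine.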