
    Functionality-Driven Musculature Retargeting

    We present a novel retargeting algorithm that transfers the musculature of a reference anatomical model to new bodies with different sizes, body proportions, muscle capability, and joint range of motion, while preserving the functionality of the original musculature as closely as possible. The geometric configuration and physiological parameters of musculotendon units are estimated and optimized to adapt to new bodies. The range of motion around joints is estimated from a motion capture dataset and edited further for individual models. The retargeted model is simulation-ready, so we can physically simulate muscle-actuated motor skills with the model. Our system is capable of generating a wide variety of anatomical bodies that can be simulated to walk, run, jump, and dance while maintaining balance under gravity. We also demonstrate the construction of individualized musculoskeletal models from bi-planar X-ray images and medical examinations. Comment: 15 pages, 20 figures
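
    The abstract describes estimating and optimizing the geometric configuration and physiological parameters of musculotendon units for a new body. As a loose, minimal sketch of one ingredient of such a transfer, the Python fragment below rescales a unit's optimal fiber length and tendon slack length in proportion to the change in its rest-pose path length; the data layout, the proportional rule, and every name here are illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch: adjust musculotendon parameters when a muscle's
# attachment path is transferred to a body with different proportions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Musculotendon:
    waypoints: np.ndarray        # (n, 3) path points on the body
    optimal_fiber_length: float
    tendon_slack_length: float
    max_isometric_force: float

def path_length(points: np.ndarray) -> float:
    """Total polyline length of the muscle path."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

def retarget(unit: Musculotendon, new_waypoints: np.ndarray) -> Musculotendon:
    """Scale fiber/tendon lengths by the change in rest-pose path length."""
    s = path_length(new_waypoints) / path_length(unit.waypoints)
    return Musculotendon(
        waypoints=new_waypoints,
        optimal_fiber_length=unit.optimal_fiber_length * s,
        tendon_slack_length=unit.tendon_slack_length * s,
        # Strength could also be rescaled (e.g. by muscle volume); kept fixed here.
        max_isometric_force=unit.max_isometric_force,
    )
```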

    Composing quadrilateral meshes for animation

    The modeling-by-composition paradigm can be a powerful tool in modern animation pipelines. We propose two novel interactive techniques to compose 3D assets that enable artists to freely remove, detach, and combine components of organic models. The idea behind our methods is to preserve most of the original information in the input characters and blend accordingly where necessary. The first method, QuadMixer, provides a robust tool to compose the quad layouts of watertight pure-quadrilateral meshes, exploiting the Boolean operations defined on triangles. The quad layout is a crucial property for many applications, since it conveys important information that would otherwise be destroyed by techniques that aim only at preserving the shape. Our technique keeps untouched all the quads in the patches that are not involved in the blending. The resulting meshes preserve the originally designed edge flows that, by construction, are captured and incorporated into the new quads. SkinMixer extends this approach to compose skinned models, taking into account not only the surface but also the data structures for animating the character. We propose a new operation-based technique that preserves and smoothly merges meshes, skeletons, and skinning weights. The retopology approach of QuadMixer is extended to work on quad-dominant and arbitrarily complex surfaces. Instead of relying on Boolean operations on triangle meshes, we manipulate signed distance fields to generate an implicit surface. The results preserve most of the information in the input assets, blending accordingly in the intersection regions. The resulting characters are ready to be used in animation pipelines. Given the high quality of the results, we believe that our methods could have a significant impact on the entertainment industry. Integrated into current software for 3D modeling, they would provide a powerful tool for artists. Allowing them to automatically reuse parts of their well-designed characters could lead to a new approach for creating models, significantly reducing the cost of the process.
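
    SkinMixer's retopology step is described as manipulating signed distance fields rather than triangle Booleans. Below is a minimal sketch of that general idea under stated assumptions: analytic sphere SDFs stand in for the input characters, and a widely used polynomial smooth minimum (the constant k controls the blend width) combines them. The zero level set of the combined field is the blended surface, which a method such as marching cubes could then extract.

```python
# Illustrative SDF blending: smooth union of two shapes on a sample grid.
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance from points p (..., 3) to a sphere."""
    return np.linalg.norm(p - center, axis=-1) - radius

def smooth_union(d1, d2, k=0.25):
    """Polynomial smooth minimum: blends surfaces near intersections."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)

# Sample a grid; marching cubes on `d` would yield the blended surface.
xs = np.linspace(-1, 1, 64)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
d = smooth_union(sphere_sdf(grid, np.array([-0.3, 0.0, 0.0]), 0.5),
                 sphere_sdf(grid, np.array([0.3, 0.0, 0.0]), 0.5))
```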

    Human Shape Estimation using Statistical Body Models

    Human body estimation methods transform real-world observations into predictions about human body state. These estimation methods benefit a variety of health, entertainment, clothing, and ergonomics applications. State may include pose, overall body shape, and appearance. Body state estimation is underconstrained by observations; ambiguity presents itself both in the form of missing data within observations and in the form of unknown correspondences between observations. We address this challenge with the use of a statistical body model: a data-driven virtual human. This helps resolve ambiguity in two ways. First, it fills in missing data, meaning that incomplete observations still result in complete shape estimates. Second, the model provides a statistically motivated penalty for unlikely states, which enables more plausible body shape estimates. Body state inference requires more than a body model; we therefore build observation models whose output is compared with real observations. In this thesis, body state is estimated from three types of observations: 3D motion capture markers, depth and color images, and high-resolution 3D scans. In each case, a forward process is proposed which simulates observations. By comparing observations to the results of the forward process, state can be adjusted to minimize the difference between simulated and observed data. We use gradient-based methods because they are critical to the precise estimation of state with a large number of parameters. The contributions of this work include three parts. First, we propose a method for the estimation of body shape, nonrigid deformation, and pose from 3D markers. Second, we present a concise approach to differentiating through the rendering process, with application to body shape estimation. Finally, we present a statistical body model trained from human body scans, with state-of-the-art fidelity, good runtime performance, and compatibility with existing animation packages.
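
    The thesis fits body state by comparing a forward process against observations and minimizing the difference with gradient-based optimization. The toy sketch below captures that loop with a purely linear stand-in for the body model: synthetic markers are generated from a shape basis, and gradient descent on the squared marker error recovers the shape parameters. The basis, dimensions, and learning rate are all assumptions for illustration.

```python
# Toy gradient-based fitting: adjust shape parameters so that synthesized
# markers match observed markers.
import numpy as np

rng = np.random.default_rng(0)
n_markers, n_shape = 40, 10
mean = rng.normal(size=3 * n_markers)               # mean marker layout
basis = rng.normal(size=(3 * n_markers, n_shape))   # linear shape basis

def forward(beta):
    """Forward process: predict stacked marker coordinates from shape."""
    return mean + basis @ beta

beta_true = rng.normal(size=n_shape)
observed = forward(beta_true)                       # stand-in for mocap data

beta = np.zeros(n_shape)
lr = 1e-3
for _ in range(2000):
    residual = forward(beta) - observed             # simulated minus observed
    grad = 2.0 * basis.T @ residual                 # gradient of squared error
    beta -= lr * grad

print(np.allclose(beta, beta_true, atol=1e-3))      # True: parameters recovered
```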

    Sketch-based skeleton-driven 2D animation and motion capture.

    This research is concerned with the development of a set of novel sketch-based skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The approach consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. For 2D animation production, the traditional way is for experienced animators to draw the key frames manually, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key frame. The system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which prevents self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied. Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash and stretch effects.
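
    For the cartoon animation filter stage, one published formulation (Wang et al.'s "The Cartoon Animation Filter") subtracts a smoothed second derivative, i.e., a Laplacian-of-Gaussian response, from each motion channel so that abrupt moves gain anticipation beforehand and follow-through afterward. The sketch below implements that general idea; the kernel width, gain, and test signal are illustrative choices, not necessarily the exact filter used in this thesis.

```python
# Hedged sketch of a cartoon-style motion filter: subtract a
# Laplacian-of-Gaussian response from a 1D joint trajectory.
import numpy as np

def log_kernel(sigma, radius):
    """Discrete Laplacian-of-Gaussian kernel, zero-mean so flat motion passes."""
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    log = (t**2 / sigma**4 - 1 / sigma**2) * g
    return log - log.mean()

def cartoon_filter(x, sigma=4.0, gain=1.5):
    """x: joint trajectory over frames; returns exaggerated trajectory."""
    k = log_kernel(sigma, radius=int(3 * sigma))
    return x - gain * np.convolve(x, k, mode="same")

frames = np.linspace(0.0, 1.0, 120)
step = (frames > 0.5).astype(float)   # abrupt move of one joint
stylized = cartoon_filter(step)       # dips before the move, overshoots after
```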

    HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION

    Geometric reconstruction of dynamic objects is a fundamental task in computer vision and graphics, and high-fidelity modeling of the human body is at its core. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, limiting working space and portability. In this dissertation, a complete pipeline is designed, from geometric modeling of a detailed 3D full human body and capturing of its shape dynamics over time with a flexible setup, to guiding clothes/person re-targeting with such data-driven models. The mechanical movement of the human body can be treated as articulated motion, which readily drives skin animation but is difficult to invert, i.e., to recover parameters from images without manual intervention. We therefore present a novel parametric model, GMM-BlendSCAPE, which jointly takes a linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show increased accuracy of joint and skin-surface estimation with our model compared to skeleton-based motion tracking. To model the body in detail, we start by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we can then reconstruct multiple complete static models in widely differing poses via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Finally, we demonstrate a novel virtual clothes try-on application based on our personalized model, utilizing both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
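
    The GMM framework described above ties a drivable template to incomplete depth observations. The sketch below illustrates only the soft-correspondence (E-step) computation common to such Gaussian-mixture registration schemes: each scan point is softly assigned to template vertices, with an outlier term absorbing unexplained points. Shapes, sigma, and the outlier weight are assumptions, not the dissertation's actual GMM-BlendSCAPE formulation.

```python
# Soft correspondences between a template and a partial scan, GMM-style.
import numpy as np

def soft_correspondences(template, scan, sigma=0.05, w_outlier=0.1):
    """Posterior weight of template vertex j for scan point i, with outliers."""
    d2 = ((scan[:, None, :] - template[None, :, :]) ** 2).sum(-1)  # (n, m)
    lik = np.exp(-d2 / (2 * sigma**2))
    denom = lik.sum(axis=1, keepdims=True) + w_outlier
    return lik / denom                                             # (n, m)

rng = np.random.default_rng(1)
template = rng.uniform(size=(200, 3))                 # drivable template vertices
scan = template[:120] + rng.normal(scale=0.02, size=(120, 3))  # partial noisy scan
P = soft_correspondences(template, scan)
# An M-step would then move each vertex toward its expected scan position:
# target_j = P[:, j] @ scan / P[:, j].sum()
```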

    Semantics for virtual humans

    The population of virtual worlds with Virtual Humans is increasing rapidly, driven by people who want to create a virtual life parallel to the real one (e.g., Second Life). The evolution of technology is steadily providing the elements necessary to increase realism within these virtual worlds by creating believable Virtual Humans. However, creating the resources needed to achieve this believability is a difficult task, mainly because of the complexity of the creation process of Virtual Humans. Even though many resources are available, reusing them is difficult because not enough information is provided to evaluate whether a model contains the desired characteristics for reuse. Additionally, the knowledge involved in the creation of Virtual Humans is neither well known nor well disseminated. There are several different creation techniques, different software components, and several processes to carry out before having a Virtual Human capable of populating a virtual environment. The creation of Virtual Humans involves: a geometrical representation with an internal control structure, motion synthesis with different animation techniques, and higher-level controllers and descriptors to simulate human-like behavior such as individuality, cognition, interaction capabilities, etc. All these processes require expertise from different fields of knowledge, such as mathematics, artificial intelligence, computer graphics, and design. Furthermore, there is neither a common framework nor a common understanding of how the elements involved in the creation, development, and interaction of Virtual Humans fit together. Therefore, there is a need for describing (1) existing resources, (2) the composition and features of Virtual Humans, (3) a creation pipeline, and (4) the different levels/fields of knowledge involved. This thesis presents an explicit representation of Virtual Humans and their features to provide a conceptual framework of interest to all those involved in the creation and development of these characters. This dissertation focuses on a semantic description of Virtual Humans. The creation of a semantic description involves gathering related knowledge, agreement among experts on the definition of concepts, validation of the ontology design, etc. In this dissertation all these procedures are presented, and an Ontology for Virtual Humans is described in detail, together with the validations that led to the resulting ontology. The goals of creating such an ontology are to promote reusability of existing resources; to create a shared knowledge of the creation and composition of Virtual Humans; and to support new research in the fields involved in the development of believable Virtual Humans and virtual environments. Finally, this thesis presents several developments that aim to demonstrate the ontology's usability and reusability. These developments serve particularly to support research on specialized knowledge of Virtual Humans and the population of virtual environments, and to improve the believability of these characters.
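
    To make the idea of a machine-readable description concrete, here is a minimal sketch, using rdflib, of how an ontology might expose the information needed to judge whether a character asset is reusable. The namespace, class names, and properties are hypothetical and are not the ontology defined in this dissertation.

```python
# Hypothetical RDF description of a reusable Virtual Human asset.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

VH = Namespace("http://example.org/vh#")
g = Graph()
g.bind("vh", VH)

# Schema: a Virtual Human has a geometry, a skeleton, and animations.
for cls in (VH.VirtualHuman, VH.Geometry, VH.Skeleton, VH.Animation):
    g.add((cls, RDF.type, RDFS.Class))

# Instance data describing one character asset and its reuse-relevant facts.
g.add((VH.alice, RDF.type, VH.VirtualHuman))
g.add((VH.alice, VH.hasGeometry, VH.alice_mesh))
g.add((VH.alice_mesh, VH.vertexCount, Literal(15000)))
g.add((VH.alice, VH.hasSkeleton, VH.alice_rig))
g.add((VH.alice_rig, VH.jointCount, Literal(52)))

print(g.serialize(format="turtle"))
```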