55 research outputs found

    Sketch-based skeleton-driven 2D animation and motion capture.

    Get PDF
    This research is concerned with the development of a set of novel sketch-based skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The technique consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. For 2D animation production, the traditional approach is to have experienced animators draw the key-frames manually, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame. The system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which copes with self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied. Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash-and-stretch effects.
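As a rough illustration of the skeleton-driven stage described above, the following sketch rigidly maps each vertex from a rest bone to its sketched target bone and blends the results by per-bone weights. The function names, the 2D setup, and the binding weights are illustrative assumptions; the thesis's variable-length needle model and nonlinear joint-area deformation are not reproduced here.

```python
import numpy as np

def bone_transform(rest_a, rest_b, new_a, new_b):
    """Rigid 2D transform mapping the rest bone segment (rest_a -> rest_b)
    onto the sketched bone segment (new_a -> new_b): rotate, then translate."""
    rest_dir = rest_b - rest_a
    new_dir = new_b - new_a
    angle = np.arctan2(new_dir[1], new_dir[0]) - np.arctan2(rest_dir[1], rest_dir[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return R, new_a - R @ rest_a

def deform(vertices, weights, rest_bones, new_bones):
    """Skeleton-driven deformation by linear blending: each vertex is a
    weighted blend of the rigid transforms of the bones it is bound to."""
    transforms = [bone_transform(ra, rb, na, nb)
                  for (ra, rb), (na, nb) in zip(rest_bones, new_bones)]
    out = np.zeros_like(vertices)
    for i, v in enumerate(vertices):
        p = np.zeros(2)
        for w, (R, t) in zip(weights[i], transforms):
            p += w * (R @ v + t)
        out[i] = p
    return out
```

Sketching a bone rotated 90 degrees from its rest pose, a vertex bound to that bone follows it rigidly; joint-area vertices would additionally need the nonlinear second stage.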

    Example based retargeting human motion to arbitrary mesh models

    Get PDF
    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references, leaves 51-55. Animation of mesh models can be accomplished in many ways, including character animation with skinned skeletons, deformable models, or physics-based simulation. Generating animations with all of these techniques is time-consuming and laborious for novice users; however, adapting already available wide-range human motion capture data might simplify the process significantly. This thesis presents a method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion retargeting systems try to preserve the original motion as is, while satisfying several motion constraints. In our approach, we use a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process, not limiting the transfer to being only literal. Hence, mesh models which have different structures and/or motion semantics from the humanoid skeleton become possible targets. Also, considering that widely available mesh models often lack any additional structure (e.g. a skeleton), our method does not require such a structure, providing a built-in surface-based deformation system instead. Since deformation for animation purposes can require more than rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and cartoon-like deformation. To demonstrate the results of our approach, we retarget several motion capture sequences to three well-known models, and also investigate how automatic retargeting methods developed for humanoid models work on our models. Yaz, İlker O. M.S.

    Example-based retargeting of human motion to arbitrary mesh models

    Get PDF
    We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion-retargeting systems try to preserve the original motion, while satisfying several motion constraints. Our method uses a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Also considering the fact that most publicly available mesh models lack additional structure (e.g. skeleton), our method dispenses with the need for such a structure by means of a built-in surface-based deformation system. As deformation for animation purposes may require non-rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and squash-and-stretch deformations. We demonstrate our approach on well-known mesh models along with several publicly available motion-capture sequences. © 2014 The Eurographics Association and John Wiley & Sons Ltd

    Facial Expression Retargeting from Human to Avatar Made Easy

    Full text link
    Facial expression retargeting from humans to virtual characters is a useful technique in computer graphics and animation. Traditional methods use markers or blendshapes to construct a mapping between the human and avatar faces. However, these approaches require a tedious 3D modeling process, and the performance relies on the modelers' experience. In this paper, we propose a brand-new solution to this cross-domain expression transfer problem via nonlinear expression embedding and expression domain translation. We first build low-dimensional latent spaces for the human and avatar facial expressions with a variational autoencoder. Then we construct correspondences between the two latent spaces guided by geometric and perceptual constraints. Specifically, we design geometric correspondences to reflect geometric matching and utilize a triplet data structure to express users' perceptual preference of avatar expressions. A user-friendly method is proposed to automatically generate triplets, allowing users to easily and efficiently annotate the correspondences. Using both geometric and perceptual correspondences, we train a network for expression domain translation from human to avatar. Extensive experimental results and user studies demonstrate that even nonprofessional users can apply our method to generate high-quality facial expression retargeting results with less time and effort. Comment: IEEE Transactions on Visualization and Computer Graphics (TVCG), to appear
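The triplet data structure mentioned above can be made concrete with a standard triplet margin loss over latent codes, sketched below. The vectors and the margin value are hypothetical, and the paper's actual networks and training objective are not reproduced here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Perceptual triplet loss: pull the preferred avatar expression code
    (positive) closer to the human expression's latent code (anchor) than
    the rejected one (negative), by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

A satisfied triplet (preferred expression already closer) contributes zero loss; a violated one contributes the margin-shifted distance gap, steering the domain-translation network toward the annotated preferences.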

    RaBit: Parametric Modeling of 3D Biped Cartoon Characters with a Topological-consistent Dataset

    Full text link
    Assisting people in efficiently producing visually plausible 3D characters has always been a fundamental research topic in computer vision and computer graphics. Recent learning-based approaches have achieved unprecedented accuracy and efficiency in the area of 3D real human digitization. However, none of the prior works focus on modeling 3D biped cartoon characters, which are also in great demand in gaming and filming. In this paper, we introduce 3DBiCar, the first large-scale dataset of 3D biped cartoon characters, and RaBit, the corresponding parametric model. Our dataset contains 1,500 topologically consistent high-quality 3D textured models which are manually crafted by professional artists. Built upon the data, RaBit is thus designed with a SMPL-like linear blend shape model and a StyleGAN-based neural UV-texture generator, simultaneously expressing the shape, pose, and texture. To demonstrate the practicality of 3DBiCar and RaBit, various applications are conducted, including single-view reconstruction, sketch-based modeling, and 3D cartoon animation. For the single-view reconstruction setting, we find a straightforward global mapping from input images to the output UV-based texture maps tends to lose detailed appearances of some local parts (e.g., nose, ears). Thus, a part-sensitive texture reasoner is adopted to make all important local areas perceived. Experiments further demonstrate the effectiveness of our method both qualitatively and quantitatively. 3DBiCar and RaBit are available at gaplab.cuhk.edu.cn/projects/RaBit. Comment: CVPR 2023, Project page: https://gaplab.cuhk.edu.cn/projects/RaBit
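The SMPL-like linear blend shape component can be sketched in a few lines: a template mesh plus a weighted sum of learned shape-basis offsets. The array shapes and names below are illustrative assumptions, not RaBit's actual parameterization.

```python
import numpy as np

def blend_shape(template, shape_basis, betas):
    """SMPL-style linear shape model: offset the template mesh by a
    weighted sum of learned shape-basis directions.
    template: (N, 3) vertices; shape_basis: (N, 3, K); betas: (K,)."""
    return template + shape_basis @ betas
```

Pose-dependent deformation and the StyleGAN-based texture generator sit on top of this shape layer in the full model; here only the linear shape term is shown.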

    Psychophysical investigation of facial expressions using computer animated faces

    No full text
    The human face is capable of producing a large variety of facial expressions that supply important information for communication. As was shown in previous studies using unmanipulated video sequences, movements of single regions like mouth, eyes, and eyebrows as well as rigid head motion play a decisive role in the recognition of conversational facial expressions. Here, flexible but at the same time realistic computer animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar in general is a useful tool for the investigation of facial expressions, although improvements have to be made to achieve higher recognition accuracy for certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.

    Spatial Motion Doodles: Sketching Animation in VR Using Hand Gestures and Laban Motion Analysis

    Get PDF
    We present a method for easily drafting expressive character animation by playing with instrumented rigid objects. We parse the input 6D trajectories (position and orientation over time), called spatial motion doodles, into sequences of actions and convert them into detailed character animations using a dataset of parameterized motion clips which are automatically fitted to the doodles in terms of global trajectory and timing. Moreover, we capture the expressiveness of user manipulation by analyzing Laban effort qualities in the input spatial motion doodles and transferring them to the synthetic motions we generate. We validate the ease of use of our system and the expressiveness of the resulting animations through a series of user studies, demonstrating the relevance of our approach for interactive digital storytelling applications dedicated to children and non-expert users, as well as for providing fast drafting tools for animators.
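As a loose illustration of extracting effort-like qualities from a sampled trajectory, the sketch below uses mean jerk magnitude as a crude proxy for Laban's Time effort (sudden vs. sustained). This is an assumption-laden stand-in, not the paper's actual Laban motion analysis.

```python
import numpy as np

def time_effort(positions, dt):
    """Crude proxy for Laban's Time effort: mean jerk magnitude of a
    sampled 3D trajectory (finite differences of position three times).
    High values suggest 'sudden' motion, low values 'sustained'.
    Illustrative only -- not the paper's actual feature set."""
    vel = np.diff(positions, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    return float(np.mean(np.linalg.norm(jerk, axis=1)))
```

A constant-velocity doodle scores zero, while a jittery back-and-forth doodle scores high; such scalar descriptors could then modulate the parameterized motion clips fitted to the doodle.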

    Drawing from motion capture : developing visual languages of animation

    Get PDF
    The work presented in this thesis aims to explore novel approaches to combining motion capture with drawing and 3D animation. As the art form of animation matures, hybrid techniques become more feasible, and crosses between traditional and digital media provide new opportunities for artistic expression. 3D computer animation is used for its advances in keyframing and rendering, which result in complex pipelines where technical and artistic specialists from different areas contribute to the end result. Motion capture is mostly used for realistic animation, more often than not for live-action filmmaking as a visual effect. Realistic animated films depend on retargeting techniques designed to preserve actors' performances with a high degree of accuracy. In this thesis, we investigate alternative production methods that do not depend on retargeting and provide animators with greater options for experimentation and expressivity. As motion capture data is a great source of naturalistic movement, we aim to combine it with interactive methods such as digital sculpting and 3D drawing. Whereas drawing is predominantly used in preproduction, in both realistic animation and visual effects, we embed it instead in alternative production methods, where artists can benefit from improvisation and expression while immersed in a three-dimensional environment. Additionally, we apply these alternative methods to the visual development of animation, where they become relevant for the creation of specific visual languages that can be used to articulate concrete ideas for storytelling in animation.

    Mesh modification using deformation gradients

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 117-131). Computer-generated character animation, where human or anthropomorphic characters are animated to tell a story, holds tremendous potential to enrich education, human communication, perception, and entertainment. However, current animation procedures rely on a time-consuming and difficult process that requires both artistic talent and technical expertise. Despite the tremendous amount of artistry, skill, and time dedicated to the animation process, there are few techniques to help with reuse. Although individual aspects of animation are well explored, there is little work that extends beyond the boundaries of any one area. As a consequence, the same procedure must be followed for each new character without the opportunity to generalize or reuse technical components. This dissertation describes techniques that ease the animation process by offering opportunities for reuse and a more intuitive animation formulation. A differential specification of arbitrary deformation provides a general representation for adapting deformation to different shapes, computing semantic correspondence between two shapes, and extrapolating natural deformation from a finite set of examples. Deformation transfer adds a general-purpose reuse mechanism to the animation pipeline by transferring any deformation of a source triangle mesh onto a different target mesh. The transfer system uses a correspondence algorithm to build a discrete many-to-many mapping between the source and target triangles that permits transfer between meshes of different topology. Results demonstrate retargeting of both kinematic poses and non-rigid deformations, as well as transfer between characters of different topological and anatomical structure. Mesh-based inverse kinematics extends the idea of traditional skeleton-based inverse kinematics to meshes by allowing the user to pose a mesh via direct manipulation. The user indicates the class of meaningful deformations by supplying examples that can be created automatically with deformation transfer, sculpted, scanned, or produced by any other means. This technique is distinguished from traditional animation methods since it avoids the expensive character setup stage. It is distinguished from existing mesh editing algorithms since the user retains the freedom to specify the class of meaningful deformations. Results demonstrate an intuitive interface for posing meshes that requires only a small amount of user effort. by Robert Walker Sumner. Ph.D.
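The deformation-gradient representation at the heart of this work can be sketched with the standard construction from deformation transfer: each triangle is augmented with a fourth vertex along its scaled normal so that a 3x3 edge matrix is invertible, and the gradient is the deformed frame times the inverse rest frame. Function names are illustrative; the full correspondence and transfer machinery is not reproduced here.

```python
import numpy as np

def deformation_gradient(rest_tri, deformed_tri):
    """Per-triangle deformation gradient in the deformation-transfer style:
    build a 3x3 frame from two edge vectors plus the triangle normal scaled
    by the inverse square root of its length (so the frame is invertible),
    then F = frame(deformed) @ inv(frame(rest))."""
    def frame(tri):
        v1, v2, v3 = tri
        e1, e2 = v2 - v1, v3 - v1
        n = np.cross(e1, e2)
        n = n / np.sqrt(np.linalg.norm(n))  # fourth-vertex offset direction
        return np.column_stack([e1, e2, n])
    return frame(deformed_tri) @ np.linalg.inv(frame(rest_tri))
```

An undeformed triangle yields the identity gradient, and a uniform scale yields a scaled identity; transfer then reproduces source gradients on corresponding target triangles while keeping the target mesh consistent.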