84 research outputs found

    A multi-resolution approach for adapting close character interaction

    Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multi-resolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems so that the user can observe the adapted motion as the movements are played back at run time. Our approach is highly parallelizable, and achieves high performance on multi-core architectures. The method can be applied to a wide range of applications, including motion editing systems for animators and motion retargeting systems for humanoid robots.
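The temporal divide-and-conquer idea can be sketched as splitting the frame range into overlapping windows that are then optimized independently. This is a minimal sketch only; the window and overlap sizes, and the stitching strategy, are illustrative assumptions rather than the paper's actual scheme.

```python
def split_windows(n_frames, window, overlap):
    """Divide a motion of n_frames into overlapping temporal windows.

    Each window can then be optimized independently (and in parallel on
    a multi-core machine), with the overlapping frames used to keep the
    stitched solution temporally coherent.
    """
    step = window - overlap
    spans = []
    start = 0
    while start + window < n_frames:
        spans.append((start, start + window))
        start += step
    spans.append((start, n_frames))  # final window absorbs the remainder
    return spans
```

For example, `split_windows(10, 4, 1)` yields three windows that share one frame at each boundary, so each sub-problem stays small while the playback can begin before all windows are solved.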

    Computational design of skinned Quad-Robots

    We present a computational design system that assists users in modeling, optimizing, and fabricating quad-robots with soft skins. Our system addresses the challenging task of predicting their physical behavior by fully integrating the multibody dynamics of the mechanical skeleton and the elastic behavior of the soft skin. The motion control strategy uses an alternating optimization scheme to avoid an expensive optimization over the full space-time problem, interleaving space-time optimization for the skeleton with frame-by-frame optimization for the full dynamics. The output is a set of motor torques that drive the robot along a user-prescribed motion trajectory. We also provide a collection of convenient engineering tools and empirical manufacturing guidance to support the fabrication of the designed quad-robot. We validate the feasibility of designs generated with our system through physics simulations and with a physically fabricated prototype.
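The alternating optimization described above can be illustrated with a generic coordinate-descent loop; the toy quadratic objective and its closed-form step functions below are demonstration-only assumptions, standing in for the paper's far richer skeleton and skin sub-problems.

```python
def alternate_minimize(x0, y0, step_x, step_y, iters=50):
    """Alternate between two sub-problems: update x with y fixed, then
    y with x fixed -- the same interleaving pattern as the skeleton
    (space-time) and skin (frame-by-frame) passes described above."""
    x, y = x0, y0
    for _ in range(iters):
        x = step_x(x, y)
        y = step_y(x, y)
    return x, y

# Toy objective f(x, y) = (x - 1)**2 + (x - y)**2 with closed-form
# coordinate updates; the alternation converges to x = y = 1.
x, y = alternate_minimize(0.0, 0.0,
                          step_x=lambda x, y: (1 + y) / 2,
                          step_y=lambda x, y: x)
```

The appeal of the pattern is that each sub-problem stays cheap: neither step ever has to solve the coupled system in one shot.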

    Supplementing Frequency Domain Interpolation Methods for Character Animation

    The animation of human characters entails difficulties exceeding those met when simulating objects, machines or plants. A person's gait is a product of nature, affected by mood and physical condition, and small deviations from natural movement are perceived with ease by an unforgiving audience. Motion capture technology is frequently employed to record human movement, and subsequent playback on a skeleton underlying the character being animated conveys many of the subtleties of the original motion. Played-back recordings are of limited value, however, when integration in a virtual environment requires movements beyond those in the motion library, creating a need for the synthesis of new motion from pre-recorded sequences. An existing approach involves interpolation between motions in the frequency domain, with a blending space defined by a triangle network whose vertices represent input motions. It is this branch of character animation which is supplemented by the methods presented in this thesis, with work undertaken in three distinct areas. The first is a streamlined approach to previous work. It provides benefits including an efficiency gain in certain contexts, and a very different perspective on triangle network construction in which the networks become adjustable and intuitive user-interface devices, with an increased flexibility allowing a greater range of motions to be blended than was possible with previous networks. Interpolation-based synthesis can never exhibit the same motion variety as animation methods based on the playback of rearranged frame sequences. Limitations such as this were addressed by the second phase of work, with the creation of hybrid networks. These novel structures use properties of frequency-domain triangle blending networks to seamlessly integrate playback-based animation within them. The third area focussed on was distortion found in both frequency- and time-domain blending. A new technique, single-source harmonic switching, was devised which greatly reduces this distortion and adds to the benefits of blending in the frequency domain.
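The frequency-domain interpolation at the heart of this work can be sketched by blending the Fourier coefficients of time-aligned joint-angle signals. This is a sketch under simplifying assumptions: the signals are taken to be pre-aligned and of equal length, and the weights are supplied directly rather than derived from a triangle network as in the thesis.

```python
import numpy as np

def blend_frequency(signals, weights):
    """Blend periodic joint-angle signals by interpolating their Fourier
    coefficients (a frequency-domain blend). In a full system the
    weights would be barycentric coordinates in the triangle network,
    and the signals would first be phase-aligned."""
    spectra = [np.fft.rfft(s) for s in signals]
    blended = sum(w * sp for w, sp in zip(weights, spectra))
    return np.fft.irfft(blended, n=len(signals[0]))
```

Because the Fourier transform is linear, equal weights on two sinusoids of amplitude 1 and 3 yield a sinusoid of amplitude 2; the interesting behaviour appears once the inputs differ in phase and harmonic content.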

    Topology-based character motion synthesis

    This thesis tackles the problem of automatically synthesizing motions of close character interactions, such as those which appear in animations of wrestling and dancing. Designing such motions is a daunting task even for experienced animators, as the close contacts between the characters can easily result in collisions or penetrations of the body segments. The main problem lies in the conventional representation of the character states, which is based on joint angles or joint positions. As the relationships between the body segments are not encoded in such a representation, path-planning for valid motions to switch from one posture to another requires intense random sampling and collision detection in the state space. To tackle this problem, we represent the status of the characters using their spatial relationships. Describing the scene in terms of spatial relationships makes it easier for users and animators to analyze the scene and synthesize close interactions between characters. We first propose a method to encode the relationship of the body segments using the Gauss Linking Integral (GLI), a value that specifies how much the body segments are wound around each other. We show how it can be applied to content-based retrieval of motion data of close interactions, and also to the synthesis of close character interactions. Next, we propose a representation called the Interaction Mesh, a volumetric mesh composed of points located at the joint positions of the characters and vertices of the environment. This representation is more general than the tangle-based representation, as it can describe interactions that involve neither tangling nor contacts. We describe how it can be applied to motion editing and retargeting of close character interactions while avoiding penetrations and pass-throughs of the body segments. The application of our research is not limited to computer animation but extends to robotics, where making robots conduct complex tasks such as tangling, wrapping, holding, and knotting is essential if they are to assist humans in daily life.
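The Gauss Linking Integral mentioned above can be approximated numerically for two polylines. The midpoint quadrature below is a sketch only (discrete closed-form expressions exist and would be used in practice), and the linked-circle example is illustrative.

```python
import numpy as np

def gli(curve_a, curve_b):
    """Approximate the Gauss Linking Integral between two polylines
    (N x 3 vertex arrays, first vertex repeated last for closed curves)
    by midpoint quadrature of
    (1 / 4 pi) * integral (dA x dB) . (A - B) / |A - B|^3."""
    da, db = np.diff(curve_a, axis=0), np.diff(curve_b, axis=0)
    mid_a = 0.5 * (curve_a[:-1] + curve_a[1:])
    mid_b = 0.5 * (curve_b[:-1] + curve_b[1:])
    total = 0.0
    for i in range(len(da)):
        r = mid_a[i] - mid_b                 # midpoint separations
        cross = np.cross(da[i], db)          # dA x dB per segment pair
        dist3 = np.linalg.norm(r, axis=1) ** 3
        total += np.sum(np.einsum('ij,ij->i', cross, r) / dist3)
    return total / (4.0 * np.pi)
```

Two linked circles (a Hopf link) give |GLI| close to 1 while well-separated curves give a value near 0, which is what makes the quantity usable as a tangle descriptor for limbs and body segments.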

    Relationship descriptors for interactive motion adaptation

    In this thesis we present an interactive motion adaptation scheme for close interactions between skeletal characters and mesh structures, such as navigating restricted environments and manipulating tools. We propose a new spatial-relationship-based representation to encode character-object interactions, describing the kinematics of the body parts by the weighted sum of vectors relative to descriptor points selectively sampled over the scene. In contrast to previous discrete representations, which either handle only static spatial relationships or require costly offline optimization, our continuous framework smoothly adapts the motion of a character to deformations in the objects and character morphologies in real time while preserving the original context and style of the scene. We demonstrate the strength of working in our relationship-descriptor space for motion editing under large environment deformations by integrating procedural animation techniques, such as repositioning contacts in an interaction while preserving the context and style of the original animation. Furthermore, we propose a method for adapting animations from template objects to novel ones by solving for mappings between the two in our relationship-descriptor space. This transfers an entire motion from one object to another of different geometry while ensuring continuity across all frames of the animation, as opposed to mapping static poses only, as is traditionally done. The experimental results show that our method can be used for a wide range of applications, including motion retargeting for dynamically changing scenes, multi-character interactions, interactive character control, and deformation transfer for scenes that involve close interactions. We further demonstrate a key use case in convincingly retargeting locomotion to uneven terrains and curving paths for bipeds and quadrupeds. Our framework is useful for artists who need to design animated scenes interactively, and for modern computer games that allow users to design their own virtual characters, objects, and environments: existing motion data can be recycled for a large variety of configurations without manually reconfiguring motion from scratch or storing expensive combinations of animations in memory. Most importantly, all of this is achieved in real time.
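The spatial-relationship encoding can be sketched as storing, for each body-part position, normalized weights and offset vectors to descriptor points sampled over the scene, then re-evaluating the weighted sum when the descriptors move. The inverse-distance weighting here is an assumption for the sketch; the thesis's weighting scheme is more elaborate.

```python
import numpy as np

def encode(point, descriptors):
    """Encode a body-part position relative to descriptor points (M x 3)
    sampled over the scene: per-descriptor offset vectors plus
    normalized inverse-distance weights (weighting choice is
    illustrative)."""
    offsets = point - descriptors
    w = 1.0 / (np.linalg.norm(offsets, axis=1) + 1e-9)
    return w / w.sum(), offsets

def decode(weights, offsets, descriptors):
    """Reconstruct the position from the (possibly moved) descriptors,
    so the motion follows deformations of the object or environment."""
    return (weights[:, None] * (descriptors + offsets)).sum(axis=0)
```

When the descriptors translate with a deforming object, the decoded position follows them; the full framework handles general deformations and blends many such descriptors per frame to keep the adapted motion continuous.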

    Interactive 3D video editing

    We present a generic and versatile framework for interactive editing of 3D video footage. Our framework combines the advantages of conventional 2D video editing with the power of more advanced, depth-enhanced 3D video streams. Our editor takes 3D video as input and writes both 2D and 3D video formats as output. Its underlying core data structure is a novel 4D spatio-temporal representation which we call the video hypervolume. Conceptually, the processing loop comprises three fundamental operators: slicing, selection, and editing. The slicing operator allows users to visualize arbitrary hyperslices from the 4D data set. The selection operator labels subsets of the footage for spatio-temporal editing. This operator includes a 4D graph-cut based algorithm for object selection. The actual editing operators include cut & paste, affine transformations, and compositing with other media, such as images and 2D video. For high-quality rendering, we employ EWA splatting with view-dependent texturing and boundary matting. We demonstrate the applicability of our methods to post-production of 3D video.
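The slicing operator can be illustrated on a plain 4D array standing in for the video hypervolume; the axis ordering (t, x, y, z) is an assumption for this sketch, and the real structure is of course far richer than a dense array.

```python
import numpy as np

def hyperslice(hypervolume, axis, index):
    """Extract a 3D hyperslice from a 4D spatio-temporal volume.
    Fixing the time axis yields one spatial frame; fixing a spatial
    axis yields a spatio-temporal cut through the footage."""
    return np.take(hypervolume, index, axis=axis)
```

Selection and editing then operate on such slices, or directly on 4D sub-volumes, before the result is resampled for 2D or 3D output.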

    Mesh modification using deformation gradients

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. By Robert Walker Sumner. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 117-131).
    Computer-generated character animation, where human or anthropomorphic characters are animated to tell a story, holds tremendous potential to enrich education, human communication, perception, and entertainment. However, current animation procedures rely on a time-consuming and difficult process that requires both artistic talent and technical expertise. Despite the tremendous amount of artistry, skill, and time dedicated to the animation process, there are few techniques to help with reuse. Although individual aspects of animation are well explored, there is little work that extends beyond the boundaries of any one area. As a consequence, the same procedure must be followed for each new character without the opportunity to generalize or reuse technical components. This dissertation describes techniques that ease the animation process by offering opportunities for reuse and a more intuitive animation formulation. A differential specification of arbitrary deformation provides a general representation for adapting deformation to different shapes, computing semantic correspondence between two shapes, and extrapolating natural deformation from a finite set of examples. Deformation transfer adds a general-purpose reuse mechanism to the animation pipeline by transferring any deformation of a source triangle mesh onto a different target mesh. The transfer system uses a correspondence algorithm to build a discrete many-to-many mapping between the source and target triangles that permits transfer between meshes of different topology. Results demonstrate retargeting both kinematic poses and non-rigid deformations, as well as transfer between characters of different topological and anatomical structure. Mesh-based inverse kinematics extends the idea of traditional skeleton-based inverse kinematics to meshes by allowing the user to pose a mesh via direct manipulation. The user indicates the class of meaningful deformations by supplying examples that can be created automatically with deformation transfer, sculpted, scanned, or produced by any other means. This technique is distinguished from traditional animation methods since it avoids the expensive character setup stage. It is distinguished from existing mesh editing algorithms since the user retains the freedom to specify the class of meaningful deformations. Results demonstrate an intuitive interface for posing meshes that requires only a small amount of user effort.
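The differential (deformation-gradient) representation can be sketched per triangle: build a 3x3 frame from two edge vectors plus a scaled normal, and take the affine map between the rest and deformed frames. This follows the standard construction from the dissertation; the rotation check in the test is illustrative.

```python
import numpy as np

def triangle_frame(tri):
    """3x3 frame for a triangle (3 x 3 vertex array): two edge vectors
    plus their cross product scaled by 1/sqrt of its length, as in the
    deformation-gradient construction."""
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    n = np.cross(e1, e2)
    n = n / np.sqrt(np.linalg.norm(n))
    return np.column_stack([e1, e2, n])

def deformation_gradient(rest, deformed):
    """Affine map F with F @ frame(rest) == frame(deformed). Transfer
    applies such per-triangle maps to a target mesh's triangles, with a
    correspondence and a consistency solve stitching them together."""
    return triangle_frame(deformed) @ np.linalg.inv(triangle_frame(rest))
```

A pure rotation of the source triangle yields exactly that rotation as its deformation gradient, which is why the representation factors deformation cleanly away from any particular shape.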