16 research outputs found

    Animal gaits from video

    We present a method for animating 3D models of animals from existing live video sequences such as wildlife documentaries. Videos are first segmented into binary images, on which Principal Component Analysis (PCA) is applied. The time-varying coordinates of the images in PCA space are then used to generate the 3D animation, through interpolation with Radial Basis Functions (RBF) of 3D pose examples associated with a small set of key-images extracted from the video. Beyond this processing pipeline, our main contributions are: an automatic method for selecting the best set of key-images for which the designer must provide 3D pose examples, which saves user time and effort by removing the need for manual selection within the video and trial-and-error choice of key-images and 3D pose examples; and a simple algorithm based on PCA images that resolves 3D pose prediction ambiguities, which are inherent to many animal gaits when only a monocular view is available. The method is first evaluated on sequences of synthetic images of animal gaits, for which full 3D data is available; we achieve a good-quality reconstruction of the input 3D motion from a single video sequence of its 2D rendering. We then illustrate the method by reconstructing animal gaits from live video of wildlife documentaries.
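    The core of the pipeline described above (PCA on binary silhouette images, then RBF interpolation of 3D pose examples keyed by PCA coordinates) can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation; the Gaussian kernel, the four retained components, and all function names are assumptions:

```python
import numpy as np

def pca_coords(frames, k=4):
    # frames: (n_frames, n_pixels) flattened binary silhouettes
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data; rows of vt are the principal "eigen-silhouettes"
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T          # (n_frames, k) PCA-space coordinates

def rbf_weights(key_coords, key_poses, eps=1.0):
    # Solve Phi @ W = P for a Gaussian RBF kernel Phi built on the key-images
    d = np.linalg.norm(key_coords[:, None] - key_coords[None, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)
    return np.linalg.solve(phi, key_poses)

def interpolate_pose(x, key_coords, weights, eps=1.0):
    # Blend the 3D pose examples according to distance in PCA space
    d = np.linalg.norm(key_coords - x, axis=-1)
    return np.exp(-(eps * d) ** 2) @ weights
```

    By construction, evaluating the interpolant at a key-image's PCA coordinate reproduces that key-image's 3D pose exactly; in-between frames receive a smooth blend.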

    Data-Driven Animation of Crowds

    In this paper we propose an original method to animate a crowd of virtual beings in a virtual environment. Instead of relying on models to describe the motion of people over time, we suggest using a priori knowledge of the crowd dynamics acquired from videos of real crowd situations. In our method this information is expressed as a time-varying motion field that accounts for a continuous flow of people over time. This motion descriptor is obtained through optical-flow estimation with a specific second-order regularization. The obtained motion fields are then used in a classical fixed-step-size integration scheme that animates a virtual crowd in real time. The power of our technique is demonstrated through various examples, and possible follow-ups to this work are also described.
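    A minimal sketch of the integration step described above: agents are advected through a time-varying 2D motion field using fixed-step Euler integration with bilinear sampling. The optical-flow estimation itself is omitted; the field is assumed given, and all names are illustrative:

```python
import numpy as np

def sample_flow(flow, pos):
    # Bilinearly sample a (H, W, 2) flow field at continuous (x, y) positions (n, 2)
    h, w, _ = flow.shape
    x = np.clip(pos[:, 0], 0, w - 1.001)
    y = np.clip(pos[:, 1], 0, h - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = x - x0, y - y0
    f00 = flow[y0, x0];     f10 = flow[y0, x0 + 1]
    f01 = flow[y0 + 1, x0]; f11 = flow[y0 + 1, x0 + 1]
    top = f00 * (1 - fx)[:, None] + f10 * fx[:, None]
    bot = f01 * (1 - fx)[:, None] + f11 * fx[:, None]
    return top * (1 - fy)[:, None] + bot * fy[:, None]

def advect(positions, flow_sequence, dt=1.0):
    # Fixed-step Euler integration of agent positions through the time-varying field
    traj = [positions]
    for flow in flow_sequence:
        positions = positions + dt * sample_flow(flow, positions)
        traj.append(positions)
    return traj
```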

    Creatures Great and SMAL: Recovering the Shape and Motion of Animals from Video

    We present a system to recover the 3D shape and motion of a wide variety of quadrupeds from video. The system comprises a machine-learning front-end which predicts candidate 2D joint positions, a discrete optimization which finds kinematically plausible joint correspondences, and an energy-minimization stage which fits a detailed 3D model to the image. In order to overcome the limited availability of motion-capture training data for animals, and the difficulty of generating realistic synthetic training images, the system is designed to work on silhouette data. The joint candidate predictor is trained on synthetically generated silhouette images, and at test time deep-learning methods or standard video segmentation tools are used to extract silhouettes from real data. The system is tested on animal videos from several species, and shows accurate reconstructions of 3D shape and pose.
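    The discrete-optimization stage described above, which selects kinematically plausible joint correspondences among candidate 2D detections, resembles a Viterbi-style dynamic program. The sketch below is an assumption about its general shape, not the paper's actual energy: it picks one candidate per frame minimizing detection cost plus a squared-displacement smoothness term between consecutive frames:

```python
import numpy as np

def select_candidates(unary, positions, smooth=1.0):
    """Viterbi-style DP over per-frame joint candidates.
    unary: (T, K) detection cost per candidate.
    positions: (T, K, 2) candidate 2D coordinates.
    Returns one candidate index per frame."""
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # pairwise cost: squared displacement between consecutive candidates
        d = np.sum((positions[t][None, :, :] - positions[t - 1][:, None, :]) ** 2, axis=-1)
        total = cost[:, None] + smooth * d      # (K_prev, K_cur)
        back[t] = np.argmin(total, axis=0)
        cost = total[back[t], np.arange(K)] + unary[t]
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```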

    Hierarchical retargetting of 2D motion fields to the animation of 3D plant models

    The complexity of animating trees, shrubs and foliage is an impediment to the efficient and realistic depiction of natural environments. This paper presents an algorithm to extract, from a single video sequence, motion fields of real shrubs under the influence of wind, and to transfer this motion to the animation of complex, synthetic 3D plant models. The extracted motion is retargeted without requiring physical simulation. First, feature tracking is applied to the video footage, allowing the 2D position and velocity of automatically identified features to be clustered. A key contribution of the method is that the hierarchy obtained through statistical clustering can be used to synthesize a 2D hierarchical geometric structure of branches that terminates according to the cut-off threshold of a classification algorithm. This step extracts both the shape and the motion of a hierarchy of feature groups that are identified as geometrical branches. The 2D hierarchy is then extended to three dimensions using the estimated spatial distribution of the features within each group. Another key contribution is that this 3D hierarchical structure can be efficiently used as a motion controller to animate any complex 3D model of similar but non-identical plants using a standard skinning algorithm. Thus, a single video source of a moving shrub becomes an input device for a large class of virtual shrubs. We illustrate the results on two examples of shrubs and one outdoor tree. Extensions to other outdoor plants are discussed.
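    For illustration, the statistical clustering step described above could look like a naive agglomerative scheme over feature-track descriptors (e.g. mean position and velocity), with the cut-off threshold ending the merging; the recorded merge order then provides the branch hierarchy. This is a hedged sketch under those assumptions; the paper's actual clustering algorithm and linkage are not specified here:

```python
import numpy as np

def cluster_tracks(descriptors, cutoff):
    """Naive agglomerative clustering of feature tracks.
    descriptors: (n, d) one descriptor per track.
    Repeatedly merges the closest pair of clusters (centroid linkage)
    until the smallest inter-cluster distance exceeds `cutoff`.
    Returns the final clusters and the merge history (the hierarchy)."""
    clusters = [[i] for i in range(len(descriptors))]
    merges = []
    while len(clusters) > 1:
        cents = np.array([descriptors[c].mean(axis=0) for c in clusters])
        d = np.linalg.norm(cents[:, None] - cents[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] > cutoff:
            break
        merges.append((clusters[i], clusters[j]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters, merges
```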

    Using Fourier Analysis To Generate Believable Gait Patterns For Virtual Quadrupeds

    Achieving a believable gait pattern for a virtual quadrupedal character requires a significant time investment from an animator. This thesis presents a prototype system for creating a foundational layer of natural-looking animation to serve as a starting point for an animator. Starting with video of an actual horse walking, joints are animated over the footage to create a rotoscoped animation. This animation represents the animal's natural motion. Joint angle values for the legs are sampled per frame of the animation and conditioned for Fourier analysis. The Fast Fourier Transform provides frequency information that is used to create mathematical descriptions of each joint's movement. A model representing the horse's overall gait pattern is created once each of the leg joints has been analyzed and defined. Lastly, a new rig for a virtual quadruped is created and its leg joints are animated using the gait-pattern model derived through the analysis.
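    The Fourier-analysis step described in the abstract can be illustrated with a small sketch: sample one joint's angle per frame, take its FFT, keep the dominant harmonics, and resynthesize a periodic description of that joint's movement. Function names and the harmonic-selection rule are assumptions, and the sketch presumes the clip covers a whole number of gait cycles:

```python
import numpy as np

def fourier_gait_model(angles, n_harmonics=3):
    """Fit a truncated Fourier series to one joint's angle curve.
    angles: per-frame joint angle samples over whole gait cycles.
    Returns a function theta(t), t in frames."""
    n = len(angles)
    coeffs = np.fft.rfft(angles) / n
    # keep the DC term plus the n_harmonics largest-magnitude harmonics
    keep = np.argsort(np.abs(coeffs[1:]))[::-1][:n_harmonics] + 1
    def theta(t):
        t = np.asarray(t, dtype=float)
        out = np.full_like(t, coeffs[0].real)
        for k in keep:
            # each kept harmonic contributes an amplitude/phase cosine term
            out += 2 * np.abs(coeffs[k]) * np.cos(2 * np.pi * k * t / n + np.angle(coeffs[k]))
        return out
    return theta
```

    A curve that really is a single sinusoid plus an offset is reproduced exactly by one harmonic, which is the sense in which the frequency data "describes" each joint's movement.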