
    Motion capture and human pose reconstruction from a single-view video sequence

    We propose a framework to reconstruct the 3D pose of a human for animation from a sequence of single-view video frames. Pose construction starts with background estimation; the performer's silhouette is then extracted from each frame using image subtraction. The body silhouettes are labeled automatically using a model-based approach, and the 3D pose is finally constructed from the labeled silhouette by assuming orthographic projection. The proposed approach does not require camera calibration. It assumes that the input video has a static background, exhibits no significant perspective effects, and shows the performer in an upright position. The approach requires minimal user interaction. (C) 2013 Elsevier Inc. All rights reserved.
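    A minimal sketch of the first two stages described above (background estimation and silhouette extraction by image subtraction), assuming grayscale frames stacked in a NumPy array and an illustrative threshold; it is not the authors' implementation.

```python
# Hedged sketch: estimate a static background as the per-pixel median of the
# frame stack, then extract a rough silhouette per frame by thresholding the
# absolute difference. The array shape and threshold are assumptions.
import numpy as np

def estimate_background(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale stack; returns the (H, W) median background."""
    return np.median(frames, axis=0)

def extract_silhouette(frame: np.ndarray, background: np.ndarray,
                       threshold: float = 25.0) -> np.ndarray:
    """Binary foreground mask via image subtraction against the background."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff > threshold

# Usage on a loaded frame stack:
# bg = estimate_background(frames)
# masks = np.stack([extract_silhouette(f, bg) for f in frames])
```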

    Learning from a MISTAKE (MISconception Teaching Animations in Knowledge Extension)

    The use of traditional animations in biology-based courses to explain scientific processes has increased significantly in recent years; however, their effectiveness in undergraduate learning is still not completely clear. In addition, the best approach to using and implementing these videos so that students realize their full learning potential is still up for debate. We have recently started to explore these questions in the Human Biology Program at the University of Toronto using a novel approach. Two versions of animated biology video content were developed in three different undergraduate courses: a “traditional” version that correctly explained a theory, process or study, and a version that used “planned misconceptions” or “mistakes” to explain the same theory, process or study. The approaches to using and evaluating the two versions in each course were unique and spanned a variety of science disciplines. In turn, this provided an opportunity to gauge whether one style of animation offers advantages over the other, which animation style students were more comfortable with, and which generated greater student engagement. All of these points will be discussed, but most importantly we will attempt to identify which video version, “traditional” or “mistake”, better reinforced student learning of key concepts. Finally, audience engagement will be solicited by viewing short clips from both video versions (in varying order) and working through small exercises (e.g., iClicker) to gauge understanding of key concepts in each course example.

    Automatic Video Classification

    Video usage has grown many-fold within the past few years, driven in large part by rising Internet bandwidth speeds. As of today, significant human effort is needed to categorize these video files, so a successful automatic video classification method can substantially help to reduce the growing amount of cluttered video data on the Internet. This research project aims to find a successful model for video classification. We utilized several visual and audio data analysis methods to build the classification model. For the classification classes, we hand-picked News, Animation and Music videos to carry out the experiments. A total of 445 video files from the three classes were analyzed to build classification models based on Naïve Bayes and Support Vector Machine classifiers. To gather the final results, we developed a “weighted voting - meta classifier” model. Our approach attained an average success rate of 90% across the three classification classes.
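    The sketch below shows one plausible form of a weighted-voting meta-classifier over Naïve Bayes and SVM base models, in the spirit of the abstract above; the feature extraction step, the weights, and the use of scikit-learn are assumptions rather than the authors' setup.

```python
# Illustrative sketch only: a weighted soft-voting ensemble over Naive Bayes
# and SVM classifiers. Per-video visual/audio feature vectors are assumed to
# be precomputed elsewhere.
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_meta_classifier(nb_weight: float = 1.0, svm_weight: float = 2.0):
    nb = GaussianNB()
    svm = make_pipeline(StandardScaler(), SVC(probability=True))
    # Soft voting averages class probabilities, scaled by per-model weights.
    return VotingClassifier(estimators=[("nb", nb), ("svm", svm)],
                            voting="soft", weights=[nb_weight, svm_weight])

# Usage with feature matrix X (n_videos, n_features) and labels y drawn from
# {"News", "Animation", "Music"}:
# clf = build_meta_classifier().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```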

    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation generation framework to create new types of animations from raw 3D surface or point cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape and is thus particularly suitable for the output of performance capture platforms. In our system, a suitable virtual representation of the actor is built from real captures, allowing seamless combination and simulation with virtual external forces and objects, in which the original captured actor can be reshaped, disassembled or reassembled according to user-specified virtual physics. Instead of using the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumption and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
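    For intuition about the volumetric building block mentioned above, here is a small sketch of a Centroidal Voronoi tessellation computed by Lloyd iterations on a point-sampled volume; the site count, iteration count, and brute-force assignment are illustrative assumptions, and the paper's tracking and physics coupling are not reproduced.

```python
# Hedged sketch of a CVT: assign volume samples to their nearest site, then
# move each site to the centroid of its cell, and repeat.
import numpy as np

def lloyd_cvt(samples: np.ndarray, n_sites: int = 64, iters: int = 20,
              seed: int = 0) -> np.ndarray:
    """samples: (N, 3) points filling the shape; returns (n_sites, 3) CVT sites."""
    rng = np.random.default_rng(seed)
    sites = samples[rng.choice(len(samples), n_sites, replace=False)].astype(np.float64)
    for _ in range(iters):
        # Nearest-site assignment (brute force, for clarity only).
        dists = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_sites):
            members = samples[labels == k]
            if len(members):
                sites[k] = members.mean(axis=0)  # move site to its cell centroid
    return sites
```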

    Visual Importance-Biased Image Synthesis Animation

    Current ray tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work developed an overall approach to applying visual attention to progressive and adaptive ray-tracing techniques. The approach yields large computational savings by modulating the supersampling rates in an image according to the visual importance of the region being rendered. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as further efficiency savings are expected for animated scenes. Applications of this approach include entertainment, visualisation and simulation.
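    A brief sketch of the core idea of importance-modulated supersampling, assuming a per-pixel importance map in [0, 1] and illustrative sample bounds; the attention model and renderer in the paper are not reproduced here.

```python
# Sketch under stated assumptions: allocate more supersamples to visually
# important pixels and fewer elsewhere, so ray-tracing effort roughly follows
# predicted visual attention.
import numpy as np

def samples_per_pixel(importance: np.ndarray, min_spp: int = 1,
                      max_spp: int = 16) -> np.ndarray:
    """Map an (H, W) importance map in [0, 1] to integer supersampling rates."""
    spp = min_spp + importance * (max_spp - min_spp)
    return np.rint(spp).astype(int)

# Example: an importance map peaked at the image centre yields max_spp there
# and min_spp toward the borders.
```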

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices, which limits their application in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters, and through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. The problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could appear in the final results, and this sensitivity is even stronger for video. Our solution is trained in a pre-processing stage, in a supervised manner and without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds and movements, handles people of different ethnicities, and can operate in real time.
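    To make the "projection into a low-dimensional latent space of expression parameters" concrete, here is a minimal PyTorch-style encoder sketch; the architecture, parameter count, and input size are assumptions and do not reflect the EgoFace network, whose adversarially trained renderer is omitted.

```python
# Hypothetical sketch: a small CNN that maps an egocentric RGB frame to a
# low-dimensional vector of facial-expression parameters, which a separate
# (adversarially trained) renderer would turn into a videorealistic frame.
import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    def __init__(self, n_params: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_params)  # latent expression parameters

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        """rgb: (B, 3, H, W) egocentric frame -> (B, n_params) expression code."""
        return self.head(self.features(rgb).flatten(1))

# params = ExpressionEncoder()(torch.rand(1, 3, 128, 128))  # -> shape (1, 64)
```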