52 research outputs found

    Automatic Face Reenactment

    We propose an image-based facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it produces convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, in which the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: the image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity: it does not rely on a 3D face model, is robust under head motion, and does not require the source and target performances to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.
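    As a rough illustration of a matching score that combines appearance and motion, the sketch below ranks a source frame against a target frame. The grayscale features, frame-differencing motion term, and weighting parameter `alpha` are illustrative assumptions, not the authors' actual metric.

```python
# Hedged sketch: score how well a source frame matches a target frame by
# combining an appearance distance with a motion distance. All feature
# choices here (normalized grayscale crops, temporal differencing, the
# weight `alpha`) are assumptions for illustration only.
import numpy as np

def appearance_distance(src: np.ndarray, tgt: np.ndarray) -> float:
    """L2 distance between intensity-normalized face crops of equal size."""
    a = src.astype(np.float64).ravel()
    b = tgt.astype(np.float64).ravel()
    a /= np.linalg.norm(a) + 1e-8
    b /= np.linalg.norm(b) + 1e-8
    return float(np.linalg.norm(a - b))

def motion_distance(src_prev, src_cur, tgt_prev, tgt_cur) -> float:
    """Compare per-frame motion approximated by temporal differencing."""
    src_motion = src_cur.astype(np.float64) - src_prev.astype(np.float64)
    tgt_motion = tgt_cur.astype(np.float64) - tgt_prev.astype(np.float64)
    return float(np.linalg.norm(src_motion - tgt_motion)) / src_motion.size

def matching_score(src_prev, src_cur, tgt_prev, tgt_cur, alpha: float = 0.5) -> float:
    """Lower is better: weighted sum of appearance and motion distances."""
    return (alpha * appearance_distance(src_cur, tgt_cur)
            + (1.0 - alpha) * motion_distance(src_prev, src_cur, tgt_prev, tgt_cur))
```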

    Algebraic error analysis of collinear feature points for camera parameter estimation

    In general, feature points and camera parameters can only be estimated with limited accuracy due to noisy images. In the case of collinear feature points, it is possible to benefit from this geometrical regularity by correcting the feature points to lie on the supporting estimated straight line, yielding increased accuracy of the estimated camera parameters. However, with respect to Maximum-Likelihood (ML) estimation, this procedure is incomplete and suboptimal. An optimal solution must also determine the error covariance of the corrected features. In this paper, a complete theoretical covariance propagation analysis is performed, starting from the error of the feature points up to the error of the estimated camera parameters. Additionally, the corresponding Fisher Information Matrices are determined, and fundamental relationships between the number and distance of collinear points and the corresponding error variances are revealed algebraically. To demonstrate the impact of collinearity, experiments are conducted with covariance propagation analyses, showing a significant reduction of the error variances of the estimated parameters. © 2010 Elsevier Inc. All rights reserved.
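    For orientation, the standard first-order form of such a covariance propagation is sketched below; the notation (observation vector x, estimate, Jacobians J and G) is generic and not necessarily the paper's own.

```latex
% Generic first-order covariance propagation and Cramer-Rao bound; the
% symbols are illustrative and not necessarily the paper's notation.
% (Assumes amsmath/amssymb.)
% Forward propagation of the error of an estimate computed from noisy
% observations x with covariance \Sigma_x:
\[
  \hat{\theta} = f(\mathbf{x}), \qquad
  \Sigma_{\hat{\theta}} \;\approx\; J\,\Sigma_{\mathbf{x}}\,J^{\top},
  \qquad J = \left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\mathbf{x}} .
\]
% Lower bound via the Fisher Information Matrix for a Gaussian observation
% model x = g(\theta) + n with noise covariance \Sigma_x:
\[
  I(\theta) = G^{\top}\,\Sigma_{\mathbf{x}}^{-1}\,G, \qquad
  G = \frac{\partial g}{\partial \theta}, \qquad
  \operatorname{Cov}(\hat{\theta}) \;\succeq\; I(\theta)^{-1} .
\]
```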

    Autoinhibition of the kinesin-2 motor KIF17 via dual intramolecular mechanisms

    Kinesin-2 motor KIF17 autoinhibition is visualized in vivo; in the absence of cargo, this homodimer’s C-terminal tail blocks microtubule binding, and a coiled-coil segment blocks motility.

    A Naturally Associated Rhizobacterium of Arabidopsis thaliana Induces a Starvation-Like Transcriptional Response while Promoting Growth

    Plant growth promotion by rhizobacteria is a known phenomenon, but the underlying mechanisms are poorly understood. We searched for plant growth-promoting rhizobacteria that are naturally associated with Arabidopsis thaliana in order to investigate the molecular mechanisms involved in plant growth promotion. We isolated a previously undescribed Pseudomonas bacterium (Pseudomonas sp. G62) from the roots of field-grown Arabidopsis plants and analyzed its effect on plant growth, gene expression, and the levels of sugars and amino acids in the host plant. Inoculation with Pseudomonas sp. G62 promoted plant growth under various growth conditions. Microarray analysis revealed rapid changes in transcript levels of genes annotated to energy, sugar, and cell wall metabolism in plants 6 h after root inoculation with P. sp. G62. The expression of several of these genes remained stable over weeks, but appeared differentially regulated in roots and shoots. The global gene expression profile observed after inoculation with P. sp. G62 showed a striking resemblance to previously described carbohydrate starvation experiments, although plants were not depleted of soluble sugars and even showed a slight increase in the sucrose level in roots 5 weeks after inoculation. We suggest that the starvation-like transcriptional phenotype, despite steady-state sucrose levels not being reduced, is induced by an as yet unknown signal from the bacterium that simulates sugar starvation. We discuss the potential effects of this sugar starvation signal on plant growth promotion.

    Combining Photometric Normals and Multi-View Stereo for 3D Reconstruction

    Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance has not yet been tested for such imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even though participants felt less capable, but tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems.

    Multiple Active Speaker Localization Based on Audio-visual Fusion in Two Stages

    Localization of multiple active speakers in natural environments with only two microphones is a challenging problem. Reverberation degrades the performance of speaker localization based exclusively on directional cues. The audio modality alone has problems with localization accuracy, while the video modality alone has problems with false speaker activity detections. This paper presents an approach based on audio-visual fusion in two stages. In the first stage, speaker activity is detected based on audio-visual fusion that can handle false lip movements. In the second stage, a Gaussian fusion method is proposed to integrate the estimates of both modalities. As a consequence, the localization accuracy and robustness are significantly increased compared to the audio or video modality alone. Experimental results in various scenarios confirmed the improved performance of the proposed system.
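    As a rough illustration of Gaussian fusion of two location estimates, the sketch below combines a 1D azimuth estimate from audio with one from video via a product of Gaussians (inverse-variance weighting). The 1D setting and variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: fuse two 1D Gaussian estimates (e.g., a speaker's azimuth
# from audio and from video) by multiplying the Gaussians, which reduces to
# inverse-variance weighting of the means.
def gaussian_fusion(mu_audio: float, var_audio: float,
                    mu_video: float, var_video: float) -> tuple[float, float]:
    """Return the mean and variance of the fused Gaussian estimate."""
    w_audio = 1.0 / var_audio
    w_video = 1.0 / var_video
    var_fused = 1.0 / (w_audio + w_video)
    mu_fused = var_fused * (w_audio * mu_audio + w_video * mu_video)
    return mu_fused, var_fused

# Example: the audio estimate is noisier (larger variance), so the fused
# azimuth (28.8 degrees) lies closer to the video estimate.
print(gaussian_fusion(mu_audio=32.0, var_audio=16.0, mu_video=28.0, var_video=4.0))
```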

    Rapid Stereo-Vision Enhanced Face Recognition


    Material Memex: Automatic Material Suggestions for 3D Objects


    Automatically Rigging Multi-component Characters

    [Figure 1: Our approach creates rigs for multi-component meshes that can be mapped to an input animation skeleton.]
    Rigging an arbitrary 3D character by creating an animation skeleton is a time-consuming process even for experienced animators. In this paper, we present an algorithm that automatically creates animation rigs for multi-component 3D models, as they are typically found in online shape databases. Our algorithm takes as input a multi-component model and an input animation skeleton with associated motion data. It then creates a target skeleton for the input model, calculates rigid skinning weights, and computes a mapping between the joints of the target skeleton and the input animation skeleton. The automatic approach does not need additional semantic information, such as component labels or user-provided correspondences, and succeeds on a wide range of models whose numbers of components differ significantly. It implicitly handles large differences in scale and proportion between the input and target skeletons and can deal with certain morphological differences, e.g., if input and target have different numbers of limbs. The output of our algorithm can be directly used in a retargeting system to create a plausible animated character.
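    As a rough illustration of rigid skinning for multi-component models, the sketch below binds each component to a single nearest bone. The centroid-to-bone distance measure and the data layout are illustrative assumptions, not the paper's actual procedure.

```python
# Hedged sketch: rigid skinning for a multi-component mesh, where every
# component is bound entirely to its nearest skeleton bone (weight 1 to one
# bone, 0 to all others). Distances are measured from component centroids
# to bone segments, which is an assumption for illustration only.
import numpy as np

def point_segment_distance(p: np.ndarray, a: np.ndarray, b: np.ndarray) -> float:
    """Distance from point p to the line segment between a and b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def rigid_skinning(component_centroids: np.ndarray,
                   bones: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """Return, for each component, the index of the bone it is rigidly bound to."""
    assignment = np.empty(len(component_centroids), dtype=int)
    for i, centroid in enumerate(component_centroids):
        distances = [point_segment_distance(centroid, a, b) for a, b in bones]
        assignment[i] = int(np.argmin(distances))
    return assignment
```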