
    FML: Face Model Learning from Videos

    Monocular image-based 3D reconstruction of faces is a long-standing problem in computer vision. Since image data is a 2D projection of a 3D face, the resulting depth ambiguity makes the problem ill-posed. Most existing methods rely on data-driven priors that are built from limited 3D face scans. In contrast, we propose multi-frame video-based self-supervised training of a deep network that (i) learns a face identity model both in shape and appearance while (ii) jointly learning to reconstruct 3D faces. Our face model is learned using only corpora of in-the-wild video clips collected from the Internet. This virtually endless source of training data enables learning of a highly general 3D face model. To achieve this, we propose a novel multi-frame consistency loss that ensures consistent shape and appearance across multiple frames of a subject's face, thus minimizing depth ambiguity. At test time we can use an arbitrary number of frames, so that we can perform both monocular and multi-frame reconstruction.
    Comment: CVPR 2019 (Oral). Video: https://www.youtube.com/watch?v=SG2BwxCw0lQ, Project Page: https://gvv.mpi-inf.mpg.de/projects/FML19
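
    As a rough illustration of the multi-frame idea, the sketch below (assumed code dimensions and a plain PyTorch formulation, not the authors' exact loss) penalizes per-frame deviations of the predicted identity code from the clip-wide mean, while per-frame expression, pose and illumination codes remain unconstrained:

        # Minimal sketch of a multi-frame consistency term: identity codes
        # predicted for the frames of one clip are pulled toward their mean.
        import torch

        def multi_frame_consistency_loss(identity_codes):
            """identity_codes: (F, D) tensor, one shape/appearance code per frame."""
            mean_code = identity_codes.mean(dim=0, keepdim=True)   # shared identity estimate
            return ((identity_codes - mean_code) ** 2).mean()      # penalize per-frame deviation

        # toy usage: 4 frames of one clip, 80-dim identity code from some encoder
        codes = torch.randn(4, 80, requires_grad=True)
        multi_frame_consistency_loss(codes).backward()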

    Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

    The reconstruction of dense 3D models of face geometry and appearance from a single image is highly challenging and ill-posed. To constrain the problem, many approaches rely on strong priors, such as parametric face models learned from limited 3D scan data. However, prior models restrict generalization to the true diversity in facial geometry, skin reflectance and illumination. To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model. Our multi-level face model combines the advantage of 3D Morphable Models for regularization with the out-of-space generalization of a learned corrective space. We train end-to-end on in-the-wild images without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss, both defined at multiple detail levels. Our approach compares favorably to the state of the art in terms of reconstruction quality, better generalizes to real-world faces, and runs at over 250 Hz.
    Comment: CVPR 2018 (Oral). Project webpage: https://gvv.mpi-inf.mpg.de/projects/FML
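
    The multi-level decomposition can be sketched as a coarse parametric layer plus a jointly learned corrective layer; the basis sizes, the random stand-in 3DMM basis and the linear corrective space below are illustrative assumptions, not the paper's implementation:

        # Hedged sketch: a fixed coarse 3DMM level regularizes the geometry,
        # while a learned corrective level adds detail outside the 3DMM span.
        import torch
        import torch.nn as nn

        class MultiLevelFaceModel(nn.Module):
            def __init__(self, n_vertices=5000, n_id=80, n_corr=64):
                super().__init__()
                # stand-in 3DMM basis: mean shape plus a linear identity basis
                self.register_buffer("mean_shape", torch.zeros(n_vertices * 3))
                self.register_buffer("id_basis", torch.randn(n_vertices * 3, n_id) * 1e-3)
                # corrective basis, trained jointly with the image encoder
                self.corrective = nn.Linear(n_corr, n_vertices * 3, bias=False)

            def forward(self, id_coeff, corr_coeff):
                coarse = self.mean_shape + self.id_basis @ id_coeff   # 3DMM level
                detail = self.corrective(corr_coeff)                  # learned corrective level
                return (coarse + detail).view(-1, 3)                  # per-vertex geometry

        verts = MultiLevelFaceModel()(torch.randn(80), torch.randn(64))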

    3D Microfluidic model for evaluating immunotherapy efficacy by tracking dendritic cell behaviour toward tumor cells

    Immunotherapy efficacy relies on the crosstalk within the tumor microenvironment between cancer cells and dendritic cells (DCs), resulting in the induction of a potent and effective antitumor response. DCs have the specific role of recognizing cancer cells, taking up tumor antigens (Ags) and then migrating to lymph nodes for Ag (cross-)presentation to naïve T cells. Interferon-α-conditioned DCs (IFN-DCs) exhibit marked phagocytic activity and the special ability to induce an Ag-specific T-cell response. Here, we have developed a novel microfluidic platform that recreates tightly interconnected cancer and immune systems with specific 3D environmental properties, for tracking human DC behaviour toward tumor cells. By combining our microfluidic platform with advanced microscopy and a revised cell-tracking analysis algorithm, it was possible to evaluate the guided, efficient motion of IFN-DCs toward drug-treated cancer cells and the subsequent phagocytosis events. Overall, this platform allowed the dissection of IFN-DC-cancer cell interactions within 3D tumor spaces, revealed major underlying factors such as CXCR4 involvement, and underscored its potential as an innovative tool to assess the efficacy of immunotherapeutic approaches.
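
    As a generic illustration of what such tracking analysis quantifies (assumed coordinates and a standard directness statistic, not the revised algorithm used in the study), one can score how straight a DC trajectory runs toward the tumor compartment:

        # Hedged sketch: directness (net displacement over path length) and the
        # sign of motion along the axis pointing toward the tumor compartment.
        import numpy as np

        def directness_toward_target(track, target):
            """track: (T, 2) array of xy positions; target: (2,) tumor-side point."""
            net = np.linalg.norm(track[-1] - track[0])                     # net displacement
            path = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))  # total path length
            toward = np.dot(track[-1] - track[0], target - track[0])       # projection on target axis
            return (net / path if path > 0 else 0.0), np.sign(toward)

        track = np.cumsum(np.random.randn(50, 2), axis=0)                  # toy random-walk track
        print(directness_toward_target(track, np.array([100.0, 0.0])))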

    The macroscopic effects of microscopic heterogeneity

    Over the past decade, advances in super-resolution microscopy and particle-based modeling have driven an intense interest in investigating spatial heterogeneity at the level of single molecules in cells. Remarkably, it is becoming clear that spatiotemporal correlations between just a few molecules can have profound effects on the signaling behavior of the entire cell. While such correlations are often explicitly imposed by molecular structures such as rafts, clusters, or scaffolds, they also arise intrinsically, due strictly to the small numbers of molecules involved, the finite speed of diffusion, and the effects of macromolecular crowding. In this chapter we review examples of both explicitly imposed and intrinsic correlations, focusing on the mechanisms by which microscopic heterogeneity is amplified to macroscopic effect.
    Comment: 20 pages, 5 figures. To appear in Advances in Chemical Physics
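
    A toy particle-based simulation (all parameters are arbitrary assumptions) illustrates the intrinsic side of this argument: with only a handful of diffusing molecules, the occupancy of a small probe region fluctuates strongly relative to its mean, with no imposed structure at all:

        # Minimal Brownian-dynamics sketch: count molecules inside a small
        # region over time and report the relative size of the fluctuations.
        import numpy as np

        rng = np.random.default_rng(0)
        n_molecules, n_steps, dt, D = 20, 1000, 1e-3, 1.0    # few molecules, diffusion coefficient D
        box, region_radius = 1.0, 0.2                        # periodic box and probe region

        pos = rng.uniform(0, box, size=(n_molecules, 2))
        counts = []
        for _ in range(n_steps):
            pos += np.sqrt(2 * D * dt) * rng.standard_normal(pos.shape)  # Brownian step
            pos %= box                                                    # periodic boundaries
            counts.append(np.sum(np.linalg.norm(pos - box / 2, axis=1) < region_radius))

        print("mean occupancy:", np.mean(counts), "relative std:", np.std(counts) / np.mean(counts))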

    Generating 3D faces using Convolutional Mesh Autoencoders

    Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://github.com/anuragranj/com
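
    The building block of such mesh autoencoders, a spectral convolution defined through Chebyshev polynomials of the mesh Laplacian, can be sketched as follows (feature sizes, polynomial order and the placeholder Laplacian are assumptions for illustration, not the released code):

        # Hedged sketch of a Chebyshev spectral convolution over mesh vertices.
        import torch
        import torch.nn as nn

        class ChebConv(nn.Module):
            def __init__(self, in_ch, out_ch, K):
                super().__init__()
                self.weight = nn.Parameter(torch.randn(K, in_ch, out_ch) * 0.01)
                self.K = K

            def forward(self, x, L):
                """x: (V, in_ch) vertex features; L: (V, V) scaled mesh Laplacian."""
                Tx = [x, L @ x]                                  # Chebyshev polynomials T0, T1
                for _ in range(2, self.K):
                    Tx.append(2 * L @ Tx[-1] - Tx[-2])           # T_k = 2 L T_{k-1} - T_{k-2}
                return sum(t @ w for t, w in zip(Tx, self.weight))

        V = 100
        out = ChebConv(3, 16, K=6)(torch.randn(V, 3), torch.zeros(V, V))   # placeholder Laplacian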