10 research outputs found

    MVF-Net: Multi-View 3D Face Morphable Model Regression

    Full text link
    We address the problem of recovering the 3D geometry of a human face from a set of facial images in multiple views. While recent studies have shown impressive progress in 3D Morphable Model (3DMM) based facial reconstruction, the settings are mostly restricted to a single view. There is an inherent drawback in the single-view setting: the lack of reliable 3D constraints can cause unresolvable ambiguities. In this paper, we explore 3DMM-based shape recovery in a different setting, where a set of multi-view facial images is given as input. A novel approach is proposed to regress 3DMM parameters from multi-view inputs with an end-to-end trainable Convolutional Neural Network (CNN). Multi-view geometric constraints are incorporated into the network by establishing dense correspondences between different views, leveraging a novel self-supervised view alignment loss. The main ingredient of the view alignment loss is a differentiable dense optical flow estimator that can backpropagate the alignment errors between an input view and a synthetic rendering from another input view, which is projected to the target view through the 3D shape to be inferred. By minimizing the view alignment loss, better 3D shapes can be recovered such that the synthetic projections from one view to another better align with the observed image. Extensive experiments demonstrate the superiority of the proposed method over other 3DMM methods. Comment: 2019 Conference on Computer Vision and Pattern Recognition
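    The core of the view alignment loss described above can be sketched as follows: a rendering of one view is warped into the target view with a dense flow field, and the remaining photometric difference is the alignment error. This is a minimal numpy illustration only; the function names are hypothetical, the sampling here is nearest-neighbor rather than the differentiable estimator the paper uses, and no 3D projection is modeled.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Warp a grayscale image (H, W) toward a target view.

    flow[..., 0] / flow[..., 1] give, for each target pixel, the (dy, dx)
    offset of its correspondence in `image`. Nearest-neighbor sampling;
    a real implementation would use differentiable bilinear sampling.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def view_alignment_loss(target_view, rendered_other_view, flow):
    """Mean absolute photometric error after warping the rendering."""
    warped = warp_with_flow(rendered_other_view, flow)
    return float(np.mean(np.abs(target_view - warped)))

# Toy check: a rendering shifted one pixel right matches the target
# exactly once warped back with the corresponding flow, giving zero loss.
target = np.zeros((4, 4)); target[1, 1] = 1.0
shifted = np.zeros((4, 4)); shifted[1, 2] = 1.0  # same point, other view
flow = np.zeros((4, 4, 2)); flow[..., 1] = 1.0   # dx = +1 everywhere
print(view_alignment_loss(target, shifted, flow))  # → 0.0
```

    When the inferred 3D shape is wrong, the projected correspondences (and hence the flow) are wrong, the warped rendering misaligns with the observed view, and the loss grows; this is the signal that is backpropagated to the 3DMM regressor.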

    Self-supervised Learning of Detailed 3D Face Reconstruction

    Full text link
    In this paper, we present an end-to-end learning framework for detailed 3D face reconstruction from a single image. Our approach uses a 3DMM-based coarse model and a displacement map in UV-space to represent a 3D face. Unlike previous work addressing the problem, our learning framework does not require supervision from surrogate ground-truth 3D models computed with traditional approaches. Instead, we utilize the input image itself as supervision during learning. In the first stage, we combine a photometric loss and a facial perceptual loss between the input face and the rendered face to regress a 3DMM-based coarse model. In the second stage, both the input image and the regressed texture of the coarse model are unwrapped into UV-space and then sent through an image-to-image translation network to predict a displacement map in UV-space. The displacement map and the coarse model are used to render a final detailed face, which again can be compared with the original input image to serve as a photometric loss for the second stage. The advantage of learning the displacement map in UV-space is that face alignment is done explicitly during the unwrapping, so facial details are easier to learn from a large amount of data. Extensive experiments demonstrate the superiority of the proposed method over previous work. Comment: Accepted by IEEE Transactions on Image Processing (TIP)
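    The first-stage objective described above, a weighted combination of a pixel-wise photometric loss and a feature-space perceptual loss, can be sketched as below. This is an illustrative assumption-laden stub: the weights are made up, and the 2x2 average pool stands in for the face-recognition-network features the perceptual loss would really use.

```python
import numpy as np

def avg_pool2(img):
    """Crude stand-in for a perceptual feature extractor (2x2 mean pool);
    the actual loss would use deep features from a face network."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_stage_loss(input_img, rendered_img, w_photo=1.0, w_percep=0.2):
    """Weighted sum of a pixel-wise photometric loss and a feature-space
    ('perceptual') loss between the input face and the rendered face.
    The weights are illustrative, not taken from the paper."""
    photo = np.mean((input_img - rendered_img) ** 2)
    percep = np.mean((avg_pool2(input_img) - avg_pool2(rendered_img)) ** 2)
    return float(w_photo * photo + w_percep * percep)

img = np.random.default_rng(0).random((8, 8))
print(coarse_stage_loss(img, img))  # a perfect render gives 0.0
```

    The second stage repeats the same self-supervised pattern: the detailed render (coarse model plus displacement map) is compared photometrically against the same input image, so no ground-truth 3D scans are needed at either stage.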

    Edge Computing and IoT Systems, Management and Security: First EAI International Conference, ICECI 2020, Virtual Event, November 6, 2020: Proceedings

    No full text
    This book constitutes the refereed post-conference proceedings of the First International Conference on Edge Computing and IoT, ICECI 2020, held in November 2020 in Changsha, China. Due to the COVID-19 pandemic, the conference was held virtually. The rapidly increasing number of devices and volume of data traffic in the Internet-of-Things (IoT) era are placing a significant burden on the capacity-limited Internet and causing uncontrollable service delays. The 11 full papers of ICECI 2020 were selected from 79 submissions and present results and ideas in the area of edge computing and IoT.

    3-D Reconstruction of Human Body Shape From a Single Commodity Depth Camera

    No full text

    Suitability of IS6110-RFLP and MIRU-VNTR for Differentiating Spoligotyped Drug-Resistant Mycobacterium tuberculosis Clinical Isolates from Sichuan in China

    No full text
    Genotypes of the Mycobacterium tuberculosis complex (MTBC) vary with the geographic origin of the patients and can affect tuberculosis (TB) transmission. This study aimed to further differentiate spoligotype-defined clusters of drug-resistant MTBC clinical isolates, split into Beijing (n=190) and non-Beijing (n=84) isolates, from Sichuan, the province with the second-highest TB burden in China, by IS6110-restriction fragment length polymorphism (RFLP) and 24-locus MIRU-VNTR typing. Among the 274 spoligotyped isolates, the clustering ratio of the Beijing family was 5.3% by 24-locus MIRU-VNTR versus 2.1% by IS6110-RFLP, while none of the non-Beijing isolates were clustered by 24-locus MIRU-VNTR versus 9.5% by IS6110-RFLP. Hence, 24-locus MIRU-VNTR typing alone was not sufficient to fully discriminate the Beijing family, nor was IS6110-RFLP sufficient for the non-Beijing isolates. A region-adjusted scheme combining 12 highly discriminatory VNTR loci with IS6110-RFLP was a better alternative for typing Beijing strains in Sichuan than 24-locus MIRU-VNTR alone. IS6110-RFLP was introduced here for the first time to systematically genotype MTBC in Sichuan, and we conclude that the region-adjusted scheme of 12 highly discriminative VNTRs might be a suitable alternative to the 24-locus MIRU-VNTR scheme for non-Beijing strains, while clusters of Beijing isolates should be further subtyped using IS6110-RFLP for optimal discrimination.
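    The clustering ratios compared above can be computed as in the sketch below. Note that the abstract does not state which definition the study used; this sketch uses the simple "fraction of isolates falling in any cluster of size ≥ 2" definition, whereas some studies use the "n-1" variant that subtracts the number of clusters from the numerator. The example numbers are toy values, not the study's data.

```python
def clustering_ratio(cluster_sizes, n_total):
    """Fraction of isolates that fall into clusters of size >= 2.

    cluster_sizes: sizes of genotype-defined groups (singletons allowed).
    Assumed simple definition; the 'n-1' method would instead use
    (n_clustered - n_clusters) / n_total.
    """
    n_clustered = sum(s for s in cluster_sizes if s >= 2)
    return n_clustered / n_total

# Toy example (NOT the study's data): 190 isolates typed into two
# clusters of sizes 6 and 4, everything else unique.
print(round(clustering_ratio([6, 4], 190), 4))  # → 0.0526
```

    A lower clustering ratio for a given typing method means it splits more isolates into unique patterns, i.e., it is more discriminatory for that strain family, which is the comparison the study draws between the two methods.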