
    Reconstruction of surfaces of revolution from single uncalibrated views

    This paper addresses the problem of recovering the 3D shape of a surface of revolution from a single uncalibrated perspective view. The algorithm introduced here makes use of the invariant properties of a surface of revolution and its silhouette to locate the image of the revolution axis, and to calibrate the focal length of the camera. The image is then normalized and rectified such that the resulting silhouette exhibits bilateral symmetry. Such a rectification leads to a simpler differential analysis of the silhouette, and yields a simple equation for depth recovery. It is shown that under a general camera configuration, there will be a 2-parameter family of solutions for the reconstruction. The first parameter corresponds to an unknown scale, whereas the second one corresponds to an unknown attitude of the object. By identifying the image of a latitude circle, the ambiguity due to the unknown attitude can be resolved. Experimental results on real images are presented, which demonstrate the quality of the reconstruction. © 2004 Elsevier B.V. All rights reserved.
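A minimal sketch of the symmetry property the abstract describes: after normalization and rectification, the silhouette of a surface of revolution is bilaterally symmetric about the image of the revolution axis. The example below (an illustrative assumption, not the paper's algorithm) places the axis at x = 0, verifies the symmetry, and extracts the per-row half-width profile on which a depth-recovery equation would operate.

```python
import numpy as np

def half_width_profile(left, right):
    """Given matched left/right silhouette samples (one pair per image row),
    check bilateral symmetry about x = 0 and return the half-width per row."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # Bilateral symmetry: the left contour is the mirror of the right one.
    assert np.allclose(left, -right), "silhouette not symmetric about x = 0"
    return right  # half-width r(y) of the silhouette at each row

# Synthetic vase-like profile r(y) = 2 + sin(y) over 50 image rows.
y = np.linspace(0.0, np.pi, 50)
r = 2.0 + np.sin(y)
widths = half_width_profile(-r, r)
print(widths.max())  # widest row, close to 3.0
```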

    CNN-based Real-time Dense Face Reconstruction with Inverse-rendered Photo-realistic Face Images

    With the power of convolutional neural networks (CNNs), CNN-based face reconstruction has recently shown promising performance in reconstructing detailed face shape from 2D face images. The success of CNN-based methods relies on a large amount of labeled data. The state of the art synthesizes such data using a coarse morphable face model, which, however, has difficulty generating detailed photo-realistic images of faces (with wrinkles). This paper presents a novel face data generation method. Specifically, we render a large number of photo-realistic face images with different attributes based on inverse rendering. Furthermore, we construct a fine-detailed face image dataset by transferring different scales of details from one image to another. We also construct a large number of video-type adjacent frame pairs by simulating the distribution of real video data. With these carefully constructed datasets, we propose a coarse-to-fine learning framework consisting of three convolutional networks. The networks are trained for real-time detailed 3D face reconstruction from monocular video as well as from a single image. Extensive experimental results demonstrate that our framework can produce high-quality reconstructions with much less computation time than the state of the art. Moreover, our method is robust to pose, expression and lighting due to the diversity of data. Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence, 201
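A hedged sketch of the detail-transfer idea mentioned above: an image is split into a coarse (low-frequency) layer and a high-frequency residual, and the residual from a fine-detailed image is added onto another image's coarse layer. The box-blur decomposition and kernel size here are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple separable box blur with edge padding (shape-preserving)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Blur along rows, then along columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

def transfer_details(src, dst, k=5):
    """Add src's high-frequency detail layer onto dst's coarse layer."""
    detail = src - box_blur(src, k)
    return box_blur(dst, k) + detail

rng = np.random.default_rng(0)
src = rng.random((32, 32))   # stand-in for a fine-detailed face image
dst = rng.random((32, 32))   # stand-in for a smoother target face
out = transfer_details(src, dst)
print(out.shape)  # (32, 32)
```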

    Recovering facial shape using a statistical model of surface normal direction

    In this paper, we show how a statistical model of facial shape can be embedded within a shape-from-shading algorithm. We describe how facial shape can be captured using a statistical model of variations in surface normal direction. To construct this model, we make use of the azimuthal equidistant projection to map the distribution of surface normals from the polar representation on a unit sphere to Cartesian points on a local tangent plane. The distribution of surface normal directions is captured using the covariance matrix for the projected point positions. The eigenvectors of the covariance matrix define the modes of shape variation in the fields of transformed surface normals. We show how this model can be trained using surface normal data acquired from range images, and how to fit the model to intensity images of faces using constraints on the surface normal direction provided by Lambert's law. We demonstrate that the combination of a global statistical constraint and a local irradiance constraint yields an efficient and accurate approach to facial shape recovery, capable of recovering fine local surface details. We assess the accuracy of the technique on a variety of images with ground truth, as well as on real-world images.
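A minimal sketch of the azimuthal equidistant projection used above to map unit surface normals to points on a local tangent plane. Here the projection is taken about the pole n = (0, 0, 1); the paper projects about a locally appropriate direction, so treat this as an illustrative simplification.

```python
import numpy as np

def project_normal(n):
    """Map a unit normal to tangent-plane coordinates: the radial distance
    equals the angle from the pole, the direction equals the azimuth."""
    nx, ny, nz = n
    theta = np.arccos(np.clip(nz, -1.0, 1.0))   # angle from the pole
    phi = np.arctan2(ny, nx)                    # azimuth
    return theta * np.array([np.cos(phi), np.sin(phi)])

def unproject_point(p):
    """Inverse map from the tangent plane back to a unit normal."""
    theta = np.linalg.norm(p)
    if theta == 0.0:
        return np.array([0.0, 0.0, 1.0])
    phi = np.arctan2(p[1], p[0])
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

n = np.array([0.0, np.sin(0.3), np.cos(0.3)])   # a unit normal 0.3 rad off-pole
p = project_normal(n)
print(np.linalg.norm(p))   # radial distance preserves the angle: 0.3
print(unproject_point(p))  # round-trips back to n
```

The distance-preserving property along radii is what makes the covariance of the projected points a sensible summary of angular spread in normal direction.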

    SHREC'16: partial matching of deformable shapes

    Matching deformable 3D shapes under partiality transformations is a challenging problem that has received limited attention in the computer vision and graphics communities. With this benchmark, we explore and thoroughly investigate the robustness of existing matching methods on this challenging task. Participants are asked to provide a point-to-point correspondence (either sparse or dense) between deformable shapes undergoing different kinds of partiality transformations, resulting in a total of 400 matching problems to be solved for each method, making this benchmark the biggest and most challenging of its kind. Five matching algorithms were evaluated in the contest; this paper presents the details of the dataset and the adopted evaluation measures, and shows thorough comparisons among all competing methods.
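A hedged sketch of a standard evaluation measure for this kind of correspondence benchmark: the fraction of predicted point-to-point matches whose error falls below a threshold, swept over thresholds to give a cumulative error curve. Such benchmarks typically measure geodesic error on the target shape; Euclidean distance between matched points stands in for it here as a simplifying assumption.

```python
import numpy as np

def correspondence_accuracy(pred_pts, gt_pts, thresholds):
    """For each threshold t, the fraction of matches with error <= t."""
    errors = np.linalg.norm(pred_pts - gt_pts, axis=1)
    return np.array([(errors <= t).mean() for t in thresholds])

rng = np.random.default_rng(1)
gt = rng.random((100, 3))                   # ground-truth match locations
pred = gt + rng.normal(0, 0.05, (100, 3))   # predictions with small noise
curve = correspondence_accuracy(pred, gt, [0.05, 0.1, 0.2])
print(curve)  # non-decreasing accuracies in [0, 1]
```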

    Camera calibration from surfaces of revolution

    This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution. Camera calibration is the process of determining the intrinsic or internal parameters (i.e., aspect ratio, focal length, and principal point) of a camera, and it is important for both motion estimation and metric reconstruction of 3D models. In this paper, a novel and simple calibration technique is introduced, which is based on exploiting the symmetry of images of surfaces of revolution. Traditional techniques for camera calibration involve taking images of some precisely machined calibration pattern (such as a calibration grid). The use of surfaces of revolution, which are commonly found in daily life (e.g., bowls and vases), makes the process easier as a result of the reduced cost and increased accessibility of the calibration objects. In this paper, it is shown that two images of a surface of revolution will provide enough information for determining the aspect ratio, focal length, and principal point of a camera with fixed intrinsic parameters. The algorithms presented in this paper have been implemented and tested with both synthetic and real data. Experimental results show that the camera calibration method presented here is both practical and accurate.
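The intrinsic parameters named above (aspect ratio, focal length, principal point) fit together in the usual pinhole calibration matrix K. A minimal sketch under a zero-skew assumption, with the aspect ratio expressed as the ratio between the two pixel focal lengths:

```python
import numpy as np

def intrinsic_matrix(focal, aspect, principal_point):
    """K maps camera-frame points to homogeneous pixel coordinates."""
    u0, v0 = principal_point
    return np.array([[focal,           0.0, u0],
                     [0.0,   aspect * focal, v0],
                     [0.0,             0.0, 1.0]])

K = intrinsic_matrix(focal=800.0, aspect=1.0, principal_point=(320.0, 240.0))

# Project a camera-frame point X = (x, y, z): u = K @ X, then divide
# through by the third (depth) coordinate.
X = np.array([0.1, -0.05, 2.0])
u = K @ X
print(u[:2] / u[2])  # [360. 220.]
```

Calibration, as described in the abstract, is the inverse task: recovering the entries of K from image measurements, here the symmetric silhouettes of surfaces of revolution.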

    R-C-P Method: An Autonomous Volume Calculation Method Using Image Processing and Machine Vision

    Machine vision and image processing are often used with sensors for situation awareness in autonomous systems, from industrial robots to self-driving cars. 3D depth sensors, such as LiDAR (Light Detection and Ranging) and radar, are great inventions for autonomous systems. Due to the complexity of the setup, however, LiDAR may not be suitable for some operational environments, for example, a space environment. This study was motivated by a desire to obtain real-time volumetric and change information with multiple 2D cameras instead of a depth camera. Two cameras were used to measure the dimensions of a rectangular object in real time. The R-C-P (row-column-pixel) method is developed using image processing and edge detection. In addition to surface areas, the R-C-P method also detects discontinuous edges or volumes. Lastly, experimental work is presented to illustrate the R-C-P method, which provides the equations for calculating surface area dimensions. Using the equations with given distance information between the object and the camera, the vision system provides the dimensions of actual objects.
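A hedged sketch of the underlying pinhole relation that any pixel-to-dimension equation of this kind rests on: with the camera's focal length in pixels and a known object distance, an extent measured in pixels converts to a metric extent by similar triangles. The names and values below are illustrative, not the paper's notation.

```python
def pixels_to_length(pixel_extent, distance, focal_px):
    """Similar triangles: real extent = pixel extent * distance / focal."""
    return pixel_extent * distance / focal_px

# A 150 px wide edge, seen 2 m away by a camera with f = 600 px:
width_m = pixels_to_length(150, 2.0, 600.0)
print(width_m)  # 0.5
```

Multiplying two such recovered edge lengths gives a surface area; combining faces seen by the two cameras extends this to volumes.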

    Left, right or both? Estimating and improving accuracy of one‐side‐only geometric morphometric analyses of cranial variation

    Procrustes-based geometric morphometric analyses of bilaterally symmetric structures are often performed using only one side. This is particularly common in studies of cranial variation in mammals and other vertebrates. When one is not interested in quantifying asymmetry, landmarking one side, instead of both, reduces the number of variables as well as the time and costs of data collection. It is assumed that the loss of information in the other half, on which landmarks are not digitized, is negligible, but this has seldom been tested. Using 10 samples of mammalian crania and a total of more than 500 specimens, and five different landmark configurations, I demonstrate that this assumption is indeed easily met for size. For shape, in contrast, one-side landmarking has potentially more severe consequences for the estimates of similarity relationships in a sample. In this respect, microevolutionary analyses of small differences are particularly affected, whereas macroevolutionary studies are fairly robust. In almost all instances, however, a simple preliminary operation improves accuracy by making one-side-only shape data more similar to those obtained by landmarking both sides. The same operation also makes estimates of allometry more accurate and improves the visualization. This operation consists in estimating the missing side by a mirror reflection of bilateral landmarks. In the Supporting Information, I exemplify how this can be easily done using free user-friendly software. I also provide an example data set for readers to repeat and learn the steps of this simple procedure.
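A minimal sketch of the "estimate the missing side by mirror reflection" operation described above. It assumes the landmarks were digitized on one side with the midsagittal plane at x = 0 (in practice that plane is estimated from the midline landmarks); the missing side is then the reflection of the bilateral landmarks, while midline landmarks map to themselves.

```python
import numpy as np

def mirror_missing_side(landmarks, bilateral_idx):
    """Append mirrored copies (x -> -x) of the bilateral landmarks."""
    landmarks = np.asarray(landmarks, dtype=float)
    mirrored = landmarks[bilateral_idx] * np.array([-1.0, 1.0, 1.0])
    return np.vstack([landmarks, mirrored])

# Two midline landmarks (x = 0) and two right-side bilateral landmarks:
one_side = np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.5, 2.0],
                     [1.2, 0.8, 0.4],
                     [0.9, 0.2, 1.1]])
full = mirror_missing_side(one_side, bilateral_idx=[2, 3])
print(full.shape)  # (6, 3): 4 originals plus 2 mirrored left-side landmarks
```

The augmented configuration can then enter the Procrustes superimposition in place of the one-sided data.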

    Reconstruction of surface of revolution from multiple uncalibrated views: a bundle-adjustment approach

    This paper addresses the problem of recovering the 3D shape of a surface of revolution from multiple uncalibrated perspective views. In previous work, we have exploited the invariant properties of the surface of revolution and its silhouette to recover the contour generator, and hence the meridian, of the surface of revolution from a single uncalibrated view. However, there exists one degree of freedom in the reconstruction, which corresponds to the unknown orientation of the revolution axis of the surface of revolution. In this paper, such an ambiguity is removed by estimating the horizon, again using the image invariants associated with the surface of revolution. A bundle-adjustment approach is then proposed to provide an optimal estimate of the meridian when multiple uncalibrated views of the same surface of revolution are available. Experimental results on real images are presented, which demonstrate the feasibility of the approach. The 6th Asian Conference on Computer Vision (ACCV 2004), Jeju, Korea, 27-30 January 2004. In Proceedings of the 6th Asian Conference on Computer Vision, 2004, v. 1, p. 378-38