
    Marching Intersections: An Efficient Approach to Shape-from-Silhouette

    A new shape-from-silhouette algorithm for the creation of 3D digital models is presented. The algorithm is based on the use of the Marching Intersections (MI) data structure, a volumetric scheme which allows efficient representation of 3D polyhedra and reduces the boolean operations between them to simple boolean operations on linear intervals. MI supports the definition of a direct shape-from-silhouette approach: the 3D conoids built from the silhouettes extracted from the images of the object are directly intersected to form the resulting 3D digital model. Compared to existing methods, our approach allows high quality models to be obtained in an efficient way. Examples on synthetic objects together with quantitative and qualitative evaluations are given.
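    As a rough illustration of the interval arithmetic the abstract refers to, the sketch below (not the authors' code; the ray layout and data structures are assumptions) intersects two volumes that are each stored as per-ray lists of "inside" intervals, which is the operation that the MI scheme reduces conoid intersection to.

```python
# Minimal sketch: boolean intersection of volumes stored as per-ray interval lists.
def intersect_intervals(a, b):
    """Intersect two sorted lists of disjoint (start, end) intervals on one ray."""
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:                      # overlapping portion is inside both volumes
            result.append((lo, hi))
        # advance whichever interval ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return result

def intersect_conoids(rays_a, rays_b):
    """Per-ray intersection of two volumes given as {ray_id: interval list}."""
    return {r: intersect_intervals(rays_a[r], rays_b[r])
            for r in rays_a.keys() & rays_b.keys()}

# Example: two overlapping spans on a single ray
print(intersect_intervals([(0.0, 2.0), (3.0, 5.0)], [(1.0, 4.0)]))
# -> [(1.0, 2.0), (3.0, 4.0)]
```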

    Learning to Reconstruct Shapes from Unseen Classes

    From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve this task in a similar fashion, but often end up with priors that are highly biased by training classes. Here we present an algorithm, Generalizable Reconstruction (GenRe), designed to capture more generic, class-agnostic shape priors. We achieve this with an inference network and training procedure that combine 2.5D representations of visible surfaces (depth and silhouette), spherical shape representations of both visible and non-visible surfaces, and 3D voxel-based representations, in a principled manner that exploits the causal structure of how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe performs well on single-view shape reconstruction, and generalizes to diverse novel objects from categories not seen during training.Comment: NeurIPS 2018 (Oral). The first two authors contributed equally to this paper. Project page: http://genre.csail.mit.edu
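    A minimal geometric sketch of one representation hand-off the abstract names, lifting a 2.5D depth-plus-silhouette observation of the visible surface into a voxel occupancy grid. The pinhole camera model, grid parameters and function names are assumptions for illustration and are not GenRe's network.

```python
import numpy as np

def depth_to_voxels(depth, silhouette, focal, grid_res=32, extent=1.0):
    """Back-project a depth map (H x W) into a grid_res^3 occupancy grid.

    depth      : per-pixel depth along the camera z axis
    silhouette : boolean mask of pixels belonging to the object
    focal      : assumed pinhole focal length in pixels
    """
    h, w = depth.shape
    v, u = np.nonzero(silhouette)
    z = depth[v, u]
    x = (u - w / 2.0) * z / focal          # pinhole back-projection
    y = (v - h / 2.0) * z / focal
    pts = np.stack([x, y, z - z.mean()], axis=1)

    # quantise points into the occupancy volume, dropping out-of-range points
    idx = np.floor((pts / extent + 0.5) * grid_res).astype(int)
    idx = idx[((idx >= 0) & (idx < grid_res)).all(axis=1)]
    vox = np.zeros((grid_res,) * 3, dtype=bool)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return vox
```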

    Droplet size and morphology characterization for diesel sprays under atmospheric operating conditions

    The shape of microscopic fuel droplets may differ from the perfect sphere, affecting their external surface area and thus the heat transfer with the surrounding gas. Hence there is a need for the characterization of droplet shapes, and the estimation of external surface area, in order to enable the development of physically accurate mathematical models for the heating and evaporation of diesel fuel sprays. We present ongoing work to automatically identify and reconstruct the morphology of fuel droplets, primarily focusing in this study on irregularly-shaped, partially-deformed and oscillating droplets under atmospheric conditions. We used direct imaging techniques based on long-working-distance microscopy and ultra-high-speed video to conduct a detailed temporal investigation of droplet morphology. We applied purpose-built algorithms to extract droplet size, velocity, volume and external surface area from the microscopic ultra-high-speed video frames. High-resolution images of oscillating droplets and of the formation of a droplet from a ligament, together with sphericity factors, volume and external surface area, are presented for 500 bar injection pressure in the near-nozzle region (up to 0.7 mm from the nozzle exit) under atmospheric conditions. We observed a range of different liquid structures, including perfectly spherical droplets, non-spherical droplets and stretched ligaments. Large droplets and ligaments exceeding the size of the nozzle hole were found at the end of injection. In order to estimate droplet volume and external surface area from two-dimensional droplet information, a discrete revolution of the droplet silhouette about its major centroidal axis was used. Special attention was paid to the estimation of actual errors in the prediction of volume and surface characteristics from a droplet silhouette. In addition to the estimation of droplet volume and external surface area, the actual shape reconstruction in 3D coordinates from a droplet silhouette was performed in order to enable future numerical modelling studies of real droplets.
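    The volume and surface estimate obtained from a discrete revolution of the silhouette can be sketched as a frustum-by-frustum solid of revolution. The profile sampling below is an assumption for illustration, not the authors' extraction pipeline.

```python
import numpy as np

def volume_and_surface_from_profile(x, r):
    """Disc/frustum approximation of a solid of revolution.

    x : axial sample positions along the major centroidal axis (monotonic)
    r : silhouette half-width (radius) at each sample, same length as x
    """
    x, r = np.asarray(x, float), np.asarray(r, float)
    dx = np.diff(x)
    r0, r1 = r[:-1], r[1:]
    # truncated-cone (frustum) segments between consecutive samples
    volume = np.sum(np.pi * dx * (r0**2 + r0 * r1 + r1**2) / 3.0)
    slant = np.sqrt(dx**2 + (r1 - r0)**2)
    surface = np.sum(np.pi * (r0 + r1) * slant)
    return volume, surface

# Sanity check on a unit sphere (expected: V = 4/3*pi, A = 4*pi)
x = np.linspace(-1.0, 1.0, 2001)
r = np.sqrt(np.clip(1.0 - x**2, 0.0, None))
print(volume_and_surface_from_profile(x, r))
```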

    Shape from inconsistent silhouette: Reconstruction of objects in the presence of segmentation and camera calibration error

    Silhouettes are useful features to reconstruct the object shape when the object is textureless or the shape classes of objects are unknown. In this dissertation, we explore the problem of reconstructing the shape of challenging objects from silhouettes under real-world conditions such as the presence of silhouette and camera calibration error. This problem is called the Shape from Inconsistent Silhouette problem. A pseudo-Boolean cost function is formalized for this problem, which penalizes differences between the reconstruction images and the silhouette images, and the Shape from Inconsistent Silhouette problem is cast as a pseudo-Boolean minimization problem. We propose a memory- and time-efficient method to find a local minimum solution to the optimization problem, including heuristics that take into account the geometric nature of the problem. Our methods are demonstrated on a variety of challenging objects including humans and large, thin objects. We also compare our methods to the state-of-the-art by generating reconstructions of synthetic objects with induced error.

    We also propose a method for correcting camera calibration error given silhouettes with segmentation error. Unlike other existing methods, our method allows camera calibration error to be corrected without camera placement constraints and allows for silhouette segmentation error. This is accomplished by a modified Iterative Closest Point algorithm which minimizes the difference between an initial reconstruction and the input silhouettes. We characterize the degree of error that can be corrected with synthetic datasets with increasing error, and demonstrate the ability of the camera calibration correction method in improving the reconstruction quality in several challenging real-world datasets.
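    A hedged sketch of the kind of local pseudo-Boolean minimization the abstract describes: greedy voxel flips that reduce the total silhouette disagreement across views. The projection model, cost weights and flip schedule here are placeholders, not the dissertation's heuristics.

```python
import numpy as np

def render_silhouettes(occupancy, voxel_pixels, image_shapes):
    """Project occupied voxels into each view; a pixel is 'on' if any occupied
    voxel maps to it. occupancy is a boolean array over voxels and
    voxel_pixels[k] is an (n_voxels, 2) integer array of pixel coordinates."""
    sils = []
    for pix, shape in zip(voxel_pixels, image_shapes):
        img = np.zeros(shape, dtype=bool)
        on = pix[occupancy]
        img[on[:, 0], on[:, 1]] = True
        sils.append(img)
    return sils

def cost(occupancy, voxel_pixels, observed):
    """Pseudo-Boolean cost: total pixel disagreement summed over all views."""
    rendered = render_silhouettes(occupancy, voxel_pixels,
                                  [o.shape for o in observed])
    return sum(np.sum(r != o) for r, o in zip(rendered, observed))

def greedy_flips(occupancy, voxel_pixels, observed, passes=3):
    """Flip one voxel at a time, keeping any flip that lowers the cost."""
    best = cost(occupancy, voxel_pixels, observed)
    for _ in range(passes):
        improved = False
        for i in range(len(occupancy)):
            occupancy[i] = not occupancy[i]
            c = cost(occupancy, voxel_pixels, observed)
            if c < best:
                best, improved = c, True          # keep the flip
            else:
                occupancy[i] = not occupancy[i]   # undo it
        if not improved:
            break
    return occupancy, best
```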

    Combining asteroid models derived by lightcurve inversion with asteroidal occultation silhouettes

    Asteroid sizes can be directly measured by observing occultations of stars by asteroids. When there are enough observations across the path of the shadow, the asteroid's projected silhouette can be reconstructed. Asteroid shape models derived from photometry by the lightcurve inversion method enable us to predict the orientation of an asteroid for the time of occultation. By scaling the shape model to fit the occultation chords, we can determine the asteroid size with a relative accuracy of typically ~10%. We combine shape and spin state models of 44 asteroids (14 of them are new or updated models) with the available occultation data to derive asteroid effective diameters. In many cases, occultations allow us to reject one of two possible pole solutions that were derived from photometry. We show that by combining results obtained from lightcurve inversion with occultation timings, we can obtain unique physical models of asteroids.
    Comment: 33 pages, 45 figures, 4 tables, accepted for publication in Icarus
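    The scaling step can be sketched as a one-parameter least-squares fit of the chord lengths predicted by a unit-scale shape model to the observed occultation chords. The chord values below are hypothetical and the chord extraction from the model silhouette is assumed to have been done already; this is not the authors' code.

```python
import numpy as np

def fit_scale(observed_lengths, model_lengths):
    """Scale s minimising sum((observed - s*model)^2) over matched chords."""
    o = np.asarray(observed_lengths, float)
    m = np.asarray(model_lengths, float)
    return float(np.dot(o, m) / np.dot(m, m))

# Hypothetical chords: observed lengths (km) vs. lengths from a unit-scale model
observed = [152.0, 180.5, 121.3]
model    = [0.93,  1.10,  0.75]
s = fit_scale(observed, model)
print(f"best-fit scale factor: {s:.1f} km per model unit")
```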

    Virtual Visual Hulls: Example-Based 3D Shape Estimation from a Single Silhouette

    Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard, and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull": an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer a novel single-view input silhouette's virtual visual hull, we search for 3D shapes in the database which are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
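    The temporal step the abstract mentions can be sketched as a Viterbi-style dynamic program over per-frame hypotheses. The log-likelihood and transition terms below are placeholders for illustration, not the paper's matching scores or smoothness model.

```python
import numpy as np

def best_hypothesis_path(unary, pairwise):
    """Maximum-likelihood path through per-frame hypotheses.

    unary    : (T, K) per-frame log-likelihood of each of K hypotheses
    pairwise : (K, K) log transition score between consecutive hypotheses
    Returns the chosen hypothesis index for each of the T frames.
    """
    T, K = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise              # (previous, next) scores
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)] + unary[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                     # backtrack best predecessors
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 4 frames, 3 hypotheses, transitions favouring staying put
unary = np.log(np.array([[0.7, 0.2, 0.1],
                         [0.4, 0.5, 0.1],
                         [0.3, 0.6, 0.1],
                         [0.2, 0.7, 0.1]]))
pairwise = np.log(np.where(np.eye(3, dtype=bool), 0.8, 0.1))
print(best_hypothesis_path(unary, pairwise))
```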