    Robust recovery of shapes with unknown topology from the dual space

    In this paper, we address the problem of reconstructing an object surface from silhouettes. Previous works by other authors have shown that, based on the principle of duality, surface points can, in theory, be recovered as the dual to the tangent plane space of the object. In practice, however, identifying a tangent basis in the tangent plane space is not trivial given a set of discretely sampled data, and the problem is further complicated by the existence of bi-tangents to the object surface. The key contribution of this paper is the introduction of epipolar parameterization for identifying a well-defined local tangent basis. This extends the applicability of existing dual-space reconstruction methods to fairly complicated shapes, without making any explicit assumption about the object topology. We verify our approach with both synthetic and real-world data, and compare it both qualitatively and quantitatively with other popular reconstruction algorithms. Experimental results demonstrate that our proposed approach produces more accurate estimates while maintaining reasonable robustness towards shapes with complex topologies. © 2007 IEEE.
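
    A minimal numerical sketch of the dual-space idea described above (an illustration under simplifying assumptions, not the authors' implementation): each sampled tangent plane is stored as a homogeneous 4-vector, and a surface point is approximated as the common point of three neighbouring tangent planes, i.e. the null space of the stacked plane equations. The function name and the toy planes below are hypothetical.

        # Hypothetical sketch of dual-space point recovery: a surface point is
        # approximated as the intersection of three nearby tangent planes sampled
        # along some local parameterization. Planes are rows (a, b, c, d) with
        # a*x + b*y + c*z + d = 0.
        import numpy as np

        def point_from_tangent_planes(planes):
            """planes: (3, 4) array of neighbouring tangent planes.
            Returns the 3D intersection point via the SVD null space."""
            planes = np.asarray(planes, dtype=float)
            # Homogeneous point X minimising ||planes @ X|| subject to ||X|| = 1.
            _, _, vt = np.linalg.svd(planes)
            X = vt[-1]
            if abs(X[3]) < 1e-12:
                raise ValueError("planes are (near-)parallel; point at infinity")
            return X[:3] / X[3]

        # Example: three planes approximately tangent to the unit sphere near (0, 0, 1).
        planes = np.array([
            [0.0, 0.0, 1.0,   -1.0],   # z = 1
            [0.1, 0.0, 0.995, -1.0],   # slightly tilted tangent plane
            [0.0, 0.1, 0.995, -1.0],
        ])
        print(point_from_tangent_planes(planes))  # ~[0.05 0.05 1.0], near the contact point (0, 0, 1)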

    Reconstruction of sculpture from its profiles with unknown camera positions

    Profiles of a sculpture provide rich information about its geometry and can be used for shape recovery under known camera motion. By exploiting correspondences induced by epipolar tangents on the profiles, a successful solution to motion estimation from profiles has been developed for the special case of circular motion. The main drawbacks of using circular motion alone, namely the difficulty of adding new views and the fact that part of the object always remains invisible, can be overcome by incorporating arbitrary general views of the object and registering their profiles with the set of profiles resulting from the circular motion. In this paper, we describe a complete and practical system for producing a three-dimensional (3-D) model from uncalibrated images of an arbitrary object using its profiles alone. Experimental results on various objects are presented, demonstrating the quality of the reconstructions obtained with the estimated motion.

    Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images

    Recovering the 3D representation of an object from single-view or multi-view RGB images with deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse the feature maps extracted from the input images sequentially. However, when given the same set of input images in different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit the input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. Using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. A context-aware fusion module is then introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from the different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pix3D benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-art methods by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. Experiments on unseen ShapeNet 3D categories demonstrate the superior generalization ability of our method.
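
    The fusion step can be pictured with a small, hedged sketch in the spirit of the context-aware fusion module (shapes, score maps and function names below are illustrative assumptions, not the released Pix2Vox code): per-voxel scores from each coarse volume are softmax-normalised across views and used as fusion weights, so well-reconstructed parts from different views dominate the fused volume.

        # Hedged sketch of score-weighted fusion of coarse volumes (illustrative
        # only): each view v contributes a coarse volume and a per-voxel score map;
        # scores are softmax-normalised across views and used as fusion weights.
        import numpy as np

        def fuse_coarse_volumes(volumes, scores):
            """volumes, scores: arrays of shape (V, D, H, W).
            Returns a single fused volume of shape (D, H, W)."""
            scores = scores - scores.max(axis=0, keepdims=True)   # numerical stability
            weights = np.exp(scores)
            weights /= weights.sum(axis=0, keepdims=True)         # softmax over the V views
            return (weights * volumes).sum(axis=0)

        # Toy usage: 3 views of a 32^3 occupancy grid.
        rng = np.random.default_rng(0)
        volumes = rng.random((3, 32, 32, 32))
        scores = rng.normal(size=(3, 32, 32, 32))
        fused = fuse_coarse_volumes(volumes, scores)
        print(fused.shape)  # (32, 32, 32)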

    Structure and motion estimation from apparent contours under circular motion

    In this paper, we address the problem of recovering structure and motion from the apparent contours of a smooth surface. Fixed image features under circular motion and their relationships with the intrinsic parameters of the camera are exploited to provide a simple parameterization of the fundamental matrix relating any pair of views in the sequence. Such a parameterization allows a trivial initialization of the motion parameters, all of which bear physical meaning. It also greatly reduces the dimension of the search space for the optimization problem, which can now be solved using only two epipolar tangents. In contrast to previous methods, the motion estimation algorithm introduced here can cope with incomplete circular motion and more widely spaced images. Existing techniques for model reconstruction from apparent contours are then reviewed and compared. Experiments on real data have been carried out, and the 3D model reconstructed from the estimated motion is presented. © 2002 Elsevier Science B.V. All rights reserved.

    Robust surface modelling of visual hull from multiple silhouettes

    Reconstructing depth information from images is one of the actively researched themes in computer vision, and its applications span most vision research areas, from object recognition to realistic visualisation. Amongst other useful vision-based reconstruction techniques, this thesis extensively investigates the visual hull (VH) concept for volume approximation and its robust surface modelling when various views of an object are available. Assuming that multiple images are captured under circular motion, projection matrices are generally parameterised in terms of a rotation angle from a reference position in order to facilitate multi-camera calibration. However, this assumption is often violated in practice, i.e., a pure rotation in a planar motion with an accurate rotation angle is hardly realisable. To address this problem, this thesis first proposes a calibration method for approximate circular motion. With these modified projection matrices, the resulting VH is represented by a hierarchical tree structure of voxels, from which surfaces are extracted by the Marching Cubes (MC) algorithm. However, the surfaces may have unexpected artefacts caused by coarse volume reconstruction, the topological ambiguity of the MC algorithm, and imperfect image processing or calibration results. To avoid this sensitivity, this thesis proposes a robust surface construction algorithm which initially classifies local convex regions from imperfect MC vertices and then aggregates local surfaces constructed by a 3D convex hull algorithm. Furthermore, this thesis explores the use of wide-baseline images to refine a coarse VH using an affine-invariant region descriptor, which improves the quality of the VH when only a small number of initial views is given. In conclusion, the proposed methods achieve a 3D model with enhanced accuracy, and robust surface modelling is retained when silhouette images are degraded by practical noise.
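
    The volumetric visual hull test underlying this kind of pipeline can be sketched as follows (a hypothetical illustration under simplifying assumptions, not the thesis implementation): a voxel centre is kept only if its projection lies inside the silhouette in every calibrated view.

        # Minimal sketch of volumetric visual-hull carving (illustrative only):
        # a voxel is occupied iff its centre projects inside the silhouette of
        # every calibrated view.
        import numpy as np

        def carve_visual_hull(voxel_centres, projections, silhouettes):
            """voxel_centres: (N, 3); projections: list of 3x4 camera matrices;
            silhouettes: list of binary (H, W) masks. Returns (N,) occupancy."""
            X = np.hstack([voxel_centres, np.ones((len(voxel_centres), 1))])  # homogeneous coords
            occupied = np.ones(len(voxel_centres), dtype=bool)
            for P, mask in zip(projections, silhouettes):
                x = X @ P.T                                   # (N, 3) homogeneous image points
                u = np.round(x[:, 0] / x[:, 2]).astype(int)   # pixel column
                v = np.round(x[:, 1] / x[:, 2]).astype(int)   # pixel row
                h, w = mask.shape
                inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
                hit = np.zeros(len(voxel_centres), dtype=bool)
                hit[inside] = mask[v[inside], u[inside]] > 0
                occupied &= hit                               # must be inside every silhouette
            return occupied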

    Marching Intersections: An Efficient Approach to Shape-from-Silhouette

    A new shape-from-silhouette algorithm for the creation of 3D digital models is presented. The algorithm is based on the use of the Marching Intersections (MI) data structure, a volumetric scheme which allows efficient representation of 3D polyhedra and reduces the boolean operations between them to simple boolean operations on linear intervals. MI supports the definition of a direct shape-from-silhouette approach: the 3D conoids built from the silhouettes extracted from the images of the object are directly intersected to form the resulting 3D digital model. Compared to existing methods, our approach allows high-quality models to be obtained in an efficient way. Examples on synthetic objects, together with quantitative and qualitative evaluations, are given.
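
    The reduction described above, where boolean operations between conoids become boolean operations on linear intervals along each ray of the MI grid, can be illustrated with a small interval-intersection sketch (illustrative only, not the authors' code).

        # Illustrative sketch of the 1D reduction: along a fixed ray, each solid
        # contributes a sorted list of "inside" intervals, and intersecting the
        # solids amounts to intersecting those intervals.
        def intersect_intervals(a, b):
            """a, b: sorted lists of (start, end) intervals. Returns their intersection."""
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                lo = max(a[i][0], b[j][0])
                hi = min(a[i][1], b[j][1])
                if lo < hi:
                    out.append((lo, hi))
                # advance whichever interval ends first
                if a[i][1] < b[j][1]:
                    i += 1
                else:
                    j += 1
            return out

        # Two conoids seen along one ray:
        print(intersect_intervals([(0.0, 2.0), (3.0, 5.0)], [(1.0, 4.0)]))
        # -> [(1.0, 2.0), (3.0, 4.0)]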

    3D object reconstruction using computer vision: reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis in Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Object segmentation from low depth of field images and video sequences

    This thesis addresses the problem of autonomous object segmentation. To do so, the proposed segmentation method uses some prior information, namely that the image to be segmented will have a low depth of field and that the object of interest will be more in focus than the background. To differentiate the object from the background scene, a multiscale wavelet-based focus assessment is proposed. The focus assessment is used to generate a focus intensity map, and a sparse-field level-set implementation of active contours is used to segment the object of interest. The initial contour is generated using a grid-based technique. The method is extended to segment low depth of field video sequences, with each successive initialisation of the active contours generated from the binary dilation of the previous frame's segmentation. Experimental results show that good segmentations can be achieved for a variety of different images, video sequences, and objects, with no user interaction or input. The method is applied to two different areas. In the first, the segmentations are used to automatically generate trimaps for use with matting algorithms. In the second, the method is used as part of a shape-from-silhouette 3D object reconstruction system, replacing the need for a constrained background when generating silhouettes. In addition, not using thresholding to perform the silhouette segmentation allows objects with dark components or areas to be segmented accurately. Some examples of 3D models generated using silhouettes are shown.
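
    The first stage, building a focus intensity map, can be sketched with a simplified stand-in for the thesis's multiscale wavelet assessment: here the smoothed squared Laplacian response is used as the local high-frequency (focus) measure, and the level-set stage is omitted. Function name and parameters are illustrative assumptions.

        # Simplified focus-map sketch (a Laplacian-energy stand-in, not the
        # thesis's wavelet-based assessment): local high-frequency energy is high
        # on the in-focus object and low on the blurred background.
        import numpy as np
        from scipy import ndimage

        def focus_intensity_map(gray, smooth_sigma=5.0):
            """gray: 2D float image in [0, 1]. Returns a per-pixel focus score in [0, 1]."""
            high_freq = ndimage.laplace(gray)                       # high-frequency response
            energy = ndimage.gaussian_filter(high_freq ** 2, smooth_sigma)
            return energy / (energy.max() + 1e-12)                  # normalise

        # Toy usage: a sharp textured patch (in focus) on a smooth ramp (out of focus).
        img = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
        img[40:90, 40:90] += 0.2 * np.random.default_rng(0).random((50, 50))
        fmap = focus_intensity_map(img)
        print(fmap[64, 64] > fmap[10, 10])  # True: the textured region scores higher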