
    A Survey of Methods for Volumetric Scene Reconstruction from Photographs

    Scene reconstruction, the task of generating a 3D model of a scene from multiple 2D photographs of it, is a long-standing and difficult problem in computer vision. Since its introduction, scene reconstruction has found applications in many fields, including robotics, virtual reality, and entertainment. Volumetric models are a natural choice for scene reconstruction. Three broad classes of volumetric reconstruction techniques have been developed, based respectively on geometric intersections, color consistency, and pairwise matching. Some of these techniques have spawned a number of variations and undergone considerable refinement. This paper surveys techniques for volumetric scene reconstruction.
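The color-consistency class of methods the survey covers (voxel coloring, space carving) rests on one test: a voxel on a true surface should project to similar colors in every camera that sees it. A minimal sketch of that test, with a hypothetical threshold and no visibility reasoning:

```python
import numpy as np

def photo_consistent(colors, threshold=15.0):
    """Color-consistency test in the voxel-coloring/space-carving style:
    a voxel survives carving only if the pixel colors it projects to
    across the views have low variance (they 'agree')."""
    colors = np.asarray(colors, dtype=float)   # shape (n_views, 3), RGB samples
    if len(colors) < 2:
        return True                            # a voxel seen once cannot be carved
    # pool the per-channel variances into a single scalar score
    score = float(np.sqrt(colors.var(axis=0).sum()))
    return score < threshold

# nearly the same red in three views: consistent, keep the voxel
print(photo_consistent([[200, 10, 10], [198, 12, 9], [201, 11, 12]]))   # → True
# strongly disagreeing colors: the voxel is not on the surface, carve it
print(photo_consistent([[200, 10, 10], [10, 200, 10], [10, 10, 200]]))  # → False
```

Real systems iterate this test over a voxel grid, carving inconsistent voxels and updating visibility as the volume shrinks.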

    3D Object Reconstruction using Multi-View Calibrated Images

    In this study, two models are proposed: a visual hull model and a 3D object reconstruction model. The proposed visual hull model, based on a bounding-edge representation, achieves high time performance, which makes it one of the best-performing methods. Its main contribution is to provide bounding surfaces over the bounding edges, which yields a complete triangular surface mesh. Moreover, the proposed visual hull model can be computed in a distributed fashion over a camera network. The second model is a depth-map-based 3D object reconstruction model that produces a watertight triangular surface mesh. It achieves acceptable accuracy as well as high completeness using only stereo matching and triangulation. The contribution of this model is to select the most reliable 3D points and fit a surface over them.
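The visual hull idea underlying this work can be sketched with the classic silhouette-intersection baseline (simpler than the paper's bounding-edge representation, and with a hypothetical API): a voxel belongs to the hull iff its projection falls inside every camera's silhouette.

```python
import numpy as np

def visual_hull(voxels, cameras, silhouettes):
    """Silhouette-based visual hull baseline: keep a voxel iff its
    projection lies inside the foreground silhouette of every camera."""
    keep = []
    for X in voxels:
        inside_all = True
        for P, sil in zip(cameras, silhouettes):
            x = P @ np.append(X, 1.0)                      # 3x4 projection matrix
            u, v = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
            h, w = sil.shape
            if not (0 <= v < h and 0 <= u < w and sil[v, u]):
                inside_all = False                         # outside one silhouette: carve
                break
        if inside_all:
            keep.append(X)
    return keep

# toy check with one orthographic-style camera (u = X, v = Y, depth ignored)
P = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
sil = np.zeros((4, 4), dtype=bool)
sil[1, 1] = True                                           # only pixel (1, 1) is foreground
hull = visual_hull([np.array([1., 1, 0]), np.array([3., 3, 0])], [P], [sil])
# only the voxel projecting onto the silhouette survives
```

The paper's bounding-edge formulation avoids visiting every voxel, which is where its time-performance advantage comes from.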

    Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images

    Recovering the 3D representation of an object from single-view or multi-view RGB images with deep neural networks has attracted increasing attention in recent years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse feature maps extracted from the input images sequentially. However, given the same set of input images in different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit the input images to refine reconstruction results. To solve these problems, we propose Pix2Vox, a novel framework for single-view and multi-view 3D reconstruction. Using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. A context-aware fusion module is then introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from the different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pix3D benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-art methods by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. Experiments on unseen ShapeNet 3D categories show the superior generalization ability of our method. Comment: ICCV 201
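The context-aware fusion step can be sketched numerically: each view contributes a coarse volume plus a per-voxel quality score, and a softmax over the view axis blends the volumes voxel by voxel. This is a simplified stand-in (the scores in Pix2Vox come from a learned scoring network, omitted here):

```python
import numpy as np

def context_aware_fusion(volumes, scores):
    """Simplified sketch of Pix2Vox-style context-aware fusion: a per-voxel
    softmax over the n view-specific score maps softly selects the best
    coarse reconstruction of each part."""
    volumes = np.stack(volumes)                       # (n_views, D, H, W) occupancies
    scores = np.stack(scores)                         # (n_views, D, H, W) quality scores
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0)                       # softmax over the view axis
    return (weights * volumes).sum(axis=0)            # fused (D, H, W) volume

# view 1 reconstructs the first voxel well, view 2 the second;
# the fusion keeps the best of each, so the result is close to [[[1., 1.]]]
v1, v2 = np.array([[[1., 0.]]]), np.array([[[0., 1.]]])
s1, s2 = np.array([[[9., -9.]]]), np.array([[[-9., 9.]]])
fused = context_aware_fusion([v1, v2], [s1, s2])
```

Because the softmax is permutation-invariant across views, this fusion, unlike an RNN, gives the same result for any input-image ordering.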

    Contributions to Robust Multi-view 3D Action Recognition

    This thesis focuses on human action recognition using volumetric reconstructions obtained from multiple monocular cameras. The problem of action recognition has been addressed using different approaches, in both the 2D and 3D domains, and using one or multiple views. However, the development of robust recognition methods, independent of the view employed, remains an open problem. Multi-view approaches make it possible to exploit 3D information to improve recognition performance. Nevertheless, manipulating the large amount of information in 3D representations poses a major problem. As a consequence, standard dimensionality reduction techniques must be applied before machine learning approaches can be used. The first contribution of this work is a new descriptor of volumetric information that can be further reduced using standard dimensionality reduction techniques in both holistic and sequential recognition approaches. The descriptor itself reduces the amount of data by up to an order of magnitude (compared to previous descriptors) without affecting classification performance. The descriptor represents the volumetric information obtained by Shape-from-Silhouette (SfS) techniques. However, this family of techniques is highly influenced by errors in the segmentation process (e.g., undersegmentation causes false negatives in the reconstructed volumes), so the recognition performance is strongly affected by this first step. The second contribution of this work is a new SfS technique (named SfSDS) that employs Dempster-Shafer theory to fuse evidence provided by multiple cameras. The central idea is to consider the relative position between cameras so as to deal with inconsistent silhouettes and obtain robust volumetric reconstructions. The basic SfS technique still has a main drawback: it requires the whole volume to be analyzed in order to obtain the reconstruction. Octree-based representations, on the other hand, save memory and time by employing a dynamic tree structure in which only occupied nodes are stored. Nevertheless, applying the SfS method to octree-based representations is not straightforward. The final contribution of this work is a method for generating octrees using the proposed SfSDS technique, so as to obtain robust and compact volumetric representations.
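The Dempster-Shafer fusion at the heart of SfSDS can be illustrated on the two-element frame {occupied, empty}. This is a generic sketch of Dempster's rule of combination, not the thesis's exact formulation; the dict keys and example masses are hypothetical:

```python
def combine(m1, m2):
    """Dempster's rule of combination on the frame {occupied, empty}.
    Each mass function is a dict with keys 'occ', 'emp', and 'unk'
    (the ignorance mass assigned to the whole frame); values sum to 1."""
    # conflict: one camera supports occupied while the other supports empty
    K = m1['occ'] * m2['emp'] + m1['emp'] * m2['occ']
    norm = 1.0 - K                                     # renormalization factor
    occ = (m1['occ'] * m2['occ'] + m1['occ'] * m2['unk']
           + m1['unk'] * m2['occ']) / norm
    emp = (m1['emp'] * m2['emp'] + m1['emp'] * m2['unk']
           + m1['unk'] * m2['emp']) / norm
    return {'occ': occ, 'emp': emp, 'unk': m1['unk'] * m2['unk'] / norm}

# two cameras with weak but agreeing occupancy evidence reinforce each other
a = {'occ': 0.6, 'emp': 0.1, 'unk': 0.3}
b = {'occ': 0.5, 'emp': 0.2, 'unk': 0.3}
fused = combine(a, b)          # fused belief in 'occ' exceeds either input's
```

Unlike a hard AND over silhouettes, this fusion lets a camera express uncertainty (the 'unk' mass), which is how inconsistent silhouettes can be tolerated instead of immediately carving a voxel away.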

    3D object reconstruction using computer vision: reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis. Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    3D Dynamic Scene Reconstruction from Multi-View Image Sequences

    A confirmation report outlining my PhD research plan is presented. The PhD research topic is 3D dynamic scene reconstruction from multi-view image sequences. Chapter 1 describes the motivation and research aims, and includes an overview of the progress made in the past year. Chapter 2 is a review of volumetric scene reconstruction techniques, and Chapter 3 is an in-depth description of my proposed reconstruction method. The theory behind the proposed volumetric scene reconstruction method is also presented, including topics in projective geometry, camera calibration, and energy minimization. Chapter 4 presents the research plan and outlines the work planned for the next two years.
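The projective-geometry and calibration background mentioned above reduces to one relation: a calibrated pinhole camera maps a 3D point X to pixel coordinates via x ~ K[R | t]X. An illustrative sketch (not the report's method; the intrinsics values are made up):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection x ~ K [R | t] X: rotate/translate the world
    point into the camera frame, apply intrinsics, then divide by depth."""
    x = K @ (R @ X + t)            # homogeneous image coordinates
    return x[:2] / x[2]            # perspective divide to pixel coordinates

K = np.array([[500., 0, 320],      # focal length 500 px,
              [0, 500., 240],      # principal point (320, 240)
              [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)      # camera at the world origin
print(project(K, R, t, np.array([0., 0, 2])))   # → [320. 240.]
```

A point on the optical axis lands on the principal point regardless of depth, which is a quick sanity check when validating a calibration.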

    View generated database

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics
