976 research outputs found

    3D Dynamic Scene Reconstruction from Multi-View Image Sequences

    A confirmation report outlining my PhD research plan is presented. The PhD research topic is 3D dynamic scene reconstruction from multi-view image sequences. Chapter 1 describes the motivation and research aims and includes an overview of the progress made in the past year. Chapter 2 is a review of volumetric scene reconstruction techniques, and Chapter 3 is an in-depth description of my proposed reconstruction method, together with the theory behind it, including topics in projective geometry, camera calibration and energy minimization. Chapter 4 presents the research plan and outlines the work planned for the next two years.
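    As a brief illustration of the projective-geometry and camera-calibration background the report cites (a minimal sketch with assumed notation, not material from the report itself): a pinhole camera with intrinsics K and pose (R, t) maps a world point X to pixel coordinates via x = K(RX + t) followed by perspective division.

```python
import numpy as np

def project_point(X, K, R, t):
    """Pinhole projection of a 3D world point X into pixel coordinates.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: 3-vector translation
    (world-to-camera). Returns (u, v) in pixels.
    """
    x_cam = R @ X + t             # world -> camera coordinates
    x_hom = K @ x_cam             # homogeneous image coordinates
    return x_hom[:2] / x_hom[2]   # perspective division

# Example: camera at the origin looking down +Z, focal length 800 px,
# principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
u, v = project_point(np.array([0.1, -0.2, 2.0]), K, np.eye(3), np.zeros(3))
print(u, v)  # -> 360.0 160.0
```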

    Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials

    Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
    Comment: 15 pages, 14 figures; includes supplemental figure
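    The voxel-surface traversal itself is not given in the abstract; as a hedged sketch of the 2D error diffusion such methods transfer from flat printing, the classic Floyd-Steinberg scheme (assumed here purely for illustration, not the authors' exact variant) quantizes one grayscale layer while pushing the residual error onto unvisited neighbors.

```python
import numpy as np

def floyd_steinberg(layer, levels=2):
    """Classic 2D Floyd-Steinberg error diffusion on one grayscale layer.

    layer: 2D float array with values in [0, 1].
    levels: number of printable tone levels.
    Each pixel is snapped to the nearest printable level and the residual
    quantization error is distributed to the right and lower neighbors.
    """
    img = layer.astype(float).copy()
    h, w = img.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            q = int(round(old / step))
            q = max(0, min(levels - 1, q))        # clamp to printable range
            img[y, x] = q * step
            err = old - img[y, x]
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return img

# A flat 30% gray layer becomes a binary dither pattern whose mean stays ~0.3.
print(floyd_steinberg(np.full((64, 64), 0.3)).mean())
```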

    Reconstructing specular objects with Image Based Rendering using Color Caching

    Various Image Based Rendering (IBR) techniques have been proposed to reconstruct scenes from their images. Voxel-based IBR algorithms reconstruct Lambertian scenes well, but fail for specular objects due to limitations of their consistency checks. We show that the conventional consistency techniques fail due to the large variation in the reflected color of the surface for different viewing positions. We present a new consistency approach that can predict this variation in color and reconstruct specular objects present in the scene. We also present an evaluation of our technique by comparing it with three other consistency methods.
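    The color-caching predictor is not detailed in the abstract; for contrast, here is a minimal sketch of the conventional Lambertian consistency check it improves upon (the threshold value is an assumption for illustration): a voxel passes only if every camera observes nearly the same color, which is exactly the assumption a specular highlight violates.

```python
import numpy as np

def lambertian_consistent(colors, threshold=15.0):
    """Plain photo-consistency test for one voxel.

    colors: (n_views, 3) array of RGB samples, one per camera seeing the
    voxel. A Lambertian surface reflects the same color in all directions,
    so a low per-channel standard deviation means 'consistent'. A specular
    surface shifts its reflected color with view direction, so this test
    wrongly rejects voxels that are actually on the surface.
    """
    colors = np.asarray(colors, dtype=float)
    return bool(np.all(colors.std(axis=0) < threshold))

# Matte voxel: near-identical samples pass; shiny voxel with a highlight fails.
print(lambertian_consistent([[200, 40, 40], [202, 38, 41], [199, 42, 39]]))   # True
print(lambertian_consistent([[200, 40, 40], [255, 250, 245], [198, 41, 40]]))  # False
```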

    Accelerated volumetric reconstruction from uncalibrated camera views

    While both work with images, computer graphics and computer vision are inverse problems of each other. Computer graphics traditionally starts with input geometric models and produces image sequences. Computer vision starts with input image sequences and produces geometric models. In the last few years, there has been a convergence of research to bridge the gap between the two fields. This convergence has produced a new field called Image-Based Modeling and Rendering (IBMR). IBMR represents the effort to use the geometric information recovered from real images to generate new images, with the hope that the synthesized ones appear photorealistic while reducing the time spent on model creation. In this dissertation, the capturing, geometric and photometric aspects of an IBMR system are studied. A versatile framework was developed that enables the reconstruction of scenes from images acquired with a handheld digital camera. The proposed system targets applications in areas such as Computer Gaming and Virtual Reality, from a low-cost perspective. In the spirit of IBMR, the human operator is allowed to provide the high-level information, while underlying algorithms are used to perform the low-level computational work. Conforming to the latest architecture trends, we propose a streaming voxel carving method, allowing fast GPU-based processing on commodity hardware.
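    The streaming GPU pipeline is not reproduced here; the following is a minimal CPU sketch of silhouette-based voxel carving, assuming calibrated 3x4 projection matrices and binary foreground masks (inputs the dissertation's uncalibrated setting would first have to estimate).

```python
import numpy as np

def carve_voxels(centers, projections, masks):
    """Minimal silhouette-based voxel carving (CPU sketch, not the
    streaming GPU pipeline described above).

    centers:     (n_voxels, 3) voxel centers in world coordinates
    projections: list of 3x4 camera projection matrices P = K [R | t]
    masks:       list of binary foreground masks of shape (H, W), one per view
    A voxel survives only if it projects inside the silhouette in every view;
    assumes all voxels lie in front of every camera (positive depth).
    """
    keep = np.ones(len(centers), dtype=bool)
    homog = np.hstack([centers, np.ones((len(centers), 1))])  # homogeneous coords
    for P, mask in zip(projections, masks):
        x = homog @ P.T                       # project all voxels at once
        u = (x[:, 0] / x[:, 2]).round().astype(int)
        v = (x[:, 1] / x[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        in_silhouette = np.zeros(len(centers), dtype=bool)
        in_silhouette[inside] = mask[v[inside], u[inside]] > 0
        keep &= in_silhouette                 # carve away inconsistent voxels
    return keep
```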

    Semantic Visual Localization

    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
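    The generative descriptor model is not specified in the abstract; as a hedged sketch of the retrieval step that such 3D descriptors typically feed into (cosine-similarity nearest neighbor is an assumption for illustration, not the paper's matching scheme):

```python
import numpy as np

def localize(query_desc, db_descs, db_poses):
    """Minimal retrieval step for descriptor-based localization.

    query_desc: (d,) descriptor computed for the query location
    db_descs:   (n, d) descriptors of mapped locations
    db_poses:   length-n list of camera poses associated with the map
    Returns the pose of the most similar map entry under cosine similarity.
    """
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    best = int(np.argmax(db @ q))       # nearest neighbor in descriptor space
    return db_poses[best]
```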

    Weakly supervised 3D Reconstruction with Adversarial Constraint

    Supervised 3D reconstruction has witnessed significant progress through the use of deep neural networks. However, this increase in performance requires large-scale annotations of 2D/3D data. In this paper, we explore inexpensive 2D supervision as an alternative to expensive 3D CAD annotation. Specifically, we use foreground masks as weak supervision through a raytrace pooling layer that enables perspective projection and backpropagation. Additionally, since 3D reconstruction from masks is an ill-posed problem, we propose to constrain the 3D reconstruction to the manifold of unlabeled realistic 3D shapes that match the mask observations. We demonstrate that learning a log-barrier solution to this constrained optimization problem resembles the GAN objective, enabling the use of existing tools for training GANs. We evaluate and analyze the manifold-constrained reconstruction on various datasets for single- and multi-view reconstruction of both synthetic and real images.
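    As a hedged sketch of how a log-barrier term turns the manifold constraint into something GAN-like (the scalar form below is an illustration, not the paper's exact objective): the reprojection error on the masks is combined with a barrier on a discriminator-style realism score, so the penalty grows without bound as the reconstruction drifts off the realistic-shape manifold.

```python
import numpy as np

def constrained_loss(mask_loss, realism_score, eps=1e-6):
    """Log-barrier treatment of a realism constraint (illustrative sketch).

    mask_loss:     reprojection error between predicted and observed masks
    realism_score: discriminator-style output in (0, 1); the constraint is
                   that the shape lies on the 'realistic' manifold, i.e. the
                   score should stay close to 1.
    The barrier term -log(realism_score) blows up as the constraint is
    violated, which is why the combined objective resembles a GAN loss.
    """
    return mask_loss - np.log(realism_score + eps)

# The same mask fit is penalized far more when the shape looks unrealistic.
print(constrained_loss(0.12, 0.95))   # ~0.17
print(constrained_loss(0.12, 0.05))   # ~3.1
```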
