26,400 research outputs found

    Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Get PDF
    To this day, digital object reconstruction is a complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two kinds of methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling the 3D objects: passive methods use the information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of particular kinds of objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases, that combines active and passive methods, using images and a laser to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.
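    The core of the hybrid idea above, an active sensor supplementing the regions a passive one misses, can be sketched minimally as fusing two incomplete measurements of the same surface. The 1-D "surface", the hole locations, and all names below are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Toy fusion of passive (image-based) and active (laser) partial data.
# NaN marks regions each source failed to reconstruct; the hole
# positions are hypothetical examples.
true_surface = np.sin(np.linspace(0, np.pi, 20))

passive = true_surface.copy()
passive[8:14] = np.nan        # e.g. a textureless region images cannot resolve
active = true_surface.copy()
active[:5] = np.nan           # e.g. a region shadowed from the laser

# Prefer the passive estimate; fall back to the laser where it is missing.
fused = np.where(np.isnan(passive), active, passive)
```

    Because the two sources fail in different regions, the fused result has no remaining gaps, which is the motivation for combining them.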

    Statistical Reconstruction Methods for 3D Imaging of Biological Samples with Electron Microscopy

    Get PDF
    Electron microscopy has emerged as the leading method for the in vivo study of biological structures such as cells, organelles, protein molecules and virus-like particles. By providing 3D images at up to near-atomic resolution, it plays a significant role in analyzing complex organizations, understanding physiological functions and developing medicines. The 3D images representing the electrostatic potential distribution are reconstructed from the 2D projection images of the target acquired by the electron microscope. There are two main 3D reconstruction techniques in the field of electron microscopy: electron tomography (ET) and single particle reconstruction (SPR). In ET, the projection images are acquired by rotating the specimen to different angles. In SPR, the projection images are obtained by analyzing the images of multiple objects representing the same structure. Tomographic reconstruction methods are then applied in both cases to obtain the 3D image from the 2D projections.
    Physical and mechanical limitations can prevent the acquisition of projection images that cover the projection-angle space completely and uniformly. Incomplete and non-uniform sampling of the projection angles results in anisotropic resolution in the image plane and generates artifacts. Another problem is that the total applied electron dose is limited in order to prevent radiation damage to the biological target. Therefore, only a limited number of projection images with a low signal-to-noise ratio can be used in the reconstruction process, which significantly affects the resolution of the reconstructed image. This study presents statistical methods to overcome these major challenges and obtain precise, high-resolution images in electron microscopy.
    Statistical image reconstruction methods have been successful in recovering a signal from imperfect measurements due to their capability of utilizing a priori information. First, we developed a sequential application of a statistical method for ET. Then we extended the method to support projection angles freely distributed in 3D space and applied it in SPR. In both applications, we observed the strength of the method in filling projection gaps, its robustness against noise, and its ability to resolve high-resolution details in comparison with conventional reconstruction methods. Afterwards, we improved the method's computation time by incorporating multiresolution reconstruction. Furthermore, we developed an adaptive regularization method to minimize the number of parameters that must be set by the user. We also proposed a local adaptive Wiener filter for the class-averaging step of SPR to improve the averaging accuracy.
    The qualitative and quantitative analysis of reconstructions of phantom and experimental datasets has demonstrated that the proposed reconstruction methods outperform conventional methods, providing better image accuracy and higher resolution than conventional algebraic and transform-domain-based reconstruction methods. The methods provided in this study contribute to enhancing our understanding of cellular and molecular structures by providing 3D images of them with improved accuracy and resolution.
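    The statistical reconstruction framework the abstract refers to can be illustrated with a classic maximum-likelihood EM (MLEM) update on a toy linear projection model. This is a generic textbook sketch, not the thesis's method; the system matrix, image size, and iteration count are arbitrary assumptions:

```python
import numpy as np

# Toy statistical reconstruction: recover a non-negative image x from
# projections y = A @ x via multiplicative MLEM updates.
rng = np.random.default_rng(0)
n_pix, n_meas = 16, 32
A = rng.random((n_meas, n_pix))      # hypothetical forward projector
x_true = rng.random(n_pix) + 0.1     # unknown non-negative image
y = A @ x_true                       # noiseless projection data

x = np.ones(n_pix)                   # flat initial estimate
sens = A.T @ np.ones(n_meas)         # sensitivity image (column sums of A)
for _ in range(500):
    ratio = y / (A @ x)              # measured vs. predicted projections
    x *= (A.T @ ratio) / sens        # multiplicative MLEM update, keeps x >= 0
```

    The multiplicative form preserves non-negativity automatically, one reason statistical methods incorporate such prior knowledge more naturally than plain algebraic inversion.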

    High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference

    Get PDF
    We propose a data-driven method for recovering missing parts of 3D shapes. Our method is based on a new deep learning architecture consisting of two sub-networks: a global structure inference network and a local geometry refinement network. The global structure inference network incorporates a long short-term memorized context fusion module (LSTM-CF) that infers the global structure of the shape based on multi-view depth information provided as part of the input. It also includes a 3D fully convolutional network (3DFCN) module that further enriches the global structure representation according to volumetric information in the input. Under the guidance of the global structure network, the local geometry refinement network takes as input local 3D patches around missing regions, and progressively produces a high-resolution, complete surface through a volumetric encoder-decoder architecture. Our method jointly trains the global structure inference and local geometry refinement networks in an end-to-end manner. We perform qualitative and quantitative evaluations on six object categories, demonstrating that our method outperforms existing state-of-the-art work on shape completion. Comment: 8 pages paper, 11 pages supplementary material; ICCV spotlight paper.
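    The "local 3D patches around missing regions" step can be pictured with a small sketch that cuts fixed-size voxel patches centred on unknown voxels, the input a refinement network would receive. The grid size, patch size, and function name are hypothetical, not taken from the paper:

```python
import numpy as np

def missing_region_patches(grid, known_mask, size=3):
    """Return (center, patch) pairs for voxels marked unknown.

    grid       -- occupancy grid (0/1)
    known_mask -- True where the voxel is observed, False where missing
    size       -- odd patch edge length (assumed, for illustration)
    """
    r = size // 2
    padded = np.pad(grid, r)                    # zero-pad so border patches fit
    patches = []
    for center in zip(*np.nonzero(~known_mask)):
        i, j, k = center
        # padded index i corresponds to original index i - r
        patches.append((center, padded[i:i+size, j:j+size, k:k+size]))
    return patches

grid = np.zeros((4, 4, 4), dtype=np.uint8)
grid[1:3, 1:3, 1:3] = 1                         # a small solid cube
known = np.ones_like(grid, dtype=bool)
known[2, 2, 2] = False                          # one unobserved voxel
patches = missing_region_patches(grid, known)
```

    Each patch carries the local context around a hole, which is what lets a local network infer fine geometry the coarse global pass cannot.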

    Dense 3D Object Reconstruction from a Single Depth View

    Get PDF
    In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work, which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ takes only the voxel-grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid at a high resolution of 256^3 by recovering the occluded/missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Network (GAN) framework to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single-view 3D object reconstruction, and is able to reconstruct unseen types of objects. Comment: TPAMI 2018. Code and data are available at: https://github.com/Yang7879/3D-RecGAN-extended. This article extends from arXiv:1708.0796
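    The input encoding described above, a single depth view turned into a voxel occupancy grid, can be sketched with a simple orthographic back-projection. The grid size, depth range, and camera model here are assumptions for illustration; the paper's 256^3 pipeline is far more involved:

```python
import numpy as np

def depth_to_voxels(depth, n_vox=8, d_min=0.0, d_max=1.0):
    """Back-project an HxW depth map into an n_vox^3 occupancy grid.

    Assumes an orthographic camera looking along the depth axis;
    each valid pixel marks one voxel as occupied.
    """
    h, w = depth.shape
    grid = np.zeros((n_vox, n_vox, n_vox), dtype=np.uint8)
    ys, xs = np.nonzero(np.isfinite(depth))
    for y, x in zip(ys, xs):
        d = depth[y, x]
        if not (d_min <= d < d_max):
            continue                              # outside the modeled volume
        i = int(y / h * n_vox)                    # vertical voxel index
        j = int(x / w * n_vox)                    # horizontal voxel index
        k = int((d - d_min) / (d_max - d_min) * n_vox)  # depth voxel index
        grid[i, j, k] = 1
    return grid

depth = np.full((16, 16), 0.5)   # a flat plane at mid depth, as a toy view
vox = depth_to_voxels(depth)     # occupies a single depth slice of the grid
```

    A grid like this only covers the visible surface; the network's job is then to fill in the occluded voxels behind it.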

    Semantic Visual Localization

    Full text link
    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
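    At retrieval time, descriptor-based localization of the kind described above reduces to nearest-neighbor search in descriptor space. The sketch below uses random unit vectors as stand-ins for the learned geometric-semantic descriptors; the database size, dimension, and noise level are all hypothetical:

```python
import numpy as np

# Descriptor-based place retrieval: match a query descriptor against a
# database by cosine similarity. Random vectors play the role of the
# learned 3D descriptors here.
rng = np.random.default_rng(1)
db = rng.normal(size=(100, 64))                   # 100 database places
db /= np.linalg.norm(db, axis=1, keepdims=True)   # unit-normalize

query = db[42] + 0.05 * rng.normal(size=64)       # perturbed revisit of place 42
query /= np.linalg.norm(query)

best = int(np.argmax(db @ query))                 # cosine nearest neighbor
```

    The point of learning condition-invariant descriptors is precisely that a revisit under changed viewpoint or illumination still lands nearest to the correct database entry.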

    Lose The Views: Limited Angle CT Reconstruction via Implicit Sinogram Completion

    Full text link
    Computed Tomography (CT) reconstruction is a fundamental component of a wide variety of applications ranging from security to healthcare. The classical techniques require measuring projections, called sinograms, from a full 180° view of the object. This is impractical in a limited-angle scenario, when the viewing angle is less than 180°, which can occur due to different factors including restrictions on scanning time, limited flexibility of scanner rotation, etc. The sinograms obtained as a result cause existing techniques to produce highly artifact-laden reconstructions. In this paper, we propose to address this problem through implicit sinogram completion, on a challenging real-world dataset containing scans of common checked-in luggage. We propose a system, consisting of 1D and 2D convolutional neural networks, that operates on a limited-angle sinogram to directly produce the best estimate of a reconstruction. Next, we use the X-ray transform on this reconstruction to obtain a "completed" sinogram, as if it came from a full 180° measurement. We feed this to standard analytical and iterative reconstruction techniques to obtain the final reconstruction. We show with extensive experimentation that this combined strategy outperforms many competitive baselines. We also propose a measure of confidence for the reconstruction that enables a practitioner to gauge the reliability of a prediction made by our network. We show that this measure is a strong indicator of quality as measured by PSNR, while not requiring ground truth at test time. Finally, using a segmentation experiment, we show that our reconstruction preserves the 3D structure of objects effectively. Comment: Spotlight presentation at CVPR 2018.
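    Why missing views make reconstruction ambiguous, and why "completing" them helps, can be shown on a deliberately tiny example: for a 2x2 image, row and column sums alone (two views) do not determine the pixels, but adding the two diagonal sums (the missing views) does. The 2x2 grid is an illustrative assumption, not the paper's setup:

```python
import numpy as np

# Limited-angle ambiguity on a 2x2 image, written as A @ x = y over the
# flattened pixels x = [p00, p01, p10, p11].
img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
x = img.ravel()

A_limited = np.array([
    [1, 1, 0, 0],   # row 0 sum    (horizontal view)
    [0, 0, 1, 1],   # row 1 sum
    [1, 0, 1, 0],   # column 0 sum (vertical view)
    [0, 1, 0, 1],   # column 1 sum
])
A_diag = np.array([
    [1, 0, 0, 1],   # main-diagonal sum  ("completed" extra view)
    [0, 1, 1, 0],   # anti-diagonal sum
])

rank_limited = np.linalg.matrix_rank(A_limited)   # 3: system is rank-deficient
A_full = np.vstack([A_limited, A_diag])
rank_full = np.linalg.matrix_rank(A_full)         # 4: pixels uniquely determined

# With the completed views, least squares recovers the image exactly.
x_rec, *_ = np.linalg.lstsq(A_full.astype(float), A_full @ x, rcond=None)
```

    The paper's pipeline plays the same role at scale: the network synthesizes the missing sinogram rows so that standard analytical and iterative solvers face a well-posed problem.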