
    Interpolation Methods Based on Radial Basis Functions with Applications to Image Reconstruction

    This article presents radial basis functions (RBFs) as a functional interpolation method for implicit surface reconstruction from point clouds. These methods can compensate not only for inaccuracies introduced by scanners but also for discontinuities that occur in the point clouds. The complexity of three-dimensional objects makes reconstruction difficult, since devices such as scanners do not always reproduce objects faithfully, which can lead to information gaps or an incomplete reconstruction. Interpolation methods based on RBFs make it possible to correct these errors. Three-dimensional surface reconstruction has wide applications in biomedical engineering, the design of industrial parts, and other fields. With the algorithm we developed, we have been able to reconstruct both explicit and implicit functions, in two and three dimensions.
    Keywords: Radial Basis Functions, Three-dimensional reconstruction, Interpolation Methods
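The scattered-data interpolation the abstract describes can be sketched in a few lines. The following is a generic Gaussian-RBF interpolant, not the authors' implementation; the kernel, the shape parameter `eps`, and the test surface are choices made here for illustration:

```python
import numpy as np

def rbf_interpolate(centers, values, queries, eps=3.0):
    """Exact RBF interpolation with a Gaussian kernel phi(r) = exp(-(eps*r)^2)."""
    def phi(r):
        return np.exp(-(eps * r) ** 2)

    # Solve A w = values, where A[i, j] = phi(||c_i - c_j||)
    A = phi(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    w = np.linalg.solve(A, values)
    # Evaluate the interpolant s(x) = sum_j w_j * phi(||x - c_j||)
    B = phi(np.linalg.norm(queries[:, None] - centers[None, :], axis=-1))
    return B @ w

# Scattered samples of a known surface z = x^2 + y^2
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(50, 2))
vals = (pts ** 2).sum(axis=1)
est = rbf_interpolate(pts, vals, np.array([[0.0, 0.0], [0.5, 0.5]]))
```

For implicit surface reconstruction, the same linear solve is typically applied to signed-distance samples around the point cloud, and the surface is then extracted as the zero level set of the interpolant.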

    Reconstruction of the 3D Object Model: A review

    The three-dimensional (3D) reconstruction model of a real object is useful in many applications, ranging from medical imaging, product design, parts inspection, and reverse engineering to rapid prototyping. In the medical field, imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) are applied to create 3D images from emanation measurements for disease diagnosis and organ study. On the other hand, reconstruction is widely utilized to redesign manufacturing parts in order to save production cost and time. A typical reconstruction application consists of three major steps: data acquisition, registration and integration, and surface fitting. Based on the nature of the data captured, 3D reconstruction models can be categorized into two groups: methods working on (i) two-dimensional (2D) images and (ii) sets of 3D points. This paper reviews different methods of 3D object model reconstruction and the techniques associated with each method.

    Surface Normal Reconstruction Using Polarization-UNet

    Today, three-dimensional reconstruction of objects has many applications in various fields; choosing a suitable method for high-resolution three-dimensional reconstruction is therefore an important issue, and displaying fine details in three-dimensional models is a serious challenge. Until now, active methods have been used for high-resolution three-dimensional reconstruction, but they require a light source close to the object. Shape from polarization (SfP) is one of the best solutions for high-resolution three-dimensional reconstruction: it is a passive method and does not share the drawbacks of active methods. The changes in polarization of the light reflected from an object can be analyzed using a polarization camera, or by placing a polarizing filter in front of a digital camera and rotating it. Using this information, the surface normal can be reconstructed with high accuracy, which leads to local reconstruction of surface details. In this paper, an end-to-end deep learning approach is presented to produce the surface normals of objects. A benchmark dataset is used to train the neural network and evaluate the results. The results are evaluated quantitatively and qualitatively against other methods and under different lighting conditions, using the mean angular error (MAE). The evaluations show that the proposed method accurately reconstructs the surface normals of objects with the lowest MAE, 18.06 degrees over the whole dataset, compared with previous physics-based methods, which range between 41.44 and 49.03 degrees.

    The mathematics of surface reconstruction

    This thesis discusses the mathematics engineers use to produce computerized three-dimensional images of surfaces. It is self-contained in that all background information is included. As a result, mathematicians who know very little about the technology involved in three-dimensional imaging should be able to understand the topics herein, and engineers with no differential geometry background will be able to understand the mathematics. The purpose of this thesis is to unify and understand the notation commonly used by engineers, understand their terminology, and appreciate the difficulties faced by engineers in their pursuits. It is also intended to bridge the gap between mathematics and engineering. This paper proceeds as follows. Chapter one introduces the topic and provides a brief overview of this thesis. Chapter two provides background information on technology and differential geometry. Chapter three discusses various methods by which normal vectors are estimated. In chapter four, we discuss methods by which curvature is estimated. In chapter six, we put it all together to recreate the surface. Finally, in chapter seven, we conclude with a discussion of future research. Each chapter concludes with a comparison of the methods discussed. The study of these reconstruction algorithms originated from various engineering papers on surface reconstruction. The background information was gathered from a thesis and various differential geometry texts. The challenge arises from the nature of the data with which we work: the surface must be recreated from a set of discrete points, yet the study of surfaces belongs to differential geometry, which assumes differentiable functions representing the surface. Since we only have a discrete set of points, methods to overcome this shortcoming must be developed. Two categories of surface reconstruction have been developed to do so. The first category approximates the data by smooth functions; the second reconstructs the surface using the discrete data directly. We found that various aspects of surface reconstruction are very reliable, while others are only marginally so. We found that methods recreating the surface from discrete data directly produce very similar results, suggesting that some underlying facts about surfaces represented by discrete information may be influencing the results.
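Normal estimation from discrete points, the topic of the thesis's chapter three, is commonly done by local principal component analysis; the following is one standard textbook approach (PCA over nearest neighbors), not a method claimed by the thesis, with the neighborhood size `k` chosen here arbitrarily:

```python
import numpy as np

def estimate_normal(points, idx, k=8):
    """Estimate the surface normal at points[idx] via PCA of its k nearest neighbors."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]          # brute-force k-NN (includes the point itself)
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)                # eigenvalues in ascending order
    return v[:, 0]                            # direction of least variance = normal

# Points sampled on the plane z = 0: the estimated normal should be ±z
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
plane = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(25)])
n = estimate_normal(plane, 12)
```

The sign of the returned vector is ambiguous; pipelines usually orient normals consistently afterward, for example toward the scanner position.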

    Data-Optimized Coronal Field Model: I. Proof of Concept

    Deriving the strength and direction of the three-dimensional (3D) magnetic field in the solar atmosphere is fundamental for understanding its dynamics. Volume information on the magnetic field mostly relies on coupling 3D reconstruction methods with photospheric and/or chromospheric surface vector magnetic fields. Infrared coronal polarimetry could provide additional information to better constrain magnetic field reconstructions. However, combining such data with reconstruction methods is challenging, e.g., because of the optical thinness of the solar corona and the lack and limitations of stereoscopic polarimetry. To address these issues, we introduce the Data-Optimized Coronal Field Model (DOCFM) framework, a model-data fitting approach that combines a parametrized 3D generative model, e.g., a magnetic field extrapolation or a magnetohydrodynamic model, with forward modeling of coronal data. We test it with a parametrized flux rope insertion method and infrared coronal polarimetry, where synthetic observations are created from a known "ground truth" physical state. We show that this framework allows us to accurately retrieve the ground truth 3D magnetic field of a set of force-free field solutions from the flux rope insertion method. In observational studies, the DOCFM will provide a means to force the solutions derived with different reconstruction methods to satisfy additional, common, coronal constraints. The DOCFM framework therefore opens new perspectives for the exploitation of coronal polarimetry in magnetic field reconstructions and for developing new techniques to more reliably infer the 3D magnetic fields that trigger solar flares and coronal mass ejections.
    Comment: 14 pages, 6 figures; Accepted for publication in Ap
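The model-data fitting loop the abstract describes (parametrized generative model, forward modeling, misfit minimization against observations) can be illustrated with a deliberately toy one-parameter example; the generative model, forward operator, and brute-force parameter scan below are stand-ins for illustration, not the actual DOCFM components:

```python
import numpy as np

# Toy stand-ins for the DOCFM ingredients (illustrative only):
def generative_model(alpha, x):
    return alpha * np.sin(x)            # parametrized 3D model (e.g. flux rope strength)

def forward_model(field):
    return field ** 2                   # forward modeling of coronal observables

x = np.linspace(0.0, np.pi, 100)
observed = forward_model(generative_model(1.7, x))   # synthetic "ground truth" data

# Optimize the model parameter by minimizing the misfit to the observations
alphas = np.linspace(0.5, 3.0, 251)
misfits = [np.sum((forward_model(generative_model(a, x)) - observed) ** 2)
           for a in alphas]
best = float(alphas[int(np.argmin(misfits))])
```

In practice the parameter space is multi-dimensional and the scan would be replaced by a proper optimizer, but the structure (generate, forward-model, compare) is the same.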

    Multi-View Neural Surface Reconstruction with Structured Light

    Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision. DR-based methods minimize the difference between the rendered and target images by optimizing both shape and appearance, achieving high visual fidelity. However, most approaches perform poorly for textureless objects because of geometric ambiguity: multiple shapes can produce the same rendered result for such objects. To overcome this problem, we introduce active sensing with structured light (SL) into DR-based multi-view 3D object reconstruction to learn the unknown geometry and appearance of arbitrary scenes and camera poses. More specifically, our framework leverages the correspondences between pixels in different views, calculated by structured light, as an additional constraint in the DR-based optimization of the implicit surface, color representation, and camera poses. Because camera poses can be optimized simultaneously, our method achieves high reconstruction accuracy in textureless regions and reduces the effort of camera pose calibration required by conventional SL-based methods. Experimental results on both synthetic and real data demonstrate that our system outperforms conventional DR- and SL-based methods in high-quality surface reconstruction, particularly for challenging objects with textureless or shiny surfaces.
    Comment: Accepted by BMVC 202
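The combined objective described above, a rendering loss plus a structured-light correspondence constraint, can be sketched in outline; the loss weight, array shapes, and the way correspondences enter the penalty below are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def total_loss(rendered, target, proj_a, proj_b, w_sl=0.5):
    """DR photometric loss plus a structured-light correspondence penalty."""
    # Photometric term: difference between rendered and captured images
    photo = np.mean((rendered - target) ** 2)
    # Correspondence term: SL-matched pixels in two views should map to the
    # same 3D surface point under the current geometry and pose estimates
    corr = np.mean(np.sum((proj_a - proj_b) ** 2, axis=-1))
    return photo + w_sl * corr

img = np.zeros((4, 4))
pts = np.zeros((5, 3))
base = total_loss(img, img, pts, pts)            # both terms vanish
off = total_loss(img, img + 1.0, pts, pts + 0.1)  # penalized mismatch
```

In an actual DR pipeline both terms would be differentiable in the scene parameters and minimized jointly by gradient descent.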