224 research outputs found

    Robust Carving for Non-Lambertian Objects

    This paper presents a new surface reconstruction method that extends previous carving methods to non-Lambertian objects by integrating smoothness and image information from several aspects. We introduce a robust multi-view photo-consistency function and a single-view visual-consistency function. The former handles general specularity and may incorporate both reflectance models and robust statistics, while the latter is based on pixel homogeneity and continuity in individual images. The new consistent shape, defined by both the multi-view and single-view constraints, extends the previous photo hull and visual hull and is proven to be contained in both. This approximate shape is then refined by a graph-cut optimization method. Sample reconstructions from real sequences are demonstrated for both global and local optimizations.
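    The multi-view photo-consistency test described above can be illustrated with a minimal sketch: project a candidate voxel into each calibrated view, gather the observed colors, and score them with a robust statistic so that a few specular views cannot veto consistency. The projection helper, the median-absolute-deviation score, and the threshold below are illustrative assumptions, not the paper's actual consistency function.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (3,) with a 3x4 camera matrix P; returns pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def robust_photo_consistency(X, images, cameras, tau=25.0):
    """Score a voxel X by the median absolute deviation of the colors it projects to.

    A robust spread statistic tolerates a few specular (outlier) views.
    `tau` is an illustrative threshold, not taken from the paper.
    """
    samples = []
    for img, P in zip(images, cameras):
        u, v = np.round(project(P, X)).astype(int)
        h, w = img.shape[:2]
        if 0 <= v < h and 0 <= u < w:
            samples.append(np.asarray(img[v, u], dtype=float))
    if len(samples) < 2:
        return False                          # not enough visible views to decide
    samples = np.array(samples)
    med = np.median(samples, axis=0)
    mad = np.median(np.abs(samples - med))    # robust spread of the observed colors
    return mad < tau                          # consistent voxels are kept, others carved
```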

    A framework for digital sunken relief generation based on 3D geometric models

    Sunken relief is a special art form of sculpture in which the depicted shapes are sunk into a given surface. It is traditionally created by laboriously carving materials such as stone. Sunken reliefs often use engraved lines or strokes to strengthen the impression of a 3D presence and to highlight features that would otherwise remain hidden. In other types of relief, smooth surfaces and their shadows convey such information in a coherent manner. Existing methods for relief generation focus on forming a smooth surface with a shallow depth that conveys the presence of 3D figures. Such methods unfortunately do not serve the art form of sunken relief, as they omit feature lines. We propose a framework that produces sunken reliefs from a known 3D geometry by transforming the 3D objects into three layers of input, incorporating the contour lines seamlessly with the smooth surfaces. The three input layers take advantage of geometric information and visual cues to assist the relief generation. The framework adapts existing techniques in line drawing and relief generation and combines them organically for this particular purpose.
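    As a rough illustration of the layered idea, the following sketch compresses a rendered depth map of the object into a shallow band sunk below the background plane and then engraves a precomputed line layer (e.g., detected depth discontinuities) as deeper grooves. The layer combination and the depth parameters are illustrative assumptions, not the framework's actual operators.

```python
import numpy as np

def sunken_relief(depth, line_mask, relief_depth=1.0, groove_depth=0.3):
    """Toy sunken-relief composition from a depth map and a binary line layer.

    depth     : (H, W) rendered depth of the 3D object (np.nan = background)
    line_mask : (H, W) bool, True where a feature line should be engraved
    The relief_depth and groove_depth values are illustrative, not from the paper.
    """
    obj = ~np.isnan(depth)
    z = np.zeros_like(depth)
    if obj.any():
        d = depth[obj]
        # Compress the object's depth range into a shallow band sunk below the surface.
        z[obj] = -relief_depth * (d - d.min()) / max(d.ptp(), 1e-9)
    # Engrave the line layer as additional grooves below the compressed surface.
    z[line_mask & obj] -= groove_depth
    return z  # 0 = background plane, negative values are sunk into the surface
```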

    Depth Enhancement and Surface Reconstruction with RGB/D Sequence

    Surface reconstruction and 3D modeling is a challenging task that has been explored for decades by the computer vision, computer graphics, and machine learning communities. It is fundamental to many applications such as robot navigation, animation and scene understanding, industrial control, and medical diagnosis. In this dissertation, I take advantage of consumer depth sensors for surface reconstruction. Considering their limited ability to capture detailed surface geometry, a depth enhancement approach is first proposed to recover small, rich geometric details from the captured depth and color sequences. In addition to enhancing the spatial resolution, I present a hybrid camera to improve the temporal resolution of the consumer depth sensor and propose an optimization framework to capture high-speed motion and generate high-speed depth streams. Given the partial scans from the depth sensor, I also develop a novel fusion approach that builds complete and watertight human models with a template-guided registration method. Finally, the problem of surface reconstruction for non-Lambertian objects, on which current depth sensors fail, is addressed by exploiting multi-view images captured with a hand-held color camera, and I propose a visual-hull-based approach to recover the 3D model.
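    A common way to transfer fine detail from the color image onto a coarse depth map, in the general spirit of the enhancement step mentioned above, is a joint (cross) bilateral filter whose range weight is computed on the RGB guide. The sketch below is a generic version of that idea with assumed window size and sigmas; it is not the dissertation's optimization framework.

```python
import numpy as np

def joint_bilateral_depth(depth, color, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Refine a depth map while respecting edges in the guiding color image.

    Parameters (radius, sigma_s, sigma_r) are illustrative defaults, not tuned values.
    """
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_c = np.pad(color.astype(float), ((radius, radius), (radius, radius), (0, 0)), mode='edge')
    for y in range(h):
        for x in range(w):
            patch_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_c = pad_c[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            diff = patch_c - color[y, x]                      # color difference to center pixel
            range_w = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[y, x] = np.sum(wgt * patch_d) / np.sum(wgt)
    return out
```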

    3D Least Squares Based Surface Reconstruction

    This thesis presents a fully three-dimensional (3D) surface reconstruction algorithm for wide-baseline image sequences. Triangle meshes represent the reconstructed surfaces, allowing for an easy integration of image- and geometry-based constraints. We extend the successful approach for 2.5D reconstruction of Heipke (1990) to full 3D. To take occlusion and non-Lambertian reflection into account, we apply robust least squares adjustment to estimate the model. The inputs to our approach are images taken from different positions, accurate image orientations derived from them, and sparse 3D points (Bartelsen and Mayer 2010). The first novelty of our approach is the way we position additional 3D points (unknowns) in the triangle meshes constructed from the given 3D points. Owing to the precise positions of these additional 3D points, we obtain more precise and accurate reconstructed surfaces in terms of shape and fit of texture. The second novelty is the use of individual bias parameters for different images and adapted weights for different image observations, to account for intensity differences between images as well as for outliers in the estimation. The third novelty is the way we factorize the design matrix and divide the meshes into layers to reduce the run time. The essential element of our model is the variance of the intensity values of the image observations inside a triangle. Applying the approach, we can reconstruct accurate 3D surfaces for different types of scenes. Results are presented in the form of VRML (Virtual Reality Modeling Language) models, demonstrating the potential of the approach as well as its current shortcomings.
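    To make the robust adjustment with per-image bias parameters concrete, here is a minimal iteratively reweighted least squares (IRLS) loop with a Huber weight function; the design matrix, the robust scale estimate, and the Huber threshold are generic placeholders rather than the thesis' actual parameterization.

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight function: 1 inside the threshold, downweighted outliers outside."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def irls(A, b, image_ids, n_images, iters=10):
    """Robust least squares with one additive bias parameter per image.

    A (m, n)       : design matrix of the surface unknowns (placeholder model)
    b (m,)         : image observations (intensities)
    image_ids (m,) : index of the image each observation came from
    """
    # Append one indicator column per image so each image gets its own bias term.
    B = np.zeros((len(b), n_images))
    B[np.arange(len(b)), image_ids] = 1.0
    M = np.hstack([A, B])
    x = np.linalg.lstsq(M, b, rcond=None)[0]
    for _ in range(iters):
        r = b - M @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust scale of the residuals
        w = huber_weights(r / s)
        Mw = M * w[:, None]
        x = np.linalg.lstsq(Mw.T @ M, Mw.T @ b, rcond=None)[0]  # weighted normal equations
    return x[:A.shape[1]], x[A.shape[1]:]                   # surface unknowns, per-image biases
```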

    Accelerated volumetric reconstruction from uncalibrated camera views

    While both work with images, computer graphics and computer vision are inverse problems of each other. Computer graphics traditionally starts with input geometric models and produces image sequences; computer vision starts with input image sequences and produces geometric models. In the last few years, there has been a convergence of research to bridge the gap between the two fields. This convergence has produced a new field called Image-Based Modeling and Rendering (IBMR). IBMR represents the effort of using the geometric information recovered from real images to generate new images, with the hope that the synthesized images appear photorealistic, while also reducing the time spent on model creation. In this dissertation, the capturing, geometric, and photometric aspects of an IBMR system are studied. A versatile framework was developed that enables the reconstruction of scenes from images acquired with a handheld digital camera. The proposed system targets applications in areas such as Computer Gaming and Virtual Reality from a low-cost perspective. In the spirit of IBMR, the human operator is allowed to provide the high-level information, while underlying algorithms perform the low-level computational work. Conforming to the latest architecture trends, we propose a streaming voxel carving method that allows fast GPU-based processing on commodity hardware.
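    In its simplest CPU form, the voxel carving idea reduces to testing each voxel against every silhouette; the sketch below shows this silhouette-based variant with assumed inputs, while the streaming GPU organization proposed in the dissertation is not reproduced here.

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Keep only the voxels whose projections fall inside every silhouette.

    voxels      : (N, 3) candidate voxel centers
    cameras     : list of 3x4 projection matrices
    silhouettes : list of (H, W) bool foreground masks, one per camera
    """
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, sil in zip(cameras, silhouettes):
        x = hom @ P.T
        u = np.round(x[:, 0] / x[:, 2]).astype(int)
        v = np.round(x[:, 1] / x[:, 2]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit                      # carve away voxels falling outside any silhouette
    return voxels[keep]
```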

    Minimizing the Multi-view Stereo Reprojection Error for Triangular Surface Meshes

    This article proposes a variational multi-view stereo vision method based on meshes for recovering 3D scenes (shape and radiance) from images. Our method is based on generative models and minimizes the reprojection error (the difference between the observed images and the images synthesized from the reconstruction). Our contributions are twofold. 1) For the first time, we rigorously compute the gradient of the reprojection error for non-smooth surfaces defined by discrete triangular meshes. The gradient correctly takes into account the visibility changes that occur when a surface moves; this forces the contours generated by the reconstructed surface to match the apparent contours in the input images exactly. 2) We propose an original modification of the Lambertian model to take into account deviations from the constant brightness assumption without explicitly modelling the reflectance properties of the scene or other photometric phenomena introduced by the camera model. Our method is thus able to recover the shape and the diffuse radiance of non-Lambertian scenes.
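    In generic notation (assumed here, not the paper's), the reprojection-error functional being minimized can be written as follows.

```latex
% S: triangulated surface with its radiance; I_i: observed image of camera i;
% \Pi_{S,i}: image synthesized by rendering S into camera i over the image domain \Omega_i.
\[
  E(S) \;=\; \sum_{i=1}^{n} \int_{\Omega_i}
      \bigl\| I_i(\mathbf{x}) - \Pi_{S,i}(\mathbf{x}) \bigr\|^{2} \, d\mathbf{x},
  \qquad
  S^{*} \;=\; \arg\min_{S} E(S).
\]
```

    The minimization proceeds by gradient descent on the mesh vertex positions, with the gradient including the terms caused by visibility changes, as described in the abstract above.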

    Three-dimensional modeling of the human jaw/teeth using optics and statistics.

    Object modeling is a fundamental problem in engineering, involving talents from computer-aided design, computational geometry, computer vision, and advanced manufacturing. The process of object modeling takes three stages: sensing, representation, and analysis. Various sensors may be used to capture information about objects; optical cameras and laser scanners are common with rigid objects, while X-ray, CT, and MRI are common with biological organs. These sensors may provide a direct or an indirect inference about the object, requiring a geometric representation in the computer that is suitable for subsequent usage. Geometric representations that are compact, i.e., that capture the main features of the objects with a minimal number of data points or vertices, fall into the domain of computational geometry. Once a compact object representation is in the computer, various analysis steps can be conducted, including recognition, coding, and transmission. The subject matter of this dissertation is object reconstruction from a sequence of optical images using shape from shading (SFS) and SFS with shape priors. The application domain is dentistry. Most SFS approaches focus on the computational part of the SFS problem, i.e., the numerical solution. As a result, the imaging model in most conventional SFS algorithms has been simplified under three simple but restrictive assumptions: (1) the camera performs an orthographic projection of the scene, (2) the surface has a Lambertian reflectance, and (3) the light source is a single point source at infinity. Unfortunately, such assumptions no longer hold when reconstructing real objects such as human teeth in an intra-oral imaging environment. In this work, we introduce a more realistic formulation of the SFS problem by considering the image formation components: the camera, the light source, and the surface reflectance. This dissertation proposes a non-Lambertian SFS algorithm under perspective projection which benefits from camera calibration parameters. The attenuation of illumination due to near-field imaging is taken into account. The surface reflectance is modeled using the Oren-Nayar-Wolff model, which accounts for the retro-reflection case. In this context, a new variational formulation is proposed that relates an evolving surface model with image information, taking into consideration that the image is taken by a perspective camera with known parameters. A new energy functional is formulated to incorporate brightness, smoothness, and integrability constraints. In addition, to further improve the accuracy and practicality of the results, 3D shape priors are incorporated in the proposed SFS formulation. This strategy is motivated by the fact that humans rely on strong prior information about the 3D world around them in order to perceive 3D shape. Such information is statistically extracted from training 3D models of human teeth. The proposed SFS algorithms have been used in two different frameworks in this dissertation: a) holistic, which stitches a sequence of images in order to cover the entire jaw and then applies SFS, and b) piece-wise, which focuses on a specific tooth or a segment of the human jaw and applies SFS using physical teeth illumination characteristics. To augment the visible portion, and in order to reconstruct the entire jaw without the use of CT, MRI, or even X-rays, prior information gathered from a database of human jaws was added.
    This database has been constructed from an adult population with variations in tooth size, degradation, and alignment, and it contains both shape and albedo information for the population. Using this database, a novel statistical shape from shading (SSFS) approach has been created. Extending the work on human teeth analysis, Finite Element Analysis (FEA) is adapted for analyzing and calculating the stresses and strains of dental structures. Previous Finite Element (FE) studies used approximate 2D models; in this dissertation, an accurate three-dimensional CAD model is proposed, and 3D stress and displacement analyses of different tooth types are successfully carried out. A newly developed open-source finite element solver, Finite Elements for Biomechanics (FEBio), is used. The limitations of the experimental and analytical approaches used for stress and displacement analysis are overcome by exploiting the benefits of the FEA tool, such as its ability to handle complex geometry and complex loading conditions.
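    For reference, the base Oren-Nayar diffuse reflectance underlying the Oren-Nayar-Wolff model mentioned above can be sketched as follows; this is the standard qualitative form with the usual angle conventions assumed, and it does not include the Wolff refinement or the specular handling of the dissertation.

```python
import numpy as np

def oren_nayar(theta_i, theta_r, phi_i, phi_r, sigma, rho=1.0, E0=1.0):
    """Qualitative Oren-Nayar diffuse reflectance (base model, no Wolff term).

    theta_i, theta_r : incident / reflected polar angles (radians)
    phi_i, phi_r     : incident / reflected azimuth angles (radians)
    sigma            : surface roughness (std. dev. of facet slopes, radians)
    rho, E0          : albedo and source irradiance (illustrative defaults)
    """
    s2 = sigma ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = np.maximum(theta_i, theta_r)   # larger of the two polar angles
    beta = np.minimum(theta_i, theta_r)    # smaller of the two polar angles
    return (rho / np.pi) * E0 * np.cos(theta_i) * (
        A + B * np.maximum(0.0, np.cos(phi_i - phi_r)) * np.sin(alpha) * np.tan(beta)
    )
```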

    Structure and motion from scene registration

    We propose a method for estimating the 3D structure and the dense 3D motion (scene flow) of a dynamic nonrigid 3D scene, using a camera array. The core idea is to use a dense multi-camera array to construct a novel, dense 3D volumetric representation of the 3D space where each voxel holds an estimated intensity value and a confidence measure of this value. The problem of 3D structure and 3D motion estimation of a scene is thus reduced to a nonrigid registration of two volumes, hence the term "Scene Registration". Registering two dense 3D scalar volumes does not require recovering the 3D structure of the scene as a preprocessing step, nor does it require explicit reasoning about occlusions. From this nonrigid registration we accurately extract the 3D scene flow and the 3D structure of the scene, and successfully recover the sharp discontinuities in both time and space. We demonstrate the advantages of our method on a number of challenging synthetic and real data sets.
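    A single nonrigid update between two scalar volumes could be sketched in the generic demons style below; the array shapes and the Gaussian regularization are assumptions for illustration, and this is not the registration scheme actually used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, flow, sigma=2.0, eps=1e-6):
    """One demons-style update of a dense 3D displacement field.

    fixed, moving : (D, H, W) scalar volumes
    flow          : (3, D, H, W) current displacement field warping `moving` toward `fixed`
    sigma, eps    : illustrative regularization and stabilization constants
    """
    grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1], 0:fixed.shape[2]].astype(float)
    warped = map_coordinates(moving, grid + flow, order=1, mode='nearest')
    diff = fixed - warped
    grad = np.array(np.gradient(warped))             # (3, D, H, W) intensity gradient
    denom = np.sum(grad ** 2, axis=0) + diff ** 2 + eps
    flow = flow + diff * grad / denom                # classic demons force
    return np.array([gaussian_filter(f, sigma) for f in flow])  # smooth the field
```

    Iterating such a step from a zero displacement field, possibly over a coarse-to-fine pyramid, yields a dense 3D flow between the two volumes.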