10 research outputs found

    Data Fusion of Objects Using Techniques Such as Laser Scanning, Structured Light and Photogrammetry for Cultural Heritage Applications

    In this paper we present a semi-automatic 2D-3D local registration pipeline capable of coloring 3D models obtained from 3D scanners by using uncalibrated images. The proposed pipeline exploits the Structure from Motion (SfM) technique to reconstruct a sparse representation of the 3D object and to obtain the camera parameters from image feature matches. We then coarsely register the reconstructed 3D model to the scanned one through the Scale Iterative Closest Point (SICP) algorithm. SICP provides the global scale, rotation and translation parameters with minimal manual user intervention. In the final processing stage, a local registration refinement algorithm optimizes the color projection of the aligned photos on the 3D object, removing the blurring/ghosting artefacts introduced by small inaccuracies during registration. The proposed pipeline handles real-world cases with a range of characteristics, from objects with low-level geometric features to complex ones.
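    Once SICP has matched point pairs between the reconstructed and scanned models, each iteration's scale/rotation/translation update reduces to a closed-form similarity estimate. The sketch below (Umeyama's method, not the paper's own implementation) assumes correspondences have already been found by nearest-neighbour search:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t mapping src -> dst
    in closed form (Umeyama), the core of one SICP update on matched pairs.
    src, dst: (n, 3) arrays of corresponding points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                 # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))  # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum() # global scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

A full SICP loop would alternate this estimate with re-computing nearest-neighbour correspondences until the alignment converges.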

    How accurate are the fusion of Cone-beam CT and 3-D stereophotographic images?

    Background: Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were 1) to evaluate the feasibility of integrating 3-D photos and CBCT images, 2) to assess the degree of error that may occur during the above processes and 3) to identify facial regions that would be most appropriate for 3-D image registration. Methodology: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photo were recorded as the signed average and the Root Mean Square (RMS) error. Principal Findings: The signed average and RMS of the distance differences between the registered surfaces were -0.018 (±0.129) mm and 0.739 (±0.239) mm respectively. Most errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. Conclusions: CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to the RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning. © 2012 Jayaratne et al.
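    The study's finding that the signed average under-represents the registration error follows directly from sign cancellation: positive and negative surface distances average toward zero while the RMS does not. A toy sketch of the two statistics, with hypothetical distance values:

```python
import numpy as np

def registration_errors(distances):
    """Signed average and RMS of per-vertex surface distances.
    Positive and negative deviations cancel in the signed mean,
    which is why the RMS is the more honest error summary."""
    d = np.asarray(distances, dtype=float)
    signed_avg = d.mean()
    rms = np.sqrt((d ** 2).mean())
    return signed_avg, rms
```

For distances of ±0.5 mm the signed average is exactly zero even though every vertex is 0.5 mm off, which the RMS reports correctly.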

    Reconstruction of 3D human facial images using partial differential equations.

    One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality applications, facial expression simulation and computer-based plastic surgery applications. This paper addresses a method for the construction of the 3D geometry of human faces based on the use of Elliptic Partial Differential Equations (PDEs). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained with a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we have shown the efficiency of the PDE-based method for 3D facial surface reconstruction from scan data. In addition, we show that our approach provides an efficient way of representing a face using a small set of parameters that could be utilized for efficient facial data storage and verification purposes.
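    As a rough illustration of the patch construction, four boundary curves pin down a height field whose interior is filled by solving an elliptic PDE. Laplace's equation is used here as the simplest member of the family, not the paper's exact (typically fourth-order) PDE, and the regular-grid discretization is an assumption for illustration:

```python
import numpy as np

def pde_patch(top, bottom, left, right, iters=2000):
    """Fill an n-by-n height-field patch from four boundary curves by
    solving Laplace's equation with Jacobi iteration (Dirichlet data)."""
    n = len(top)
    z = np.zeros((n, n))
    z[0, :], z[-1, :] = top, bottom     # fix the four boundary curves
    z[:, 0], z[:, -1] = left, right
    for _ in range(iters):
        # each interior value relaxes to the average of its neighbours
        z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1]
                                + z[1:-1, :-2] + z[1:-1, 2:])
    return z
```

In the paper's setting the same idea is applied per coordinate to curves in 3-space, so a whole patch is encoded by just its four boundary curves, which is where the compact parameterization comes from.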

    Automatic features characterization from 3d facial images.

    This paper presents a novel and computationally fast method for the automatic identification of the symmetry profile from 3D facial images. The algorithm is based on concepts of computational geometry, which yield fast and accurate results. In order to detect the symmetry profile of a human face, the tip of the nose is identified first. Assuming that the symmetry plane passes through the tip of the nose, the symmetry profile is then extracted. This is undertaken by computing the intersection between the symmetry plane and the facial mesh, resulting in a planar curve that accurately represents the symmetry profile. Experimentation using two different 3D face databases was carried out, yielding fast and accurate results.
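    The nose-tip-plus-plane-intersection idea can be sketched in a few lines. The choices below (nose tip as the vertex of maximal +z depth, a vertical symmetry plane x = x_nose) are illustrative assumptions, not details from the paper, which estimates the plane more carefully:

```python
import numpy as np

def symmetry_profile(vertices, faces):
    """Sketch: take the nose tip as the vertex of maximal depth
    (+z toward the camera), assume a vertical symmetry plane through it,
    and collect plane/triangle-edge intersections tracing the profile.
    vertices: (n, 3) array; faces: (m, 3) vertex-index array."""
    tip = vertices[np.argmax(vertices[:, 2])]
    x0 = tip[0]                       # plane x = x0
    points = []
    for tri in faces:
        for a, b in ((0, 1), (1, 2), (2, 0)):
            p, q = vertices[tri[a]], vertices[tri[b]]
            da, db = p[0] - x0, q[0] - x0
            if da * db < 0:           # edge crosses the symmetry plane
                t = da / (da - db)
                points.append(p + t * (q - p))
    return tip, np.array(points)
```

Sorting the collected points along the face (e.g. by y) would turn them into the ordered planar profile curve.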

    Animated statues


    3D Human Face Reconstruction and 2D Appearance Synthesis

    3D human face reconstruction has been an extensive research topic for decades due to its wide applications, such as animation, recognition and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains significantly valuable, as images are much easier to access and store. In this dissertation, we first propose three image-based face reconstruction approaches according to different assumptions about the input. In the first approach, face geometry is extracted from multiple key frames of a video sequence with different head poses; the camera must be calibrated under this assumption. As the first approach is limited to videos, our second approach focuses on a single image. This approach also refines the geometry by adding fine-grained detail from shading cues, and we propose a novel albedo estimation and linear optimization algorithm for it. In the third approach, we further loosen the constraints on the input to arbitrary in-the-wild images. The proposed approach robustly reconstructs high-quality models even with extreme expressions and large poses. We then explore the applicability of our face reconstructions in four applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylizing and video face replacement, demonstrating the potential of our reconstruction approaches in real-world settings. In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs). However, the large occlusion of the face is a major obstacle to face-to-face communication. In a further application, we therefore explore hardware/software solutions for synthesizing the face image in the presence of an HMD. We design two setups (experimental and mobile) that integrate two near-IR cameras and one color camera to solve this problem.
    With our algorithm and prototype, we achieve photo-realistic results. We further propose a deep neural network that treats HMD removal as a face inpainting problem; this approach needs no special hardware and runs in real time with satisfying results.

    Constructing a 3D individualized head model from two orthogonal views


    Uses of uncalibrated images to enrich 3D models information

    The decrease in the cost of semi-professional digital cameras has made it possible for everyone to acquire a very detailed description of a scene in a very short time. Unfortunately, interpreting the images is usually quite hard, due to the amount of data and the lack of robust, generic image-analysis methods. Nevertheless, if a geometric description of the depicted scene is available, it becomes much easier to extract information from 2D data. This information can be used to enrich the quality of the 3D data in several ways. In this thesis, several uses of sets of unregistered images for the enrichment of 3D models are shown. In particular, two fields of application are presented: color acquisition, projection and visualization, and geometry modification. Regarding color management, several practical and cheap solutions to overcome the main issues in this field are presented. Moreover, some real applications, mainly related to Cultural Heritage, show that the provided methods are robust and effective. In the context of geometry modification, two approaches to modifying already existing 3D models are presented. In the first, information extracted from images is used to deform a dummy model into an accurate 3D head model, used for simulation in the context of three-dimensional audio rendering. The second approach presents a method to fill holes in 3D models using registered images depicting a pattern projected onto the real object. Finally, some indications about possible future work in all the presented fields are given, in order to delineate the developments of this promising direction of research.

    Efficient, image-based appearance acquisition of real-world objects

    Two ingredients are necessary to synthesize realistic images: an accurate rendering algorithm and, equally important, high-quality models in terms of geometry and reflection properties. In this dissertation we focus on capturing the appearance of real-world objects. The acquired model must represent both the geometry and the reflection properties of the object in order to create new views of the object under novel illumination. Starting from scanned 3D geometry, we measure the reflection properties (BRDF) of the object from images taken under known viewing and lighting conditions. The BRDF measurement requires only a small number of input images and is made even more efficient by a view planning algorithm. In particular, we propose algorithms for efficient image-to-geometry registration, and an image-based measurement technique to reconstruct spatially varying materials from a sparse set of images using a point light source. Moreover, we present a view planning algorithm that calculates camera and light source positions for optimal quality and efficiency of the measurement process. Relightable models of real-world objects are in demand in various fields such as movie production, e-commerce, digital libraries, and virtual heritage.
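    Restricted to its simplest, Lambertian special case, this kind of image-based reflectance measurement reduces to a per-point one-dimensional least squares over the observed intensities. The sketch below is illustrative only (function name and setup are hypothetical), and it assumes the surface normal and light directions are already known from the registered geometry:

```python
import numpy as np

def fit_albedo(intensities, normal, lights):
    """Least-squares diffuse-albedo fit at one surface point from several
    images under known directional lights: I_k = rho * max(0, n . l_k).
    intensities: (k,) observed values; normal: unit (3,); lights: list of (3,)."""
    shading = np.array([max(normal @ (l / np.linalg.norm(l)), 0.0)
                        for l in lights])
    mask = shading > 0                 # ignore shadowed observations
    num = intensities[mask] @ shading[mask]
    den = shading[mask] @ shading[mask]
    return float(num / den)
```

A spatially varying material model repeats a fit like this (with a richer BRDF) at every surface point, which is why a view/light planning step that maximizes usable observations per image pays off.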

    Extracting Depth Information From Photographs of Faces

    Recently, new methods of recovering the 3D appearance of objects, such as stereo-imaging sensors, laser scanners, and range-imaging sensors, provide automatic tools for obtaining the 3D appearance of an object, but they require the presence of the object. When only photographic images are available, it is still possible to reconstruct the 3D appearance of the object if a reference model is also available. The human face is very popular with researchers working on problems including facial recognition, animation, composition, and modelling. However, it is rare to find attempts to reconstruct shape from single photographic images of human faces, although there are numerous methods to solve the shape-from-shading (SFS) problem to date. This thesis describes a novel geometrical approach to reconstructing the original face from a very impoverished facial model and a single Lambertian image. This thesis also introduces a different approach to the SFS problem in that it uses prior knowledge of the object, the so-called shape-from-prior-knowledge approach, and addresses the question of what degree of impoverishment is sufficient to compromise the reconstruction. Most surfaces recovered using conventional SFS methods suffer from flattening, so that they cannot be viewed from other directions. We believe that this flatness is due to the lack of geometric knowledge of the subject to be recovered. In this thesis, it is also argued that our approach improves upon existing SFS techniques, because a reconstructed face looks correct even when it is turned to a different orientation from the one in the input image.
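    The Lambertian image-formation model underlying SFS is compact enough to state in code. The sketch below is a generic rendering of that model, not the thesis's reconstruction method; it also illustrates why the problem is ill-posed, since one intensity per pixel constrains only one of the normal's two degrees of freedom:

```python
import numpy as np

def lambertian_render(normals, light, albedo=1.0):
    """Lambertian image formation I = albedo * max(0, n . l).
    normals: (k, 3) unit surface normals; light: (3,) light direction.
    Many distinct normals produce the same intensity, which is the
    ambiguity that prior shape knowledge is used to resolve."""
    l = np.asarray(light, dtype=float)
    l = l / np.linalg.norm(l)
    shading = normals @ l
    return albedo * np.clip(shading, 0.0, None)
```

Inverting this map pixel-by-pixel without a prior tends to bias normals toward the light direction, which is one intuition for the "flattened" reconstructions the abstract describes.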