38 research outputs found

    Modelling appearance and geometry from images

    Acquisition of realistic and relightable 3D models of large outdoor structures, such as buildings, requires the modelling of detailed geometry and visual appearance. Recovering these material characteristics can be very time-consuming and needs specially dedicated equipment. Alternatively, surface detail can be conveyed by textures recovered from images, whose appearance is only valid under the originally photographed viewing and lighting conditions. Methods to easily capture locally detailed geometry, such as cracks in stone walls, together with visual appearance, require control of lighting conditions and are usually restricted to small portions of surfaces captured at close range. This thesis investigates the acquisition of high-quality models from images, using simple photographic equipment and modest user intervention. The main focus of this investigation is on approximating detailed local depth information and visual appearance, obtained using a new image-based approach, and combining this with gross-scale 3D geometry. This is achieved by capturing these surface characteristics in small accessible regions and transferring them to the complete façade. This approach yields high-quality models, imparting the illusion of measured reflectance. In this thesis, we first present two novel algorithms for surface detail and visual appearance transfer, where these material properties are captured for small exemplars using an image-based technique. Second, we develop an interactive solution to the problems of performing the transfer over both a large change in scale and to the different materials contained in a complete façade. Aiming to completely automate this process, a novel algorithm to differentiate between materials in the façade and associate them with the correct exemplars is introduced, with promising results. Third, we present a new method for texture reconstruction from multiple images that optimises texture quality by choosing the best view for every point and minimising seams. Material properties are transferred from the exemplars to the texture map, approximating reflectance and meso-structure. The combination of these techniques results in a complete working system capable of producing realistic relightable models of full building façades, containing high-resolution geometry and plausible visual appearance.
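    The best-view selection described above can be illustrated with a minimal sketch (our own simplified criterion in Python, not the thesis's algorithm: the view most head-on to the surface point wins, while occlusion checks and seam minimisation are omitted):

```python
import numpy as np

def best_view(point, normal, cameras):
    """Return the index of the camera seeing the point most head-on.

    Scores each camera by the cosine between the surface normal and the
    direction from the point towards the camera centre; the largest
    cosine (most frontal view) wins. Occlusion handling is omitted.
    """
    best, best_score = -1, -1.0
    for idx, cam in enumerate(cameras):
        to_cam = cam - point
        to_cam = to_cam / np.linalg.norm(to_cam)
        score = float(np.dot(normal, to_cam))  # cos of viewing angle
        if score > best_score:
            best, best_score = idx, score
    return best

# Toy setup: a point on a z-up surface, one grazing and one frontal camera.
point = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 0.0, 1.0])
cameras = [np.array([5.0, 0.0, 1.0]),   # grazing view
           np.array([0.0, 0.0, 5.0])]   # frontal view
assert best_view(point, normal, cameras) == 1
```

    In a full texture-reconstruction pipeline this per-point choice would be regularised across neighbouring points to keep seams short, as the abstract describes.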

    Unsupervised detection and localization of structural textures using projection profiles

    The main goal of existing approaches for structural texture analysis has been the identification of repeating texture primitives and their placement patterns in images containing a single type of texture. We describe a novel unsupervised method for the simultaneous detection and localization of multiple structural texture areas, along with estimates of their orientations and scales, in real images. First, multi-scale isotropic filters are used to enhance potential texton locations. Then, the regularity of the textons is quantified in terms of the periodicity of projection profiles of filter responses within sliding windows at multiple orientations. Next, a regularity index is computed for each pixel as the maximum regularity score together with its orientation and scale. Finally, thresholding of this regularity index produces accurate localization of structural textures in images containing different kinds of textures as well as non-textured areas. Experiments using three different data sets show the effectiveness of the proposed method in complex scenes.
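    The regularity measurement at the heart of this method, the periodicity of projection profiles, can be illustrated with a toy sketch (our own simplified code, not the authors' implementation; the filter bank, sliding windows at multiple orientations, and the final thresholding are omitted):

```python
import numpy as np

def profile_regularity(window: np.ndarray) -> float:
    """Periodicity score of the projection profile of a response window.

    The 2D filter responses are projected (summed) onto one axis; a
    regular texton grid yields a periodic profile, which shows up as a
    strong secondary peak in the profile's normalised autocorrelation.
    """
    profile = window.sum(axis=0)
    profile = profile - profile.mean()
    ac = np.correlate(profile, profile, mode="full")[profile.size - 1:]
    if ac[0] <= 0:                    # flat profile: no structure at all
        return 0.0
    ac = ac / ac[0]                   # normalise by zero-lag energy
    # strongest peak at a non-trivial lag approximates the periodicity
    return float(ac[2:].max()) if ac.size > 2 else 0.0

# A periodic "texton" pattern scores higher than random noise.
periodic = np.tile(np.array([[0., 0., 1., 0.],
                             [0., 0., 1., 0.]]), (1, 8))  # texton every 4 px
noise = np.random.default_rng(0).random((2, 32))
assert profile_regularity(periodic) > profile_regularity(noise)
```

    In the actual method this score would be computed per pixel over sliding windows at multiple orientations and scales, keeping the maximum.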

    Real-time Terrain Mapping

    We present an interactive, real-time mapping system for digital elevation maps (DEMs), which allows Earth scientists to map, and therefore understand, the deformation of the continental crust at length scales of 10 m to 1000 km. Our system visualizes the surface of the Earth as a 3D surface generated from a DEM, with a color texture generated from a registered multispectral image and vector-based mapping elements draped over it. We use a quadtree-based multiresolution method to render high-resolution terrain mapping data sets of large spatial regions in real time. The main strength of our system is the combination of interactive rendering and interactive mapping directly onto the 3D surface, with the ability to navigate the terrain and to change viewpoints arbitrarily during mapping. User studies and comparisons with commercially available mapping software show that our system improves mapping accuracy and efficiency, and also enables qualitatively different observations that are not possible with existing systems.
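    The quadtree-based multiresolution idea can be sketched as follows (an illustrative toy, not the system's implementation; the node layout, the screen-space error metric, and the tolerance value are our own assumptions):

```python
from dataclasses import dataclass

@dataclass
class Node:
    x: float      # lower-left corner of the square tile
    y: float
    size: float   # tile edge length in world units
    error: float  # world-space geometric error of this resolution level

def visible_nodes(node, viewer, tol, max_depth=8, depth=0):
    """Collect tiles to render: refine while projected error is too high.

    A tile is subdivided while error/distance (a crude screen-space error
    proxy) exceeds the tolerance, so distant terrain is drawn coarsely and
    nearby terrain at full resolution.
    """
    cx, cy = node.x + node.size / 2, node.y + node.size / 2
    dist = max(1e-6, ((cx - viewer[0])**2 + (cy - viewer[1])**2) ** 0.5)
    if depth >= max_depth or node.error / dist <= tol:
        return [node]
    half, out = node.size / 2, []
    for dx in (0.0, half):
        for dy in (0.0, half):
            child = Node(node.x + dx, node.y + dy, half, node.error / 2)
            out += visible_nodes(child, viewer, tol, max_depth, depth + 1)
    return out

root = Node(0.0, 0.0, 1024.0, error=64.0)
near = visible_nodes(root, viewer=(10.0, 10.0), tol=0.05)
far = visible_nodes(root, viewer=(100000.0, 100000.0), tol=0.05)
assert len(near) > len(far)   # more refinement near the viewer
```

    A production system would additionally stream tile data asynchronously and cull tiles against the view frustum.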

    Surface Appearance Estimation from Video Sequences

    The realistic virtual reproduction of real-world objects using Computer Graphics techniques requires the accurate acquisition and reconstruction of both 3D geometry and surface appearance. Unfortunately, in several application contexts, such as Cultural Heritage (CH), reflectance acquisition can be very challenging due to the type of object to acquire and the digitization conditions. Although several methods have been proposed for the acquisition of object reflectance, some intrinsic limitations still make its acquisition a complex task for CH artworks: the use of specialized instruments (a dome, a special setup for camera and light source, etc.); the need for highly controlled acquisition environments, such as a dark room; the difficulty of extending to objects of arbitrary shape and size; and the high level of expertise required to assess the quality of the acquisition. This Ph.D. thesis proposes novel solutions for the acquisition and estimation of surface appearance in fixed and uncontrolled lighting conditions, at several degrees of approximation (from a perceived near-diffuse color to an SVBRDF), taking advantage of the main features that differentiate a video sequence from an unordered photo collection: temporal coherence; data redundancy; and ease of acquisition, which allows many views of the object to be captured in a short time. Finally, Reflectance Transformation Imaging (RTI) is an example of a widely used technology for the acquisition of surface appearance in the CH field, even if limited to single-view Reflectance Fields of nearly flat objects. In this context, the thesis also addresses two important issues in RTI usage: how to provide better and more flexible virtual inspection capabilities, with a set of operators that improve the perception of details, features, and the overall shape of the artwork; and how to improve the dissemination of this data and support remote visual inspection by both scholars and the general public.
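    One way the temporal redundancy of video can support a near-diffuse colour estimate is sketched below (an illustrative toy of our own, not the thesis's method: a robust per-pixel percentile over aligned frames suppresses specular highlights, which move with the viewpoint):

```python
import numpy as np

def diffuse_from_frames(frames: np.ndarray, percentile: float = 40.0) -> np.ndarray:
    """frames: (T, H, W) aligned grey frames -> (H, W) diffuse estimate.

    A low percentile over time rejects the bright, transient values that a
    moving specular highlight contributes at each pixel.
    """
    return np.percentile(frames, percentile, axis=0)

# Toy example: a constant 0.5 surface with a highlight that visits a
# different pixel in each frame. The percentile removes the outliers.
frames = np.full((5, 4, 4), 0.5)
for t in range(5):
    frames[t, t % 4, t % 4] = 1.0        # moving specular spike
est = diffuse_from_frames(frames)
assert np.allclose(est, 0.5, atol=0.05)
```

    A real pipeline would first align the frames to a common parameterisation of the surface; here the frames are assumed pre-aligned.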

    Quantifying Texture Scale in Accordance With Human Perception

    Visual texture has multiple perceptual attributes (e.g. regularity and isotropy), including scale. The scale of a visual texture has been defined as the size of the repeating unit (or texel) of which the texture is composed. However, not all textures are formed through the placement of a clearly discernible repeating unit (e.g. irregular and stochastic textures), and there is currently no rigorous definition of texture scale that is applicable to textures across a wide range of regularities. We hypothesised that texture scale ought to extend to these less regular textures. Non-overlapping sample windows (or patches) taken from a texture appear increasingly similar as the size of the window gets larger, irrespective of whether the texture is formed by the placement of a discernible repeating unit or not. We therefore propose the following new characterisation for texture scale: “the smallest window size beyond which the texture appears consistent”. We perform two psychophysical studies and report data that demonstrate consensus across subjects and across methods of probing in the assessment of texture scale. We then present an empirical algorithm for the estimation of scale based on this characterisation, and demonstrate agreement between the algorithm and (subjective) human assessment with an RMS accuracy of 1.2 just-noticeable differences, a significant improvement over previously published algorithms. We provide two ground-truth perceptual datasets, one for each of our psychophysical studies, covering the texture scale of the entire Brodatz album, together with confidence levels for each of our estimates. Finally, we make available an online tool which researchers can use to obtain texture scale estimates by uploading images of textures.
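    The proposed characterisation lends itself to a simple illustration (a hypothetical sketch, not the paper's empirical algorithm; the histogram comparison and the 0.9 threshold are our own assumptions):

```python
import numpy as np

def patch_similarity(img: np.ndarray, win: int) -> float:
    """Mean pairwise histogram intersection of non-overlapping win x win patches."""
    h, w = img.shape
    hists = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            hist, _ = np.histogram(img[i:i + win, j:j + win],
                                   bins=16, range=(0, 1))
            hists.append(hist / hist.sum())
    n = len(hists)
    sims = [np.minimum(hists[a], hists[b]).sum()
            for a in range(n) for b in range(a + 1, n)]
    return float(np.mean(sims))

def estimate_scale(img: np.ndarray, sizes, threshold: float = 0.9) -> int:
    """Smallest window size at which patches appear consistently similar."""
    for win in sizes:
        if patch_similarity(img, win) >= threshold:
            return win
    return sizes[-1]

# A checkerboard of 8-pixel squares: only windows of size >= 16 contain a
# full period, so smaller patches disagree and the estimated scale is 16.
tile = np.kron([[0, 1], [1, 0]], np.ones((8, 8)))
img = np.tile(tile, (4, 4))               # 64 x 64 checkerboard
assert estimate_scale(img, sizes=[4, 8, 16, 32]) == 16
```

    Histograms discard spatial layout, so this toy treats "appears consistent" purely as grey-level similarity; the paper's algorithm is calibrated against human judgements instead.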

    Image-Based Rendering Of Real Environments For Virtual Reality


    Efficient, image-based appearance acquisition of real-world objects

    Two ingredients are necessary to synthesize realistic images: an accurate rendering algorithm and, equally important, high-quality models in terms of geometry and reflection properties. In this dissertation we focus on capturing the appearance of real-world objects. The acquired model must represent both the geometry and the reflection properties of the object in order to create new views of the object under novel illumination. Starting from scanned 3D geometry, we measure the reflection properties (BRDF) of the object from images taken under known viewing and lighting conditions. The BRDF measurement requires only a small number of input images and is made even more efficient by a view planning algorithm. In particular, we propose algorithms for efficient image-to-geometry registration, and an image-based measurement technique to reconstruct spatially varying materials from a sparse set of images using a point light source. Moreover, we present a view planning algorithm that calculates camera and light source positions for optimal quality and efficiency of the measurement process. Relightable models of real-world objects are in demand in various fields such as movie production, e-commerce, digital libraries, and virtual heritage.
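    The kind of per-point fit that such calibrated measurements enable can be sketched as follows (our own simplified Lambertian-plus-Phong model with a fixed specular exponent; the dissertation's actual BRDF model and solver are more involved):

```python
import numpy as np

def fit_brdf(samples, shininess=32.0):
    """Fit diffuse and specular coefficients by linear least squares.

    samples: iterable of (n, l, v, radiance) with unit normal, light, and
    view vectors. For a fixed specular exponent, the Lambertian-plus-Phong
    model is linear in (k_d, k_s), so one lstsq call suffices.
    """
    A, b = [], []
    for n, l, v, radiance in samples:
        n_dot_l = max(0.0, float(np.dot(n, l)))
        r = 2.0 * np.dot(n, l) * n - l                 # mirror reflection of l
        spec = max(0.0, float(np.dot(r, v))) ** shininess * n_dot_l
        A.append([n_dot_l, spec])
        b.append(radiance)
    (k_d, k_s), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return float(k_d), float(k_s)

# Synthetic check: generate samples from known coefficients and recover them.
rng = np.random.default_rng(1)
n = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.0, 1.0])
samples = []
for _ in range(50):
    l = rng.normal(size=3)
    l[2] = abs(l[2])                                   # upper hemisphere
    l = l / np.linalg.norm(l)
    ndl = max(0.0, float(n @ l))
    r = 2.0 * (n @ l) * n - l
    radiance = 0.6 * ndl + 0.3 * max(0.0, float(r @ v)) ** 32 * ndl
    samples.append((n, l, v, radiance))
k_d, k_s = fit_brdf(samples)
assert abs(k_d - 0.6) < 1e-4 and abs(k_s - 0.3) < 1e-4
```

    Spatially varying materials, as in the dissertation, would repeat such a fit per surface point or per material cluster, which is where view planning pays off by maximising the useful samples per image.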

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefit in terms of reduced trauma, improved recovery, and shortened hospitalisation has been well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments that tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency, and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment; issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis therefore also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the methods proposed.