
    Transfer of albedo and local depth variation to photo-textures

    Acquisition of displacement and albedo maps for full building façades is a difficult problem, traditionally tackled through a labor-intensive artistic process. In this paper, we present a material appearance transfer method, Transfer by Analogy, designed to infer surface detail and diffuse reflectance for textured surfaces like those present in building façades. We begin by acquiring small exemplars (displacement and albedo maps) in accessible areas, where capture conditions can be controlled. We then transfer these properties to a complete photo-texture constructed from reference images captured under diffuse daylight illumination. Our approach allows super-resolution inference of albedo and displacement from information in the photo-texture. When transferring appearance from multiple exemplars to façades containing multiple materials, our approach also sidesteps the need for segmentation. We show how these methods let us create relightable models with a high degree of texture detail, reproducing the visually rich self-shadowing effects that would normally be difficult to capture using simple consumer equipment. Copyright © 2012 by the Association for Computing Machinery, Inc.
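The exemplar-to-photo-texture transfer described above can be pictured as a nearest-neighbour patch lookup. The sketch below is a minimal illustration under our own assumptions, not the paper's actual algorithm: the function name `transfer_by_analogy`, the single-channel images, and the transfer of a single property (albedo only, at the same resolution) are all simplifications; the real method also infers displacement and operates at super-resolution.

```python
import numpy as np

def transfer_by_analogy(exemplar_photo, exemplar_albedo, target_photo, patch=3):
    """Toy analogy transfer: for each pixel of the target photo-texture,
    find the exemplar photo patch with the closest appearance and copy
    the exemplar's albedo at that patch's centre."""
    r = patch // 2
    ep = np.pad(exemplar_photo, r, mode='edge')
    tp = np.pad(target_photo, r, mode='edge')
    H, W = exemplar_photo.shape
    # Build a flat library of exemplar patches and their centre albedos.
    lib, vals = [], []
    for i in range(H):
        for j in range(W):
            lib.append(ep[i:i + patch, j:j + patch].ravel())
            vals.append(exemplar_albedo[i, j])
    lib = np.asarray(lib)
    vals = np.asarray(vals)
    out = np.zeros_like(target_photo, dtype=float)
    h, w = target_photo.shape
    for i in range(h):
        for j in range(w):
            q = tp[i:i + patch, j:j + patch].ravel()
            k = np.argmin(((lib - q) ** 2).sum(axis=1))  # nearest exemplar patch
            out[i, j] = vals[k]
    return out
```

A brute-force scan like this is quadratic in image size; practical analogy-transfer systems replace it with an approximate nearest-neighbour search over the patch library.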

    Rich Intrinsic Image Separation for Multi-View Outdoor Scenes

    Intrinsic images aim at separating an image into its reflectance and illumination components to facilitate further analysis or manipulation. This separation is severely ill-posed, and the most successful methods rely on user indications or precise geometry to resolve the ambiguities inherent to this problem. In this paper we propose a method to estimate intrinsic images from multiple views of an outdoor scene without the need for precise geometry or involved user intervention. We use multi-view stereo to automatically reconstruct a 3D point cloud of the scene. Although this point cloud is sparse and incomplete, we show that it provides the necessary information to compute plausible sky and indirect illumination at each 3D point. We then introduce an optimization method to estimate sun visibility over the point cloud. This algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. We finally propagate the information computed over the sparse point cloud to every pixel in the photograph using image-guided propagation. Our propagation not only separates reflectance from illumination, but also decomposes the illumination into a sun, sky and indirect layer. This rich decomposition allows novel image manipulations as demonstrated by our results.
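The layered decomposition described above suggests a simple per-pixel image-formation model. The sketch below is our own assumption, not the paper's formulation: it treats pixel intensity as reflectance times the sum of a visibility-gated sun layer, a sky layer and an indirect layer, and shows how reflectance can be read back out once the illumination layers are known.

```python
import numpy as np

def compose(R, sun, sky, indirect, vis):
    """Assumed forward model: I = R * (vis * sun + sky + indirect),
    where vis is the per-pixel sun-visibility term (0 in shadow, 1 in sun)."""
    return R * (vis * sun + sky + indirect)

def reflectance(I, sun, sky, indirect, vis, eps=1e-6):
    """Invert the model for reflectance, guarding against division by
    near-zero total illumination."""
    return I / np.maximum(vis * sun + sky + indirect, eps)
```

In the paper's pipeline the sun-visibility term is what the optimization over the sparse point cloud estimates; here it is simply given as an input.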

    Architectural visualisation toolkit for 3D Studio Max users

    Architectural visualisation has become a vital part of the design process for architects and engineers. The process of modelling and rendering an architectural visualisation can be complex and time-consuming, with only a few tools available to assist novice modellers. This thesis reviews available solutions for visualisation specialists, including AutoCAD, 3D Studio Max and Google SketchUp, as well as solutions that attempt to automate the process, such as Batzal Roof Designer. It then details a new program developed to automate the modelling and rendering stages of the architectural visualisation process. The tool created for this thesis is written in MAXScript and runs alongside 3D Studio Max. N.B.: Audio files were attached to this thesis at the time of its submission. Please refer to the author for further details.

    Relightable Buildings from Images

    This is a Eurographics 2011 conference paper