
    State of the Art in Example-based Texture Synthesis

    Recent years have witnessed significant progress in example-based texture synthesis algorithms. Given an example texture, these methods produce a larger texture that is tailored to the user's needs. In this state-of-the-art report, we aim to achieve three goals: (1) provide a tutorial that is easy to follow for readers who are not already familiar with the subject, (2) make a comprehensive survey and comparison of different methods, and (3) sketch a vision for future work that can help motivate and guide readers who are interested in texture synthesis research. We cover fundamental algorithms as well as extensions and applications of texture synthesis.
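    As a concrete illustration of the fundamental per-pixel family such surveys cover, the sketch below grows an output image in scanline order, copying each pixel from the exemplar location whose causal neighbourhood matches best (in the spirit of Efros-Leung and Wei-Levoy). This is a brute-force teaching sketch, not a method from the report; real implementations accelerate the neighbourhood search with tree structures or PCA.

        import numpy as np

        def synthesize(exemplar, size, n=5, seed=0):
            # Scanline, per-pixel synthesis over a toroidal output: each pixel is
            # copied from the exemplar position whose causal (already visited,
            # L-shaped) neighbourhood matches best. Brute force, so only
            # practical for tiny exemplars.
            rng = np.random.default_rng(seed)
            eh, ew = exemplar.shape[:2]
            out = exemplar[rng.integers(0, eh, (size, size)),
                           rng.integers(0, ew, (size, size))]  # random noise seed
            h = n // 2
            causal = [(dy, dx) for dy in range(-h, 1) for dx in range(-h, h + 1)
                      if (dy, dx) < (0, 0)]  # pixels above and to the left
            for y in range(size):
                for x in range(size):
                    best, best_d = (h, h), np.inf
                    for ey in range(h, eh - h):
                        for ex in range(h, ew - h):
                            d = sum(np.sum((out[(y + dy) % size, (x + dx) % size]
                                            .astype(float)
                                            - exemplar[ey + dy, ex + dx]) ** 2)
                                    for dy, dx in causal)
                            if d < best_d:
                                best_d, best = d, (ey, ex)
                    out[y, x] = exemplar[best]
            return out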

    Novel Views of Objects from a Single Image

    Taking an image of an object is at its core a lossy process. The rich information about the three-dimensional structure of the world is flattened to an image plane, and decisions such as viewpoint and camera parameters are final and not easily reversible. As a consequence, the possibilities for changing the viewpoint are limited. Given a single image depicting an object, novel-view synthesis is the task of generating new images that render the object from a different viewpoint than the given one. The main difficulty is to synthesize the parts that are disoccluded; disocclusion occurs when parts of an object are hidden by the object itself under a specific viewpoint. In this work, we show how to improve novel-view synthesis by making use of the correlations observed in 3D models and applying them to new image instances. We propose a technique to use the structural information extracted from a 3D model that matches the image object in terms of viewpoint and shape. For the latter, we propose an efficient 2D-to-3D alignment method that precisely associates the image appearance with the 3D model geometry with minimal user interaction. Our technique is able to simulate plausible viewpoint changes for a variety of object classes within seconds. Additionally, we show that our synthesized images can be used as additional training data that improves the performance of standard object detectors.
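    A hedged sketch of the geometric core of such viewpoint simulation: once an aligned 3D model supplies a depth value per pixel, every pixel can be back-projected to 3D and re-projected into a new camera. The pinhole model, the names, and the omission of z-buffering are assumptions, not details from the paper; notably, the holes left by the forward warp are exactly the disoccluded regions that must be synthesized.

        import numpy as np

        def forward_warp(image, depth, K, R, t):
            # image: (h, w, 3); depth: (h, w) per-pixel depth from the aligned
            # 3D model; K: 3x3 intrinsics; R, t: pose of the new view.
            h, w = depth.shape
            ys, xs = np.indices((h, w))
            pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # 3 x N
            pts = np.linalg.inv(K) @ (pix * depth.ravel())            # back-project
            proj = K @ (R @ pts + t.reshape(3, 1))                    # re-project
            u, v = (proj[:2] / proj[2]).round().astype(int)
            ok = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
            out = np.zeros_like(image)            # unwritten pixels stay black:
            out[v[ok], u[ok]] = image.reshape(-1, 3)[ok]  # the disocclusions
            return out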

    Modelling appearance and geometry from images

    Acquisition of realistic and relightable 3D models of large outdoor structures, such as buildings, requires the modelling of detailed geometry and visual appearance. Recovering these material characteristics can be very time consuming and needs specially dedicated equipment. Alternatively, surface detail can be conveyed by textures recovered from images, whose appearance is only valid under the originally photographed viewing and lighting conditions. Methods to easily capture locally detailed geometry, such as cracks in stone walls, and visual appearance require control of lighting conditions, and are usually restricted to small portions of surfaces captured at close range. This thesis investigates the acquisition of high-quality models from images, using simple photographic equipment and modest user intervention. The main focus of this investigation is on approximating detailed local depth information and visual appearance, obtained using a new image-based approach, and combining this with gross-scale 3D geometry. This is achieved by capturing these surface characteristics in small accessible regions and transferring them to the complete façade. This approach yields high-quality models, imparting the illusion of measured reflectance. In this thesis, we first present two novel algorithms for surface detail and visual appearance transfer, where these material properties are captured for small exemplars using an image-based technique. Second, we develop an interactive solution to the problems of performing the transfer both over a large change in scale and to the different materials contained in a complete façade. Aiming to completely automate this process, a novel algorithm to differentiate between the materials in the façade and associate them with the correct exemplars is introduced, with promising results. Third, we present a new method for texture reconstruction from multiple images that optimises texture quality by choosing the best view for every point and minimising seams. Material properties are transferred from the exemplars to the texture map, approximating reflectance and meso-structure. The combination of these techniques results in a complete working system capable of producing realistic relightable models of full building façades, containing high-resolution geometry and plausible visual appearance.
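    In its simplest data term, the per-point best-view choice described above reduces to picking the camera that sees each surface point most head-on. Below is a minimal sketch under that assumption; visibility testing and the seam-minimisation term (often a graph cut over neighbouring points in such systems) are omitted, and none of the names come from the thesis.

        import numpy as np

        def best_view(points, normals, cam_centers):
            # points, normals: (P, 3); cam_centers: (C, 3).
            # Score each (point, camera) pair by the cosine between the surface
            # normal and the direction towards the camera; pick the best camera.
            dirs = cam_centers[None, :, :] - points[:, None, :]   # (P, C, 3)
            dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
            score = np.einsum('pck,pk->pc', dirs, normals)
            return score.argmax(axis=1)                           # camera per point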

    Texture Transfer Based on Texture Descriptor Variations

    In this report, we tackle the problem of image-space texture transfer, which aims to modify an object's or surface's material by replacing its input texture with a reference texture. The main challenge of texture transfer is to reproduce the reference texture patterns faithfully while preserving the variations of the input texture due to its environment, such as illumination or shape variations. We propose to use a texture descriptor composed of local luminance and local gradient orientation and magnitude to characterize the input texture variations. We then introduce a guided texture synthesis algorithm to synthesize a texture resembling the reference texture while exhibiting the input texture variations. The main contribution of our algorithm is its ability to locally deform the reference texture according to the local texture descriptors in order to better reproduce the input texture variations. We show that our approach produces results comparable with current state-of-the-art approaches but with fewer user inputs.
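    A rough sketch of a descriptor of that flavour: smoothed local luminance plus local gradient magnitude and orientation. The filter choices and the doubled-angle encoding (which removes the 180-degree sign ambiguity of gradient orientation) are assumptions, not the report's exact definition. During guided synthesis, candidate patches from the reference would then be ranked by descriptor distance rather than raw colour distance.

        import numpy as np
        from scipy import ndimage

        def texture_descriptor(gray, sigma=2.0):
            # gray: 2-D image. Returns an (h, w, 4) per-pixel descriptor:
            # local luminance, local gradient magnitude, and gradient orientation
            # encoded as (cos 2a, sin 2a) so Euclidean distance wraps correctly.
            gray = gray.astype(float)
            lum = ndimage.gaussian_filter(gray, sigma)
            gx = ndimage.sobel(gray, axis=1)
            gy = ndimage.sobel(gray, axis=0)
            mag = ndimage.gaussian_filter(np.hypot(gx, gy), sigma)
            ang = np.arctan2(gy, gx)
            return np.stack([lum, mag, np.cos(2 * ang), np.sin(2 * ang)], axis=-1)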

    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we look at the different methods presented over the past few decades that attempt to recreate paintings digitally. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce different output painting styles, such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require only simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist.