
    Transfer of albedo and local depth variation to photo-textures

    Acquisition of displacement and albedo maps for full building façades is a difficult problem, traditionally handled through a labor-intensive artistic process. In this paper, we present a material appearance transfer method, Transfer by Analogy, designed to infer surface detail and diffuse reflectance for textured surfaces like those present in building façades. We begin by acquiring small exemplars (displacement and albedo maps) in accessible areas, where capture conditions can be controlled. We then transfer these properties to a complete photo-texture constructed from reference images captured under diffuse daylight illumination. Our approach allows super-resolution inference of albedo and displacement from information in the photo-texture. When transferring appearance from multiple exemplars to façades containing multiple materials, our approach also sidesteps the need for segmentation. We show how we use these methods to create relightable models with a high degree of texture detail, reproducing the visually rich self-shadowing effects that would normally be difficult to capture using just simple consumer equipment. Copyright © 2012 by the Association for Computing Machinery, Inc.
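    The transfer step can be pictured as a patch-based nearest-neighbour lookup from the photo-texture into the exemplar: each photo-texture patch is matched against exemplar patches, and the albedo and displacement of the best match are copied over. The Python/NumPy sketch below illustrates only that general idea, not the authors' actual pipeline; the patch size, the luminance-only matching feature, the brute-force search, and the function names are simplifying assumptions.

        import numpy as np

        def extract_patches(img, size):
            """Collect every size x size patch of a 2-D array as a flat row vector."""
            h, w = img.shape
            rows = []
            for y in range(h - size + 1):
                for x in range(w - size + 1):
                    rows.append(img[y:y + size, x:x + size].ravel())
            return np.array(rows)

        def transfer_by_lookup(ex_lum, ex_albedo, ex_disp, target_lum, size=7):
            """For each patch of the photo-texture luminance, copy albedo and displacement
            from the centre pixel of the most similar exemplar patch (brute-force L2 on
            luminance only -- a simplifying assumption for illustration)."""
            half = size // 2
            ex_patches = extract_patches(ex_lum, size)
            eh, ew = ex_lum.shape
            cy, cx = np.meshgrid(np.arange(half, eh - half),
                                 np.arange(half, ew - half), indexing="ij")
            centres = np.stack([cy.ravel(), cx.ravel()], axis=1)

            th, tw = target_lum.shape
            albedo = np.zeros(target_lum.shape + ex_albedo.shape[2:])
            disp = np.zeros((th, tw))
            for y in range(half, th - half):
                for x in range(half, tw - half):
                    patch = target_lum[y - half:y + half + 1, x - half:x + half + 1].ravel()
                    best = np.argmin(np.sum((ex_patches - patch) ** 2, axis=1))
                    albedo[y, x] = ex_albedo[centres[best, 0], centres[best, 1]]
                    disp[y, x] = ex_disp[centres[best, 0], centres[best, 1]]
            return albedo, disp

    In practice, matching in a richer feature space and blending overlapping patches would be needed for super-resolution-quality output; the sketch keeps only the lookup structure.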

    Deliverable D2.2 of the PERSEE project: Analyse/Synthese de Texture

    Deliverable D2.2 of the ANR PERSEE project. This report was produced within the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D2.2 of the project. Its title: Analyse/Synthese de Texture (Texture Analysis/Synthesis).

    07171 Abstracts Collection -- Visual Computing -- Convergence of Computer Graphics and Computer Vision

    From 22.04. to 27.04.2007, the Dagstuhl Seminar 07171 "Visual Computing -- Convergence of Computer Graphics and Computer Vision" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Texture Structure Analysis

    Texture analysis plays an important role in applications like automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Similar to the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures.

    In order to use the best-performing visual attention model on textures, the most popular visual attention models are evaluated for their ability to predict visual saliency on textures. Since there is no publicly available database with ground-truth saliency maps on images with exclusive texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The metric is based on the observation that VSM characteristics differ between textures of differing regularity, and it combines two texture regularity scores, namely a textural similarity score and a spatial distribution score. In order to evaluate the performance of the proposed regularity metric, a texture regularity database, called RegTEX, is built as part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and outperforms some of the popular texture regularity metrics in predicting perceived regularity. The impact of the proposed metric in improving the performance of many image-processing applications is also presented.

    The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated by building a synthesized-textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed. The metric is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures; the perceived granularity is quantified through a new granularity metric proposed in this work. It is shown through subjective testing that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established.

    Dissertation/Thesis, Ph.D. Electrical Engineering, 201
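    As a rough illustration of how a saliency-based regularity score of this kind could be assembled, the Python sketch below combines a hypothetical textural-similarity term (correlation between patches around saliency peaks) with a hypothetical spatial-distribution term (evenness of the spacing between peaks). The peak detection, both terms, and the way they are multiplied together are assumptions made for illustration, not the metric proposed in the dissertation.

        import numpy as np
        from scipy import ndimage

        def saliency_peaks(vsm, size=15, top_k=30):
            """Pick local maxima of a visual saliency map (VSM) as candidate texel centres."""
            local_max = (vsm == ndimage.maximum_filter(vsm, size=size))
            ys, xs = np.nonzero(local_max)
            order = np.argsort(vsm[ys, xs])[::-1][:top_k]
            return np.stack([ys[order], xs[order]], axis=1)

        def regularity_score(texture, vsm, patch=16):
            """Toy regularity score: product of a textural-similarity term and a
            spatial-distribution term, both computed around saliency peaks."""
            pts = saliency_peaks(vsm)
            half = patch // 2
            h, w = texture.shape

            # Textural similarity: mean normalised correlation between patches at the peaks.
            patches = np.array([texture[y - half:y + half, x - half:x + half].ravel()
                                for y, x in pts
                                if half <= y < h - half and half <= x < w - half], dtype=float)
            if len(patches) < 2:
                return 0.0
            patches -= patches.mean(axis=1, keepdims=True)
            patches /= np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8
            corr = patches @ patches.T
            textural = np.clip(corr[np.triu_indices(len(patches), k=1)].mean(), 0.0, 1.0)

            # Spatial distribution: how evenly spaced the peaks are (low spread = regular).
            d = np.linalg.norm((pts[:, None, :] - pts[None, :, :]).astype(float), axis=-1)
            np.fill_diagonal(d, np.inf)
            nearest = d.min(axis=1)
            spatial = np.clip(1.0 - nearest.std() / (nearest.mean() + 1e-8), 0.0, 1.0)

            return float(textural * spatial)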

    Texture Transfer Based on Texture Descriptor Variations

    In this report, we tackle the problem of image-space texture transfer, which aims to modify an object or surface material by replacing its input texture with a reference texture. The main challenge of texture transfer is to reproduce the reference texture patterns while preserving the input texture variations due to its environment, such as illumination or shape variations. We propose to use a texture descriptor composed of local luminance and local gradient orientation and magnitude to characterize the input texture variations. We then introduce a guided texture synthesis algorithm to synthesize a texture that resembles the reference texture while carrying the input texture variations. The main contribution of our algorithm is its ability to locally deform the reference texture according to the local texture descriptors in order to better reproduce the input texture variations. We show that our approach produces results comparable with current state-of-the-art approaches while requiring less user input.
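    A descriptor of this kind, local luminance plus local gradient orientation and magnitude, can be computed per pixel with standard image-processing primitives. The sketch below uses OpenCV and NumPy; the smoothing scale, normalisation, and channel layout are assumptions rather than the report's implementation.

        import cv2
        import numpy as np

        def texture_descriptor(image_bgr, sigma=2.0):
            """Per-pixel descriptor: smoothed local luminance, gradient magnitude and
            gradient orientation. Scales and normalisation are illustrative choices."""
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

            # Local luminance: Gaussian-smoothed intensity.
            lum = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)

            # Local gradients via Sobel filters.
            gx = cv2.Sobel(lum, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(lum, cv2.CV_32F, 0, 1, ksize=3)
            magnitude = np.sqrt(gx * gx + gy * gy)
            orientation = np.arctan2(gy, gx)          # radians in [-pi, pi]

            return np.dstack([lum, magnitude, orientation])

    In a guided synthesis setting, reference patches would then be matched and locally deformed so that their descriptors agree with those of the input texture; that matching machinery is beyond this sketch.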

    MODELING AND ANALYSIS OF WRINKLES ON AGING HUMAN FACES

    The analysis and modeling of aging human faces has been extensively studied in the past decade. Most of this work is based on machine learning techniques focused on the appearance of faces at different ages, incorporating facial features such as face shape/geometry and patch-based texture features. However, little work has been done on the analysis of facial wrinkles, either in general or specific to a person. The goal of this dissertation is to analyse and model facial wrinkles for different applications. Facial wrinkles are challenging low-level image features to analyse. Skin texture has drastically varying appearance due to its characteristic physical properties: a skin patch looks very different when viewed or illuminated from different angles. This makes subtle skin features like facial wrinkles difficult to detect in images acquired in uncontrolled imaging settings. In this dissertation, we examine the image properties of wrinkles, i.e. intensity gradients and geometric properties, and use them for several applications: low-level image processing for automatic detection/localization of wrinkles, soft biometrics, and removal of wrinkles using digital inpainting.

    First, we present results of detection/localization of wrinkles in images using a Marked Point Process (MPP). Wrinkles are modeled as sequences of line segments in a Bayesian framework which incorporates a prior probability model based on the likely geometric properties of wrinkles and a data likelihood term based on image intensity gradients. Wrinkles are localized by sampling the posterior probability using a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm. We also present an evaluation algorithm to quantitatively evaluate the detection and false alarm rates of our algorithm, and conduct experiments with images taken in uncontrolled settings. The MPP model, despite its promising localization results, requires a large number of RJMCMC iterations to reach the global minimum, resulting in considerable computation time. This motivated us to adopt a deterministic approach based on image morphology for fast localization of facial wrinkles. We propose image features based on Gabor filter banks to highlight subtle curvilinear discontinuities in skin texture caused by wrinkles. Image morphology is then used to incorporate geometric constraints that localize the curvilinear shapes of wrinkles at image sites with large Gabor filter responses. We conduct experiments on two sets of low- and high-resolution images to demonstrate faster and visually better localization results compared to those obtained by MPP modeling.

    As a next application, we investigate user-drawn and automatically detected wrinkles for their discriminative power as a soft biometric, recognizing subjects from their wrinkle patterns alone. A set of facial wrinkles from an image is treated as a curve pattern and used for subject recognition. Given the wrinkle patterns from query and gallery images, several distance measures are calculated between the two patterns to quantify their similarity. This is done by finding possible correspondences between curves from the two patterns using a simple bipartite graph matching algorithm, and then computing similarity metrics based on the Hausdorff distance and the curve-to-curve correspondences. We conduct experiments on data sets of both hand-drawn and automatically detected wrinkles.
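    To make the Gabor-plus-morphology step concrete, the Python/OpenCV sketch below takes the maximum response over a small bank of oriented Gabor filters, thresholds it, and then keeps only responses that survive morphological opening with elongated structuring elements as a crude curvilinearity constraint. The filter parameters, the threshold, and the structuring elements are placeholder choices, not the dissertation's settings.

        import cv2
        import numpy as np

        def localize_wrinkles(gray, n_orient=8, thresh_quantile=0.95):
            """Highlight curvilinear skin discontinuities: take the maximum response over a
            bank of oriented Gabor filters, threshold it, then keep responses that survive
            morphological opening with elongated structuring elements.
            All parameter values here are illustrative placeholders."""
            img = gray.astype(np.float32) / 255.0
            response = np.zeros_like(img)

            # Bank of oriented Gabor filters; keep the strongest response per pixel.
            for i in range(n_orient):
                theta = i * np.pi / n_orient
                kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                            lambd=10.0, gamma=0.5, psi=0)
                response = np.maximum(response, cv2.filter2D(img, cv2.CV_32F, kernel))

            # Keep only the strongest responses.
            mask = (response > np.quantile(response, thresh_quantile)).astype(np.uint8)

            # Geometric constraint (crudely): keep pixels that survive an opening with a
            # thin, elongated structuring element; only two orientations shown for brevity.
            kept = np.zeros_like(mask)
            for se_size in [(25, 1), (1, 25)]:
                se = cv2.getStructuringElement(cv2.MORPH_RECT, se_size)
                kept |= cv2.morphologyEx(mask, cv2.MORPH_OPEN, se)
            return kept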
    Finally, we apply digital inpainting to automatically remove wrinkles from facial images. Digital image inpainting refers to filling in holes of arbitrary shape in an image so that the filled regions appear to be part of the original image. Inpainting methods target either the structure or the texture of an image, or both. Existing inpainting methods have two limitations for the removal of wrinkles. First, the differing attributes of structure and texture require different inpainting methods, and facial wrinkles do not fall strictly under either category; they can be considered as somewhere in between. Second, almost all image inpainting techniques are supervised, i.e. the area/gap to be filled is provided through user interaction, and the algorithm then attempts to find suitable image content to fill it automatically. We present an unsupervised image inpainting method in which facial regions containing wrinkles are detected automatically using their characteristic intensity gradients and removed by filling those regions with the surrounding skin texture.
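    As a minimal sketch of the unsupervised pipeline, the code below chains an automatically computed wrinkle mask (for instance from a Gabor-based detector like the one sketched above) into OpenCV's off-the-shelf inpainting. cv2.inpaint with the Telea method is a stand-in for the dissertation's own texture-aware filling, and the dilation radius is an arbitrary choice.

        import cv2
        import numpy as np

        def remove_wrinkles(image_bgr, wrinkle_mask, dilate_px=3):
            """Unsupervised wrinkle removal, sketched: dilate the automatically detected
            wrinkle mask slightly, then fill the masked pixels from the surrounding skin.
            cv2.inpaint (Telea) stands in here for a texture-aware filling method."""
            se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                           (2 * dilate_px + 1, 2 * dilate_px + 1))
            mask = cv2.dilate((wrinkle_mask > 0).astype(np.uint8) * 255, se)
            return cv2.inpaint(image_bgr, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

    A typical call would be remove_wrinkles(img, localize_wrinkles(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))), with both functions taken from these sketches rather than from the dissertation.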