    Combined Structure and Texture Image Inpainting Algorithm for Natural Scene Image Completion

    Image inpainting or image completion refers to the task of filling in the missing or damaged regions of an image in a visually plausible way. Many works on this subject have been proposed in recent years. We present a hybrid method for completion of images of natural scenery, where the removal of a foreground object creates a hole in the image. The basic idea is to decompose the original image into a structure image and a texture image. Reconstruction of each image is performed separately. The missing information in the structure component is reconstructed using a structure inpainting algorithm, while the texture component is repaired by an improved exemplar-based texture synthesis technique. Taking advantage of both structure inpainting methods and texture synthesis techniques, we designed an effective image reconstruction method. A comparison with some existing methods on different natural images shows the merits of our proposed approach in providing high-quality inpainted images.
    Keywords: Image inpainting, Decomposition method, Structure inpainting, Exemplar based, Texture synthesis
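    As an illustration of the decomposition-and-recombination idea described above, the following is a minimal Python sketch for grayscale images stored as NumPy arrays with a boolean hole mask. The Gaussian low-pass decomposition, the diffusion-based structure fill, and the random texture fill are simplified stand-ins for the paper's actual structure inpainting and exemplar-based synthesis; all function names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, sigma=5.0):
    """Split a grayscale image into a smooth structure layer and a
    residual texture layer (a Gaussian low-pass stands in for the
    decomposition method used in the paper)."""
    structure = gaussian_filter(image, sigma)
    return structure, image - structure

def inpaint_structure(structure, mask, iters=2000):
    """Fill the hole in the structure layer by isotropic diffusion:
    masked pixels are repeatedly replaced by their 4-neighbour mean."""
    out = structure.copy()
    out[mask] = out[~mask].mean()  # neutral initialisation inside the hole
    for _ in range(iters):
        nbrs = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                       + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = nbrs[mask]
    return out

def fill_texture(texture, mask, rng=None):
    """Crude stand-in for exemplar-based synthesis: masked texture
    pixels are sampled at random from the known texture pixels."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = texture.copy()
    out[mask] = rng.choice(texture[~mask], size=int(mask.sum()))
    return out

def complete(image, mask):
    """Hole filling by reconstructing the two layers separately."""
    structure, texture = decompose(image.astype(float))
    return inpaint_structure(structure, mask) + fill_texture(texture, mask)
```

    With a real exemplar-based filler in place of fill_texture, this mirrors the pipeline the abstract describes: the structure step propagates low-frequency geometry into the hole, while the texture step restores the high-frequency detail.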

    A survey of exemplar-based texture synthesis

    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size which are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, random sampling conditioned on this signature produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure, which stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their drawbacks. The recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in the feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
    Comment: v2: Added comments and typo fixes. New section added to describe FRAME. New method presented: CNNMRF
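    As a concrete illustration of the patch re-arrangement ("copy-paste") family the survey describes, below is a minimal Python sketch for grayscale NumPy images. It grows the output in raster order and pastes the candidate patch whose overlap with the already-synthesised region has the lowest SSD. The seam-cut and refinement steps of full quilting-style methods, as well as all statistics-based machinery, are omitted, and the parameter choices are illustrative.

```python
import numpy as np

def synthesize(sample, out_shape, patch=32, overlap=8, stride=4, seed=0):
    """Patch re-arrangement texture synthesis (quilting without the
    seam cut): grow the output in raster order, pasting the candidate
    patch whose already-synthesised overlap region matches best (SSD)."""
    rng = np.random.default_rng(seed)
    H, W = sample.shape
    step = patch - overlap
    # Candidate patches taken from the sample on a coarse grid
    # (exhaustive extraction also works, at a higher memory cost).
    cands = np.array([sample[i:i + patch, j:j + patch]
                      for i in range(0, H - patch + 1, stride)
                      for j in range(0, W - patch + 1, stride)], dtype=float)
    out = np.zeros(out_shape)
    for y in range(0, out_shape[0] - patch + 1, step):
        for x in range(0, out_shape[1] - patch + 1, step):
            if y == 0 and x == 0:
                out[:patch, :patch] = cands[rng.integers(len(cands))]
                continue
            region = out[y:y + patch, x:x + patch]
            mask = np.zeros((patch, patch), dtype=bool)
            if y > 0:
                mask[:overlap, :] = True   # overlap with the row above
            if x > 0:
                mask[:, :overlap] = True   # overlap with the patch to the left
            errs = (((cands - region) ** 2) * mask).sum(axis=(1, 2))
            out[y:y + patch, x:x + patch] = cands[np.argmin(errs)]
    return out
```

    A call such as synthesize(sample, (256, 256)) would produce a larger texture from a small grayscale sample; a statistics-based method would instead match a statistical signature (e.g., filter response statistics) rather than copying patches.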

    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis


    Hybrid image representation methods for automatic image annotation: a survey

    In most automatic image annotation systems, images are represented with low-level features using either global methods or local methods. In global methods, the entire image is used as a unit. Local methods divide images either into blocks, where fixed-size sub-image blocks are adopted as sub-units, or into regions, where segmented regions serve as sub-units. In contrast to typical automatic image annotation methods that use either global or local features exclusively, several recent methods incorporate both kinds of information, on the premise that combining the two levels of features is beneficial in annotating images. In this paper, we provide a survey of automatic image annotation techniques from one perspective, feature extraction, and, to complement existing surveys in the literature, we focus on the emerging hybrid methods that combine both global and local features for image representation.
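    To make the global/local distinction concrete, here is a minimal Python sketch of a hybrid representation: a single feature vector built by concatenating one descriptor computed on the whole image with descriptors computed on a fixed grid of blocks. Plain intensity histograms are used as placeholders for whatever low-level features (color, texture, shape, local descriptors) an actual annotation system would extract; the names and parameters are illustrative.

```python
import numpy as np

def global_feature(img, bins=16):
    """Global descriptor: one intensity histogram over the whole image."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    return h

def local_features(img, grid=4, bins=16):
    """Local descriptors: one histogram per block of a grid x grid
    partition of the image (fixed-size blocks as sub-units)."""
    H, W = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = img[i * H // grid:(i + 1) * H // grid,
                        j * W // grid:(j + 1) * W // grid]
            h, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
            feats.append(h)
    return np.concatenate(feats)

def hybrid_representation(img):
    """Hybrid representation: global and local descriptors concatenated
    into the single feature vector handed to the annotation model."""
    return np.concatenate([global_feature(img), local_features(img)])
```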

    Fitting a 3D Morphable Model to Edges: A Comparison Between Hard and Soft Correspondences

    We propose a fully automatic method for fitting a 3D morphable model to single face images in arbitrary pose and lighting. Our approach relies on geometric features (edges and landmarks) and, inspired by the iterated closest point algorithm, is based on computing hard correspondences between model vertices and edge pixels. We demonstrate that this is superior to previous work that uses soft correspondences to form an edge-derived cost surface that is minimised by nonlinear optimisation.
    Comment: To appear in ACCV 2016 Workshop on Facial Informatics
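    The hard-correspondence idea is easy to sketch: project the model vertices, pair each projection with its single nearest detected edge pixel, update the parameters, and repeat, exactly as in ICP. The Python sketch below uses a 2D similarity transform as a stand-in for updating the actual morphable-model shape and camera parameters, and the outlier gate is an illustrative choice; it is not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def hard_correspondences(points, edge_pixels):
    """ICP-style hard assignment: pair every projected model vertex
    with its single nearest detected edge pixel."""
    dist, idx = cKDTree(edge_pixels).query(points)
    return edge_pixels[idx], dist

def similarity_update(src, dst):
    """Least-squares 2D similarity (scale, rotation, translation)
    mapping src onto dst - a stand-in for updating the full 3DMM shape
    and camera parameters (reflection handling omitted)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)
    R = U @ Vt
    scale = S.sum() / (s ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t

def fit(projected_vertices, edge_pixels, iters=20):
    """Alternate hard correspondence and alignment, ICP style."""
    pts = projected_vertices.astype(float).copy()
    for _ in range(iters):
        targets, dist = hard_correspondences(pts, edge_pixels)
        keep = dist < 2 * np.median(dist)   # drop gross mismatches
        scale, R, t = similarity_update(pts[keep], targets[keep])
        pts = scale * (pts @ R.T) + t
    return pts
```

    The soft-correspondence alternative discussed in the abstract would instead blend all nearby edge pixels into a smooth cost surface and minimise it by nonlinear optimisation, rather than committing to one nearest neighbour per vertex at each iteration.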

    Example based texture synthesis and quantification of texture quality

    Textures have been used effectively to create realistic environments for virtual worlds by reproducing surface appearances. One of the widely used methods for creating textures is example-based texture synthesis. In this method, an input image from the real world is provided and used as the basis for generating a texture of arbitrary size. Various methods based on the underlying pattern of the image have been used to create such textures; however, finding an algorithm that produces a good output is still an open research issue. Moreover, determining the best of the outputs produced by the existing methods is a subjective process and requires human intervention: no quantification measure exists for a relative comparison between the outputs. This dissertation addresses both problems using a novel approach, and also proposes an improved image inpainting algorithm which yields better results than existing methods.
    Firstly, this dissertation presents a methodology which uses the HSI (hue, saturation, intensity) color model in conjunction with a hybrid approach to improve the quality of the synthesized texture. Unlike the RGB (red, green, blue) color model, the HSI color model is more intuitive and closer to human perception: hue, saturation and intensity are better indicators than the three color channels of the RGB model, as they represent the way the eye sees color in the real world.
    Secondly, this dissertation addresses the issue of quantifying the quality of the textures generated by the various synthesis methods. Quantifying output quality is an important issue, and a novel method using statistical measures and a color autocorrelogram is proposed. It is a two-step method: in the first step, measures such as energy and entropy help determine the consistency of the output texture; in the second step, an autocorrelogram is used to analyze color images and quantify them effectively.
    Finally, this dissertation presents a method for improving image inpainting. In inpainting, small sections of an image missing due to noise or similar causes can be reproduced using example-based texture synthesis, with the region immediately surrounding the missing section treated as the sample input. Inpainting can also be used to alter images by removing large sections and filling the removed section with image data from the rest of the image. For this, a maximum edge detector method is proposed to determine the correct order of section filling, producing significantly better results.
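    A minimal Python sketch of the two-step quality measure outlined above is given below, assuming 8-bit images held in NumPy arrays. The histogram bin count, the set of distances, the colour quantisation, and the restriction to axis-aligned offsets are all illustrative simplifications, not the dissertation's exact formulation.

```python
import numpy as np

def histogram_statistics(gray, bins=32):
    """Step 1: energy and entropy of the intensity histogram, used as a
    coarse consistency check on the synthesised texture."""
    counts, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    energy = float(np.sum(p ** 2))
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return energy, entropy

def color_autocorrelogram(img, levels=8, distances=(1, 3, 5)):
    """Step 2: for each quantised colour and each distance d, the
    probability that a pixel at distance d along the image axes has the
    same quantised colour (axis-aligned offsets only, for brevity)."""
    q = (np.asarray(img) // (256 // levels)).astype(int)
    if q.ndim == 3:  # fold the three channel indices into one colour index
        q = (q[..., 0] * levels + q[..., 1]) * levels + q[..., 2]
    n_colors = int(q.max()) + 1
    gram = np.zeros((len(distances), n_colors))
    counts = np.zeros((len(distances), n_colors))
    for di, d in enumerate(distances):
        for dy, dx in ((0, d), (d, 0)):
            a = q[:q.shape[0] - dy, :q.shape[1] - dx]
            b = q[dy:, dx:]
            same = a == b
            counts[di] += np.bincount(a.ravel(), minlength=n_colors)
            gram[di] += np.bincount(a[same], minlength=n_colors)
    return gram / np.maximum(counts, 1)
```

    Comparing these statistics and the autocorrelogram of a synthesised texture against those of the input sample gives a relative, human-free score along the lines the abstract describes.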