
    A survey of exemplar-based texture synthesis

    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; random sampling conditioned on this signature then produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure that stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their respective pitfalls. Recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
    Comment: v2: Added comments and typo fixes. New section added to describe FRAME. New method presented: CNNMR.
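    Since the abstract contrasts the two families only at a high level, the sketch below makes the patch re-arrangement ("copy-paste") idea concrete. It is a deliberately simplified illustration, not the algorithm of any particular paper: the output is grown block by block, and each block is the random candidate patch from the sample whose overlap with the already-synthesized region matches best (no seam optimization). All function names and parameters are assumptions made for this sketch.

# Minimal patch re-arrangement sketch (illustrative only, see note above).
import numpy as np

def synthesize(sample, out_size, patch=32, overlap=8, candidates=200, seed=None):
    """Grow an out_size x out_size texture from a 2-D grayscale sample array."""
    rng = np.random.default_rng(seed)
    sample = sample.astype(float)
    step = patch - overlap
    out = np.zeros((out_size, out_size), dtype=float)
    max_y = sample.shape[0] - patch
    max_x = sample.shape[1] - patch

    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            # Draw random candidate patches from the sample.
            ys = rng.integers(0, max_y + 1, size=candidates)
            xs = rng.integers(0, max_x + 1, size=candidates)
            best, best_err = None, np.inf
            for sy, sx in zip(ys, xs):
                cand = sample[sy:sy + patch, sx:sx + patch]
                err = 0.0
                if x > 0:  # match the left overlap against existing synthesis
                    err += np.sum((cand[:, :overlap] - out[y:y + patch, x:x + overlap]) ** 2)
                if y > 0:  # match the top overlap
                    err += np.sum((cand[:overlap, :] - out[y:y + overlap, x:x + patch]) ** 2)
                if err < best_err:
                    best, best_err = cand, err
            # Paste the best-matching patch: the "copy-paste" step.
            out[y:y + patch, x:x + patch] = best
    return out

    For example, synthesize(sample, 256) would produce a 256x256 texture from a grayscale sample supplied as a NumPy array; a statistics-based method would instead match a statistical signature rather than copy pixels directly.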

    Combined Structure and Texture Image Inpainting Algorithm for Natural Scene Image Completion

    Image inpainting, or image completion, refers to the task of filling in missing or damaged regions of an image in a visually plausible way. Many works on this subject have been proposed in recent years. We present a hybrid method for completing images of natural scenery, where the removal of a foreground object leaves a hole in the image. The basic idea is to decompose the original image into a structure image and a texture image and to reconstruct each one separately. The missing information in the structure component is reconstructed using a structure inpainting algorithm, while the texture component is repaired by an improved exemplar-based texture synthesis technique. By taking advantage of both structure inpainting methods and texture synthesis techniques, we obtain an effective image reconstruction method. A comparison with existing methods on different natural images shows the merits of the proposed approach in providing high-quality inpainted images. Keywords: image inpainting, decomposition method, structure inpainting, exemplar-based, texture synthesis.
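    To show the flow of such a decomposition pipeline, here is a toy sketch with loudly simplified stand-ins: Gaussian smoothing replaces a proper structure/texture decomposition, iterative neighbour averaging replaces a PDE-based structure inpainting step, and random patch copying replaces exemplar-based texture synthesis. It is not the authors' algorithm; every name and parameter is an assumption for illustration.

# Toy structure/texture decomposition inpainting sketch (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def complete(image, mask, sigma=3.0, iters=500, patch=9, seed=None):
    """image: 2-D float array; mask: boolean array, True where pixels are missing."""
    rng = np.random.default_rng(seed)

    # 1. Decompose into a smooth structure component and a texture residual.
    structure = gaussian_filter(image, sigma)
    texture = image - structure

    # 2. Repair the structure component by iterative neighbour averaging
    #    (a crude stand-in for a structure inpainting algorithm).
    s = structure.copy()
    s[mask] = structure[~mask].mean()
    for _ in range(iters):
        avg = 0.25 * (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                      np.roll(s, 1, 1) + np.roll(s, -1, 1))
        s[mask] = avg[mask]

    # 3. Repair the texture component by copying values from randomly chosen
    #    fully known patches (a stand-in for exemplar-based texture synthesis).
    t = texture.copy()
    t[mask] = 0.0
    h, w = image.shape
    for y, x in zip(*np.where(mask)):
        sy = int(rng.integers(0, h - patch))
        sx = int(rng.integers(0, w - patch))
        if not mask[sy:sy + patch, sx:sx + patch].any():
            t[y, x] = texture[sy + patch // 2, sx + patch // 2]

    # 4. Recombine the two repaired components into the completed image.
    return s + t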

    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis


    TiO2-doped resorcinol–formaldehyde (RF) polymer and carbon gels with photocatalytic activity

    Resorcinol–formaldehyde (RF) polymer gels offer a relatively easy and versatile route for incorporating metals into a carbon aerogel matrix. The hybrid materials thus obtained are ideal candidates for applications involving enhanced adsorption or catalysis. This paper presents a detailed study of Ti-doped RF and carbon aerogels. The metal was introduced into the system at three different stages of the preparation process: during polymerization, by impregnation of the RF gel, or by impregnation of the carbon gel. The structure and morphology of the samples are compared using low-temperature N2 adsorption, SEM, and small- and wide-angle X-ray scattering (SAXS/WAXS). The TiO2-doped carbon aerogels display photocatalytic activity in breaking down aromatic compounds.

    Hybrid image representation methods for automatic image annotation: a survey

    In most automatic image annotation systems, images are represented by low-level features using either global or local methods. Global methods treat the entire image as a single unit. Local methods divide the image into sub-units, either fixed-size blocks or segmented regions. In contrast to typical automatic image annotation methods that use global or local features exclusively, several recent methods incorporate both kinds of information, on the premise that combining the two levels of features is beneficial for annotating images. In this paper, we survey automatic image annotation techniques from the perspective of feature extraction and, to complement existing surveys in the literature, focus on the emerging hybrid methods that combine global and local features for image representation.
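    As a concrete illustration of a hybrid representation in the sense discussed above, the sketch below concatenates one global descriptor for the whole image with local descriptors computed on a fixed grid of blocks. The histogram features and the 4x4 grid are illustrative assumptions, not a method taken from the survey.

# Minimal hybrid (global + local) image descriptor sketch (illustrative only).
import numpy as np

def hybrid_descriptor(image, bins=16, grid=4):
    """image: 2-D grayscale array with values in [0, 255]."""
    def hist(region):
        h, _ = np.histogram(region, bins=bins, range=(0, 255))
        return h / max(h.sum(), 1)           # normalized intensity histogram

    feats = [hist(image)]                     # global feature: whole image
    bh, bw = image.shape[0] // grid, image.shape[1] // grid
    for i in range(grid):                     # local features: grid of blocks
        for j in range(grid):
            block = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats.append(hist(block))
    return np.concatenate(feats)              # global and local parts concatenated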