    Characterization and adaptive texture synthesis-based compression scheme

    No full text
    This paper presents an adaptive texture synthesis-based compression scheme in which textured regions are detected and removed at the encoder side, allowing the decoder to fill them by texture synthesis. The detection relies on locally adaptive resolution segmentation. As results from synthesis algorithms show, they need to be parameterized according to the patterns to be synthesized. In this framework, the synthesizer derives its parameters from DCT feature-based texture descriptors. An adaptive pixel-based algorithm is used, relying on the comparison between the current pixel's neighborhood and those in an atypically shaped sample. Different neighborhood sizes are considered to better match texture patterns. The framework has been validated within an H.264/AVC video codec. Experimental results show significant bit-rate savings at similar visual quality.
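
    The core of such pixel-based synthesis is neighborhood matching: each missing pixel is filled with the sample pixel whose surrounding patch best matches the known part of the target neighborhood. Below is a minimal sketch of that matching step (function name and parameters are illustrative assumptions, not the paper's DCT-parameterized synthesizer).

```python
import numpy as np

def synthesize_pixel(sample, out, mask, y, x, half=2):
    """Fill out[y, x] by matching its known neighborhood against the sample.

    Minimal pixel-based sketch: `sample` is the grayscale exemplar, `out`
    the image being filled, `mask` marks pixels that are already known.
    """
    h, w = sample.shape
    best_err, best_val = np.inf, sample[half, half]
    # Neighborhood of the target pixel, restricted to already-known pixels.
    patch = out[y - half:y + half + 1, x - half:x + half + 1]
    known = mask[y - half:y + half + 1, x - half:x + half + 1]
    for sy in range(half, h - half):
        for sx in range(half, w - half):
            cand = sample[sy - half:sy + half + 1, sx - half:sx + half + 1]
            err = np.sum(known * (cand - patch) ** 2)
            if err < best_err:
                best_err, best_val = err, sample[sy, sx]
    return best_val
```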

    Perspective-aware texture analysis and synthesis

    Get PDF
    This paper presents a novel texture synthesis scheme for anisotropic 2D textures based on perspective feature analysis and energy optimization. Given an example texture, the synthesis process starts by analyzing the texel (TEXture ELement) scale variations to obtain the perspective map (scale map). A feature mask and simple user-assisted scale extraction operations, including slant and tilt angle assignment and scale value editing, are applied. The scale map represents the global variations of the texel scales in the sample texture. We then extend 2D texture optimization techniques to synthesize these kinds of perspectively featured textures. The non-parametric texture optimization approach is integrated with histogram matching, which forces the global statistics of the texel scale variations of the synthesized texture to match those of the example. We also demonstrate that our method is well suited for image completion of a perspectively featured texture region in a digital photo.
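
    The histogram-matching component mentioned above is a standard CDF-based remapping; the paper applies it to texel-scale values inside the optimization loop. A generic sketch of that remapping (helper name is an assumption):

```python
import numpy as np

def match_histogram(values, reference):
    """Remap `values` so their empirical distribution matches `reference`.

    Standard CDF-based histogram matching: the i-th smallest input value
    is replaced by the corresponding quantile of the reference sample.
    """
    order = np.argsort(values)
    ref_sorted = np.sort(reference)
    quantiles = np.linspace(0.0, 1.0, len(values))
    matched = np.empty(len(values), dtype=float)
    matched[order] = np.interp(quantiles,
                               np.linspace(0.0, 1.0, len(reference)),
                               ref_sorted)
    return matched
```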

    A survey of exemplar-based texture synthesis

    Full text link
    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, a random sampling conditioned on this signature produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure, which stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their hurdles. The recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in the feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
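
    A common statistical signature in the CNN-based methods the survey covers is the Gram matrix of feature-map channels, which keeps channel correlations and discards spatial layout. A minimal sketch of that signature (not tied to any specific method in the survey):

```python
import numpy as np

def gram_matrix(features):
    """Gram-matrix texture signature of an (H, W, C) feature map.

    Channel-by-channel correlations averaged over all spatial positions;
    two textures with similar Gram matrices share second-order statistics.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)
```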

    Solid Texture Synthesis using Generative Adversarial Networks

    Full text link
    Solid texture synthesis, as an effective way to extend a 2D exemplar to a volumetric texture, exhibits advantages in numerous application domains. However, existing methods generally suffer from synthesis distortion due to the under-utilization of information. In this paper, we propose a novel approach to solid texture synthesis based on generative adversarial networks (GANs), named STS-GAN, which learns the distribution of 2D exemplars with volumetric operations in a feature-free manner. Multi-scale discriminators evaluate the similarities between exemplar patches and slices from the generated volume, encouraging the generator to synthesize realistic solid texture. Experimental results demonstrate that the proposed method can synthesize high-quality solid textures with visual characteristics similar to the exemplar.
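
    The key structural idea is that 2D discriminators only ever see axis-aligned slices of the synthesized 3D volume. A small sketch of that slicing step, assuming random axis and position selection (the exact sampling scheme is not specified in the abstract):

```python
import numpy as np

def random_slices(volume, n=8, rng=None):
    """Draw n random axis-aligned 2D slices from a (D, H, W) volume.

    Illustrates the slice extraction that feeds 2D discriminators when
    training against 2D exemplar patches; details here are assumptions.
    """
    rng = rng or np.random.default_rng()
    slices = []
    for _ in range(n):
        axis = int(rng.integers(3))
        idx = int(rng.integers(volume.shape[axis]))
        slices.append(np.take(volume, idx, axis=axis))
    return slices
```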

    On Using Backpropagation for Speech Texture Generation and Voice Conversion

    Full text link
    Inspired by recent work on neural network image generation that relies on backpropagation towards the network inputs, we present a proof-of-concept system for speech texture synthesis and voice conversion based on two mechanisms: approximate inversion of the representation learned by a speech recognition neural network, and matching statistics of neuron activations between different source and target utterances. Similar to image texture synthesis and neural style transfer, the system works by optimizing a cost function with respect to the input waveform samples. To this end we use a differentiable mel-filterbank feature extraction pipeline and train a convolutional CTC speech recognition network. Our system is able to extract speaker characteristics from very limited amounts of target speaker data, as little as a few seconds, and can be used to generate realistic speech babble or reconstruct an utterance in a different voice.
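
    The activation-statistics-matching mechanism amounts to a loss over per-neuron statistics of the recognition network's activations, which is then backpropagated to the waveform samples. A sketch of such a loss, assuming time-averaged activations are the matched statistic (the exact statistics used by the system are not stated in the abstract):

```python
import numpy as np

def activation_matching_loss(acts_x, acts_target):
    """Match per-neuron activation statistics between two utterances.

    `acts_x` / `acts_target` are lists of (time, channels) activation
    arrays, one per network layer, for the optimized waveform and the
    target-speaker utterance. The time-averaged activation of each neuron
    is compared; gradients of this loss would flow back to the waveform.
    """
    loss = 0.0
    for a, b in zip(acts_x, acts_target):
        loss += np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2)
    return loss
```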

    Optimization for automated assembly of puzzles

    Get PDF
    The puzzle assembly problem has many application areas, such as the restoration and reconstruction of archaeological findings, repairing broken objects, solving jigsaw-type puzzles, and the molecular docking problem. The puzzle pieces usually include not only geometrical shape information but also visual information such as texture, color, and continuity of lines. This paper presents a new approach to the puzzle assembly problem based on textural features and geometrical constraints. The texture of a band outside the border of each piece is predicted by inpainting and texture synthesis methods. Feature values are derived from these original and predicted images of the pieces. An affinity measure between corresponding pieces is defined, and the alignment of the puzzle pieces is formulated as an optimization problem in which the optimal assembly of the pieces is achieved by maximizing the total affinity measure. An FFT-based image registration technique is used to speed up the alignment of the pieces. Experimental results are presented on real and artificial data sets.
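
    FFT-based registration of two pieces is typically done by phase correlation: the relative translation appears as a peak in the inverse transform of the normalized cross-power spectrum. A minimal sketch of that idea (grayscale images, translation only; the paper's exact variant may differ):

```python
import numpy as np

def fft_translation(a, b):
    """Estimate the relative translation between images `a` and `b`
    by phase correlation. Rotation and sub-pixel refinement are ignored."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets larger than half the image size to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```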

    Texture Segmentation Using Optimal Gabor Filter

    Get PDF
    Texture segmentation is one of the most important features used in practical diagnosis because it can reveal the changing tendency of an image. A texture segmentation method based on Gabor filters is proposed in this project. The method combines location, color, and texture features into a weight, which yields satisfactory segmentation according to the texture of the image. Experiments show that the overall correctness rate of this method exceeds 81%.
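
    Gabor-based texture features come from filtering the image with oriented, frequency-tuned kernels and using the response magnitudes per pixel. A generic sketch of building one such kernel (parameter values are illustrative; the project's "optimal" filter selection is not described in the abstract):

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5):
    """Real part of a Gabor kernel with orientation `theta` (radians),
    wavelength `lam`, Gaussian envelope `sigma`, and aspect ratio `gamma`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)
```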