
    Scalable Data Parallel Algorithms for Texture Synthesis and Compression using Gibbs Random Fields

    This paper introduces scalable data parallel algorithms for image processing. Focusing on Gibbs and Markov Random Field model representations for textures, we present parallel algorithms for texture synthesis, compression, and maximum likelihood parameter estimation, currently implemented on the Thinking Machines CM-2 and CM-5. The use of fine-grained, data parallel processing techniques yields real-time algorithms for texture synthesis and compression that are substantially faster than previously known sequential implementations. Although the current implementations are on Connection Machines, the methodology presented here enables machine-independent, scalable algorithms for a number of problems in image processing and analysis. (Also cross-referenced as UMIACS-TR-93-80.)
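
    The abstract gives no implementation details; as a rough, hedged illustration of Gibbs sampling over a Markov Random Field for texture synthesis, the sketch below resamples each pixel from its conditional distribution given its four neighbors under a simple Potts-style energy. The grid size, number of gray levels, coupling strength beta, and sweep count are illustrative assumptions, and this is not the paper's CM-2/CM-5 data parallel implementation.

    # Minimal sketch: sequential Gibbs sampling of a Potts-style Markov Random Field.
    # All parameter values are illustrative assumptions, not taken from the paper.
    import numpy as np

    def gibbs_texture(height=64, width=64, levels=4, beta=1.0, sweeps=30, seed=0):
        rng = np.random.default_rng(seed)
        img = rng.integers(0, levels, size=(height, width))
        for _ in range(sweeps):
            for y in range(height):
                for x in range(width):
                    # 4-connected neighborhood with toroidal wrap-around
                    neighbors = [img[(y - 1) % height, x], img[(y + 1) % height, x],
                                 img[y, (x - 1) % width], img[y, (x + 1) % width]]
                    # Potts energy: lower (more likely) when the pixel agrees with its neighbors
                    energies = np.array([-beta * sum(int(n == k) for n in neighbors)
                                         for k in range(levels)], dtype=float)
                    probs = np.exp(-energies)
                    probs /= probs.sum()
                    img[y, x] = rng.choice(levels, p=probs)
        return img

    if __name__ == "__main__":
        texture = gibbs_texture()
        print(texture.shape, texture.min(), texture.max())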

    A Generative Model of Natural Texture Surrogates

    Natural images can be viewed as patchworks of different textures, where the local image statistics are roughly stationary within a small neighborhood but otherwise vary from region to region. To model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64×64-pixel image patches in a large database of natural images, so that each image patch is described by 655 texture parameters which specify certain statistics, such as variances and covariances of wavelet coefficients or coefficient magnitudes within that patch. To model the statistics of these texture parameters, we then developed suitable nonlinear transformations of the parameters that allowed us to fit their joint statistics with a multivariate Gaussian distribution. We find that the first 200 principal components contain more than 99% of the variance and are sufficient to generate textures that are perceptually extremely close to those generated with all 655 components. We demonstrate the usefulness of the model in several ways: (1) We sample ensembles of texture patches that can be directly compared to samples of patches from the natural image database and largely reproduce their perceptual appearance. (2) We further developed an image compression algorithm which generates surprisingly accurate images at bit rates as low as 0.14 bits/pixel. (3) Finally, we demonstrate how our approach can be used for an efficient and objective evaluation of samples generated with probabilistic models of natural images. (34 pages, 9 figures)
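
    As a minimal sketch of the modeling step described above (not the authors' released code), the snippet below assumes a matrix of already nonlinearly transformed texture parameter vectors, fits a multivariate Gaussian in a truncated PCA space, and samples new parameter vectors from it. The Portilla-Simoncelli parameter extraction and the synthesis of textures from sampled parameters are assumed to happen elsewhere; the placeholder data and array shapes are assumptions.

    # Minimal sketch: Gaussian model of texture parameters in a truncated PCA space.
    import numpy as np

    def fit_gaussian_pca(params, n_components=200):
        """params: (n_patches, 655) matrix of transformed texture parameters."""
        mean = params.mean(axis=0)
        centered = params - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        components = vt[:n_components]            # leading principal directions
        scores = centered @ components.T          # project patches onto them
        cov = np.cov(scores, rowvar=False)        # Gaussian covariance in PC space
        return mean, components, cov

    def sample_parameters(mean, components, cov, n_samples=10, seed=0):
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=n_samples)
        return z @ components + mean              # map back to the 655-dim parameter space

    if __name__ == "__main__":
        fake_params = np.random.default_rng(1).normal(size=(500, 655))  # placeholder data
        mean, comps, cov = fit_gaussian_pca(fake_params)
        print(sample_parameters(mean, comps, cov).shape)  # (10, 655)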

    The World vs. SCOTT: Synthesis of COncealment Two-level Texture

    We propose an original method for the Synthesis of COncealment Two-level Texture (SCOTT). SCOTT was designed around the Human Visual System so that the concealment texture is faithful, in terms of forms and colors, to the visual environment in which it will be placed. Simulation results show that the concealment texture is effective even though it is made of simple forms and only a few colors. Although SCOTT was initially designed to reduce the visual pollution caused by man-made equipment (antennas, electrical cabinets, distributor boxes, repeater shelters, etc.), it may be used in many other applications, such as inpainting and even image compression.
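
    SCOTT itself is not specified in the abstract; as one conceivable building block for a concealment texture made of "only a few colors", the sketch below extracts a small dominant-color palette from an image of the surrounding environment with a naive k-means. This is purely illustrative and not the SCOTT method; the palette size and placeholder data are assumptions.

    # Minimal sketch: dominant-color palette extraction via naive k-means.
    import numpy as np

    def dominant_colors(image, n_colors=4, iterations=20, seed=0):
        """image: (H, W, 3) array; returns an (n_colors, 3) palette."""
        rng = np.random.default_rng(seed)
        pixels = image.reshape(-1, 3).astype(float)
        centers = pixels[rng.choice(len(pixels), n_colors, replace=False)]
        for _ in range(iterations):
            # Assign each pixel to its nearest palette color, then recompute the centers
            dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            for k in range(n_colors):
                members = pixels[labels == k]
                if len(members) > 0:
                    centers[k] = members.mean(axis=0)
        return centers

    if __name__ == "__main__":
        env = np.random.default_rng(1).uniform(0, 255, size=(60, 80, 3))  # placeholder image
        print(dominant_colors(env).shape)  # (4, 3)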

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized via depth-image-based rendering (DIBR), using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints. To maintain high quality in the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene that is visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels that are visible in both views are coded more error-resiliently in one view only, given that adaptive blending will erase errors in the other view. Further, the sensitivity of synthesized-view distortion to texture versus depth errors is analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
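
    As a minimal sketch of the receiver-side adaptive blending idea (not the paper's implementation), the snippet below combines two texture images already warped to the virtual viewpoint pixel by pixel, with weights derived from per-pixel reliability so that the more reliably received view dominates. The reliability maps, image sizes, and placeholder data are assumptions; the DIBR warping itself and the sender-side RPS optimization are not shown.

    # Minimal sketch: reliability-weighted blending of two warped views.
    import numpy as np

    def blend_views(warped_left, warped_right, rel_left, rel_right, eps=1e-6):
        """rel_* are per-pixel reliability weights in [0, 1]."""
        w_l = rel_left[..., None] if warped_left.ndim == 3 else rel_left
        w_r = rel_right[..., None] if warped_right.ndim == 3 else rel_right
        total = w_l + w_r + eps
        return (w_l * warped_left + w_r * warped_right) / total

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        left = rng.uniform(0, 255, size=(120, 160, 3))   # placeholder warped views
        right = rng.uniform(0, 255, size=(120, 160, 3))
        rel_l = np.ones((120, 160))
        rel_r = np.ones((120, 160))
        rel_r[40:80, :] = 0.1    # e.g. packet loss corrupted a band of blocks in the right view
        print(blend_views(left, right, rel_l, rel_r).shape)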

    Efficient Depth Map Compression Exploiting Correlation with Texture Data in Multiresolution Predictive Image Coders

    New 3D applications such as 3DTV and FVV require not only a large amount of data but also high-quality visual rendering. Based on one or several depth maps, intermediate views can be synthesized using a depth-image-based rendering technique. Many compression schemes have been proposed for texture-plus-depth data, but exploiting the correlation between the two representations to enhance compression performance is still an open research issue. In this paper, we present a novel compression scheme that aims at improving depth coding through a joint depth/texture coding scheme. The method is an extension of the LAR (Locally Adaptive Resolution) codec, initially designed for 2D images. The LAR coding framework provides functionalities such as lossy/lossless compression, low complexity, resolution and quality scalability, and quality control. Experimental results address both lossless and lossy compression, in comparison with state-of-the-art techniques in the two domains (JPEG-LS, JPEG-XR). Subjective results on intermediate view synthesis after depth map coding show that the proposed method significantly improves visual quality.
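
    The abstract does not detail the joint scheme, but the general principle of exploiting texture/depth correlation can be illustrated by reusing a block partition derived from texture activity to code the co-located depth blocks, as in the sketch below. This is not the LAR codec; the quadtree splitting rule, variance threshold, and block sizes are assumptions made purely for illustration.

    # Minimal sketch: a texture-driven quadtree partition reused for flat depth coding.
    import numpy as np

    def quadtree_partition(texture, x, y, size, threshold, min_size, blocks):
        block = texture[y:y + size, x:x + size]
        if size > min_size and block.var() > threshold:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    quadtree_partition(texture, x + dx, y + dy, half,
                                       threshold, min_size, blocks)
        else:
            blocks.append((x, y, size))

    def code_depth_with_texture_partition(texture, depth, threshold=200.0, min_size=4):
        blocks = []
        quadtree_partition(texture, 0, 0, texture.shape[0], threshold, min_size, blocks)
        recon = np.zeros_like(depth, dtype=float)
        for x, y, s in blocks:                  # one mean depth value per block
            recon[y:y + s, x:x + s] = depth[y:y + s, x:x + s].mean()
        return recon, blocks

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        tex = rng.uniform(0, 255, size=(64, 64))   # placeholder square, power-of-two images
        dep = rng.uniform(0, 255, size=(64, 64))
        recon, blocks = code_depth_with_texture_partition(tex, dep)
        print(len(blocks), recon.shape)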