
    Image coding using redundant dictionaries

    This chapter discusses the problem of coding images using highly redundant libraries of waveforms, also referred to as dictionaries. We start with a discussion of the shortcomings of classical approaches based on orthonormal bases and show why redundant dictionaries provide an interesting alternative for image representation. We then introduce a special dictionary of 2-D primitives, called anisotropic refinement atoms, that are well suited for representing edge-dominated images. Using a simple greedy algorithm, we design an image coder that performs very well at low bit rates. We finally discuss its performance and particular features such as geometric adaptivity and rate scalability.
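
    As an illustration of the greedy atom selection the abstract describes, here is a minimal matching-pursuit sketch in Python over a generic overcomplete dictionary; the random unit-norm atoms stand in for the paper's anisotropic refinement atoms, which are not reproduced here.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = np.asarray(signal, dtype=float).copy()
    selected = []
    for _ in range(n_atoms):
        correlations = dictionary @ residual      # one inner product per atom
        k = int(np.argmax(np.abs(correlations)))
        c = float(correlations[k])
        residual -= c * dictionary[k]
        selected.append((k, c))                   # (atom index, coefficient)
    return selected, residual

# Toy usage: 256 random unit-norm atoms in R^64 (an overcomplete dictionary).
rng = np.random.default_rng(0)
D = rng.standard_normal((256, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 0.9 * D[3] + 0.4 * D[100]
atoms, res = matching_pursuit(x, D, n_atoms=2)
print(atoms, float(np.linalg.norm(res)))
```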

    Regularity scalable image coding based on wavelet singularity detection

    In this paper, we propose an adaptive algorithm for scalable wavelet image coding based on a general feature of images: their regularity. In pattern recognition and computer vision, the regularity of an image is estimated from the oriented wavelet coefficients and quantified by Lipschitz exponents. To estimate the Lipschitz exponents, evaluating the interscale evolution of the wavelet transform modulus sum (WTMS) over the directional cone of influence has been shown to be a better approach than tracing the wavelet transform modulus maxima (WTMM), because the irregular sampling of the WTMM complicates the reconstruction process. Moreover, examples exist showing that the WTMM representation cannot uniquely characterize a signal, which implies that reconstruction from the WTMM may not be consistently stable; the WTMM approach also requires considerably more computation. We therefore use the WTMS approach to estimate the regularity of images from the separable wavelet transform coefficients. Since localization is not a concern here, we allow decimation when evaluating the interscale evolution. The estimated regularity is then used in our adaptive regularity scalable wavelet image coding algorithm. The algorithm can be embedded into any wavelet image coder, so it is compatible with existing scalable coding techniques, such as resolution scalable and signal-to-noise ratio (SNR) scalable coding, without changing the bitstream format, while providing more scalability levels at higher peak signal-to-noise ratios (PSNRs) and lower bit rates. Compared with other feature-based wavelet scalable coding algorithms, the proposed algorithm is superior in terms of visual quality, computational complexity and coding efficiency.
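
    To make the interscale WTMS idea concrete, below is a hedged 1-D Python sketch that estimates a Lipschitz exponent from the decay of the modulus sum inside a fixed-width window tracking the singularity (a crude stand-in for the cone of influence); PyWavelets and its L2-normalized DWT are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np
import pywt  # PyWavelets is an assumed dependency, not from the paper

def lipschitz_from_wtms(signal, pos, wavelet="db2", levels=5, halfwidth=3):
    """Estimate the Lipschitz exponent of a singularity at sample `pos`
    from the interscale evolution of the wavelet transform modulus sum.

    With L2-normalized wavelets, coefficients in the cone of influence of
    a Lipschitz-alpha point scale like 2^(j*(alpha + 1/2)) as the level j
    grows coarser, so alpha follows from the slope of log2(modulus sum)
    against j. A fixed-width window around the singularity's coefficient
    index approximates the cone of influence.
    """
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=levels)
    details = coeffs[:0:-1]                      # reorder: finest level first
    log_sums = []
    for j, d in enumerate(details, start=1):
        k = pos >> j                             # index under the singularity
        lo, hi = max(k - halfwidth, 0), min(k + halfwidth + 1, len(d))
        log_sums.append(np.log2(np.sum(np.abs(d[lo:hi])) + 1e-12))
    slope = np.polyfit(np.arange(1, levels + 1), log_sums, 1)[0]
    return slope - 0.5                           # alpha = slope - 1/2

# A step edge is Lipschitz 0, so the estimate should be near zero.
x = np.zeros(1024)
x[512:] = 1.0
print(lipschitz_from_wtms(x, pos=512))
```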

    Strong edge features for image coding

    A two-component model is proposed for perceptual image coding. For the first component of the model, the watershed operator is used to detect strong edge features. An efficient morphological interpolation algorithm then reconstructs the smooth areas of the image from the extracted edge information, also known as sketch data. The residual component, containing the fine textures, is coded separately by a subband coding scheme. The morphological operators involved in coding the primary component perform very efficiently compared with conventional techniques such as the LoG (Laplacian-of-Gaussian) operator used for edge extraction, or the diffusion filters applied iteratively to interpolate smooth areas in previously reported sketch-based coding schemes.
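
    A rough Python sketch of the two-component split is given below, assuming scikit-image and SciPy; watershed lines on the gradient stand in for the strong-edge sketch data, and plain linear interpolation from the edge pixels stands in for the paper's morphological interpolation, which is not reproduced here.

```python
import numpy as np
from scipy.interpolate import griddata
from skimage import data, filters, segmentation

# Primary component: watershed lines of the gradient serve as strong edges.
img = data.camera()[::2, ::2].astype(float)   # downsampled to keep demo quick
grad = filters.sobel(img)
labels = segmentation.watershed(grad, markers=250, watershed_line=True)
edge_mask = labels == 0                        # pixels on the watershed lines

# Smooth component: interpolate the interior from values on the edges
# (linear scattered-data interpolation here, not morphological).
ys, xs = np.nonzero(edge_mask)
grid_y, grid_x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
smooth = griddata((ys, xs), img[ys, xs], (grid_y, grid_x),
                  method="linear", fill_value=float(img.mean()))

# Residual component: fine textures, to be coded by a subband scheme.
residual = img - smooth
print(float(edge_mask.mean()), float(np.abs(residual).mean()))
```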

    Extended Non-Binary Low-Density Parity-Check Codes over Erasure Channels

    Based on the extended binary image of non-binary LDPC codes, we propose a method for generating extra redundant bits so as to decrease the coding rate of a mother code. The proposed method allows the same decoder to be used regardless of how many extra redundant bits have been produced, which considerably increases the flexibility of the system without significantly increasing its complexity. The extended codes are also optimized for the binary erasure channel using density-evolution methods. Nevertheless, the results presented in this paper can easily be extrapolated to more general channel models.
    Comment: ISIT 2011, submitted.
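
    The rate-extension idea can be illustrated with a toy binary analogue: in the sketch below, hypothetical extra parity bits are appended as new rows and columns of a binary parity-check matrix, and a single peeling-style erasure decoder handles any extension. This does not reproduce the paper's non-binary extended-image construction; the degree-3 random checks are an illustrative assumption.

```python
import numpy as np

def extend_parity_checks(H, n_extra, rng):
    """Append n_extra redundant bits over GF(2): each new bit is the XOR
    of a random subset of existing code bits, so H gains n_extra rows and
    columns and the same erasure decoder runs on any extension. (A toy
    stand-in for the paper's extended-binary-image construction.)"""
    m, n = H.shape
    Hx = np.zeros((m + n_extra, n + n_extra), dtype=np.uint8)
    Hx[:m, :n] = H
    for i in range(n_extra):
        subset = rng.choice(n, size=3, replace=False)  # degree-3 extra checks
        Hx[m + i, subset] = 1
        Hx[m + i, n + i] = 1                           # the new redundant bit
    return Hx

def peel_erasures(H, word, erased):
    """Iterative peeling: any check with exactly one erased bit solves it."""
    word, erased = word.copy(), erased.copy()
    progress = True
    while progress and erased.any():
        progress = False
        for row in H:
            unknown = np.nonzero(row.astype(bool) & erased)[0]
            if len(unknown) == 1:
                known = np.nonzero(row.astype(bool) & ~erased)[0]
                word[unknown[0]] = int(word[known].sum()) % 2
                erased[unknown[0]] = False
                progress = True
    return word, erased

# All-zero word is a codeword of any linear code; erase 30% of the bits.
rng = np.random.default_rng(1)
H = (rng.random((10, 20)) < 0.2).astype(np.uint8)
Hx = extend_parity_checks(H, n_extra=6, rng=rng)
word = np.zeros(Hx.shape[1], dtype=np.uint8)
erased = rng.random(Hx.shape[1]) < 0.3
decoded, unresolved = peel_erasures(Hx, word, erased)
print("unresolved erasures:", int(unresolved.sum()))
```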

    ROI coding of volumetric medical images with application to visualisation

    Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric

    Compression-artifact removal is a challenging problem because videos can be compressed at very different qualities. In this paper, a least-squares approach that adapts itself to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected to optimally remove coding artifacts and reconstruct object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities and compared with several other deblocking techniques; the proposed method significantly outperforms them both objectively and subjectively.
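
    The select-then-filter loop the abstract describes might look like the following Python sketch; the simple boundary-jump blockiness score, the patch-wise least-squares training and the nearest-level lookup are all illustrative assumptions, not the paper's metric or training procedure.

```python
import numpy as np
from scipy.ndimage import correlate

def blockiness(img, block=8):
    """Crude no-reference blockiness score: mean absolute luminance jump
    across 8x8 block boundaries minus the mean jump elsewhere."""
    d = np.abs(np.diff(img.astype(float), axis=1))
    at_bounds = d[:, block - 1::block].mean()
    elsewhere = np.delete(d, np.s_[block - 1::block], axis=1).mean()
    return at_bounds - elsewhere

def train_ls_filter(degraded, clean, k=5):
    """Solve, in the least-squares sense, for the k*k kernel that best
    maps degraded neighborhoods onto the clean pixel at their center."""
    r = k // 2
    patches = np.lib.stride_tricks.sliding_window_view(
        degraded.astype(float), (k, k))
    A = patches.reshape(-1, k * k)
    b = clean.astype(float)[r:-r, r:-r].reshape(-1)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(k, k)

def restore(img, filter_bank):
    """Pick the coefficient set trained for the closest blockiness level
    and apply it as a sliding-window (correlation) filter."""
    level = min(filter_bank, key=lambda lv: abs(lv - blockiness(img)))
    return correlate(img.astype(float), filter_bank[level], mode="nearest")
```

    In a full system, one kernel would be trained per quality bin with `train_ls_filter`, and `filter_bank` (a hypothetical dict in this sketch) would map each bin's typical blockiness score to its trained kernel.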