    A VLSI architecture of JPEG2000 encoder

    Copyright © 2004 IEEE. This paper proposes a VLSI architecture for a JPEG2000 encoder, which functionally consists of two parts: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). For the DWT, a spatial combinative lifting algorithm (SCLA)-based scheme supporting both the 5/3 reversible and 9/7 irreversible filters is adopted, reducing multiplication computations by 50% and 42%, respectively, compared with the conventional lifting-based implementation (LBI). For EBCOT, a dynamic memory control (DMC) strategy for Tier-1 encoding is adopted to reduce the on-chip wavelet coefficient storage by 60%, and a subband parallel-processing method is employed to speed up the EBCOT context formation (CF) process; an architecture for Tier-2 encoding is presented that reduces the on-chip bitstream buffering from full-tile size down to three-code-block size and largely eliminates the iterations of the rate-distortion (RD) truncation. This work was supported in part by the China National High Technologies Research Program (863) under Grant 2002AA1Z142.
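
    To make the lifting structure concrete, the following is a minimal sketch of the standard 1-D CDF 5/3 reversible lifting step that SCLA reorganizes; it is not the paper's combined spatial scheme, and the function name, boundary handling, and Python setting are illustrative assumptions.

        # One level of the standard 5/3 reversible lifting transform
        # (1-D, integer arithmetic); assumes len(x) >= 2.
        def dwt53_1d(x):
            # Split into even (approximation) and odd (detail) samples.
            s = [x[2 * i] for i in range((len(x) + 1) // 2)]
            d = [x[2 * i + 1] for i in range(len(x) // 2)]
            # Predict step: d[i] -= floor((s[i] + s[i+1]) / 2),
            # with symmetric extension at the right edge.
            for i in range(len(d)):
                right = s[i + 1] if i + 1 < len(s) else s[i]
                d[i] -= (s[i] + right) // 2
            # Update step: s[i] += floor((d[i-1] + d[i] + 2) / 4),
            # with symmetric extension at both edges.
            for i in range(len(s)):
                dl = d[i - 1] if i > 0 else d[0]
                dr = d[i] if i < len(d) else d[-1]
                s[i] += (dl + dr + 2) // 4
            return s, d

    In the 5/3 case these lifting steps reduce to shifts and adds; the multiplication savings quoted above come from how a scheme like SCLA reorganizes the row and column lifting passes, particularly for the 9/7 filter.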

    An efficient error resilience scheme based on Wyner-Ziv coding for region-of-interest protection of wavelet-based video transmission

    In this paper, we propose a bandwidth-efficient error resilience scheme for wavelet-based video transmission over wireless channels that introduces an additional Wyner-Ziv (WZ) stream to protect the region of interest (ROI) in a frame. In the proposed architecture, the main video stream is compressed by a generic wavelet-domain coding structure and passed through the error-prone channel without any protection. Meanwhile, the wavelet coefficients related to the predefined ROI area, obtained after an integer wavelet transform, are specially protected by a WZ codec in an additional channel during transmission. At the decoder side, the error-prone ROI-related wavelet coefficients are used as side information to help decode the WZ stream. WZ bitstreams of different sizes can be applied to meet different bandwidth conditions and end-user requirements. The simulation results clearly show that the proposed scheme has distinct advantages in saving bandwidth compared with applying an FEC algorithm to the whole video stream, while offering robust transmission over error-prone channels for certain video applications.
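
    As a rough illustration of the Wyner-Ziv principle applied here, the toy sketch below bins ROI coefficient values into cosets at the encoder and resolves each coset against the error-corrupted side information at the decoder. The paper uses a full WZ codec; the modulus, function names, and scalar coefficients are assumptions.

        # Toy Wyner-Ziv coset binning for ROI coefficients.
        M = 8  # coset modulus: the "WZ stream" carries c mod M per coefficient

        def wz_encode(coeffs, m=M):
            return [c % m for c in coeffs]  # coset indices only

        def wz_decode(cosets, side_info, m=M):
            # Pick, for each coefficient, the member of its coset that is
            # closest to the (possibly corrupted) side information.
            out = []
            for q, y in zip(cosets, side_info):
                base = (y - q) // m * m + q  # largest coset member <= y
                out.append(min((base, base + m), key=lambda v: abs(v - y)))
            return out

        orig = [40, -13, 7, 22]          # original ROI coefficients
        side = [37, -10, 9, 25]          # corrupted version from the main stream
        print(wz_decode(wz_encode(orig), side))  # -> [40, -13, 7, 22]

    Decoding succeeds whenever the channel error on a coefficient stays below M/2, which is the sense in which the WZ stream size can be traded against the level of protection.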

    Complexity scalable bitplane image coding with parallel coefficient processing

    Very fast image and video codecs are a pursued goal in both academia and industry. This paper presents a complexity-scalable and parallel bitplane coding engine for wavelet-based image codecs. The proposed method processes the coefficients in parallel, suiting hardware architectures based on vector instructions. Our previous work is extended with a mechanism that provides complexity scalability to the system. This feature allows the coder to regulate its throughput at the expense of slightly penalizing compression efficiency. Experimental results suggest that, at the fastest setting, the method almost doubles the throughput of our previous engine while penalizing compression efficiency by about 10%.
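
    The coefficient-parallel idea can be sketched with vectorized NumPy operations standing in for vector instructions: all coefficients of a code block are examined in lockstep, one bitplane at a time. Context modeling and the arithmetic coder are omitted, and the function name is illustrative.

        import numpy as np

        def bitplane_passes(coeffs, num_planes=8):
            # coeffs: integer array holding one code block of wavelet coefficients.
            mag = np.abs(coeffs)
            sig = np.zeros(coeffs.shape, dtype=bool)    # significance state
            for p in range(num_planes - 1, -1, -1):
                bit = ((mag >> p) & 1).astype(bool)     # whole bitplane at once
                newly_sig = bit & ~sig                  # significance-coding pass
                refine = bit & sig                      # refinement pass
                sig |= newly_sig
                yield p, newly_sig, refine

    Stopping the loop after a fixed number of planes is one simple way to trade throughput for compression efficiency, in the spirit of the complexity-scalability mechanism described above.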

    Image Coding with Face Descriptors Embedding

    Content descriptors, useful for browsing and retrieval tasks, are generally extracted and treated as a separate entity with respect to the content itself. At the same time, conventional coding processes do not take into account the information carried by content descriptors. Content descriptors are closely related to the content itself, and they can potentially be used to exploit redundancy in entropy coding processes. Embedding content descriptors in the bitstream can reduce the content-description extraction load and, at the same time, reduce the rate associated with the compressed content and its description. In this paper an effective implementation of this approach is presented, where image descriptors are actively used in the coding process to exploit redundancy. First, image areas containing faces are detected and encoded using a scalable method, where the base layer is represented by the corresponding eigenface and the enhancement layer is formed by the prediction error. The remaining areas are then encoded using a traditional approach. Simulations show that the achievable compression performance is comparable with that of conventional approaches, making the proposed approach very convenient for source coding and content description.
    Boschetti, Alberto; Adami, Nicola; Leonardi, Riccardo; Okuda, M.
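
    A minimal sketch of the two-layer face coding idea, assuming an eigenface basis trained offline; the names, shapes, and plain orthogonal projection are assumptions, not the authors' exact pipeline.

        import numpy as np

        def encode_face(face, mean_face, eigenfaces):
            # face: 2-D pixel patch; mean_face: length-D mean vector;
            # eigenfaces: (K, D) matrix with orthonormal rows.
            v = face.ravel().astype(float) - mean_face
            coeffs = eigenfaces @ v                   # base layer: K coefficients
            base = mean_face + eigenfaces.T @ coeffs  # eigenface reconstruction
            residual = face.ravel() - base            # enhancement layer
            return coeffs, residual

    The decoder rebuilds the base layer from the few projection coefficients and adds the coded residual; the same coefficients can then double as a face descriptor for browsing and retrieval.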

    New rate adaptation method for JPEG2000-based SNR Scalable Video Coding with Integer Linear Programming models

    In the last few years, scalable video coding has emerged as a promising technology for the efficient distribution of video through heterogeneous networks. In a heterogeneous environment, the video content needs to be adapted in order to meet different end-terminal capability requirements (user adaptation) or fluctuations of the available bandwidth (network adaptation). Consequently, the adaptation problem is a critical issue in scalable video coding design. In this paper we introduce a new adaptation method for a proposed JPEG2000-based SNR-scalable codec that formulates and solves the adaptation problem as an Integer Linear Programming problem.
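
    A generic form of such a layer-selection ILP, written under assumed symbols rather than the paper's exact model: let x_{i,l} = 1 select quality layer l of code block i, with rate cost r_{i,l}, distortion reduction q_{i,l}, and rate budget R:

        \max_{x}\ \sum_{i}\sum_{l} q_{i,l}\, x_{i,l}
        \quad\text{s.t.}\quad \sum_{i}\sum_{l} r_{i,l}\, x_{i,l} \le R,
        \qquad x_{i,l} \le x_{i,l-1}\ (l > 1),
        \qquad x_{i,l} \in \{0,1\}.

    The nesting constraint x_{i,l} <= x_{i,l-1} captures the fact that an SNR quality layer is useful only if all lower layers of the same code block are also kept.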

    Accelerating BPC-PaCo through visually lossless techniques

    Fast image codecs are a current need in applications that deal with large amounts of images. Graphics Processing Units (GPUs) are suitable processors to speed up most kinds of algorithms, especially when they allow fine-grained parallelism. Bitplane Coding with Parallel Coefficient processing (BPC-PaCo) is a recently proposed algorithm for the core stage of wavelet-based image codecs, tailored to the highly parallel architectures of GPUs. This algorithm provides complexity scalability, allowing faster execution at the expense of coding efficiency. Its main drawback is that the speedup and the loss in image quality are controlled only coarsely, resulting in visible distortion at low and medium rates. This paper addresses this issue by integrating visually lossless coding techniques into BPC-PaCo. The resulting method minimizes the visual distortion introduced in the compressed file, yielding images of higher quality to a human observer. Experimental results also indicate 12% speedups with respect to BPC-PaCo.
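
    The visually lossless ingredient can be sketched as per-subband truncation points: bitplanes whose magnitude contribution falls below a visibility threshold are never coded. The thresholds below are placeholders; the actual method derives them from a visual model.

        # Hypothetical per-subband visibility thresholds (JND-style units).
        VIS_THRESH = {"LL": 1, "HL": 2, "LH": 2, "HH": 4}

        def stop_plane(subband):
            # Lowest bitplane still worth coding: planes whose magnitude
            # contribution (2**p) is below the threshold are skipped, so
            # no bits are spent on distortion assumed to be invisible.
            p = 0
            while (1 << p) < VIS_THRESH[subband]:
                p += 1
            return p  # code bitplanes from the MSB down to p

    Skipping those planes both bounds the visible distortion and removes coding work, which is consistent with the reported speedups over plain BPC-PaCo.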