
    Low cost architecture for JPEG2000 encoder without code-block memory

    The amount of memory required for code-blocks is one of the most important issues in JPEG2000 encoder chip implementation. This work unifies the output scanning order of the 2D-DWT with the processing order of the EBCOT in order to eliminate the code-block memory completely. We also propose a new architecture for embedded block coding (EBC), code-block switch adaptive embedded block coding (CS-AEBC), which can skip insignificant bit-planes to reduce computation time and save power. In addition, a new dynamic rate-distortion optimization (RDO) approach is proposed to reduce the computation time when the EBC performs lossy compression. The proposed JPEG2000 encoder requires only 2 KB of internal memory, and the bandwidth required for the external memory is 2.1 B/cycle. (International conference, 2008-06-23 to 2008-06-26, Hannover, Germany)
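
    A rough illustration of the bit-plane skipping idea: the number of significant magnitude bit-planes of a code-block follows from its largest coefficient, and everything above that can be skipped. This is only a sketch assuming a NumPy array of quantized integer wavelet coefficients; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def significant_bitplanes(block: np.ndarray) -> int:
    """Index of the highest magnitude bit-plane that carries any data."""
    max_mag = int(np.abs(block).max())
    return max_mag.bit_length()          # 0 for an all-zero code-block

def scan_bitplanes(block: np.ndarray):
    """Visit only the significant bit-planes, most significant first."""
    mags = np.abs(block.astype(np.int64))
    for p in range(significant_bitplanes(block) - 1, -1, -1):
        yield p, (mags >> p) & 1         # the remaining planes are skipped
```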

    High Efficiency Concurrent Embedded Block Coding Architecture for JPEG 2000

    Embedded block coding with optimized truncation (EBCOT) is the most important part of JPEG 2000. Due to its bit-level operations and three-pass scanning technique, EBCOT may take more than 50% of the operation time in JPEG 2000. This paper presents a high-efficiency concurrent EBCOT (HECEBC) entropy encoder hardware architecture. The proposed HECEBC can concurrently process the four samples of a stripe column. Furthermore, the architecture can be extended to process several stripe columns concurrently, allowing JPEG 2000 to serve high-resolution applications in real time. In addition, the HECEBC uses a concentrated context window to stabilize the Context-Decision (CX-D) output, relaxing the load between the arithmetic encoder (AE) and the parallel-in-serial-out (PISO) buffer and tripling the EBC performance.
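
    The stripe-column parallelism can be pictured as follows: JPEG 2000 bit-plane coding scans a code-block in stripes of four rows, and a concurrent design such as HECEBC handles the four samples of one column together rather than one at a time. A minimal sketch of how those columns are gathered (the helper name and NumPy representation are assumptions, not from the paper):

```python
import numpy as np

def stripe_columns(block: np.ndarray, stripe_height: int = 4):
    """Yield each stripe column (up to 4 vertically adjacent samples).

    A serial coder visits these samples one by one; a concurrent design
    can process the whole column at once.
    """
    rows, cols = block.shape
    for top in range(0, rows, stripe_height):
        stripe = block[top:top + stripe_height, :]
        for c in range(cols):
            yield (top, c), stripe[:, c]
```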

    High throughput image compression and decompression on GPUs

    This work investigates possibilities to create a high-throughput, GPU-friendly, intra-only, wavelet-based video compression algorithm optimized for visually lossless applications. Addressing the key observation that JPEG 2000's entropy coder is a bottleneck and might be overly complex for high-bit-rate scenarios, various algorithmic alterations are proposed and evaluated. First, JPEG 2000's Selective Arithmetic Coding mode is realized on the GPU, but the resulting throughput gains are shown to be limited. Instead, two independent alterations that are not compliant with the standard are proposed: they (1) give up the concept of intra-bit-plane truncation points and (2) introduce a true raw-coding mode that is fully parallelizable and does not require any context modeling. Next, an alternative block coder from the literature, the Bitplane Coder with Parallel Coefficient Processing (BPC-PaCo), is evaluated. Since it trades signal adaptiveness for increased parallelism, it is shown here how a stationary probability model averaged from a set of test sequences yields competitive compression efficiency.
    A combination of BPC-PaCo with the single-pass mode is proposed and shown to increase the speedup with respect to the original JPEG 2000 entropy coder from 2.15x (BPC-PaCo with two passes) to 2.6x (proposed BPC-PaCo with single-pass mode), at the marginal cost of increasing the PSNR penalty by 0.3 dB to at most 1 dB. Furthermore, a parallel algorithm is presented that determines the optimal code-block bit-stream truncation points (given an available bit-rate budget) and builds the entire code stream on the GPU, reducing the amount of data that has to be transferred back into host memory to a minimum. A theoretical runtime model is formulated that allows, based on benchmarking results on one GPU, the runtime of a kernel on another GPU to be predicted. Lastly, the first ever JPEG XS GPU decoder realization is presented. JPEG XS was designed to be a low-complexity codec and, for the first time, explicitly demanded GPU-friendliness in the call for proposals. At bit rates above 1 bpp, the decoder is around 2x faster than the original JPEG 2000 and 1.5x faster than JPEG 2000 with the fastest evaluated entropy coder (BPC-PaCo with single-pass mode). With a GeForce GTX 1080, a decoding throughput of around 200 fps is achieved for a UHD 4:4:4 sequence.
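
    As a back-of-envelope sanity check on the quoted decoder figure, the sample rate implied by roughly 200 fps for a UHD 4:4:4 sequence can be estimated as below (assuming 3840x2160 UHD resolution; the helper is purely illustrative):

```python
def samples_per_second(width: int, height: int, fps: float, components: int = 3) -> float:
    """Rough sample rate a decoder must sustain for the given format."""
    return width * height * components * fps

# ~5.0e9 samples/s for 3840x2160, 4:4:4, 200 fps
print(f"{samples_per_second(3840, 2160, 200):.2e}")
```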

    High efficiency architecture of ESCOT with pass concurrent context modeling scheme for scalable video coding

    In this work, we propose a high-efficiency hardware architecture for embedded sub-band coding with optimal truncation (ESCOT) with a pass concurrent context modeling (PCCM) scheme for wavelet-based scalable video coding (SVC). PCCM merges the three-pass process of bit-plane coding into a single-pass process. It improves the efficiency of the ESCOT algorithm and reduces the frequency of memory accesses, which lowers power consumption. Furthermore, we use the parallel architecture of PCCM to encode 4 samples concurrently, which improves operation speed and reduces the internal memory requirement by 40%. We use an Artisan TSMC 0.18 µm 1P6M standard cell library to design and implement the proposed concurrent context modeling. Simulation results indicate that PCCM achieves an operation speedup of 9.5x compared to the standard context modeling of ESCOT, and that it can process 1080p video at 30 fps with a clock rate of 125 MHz. (International conference, 2008-05-18 to 2008-05-21, Seattle, WA, USA)
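
    The pass-merging idea can be sketched as a single traversal that decides, per sample, which of the three coding passes it would have belonged to, instead of scanning each bit-plane three times. This is a highly simplified sketch; the actual PCCM context modeling and its hardware mapping are not reproduced here.

```python
import numpy as np

def single_pass_classify(significant: np.ndarray, neighbor_significant: np.ndarray):
    """Classify every sample into SPP, MRP, or cleanup in one scan.

    `significant` marks samples already significant from earlier planes;
    `neighbor_significant` marks samples with at least one significant
    neighbor (both boolean arrays of the code-block's shape).
    """
    for idx in np.ndindex(significant.shape):
        if significant[idx]:
            yield idx, "MRP"        # magnitude refinement
        elif neighbor_significant[idx]:
            yield idx, "SPP"        # significance propagation
        else:
            yield idx, "CLEANUP"
```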

    Accelerating BPC-PaCo through visually lossless techniques

    Fast image codecs are a current need in applications that deal with large numbers of images. Graphics Processing Units (GPUs) are suitable processors to speed up most kinds of algorithms, especially when they allow fine-grain parallelism. Bitplane Coding with Parallel Coefficient processing (BPC-PaCo) is a recently proposed algorithm for the core stage of wavelet-based image codecs tailored to the highly parallel architectures of GPUs. This algorithm provides complexity scalability to allow faster execution at the expense of coding efficiency. Its main drawback is that the speedup and the loss in image quality are controlled only coarsely, resulting in visible distortion at low and medium rates. This paper addresses this issue by integrating techniques of visually lossless coding into BPC-PaCo. The resulting method minimizes the visual distortion introduced in the compressed file, yielding images of higher quality to a human observer. Experimental results also indicate a 12% speedup with respect to the original BPC-PaCo.
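
    One common way to make such a scheme visually lossless is to stop coding a subband's least significant bit-planes once the remaining distortion falls below a visibility threshold. A hedged sketch of that cut-off computation; the threshold values would come from a psychovisual model, which is not shown, and the function and parameter names are illustrative rather than the paper's:

```python
import math

def skippable_lsb_planes(quant_step: float, visibility_threshold: float) -> int:
    """How many least significant bit-planes can stay uncoded without
    exceeding the given visibility threshold for this subband."""
    if visibility_threshold <= quant_step:
        return 0
    return int(math.log2(visibility_threshold / quant_step))
```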

    GPU-oriented architecture for an end-to-end image/video codec based on JPEG2000

    Modern image and video compression standards employ computationally intensive algorithms that provide advanced features to the coding system. Current standards often need to be implemented in hardware or using expensive solutions to meet the real-time requirements of some environments. Contrary to this trend, this paper proposes an end-to-end codec architecture running on inexpensive Graphics Processing Units (GPUs) that is based on, though not compatible with, the JPEG2000 international standard for image and video compression. When executed on a commodity Nvidia GPU, it achieves real-time processing of 12K video. The proposed S/W architecture utilizes four CUDA kernels that minimize memory transfers, use registers instead of shared memory, and employ a double-buffer strategy to optimize the streaming of data. The analysis of throughput indicates that the proposed codec yields results at least 10× superior on average to those achieved with JPEG2000 implementations devised for CPUs, and approximately 4× superior to those achieved with hardwired solutions of the HEVC/H.265 video compression standard.
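
    The double-buffer strategy mentioned above keeps the GPU busy by staging the next frame while the current one is being processed. A minimal host-side sketch of that ping-pong overlap, using a thread pool in place of CUDA streams and pinned memory; all names here are illustrative, not the paper's API:

```python
from concurrent.futures import ThreadPoolExecutor

def double_buffered(frames, upload, process):
    """Overlap data transfer (`upload`) with compute (`process`) by
    always keeping one frame in flight."""
    frames = iter(frames)
    with ThreadPoolExecutor(max_workers=1) as pool:
        try:
            pending = pool.submit(upload, next(frames))
        except StopIteration:
            return
        for nxt in frames:
            current = pending.result()           # staged buffer is ready
            pending = pool.submit(upload, nxt)   # stage the next frame
            yield process(current)               # compute overlaps the upload
        yield process(pending.result())
```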

    Design and FPGA Implementation of High Speed DWT-IDWT Architecture with Pipelined SPIHT Architecture for Image Compression

    Image compression demands high-speed architectures for the transformation and encoding processes. Medical image compression demands lossless compression schemes and faster architectures. A trade-off between speed and area decides the complexity of image compression algorithms. In this work, a high-speed DWT architecture and a pipelined SPIHT architecture are designed, modeled, and implemented on an FPGA platform. The DWT computation is performed using matrix multiplication and is implemented on a Virtex-5 FPGA, consuming less than 1% of the hardware resources. The SPIHT algorithm uses a pipelined architecture and hence achieves higher throughput and lower latency. The SPIHT implementation operates at a frequency of 260 MHz and occupies less than 15% of the resources. The designed architecture is suitable for high-speed image compression applications.
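
    A DWT by matrix multiplication expresses the filter-and-downsample step as a transform matrix applied along each dimension. A small sketch using a single-level Haar transform as a stand-in; the abstract does not state which wavelet filters the architecture actually uses:

```python
import numpy as np

def haar_matrix(n: int) -> np.ndarray:
    """Single-level Haar analysis matrix for even n: the first n/2 rows
    produce averages (low-pass), the last n/2 rows differences (high-pass)."""
    h = np.zeros((n, n))
    for i in range(n // 2):
        h[i, 2 * i] = h[i, 2 * i + 1] = 0.5
        h[n // 2 + i, 2 * i] = 0.5
        h[n // 2 + i, 2 * i + 1] = -0.5
    return h

def dwt2_by_matmul(image: np.ndarray) -> np.ndarray:
    """Separable 2D DWT as two matrix multiplications, one per dimension."""
    hr = haar_matrix(image.shape[0])
    hc = haar_matrix(image.shape[1])
    return hr @ image @ hc.T
```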

    Compressive sensing based image processing and energy-efficient hardware implementation with application to MRI and JPEG 2000

    In the present age of technology, the buzzwords are low-power, energy-efficient, and compact systems. This directly concerns the data processing and hardware techniques employed in the core of these devices. One of the most power-hungry and space-consuming tasks is image/video processing, due to its high quality requirements. In current design methodologies, a point has nearly been reached at which physical and physiological effects limit the ability to simply encode data faster. These limits have led to research into methods that reduce the amount of acquired data without degrading image quality or increasing energy consumption. Compressive sensing (CS) has emerged as an efficient signal compression and recovery technique that can be used to reduce data acquisition and processing. It exploits the sparsity of a signal in a transform domain to perform sampling and stable recovery. This is an alternative paradigm to conventional data processing and is robust in nature. Unlike conventional methods, CS provides an information-capturing paradigm that combines sampling and compression. It permits signals to be sampled below the Nyquist rate while still allowing optimal reconstruction of the signal. The required measurements are far fewer than those of conventional methods, and the process is non-adaptive, making the sampling faster and universal. In this thesis, CS methods are applied to magnetic resonance imaging (MRI) and JPEG 2000, which are widely used in clinical imaging and image compression, respectively. Over the years, MRI has improved dramatically in both imaging quality and speed, which has further revolutionized the field of diagnostic medicine. However, imaging speed, which is essential to many MRI applications, still remains a major challenge. The specific challenge addressed in this work is the use of non-Fourier, complex-measurement-based data acquisition. This method makes it possible to reconstruct high-quality MRI data from minimal measurements, due to the high incoherence between the two chosen matrices. Similarly, JPEG 2000, though already providing high compression, can be further improved by using compressive sampling, and the image quality is also improved. Moreover, an optimized JPEG 2000 architecture reduces the overall processing and yields faster computation when combined with CS. Considering these requirements, this thesis is presented in two parts. In the first part: (1) a complex Hadamard matrix (CHM) based 2D and 3D MRI data acquisition with recovery using a greedy algorithm is proposed; the CHM measurement matrix is shown to satisfy the necessary condition for CS, known as the restricted isometry property (RIP), and the sparse recovery is done using compressive sampling matching pursuit (CoSaMP); (2) an optimized matrix and a modified CoSaMP are presented, which enhance the MRI performance compared with conventional sampling; (3) an energy-efficient, cost-efficient hardware design based on a field programmable gate array (FPGA) is proposed, providing a platform for low-cost MRI processing hardware. At every stage, the design is shown to be superior to other commonly used MRI-CS methods and comparable with conventional MRI sampling. In the second part, CS techniques are applied to image processing and combined with the JPEG 2000 coder.
    While CS can reduce the encoding time, its effect on the overall JPEG 2000 encoder is not very significant because of other complex JPEG 2000 algorithms. One bottleneck encountered is JPEG 2000 arithmetic encoding (AE), which is based entirely on bit-level operations. In this work, this problem is tackled by proposing a two-symbol AE with an efficient FPGA-based hardware design. Furthermore, this design is energy-efficient, fast, and of lower complexity compared to conventional JPEG 2000 encoding.
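
    For context, the flavor of greedy recovery used in the first part (CoSaMP) looks roughly as follows. This is a textbook-style sketch with a generic real measurement matrix; the thesis's complex Hadamard matrix and modified CoSaMP are not reproduced here.

```python
import numpy as np

def cosamp(A: np.ndarray, y: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Bare-bones CoSaMP: recover a k-sparse x from measurements y = A @ x."""
    m, n = A.shape
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(iters):
        proxy = A.T @ residual
        candidates = np.argsort(np.abs(proxy))[-2 * k:]       # 2k largest proxy entries
        support = np.union1d(candidates, np.flatnonzero(x)).astype(int)
        ls = np.zeros(n)
        ls[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(ls))[-k:]                    # prune to the k largest
        x = np.zeros(n)
        x[keep] = ls[keep]
        residual = y - A @ x
    return x
```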

    Parallel architectural design space exploration for real-time image compression

    Embedded block coding with optimized truncation (EBCOT) is a coding algorithm used in JPEG2000. EBCOT operates on the wavelet-transformed data to generate a highly scalable compressed bit stream. Sub-band samples obtained from the wavelet transform are partitioned into smaller blocks called code-blocks. EBCOT encoding is done per block to avoid error propagation across bands and to increase robustness. Block-wise encoding provides flexibility for parallel hardware implementation of EBCOT. The encoding process in JPEG2000 is divided into two phases: Tier 1 coding (entropy encoding) and Tier 2 coding (tag tree coding). This thesis deals with the design space exploration and implementation of a parallel hardware architecture for the Tier 1 encoder used in JPEG2000. The parallel capabilities of the Tier 1 encoder motivate the exploration of a high-performance, real-time image compression architecture in hardware. The design space covers two investigations: the effect of block size in terms of resources, speed, and compression performance, and computational performance. The key computational performance targets of the architecture are significant speedup compared to a sequential implementation, minimum processing latency, and minimum logic resource utilization. The proposed architecture is developed for an embedded application system, coded in VHDL, and synthesized for implementation on a Xilinx FPGA system.
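
    The block-size design point explored above starts from how a subband is tiled into code-blocks, which are then encoded independently and can therefore be encoded in parallel. A small sketch of that partitioning step, with names chosen here purely for illustration:

```python
import numpy as np

def code_blocks(subband: np.ndarray, cb_size: int = 32):
    """Tile a subband into cb_size x cb_size code-blocks (edge blocks may
    be smaller); each block is an independent unit of Tier 1 encoding."""
    rows, cols = subband.shape
    for r in range(0, rows, cb_size):
        for c in range(0, cols, cb_size):
            yield (r, c), subband[r:r + cb_size, c:c + cb_size]
```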