14 research outputs found

    High-performance arithmetic coding VLSI macro for the H.264 video compression standard

    Scanned Document Compression Technique

    Different media files are used to communicate information these days: text files, images, audio, video, and so on. All of these media files require a large amount of space when they are to be transferred. A typical five-page report occupies about 75 KB of space, whereas a single picture can take up around 1.4 MB. In our paper, the main focus is on two compression techniques, the DjVu compression method and the block-based hybrid video codec, of which we chiefly concentrate on DjVu. DjVu is an image compression technique specifically geared toward the compression of scanned documents in color at high resolution. Typical magazine pages in color scanned at 300 dpi are compressed to between 40 and 80 KB, or 5 to 10 times smaller than with JPEG for a comparable level of subjective quality. The foreground layer, which contains the text and drawings and requires high spatial resolution, is separated from the background layer, which contains pictures and backgrounds and requires less resolution. The foreground is compressed with a bi-tonal image compression technique that takes advantage of character shape similarities. The background is compressed with a new progressive, wavelet-based compression method. A real-time, memory-efficient version of the decoder is available as a plug-in for popular web browsers. We also demonstrate that the proposed segmentation algorithm can improve the quality of decoded documents while simultaneously lowering the bit rate.
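
    A minimal sketch of the foreground/background separation idea can help make the layered model concrete. The sketch below assumes a fixed darkness threshold and a 4x background downscale; these parameters, and the simple thresholding itself, are illustrative stand-ins, not DjVu's actual segmentation algorithm.

        # Illustrative layer separation in the spirit of DjVu (assumed
        # threshold and scale values; not the paper's segmentation).
        import numpy as np
        from PIL import Image

        def separate_layers(page, threshold=96, bg_scale=4):
            gray = np.asarray(page.convert("L"))
            # Foreground mask: dark pixels are assumed to be text/line art,
            # kept bitonal at full resolution.
            foreground = gray < threshold
            # Background: pictures and paper texture need less resolution.
            w, h = page.size
            background = page.resize((w // bg_scale, h // bg_scale),
                                     Image.LANCZOS)
            return foreground, background

    In DjVu itself, the bitonal mask then feeds a shape-clustering bi-level coder and the background feeds the progressive wavelet coder mentioned in the abstract.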

    Region Specific Wavelet Compression for 4K Surveillance Images

    For successful transmission of the massive image sequences produced during 4K surveillance operations, the large volume of data to be transferred costs high bandwidth and introduces latency and delay. There is therefore a need for real-time compression of these image sequences. In this study we present a region-specific approach to wavelet-based image compression that enables management of this huge information flow by applying Haar wavelet transforms in hierarchical order.
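
    The hierarchical Haar decomposition on which the scheme rests can be sketched in a few lines of Python. This is a generic one-level 2-D Haar transform, not the paper's region-specific codec; repeating it on the LL (top-left) subband produces the hierarchy, and a region-specific scheme would then quantize subbands more coarsely outside regions of interest before entropy coding.

        # One level of a 2-D Haar wavelet transform (generic sketch;
        # image dimensions are assumed even).
        import numpy as np

        def haar2d_level(img):
            a = img.astype(np.float64)
            # Rows: pairwise averages (low-pass) and differences (high-pass).
            a = np.hstack([(a[:, 0::2] + a[:, 1::2]) / 2,
                           (a[:, 0::2] - a[:, 1::2]) / 2])
            # Columns: the same filters.
            a = np.vstack([(a[0::2, :] + a[1::2, :]) / 2,
                           (a[0::2, :] - a[1::2, :]) / 2])
            return a   # subband layout: [LL LH; HL HH]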

    The B-coder: an improved binary arithmetic coder and probability estimator

    In this paper we present the B-coder, an efficient binary arithmetic coder that performs extremely well on a wide range of data. The B-coder should be classed as an 'approximate' arithmetic coder because of its use of an approximation to multiplication. We show that the approximation used in the B-coder has an efficiency cost of 0.003 compared to the Shannon entropy. At the heart of the B-coder is an efficient state machine that adapts rapidly to the data to be coded. The adaptation is achieved by allowing a fixed table of transitions and probabilities to change within a given tolerance. The combination of the two techniques gives a coder that outperforms current state-of-the-art binary arithmetic coders.
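
    The paper's state machine and transition tables are its own, but the interval arithmetic being approximated can be sketched generically. The encoder below uses Python big integers in place of hardware renormalization and carry logic, and a fixed symbol probability p_one, an assumption standing in for the B-coder's adaptive estimator; the marked multiplication is the operation that approximate coders replace with a cheaper estimate.

        # Generic binary arithmetic encoder (sketch; not the B-coder).
        def encode(bits, p_one=0.2, precision=16):
            low, rng, scale = 0, 1 << precision, precision
            for b in bits:
                # Exact interval split. 'Approximate' coders replace this
                # multiply with a table-driven estimate, costing a small,
                # bounded amount of coding efficiency.
                one_rng = max(1, int(rng * p_one))
                if b:
                    low, rng = low + (rng - one_rng), one_rng
                else:
                    rng -= one_rng
                while rng < (1 << (precision - 1)):  # keep split precision
                    low, rng, scale = low << 1, rng << 1, scale + 1
            # Any value in [low, low + rng) identifies the bit sequence;
            # it denotes a binary fraction with 'scale' digits.
            return low + rng // 2, scale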

    Bitplane image coding with parallel coefficient processing

    Image coding systems have traditionally been tailored to multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded by the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently of and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to the inherently sequential nature of the coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to traditional strategies is almost negligible.
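
    The lockstep processing BPC-PaCo targets can be illustrated with NumPy, whose vectorized operations stand in for SIMD lanes: every coefficient in the codeblock is tested against the same bitplane simultaneously. Only bitplane extraction is shown here; the paper's reformulated scanning order, context formation, probability model, and arithmetic coder are beyond a sketch.

        # Bitplane decomposition of a codeblock, one lockstep test per
        # plane (NumPy vectorization standing in for SIMD lanes).
        import numpy as np

        def bitplanes(codeblock):
            mag = np.abs(codeblock.astype(np.int64))
            for p in range(int(mag.max()).bit_length() - 1, -1, -1):
                yield p, (mag >> p) & 1   # all coefficients at once

        block = np.array([[13, -5],
                          [ 2,  9]])
        for p, bits in bitplanes(block):
            print(f"plane {p}:\n{bits}")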

    Exploring Discrete Cosine Transform for Multi-resolution Analysis

    Multi-resolution analysis has been a very popular technique in recent years. Wavelets have been used extensively to perform multi-resolution image expansion and analysis. The DCT, however, has been used to compress images but not for multi-resolution image analysis. This thesis is an attempt to explore the possibilities of using the DCT for multi-resolution image analysis. A naive implementation of the block DCT for multi-resolution expansion has many difficulties that lead to signal distortion. One of the main causes of distortion is the blocking artifacts that appear when reconstructing images transformed by the DCT. The new algorithm is based on the line DCT, which eliminates the need for block processing. The line DCT is a one-dimensional transform formed by cascading the image rows and columns into one transform operation. Several images have been used to test the algorithm at various resolution levels. The reconstruction mean square error is used as an indication of the success of the method. The proposed algorithm has also been tested against the traditional block DCT.
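
    One elementary way to treat the DCT as a multi-resolution tool is to inverse-transform only the low-frequency corner of a full-frame 2-D DCT. The sketch below illustrates that idea only; the thesis's line DCT differs, cascading the image rows and columns into a single one-dimensional transform.

        # Coarse resolution level from the low-frequency DCT corner
        # (generic full-frame sketch, not the thesis's line DCT).
        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_downscale(img, factor):
            coeffs = dctn(img.astype(np.float64), norm="ortho")
            h, w = img.shape
            coarse = coeffs[:h // factor, :w // factor]
            # Dividing by factor preserves mean brightness under the
            # orthonormal scaling.
            return idctn(coarse, norm="ortho") / factor

        img = np.random.rand(256, 256)
        print(dct_downscale(img, 2).shape)   # (128, 128)

    Comparing such reconstructions against the original by mean square error mirrors the kind of evaluation the abstract describes.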

    Near-Lossless Bitonal Image Compression System

    The main purpose of this thesis is to develop an efficient near-lossless bitonal compression algorithm and to implement that algorithm on a hardware platform. The current methods for compression of bitonal images include the JBIG and JBIG2 algorithms; however, both have disadvantages. Both algorithms are covered by patents filed by IBM, making them costly to implement commercially. Also, JBIG provides only lossless compression, while JBIG2 provides lossy methods only for document-type images. For these reasons, a new method for introducing loss, and for controlling this loss to sustain quality, is developed. The lossless bitonal image compression algorithm used for this thesis is the Block Arithmetic Coder for Image Compression (BACIC), which can efficiently compress bitonal images. In this thesis, loss is introduced for cases where better compression efficiency is needed. However, introducing loss in bitonal images is especially difficult because pixels undergo such a drastic change, either from white to black or from black to white. Such pixel flipping introduces salt-and-pepper noise, which can be very distracting when viewing an image. Two methods are used in combination to control the visual distortion introduced into the image. The first is to keep track of the error created by the flipping of pixels and to use this error to decide whether flipping another pixel would cause the visual distortion to exceed a predefined threshold. The second method is region-of-interest consideration: lower loss, or no loss, is introduced into the important parts of an image, and higher loss into the less important parts. This allows for a good-quality image while increasing the compression efficiency. Also, the ability of BACIC to compress grayscale images is studied, and BACICm, a multiplanar BACIC algorithm, is created. A hardware implementation of the BACIC lossless bitonal image compression algorithm is also designed. The hardware implementation is written in VHDL and targets a Xilinx FPGA, chosen for its flexibility. The programmed FPGA could be included in a product of the facsimile or printing industry to handle compression or decompression internal to the unit, giving it an advantage in the marketplace.
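
    The first distortion-control method, error tracking combined with a region-of-interest budget, might be sketched as follows. The 3x3 error window, unit error weights, and the stricter in-ROI limit are illustrative assumptions rather than the thesis's actual distortion model, and the BACIC coding stage itself is not shown.

        # Error-budgeted pixel flipping (illustrative assumptions only).
        import numpy as np

        def try_flip(img, err, y, x, budget, roi_mask):
            """Flip img[y, x] (0/1) only if local distortion stays in budget."""
            limit = budget * (0.25 if roi_mask[y, x] else 1.0)  # stricter in ROI
            if err[y, x] + 1.0 > limit:
                return False          # would exceed the distortion threshold
            img[y, x] ^= 1            # white<->black flip
            # Charge the error to a 3x3 neighborhood so that flips, which
            # cause salt-and-pepper noise, cannot cluster visibly.
            y0, y1 = max(0, y - 1), min(img.shape[0], y + 2)
            x0, x1 = max(0, x - 1), min(img.shape[1], x + 2)
            err[y0:y1, x0:x1] += 1.0
            return True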