    A new multistage lattice vector quantization with adaptive subband thresholding for image compression

    Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable-length coding procedure using Golomb codes compresses the codebook indices, yielding a very efficient and fast entropy coding technique. Experimental results show that MLVQ performs significantly better than JPEG 2000 and recent VQ techniques on various test images.
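    The abstract does not detail its Golomb coder, so the following is only a minimal sketch of classic Golomb encoding for non-negative integers such as codebook indices; the function name, the bit-string output format, and the choice of parameter m are assumptions for illustration, not details from the paper.

        def golomb_encode(n: int, m: int) -> str:
            """Encode a non-negative integer n with Golomb parameter m:
            a unary quotient followed by a truncated-binary remainder."""
            q, r = divmod(n, m)
            code = "1" * q + "0"          # unary part: q ones, then a terminating zero
            if m == 1:
                return code               # m = 1 degenerates to pure unary coding
            b = (m - 1).bit_length()      # ceil(log2(m))
            cutoff = (1 << b) - m         # remainders below this use b - 1 bits
            if r < cutoff:
                code += format(r, f"0{b - 1}b")
            else:
                code += format(r + cutoff, f"0{b}b")
            return code

        # e.g. golomb_encode(9, 4) -> "11001"

    Smaller m favors small values, so a subband whose indices cluster near zero gets a shorter average code with a smaller parameter.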

    Fast and Efficient Lossless Image Compression

    We present a new method for lossless image compression that gives compression comparable to JPEG lossless mode at about five times the speed. Our method, called FELICS, is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding we use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter we present and analyze a provably good method for estimating the single coding parameter.
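    The paper's estimation method is not reproduced here; as a hedged sketch, one standard way to choose the single Rice parameter k is to pick the value that would have minimized the total code length over the residuals seen so far (the function name and sampling strategy are assumptions for illustration):

        def best_rice_k(values: list[int], max_k: int = 15) -> int:
            """Pick the Rice parameter k minimizing the total code length for
            a sample of non-negative residuals. A Rice code with parameter k
            spends (v >> k) + 1 bits on the unary quotient plus k remainder bits."""
            def total_bits(k: int) -> int:
                return sum((v >> k) + 1 + k for v in values)
            return min(range(max_k + 1), key=total_bits)

        # e.g. best_rice_k([3, 5, 2, 8, 4]) -> 2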

    Data structures and compression algorithms for high-throughput sequencing technologies

    Background: High-throughput sequencing (HTS) technologies play important roles in the life sciences by allowing the rapid parallel sequencing of very large numbers of relatively short nucleotide sequences, in applications ranging from genome sequencing and resequencing to digital microarrays and ChIP-Seq experiments. As experiments scale up, HTS technologies create new bioinformatics challenges for the storage and sharing of HTS data.

    Results: We develop data structures and compression algorithms for HTS data. A processing stage maps short sequences to a reference genome or a large table of sequences. Then the integers representing the short sequence absolute or relative addresses, their length, and the substitutions they may contain are compressed and stored using various entropy coding algorithms, including both old and new fixed codes (e.g., Golomb, Elias Gamma, MOV) and variable codes (e.g., Huffman). The general methodology is illustrated and applied to several HTS data sets. Results show that the information contained in HTS files can be compressed by a factor of 10 or more, depending on the statistical properties of the data sets and various other choices and constraints. Our algorithms fare well against general-purpose compression programs such as gzip, bzip2 and 7zip; timing results show that our algorithms are consistently faster than the best general-purpose compression programs.

    Conclusions: It is not likely that exactly one encoding strategy will be optimal for all types of HTS data. Different experimental conditions will generate various data distributions whereby one encoding strategy can be more effective than another. We have implemented some of our encoding algorithms in the software package GenCompress, which is available upon request from the authors. With the advent of HTS technology and increasingly new experimental protocols for using the technology, sequence databases are expected to continue rising in size. The methodology we have proposed is general, and these advanced compression techniques should allow researchers to manage and share their HTS data in a more timely fashion.
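    As a hedged illustration of one fixed code named above, here is a minimal Elias Gamma encoder for positive integers (the bit-string representation is an assumption for readability; the paper's actual implementation is not shown):

        def elias_gamma_encode(n: int) -> str:
            """Encode a positive integer n with the Elias Gamma code:
            (bit_length - 1) zeros, then n itself in binary."""
            if n < 1:
                raise ValueError("Elias Gamma is defined for positive integers")
            binary = bin(n)[2:]
            return "0" * (len(binary) - 1) + binary

        # e.g. elias_gamma_encode(9) -> "0001001"

    A decoder counts the leading zeros to learn the length of the binary part, which is why the code is self-delimiting and suits address gaps of unknown magnitude.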

    Hardware Software Synthesis of a H.264 / AVC Baseline Profile Decoder

    The latest video compression standard is a joint effort between ITU and MPEG known as H.264/AVC. As with any video compression standard, H.264/AVC uses computationally intensive algorithms to maximize performance. During decompression these algorithms must be applied in real time, processing 30 frames a second. This can be done in software, specialized hardware, or a combination of the two. Software solutions allow for maximum portability and ease of design, but general-purpose processors (GPPs) cannot take full advantage of the parallelizable algorithms that the H.264 decoder is based upon. Specialized hardware solutions, on the other hand, allow concurrent data and instruction paths, but do not offer a high level of abstraction for cross-platform development. Recent work by Xilinx has resulted in the advent of the MicroBlaze soft processor, a stand-alone microcontroller built from an FPGA. The MicroBlaze provides a specialized hardware medium to run software on-chip with VHDL entities. The goal of this thesis was to model and simulate a software/hardware hybrid H.264/AVC Baseline Profile decoder using VHDL and a soft processor. It was proposed to divide all highly sequential calculations (run-length and CAVLC decoding) and control data flow into software and perform the remaining calculations (prediction, inverse transform, inverse quantization, etc.) in hardware modules. The software runs on Xilinx's MicroBlaze soft processor and the hardware was designed using VHDL. A major advantage of soft processors over GPPs is that hardware instantiations reside on-chip with the processor. The software and MicroBlaze soft processor were simulated in a test bench and the results proved that the MicroBlaze could not handle the encoded bit-stream in real time. For this reason the hardware interface and hardware decoder were never fully implemented. The scope of the thesis covers the H.264 Baseline Profile standard, the MicroBlaze processor, the implemented software solution, and the proposed hardware counterpart.
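    Run-length decoding, one of the sequential tasks the thesis assigns to software, can be illustrated with a minimal sketch; the (run, level) pair format and the 16-coefficient block size are assumptions for illustration rather than the thesis's exact data layout.

        def run_length_decode(pairs, block_size=16):
            """Expand (run, level) pairs into a coefficient block: each pair
            contributes run zeros followed by one nonzero level; the rest
            of the block is zero-filled."""
            coeffs = []
            for run, level in pairs:
                coeffs.extend([0] * run)   # run of zero coefficients
                coeffs.append(level)       # the nonzero coefficient itself
            coeffs.extend([0] * (block_size - len(coeffs)))
            return coeffs

        # e.g. run_length_decode([(0, 5), (2, -1)]) ->
        # [5, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]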

    Test Slice Difference Technique for Low-Transition Test Data Compression


    Capsule endoscopy system with novel imaging algorithms

    Wireless capsule endoscopy (WCE) is a state-of-the-art technology for acquiring images of the human intestine for medical diagnostics. In WCE, the patient ingests a specially designed electronic capsule with imaging and wireless transmission capabilities inside it. While the capsule travels through the gastrointestinal (GI) tract, it captures images and sends them wirelessly to an outside data logger unit. The data logger stores the image data, which are then transferred to a personal computer (PC) where the images are reconstructed and displayed for diagnosis. The key design challenge in WCE is to reduce the area and power consumption of the capsule while maintaining acceptable image reconstruction.

    In this research, the unique properties of WCE images are identified by analyzing hundreds of endoscopic images and video frames, and these properties are then used to develop novel, low-complexity compression algorithms tailored for capsule endoscopy. The proposed image compressor consists of a new YEF color space converter, a lossless prediction coder, a customizable chrominance sub-sampler and an efficient Golomb-Rice encoder. The scheme has both lossy and lossless modes and is further customized to work with two lighting modes: conventional white light imaging (WLI) and emerging narrow band imaging (NBI). The average compression ratio achieved using the proposed lossy compression algorithm is 80.4% for WLI and 79.2% for NBI, with a high reconstruction quality index for both bands. Two surveys have been conducted which show that the reconstructed images have high acceptability among medical imaging doctors and gastroenterologists.

    The imaging algorithms have been realized in a hardware description language (HDL) and their functionality has been verified on a field programmable gate array (FPGA) board. The design was later implemented in a 0.18 ÎŒm complementary metal oxide semiconductor (CMOS) technology and the chip was fabricated. Due to the low complexity of the core compressor, it consumes only 43 ÎŒW of power and 0.032 mm² of area. The compressor is designed to work with a commercial low-power image sensor that outputs image pixels in raster-scan fashion, eliminating the need for significant input buffer memory.

    To demonstrate the advantage, a prototype of the complete WCE system has been developed, including an FPGA-based electronic capsule, a microcontroller-based data logger unit and Windows-based image reconstruction software. The capsule contains the proposed low-complexity image compressor and can generate both lossy and lossless compressed bit-streams. The capsule prototype also supports both WLI and NBI imaging modes and communicates with the data logger in full-duplex fashion, which enables configuring the image size and imaging mode in real time during the examination. The data logger is portable and has high-data-rate wireless connectivity including Bluetooth, and a graphical display for real-time image viewing with state-of-the-art touch screen technology. The data are logged on micro SD cards and can be transferred to a PC or smartphone using a card reader, USB interface, or Bluetooth wireless link. The workstation software can decompress and show the reconstructed images; the images can be navigated, marked, zoomed and played as video. Finally, ex-vivo testing of the WCE system has been done in a pig's intestine to validate its performance.
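    The compressor's internals are not reproduced in the abstract, so the following is only a sketch of a generic lossless prediction step feeding a Golomb-Rice coder: predict each raster-scan pixel from its left neighbor and map the signed residual to a non-negative integer. The left-neighbor predictor, the zero seed, and the function name are assumptions for illustration.

        def predict_and_map(pixels: list[int]) -> list[int]:
            """Differential prediction over a raster-scan row: predict each
            pixel from its left neighbor, then zig-zag map the signed residual
            (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...) so a Golomb-Rice
            coder can encode it as a non-negative integer."""
            residuals = []
            prev = 0                                   # assumed initial predictor
            for p in pixels:
                e = p - prev
                residuals.append(2 * e if e >= 0 else -2 * e - 1)
                prev = p
            return residuals

        # e.g. predict_and_map([100, 101, 99]) -> [200, 2, 3]

    Because neighboring pixels in endoscopic images are strongly correlated, the mapped residuals cluster near zero, which is the geometric-like distribution Golomb-Rice codes handle well.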

    Motion estimation and CABAC VLSI co-processors for real-time high-quality H.264/AVC video coding

    Real-time, high-quality video coding is gaining wide interest in the research and industrial community for different applications. H.264/AVC, a recent standard for high-performance video coding, can be successfully exploited in several scenarios including digital video broadcasting, high-definition TV and DVD-based systems, which must sustain up to tens of Mbit/s. To that purpose this paper proposes optimized architectures for the most critical H.264/AVC tasks: motion estimation and context-adaptive binary arithmetic coding (CABAC). Post-synthesis results on sub-micron CMOS standard-cell technologies show that the proposed architectures can process 720 × 480 video sequences at 30 frames/s in real time and sustain more than 50 Mbit/s. The achieved circuit complexity and power consumption budgets are suitable for integration in complex VLSI multimedia systems based either on an AHB bus-centric on-chip communication system or on novel Network-on-Chip (NoC) infrastructures for MPSoC (Multi-Processor System-on-Chip).
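    The paper describes hardware architectures; purely as a software reference for the underlying operation, here is a minimal full-search block-matching sketch using the sum of absolute differences (SAD) criterion. The function name, the search-window parameter, and the NumPy usage are assumptions for illustration, not the proposed VLSI design.

        import numpy as np

        def full_search_sad(cur_block: np.ndarray, ref: np.ndarray,
                            x: int, y: int, search: int = 8):
            """Find the motion vector minimizing SAD between cur_block
            (located at column x, row y in the current frame) and candidate
            blocks inside a +/- search window of the reference frame ref."""
            h, w = cur_block.shape
            best_mv, best_sad = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if ry < 0 or rx < 0 or ry + h > ref.shape[0] or rx + w > ref.shape[1]:
                        continue          # candidate falls outside the reference frame
                    cand = ref[ry:ry + h, rx:rx + w].astype(int)
                    sad = np.abs(cur_block.astype(int) - cand).sum()
                    if sad < best_sad:
                        best_mv, best_sad = (dx, dy), sad
            return best_mv, best_sad

    Exhaustive search of this kind is what dedicated hardware parallelizes well; software encoders typically fall back to pruned searches such as diamond or hexagon patterns.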

    Gbit/second lossless data compression hardware

    This thesis investigates how to improve the performance of lossless data compression hardware as a tool to reduce the cost per bit stored in a computer system or transmitted over a communication network. Lossless data compression allows the exact reconstruction of the original data after decompression. Its deployment in some high-bandwidth applications has been hampered by performance limitations in the compression hardware, which must match the throughput of the original system to avoid becoming a bottleneck. Advancing lossless data compression hardware therefore offers a valid motivation, with the potential of doubling the performance of the system that incorporates it at minimum investment. This work starts by presenting an analysis of current compression methods with the objective of identifying the factors that limit performance and the factors that increase it. [Continues.]
    • 

    corecore