9,274 research outputs found

    One Pass Quality Control and Low Complexity RDO in A Quadtree Based Scalable Image Coder

    This paper presents a joint quality control (QC) and rate-distortion optimization (RDO) algorithm for a still-image codec called Locally Adaptive Resolution (LAR). LAR supports resolution scalability for both lossy and lossless coding and has low complexity. The algorithm is based on a study of the relationship between compression efficiency and the relevant coding parameters. An RDO model is first proposed to find suitable parameters. Building on this optimization, the relationship between the distortion of the reconstructed image and the quantization parameter can be described with a new linear model, which is then used to configure the parameters that control compression distortion. Experimental results show that the algorithm yields an efficient one-pass codec with automatic parameter selection and accurate QC. The approach could be extended to codecs with similar features, such as High Efficiency Video Coding (HEVC).
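    A minimal sketch of the one-pass quality-control idea described above: fit a linear model D(Q) = aQ + b between distortion and the quantization parameter from a few calibration points, then invert it to pick Q for a target distortion. The coefficients, sample points, and function names here are illustrative assumptions, not taken from the LAR codec itself.

```python
# Hypothetical sketch: linear distortion model D(Q) = a*Q + b,
# fitted by least squares and inverted for quality control.
# All numbers below are illustrative, not from the LAR paper.

def fit_linear_model(samples):
    """Least-squares fit of D = a*Q + b from (Q, D) calibration pairs."""
    n = len(samples)
    sum_q = sum(q for q, _ in samples)
    sum_d = sum(d for _, d in samples)
    sum_qq = sum(q * q for q, _ in samples)
    sum_qd = sum(q * d for q, d in samples)
    a = (n * sum_qd - sum_q * sum_d) / (n * sum_qq - sum_q ** 2)
    b = (sum_d - a * sum_q) / n
    return a, b

def quantizer_for_target(a, b, target_distortion):
    """Invert the linear model to pick Q for a desired distortion."""
    return (target_distortion - b) / a

# Illustrative calibration points (Q, measured distortion):
samples = [(10, 2.1), (20, 4.0), (30, 6.2), (40, 8.1)]
a, b = fit_linear_model(samples)
q = quantizer_for_target(a, b, 5.0)  # Q that should hit distortion 5.0
```

    Because the model is linear, a single pass with a few probe points suffices to set Q, which is what makes one-pass quality control tractable.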

    JPEG2000 image coding system theory and applications

    JPEG2000, the new standard for still-image coding, provides a new framework and an integrated toolbox to better address increasing needs for compression. It offers a wide range of functionalities such as lossless and lossy coding, embedded lossy-to-lossless coding, progression by resolution and quality, high compression efficiency, error resilience, and region-of-interest (ROI) coding. Comparative results have shown that JPEG2000 is indeed superior to established image compression standards. Overall, the JPEG2000 standard offers the richest set of features in a very efficient way and within a unified algorithm. The price of this is its additional complexity, but this should not be perceived as a disadvantage, as the technology evolves rapidly.
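    The embedded lossy-to-lossless property mentioned above can be illustrated with a toy bitplane coder: coefficients are transmitted bitplane by bitplane, most significant first, so truncating the stream at any point still decodes to a coarser approximation. This is only a sketch of the principle, not the actual JPEG2000 algorithm (which uses EBCOT block coding).

```python
# Toy illustration of embedded (quality-scalable) coding:
# send bitplanes MSB-first; any prefix of the stream decodes.
# Not the JPEG2000 codec itself, just the underlying principle.

def encode_bitplanes(coeffs, num_planes):
    """Emit one list of bits per plane, MSB plane first."""
    planes = []
    for p in range(num_planes - 1, -1, -1):
        planes.append([(c >> p) & 1 for c in coeffs])
    return planes

def decode_bitplanes(planes, num_coeffs, num_planes):
    """Reconstruct from however many planes were received."""
    coeffs = [0] * num_coeffs
    for i, plane in enumerate(planes):
        p = num_planes - 1 - i
        for j, bit in enumerate(plane):
            coeffs[j] |= bit << p
    return coeffs

coeffs = [13, 7, 2, 11]                      # toy "wavelet coefficients"
planes = encode_bitplanes(coeffs, 4)
full = decode_bitplanes(planes, 4, 4)        # all planes: lossless
coarse = decode_bitplanes(planes[:2], 4, 4)  # truncated: top 2 planes only
```

    The full stream reconstructs the coefficients exactly, while the truncated prefix gives a lower-quality approximation, which is the essence of "embedded lossy to lossless" and progression by quality.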

    High-Performance Compression of Multibeam Echosounders Water Column Data

    Over the last few decades, multibeam echosounders (MBES) have become the dominant technique for efficiently and accurately mapping the seafloor. They now make it possible to collect water-column acoustic images along with the bathymetry, which is opening a wealth of new possibilities in ocean exploration. However, water-column imagery generates vast amounts of data that pose obvious logistic, economic, and technical challenges. Surprisingly, very few studies have addressed this problem by providing efficient lossless or lossy data-compression solutions. Currently, the available options are lossless only, providing low compression ratios at low speeds. In this paper, we adapt a data-compression algorithm, the Fully Adaptive Prediction Error Coder (FAPEC), which was created to offer outstanding performance under the strong requirements of space data transmission. We have added to this entropy coder a specific pre-processing stage tailored to the Kongsberg Maritime water-column file formats. Here, we test it on data acquired with Kongsberg MBES models EM302, EM710, and EM2040. With this bespoke pre-processing, FAPEC provides good lossless compression ratios at high speeds, whereas lossy ratios reduce water-column files to sizes even smaller than the raw bathymetry files, still with good image quality. We show the advantages over other lossless compression solutions, both in terms of compression ratios and speed. We illustrate the quality of water-column images after lossy FAPEC compression, as well as its resilience to datagram errors and its potential for automatic detection of water-column targets. We also show its successful integration on ARM microprocessors (like those used by smartphones and also by autonomous underwater vehicles), which provides a real-time solution for MBES water-column data compression.
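    The role of a prediction-error pre-processing stage like the one described can be sketched in a few lines: predict each water-column sample from its neighbour and hand the small residuals to the entropy coder. The actual FAPEC stage for Kongsberg datagrams is format-specific and not reproduced here; this toy previous-sample predictor only shows why prediction helps compression.

```python
# Minimal sketch of prediction-error pre-processing before entropy
# coding. The real FAPEC pre-processor is tailored to Kongsberg
# water-column datagrams; this previous-sample predictor is an
# illustrative assumption.

def delta_predict(samples):
    """Replace each sample with its prediction error (previous-sample predictor)."""
    residuals = [samples[0]]
    for i in range(1, len(samples)):
        residuals.append(samples[i] - samples[i - 1])
    return residuals

def delta_reconstruct(residuals):
    """Invert the predictor: exact (lossless) reconstruction."""
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)
    return samples

beam = [100, 102, 101, 105, 104]  # toy amplitude samples along one beam
res = delta_predict(beam)         # residuals cluster near zero
```

    Residuals concentrated around zero have much lower entropy than the raw samples, so the downstream entropy coder achieves higher compression ratios; since the predictor is exactly invertible, the pipeline remains lossless.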

    Statistical lossless compression of space imagery and general data in a reconfigurable architecture


    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to control quality accurately on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders because of the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate-control scheme suitable for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate-control algorithm supports lossy compression, near-lossless compression, and anything in between, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller achieves excellent accuracy in the output rate and good rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
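    The per-region quantizer selection described above can be sketched with a simple greedy allocator: start every region at its coarsest quantizer and repeatedly spend the remaining rate budget where each extra bit buys the largest distortion reduction. The paper's framework models rate and distortion of the CCSDS-123 predictor analytically; here the per-quantizer (rate, distortion) tables are simply assumed as inputs, so this is an illustration of the allocation idea only.

```python
# Hypothetical sketch of quantizer selection per region under a rate
# budget. Each region offers options sorted coarse -> fine as
# (rate, distortion) pairs; these tables are assumed, not derived
# from the CCSDS-123 predictor as in the actual paper.

def allocate(regions, budget):
    """Greedy refinement: maximize distortion reduction per extra bit."""
    choice = [0] * len(regions)             # start with coarsest quantizers
    rate = sum(opts[0][0] for opts in regions)
    while True:
        best = None
        for i, opts in enumerate(regions):
            j = choice[i]
            if j + 1 < len(opts):
                dr = opts[j + 1][0] - opts[j][0]   # extra rate to refine
                dd = opts[j][1] - opts[j + 1][1]   # distortion saved
                if rate + dr <= budget and dd > 0:
                    gain = dd / dr
                    if best is None or gain > best[0]:
                        best = (gain, i, dr)
        if best is None:
            return choice, rate
        _, i, dr = best
        choice[i] += 1
        rate += dr

# Two toy regions, options coarse -> fine, and a budget of 6 bits:
regions = [[(1, 10), (2, 6), (4, 2)], [(1, 8), (3, 3)]]
choice, rate = allocate(regions, budget=6)
```

    A greedy allocator like this never exceeds the target rate by construction; the framework in the paper additionally handles near-lossless constraints by bounding the quantizer step allowed in each region.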