
    Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding

    Hyperspectral images have specific characteristics that an efficient compression system should exploit. Wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images, studying and optimizing each step of the compression algorithm. First, an algorithm is defined to find the optimal 3-D wavelet decomposition in a rate-distortion sense. It is then shown that a specific fixed decomposition achieves almost the same performance while being preferable in terms of complexity, and that it significantly outperforms the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it permits zerotree algorithms. Various tree structures, which create relationships between coefficients, are compared. Two efficient zerotree coding methods (EZW and SPIHT) are then adapted to this near-optimal decomposition using the best tree structure found, and their performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas with different statistical properties.
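    The anisotropic decomposition referred to above can be approximated with off-the-shelf tools. The sketch below is only an illustration, not the paper's exact scheme: it applies more decomposition levels along the spectral axis of a hyperspectral cube than along the two spatial axes using PyWavelets, and the level counts and the biorthogonal wavelet are assumptions.

```python
# Minimal sketch of an anisotropic 3-D wavelet decomposition for a
# hyperspectral cube of shape (bands, rows, cols): a deeper multi-level
# 1-D transform along the spectral axis, followed by a shallower 2-D
# transform in the spatial plane of every spectral subband.
# Level counts and the wavelet are illustrative assumptions.
import numpy as np
import pywt

def anisotropic_dwt(cube, spectral_levels=4, spatial_levels=2, wavelet="bior2.2"):
    # 1) Multi-level 1-D DWT along the spectral axis (axis 0).
    spec_subbands = pywt.wavedec(cube, wavelet, level=spectral_levels, axis=0)
    # 2) Multi-level 2-D DWT of each spectral subband over the spatial axes.
    return [pywt.wavedecn(sb, wavelet, level=spatial_levels, axes=(1, 2))
            for sb in spec_subbands]

cube = np.random.rand(224, 64, 64).astype(np.float32)   # synthetic 224-band cube
coeffs = anisotropic_dwt(cube)
print(len(coeffs), "spectral subbands, each decomposed spatially")
```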

    Compression of spectral meteorological imagery

    Data compression is essential for current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically being limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems with compression ratios ranging from as little as three-to-one up to twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2-D DCT engine. Spectral relations in the data are exploited through block mean extraction followed by an orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.
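    As a rough illustration of the two ingredients mentioned above, the sketch below extracts per-band means of 32x32 spatial blocks, applies an orthonormal transform (a DCT) across the spectral axis of those means, and sends the mean-removed blocks through an orthonormal 32x32 2-D DCT. The block size and the use of the DCT follow the text; the exact pipeline is an assumption, not the Astro-Space Division design.

```python
# Rough sketch, assuming a cube of shape (bands, rows, cols):
# per-band 32x32 block-mean extraction with a spectral DCT of the means,
# plus a 32x32 2-D DCT of the mean-removed spatial blocks.
import numpy as np
from scipy.fft import dct, dctn

BLOCK = 32

def block_means(cube):
    """Per-band mean of every 32x32 spatial block -> shape (bands, br, bc)."""
    b, r, c = cube.shape
    blocks = cube.reshape(b, r // BLOCK, BLOCK, c // BLOCK, BLOCK)
    return blocks.mean(axis=(2, 4))

def compress_block_stack(cube):
    b, r, c = cube.shape
    means = block_means(cube)
    # Orthonormal 1-D DCT across the spectral axis of the block means.
    spectral_coeffs = dct(means, axis=0, norm="ortho")
    # Mean-removed blocks, each sent through an orthonormal 32x32 2-D DCT.
    resid = cube - np.repeat(np.repeat(means, BLOCK, axis=1), BLOCK, axis=2)
    blocks = resid.reshape(b, r // BLOCK, BLOCK, c // BLOCK, BLOCK)
    spatial_coeffs = dctn(blocks, axes=(2, 4), norm="ortho")
    return spectral_coeffs, spatial_coeffs

cube = np.random.rand(8, 128, 128).astype(np.float32)
spec, spat = compress_block_stack(cube)
print(spec.shape, spat.shape)
```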

    Lossless Astronomical Image Compression and the Effects of Noise

    We compare a variety of lossless image compression methods on a large sample of astronomical images and show how the compression ratios and speeds of the algorithms are affected by the amount of noise in the images. In the ideal case where the image pixel values have a random Gaussian distribution, the equivalent number of uncompressible noise bits per pixel is given by N_bits = log2(sigma * sqrt(12)), and the lossless compression ratio is given by R = BITPIX / (N_bits + K), where BITPIX is the bit length of the pixel values and K is a measure of the efficiency of the compression algorithm. We perform image compression tests on a large sample of integer astronomical CCD images using the GZIP compression program and using a newer FITS tiled-image compression method that currently supports four compression algorithms: Rice, Hcompress, PLIO, and GZIP. Overall, the Rice compression algorithm strikes the best balance of compression and computational efficiency; it is 2--3 times faster and produces about 1.4 times greater compression than GZIP. The Rice algorithm produces 75%--90% (depending on the amount of noise in the image) as much compression as an ideal algorithm with K = 0. The image compression and uncompression utility programs used in this study (called fpack and funpack) are publicly available from the HEASARC web site. A simple command-line interface may be used to compress or uncompress any FITS image file. Comment: 20 pages, 9 figures, to be published in PASP
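    The noise model quoted above is easy to evaluate numerically. The sketch below simply computes the predicted ratio R = BITPIX / (N_bits + K) for a few noise levels; the value of K used here is an illustrative per-algorithm overhead, not a figure from the paper.

```python
# Predicted lossless compression ratio for a Gaussian-noise-dominated image:
# N_bits = log2(sigma * sqrt(12)) incompressible noise bits per pixel,
# R = BITPIX / (N_bits + K), with K an assumed coding overhead.
import numpy as np

def predicted_ratio(sigma, bitpix=16, k=1.2):
    n_bits = np.log2(sigma * np.sqrt(12.0))
    return bitpix / (n_bits + k)

for sigma in (2.0, 10.0, 50.0):
    print(f"sigma={sigma:5.1f}  predicted R = {predicted_ratio(sigma):.2f}")
```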

    Information-theoretic assessment of on-board near-lossless compression of hyperspectral data

    A rate-distortion model is proposed to measure the impact of near-lossless compression of raw data, that is, compression with a user-defined maximum absolute error, on the information available once the compressed data have been received and decompressed. Such a model requires the original uncompressed raw data and their measured noise variances. Advanced near-lossless methods are exploited only to measure the entropy of the datasets but are not required for on-board compression. In substance, the acquired raw data are regarded as a noisy realization of a noise-free spectral information source. The useful spectral information at the decoder is the mutual information between the unknown ideal source and the decoded source, which is affected by both instrument noise and compression-induced distortion. Experiments on simulated noisy images, in which the noise-free source and the noise realization are exactly known, show the trend of spectral information versus compression distortion, which in turn is related to the coded bit rate, or equivalently to the compression ratio (CR), through the rate-distortion characteristic of the encoder used on satellite. Preliminary experiments on airborne visible infrared imaging spectrometer (AVIRIS) 2006 Yellowstone sequences match the trends of the simulations. The main conclusion that can be drawn is that the noisier the dataset, the lower the CR that can be tolerated in order to preserve a prefixed amount of spectral information. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. (DOI: 10.1117/1.JRS.
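    To make the near-lossless setting concrete, the sketch below quantizes noisy integer samples with a user-defined maximum absolute error delta (uniform step 2*delta + 1) and uses a first-order entropy estimate as a stand-in for the coded bit rate. The synthetic source, noise level, and entropy estimate are illustrative assumptions, not the paper's experimental setup.

```python
# Near-lossless quantization with a hard bound |error| <= delta, plus a
# first-order entropy estimate of the quantizer indices (bits/sample).
import numpy as np

def near_lossless_quantize(x, delta):
    step = 2 * delta + 1
    q = np.round(x / step).astype(np.int64)   # quantizer indices
    rec = q * step                            # reconstruction; for integer x, |x - rec| <= delta
    return q, rec

def first_order_entropy(symbols):
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
clean = rng.integers(0, 1024, size=100_000)                      # hypothetical noise-free source
noisy = np.round(clean + rng.normal(0, 5, size=clean.shape)).astype(np.int64)  # sigma = 5 noise
for delta in (0, 1, 3, 7):
    q, rec = near_lossless_quantize(noisy, delta)
    print(f"delta={delta}: rate ~ {first_order_entropy(q):.2f} bits/sample, "
          f"max|error| = {int(np.abs(noisy - rec).max())}")
```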

    Scalable wavelet-based coding of irregular meshes with interactive region-of-interest support

    This paper proposes a novel functionality in wavelet-based irregular mesh coding: interactive region-of-interest (ROI) support. The proposed approach enables the user to define arbitrary ROIs at the decoder side and to prioritize and decode these regions at arbitrarily high granularity levels. In this context, a novel adaptive wavelet transform for irregular meshes is proposed, which enables: 1) varying the resolution across the surface at arbitrarily fine granularity levels and 2) dynamic tiling, which adapts the tile sizes to the local sampling densities at each resolution level. The proposed tiling approach enables a rate-distortion-optimal distribution of rate across spatial regions. When limiting the highest-resolution ROI to the visible regions, the fine granularity of the proposed adaptive wavelet transform reduces the required amount of graphics memory by up to 50%. Furthermore, the required graphics memory for an arbitrarily small ROI becomes negligible compared to rendering without ROI support, independent of any tiling decisions. Random access is provided by a novel dynamic tiling approach, which proves to be particularly beneficial for large models of 10^6 to 10^7 vertices. The experiments show that dynamic tiling introduces a limited lossless rate penalty compared to an equivalent codec without ROI support. Additionally, rate savings of up to 85% are observed while decoding ROIs of tens of thousands of vertices.
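    The ROI-driven prioritization idea can be illustrated with a toy scheduler: coefficients are grouped into tiles, and tiles that intersect a decoder-defined ROI are decoded first and at full resolution while the others are truncated to a coarse level. The sketch below is only a conceptual illustration; the tile structure, spherical ROI, and truncation rule are assumptions and not the paper's codec.

```python
# Toy ROI-first scheduling of coefficient tiles for progressive decoding.
import numpy as np

def schedule_tiles(tile_centers, tile_levels, roi_center, roi_radius, coarse_level=1):
    """Return (tile_index, levels_to_decode) pairs, ROI-intersecting tiles first."""
    d = np.linalg.norm(tile_centers - roi_center, axis=1)
    in_roi = d <= roi_radius
    order = np.argsort(~in_roi)               # ROI tiles before the others
    plan = []
    for i in order:
        levels = tile_levels[i] if in_roi[i] else min(coarse_level, tile_levels[i])
        plan.append((int(i), int(levels)))
    return plan

rng = np.random.default_rng(0)
tile_centers = rng.random((16, 3))            # one 3-D centroid per tile
tile_levels = np.full(16, 5)                  # resolution levels stored per tile
plan = schedule_tiles(tile_centers, tile_levels,
                      roi_center=np.array([0.5, 0.5, 0.5]), roi_radius=0.25)
print(plan[:4])
```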

    Rate-Distortion via Markov Chain Monte Carlo

    We propose an approach to lossy source coding, utilizing ideas from Gibbs sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a 'heat bath' algorithm: starting from an initial candidate reconstruction (say the original source sequence), at every iteration an index i is chosen and the i-th sequence component is replaced by drawing from the conditional probability distribution for that component given all the rest. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and only linearly dependent on a certain context parameter (which grows sub-logarithmically with the sequence length). We show that the proposed algorithms achieve optimum rate-distortion performance in the limit of a large number of iterations and a large sequence length, when employed on any stationary ergodic source. Experimentation shows promising initial results. Employing our lossy compressors on noisy data, with an appropriately chosen distortion measure and level, followed by a simple de-randomization operation, results in a family of denoisers that compares favorably (both theoretically and in practice) with other MCMC-based schemes and with the Discrete Universal Denoiser (DUDE). Comment: 35 pages, 16 figures, submitted to IEEE Transactions on Information Theory
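    The heat-bath step described above can be sketched for a binary source with Hamming distortion: the energy combines an empirical conditional-entropy term (a proxy for the compressibility of the candidate reconstruction) with a slope-weighted distortion term, and one coordinate at a time is resampled from its conditional Boltzmann distribution under an annealing schedule. The sketch below recomputes the entropy term from scratch at each step, unlike the paper's constant-per-iteration update, and the alphabet, context order, slope, and schedule are illustrative assumptions.

```python
# Gibbs ("heat bath") sampler for lossy coding of a binary sequence.
import numpy as np

rng = np.random.default_rng(1)

def empirical_cond_entropy(x, k=2):
    """Empirical entropy of x[i] given the previous k symbols, in bits/symbol."""
    n = len(x)
    counts = {}
    for i in range(k, n):
        counts.setdefault(tuple(x[i - k:i]), [0, 0])[x[i]] += 1
    h = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                h -= c * np.log2(c / tot)
    return h / max(n - k, 1)

def mcmc_lossy_encode(x, slope=4.0, k=2, iters=5_000):
    n = len(x)
    y = x.copy()                               # start from the source itself
    def energy(y):
        return n * empirical_cond_entropy(y, k) + slope * np.sum(x != y)
    e = energy(y)
    for t in range(1, iters + 1):
        beta = 0.5 * np.log(1 + t)             # slow annealing schedule
        i = rng.integers(n)
        y_alt = y.copy()
        y_alt[i] ^= 1                          # the other value of coordinate i
        e_alt = energy(y_alt)
        # Heat-bath (Gibbs) choice between the two values of coordinate i.
        p_flip = 1.0 / (1.0 + np.exp(beta * (e_alt - e)))
        if rng.random() < p_flip:
            y, e = y_alt, e_alt
    return y

x = (rng.random(400) < 0.5).astype(np.int64)   # incompressible Bernoulli(1/2) source
y = mcmc_lossy_encode(x)
print("distortion:", np.mean(x != y), " entropy of y:", round(empirical_cond_entropy(y), 3))
```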