
    The CCSDS 123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression Standard: A comprehensive review

    The Consultative Committee for Space Data Systems (CCSDS) published the CCSDS 123.0-B-2, "Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression" standard. This standard extends the previous issue, CCSDS 123.0-B-1, which supported only lossless compression, while maintaining backward compatibility. The main novelty of the new issue is support for near-lossless compression, i.e., lossy compression with user-defined absolute and/or relative error limits in the reconstructed images. This new feature is achieved via closed-loop quantization of prediction errors. Two further additions arise from the new near-lossless support: first, the calculation of predicted sample values using sample representatives that may not be equal to the reconstructed sample values, and, second, a new hybrid entropy coder designed to provide enhanced compression performance for low-entropy data, prevalent when non-lossless compression is used. These new features enable significantly smaller compressed data volumes than those achievable with CCSDS 123.0-B-1 while controlling the quality of the decompressed images. As a result, larger amounts of valuable information can be retrieved under a given set of bandwidth and energy consumption constraints.
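    The closed-loop mechanism can be illustrated with a minimal sketch, assuming a toy previous-sample predictor in place of the standard's adaptive 3-D predictor; the function name and interface are hypothetical:

```python
def near_lossless_encode(samples, m):
    """Sketch of closed-loop quantization of prediction errors with an
    absolute error limit m. Not the CCSDS 123.0-B-2 predictor: the real
    standard predicts adaptively from a 3-D causal neighborhood."""
    step = 2 * m + 1                  # quantizer step for error limit m
    rec_prev = 0                      # reconstructed previous sample
    indices = []
    for s in samples:
        pred = rec_prev               # toy predictor: previous reconstruction
        e = int(s) - pred             # prediction error
        sign = 1 if e >= 0 else -1
        q = sign * ((abs(e) + m) // step)  # uniform midtread quantizer index
        rec_prev = pred + q * step    # decoder-matched reconstruction; |s - rec_prev| <= m
        indices.append(q)             # indices are what the entropy coder sees
    return indices
```

    Because the predictor consumes the reconstructed rather than the original neighbor, encoder and decoder stay in lockstep and the user-defined error limit holds for every sample.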

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy compression, near-lossless compression, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
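    The paper's rate controller is not reproduced here, but the flavor of selecting one quantizer per spatial/spectral region under a rate budget can be sketched with a classic Lagrangian bisection; the interface and per-block (rate, distortion) tables below are hypothetical stand-ins for the paper's analytical rate and distortion models:

```python
def select_quantizers(block_rd, target_rate):
    """Pick one quantizer per block, minimizing total distortion subject
    to a total rate budget. block_rd[b] is a list of (rate, distortion)
    pairs, one per candidate quantizer for block b (assumed inputs)."""
    def allocate(lmbda):
        # per block, minimize the Lagrangian cost D + lambda * R
        choices = [min(range(len(rd)), key=lambda i: rd[i][1] + lmbda * rd[i][0])
                   for rd in block_rd]
        rate = sum(block_rd[b][c][0] for b, c in enumerate(choices))
        return choices, rate

    lo, hi = 0.0, 1e6                 # bracket for the Lagrange multiplier
    for _ in range(50):               # bisection: rate is nonincreasing in lambda
        mid = 0.5 * (lo + hi)
        _, rate = allocate(mid)
        if rate > target_rate:
            lo = mid                  # over budget: penalize rate more
        else:
            hi = mid
    return allocate(hi)[0]            # choices meeting the budget (if feasible)
```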

    A novel semi-fragile forensic watermarking scheme for remote sensing images

    A semi-fragile watermarking scheme for multiple-band images is presented. We propose to embed a mark into remote sensing images by applying a tree-structured vector quantization approach to the pixel signatures, instead of processing each band separately. The signature of the multispectral or hyperspectral image is used to embed the mark in it, in order to detect any significant modification of the original image. The image is segmented into three-dimensional blocks, and a tree-structured vector quantizer is built for each block. These trees are manipulated using an iterative algorithm until the resulting block satisfies a required criterion, which establishes the embedded mark. The method is shown to be able to preserve the mark under lossy compression (above a given threshold) while, at the same time, detecting possibly forged blocks and their position in the whole image.
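    The tree-manipulation step of the scheme is too involved to reproduce here; the block-wise embed-and-verify idea can instead be sketched with quantization index modulation on 3-D block means, an explicitly simplified stand-in for the paper's tree-structured VQ approach:

```python
import numpy as np

def embed_mark_qim(cube, mark_bits, block=8, delta=16.0):
    """Embed one mark bit per spatial block of a (bands, rows, cols)
    cube by snapping the block mean onto a bit-dependent lattice (QIM).
    Simplified stand-in: the paper instead perturbs a tree-structured
    vector quantizer built per 3-D block. mark_bits must supply one
    bit (0 or 1) per block."""
    marked = cube.astype(np.float64).copy()
    bits = iter(mark_bits)
    for r in range(0, cube.shape[1] - block + 1, block):
        for c in range(0, cube.shape[2] - block + 1, block):
            bit = next(bits)
            blk = marked[:, r:r + block, c:c + block]   # view into marked
            mean = blk.mean()
            # lattice for bit 0 is multiples of delta; bit 1 is offset by delta/2
            target = np.round((mean - bit * delta / 2) / delta) * delta + bit * delta / 2
            blk += target - mean      # shift the whole block equally
    return marked
```

    Verification recomputes each block mean and reads off the nearer lattice: a mild, compression-like perturbation leaves the bit intact, while a strong local forgery flips it and localizes the tampered block.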

    A fully embedded two-stage coder for hyperspectral near-lossless compression

    This letter proposes a near-lossless coder for hyperspectral images. The coding technique is fully embedded and minimizes the distortion in the l2 norm initially and in the l∞ norm subsequently. Based on a two-stage near-lossless compression scheme, it includes a lossy and a near-lossless layer. The novelties are the observation of the convergence of the entropy of the residuals in the original domain and in the spectral-spatial transformed domain, and an embedded near-lossless layer. These contributions enable progressive transmission while optimizing both SNR and PAE performance. The embeddedness is accomplished by bitplane encoding plus arithmetic encoding. Experimental results suggest that the proposed method yields highly competitive coding performance for hyperspectral images, outperforming multi-component JPEG 2000 in the l∞ norm and matching its performance in the l2 norm, and also outperforming M-CALIC in the near-lossless case (for PAE ≥ 5).
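    The second stage can be sketched as a uniform quantization of the residual left by the lossy layer, assuming hypothetical integer-array inputs; the embedded bitplane-plus-arithmetic coding of the resulting indices is not reproduced:

```python
import numpy as np

def near_lossless_stage(x, lossy_rec, pae):
    """Two-stage scheme, stage 2: quantize the residual between the
    original x and the lossy-layer reconstruction with step 2*pae+1,
    bounding the final peak absolute error by pae. Both inputs are
    integer arrays of the same shape (assumed interface)."""
    step = 2 * pae + 1
    resid = x.astype(np.int64) - lossy_rec.astype(np.int64)
    q = np.round(resid / step).astype(np.int64)         # indices to entropy-code
    final_rec = lossy_rec.astype(np.int64) + q * step   # |x - final_rec| <= pae
    return q, final_rec
```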

    Hyperspectral image compression: adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding

    Hyperspectral images present some specific characteristics that should be exploited by an efficient compression system. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance while being preferable in terms of complexity. This decomposition significantly improves on the classical isotropic decomposition. One of its most useful properties is that it allows the use of zerotree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
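    The fixed near-optimal decomposition discussed above, a full multilevel DWT along the spectral axis followed by a 2-D multilevel DWT in the spatial plane, can be sketched with PyWavelets; the wavelet choice and level counts here are illustrative, not the paper's tuned settings:

```python
import pywt  # PyWavelets

def anisotropic_dwt(cube, wavelet="bior4.4", spec_levels=3, spat_levels=3):
    """Anisotropic 3-D decomposition for a (bands, rows, cols) cube:
    a 1-D multilevel DWT along the spectral axis, then a 2-D multilevel
    DWT on the spatial axes of every spectral subband."""
    spectral_subbands = pywt.wavedec(cube, wavelet, level=spec_levels, axis=0)
    return [pywt.wavedec2(sub, wavelet, level=spat_levels, axes=(1, 2))
            for sub in spectral_subbands]
```

    The parent-child tree structures compared in the paper are then defined over these coefficients, which is what makes EZW- and SPIHT-style zerotree coding applicable.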

    Image dequantization for hyperspectral lossy compression with convolutional neural networks

    Significant work has been devoted to methods based on predictive coding for onboard compression of hyperspectral images. This is supported by the new CCSDS 123.0-B-2 recommendation for lossless and near-lossless compression. While lossless compression can achieve high throughput, it can only achieve limited compression ratios. The introduction of a quantizer and local decoder in the prediction loop makes it possible to implement lossy compression with good rate-distortion performance. However, the need for a locally decoded version of a causal neighborhood of the pixel currently being coded is a significant limiting factor in the throughput such an encoder can achieve. In this work, we study the rate-distortion performance of a significantly simpler and faster onboard compressor based on prequantizing the pixels of the hyperspectral image and applying a lossless compressor (such as lossless CCSDS 123.0-B-2) to the quantized pixels. While this is suboptimal in terms of rate-distortion performance compared to an in-loop quantizer, we compensate for the lower quality with an on-ground post-processor based on modeling the distortion residual with a convolutional neural network. The task of the neural network is to learn the statistics of the quantization error and apply a dequantization model to restore the image.
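    The prequantization step itself is straightforward; a minimal sketch, with the CNN restoration stage left out:

```python
import numpy as np

def prequantize(cube, m):
    """Uniform scalar quantization with step 2*m+1 before lossless
    coding, bounding the absolute error by m. The returned indices are
    what a lossless coder (e.g., lossless CCSDS 123.0-B-2) compresses."""
    step = 2 * m + 1
    return np.round(cube.astype(np.int64) / step).astype(np.int64)

def dequantize(q, m):
    """Baseline on-ground reconstruction (|error| <= m); the CNN
    post-processor described above then refines this estimate."""
    return q * (2 * m + 1)
```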

    JP3D compression of solar data-cubes: photospheric imaging and spectropolarimetry

    Hyperspectral imaging is a ubiquitous technique in solar physics observations, and recent advances in solar instrumentation have enabled us to acquire and record data at an unprecedented rate. The huge amount of data which will be archived in the upcoming solar observatories presses us to compress the data in order to reduce storage space and transfer times. The correlation present over all dimensions, spatial, temporal, and spectral, of solar data-sets suggests the use of a 3-D wavelet decomposition to achieve higher compression rates. In this work, we evaluate the performance of the recent JPEG 2000 Part 10 standard, known as JP3D, for the lossless compression of several types of solar data-cubes. We explore the differences in: a) the compressibility of broad-band or narrow-band time-sequences, and of Stokes I or V profiles in spectropolarimetric data-sets; b) compressing data in [x, y, λ] packages at different times versus data in [x, y, t] packages at different wavelengths; c) compressing a single large data-cube versus several smaller data-cubes; d) compressing data which is under-sampled or super-sampled with respect to the diffraction cut-off.
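    The axis repackagings in point b) amount to transposing a 4-D data set before cutting it into cubes; a sketch assuming a (t, wavelength, y, x) array layout:

```python
import numpy as np

def package_cubes(data):
    """Build the two packagings compared above from a 4-D solar data
    set shaped (t, wavelength, y, x) (assumed layout): per-time
    [x, y, lambda] cubes versus per-wavelength [x, y, t] cubes."""
    xyl = [np.transpose(data[t], (2, 1, 0)) for t in range(data.shape[0])]
    xyt = [np.transpose(data[:, w], (2, 1, 0)) for w in range(data.shape[1])]
    return xyl, xyt
```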

    Quality criteria benchmark for hyperspectral imagery

    Hyperspectral data have attracted growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression, specifically, will have a role to play, provided its impact on remote sensing applications remains insignificant. To assess the data quality, suitable distortion measures relevant to end-user applications are required. Quality criteria are also of major interest for the conception and development of new sensors, to define their requirements and specifications. This paper proposes a method to evaluate quality criteria in the context of hyperspectral images. The purpose is to provide quality criteria relevant to the impact of degradations on several classification applications. Different quality criteria are considered. Some are traditionally used in image and video coding and are adapted here to hyperspectral images; others are specific to hyperspectral data. We also propose the adaptation of two advanced criteria in the presence of different simulated degradations on AVIRIS hyperspectral images. Finally, five criteria are selected to give an accurate representation of the nature and the level of the degradation affecting hyperspectral data.
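    A few of the kinds of criteria involved can be sketched directly; the set below (MSE, peak absolute error, mean spectral angle) is illustrative and not the paper's final selection of five:

```python
import numpy as np

def quality_criteria(ref, deg):
    """Distortion measures for (bands, rows, cols) hyperspectral cubes:
    mean squared error, peak absolute error, and mean spectral angle
    between per-pixel spectra (in radians)."""
    r = ref.astype(np.float64)
    d = deg.astype(np.float64)
    err = r - d
    mse = np.mean(err ** 2)
    pae = np.max(np.abs(err))
    rf = r.reshape(r.shape[0], -1)    # one spectrum per column
    df = d.reshape(d.shape[0], -1)
    cos = np.sum(rf * df, axis=0) / (np.linalg.norm(rf, axis=0)
                                     * np.linalg.norm(df, axis=0) + 1e-12)
    sam = np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))
    return {"mse": mse, "pae": pae, "sam": sam}
```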

    Reducing data dependencies in the feedback loop of the CCSDS 123.0-B-2 predictor

    Other funding: European Space Agency (ESA) (Grant Number: 4000136723/22/NL/CRS).
    On-board multi- and hyperspectral instruments acquire large volumes of data that need to be processed with limited computational and storage resources. In this context, the CCSDS 123.0-B-2 standard emerges as an interesting option to compress multi- and hyperspectral images on-board satellites, supporting both lossless and near-lossless compression with low complexity and reduced power consumption. Nonetheless, the inclusion of a feedback loop in the CCSDS 123.0-B-2 predictor to support near-lossless compression introduces significant data dependencies that hinder real-time processing, particularly due to the presence of a quantization stage within this loop. This work provides an analysis of the aforementioned data dependencies and proposes two strategies aimed at maximizing throughput in hardware implementations and thus enabling real-time processing. In particular, through an elaborate mathematical derivation, the quantization stage is removed completely from the feedback loop. This reduces the critical path, which allows for shorter initiation intervals in a pipelined hardware implementation and higher throughput. This is achieved without any impact on compression performance, which is identical to that obtained by the original data flow of the predictor.
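    The dependency being attacked can be seen in a toy version of the loop; the paper's actual reformulation, which moves the quantization out of this loop without changing the compressed output, follows a derivation not reproduced here:

```python
def inloop_quantization(samples, m):
    """Toy near-lossless feedback loop with a previous-sample predictor:
    each iteration's prediction needs the previous iteration's quantizer
    output, so quantization sits on the critical path. This serialized
    dependency is what the proposed strategies remove."""
    step = 2 * m + 1
    rec = 0
    indices = []
    for s in samples:
        pred = rec                          # depends on the previous quantizer output
        q = round((int(s) - pred) / step)   # quantization inside the loop
        rec = pred + q * step               # |s - rec| <= m
        indices.append(q)
    return indices
```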