
    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for compression onboard spacecraft thanks to its low computational complexity, modest memory requirements and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm allows lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless and lossy compression in a single package. We show that the rate controller achieves excellent accuracy in the output rate and good rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
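    A minimal sketch of the kind of quantizer-selection step described above: given per-block estimates of rate and distortion for a set of candidate quantizers, a Lagrangian sweep picks one quantizer per spatial/spectral block so the overall rate meets the target. The rate/distortion models, function names and parameters below are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np

def select_quantizers(rate_tables, dist_tables, target_rate,
                      lambdas=np.logspace(-3, 3, 61)):
    """Choose one quantizer index per block so the average rate meets the target.

    rate_tables, dist_tables: arrays of shape (n_blocks, n_quantizers) holding
    the estimated rate (bits/pixel) and distortion (e.g. MSE) of each candidate
    quantizer in each spatial/spectral block.  A Lagrangian sweep minimizes
    D + lambda * R per block and keeps the feasible choice with the lowest
    distortion (an illustrative stand-in for the paper's rate controller).
    """
    best = None
    for lam in lambdas:                              # sweep the rate/distortion trade-off
        choice = (dist_tables + lam * rate_tables).argmin(axis=1)
        rows = np.arange(len(choice))
        rate = rate_tables[rows, choice].mean()
        dist = dist_tables[rows, choice].mean()
        if rate <= target_rate and (best is None or dist < best[2]):
            best = (choice, rate, dist)
    return best                                      # None if no lambda meets the target

# toy example: 8 blocks, 16 candidate uniform quantizer steps
rng = np.random.default_rng(0)
steps = np.arange(1, 17, dtype=float)
rates = 8.0 / steps + 0.1 * rng.random((8, 16))      # rate decreases with coarser steps
dists = steps ** 2 / 12.0 * (1.0 + 0.1 * rng.random((8, 16)))  # MSE of a uniform quantizer
print(select_quantizers(rates, dists, target_rate=2.0))
```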

    A novel semi-fragile forensic watermarking scheme for remote sensing images

    A semi-fragile watermarking scheme for multiple-band images is presented. We propose to embed a mark into remote sensing images by applying a tree-structured vector quantization approach to the pixel signatures, instead of processing each band separately. The signature of the multispectral or hyperspectral image is used to embed the mark in it in order to detect any significant modification of the original image. The image is segmented into three-dimensional blocks and a tree-structured vector quantizer is built for each block. These trees are manipulated using an iterative algorithm until the resulting block satisfies a required criterion, which establishes the embedded mark. The method is shown to preserve the mark under lossy compression (above a given threshold) while, at the same time, detecting possibly forged blocks and their position within the whole image.
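    The following sketch illustrates the general idea of block-wise mark embedding and tamper localization by quantizing a block feature; it substitutes a simple quantization-index-modulation rule on the block mean for the paper's tree-structured vector quantizer, so the block size, step and criterion are purely illustrative.

```python
import numpy as np

STEP = 16.0  # quantization step for the block feature; coarser = more robust to compression

def embed_mark(cube, mark_bits, block=(8, 8, 4)):
    """Embed one bit per 3-D block by forcing the parity of the quantized block
    mean to match the bit (a simplified stand-in for the tree-structured VQ)."""
    out = cube.astype(np.float64).copy()
    bz, by, bx = block
    k = 0
    for z in range(0, cube.shape[0] - bz + 1, bz):
        for y in range(0, cube.shape[1] - by + 1, by):
            for x in range(0, cube.shape[2] - bx + 1, bx):
                blk = out[z:z+bz, y:y+by, x:x+bx]
                q = np.round(blk.mean() / STEP)
                if int(q) % 2 != mark_bits[k % len(mark_bits)]:
                    q += 1                       # move to an adjacent bin with the right parity
                blk += q * STEP - blk.mean()     # shift the block so its mean lands on that bin
                k += 1
    return out

def detect(cube, mark_bits, block=(8, 8, 4)):
    """Return (z, y, x) offsets of blocks whose recovered bit disagrees with the mark."""
    bz, by, bx = block
    tampered, k = [], 0
    for z in range(0, cube.shape[0] - bz + 1, bz):
        for y in range(0, cube.shape[1] - by + 1, by):
            for x in range(0, cube.shape[2] - bx + 1, bx):
                blk = cube[z:z+bz, y:y+by, x:x+bx]
                bit = int(np.round(blk.mean() / STEP)) % 2
                if bit != mark_bits[k % len(mark_bits)]:
                    tampered.append((z, y, x))
                k += 1
    return tampered
```

    Small errors (for example those introduced by mild lossy compression) leave the quantized block mean in the same parity bin, so the mark survives, whereas a forged block typically shifts the mean enough to flip the recovered bit and reveal its position.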

    Constant-SNR, rate control and entropy coding for predictive lossy hyperspectral image compression

    Predictive lossy compression has been shown to represent a very flexible framework for lossless and lossy onboard compression of multispectral and hyperspectral images with quality and rate control. In this paper, we improve predictive lossy compression in several ways, using a standard issued by the Consultative Committee for Space Data Systems, namely CCSDS-123, as an example of application. First, exploiting the flexibility in the error control process, we propose a constant-signal-to-noise-ratio algorithm that bounds the maximum relative error between each pixel of the reconstructed image and the corresponding pixel of the original image. This is very useful to prevent low-energy areas of the image from being affected by large errors. Second, we propose a new rate control algorithm that has very low complexity and provides performance equal to or better than existing work. Third, we investigate several entropy coding schemes that can speed up the hardware implementation of the algorithm and, at the same time, improve coding efficiency. These advances make predictive lossy compression an extremely appealing framework for onboard systems due to its simplicity, flexibility, and coding efficiency.
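    A brief sketch of how a constant-SNR constraint can be turned into a per-pixel maximum absolute error for a near-lossless quantizer: bounding the error relative to the pixel amplitude keeps the per-pixel SNR roughly constant, so dark areas receive small error bounds. The use of the predicted value as the amplitude estimate and the flooring rule are assumptions for illustration, not the exact rule from the paper.

```python
import numpy as np

def per_pixel_error_bound(predicted, target_snr_db, min_bound=1):
    """Per-pixel maximum absolute error for a roughly constant SNR.

    Enforcing |x - x_hat| <= |x| * 10**(-target_snr_db / 20) keeps the relative
    error approximately constant.  In a predictive coder the true pixel is not
    available at the decoder, so an estimate such as the predicted value is
    used instead (illustrative sketch).
    """
    rel = 10.0 ** (-target_snr_db / 20.0)
    bound = np.floor(np.abs(predicted) * rel).astype(np.int64)
    return np.maximum(bound, min_bound)   # never below the near-lossless floor

# example: predicted radiances between 10 and 4000, 40 dB target
pred = np.array([10, 250, 1200, 4000])
print(per_pixel_error_bound(pred, target_snr_db=40))   # -> [ 1  2 12 40]
```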

    Information-theoretic assessment of on-board near-lossless compression of hyperspectral data

    A rate-distortion model is proposed to measure the impact of near-lossless compression of raw data, that is, compression with a user-defined maximum absolute error, on the information available once the compressed data have been received and decompressed. Such a model requires the original uncompressed raw data and their measured noise variances. Advanced near-lossless methods are exploited only to measure the entropy of the datasets but are not required for on-board compression. In substance, the acquired raw data are regarded as a noisy realization of a noise-free spectral information source. The useful spectral information at the decoder is the mutual information between the unknown ideal source and the decoded source, which is affected by both instrument noise and compression-induced distortion. Experiments on simulated noisy images, in which the noise-free source and the noise realization are exactly known, show the trend of spectral information versus compression distortion, which in turn is related to the coded bit rate, or equivalently to the compression ratio, through the rate-distortion characteristic of the encoder used on the satellite. Preliminary experiments on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) 2006 Yellowstone sequences match the trends of the simulations. The main conclusion is that the noisier the dataset, the lower the compression ratio that can be tolerated in order to preserve a prefixed amount of spectral information.
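    The reasoning can be illustrated with a simple Gaussian approximation: if instrument noise and compression error are modeled as independent additive Gaussian terms, the spectral information surviving at the decoder has a closed form that shrinks as either term grows. The formula and the variances below are illustrative assumptions, not the estimator used in the paper.

```python
import numpy as np

def spectral_information(signal_var, noise_var, distortion):
    """Mutual information (bits/sample) between an ideal Gaussian source and its
    decoded version when instrument noise and compression error are modeled as
    independent additive Gaussian terms:
        I = 0.5 * log2(1 + sigma_x^2 / (sigma_n^2 + D))
    (Gaussian approximation for illustration only.)"""
    return 0.5 * np.log2(1.0 + signal_var / (noise_var + distortion))

# the noisier the data, the less compression distortion can be tolerated
# before a fixed amount of spectral information is lost
for noise_var in (10.0, 100.0):
    for D in (0.0, 10.0, 100.0):
        print(noise_var, D, round(spectral_information(1000.0, noise_var, D), 2))
```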

    Adaptive multispectral GPU accelerated architecture for Earth Observation satellites

    In recent years, the growth in quantity, diversity and capability of Earth Observation (EO) satellites has enabled increases in the achievable payload data dimensionality and volume. However, the lack of equivalent advancement in downlink technology has resulted in the development of an onboard data bottleneck. This bottleneck must be alleviated in order for EO satellites to continue to efficiently provide high quality and increasing quantities of payload data. This research explores the selection and implementation of state-of-the-art multidimensional image compression algorithms and proposes a new onboard data processing architecture to help alleviate the bottleneck and increase the data throughput of the platform. The proposed new system is based upon a backplane architecture to provide scalability with different satellite platform sizes and varying mission objectives. The heterogeneous nature of the architecture allows the benefits of both Field Programmable Gate Array (FPGA) and Graphical Processing Unit (GPU) hardware to be leveraged for maximised data processing throughput.
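    Purely as a conceptual illustration of the heterogeneous idea (not the proposed backplane design), the sketch below lets two stand-in workers, one representing an FPGA back-end and one a GPU back-end, drain a shared queue of image tiles so whichever accelerator is free takes the next tile; the worker names and the compress_* functions are hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

def compress_on_fpga(tile):
    return ("fpga", len(tile))          # placeholder for a call into an FPGA core

def compress_on_gpu(tile):
    return ("gpu", len(tile))           # placeholder for a GPU kernel launch

def run_pipeline(tiles):
    """Dispatch tiles to whichever back-end becomes free first."""
    q = Queue()
    for t in tiles:
        q.put(t)
    results = []

    def worker(compress):
        while True:
            try:
                tile = q.get_nowait()
            except Empty:
                return
            results.append(compress(tile))   # list.append is thread-safe in CPython

    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(worker, compress_on_fpga)
        pool.submit(worker, compress_on_gpu)
    return results

print(run_pipeline([bytes(64)] * 8))
```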

    Fast and Lightweight Rate Control for Onboard Predictive Coding of Hyperspectral Images

    Predictive coding is attractive for compression of hyperspectral images onboard spacecraft in light of the excellent rate-distortion performance and low complexity of recent schemes. In this letter we propose a rate control algorithm and integrate it in a lossy extension to the CCSDS-123 lossless compression recommendation. The proposed rate control algorithm overhauls our previous scheme by being orders of magnitude faster and simpler to implement, while still providing the same accuracy in terms of output rate and comparable or better image quality.

    Compression of Spectral Images


    Image dequantization for hyperspectral lossy compression with convolutional neural networks

    Significant work has been devoted to methods based on predictive coding for onboard compression of hyperspectral images. This is supported by the new CCSDS 123.0-B-2 recommendation for lossless and near-lossless compression. While lossless compression can achieve high throughput, it can only achieve limited compression ratios. The introduction of a quantizer and local decoder in the prediction loop makes it possible to implement lossy compression with good rate-distortion performance. However, the need to have a locally decoded version of a causal neighborhood of the current pixel under coding is a significant limiting factor in the throughput such an encoder can achieve. In this work, we study the rate-distortion performance of a significantly simpler and faster onboard compressor based on prequantizing the pixels of the hyperspectral image and applying a lossless compressor (such as the lossless CCSDS 123.0-B-2) to the quantized pixels. While this is suboptimal in terms of rate-distortion performance compared to having an in-loop quantizer, we compensate for the lower quality with an on-ground post-processor based on modeling the distortion residual with a convolutional neural network. The task of the neural network is to learn the statistics of the quantization error and apply a dequantization model to restore the image.
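    A small sketch of the two stages the abstract describes: uniform prequantization of the pixels before a lossless coder, and an on-ground residual CNN that learns the statistics of the quantization error and predicts a correction. Layer sizes, the quantization step and the training setup are illustrative assumptions rather than the architecture from the paper.

```python
import numpy as np
import torch
import torch.nn as nn

def prequantize(img, step):
    """Uniform scalar quantization applied before the lossless coder; the
    per-pixel reconstruction error is bounded by step/2."""
    q = np.round(img / step).astype(np.int32)       # indices fed to the lossless coder
    return q, q.astype(np.float32) * step           # indices, coarse reconstruction

class Dequantizer(nn.Module):
    """Small residual CNN that predicts a correction for the coarsely
    reconstructed bands (illustrative layer sizes)."""
    def __init__(self, bands=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, bands, 3, padding=1),
        )

    def forward(self, x):                 # x: (batch, bands, H, W) coarse reconstruction
        return x + self.body(x)           # predicted correction is added back

# toy usage on a random 16-band cube
cube = np.random.rand(16, 32, 32).astype(np.float32) * 1000
_, coarse = prequantize(cube, step=8.0)
restored = Dequantizer(bands=16)(torch.from_numpy(coarse).unsqueeze(0))
print(restored.shape)                     # torch.Size([1, 16, 32, 32])
```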