
    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for compression onboard spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm supports lossy compression, near-lossless compression, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller achieves excellent accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
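    The quantize-inside-the-prediction-loop scheme the abstract refers to can be sketched as follows. This is a minimal, hypothetical illustration using a trivial previous-sample predictor (not the actual CCSDS-123 predictor): a uniform quantizer of step 2m+1 sits inside the loop, and the encoder keeps a local decoder so that the per-pixel reconstruction error never exceeds m.

```python
import numpy as np

def prev_sample_predictor(decoded, i):
    # trivial causal predictor (illustrative stand-in for the CCSDS-123 predictor):
    # predict each sample from the previously reconstructed one
    return int(decoded[i - 1]) if i > 0 else 0

def near_lossless_encode(samples, m):
    # uniform quantizer with step 2m+1 inside the prediction loop,
    # guaranteeing |x - x_rec| <= m for every sample (the near-lossless bound)
    step = 2 * m + 1
    decoded = np.zeros(len(samples), dtype=np.int64)
    indices = np.zeros(len(samples), dtype=np.int64)
    for i, x in enumerate(samples):
        pred = prev_sample_predictor(decoded, i)
        resid = int(x) - pred
        q = int(np.sign(resid)) * ((abs(resid) + m) // step)
        indices[i] = q                 # entropy-coded and transmitted
        decoded[i] = pred + q * step   # local decoder mirrors the receiver
    return indices, decoded
```

    With m = 0 the step is 1 and the scheme degenerates to lossless coding, which is why a single predictive package can cover lossless, near-lossless, and rate-controlled lossy operation.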

    Creating a native Swift JPEG codec

    Swift is one of the world’s most popular systems programming languages; however, for many applications, such as image decoding and encoding, Apple’s proprietary frameworks are the only options available to users. This project, an open-source, pure-Swift implementation of the ITU-T81 JPEG standard, is motivated by that gap in the language ecosystem. Written as an open-source project contributor’s guide, we begin by detailing the problems and considerations inherent to codec design, and how the Swift language allows for highly expressive and safe APIs beyond what older C and C++ frameworks can provide. We continue with an overview of the components of our fully featured JPEG library, including ways in which various performance and safety issues have been addressed. We also describe the packaging and encapsulation required to vend a usable framework, as well as the unit, integration, and regression tests essential for its long-term maintenance.

    A practical comparison between two powerful PCC codecs

    Recent advances in the consumption of 3D content create the need for efficient ways to visualize and transmit 3D content. As a result, methods to obtain that content have been evolving, leading to the development of new representations, namely point clouds and light fields. A point cloud represents a set of points, each with associated Cartesian coordinates (x, y, z), and each point can carry additional attributes (color, material, texture, etc.). This kind of representation changes the way 3D content is consumed and has a wide range of applications, from video games and virtual reality to medical ones. However, since this type of data carries so much information, it is data-heavy, making the storage and transmission of content a daunting task. To address this issue, MPEG created a point cloud coding standardization project, giving birth to V-PCC (Video-based Point Cloud Coding) and G-PCC (Geometry-based Point Cloud Coding) for static content. Firstly, a general analysis of point clouds is made, spanning from their possible applications to their acquisition. Secondly, point cloud codecs are studied, namely V-PCC and G-PCC from MPEG. Then, a state-of-the-art study of quality evaluation is performed, covering both subjective and objective evaluation. Finally, a report on the JPEG Pleno Point Cloud activity, in which an active collaboration took place, is made, with the comparative results of the two codecs and the metrics used.
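    As a rough illustration of the representation described above, a point cloud can be held as parallel arrays of per-point coordinates and attributes. The voxelization step shown here is a hypothetical, simplified stand-in for the kind of integer-grid geometry preprocessing used by codecs such as G-PCC; the grid resolution is an illustrative choice.

```python
import numpy as np

# a point cloud as an (N, 3) array of XYZ coordinates plus per-point RGB attributes
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.5],
                   [0.5, 1.0, 0.2]])
colors = np.array([[255, 0, 0],
                   [0, 255, 0],
                   [0, 0, 255]], dtype=np.uint8)

# snapping points to an integer voxel grid is a typical first step before
# geometry coding; duplicate voxels are merged with np.unique
voxel_size = 0.5
voxels = np.unique(np.floor(points / voxel_size).astype(np.int32), axis=0)
```

    Even this toy example hints at why point clouds are data-heavy: every point carries geometry plus attributes, so real scans with millions of points quickly become costly to store and transmit without dedicated compression.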

    An overview of JPEG 2000

    JPEG 2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG 2000 standardization process, an overview of the standard, and some description of the capabilities it provides. Part I of the JPEG 2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG 2000 from the point of view of encoding, as we believe this approach lends itself to a compact description that is more easily understood by most readers.

    Fast and Lightweight Rate Control for Onboard Predictive Coding of Hyperspectral Images

    Predictive coding is attractive for compression of hyperspectral images onboard spacecraft in light of the excellent rate-distortion performance and low complexity of recent schemes. In this letter, we propose a rate control algorithm and integrate it in a lossy extension to the CCSDS-123 lossless compression recommendation. The proposed rate control algorithm overhauls our previous scheme by being orders of magnitude faster and simpler to implement, while still providing the same accuracy in terms of output rate and comparable or better image quality.

    Image dequantization for hyperspectral lossy compression with convolutional neural networks

    Significant work has been devoted to methods based on predictive coding for onboard compression of hyperspectral images. This is supported by the new CCSDS 123.0-B-2 recommendation for lossless and near-lossless compression. While lossless compression can achieve high throughput, it can only achieve limited compression ratios. The introduction of a quantizer and local decoder in the prediction loop makes it possible to implement lossy compression with good rate-distortion performance. However, the need for a locally decoded version of a causal neighborhood of the pixel currently being coded is a significant limiting factor in the throughput such an encoder can achieve. In this work, we study the rate-distortion performance of a significantly simpler and faster onboard compressor based on prequantizing the pixels of the hyperspectral image and applying a lossless compressor (such as the lossless CCSDS 123.0-B-2) to the quantized pixels. While this is suboptimal in terms of rate-distortion performance compared to having an in-loop quantizer, we compensate for the lower quality with an on-ground post-processor based on modeling the distortion residual with a convolutional neural network. The task of the neural network is to learn the statistics of the quantization error and apply a dequantization model to restore the image.
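    A minimal sketch of the prequantization idea, assuming a simple uniform scalar quantizer with step 2m+1 (the recommendation defines its own quantizer, so this is an illustration only). The midpoint dequantizer below is the naive baseline reconstruction that an on-ground CNN post-processor of the kind described above would refine.

```python
import numpy as np

def prequantize(img, m):
    # uniform scalar prequantization with step 2m+1: the integer indices are
    # then fed unchanged to a lossless coder, so there is no feedback loop
    # to stall the encoder's throughput
    step = 2 * m + 1
    return np.round(img / step).astype(np.int64)

def dequantize(indices, m):
    # midpoint reconstruction, bounding the absolute error by m; a learned
    # post-processor would model the residual statistics to improve on this
    return indices * (2 * m + 1)
```

    Because quantization happens before (rather than inside) the prediction loop, every pixel can be quantized independently, which is exactly what makes this variant so much faster than an in-loop design, at some rate-distortion cost.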

    A Novel Rate-Controlled Predictive Coding Algorithm for Onboard Compression of Multispectral and Hyperspectral Images

    Predictive compression has always been considered an attractive solution for onboard compression thanks to its low computational demands and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Fixed-rate operation is considered a challenging problem due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients as in the case of transform coding. In this paper, we show how it is possible to design a rate control algorithm suitable for onboard implementation by providing a general framework to select quantizers in each spatial and spectral region of the image and optimize the choice so that the desired rate is achieved with the best quality. In order to make the computational complexity suitable for onboard implementation, models are used to predict the rate-distortion characteristics of the prediction residuals in each image block. Such models are trained on the fly during execution, and small deviations in the output rate due to unmodeled behavior are automatically corrected as new data are acquired. The coupling of predictive coding and rate control allows the design of a single compression algorithm able to manage multiple encoding objectives. We tailor the proposed rate controller to the predictor defined by the CCSDS-123 lossless compression recommendation and study a new entropy coding stage based on the range coder in order to achieve an extension of the standard capable of managing all the following encoding objectives: lossless, variable-rate near-lossless (bounded maximum error), fixed-rate lossy (minimum average error), and any in-between case such as fixed-rate coding with a constraint on the maximum error.
    We show the performance of the proposed architecture on the CCSDS reference dataset for multispectral and hyperspectral image compression and compare it with state-of-the-art techniques based on transform coding, such as the CCSDS-122 Discrete Wavelet Transform encoder paired with the Pairwise Orthogonal Transform working in the spectral dimension. Remarkable results are observed, with superior image quality both in terms of higher SNR and lower maximum error with respect to state-of-the-art transform coding.
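    The block-wise quantizer selection described above can be sketched as a Lagrangian rate allocation. This is a hypothetical illustration, assuming each block exposes modeled rate and distortion values for each candidate quantizer; a bisection on the Lagrange multiplier drives the total rate toward the target (the on-the-fly model training and in-flight rate correction are omitted).

```python
import numpy as np

def allocate_quantizers(block_models, target_rate, lam_lo=1e-6, lam_hi=1e6, iters=60):
    # block_models: list of (rates, distortions) arrays, indexed by quantizer choice,
    # as produced by some rate-distortion model of each block's residuals.
    def pick(lam):
        # per block, choose the quantizer minimizing the Lagrangian cost D + lam * R
        choices = [int(np.argmin(d + lam * r)) for r, d in block_models]
        total = sum(float(block_models[i][0][c]) for i, c in enumerate(choices))
        return choices, total

    best = pick(lam_hi)  # strongest rate penalty: lowest rate, assumed feasible
    for _ in range(iters):
        lam = np.sqrt(lam_lo * lam_hi)     # geometric bisection on the multiplier
        choices, total = pick(lam)
        if total > target_rate:
            lam_lo = lam                   # over budget: penalize rate harder
        else:
            best = (choices, total)        # feasible: remember, try a softer penalty
            lam_hi = lam
    return best
```

    The key property exploited here is that, for a fixed multiplier, each block can be optimized independently, which keeps the allocation cheap enough to be plausible for onboard hardware.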