
    Adapted Compressed Sensing: A Game Worth Playing

    Despite the universal nature of the compressed sensing mechanism, additional information on the class of sparse signals to be acquired allows adjustments that yield substantial improvements. In fact, proper exploitation of these priors makes it possible to significantly increase compression for a given reconstruction quality. Since one of the most promising areas of application of compressed sensing is that of IoT devices subject to extremely tight resource constraints, adaptation is especially interesting when it can cope with hardware-related constraints and allow low-complexity implementations. We here review and compare several algorithmic adaptation policies that focus either on the encoding part or on the recovery part of compressed sensing. We also review other, more hardware-oriented adaptation techniques that can make a real difference in real-world implementations. In all cases, adaptation proves to be a tool that should be mastered in practical applications to unleash the full potential of compressed sensing.
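    As a minimal illustration of encoder-side adaptation, the sketch below compares a plain Gaussian sensing matrix with one whose rows are colored by an assumed signal-correlation prior (a rakeness-style adaptation). The prior, the dimensions, and the OMP recovery are illustrative stand-ins, not the specific policies reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: signals are sparse in the canonical basis and their
# energy is concentrated on low-index components (the assumed prior).
n, m, k = 128, 12, 4                             # signal length, measurements, sparsity
corr = np.diag(np.exp(-np.arange(n) / 16.0))     # assumed signal-correlation prior

def sense(x, adapted=True):
    """Draw a Gaussian sensing matrix; optionally shape its rows with the prior."""
    A = rng.standard_normal((m, n))
    if adapted:
        # Rakeness-style adaptation: color the rows so they collect more
        # energy from the statistically dominant signal components.
        A = A @ np.sqrt(corr)
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    return A, A @ x

def omp(A, y, k):
    """Plain orthogonal matching pursuit for k-sparse recovery."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = xs
    return x_hat

# Sparse test signal matching the prior: nonzeros on low indices only.
x = np.zeros(n)
x[rng.choice(32, size=k, replace=False)] = rng.standard_normal(k)

for adapted in (False, True):
    A, y = sense(x, adapted)
    err = np.linalg.norm(x - omp(A, y, k)) / np.linalg.norm(x)
    print(f"adapted={adapted}: relative error {err:.3f}")
```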

    The CCSDS 123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression Standard: A comprehensive review

    The Consultative Committee for Space Data Systems (CCSDS) published the CCSDS 123.0-B-2, “Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression” standard. This standard extends the previous issue, CCSDS 123.0-B-1, which supported only lossless compression, while maintaining backward compatibility. The main novelty of the new issue is support for near-lossless compression, i.e., lossy compression with user-defined absolute and/or relative error limits in the reconstructed images. This new feature is achieved via closed-loop quantization of prediction errors. Two further additions arise from the new near-lossless support: first, the calculation of predicted sample values using sample representatives that may not be equal to the reconstructed sample values, and, second, a new hybrid entropy coder designed to provide enhanced compression performance for low-entropy data, which is prevalent when non-lossless compression is used. These new features enable significantly smaller compressed data volumes than those achievable with CCSDS 123.0-B-1 while controlling the quality of the decompressed images. As a result, larger amounts of valuable information can be retrieved under a given set of bandwidth and energy consumption constraints.
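    The closed-loop quantization idea can be sketched in a few lines: the predictor is driven by reconstructed samples rather than originals, so a user-defined absolute error limit is respected exactly. The toy 1-D codec below keeps only that mechanism; the actual standard uses a 3-D adaptive predictor, sample representatives, and a hybrid entropy coder.

```python
import numpy as np

def near_lossless_encode_decode(samples, err_limit):
    """Toy 1-D closed-loop predictive codec with a uniform quantizer.

    Guarantees |sample - reconstruction| <= err_limit for integer samples,
    mimicking the absolute-error-limit mode of near-lossless compression.
    Only the closed-loop quantization idea of CCSDS 123.0-B-2 is retained.
    """
    step = 2 * err_limit + 1                  # quantizer step giving a hard error bound
    prev_rec = 0                              # predictor state: last *reconstructed* sample
    indices, recon = [], []
    for s in samples:
        residual = s - prev_rec               # prediction error
        q = int(np.round(residual / step))    # quantizer index (what gets entropy-coded)
        prev_rec = prev_rec + q * step        # decoder-identical reconstruction
        indices.append(q)
        recon.append(prev_rec)
    return np.array(indices), np.array(recon)

x = np.array([100, 104, 98, 120, 119, 121, 90])
idx, rec = near_lossless_encode_decode(x, err_limit=2)
print(idx)                        # small integers: cheap to entropy-code
print(np.max(np.abs(x - rec)))    # never exceeds the error limit of 2
```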

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by the various remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In resource-restricted scenarios, data compression is strongly desired or even necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size, rich in information that can be retrieved for various applications. Another important aspect is the impact of lossy compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have also become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.

    Adaptive Image Compressive Sensing Using Texture Contrast

    Traditional image Compressive Sensing (CS) performs block-wise sampling at a uniform sampling rate. However, blocking artifacts often occur because block sparsity varies, leading to poor rate-distortion performance. To suppress these artifacts, in this paper we propose to adaptively sample each block according to its texture features. Using the maximum gradient in the 8-connected neighborhood of each pixel, we measure the texture variation of each pixel and then compute the texture contrast of each block. According to the distribution of texture contrast, we adaptively set the sampling rate of each block and finally build an image reconstruction model using these block texture contrasts. Experimental results show that our adaptive sampling scheme improves the rate-distortion performance of image CS compared with existing adaptive schemes, and the images reconstructed by our method achieve better visual quality.
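    A minimal sketch of the allocation step is given below: per-pixel texture variation from the maximum gradient over the 8-connected neighborhood, per-block texture contrast by averaging, and sampling rates assigned in proportion to contrast. The exact contrast formula and rate mapping in the paper may differ; the rate bounds here are illustrative.

```python
import numpy as np

def block_texture_contrast(img, block=32):
    """Texture contrast of each block, from the maximum gradient in the
    8-connected neighborhood of every pixel (a sketch; the paper's exact
    contrast definition may differ)."""
    img = img.astype(float)
    H, W = img.shape
    p = np.pad(img, 1, mode="edge")
    grad = np.zeros_like(img)
    # Maximum absolute difference to each pixel's 8 neighbors.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            grad = np.maximum(grad, np.abs(p[1+dy:1+dy+H, 1+dx:1+dx+W] - img))
    bh, bw = H // block, W // block
    return grad[:bh*block, :bw*block].reshape(bh, block, bw, block).mean(axis=(1, 3))

def allocate_rates(contrast, avg_rate=0.2, lo=0.05, hi=0.6):
    """Give texture-rich blocks more measurements, keeping roughly the
    target average sampling rate (the clipping makes it approximate)."""
    rates = avg_rate * contrast / contrast.mean()
    return np.clip(rates, lo, hi)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128))        # stand-in image
rates = allocate_rates(block_texture_contrast(img))
print(rates.round(2))                              # one sampling rate per 32x32 block
```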

    Compressive Sensing in Communication Systems

    A bag of words description scheme for image quality assessment

    Every day, millions of images are obtained, processed, compressed, saved, transmitted, and reproduced. All of these operations can cause distortions that affect their quality. The quality of these images can be measured subjectively, but that has the disadvantage of requiring a considerable number of tests with individuals in order to produce a statistical analysis of an image’s perceptual quality. Several objective metrics have been developed that try to model the human perception of quality. However, in most applications the representation of human quality perception given by these metrics is far from the desired one. Therefore, this work proposes the use of machine learning models that allow a better approximation. In this work, definitions for image and quality are given, and some of the difficulties of the study of image quality are mentioned. Moreover, three metrics are initially explained: one uses the image’s original quality as a reference (SSIM), while the other two are no-reference metrics (BRISQUE and QAC). A comparison between them shows a large discrepancy of values between the two kinds of metrics. The database used for the tests is TID2013, chosen because of its size and the large number of distortions it considers; a study of each type of distortion in this database is made. Furthermore, some concepts of machine learning are introduced, along with algorithms relevant in the context of this dissertation, notably K-means, KNN, and SVM. Descriptor aggregation algorithms such as “bag of words” and “Fisher vectors” are also mentioned. This dissertation studies a new model that combines machine learning with a quality metric for quality estimation. The model is based on the division of images into cells, in each of which a specific metric is computed. With this division, it is possible to obtain local quality descriptors that are aggregated using “bag of words”. An SVM with an RBF kernel is trained and tested on the same database, and the results of the model are evaluated using cross-validation. The results are analysed using the Pearson, Spearman, and Kendall correlations and the RMSE to evaluate how well the model matches the subjective results. The model improves on the results of the underlying metric and shows a new path for applying machine learning to quality evaluation.
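    The pipeline can be sketched with scikit-learn as below: local quality descriptors per cell, a K-means codebook turning each image into a bag-of-words histogram, and an RBF-kernel support vector regressor scored with cross-validation. The local descriptor, the synthetic data, and the R² score are stand-ins for the dissertation's chosen metric, for TID2013, and for the Pearson/Spearman/Kendall analysis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def cell_descriptors(ref, dist, cell=32):
    """One local quality descriptor per cell; here simply (local MSE,
    local variance ratio) as a stand-in for the dissertation's metric."""
    H, W = ref.shape
    descs = []
    for y in range(0, H - cell + 1, cell):
        for x in range(0, W - cell + 1, cell):
            r = ref[y:y+cell, x:x+cell].astype(float)
            d = dist[y:y+cell, x:x+cell].astype(float)
            descs.append([np.mean((r - d) ** 2), d.var() / (r.var() + 1e-9)])
    return np.array(descs)

# Stand-in "database": reference/distorted pairs plus a synthetic
# subjective score (TID2013 would supply real images and MOS values).
images, scores = [], []
for _ in range(60):
    ref = rng.random((128, 128))
    noise = rng.random() * 0.3
    images.append((ref, ref + noise * rng.standard_normal(ref.shape)))
    scores.append(1.0 - noise)                     # hypothetical quality score

# Bag of words: cluster all local descriptors, then describe each image
# by the histogram of its descriptors over the learned visual words.
all_descs = np.vstack([cell_descriptors(r, d) for r, d in images])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_descs)
X = np.array([np.bincount(codebook.predict(cell_descriptors(r, d)),
                          minlength=16) for r, d in images], dtype=float)
X /= X.sum(axis=1, keepdims=True)                  # normalized histograms

# RBF-kernel support vector regression evaluated with cross-validation.
model = SVR(kernel="rbf", C=10.0)
print(cross_val_score(model, X, np.array(scores), cv=5, scoring="r2"))
```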