
    Lossless compression of hyperspectral images

    Band ordering and the prediction scheme are the two major aspects of hyperspectral image compression that have been studied to improve the performance of the compression system. In the prediction module, we propose spatio-spectral prediction methods. Two non-linear spectral prediction methods have been proposed in this thesis. NPHI (Non-linear Prediction for Hyperspectral Images) is based on a band look-ahead technique wherein a reference band is included in the prediction of pixels in the current band. The prediction technique estimates the variation between the contexts of the two bands to modify the weights computed in the reference band to predict the pixels in the current band. EPHI (Edge-based Prediction for Hyperspectral Images) is a modified NPHI technique wherein an edge-based analysis is used to classify the pixels into edges and non-edges before performing the prediction of each pixel in the current band. Three ordering methods have been proposed in this thesis. The first ordering method computes local and global features in each band to group the bands. The bands in each group are then ordered by estimating the compression ratios achieved between every pair of bands in the group and applying Kruskal's algorithm. The other two methods compute the compression ratios between b-neighbors when performing the band ordering.
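
    A minimal sketch of the Kruskal-style ordering step described above, assuming a precomputed matrix ratio[i][j] of pairwise compression ratios; the matrix, the function name, and the maximum-spanning-tree reading are illustrative assumptions, not the thesis implementation:

        # Hedged sketch: order bands in a group with Kruskal's algorithm on a
        # complete graph whose edge weights are pairwise compression ratios.
        def kruskal_band_tree(ratio):
            """ratio[i][j]: compression ratio when band j is predicted from band i."""
            n = len(ratio)
            parent = list(range(n))

            def find(x):                      # union-find with path halving
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            # Try the best (highest-ratio) band pairs first.
            edges = sorted(((ratio[i][j], i, j)
                            for i in range(n) for j in range(i + 1, n)), reverse=True)
            tree = []
            for r, i, j in edges:
                ri, rj = find(i), find(j)
                if ri != rj:                  # edge joins two separate groups
                    parent[ri] = rj
                    tree.append((i, j, r))
            return tree                       # spanning tree of prediction pairs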

    Digital compression algorithms for HDTV transmission

    Digital compression of video images is a possible avenue for high-definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

    Band Ordering in Lossless Compression of Multispectral Images

    In this paper, we consider a model of lossless image compression in which each band of a multispectral image is coded using a prediction function involving values from a previously coded band, and examine how the ordering of the bands affects the achievable compression. We present an efficient algorithm for computing the optimal band ordering for a multispectral image. This algorithm has time complexity O(n²) for an n-band image, while the naive algorithm takes time Ω(n!). A slight variant of the optimal ordering problem that is motivated by some practical concerns is shown to be NP-hard, and hence computationally infeasible, in all but the most trivial cases. In addition, we report experimental findings from applying the algorithms designed in this paper to real multispectral satellite data. The results show that the techniques described here hold great promise for real-world compression needs.
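
    The O(n²) bound suggests a Prim-style spanning-tree construction over the complete band graph. The sketch below assumes a symmetric matrix cost[i][j] giving the coded size of band j when predicted from band i; the names and the choice of band 0 as root are illustrative assumptions:

        # Hedged sketch: Prim's algorithm in O(n^2) on a complete band graph.
        def optimal_band_tree(cost):
            n = len(cost)
            in_tree = [False] * n
            best = [float("inf")] * n     # cheapest known connection into the tree
            pred = [-1] * n               # reference band chosen for each band
            best[0] = 0                   # band 0 is the root, coded on its own
            order = []
            for _ in range(n):
                u = min((b for b in range(n) if not in_tree[b]), key=lambda b: best[b])
                in_tree[u] = True
                order.append(u)
                for v in range(n):        # relax edges leaving the new tree vertex
                    if not in_tree[v] and cost[u][v] < best[v]:
                        best[v], pred[v] = cost[u][v], u
            return order, pred            # coding order and reference-band map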

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies the various ways to represent image data in the most compact and efficient manner possible while still allowing the image to be reproduced without any loss. One of the most efficient strategies used in lossless compression is to introduce entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is simply and reversibly applied to continuous-tone images to extract the differences between neighboring pixels, and its application does not introduce data expansion. Traditional as well as innovative prediction methods are included for creating the inputs of the exclusive-or based decorrelation filter. The results of the filter are then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase of the algorithm because it does not require the storage of code tables or probabilities and because it is lower in complexity than other popular options such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch dictionary coder is that image data can be read in a sequence that is linear, two-dimensional, or an adaptive combination of both. The second modification is that the coder can maintain multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or based decorrelation filter, when combined with the modified Lempel-Ziv-Welch dictionary coder, provides compression comparable to algorithms that represent the current standard in lossless compression. The proposed algorithm's compression performance is below the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but above the Zip implementation of the Deflate algorithm by 24%. In summary, the proposed algorithm uses the exclusive-or operator in the modeling phase and modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
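
    A minimal sketch of the exclusive-or decorrelation filter, assuming a simple previous-pixel predictor; the thesis pairs the filter with several predictors, so the predictor choice and names here are illustrative:

        import numpy as np

        # Hedged sketch: XOR each pixel with its predicted value. The filter is
        # exactly reversible and never expands the data (8 bits in, 8 bits out).
        def xor_forward(img):
            out = img.copy()
            out[:, 1:] ^= img[:, :-1]         # predictor: pixel to the left
            return out

        def xor_inverse(res):
            img = res.copy()
            for c in range(1, img.shape[1]):  # undo one column at a time
                img[:, c] ^= img[:, c - 1]
            return img

        img = np.random.randint(0, 256, (4, 5), dtype=np.uint8)
        assert np.array_equal(xor_inverse(xor_forward(img)), img)  # lossless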

    Adaptive edge-based prediction for lossless image compression

    Many lossless image compression methods have been suggested, with established results that are hard to surpass. However, there are some aspects that can be considered to improve the performance further. This research focuses on the two-phase prediction-encoding method, studying each phase separately and suggesting new techniques.

    In the prediction module, the proposed Edge-Based Predictor (EBP) and Least-Squares Edge-Based Predictor (LS-EBP) emphasize image edges and make predictions accordingly. EBP is a gradient-based non-linear adaptive predictor. EBP switches between prediction rules based on a few threshold parameters automatically determined by a pre-analysis procedure, which makes a first pass. LS-EBP also uses these parameters, but optimizes the prediction for each edge location assigned by the pre-analysis, thus applying the least-squares approach only at the edge points.

    For the encoding module, a novel Burrows-Wheeler Transform (BWT) inspired method is suggested, which performs better than applying the BWT directly to the images. We also present a context-based adaptive error modeling and encoding scheme. When coupled with the above-mentioned prediction schemes, the result is the best-known compression performance in the genre of compression schemes with the same time and space complexity.
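
    A minimal sketch of a gradient-switching edge predictor in the spirit of EBP, assuming a single threshold T separating edge from non-edge contexts; the neighbor layout and rules are illustrative assumptions, not the thesis parameters:

        # Hedged sketch: switch prediction rules on local gradient estimates.
        # w = west neighbor, n = north neighbor, nw = north-west neighbor.
        def edge_predict(w, n, nw, T=8):
            dh = abs(w - nw)              # horizontal variation (vertical edge)
            dv = abs(n - nw)              # vertical variation (horizontal edge)
            if dv - dh > T:               # strong horizontal edge: copy west
                return w
            if dh - dv > T:               # strong vertical edge: copy north
                return n
            return w + n - nw             # smooth region: planar prediction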

    Empirical analysis of BWT-based lossless image compression

    The Burrows-Wheeler Transformation (BWT) is a text transformation algorithm originally designed to improve the coherence in text data. This coherence can be exploited by compression algorithms such as run-length encoding or arithmetic coding. However, there is still a debate on its performance on images. Motivated by a theoretical analysis of the performance of the BWT and move-to-front (MTF) coding, we perform a detailed empirical study on the role of MTF in compressing images with the BWT. This research studies the compression performance of the BWT on digital images using different predictors and context partitions. The major interest of the research is in finding efficient ways to make the BWT suitable for lossless image compression.

    This research studied three different approaches to improving the compression of image data by the BWT. First, the idea of preprocessing the image data before sending it to the BWT compression scheme is studied using different mapping and prediction schemes. Second, different variations of MTF were investigated to see which one works best for image compression with the BWT. Third, context partitioning is applied to the BWT output before it is forwarded to the next stage in the compression scheme.

    For lossless image compression, this thesis proposes the removal of the MTF stage from the BWT compression pipeline and the use of a context partitioning method. The compression performance is further improved by using the MED predictor on the image data along with an 8-bit mapping of the prediction residuals before they are processed by the BWT.

    This thesis proposes two schemes for BWT-based image coding, namely BLIC and BLICx, the latter being based on the context-ordering property of the BWT. Our methods outperformed text compression algorithms such as PPM, GZIP, direct BWT, and WinZip in compressing images. Final results showed that our methods performed better than state-of-the-art lossless image compression algorithms, such as JPEG-LS, JPEG2000, CALIC, EDP and PPAM, on natural images.
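
    The MED predictor named above is the median edge detector used in JPEG-LS; a minimal sketch, with the 8-bit residual mapping shown as a common modulo wrap-around (an assumption, as the thesis does not spell out its mapping here):

        # Hedged sketch: MED prediction plus modulo-256 residual mapping.
        def med(w, n, nw):
            if nw >= max(w, n):           # edge: predict the smaller neighbor
                return min(w, n)
            if nw <= min(w, n):           # opposite edge: predict the larger
                return max(w, n)
            return w + n - nw             # smooth region: planar prediction

        def residual(pixel, w, n, nw):
            return (pixel - med(w, n, nw)) % 256   # keeps residuals in 8 bits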

    An efficient color image compression technique

    We present a new image compression method to improve visual perception of the decompressed images and achieve a higher compression ratio. This method balances the compression rate and image quality by compressing the essential parts of the image, the edges, at higher quality. The key subject (edge regions) is of more significance than the background (non-edge regions). Taking into consideration the value of image components and the effect of smoothness in image compression, this method classifies the image components as edge or non-edge. Low-quality lossy compression is applied to non-edge components, whereas high-quality lossy compression is applied to edge components. Outcomes show that our suggested method is efficient in terms of compression ratio, bits per pixel, and peak signal-to-noise ratio.
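
    A minimal sketch of the edge/non-edge classification step, assuming a Sobel-magnitude threshold; the detector, the threshold value, and routing the two masks to codecs of different quality are illustrative assumptions:

        import numpy as np
        from scipy import ndimage

        # Hedged sketch: a boolean mask that routes pixels to the high-quality
        # (edge) or low-quality (non-edge) lossy codec.
        def edge_mask(gray, thresh=64.0):
            g = gray.astype(float)
            gx = ndimage.sobel(g, axis=1)     # horizontal gradient
            gy = ndimage.sobel(g, axis=0)     # vertical gradient
            return np.hypot(gx, gy) > thresh  # True where a pixel is an edge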

    GOLLIC: Learning Global Context beyond Patches for Lossless High-Resolution Image Compression

    Neural-network-based approaches recently emerged in the field of data compression and have already led to significant progress in image compression, especially in achieving higher compression ratios. In the lossless image compression scenario, however, existing methods often struggle to learn a probability model of full-size high-resolution images due to limited computational resources. The current strategy is to crop high-resolution images into multiple non-overlapping patches and process them independently. This strategy ignores long-term dependencies beyond patches, thus limiting modeling performance. To address this problem, we propose a hierarchical latent variable model with a global context to capture the long-term dependencies of high-resolution images. Besides the latent variables unique to each patch, we introduce shared latent variables between patches to construct the global context. The shared latent variables are extracted by a self-supervised clustering module inside the model's encoder. This clustering module assigns each patch a confidence that it belongs to each cluster. The shared latent variables are then learned from the latent variables of the patches and their confidences, which reflects the similarity of patches in the same cluster and benefits global context modeling. Experimental results show that our global context model improves the compression ratio compared to engineered codecs and deep learning models on three benchmark high-resolution image datasets: DIV2K, CLIC.pro, and CLIC.mobile.
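
    A minimal sketch of the soft patch-to-cluster assignment that could yield such shared latent variables, assuming patch embeddings and learned cluster centroids; the shapes, names, and dot-product similarity are illustrative assumptions, not the GOLLIC architecture:

        import torch

        # Hedged sketch: soft-assign patch embeddings to clusters, then form one
        # shared latent per cluster as a confidence-weighted mean of its patches.
        def shared_latents(patch_emb, centroids, temperature=1.0):
            # patch_emb: (P, D) patch embeddings; centroids: (K, D) cluster centers.
            sim = patch_emb @ centroids.T                   # (P, K) similarities
            conf = torch.softmax(sim / temperature, dim=1)  # per-patch confidences
            weighted = conf.T @ patch_emb                   # (K, D) weighted sums
            return weighted / conf.sum(dim=0, keepdim=True).T   # normalized means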