
    Image Compression Techniques: A Survey of Lossless and Lossy Algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth, which explains the many efforts currently being invested in data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, image and video so as to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest number of bits without losing the essential information content. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part, which can be reproduced without loss from other information contained in the image. Irrelevant information comprises details that lie beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information is the part that is neither redundant nor irrelevant. Since decompressed images are usually observed by humans, their fidelity is subject to the capabilities and limitations of the Human Visual System. This paper provides a survey of various image compression techniques, their limitations and compression rates, and highlights current research in medical image compression.
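    The split between redundant and useful information can be made concrete with a first-order entropy estimate. The sketch below is illustrative only, not from the survey: it compares the histogram entropy of a synthetic gradient image with the entropy of its horizontal prediction residual; the gap between the two is the inter-pixel redundancy a lossless coder can remove.

```python
import numpy as np

def entropy_bits(symbols: np.ndarray) -> float:
    """First-order Shannon entropy of a symbol stream, in bits/symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic 8-bit gradient image: every grey level occurs equally often,
# so the raw histogram entropy is the full 8 bits/pixel...
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
print(f"raw entropy:      {entropy_bits(img.ravel()):.2f} bits/pixel")

# ...yet neighbouring pixels are perfectly correlated (redundant), so the
# horizontal prediction residual carries almost no information.
residual = np.diff(img.astype(np.int16), axis=1)
print(f"residual entropy: {entropy_bits(residual.ravel()):.2f} bits/pixel")
```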

    The 1995 Science Information Management and Data Compression Workshop

    This document is the proceedings of the 'Science Information Management and Data Compression Workshop,' held on October 26-27, 1995, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of fourteen presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such applications. The Workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the speed of conventional CPU processors is no longer rising, designers are moving to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications, and with the increasing demand for GPU computing there is a great need to develop operating systems that exploit the GPU to full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the discrete cosine transform (DCT).
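    The 2-D DCT at the heart of block-based image compression factors into two 1-D transforms, and every 8x8 block can be processed independently, which is what makes it a natural fit for GPUs. A minimal NumPy sketch of the separable block transform follows; it is a CPU-side reference for the idea, not the thesis's GPU kernel.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix C, so that coeffs = C @ block @ C.T."""
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

def blockwise_dct(image: np.ndarray, n: int = 8) -> np.ndarray:
    """2-D DCT of every n x n block. Blocks are fully independent, which is
    exactly the data parallelism a GPU kernel would exploit."""
    c = dct_matrix(n)
    out = np.empty(image.shape, dtype=np.float64)
    for i in range(0, image.shape[0], n):      # on a GPU, each block ...
        for j in range(0, image.shape[1], n):  # ... maps to one thread block
            out[i:i+n, j:j+n] = c @ image[i:i+n, j:j+n] @ c.T
    return out

img = np.random.randint(0, 256, (64, 64)).astype(np.float64) - 128.0
coeffs = blockwise_dct(img)
```

    On a GPU the two loops disappear: each block maps to a thread block, the two small matrix multiplies run out of fast shared memory, and thousands of blocks are transformed concurrently.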

    Development of Some Efficient Lossless and Lossy Hybrid Image Compression Schemes

    Digital imaging generates a large amount of data which needs to be compressed, without loss of relevant information, to economize storage space and allow speedy data transfer. Though both storage and transmission capacities have increased continuously over the last two decades, they do not match present requirements. Many lossless and lossy schemes exist for compressing images in the spatial domain and the transform domain; employing more than one traditional image compression algorithm in combination yields a hybrid image compression technique. Building on existing schemes, novel hybrid image compression schemes are developed in this doctoral research work to compress images effectively while maintaining quality.
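    The abstract does not name the specific pairings used, but the hybrid idea itself, chaining a lossy stage with a lossless one, can be sketched in a few lines. The uniform quantiser and DEFLATE stages below are illustrative stand-ins, not the schemes developed in the thesis.

```python
import zlib
import numpy as np

def hybrid_compress(image: np.ndarray, q: int = 16) -> bytes:
    """Toy hybrid coder: a lossy stage (uniform scalar quantisation)
    cascaded with a lossless stage (zlib's DEFLATE entropy coder)."""
    quantised = np.round(image.astype(np.float64) / q).astype(np.int16)
    return zlib.compress(quantised.tobytes(), level=9)

def hybrid_decompress(blob: bytes, shape: tuple, q: int = 16) -> np.ndarray:
    """Invert the lossless stage exactly; only the quantisation loss remains."""
    quantised = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    return quantised.reshape(shape).astype(np.float64) * q

img = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
blob = hybrid_compress(img)
approx = hybrid_decompress(blob, img.shape)
print(f"ratio: {img.nbytes / len(blob):.1f}:1, "
      f"max error: {np.abs(img - approx).max():.0f}")
```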

    Metrics to evaluate compression algorithms for raw SAR data

    Modern synthetic aperture radar (SAR) systems have size, weight, power and cost (SWaP-C) limitations, since platforms are becoming smaller while SAR operating modes are becoming more complex. Due to the computational complexity of the SAR processing required for modern SAR systems, performing the processing on board the platform is not a feasible option. Thus, SAR systems are producing an ever-increasing volume of data that needs to be transmitted to a ground station for processing. Compression algorithms are utilised to reduce the volume of the raw data. However, these algorithms can cause degradation and losses that may reduce the effectiveness of the SAR mission. This study addresses the lack of standardised quantitative performance metrics to objectively quantify the performance of SAR data-compression algorithms. Metrics were therefore established in two domains, namely the data domain and the image domain. The data-domain metrics are used to determine the performance of the quantisation and the associated losses or errors it induces in the raw data samples. The image-domain metrics evaluate the quality of the SAR image after SAR processing has been performed. In this study, three well-known SAR compression algorithms were implemented and applied to three real SAR data sets obtained from a prototype airborne SAR system, and their performance was evaluated using the proposed metrics. Important metrics in the data domain were found to be the compression ratio, the entropy, statistical parameters such as the skewness and kurtosis (to measure the deviation from the original distributions of the uncompressed data), and the dynamic range. The data histograms are an important visual representation of the effects of the compression algorithm on the data. An important error measure in the data domain is the signal-to-quantisation-noise ratio (SQNR), along with the phase error for applications where phase information is required to produce the output. Important metrics in the image domain include the dynamic range, the impulse response function, the image contrast, and the error measure signal-to-distortion-noise ratio (SDNR). The metrics suggested that all three algorithms performed well and are thus well suited to the compression of raw SAR data. The fast Fourier transform block adaptive quantiser (FFT-BAQ) algorithm had the best overall performance, but analysis of the computational complexity of its compression steps indicated that it has the highest complexity of the three algorithms. Since different levels of degradation are acceptable for different SAR applications, a trade-off can be made between the data reduction and the degradation caused by the algorithm. Due to SWaP-C limitations, there also remains a trade-off between the performance and the computational complexity of the compression algorithm. Dissertation (MEng), University of Pretoria, 2019. Electrical, Electronic and Computer Engineering.
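    A sketch of how the data-domain metrics listed above might be computed for complex raw samples; the function name and signature are illustrative, not the dissertation's code.

```python
import numpy as np
from scipy import stats

def data_domain_metrics(original: np.ndarray, reconstructed: np.ndarray,
                        raw_bits: int, compressed_bits: int) -> dict:
    """Data-domain metrics for complex raw SAR samples, comparing the
    uncompressed data with the decompressed reconstruction."""
    err = original - reconstructed
    sqnr_db = 10 * np.log10(np.mean(np.abs(original) ** 2)
                            / np.mean(np.abs(err) ** 2))
    # Phase error matters wherever phase is needed to form the output.
    phase_err = np.angle(original * np.conj(reconstructed))
    i_channel = original.real.ravel()
    return {
        "compression_ratio": raw_bits / compressed_bits,
        "skewness": float(stats.skew(i_channel)),      # distribution shape ...
        "kurtosis": float(stats.kurtosis(i_channel)),  # ... vs. the original
        "sqnr_db": float(sqnr_db),
        "rms_phase_error_rad": float(np.sqrt(np.mean(phase_err ** 2))),
    }
```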

    Distortion-constraint compression of three-dimensional CLSM images using image pyramid and vector quantization

    The confocal microscopy imaging techniques, which allow optical sectioning, have been successfully exploited in biomedical studies. Biomedical scientists can benefit from more realistic visualization and much more accurate diagnosis by processing and analysing three-dimensional image data. The lack of efficient image compression standards makes such large volumetric image data slow to transfer over limited-bandwidth networks. It also imposes large storage space requirements and high cost in archiving and maintenance. Conventional two-dimensional image coders do not take into account inter-frame correlations in three-dimensional image data. The standard multi-frame coders, like video coders, although they perform well in capturing motion information, are not designed for efficiently coding multiple frames that represent a stack of optical planes of a real object. Therefore a true three-dimensional image compression approach should be investigated. Moreover, the reconstructed image quality is a very important concern in compressing medical images, because it can be directly related to diagnostic accuracy. Most state-of-the-art methods are based on transform coding; for instance, JPEG is based on the discrete cosine transform (DCT) and JPEG2000 on the discrete wavelet transform (DWT). However, in DCT and DWT methods the control of the reconstructed image quality is inconvenient and involves considerable computational cost, since they are fundamentally rate-parameterized rather than distortion-parameterized methods. It is therefore very desirable to develop a transform-based distortion-parameterized compression method that has high coding performance and can also conveniently and accurately control the final distortion according to a user-specified quality requirement. This thesis describes our work in developing a distortion-constraint three-dimensional image compression approach, using vector quantization techniques combined with image pyramid structures. We expect our method to have: 1. High coding performance in compressing three-dimensional microscopic image data, compared to state-of-the-art three-dimensional image coders and other standardized two-dimensional image coders and video coders. 2. Distortion-control capability, a very desirable feature in medical image compression applications, superior to rate-parameterized methods in achieving a user-specified quality requirement. The result is a three-dimensional image compression method with outstanding, objectively measured compression performance for volumetric microscopic images. The distortion-constraint feature, by which users can expect to achieve a target image quality rather than a target compressed file size, offers more flexible control of the reconstructed image quality than its rate-constraint counterparts in medical image applications. Additionally, it effectively reduces the artifacts present in other approaches at low bit rates and also attenuates noise in the pre-compressed images. Furthermore, its advantages in progressive transmission and fast decoding make it suitable for bandwidth-limited telecommunications and web-based image browsing applications.
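    The encode/decode primitive underneath this approach is vector quantisation: blocks of the volume are mapped to the nearest entry of a trained codebook, and only the index is stored. A minimal sketch using SciPy's k-means follows; the block shape and codebook size are illustrative choices, not the thesis's pyramid scheme.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def vq_encode(volume: np.ndarray, block=(2, 4, 4), k: int = 256):
    """Train a codebook on 3-D blocks of an image stack and encode each
    block as the index of its nearest codeword; decoding is a table lookup."""
    z, y, x = block
    blocks = (volume.reshape(volume.shape[0] // z, z,
                             volume.shape[1] // y, y,
                             volume.shape[2] // x, x)
                    .transpose(0, 2, 4, 1, 3, 5)
                    .reshape(-1, z * y * x)
                    .astype(np.float64))
    codebook, _ = kmeans2(blocks, k, minit="++", seed=0)
    indices, dists = vq(blocks, codebook)   # nearest-codeword assignment
    mse = float(np.mean(dists ** 2))        # distortion actually achieved
    return codebook, indices, mse

vol = np.random.rand(16, 64, 64)            # synthetic 3-D stack
codebook, indices, mse = vq_encode(vol)
```

    A distortion-constraint coder compares the measured distortion against the user's target and refines the codebook, or descends to a finer pyramid level, until the target is met, rather than stopping at a prescribed bit rate.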

    Sparse representation based hyperspectral image compression and classification

    This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, termed the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit spectral correlations, and the latter in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demand of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches. For the proposed hyperspectral image classification method, we utilize the sparse coefficients to train support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to a number of similar or representative methods. The results show that our approach can outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for exploiting spectral correlations in hyperspectral images. In addition, we show that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
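    As a deliberately simplified illustration of the spectral-dictionary idea, the sketch below learns a dictionary over individual spectra and sparse-codes them with orthogonal matching pursuit. Scikit-learn's dictionary learner and the synthetic cube stand in for the thesis's SRSD training procedure and real hyperspectral data.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Synthetic stand-in for a hyperspectral cube: 64 x 64 pixels, 100 bands.
cube = np.random.rand(64, 64, 100)
pixels = cube.reshape(-1, 100)              # one spectrum per row

# Learn a spectral dictionary, then sparse-code every spectrum with OMP.
dico = MiniBatchDictionaryLearning(n_components=128, batch_size=256,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(pixels).transform(pixels)  # sparse coefficients
recon = codes @ dico.components_            # approximate spectra

# Only the few nonzero (index, value) pairs per pixel need storing --
# that sparsity is the compression. The same coefficients can also serve
# as features for the SVM/kNN classifiers mentioned above.
mse = float(np.mean((pixels - recon) ** 2))
```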