
    Single event upset hardened embedded domain specific reconfigurable architecture

    Quality and Rate Control of JPEG XR

    Driven by the need for seismic data compression with high dynamic range and 32-bit resolution, we propose two algorithms to efficiently and precisely control the signal-to-noise ratio (SNR) and bit rate in JPEG XR image compression, allowing users to compress seismic data to a target SNR or a target bit rate. Based on the quantization properties of JPEG XR and the nature of blank macroblocks, we build a reliable model between the quantization parameter (QP) and SNR. This enables us to estimate the right QP for a target quality for the JPEG XR encoder.
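    To illustrate the kind of QP-to-SNR control the abstract describes, here is a minimal sketch, not the paper's actual model: it assumes, as for generic uniform quantizers, that SNR decreases roughly linearly in log2(QP), fits that line from a few probe encodes, and inverts it for a target SNR. The function `encode_and_measure_snr` is a hypothetical stand-in for a JPEG XR encode/decode-and-measure cycle.

```python
import numpy as np

def estimate_qp_for_target_snr(encode_and_measure_snr, target_snr_db,
                               probe_qps=(8, 32, 128)):
    """Fit SNR(QP) = a + b*log2(QP) from probe encodes, then invert it."""
    logs = np.log2(np.asarray(probe_qps, dtype=float))
    snrs = np.array([encode_and_measure_snr(qp) for qp in probe_qps])
    b, a = np.polyfit(logs, snrs, 1)        # slope b < 0: SNR drops as QP grows
    qp = 2.0 ** ((target_snr_db - a) / b)   # invert the fitted model
    return int(np.clip(round(qp), 1, 255))  # clamp to a plausible QP range (assumed)
```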

    Image representation and compression via sparse solutions of systems of linear equations

    We are interested in finding sparse solutions to systems of linear equations $\mathbf{A}\mathbf{x} = \mathbf{b}$, where $\mathbf{A}$ is underdetermined and of full rank. In this thesis we examine an implementation of the orthogonal matching pursuit (OMP) algorithm, an algorithm that finds sparse solutions to equations like the one described above, and present a logic for its validation along with the corresponding validation protocol results. The implementation presented in this work improves on the performance reported in previously published work that used software from SparseLab. We also use and test OMP in the study of the compression properties of $\mathbf{A}$ in the context of image processing. We follow the common technique of image blocking used in the JPEG and JPEG 2000 standards. We make a small modification to the stopping criterion of OMP that yields a better compression-ratio-versus-image-quality trade-off as measured by the structural similarity (SSIM) and mean structural similarity (MSSIM) indices, which capture perceptual image quality. This results in slightly better compression than when using the more common peak signal-to-noise ratio (PSNR). We study various matrices whose column vectors come from the concatenation of waveforms based on the discrete cosine transform (DCT) and the Haar wavelet. We try multiple linearization algorithms and characterize their performance with respect to compression. An introduction and brief historical review of information theory, quantization and coding, and rate-distortion theory leads us to compute the distortion $D$ of the image compression and representation approach presented in this work. The choice of a lossless encoder $\gamma$ is left open for future work in order to obtain the complete characterization of the rate-distortion properties of the quantization/coding scheme proposed here. However, the analysis of natural image statistics is identified as a good design guideline for the eventual choice of $\gamma$. The lossless encoder $\gamma$ is to be understood in the terms of a quantizer $(\alpha, \gamma, \beta)$ as introduced by Gray and Neuhoff.
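    For reference, a minimal sketch of the standard OMP algorithm discussed above, in Python/NumPy. The stopping rule shown (residual norm below `tol`, or `max_atoms` atoms selected) is the generic one; the thesis's modified stopping criterion is not reproduced here.

```python
import numpy as np

def omp(A, b, max_atoms, tol=1e-6):
    """Greedy sparse solve of A x = b for underdetermined, full-rank A."""
    m, n = A.shape
    residual = b.copy()
    support = []                 # indices of selected columns (atoms)
    x = np.zeros(n)
    for _ in range(max_atoms):
        # Pick the column most correlated with the current residual.
        k = int(np.argmax(np.abs(A.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares fit on the selected atoms (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x
```

    In the compression setting described above, `A` would hold the concatenated DCT and Haar waveforms as columns, and `b` a vectorized image block.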

    Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which is robust against several geometric and signal-processing distortions. The trade-off between payload, robustness, and imperceptibility is an important aspect that has to be considered when a watermarking algorithm is designed. In our proposed scheme, before being embedded into the image, the watermark signal is encoded using a convolutional encoder, which performs forward error correction and thereby achieves better robustness. The embedding is then carried out in the discrete cosine transform (DCT) domain of the image, using an image normalization technique to achieve robustness against geometric and signal-processing distortions. The embedded watermark code bits are extracted and decoded using the Viterbi algorithm. To determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequences. The quality of the watermarked image is measured using the well-known indices Peak Signal-to-Noise Ratio (PSNR), Visual Information Fidelity (VIF), and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is measured with the Normalized Color Difference (NCD). Experimental results show that the proposed method performs well in terms of imperceptibility and robustness. A comparison between the proposed method and previously reported methods based on different techniques is also provided.
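    A minimal sketch of the presence/absence decision described above: compare the recovered (Viterbi-decoded) watermark bits against the original sequence via the BER and threshold the result. The threshold value is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def watermark_present(original_bits, recovered_bits, ber_threshold=0.2):
    """Declare the watermark present when the BER is below a threshold."""
    original = np.asarray(original_bits, dtype=np.uint8)
    recovered = np.asarray(recovered_bits, dtype=np.uint8)
    ber = float(np.mean(original != recovered))  # fraction of mismatched bits
    return ber <= ber_threshold, ber
```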

    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth, which explains the many efforts currently being invested in data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, images, and video so as to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest bits without losing the essential information content. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part whose detail lies beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information, on the other hand, is the part that is neither redundant nor irrelevant. Humans usually observe decompressed images, so image fidelity is subject to the capabilities and limitations of the human visual system. This paper provides a survey of various image compression techniques, their limitations, and compression rates, and highlights current research in medical image compression.
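    As a small illustration of the "redundant information" idea (not from the survey itself): the zeroth-order entropy of an 8-bit image's pixel histogram lower-bounds the average bits per pixel a memoryless lossless coder needs, so the gap between 8 bits/pixel and this entropy is a crude measure of statistical redundancy.

```python
import numpy as np

def bits_per_pixel_entropy(image_u8):
    """Zeroth-order entropy (bits/pixel) of an 8-bit image's histogram."""
    hist = np.bincount(image_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                           # ignore empty histogram bins
    return float(-(p * np.log2(p)).sum())  # <= 8 bits per pixel
```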

    LIDAR data classification and compression

    Airborne Laser Detection and Ranging (LIDAR) data has a wide range of applications in agriculture, archaeology, biology, geology, meteorology, military, transportation, and other fields. LIDAR acquisition consumes hundreds of gigabytes in a typical day, and the amount of data collected will continue to grow as sensors improve in resolution and functionality. LIDAR data classification and compression are therefore very important for managing, visualizing, analyzing, and using this huge amount of data. Among existing LIDAR data classification schemes, supervised learning has been used and can achieve up to 96% accuracy. However, some of the features used are not readily available, and training data is also not always available in practice. In existing LIDAR data compression schemes, the compressed size can be 5%-23% of the original size, but may still be on the order of gigabytes, which is impractical for many applications. The objectives of this dissertation are (1) to develop classification schemes that can classify airborne LIDAR data more accurately without some of the features or training data that existing work requires, and (2) to explore lossy compression schemes that can compress LIDAR data at a much higher compression rate than is currently available. We first investigate two independent ways to classify LIDAR data, depending on the availability of training data: when training data is available, we use supervised machine learning techniques such as support vector machines (SVM); when training data is not readily available, we develop an unsupervised classification method that can classify LIDAR data as well as supervised classification methods. Experimental results show that the accuracy of our classification results is over 99%. We then present two new lossy LIDAR data compression methods and compare their performance. The first is a wavelet-based compression scheme, while the second is geometry based. Our new geometry-based compression is a geometry- and statistics-driven LIDAR point-cloud compression method that combines application knowledge and scene content to enable fast transmission from the sensor platform while preserving the geometric properties of objects within a scene. The new algorithm is based on the idea of compression by classification. It exploits the simplicity of the height function as well as the local spatial coherence and linearity of aerial LIDAR data, and can automatically compress the data to the level of detail defined by the user. Either of the two developed classification methods can be used to automatically detect regions that are not locally linear, such as vegetation or trees. In those regions, local statistical descriptions, such as mean, variance, etc., are stored to represent the region efficiently and to restore the geometry in the decompression phase. The new geometry-based compression schemes for building and ground data compress efficiently and significantly reduce file size, while retaining a good fit for scalable "zoom in" requirements. Experimental results show that, compared with existing lossy LIDAR compression work, our proposed approach achieves a bit rate two orders of magnitude lower at the same quality, making feasible applications that were not practical before. The proposed highly efficient compression scheme also makes it possible to store the information in a database and query it efficiently.
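    A rough sketch of the "compression by classification" idea described in the abstract: grid cells whose points fit a plane well keep only the plane coefficients, while non-linear regions such as vegetation keep only local statistics. The cell granularity, the linearity test, and the data layout here are assumptions for illustration, not the dissertation's actual design.

```python
import numpy as np

def compress_cell(points_xyz, linear_threshold=0.05):
    """Summarize one grid cell of LIDAR points (an N x 3 array)."""
    z = points_xyz[:, 2]
    # Fit a plane z = a*x + b*y + c; a small residual means locally linear.
    A = np.c_[points_xyz[:, :2], np.ones(len(z))]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ coef
    if np.std(residual) < linear_threshold:
        return ("plane", coef)                 # 3 numbers for the whole cell
    return ("stats", (z.mean(), z.var()))      # local statistics for vegetation
```

    Decompression would then re-synthesize points from the plane in linear cells and from the stored statistics elsewhere, matching the abstract's description of restoring geometry from local statistical descriptions.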