16,401 research outputs found

    A new efficient analytically proven lossless data compression for data transmission technique

    A new lossless data compression method for data transmission is proposed. This compression mechanism does not face the problem of mapping elements from a domain that is much larger than its range; the algorithm sidesteps this problem via a pre-defined code word list. The algorithm has fast encoding and decoding mechanisms and is also proven analytically to be a lossless data compression technique.
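    The abstract does not give the pre-defined code word list itself, so the sketch below is only a generic, hypothetical illustration of coding against a fixed, prefix-free code word table (the symbols and code words are assumptions, not the paper's construction); losslessness follows because the table is a bijection onto a prefix-free set:

```python
# Minimal sketch: lossless coding with a pre-defined, prefix-free code word list.
# The table below is purely illustrative; the paper's actual list is not given here.
CODE_WORDS = {"a": "0", "b": "10", "c": "110", "d": "111"}   # symbol -> code word
DECODE = {v: k for k, v in CODE_WORDS.items()}               # code word -> symbol

def encode(text: str) -> str:
    """Concatenate the pre-defined code words for each input symbol."""
    return "".join(CODE_WORDS[ch] for ch in text)

def decode(bits: str) -> str:
    """Scan the bit string; prefix-freeness guarantees unique decodability."""
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    assert buf == "", "truncated code word"
    return "".join(out)

if __name__ == "__main__":
    msg = "abacad"
    assert decode(encode(msg)) == msg   # exact round trip, i.e. lossless
```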

    A modified approach to the new lossless data Compression method

    This paper proposes a modified approach to the new lossless data compression method. It is based on hiding data reversibly with a location map. It performs the same as the earlier algorithm but follows a strictly lossless strategy, whereas the former approach did not. It can compress any kind of symbols because it operates on binary symbols. It is faster than many algorithms because it involves no complex mathematical operations. Experimental results show that as the symbol probability increases, the algorithm achieves a good compression ratio.

    Analysis-preserving video microscopy compression via correlation and mathematical morphology

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000, and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes.
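    The abstract names Pearson's correlation and mathematical morphology but does not spell out how they are combined, so the following is only a rough sketch under assumed details (frame shapes, threshold, and structuring element are illustrative, and the PSF-based step is omitted): it scores a frame against a reference with Pearson's r and cleans a change mask with a morphological opening:

```python
# Rough sketch (assumed details): Pearson correlation between frames plus a
# morphological opening on a change mask. Not the paper's actual pipeline.
import numpy as np
from scipy.ndimage import binary_opening

def pearson_r(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Pearson correlation coefficient between two equally shaped frames."""
    a = frame_a.ravel().astype(np.float64)
    b = frame_b.ravel().astype(np.float64)
    return float(np.corrcoef(a, b)[0, 1])

def change_mask(frame: np.ndarray, reference: np.ndarray,
                threshold: float = 3.0) -> np.ndarray:
    """Mark pixels that differ strongly from the reference, then remove
    isolated speckle with a 3x3 morphological opening."""
    diff = np.abs(frame.astype(np.float64) - reference.astype(np.float64))
    mask = diff > threshold * diff.std()
    return binary_opening(mask, structure=np.ones((3, 3), dtype=bool))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.poisson(20, size=(64, 64)).astype(np.float64)
    frame = ref + rng.normal(0, 1, size=ref.shape)
    frame[10:14, 10:14] += 50              # a small bright event
    print("Pearson r vs reference:", round(pearson_r(frame, ref), 3))
    print("changed pixels kept:", int(change_mask(frame, ref).sum()))
```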

    ISSDC: Digram Coding Based Lossless Data Compression Algorithm

    In this paper, a new lossless data compression method based on digram coding is introduced. This method uses semi-static dictionaries: all of the characters used and the most frequently used two-character blocks (digrams) in the source are found and inserted into a dictionary in a first pass, and compression is performed in a second pass. This two-pass structure is repeated several times, and in every iteration a particular number of elements is inserted into the dictionary until the dictionary is full. The algorithm (ISSDC: Iterative Semi-Static Digram Coding) also includes mechanisms that can decide the total number of iterations and the dictionary size whenever these values are not given by the user. Our experiments show that ISSDC is better than LZW/GIF and BPE in compression ratio. It is worse than DEFLATE in compression of text and binary data, but better than PNG (which uses DEFLATE compression) in lossless compression of simple images.
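    As an illustration only, the sketch below implements a single round of semi-static digram coding as described above (collect the used bytes plus the most frequent digrams into a dictionary in a first pass, then encode greedily in a second pass); the iterative refilling, automatic dictionary sizing, and output format of the full ISSDC are not reproduced here:

```python
# Simplified, single-round digram coder: one dictionary-building pass over the
# source, then one encoding pass. Illustrative only; not the full ISSDC.
from collections import Counter

def build_dictionary(data: bytes, size: int = 256) -> list[bytes]:
    """Dictionary = every distinct byte, then the most frequent digrams."""
    entries = [bytes([b]) for b in sorted(set(data))]
    digrams = Counter(data[i:i + 2] for i in range(len(data) - 1))
    for digram, _ in digrams.most_common():
        if len(entries) >= size:
            break
        entries.append(digram)
    return entries

def encode(data: bytes, dictionary: list[bytes]) -> list[int]:
    """Greedy encoding: prefer a digram entry, otherwise emit the single byte."""
    index = {entry: i for i, entry in enumerate(dictionary)}
    out, i = [], 0
    while i < len(data):
        if data[i:i + 2] in index:
            out.append(index[data[i:i + 2]])
            i += 2
        else:
            out.append(index[data[i:i + 1]])
            i += 1
    return out

def decode(codes: list[int], dictionary: list[bytes]) -> bytes:
    return b"".join(dictionary[c] for c in codes)

if __name__ == "__main__":
    text = b"abracadabra abracadabra"
    d = build_dictionary(text)
    codes = encode(text, d)
    assert decode(codes, d) == text
    print(f"{len(text)} bytes -> {len(codes)} dictionary indices")
```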

    PPM performance with BWT complexity: a new method for lossless data compression

    This work combines a new fast context-search algorithm with the lossless source coding models of PPM to achieve a lossless data compression algorithm with the linear context-search complexity and memory of BWT and Ziv-Lempel codes and the compression performance of PPM-based algorithms. Both sequential and nonsequential encoding are considered. The proposed algorithm yields an average rate of 2.27 bits per character (bpc) on the Calgary corpus, comparing favorably to the 2.33 and 2.34 bpc of PPM5 and PPM* and the 2.43 bpc of BW94, but not matching the 2.12 bpc of PPMZ9, which, at the time of this publication, gives the greatest compression of all algorithms reported on the Calgary corpus results page. The proposed algorithm gives an average rate of 2.14 bpc on the Canterbury corpus. The Canterbury corpus Web page gives average rates of 1.99 bpc for PPMZ9, 2.11 bpc for PPM5, 2.15 bpc for PPM7, and 2.23 bpc for BZIP2 (a BWT-based code) on the same data set.
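    The comparison rests on average rate in bits per character (bpc), i.e. total compressed bits divided by the number of input characters. A minimal sketch of that bookkeeping, using Python's built-in bz2 (a BWT-based compressor like the BZIP2 baseline above) purely as a stand-in, since the paper's codec is not available here:

```python
# Minimal sketch: measuring average rate in bits per character (bpc).
# bz2 is used only as a stand-in compressor for illustration.
import bz2

def bits_per_character(text: bytes, compressed: bytes) -> float:
    return 8.0 * len(compressed) / len(text)

if __name__ == "__main__":
    sample = b"the quick brown fox jumps over the lazy dog\n" * 2000
    packed = bz2.compress(sample)
    print(f"{bits_per_character(sample, packed):.2f} bpc "
          f"({len(sample)} chars -> {len(packed)} bytes)")
```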

    Comparative analysis of various Image compression techniques for Quasi Fractal lossless compression

    The most important parameters to consider in image compression methods are peak signal-to-noise ratio (PSNR) and compression ratio (CR). These two parameters are used to judge the quality of any image and play a vital role in image processing applications. The biomedical domain is one of the critical areas where large image datasets are involved in analysis, so biomedical image compression is essential. Compression techniques are broadly classified into lossless and lossy. As the name indicates, in the lossless technique the image is compressed without any loss of data, whereas in the lossy technique some information may be lost. Here both lossy and lossless techniques for image compression are used. In this research, different compression approaches from these two categories are discussed, and compression of brain images is highlighted. Both lossy and lossless techniques are implemented and their advantages and disadvantages are studied. For this research, two important quality parameters, CR and PSNR, are calculated. The existing techniques DCT, DFT, DWT, and Fractal are implemented, and new techniques are introduced: the Oscillation Concept method, BTC-SPIHT, and a hybrid technique using an adaptive threshold and a Quasi Fractal algorithm.
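    PSNR and CR follow their standard definitions, PSNR = 10*log10(peak^2 / MSE) and CR = original size / compressed size. A short sketch of both metrics; the 8-bit peak value and the toy arrays are illustrative assumptions:

```python
# Standard image-quality bookkeeping: PSNR (for lossy methods) and compression
# ratio. The 8-bit peak value and the toy arrays are illustrative assumptions.
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    return original_bytes / compressed_bytes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(img, noisy):.2f} dB")                  # lossy quality
    print(f"CR:   {compression_ratio(128 * 128, 4096):.1f}:1")  # e.g. 16 KiB -> 4 KiB
```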

    Generalization Gap in Amortized Inference

    The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications such as lossless compression. In this work, we study the generalization of a popular class of probabilistic models, the Variational Auto-Encoder (VAE). We discuss the two generalization gaps that affect VAEs and show that overfitting is usually dominated by amortized inference. Based on this observation, we propose a new training objective that improves the generalization of amortized inference. We demonstrate how our method can improve performance in the context of image modeling and lossless compression.

    Data compression and computational efficiency

    In this thesis we seek to make advances towards the goal of effective learned compression. This entails using machine learning models as the core constituent of compression algorithms, rather than hand-crafted components. To that end, we first describe a new method for lossless compression. This method allows a class of existing machine learning models - latent variable models - to be turned into lossless compressors. Thus many future advancements in the field of latent variable modelling can be leveraged in the field of lossless compression. We demonstrate a proof-of-concept of this method on image compression. Further, we show that it can scale to very large models and to image compression problems which closely resemble the real-world use cases that we seek to tackle. The use of the above compression method relies on executing a latent variable model. Since these models can be large in size and slow to run, we consider how to mitigate these computational costs. We show that by implementing much of the model using binary-precision parameters, rather than floating-point precision, we can still achieve reasonable modelling performance while requiring only a fraction of the storage space and execution time. Lastly, we consider how learned compression can be applied to 3D scene data - a data medium that is increasing in prevalence and can require a significant amount of space. A recently developed class of machine learning models - scene representation functions - has demonstrated good results on modelling such 3D scene data. We show that by compressing these representation functions themselves we can achieve good scene reconstruction with a very small model size.
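    The thesis's exact binarisation scheme is not described in this abstract, so the sketch below is only a generic illustration of the storage argument (sign-binarised weights with one scale per tensor, an assumed scheme, not the thesis's method): storing one bit per parameter instead of a 32-bit float cuts parameter storage by roughly 32x:

```python
# Generic sketch of binary-precision parameters: keep only the sign of each
# weight plus one float scale per tensor. Illustrative assumption only.
import numpy as np

def binarize(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Return packed sign bits and a scale that preserves the mean magnitude."""
    scale = float(np.mean(np.abs(weights)))
    bits = np.packbits(weights.ravel() >= 0)        # 1 bit per parameter
    return bits, scale

def dequantize(bits: np.ndarray, scale: float, shape: tuple[int, ...]) -> np.ndarray:
    n = int(np.prod(shape))
    signs = np.unpackbits(bits)[:n].astype(np.float64) * 2.0 - 1.0
    return (signs * scale).reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.05, size=(512, 512)).astype(np.float32)
    bits, scale = binarize(w)
    w_hat = dequantize(bits, scale, w.shape)
    print(f"float32 storage: {w.nbytes} bytes, binary storage: {bits.nbytes} bytes")
    print(f"mean absolute error after binarisation: {np.mean(np.abs(w - w_hat)):.4f}")
```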

    GPU Lossless Hyperspectral Data Compression System

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of high throughput. In order to achieve high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA®. The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral Image Compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
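    The FL / CCSDS 123 predictor itself is not reproduced here; as a heavily simplified, assumed illustration of adaptive-filter prediction on hyperspectral data, the sketch below predicts each band from the previous band with a single adaptively updated weight and emits exact integer residuals that an entropy coder would then compress:

```python
# Heavily simplified stand-in for an adaptive-filtering hyperspectral predictor
# (NOT the actual FL / CCSDS 123 algorithm): predict band z from band z-1 with
# one adaptively updated weight and emit exact integer prediction residuals.
import numpy as np

def predict_residuals(cube: np.ndarray, step: float = 0.05) -> np.ndarray:
    """cube has shape (bands, rows, cols); returns residuals of the same shape."""
    bands, rows, cols = cube.shape
    residuals = np.empty_like(cube, dtype=np.int64)
    residuals[0] = cube[0]                       # first band stored as-is
    for z in range(1, bands):
        w = 1.0                                  # adaptive weight, reset per band
        prev = cube[z - 1].astype(np.float64)    # decoder already has this band
        for r in range(rows):
            for c in range(cols):
                pred = int(round(w * prev[r, c]))
                err = int(cube[z, r, c]) - pred  # exact, so decoding is lossless
                residuals[z, r, c] = err
                # Normalized-LMS update; it uses only data the decoder also has,
                # so the decoder can mirror it and reproduce every prediction.
                w += step * err * prev[r, c] / (prev[r, c] ** 2 + 1.0)
    return residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(100, 4000, size=(64, 64))
    cube = np.stack([base + rng.integers(-5, 6, size=base.shape) for _ in range(8)])
    res = predict_residuals(cube)
    print("mean |residual| over predicted bands:", float(np.abs(res[1:]).mean()))
```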