    Lossless compression of image data products on the FIFE CD-ROM series

    How do you store enough of the key data sets, from a total of 120 gigabytes of data collected for a scientific experiment, on a collection of CD-ROMs small enough to distribute to a broad scientific community? In such an application, where information loss is unacceptable, lossless compression algorithms are the only choice. Although lossy compression algorithms can provide an order-of-magnitude improvement in compression ratios over lossless algorithms, the information that is lost is often part of the key scientific precision of the data. Therefore, lossless compression algorithms are and will continue to be extremely important for minimizing the storage requirements of archiving and distributing large earth and space science (ESS) data sets while preserving the essential scientific precision of the data.

    Information-Preserving Markov Aggregation

    We present a sufficient condition for a non-injective function of a Markov chain to be a second-order Markov chain with the same entropy rate as the original chain. This permits an information-preserving state space reduction by merging states or, equivalently, lossless compression of a Markov source on a sample-by-sample basis. The cardinality of the reduced state space is bounded from below by the node degrees of the transition graph associated with the original Markov chain. We also present an algorithm that lists all possible information-preserving state space reductions for a given transition graph. We illustrate our results by applying the algorithm to a bi-gram letter model of an English text. Comment: 7 pages, 3 figures, 2 tables
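
    As a concrete illustration of the quantities involved, here is a minimal Python sketch, assuming a stationary chain with transition matrix P: it computes the entropy rate directly from P, applies a candidate merge of states, and compares the rates before and after. The example chain, the grouping, and the strong-lumpability shortcut (which keeps the merged process Markov) are illustrative assumptions, not the paper's second-order condition or its enumeration algorithm.

        import numpy as np

        def stationary(P):
            # Stationary distribution: left eigenvector of P for eigenvalue 1.
            w, v = np.linalg.eig(P.T)
            pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
            return pi / pi.sum()

        def entropy_rate(P):
            # H = -sum_i pi_i sum_j P_ij log2 P_ij (bits per sample).
            pi = stationary(P)
            logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
            return -np.sum(pi[:, None] * P * logP)

        # Hypothetical 3-state chain in which states 1 and 2 are strongly
        # lumpable: both rows put the same total mass on each group, so the
        # merged process is again a Markov chain.
        P = np.array([[0.0, 0.5, 0.5],
                      [0.3, 0.2, 0.5],
                      [0.3, 0.5, 0.2]])
        groups = [[0], [1, 2]]
        Q = np.array([[P[g, :][:, h].sum(axis=1).mean() for h in groups]
                      for g in groups])
        print("original entropy rate:", entropy_rate(P))
        print("merged   entropy rate:", entropy_rate(Q))

    A merge is information-preserving in the paper's sense exactly when the rate of the merged process matches the original; in this toy example the merge discards information, and the gap between the two printed rates quantifies how much.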

    Optimal Compression of Floating-point Astronomical Images Without Significant Loss of Information

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6--10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would themselves be quantized if the pixel values are not dithered. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image. Comment: Accepted by PASP
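
    To make the dithering step concrete, here is a minimal Python sketch of subtractive-dithering quantization, assuming a step size tied to the background noise and a seeded uniform dither; the step-size rule, the +0.5 restore offset, and the synthetic image are illustrative, not fpack's exact implementation, and the Rice coding of the resulting integers is omitted.

        import numpy as np

        rng = np.random.default_rng(42)

        def quantize(img, q, dither):
            # Scale to integer levels; adding a uniform dither before the
            # floor decorrelates the quantization error from the signal.
            return np.floor(img / q + dither).astype(np.int32)

        def dequantize(ints, q, dither):
            # Subtractive dithering: removing the same dither on restore
            # leaves the mean (and mode/median statistics) unbiased.
            return (ints - dither + 0.5) * q

        sigma = 5.0                      # assumed background noise level
        q = sigma / 16.0                 # ~16 quantization levels per sigma
        img = 100.0 + rng.normal(0.0, sigma, (256, 256))
        dither = rng.random(img.shape)   # reproducible from a stored seed

        ints = quantize(img, q, dither)
        rest = dequantize(ints, q, dither)
        print("mean bias:", rest.mean() - img.mean())
        print("added rms noise:", (rest - img).std())   # ~ q / sqrt(12)

    A coarser q (fewer levels per sigma) compresses better but raises the added rms noise, which is the precision trade-off the experiments above quantify.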

    A Compact Index for Order-Preserving Pattern Matching

    Order-preserving pattern matching was introduced recently but has already attracted much attention. Given a reference sequence and a pattern, we want to locate all substrings of the reference sequence whose elements have the same relative order as the pattern elements. For this problem we consider the offline version, in which we build an index for the reference sequence so that subsequent searches can be completed very efficiently. We propose a space-efficient index that works well in practice despite its lack of good worst-case time bounds. Our solution is based on the new approach of decomposing the indexed sequence into an order component, containing ordering information, and a delta component, containing information on the absolute values. Experiments show that this approach is viable, faster than the available alternatives, and the first to offer simultaneously small space usage and fast retrieval. Comment: 16 pages. A preliminary version appeared in the Proc. IEEE Data Compression Conference, DCC 2017, Snowbird, UT, USA, 2017
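
    For intuition about the matching relation itself, here is a naive quadratic scan in Python; the rank-tuple signature and the distinct-values assumption are simplifications, and the paper's contribution is precisely the index that avoids rescanning the reference for every query.

        import numpy as np

        def order_signature(window):
            # Rank of each element within the window: two sequences match
            # order-preservingly iff their rank tuples coincide (assuming
            # distinct values, for simplicity).
            return tuple(np.argsort(np.argsort(window)))

        def op_matches(text, pattern):
            t, m = np.asarray(text), len(pattern)
            sig = order_signature(np.asarray(pattern))
            return [i for i in range(len(t) - m + 1)
                    if order_signature(t[i:i + m]) == sig]

        # All length-3 windows shaped low, high, middle -- like (1, 3, 2):
        print(op_matches([10, 22, 15, 30, 41, 32, 5, 8, 7], [1, 3, 2]))
        # -> [0, 3, 6]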

    A joint motion & disparity motion estimation technique for 3D integral video compression using evolutionary strategy

    3D imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging, which can capture true 3D color images with only one camera, has been seen as the right technology to offer stress-free viewing to audiences of more than one person. Just like any digital video, 3D video sequences must also be compressed in order to make them suitable for consumer domain applications. However, ordinary compression techniques found in state-of-the-art video coding standards such as H.264, MPEG-4 and MPEG-2 are not capable of producing enough compression while preserving the 3D cues. Fortunately, a huge amount of redundancy can be found in an integral video sequence in terms of motion and disparity. This paper discusses a novel approach that uses both motion and disparity information to compress 3D integral video sequences. We propose to decompose the integral video sequence down to viewpoint video sequences and jointly exploit motion and disparity redundancies to maximize the compression. We further propose an optimization technique based on evolutionary strategies to minimize the computational complexity of the joint motion and disparity estimation. Experimental results demonstrate that joint motion and disparity estimation can achieve over 1 dB objective quality gain over normal motion estimation. Once combined with the evolutionary strategy, this can achieve up to 94% computational cost saving.
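
    As a baseline for what the evolutionary strategy avoids, here is a Python sketch of exhaustive block matching with a sum-of-absolute-differences cost; the block size, window radius, and cost function are illustrative assumptions, and the paper's method instead samples a joint (motion, disparity) candidate space across the decomposed viewpoint sequences.

        import numpy as np

        def sad(a, b):
            # Sum of absolute differences between two equally sized blocks.
            return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

        def best_vector(block, ref, y, x, radius=4):
            # Exhaustive search over a (2*radius+1)^2 window: the costly
            # baseline that evolutionary sampling is meant to replace.
            h, w = block.shape
            best_cost, best_v = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= ref.shape[0] - h and 0 <= xx <= ref.shape[1] - w:
                        cost = sad(block, ref[yy:yy + h, xx:xx + w])
                        if best_cost is None or cost < best_cost:
                            best_cost, best_v = cost, (dy, dx)
            return best_v, best_cost

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, (2, -1), axis=(0, 1))   # simulate a known shift
        print(best_vector(cur[16:24, 16:24], ref, 16, 16))   # -> ((-2, 1), 0)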

    On the Use of Compressed Polyhedral Quadrature Formulas in Embedded Interface Methods

    The main idea of this paper is to apply a recent quadrature compression technique to algebraic quadrature formulas on complex polyhedra. The quadrature compression substantially reduces the number of integration points but preserves the accuracy of integration. The compression is easy to achieve since it is entirely based on the fundamental methods of numerical linear algebra. The resulting compressed formulas are applied in an embedded interface method to integrate the weak form of the Navier-Stokes equations. Simulations of flow past stationary and moving interface problems demonstrate that the compressed quadratures improve the efficiency of performing the weak form integration while preserving accuracy and order of convergence.
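
    A minimal one-dimensional sketch of the idea in Python, assuming a monomial basis and SciPy's non-negative least squares (NNLS) as the linear-algebra step; the paper works on complex polyhedra, but the moment-matching structure is the same.

        import numpy as np
        from scipy.optimize import nnls

        def compress_rule(x, w, degree):
            # Moment matching: find sparse non-negative weights w_new with
            # V @ w_new = V @ w, where row k of V evaluates x**k at the nodes.
            V = np.vander(x, degree + 1, increasing=True).T
            w_new, _ = nnls(V, V @ w)
            keep = w_new > 1e-12     # NNLS support: at most degree+1 nodes
            return x[keep], w_new[keep]

        # Compress a 50-point trapezoidal rule on [0, 1] while keeping
        # exactness for all polynomials of degree <= 5.
        x = np.linspace(0.0, 1.0, 50)
        w = np.full(50, 1.0 / 49.0)
        w[[0, -1]] *= 0.5
        xs, ws = compress_rule(x, w, 5)
        print(len(xs), "nodes instead of", len(x))
        print("integral of x**3:", ws @ xs**3, "vs", w @ x**3)

    Because the positive support of an NNLS solution corresponds to linearly independent columns, at most degree + 1 of the original nodes survive, while every moment up to that degree, and hence every polynomial integral the original rule computed, is reproduced.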

    Compression Bases in Unital Groups

    We study unital groups with a distinguished family of compressions called a compression base. A motivating example is the partially ordered additive group of a von Neumann algebra with all Naimark compressions as the compression base. Comment: 8 pages

    Using wavelets for compression and detecting events in anomalous network traffic

    Monitoring and measuring various metrics of high data-rate networks produces a vast amount of information over a long period of time, making the storage of the monitored data a serious issue. Furthermore, for the collected monitoring data to be useful to network analysts, these measurements need to be processed in order to detect interesting characteristics. In this paper wavelet analysis is used as a multi-resolution analysis tool for compression of data rate measurements. Two known thresholds are suggested for lossy compression and event detection purposes. Results show that high compression ratios are achievable while preserving the quality (quantitative and visual aspects) and the energy of the signal, and that sudden changes can be detected.
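
    A sketch of both uses with PyWavelets, assuming a db4 wavelet and the standard universal threshold; the paper's two thresholds and its traffic data differ, so every parameter here is a placeholder.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.normal(size=1024)
        signal[700] += 3.0                 # injected sudden change ("event")

        coeffs = pywt.wavedec(signal, "db4", level=5)

        # Lossy compression: zero small detail coefficients (universal threshold,
        # with the noise level estimated from the finest-scale coefficients).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
        kept = [coeffs[0]] + [pywt.threshold(c, thresh, "hard") for c in coeffs[1:]]
        nonzero = sum(int((c != 0).sum()) for c in kept)
        print("kept", nonzero, "of", sum(c.size for c in coeffs), "coefficients")

        # Event detection: unusually large finest-scale coefficients flag
        # sudden changes; each index maps back to roughly twice its position.
        events = np.nonzero(np.abs(coeffs[-1]) > thresh)[0] * 2
        print("events near samples:", events)

        recon = pywt.waverec(kept, "db4")
        print("rms error:", float(np.sqrt(np.mean((recon - signal) ** 2))))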