    Compression of spectral meteorological imagery

    Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min), while downlink capacity is typically limited to less than 10 Gb per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems with compression ratios ranging from as little as three-to-one to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2D-DCT engine. Spectral relations in the data are exploited through block mean extraction followed by an orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.
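    To make the block-oriented DCT stage concrete, here is a minimal Python sketch of mean extraction plus an orthonormal 2D DCT on a 32x32 block, with small coefficients discarded. It only illustrates the general technique; it is not the spectral decorrelation pipeline or the PFFT hardware described above, and the function names and threshold are invented for the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep_fraction=0.25):
    """Toy block compressor: remove the block mean, apply an orthonormal
    2D DCT, and keep only the largest-magnitude coefficients."""
    mean = block.mean()
    coeffs = dctn(block - mean, norm="ortho")
    k = max(1, int(keep_fraction * coeffs.size))
    threshold = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    coeffs[np.abs(coeffs) < threshold] = 0.0   # discard small coefficients
    return mean, coeffs

def decompress_block(mean, coeffs):
    """Invert the DCT and restore the block mean."""
    return idctn(coeffs, norm="ortho") + mean

# Usage on a synthetic 32x32 block (the block size of the 2D-DCT engine above).
rng = np.random.default_rng(0)
block = rng.normal(loc=300.0, scale=5.0, size=(32, 32))
mean, coeffs = compress_block(block)
reconstruction = decompress_block(mean, coeffs)
print("kept coefficients:", int(np.count_nonzero(coeffs)),
      "max abs error:", float(np.abs(reconstruction - block).max()))
```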

    On-the-fly memory compression for multibody algorithms.

    Memory and bandwidth demands challenge developers of particle-based codes that have to scale on new architectures, as the growth of concurrency outpaces improvements in memory access facilities, as the memory per core tends to stagnate, and as communication networks cannot increase bandwidth arbitrarily. We propose to analyse each particle of such a code to determine whether a hierarchical data representation that stores data with reduced precision caps the memory demands without exceeding given error bounds. For admissible candidates, we perform this compression and thus reduce the pressure on the memory subsystem, lower the total memory footprint, and reduce the data to be exchanged via MPI. Notably, our analysis and transformation change the data compression dynamically, i.e. the choice of data format follows the solution characteristics, and they do not require us to alter the core simulation code.
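    The core decision the abstract describes, storing an attribute with reduced precision only if a given error bound is respected, can be sketched in a few lines of Python. The function name, the absolute tolerance, and the float64-to-float32 demotion are illustrative assumptions, not the paper's hierarchical format.

```python
import numpy as np

def maybe_demote(values, abs_tol=1e-6):
    """Store a particle attribute in float32 only if converting back to
    float64 keeps every entry within abs_tol of the original; otherwise
    keep full precision. (Illustrative sketch, not the paper's format.)"""
    lowered = values.astype(np.float32)
    if np.max(np.abs(lowered.astype(np.float64) - values)) <= abs_tol:
        return lowered           # admissible: halves this attribute's footprint
    return values                # error bound violated: keep float64

# Usage: positions in the unit box are safely representable in float32 here.
positions = np.random.default_rng(1).uniform(0.0, 1.0, size=(1024, 3))
stored = maybe_demote(positions)
print(stored.dtype, stored.nbytes, "bytes, down from", positions.nbytes)
```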

    Understanding the nature of "superhard graphite"

    Numerous experiments have shown that on cold compression graphite transforms into a new superhard and transparent allotrope. Several structures with different topologies have been proposed for this phase. While experimental data are consistent with these models, the only way to solve the puzzle is to find which structure is kinetically easiest to form. Using state-of-the-art molecular-dynamics transition path sampling simulations, we investigate kinetic pathways of the pressure-induced transformation of graphite to various superhard candidate structures. Unlike hitherto applied methods for elucidating the nature of superhard graphite, transition path sampling realistically models the nucleation events necessary for physically meaningful transformation kinetics. We demonstrate that the nucleation mechanism and kinetics lead to M-carbon as the final product. W-carbon, initially a competitor to M-carbon, is ruled out by phase growth. The bct-C4 structure is not expected to be produced by cold compression owing to its less probable nucleation and higher barrier of formation.

    Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain

    Real-world data typically contain repeated and periodic patterns. This suggests that they can be effectively represented and compressed using only a few coefficients of an appropriate basis (e.g., Fourier, Wavelets, etc.). However, distance estimation when the data are represented using different sets of coefficients is still a largely unexplored area. This work studies the optimization problems related to obtaining the tightest lower/upper bound on Euclidean distances when each data object is potentially compressed using a different set of orthonormal coefficients. Our technique leads to tighter distance estimates, which translates into more accurate search, learning and mining operations directly in the compressed domain. We formulate the problem of estimating lower/upper distance bounds as an optimization problem. We establish the properties of optimal solutions, and leverage the theoretical analysis to develop a fast algorithm to obtain an exact solution to the problem. The suggested solution provides the tightest estimation of the L2-norm or the correlation. We show that typical data-analysis operations, such as k-NN search or k-Means clustering, can operate more accurately using the proposed compression and distance reconstruction technique. We compare it with many other prevalent compression and reconstruction techniques, including random projections and PCA-based techniques. We highlight a surprising result, namely that when the data are highly sparse in some basis, our technique may even outperform PCA-based compression. The contributions of this work are generic as our methodology is applicable to any sequential or high-dimensional data as well as to any orthogonal data transformation used for the underlying data compression scheme. Comment: 25 pages, 20 figures, accepted in VLD
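    For intuition, the sketch below computes simple lower/upper bounds on the Euclidean distance between two sequences that keep different top-k sets of orthonormal DCT coefficients plus the energy of what was discarded. It relies only on the triangle inequality, so it is a loose baseline rather than the optimal bounds derived in the paper, and all names are illustrative.

```python
import numpy as np
from scipy.fft import dct

def top_k(signal, k):
    """Keep the k largest-magnitude coefficients of an orthonormal DCT and
    record the energy of everything that was discarded."""
    coeffs = dct(signal, norm="ortho")
    idx = np.argsort(np.abs(coeffs))[-k:]
    kept = {int(i): float(coeffs[i]) for i in idx}
    residual = float(np.sum(coeffs**2) - sum(v**2 for v in kept.values()))
    return kept, residual

def distance_bounds(kept_x, resid_x, kept_y, resid_y, n):
    """Triangle-inequality bounds (clamped below at zero):
    d(xhat, yhat) - ex - ey <= d(x, y) <= d(xhat, yhat) + ex + ey,
    where ex, ey are the norms of the discarded parts."""
    xhat, yhat = np.zeros(n), np.zeros(n)
    for i, v in kept_x.items():
        xhat[i] = v
    for i, v in kept_y.items():
        yhat[i] = v
    d_hat = np.linalg.norm(xhat - yhat)   # orthonormal basis: same distance in either domain
    slack = np.sqrt(max(resid_x, 0.0)) + np.sqrt(max(resid_y, 0.0))
    return max(0.0, d_hat - slack), d_hat + slack

# Usage: each object compressed with its own best 12 of 64 coefficients.
rng = np.random.default_rng(0)
x, y = rng.normal(size=64), rng.normal(size=64)
(kept_x, rx), (kept_y, ry) = top_k(x, 12), top_k(y, 12)
low, high = distance_bounds(kept_x, rx, kept_y, ry, 64)
print(low, "<=", float(np.linalg.norm(x - y)), "<=", high)
```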

    Techniques for lossless image compression

    Popular lossless image compression techniques used today belong to the Lempel-Ziv family of encoders. These techniques are generic in nature and do not take full advantage of the two-dimensional correlation of digital image data. They process a one-dimensional stream of data, replacing repetitions with smaller codes. Techniques for Lossless Image Compression introduces a new model for lossless image compression that consists of two stages: transformation and encoding. Transformation takes advantage of the correlative properties of the data, modifying it in order to maximize the use of encoding techniques. Encoding can be described as replacing data symbols that occur frequently or in repeated groups with codes that are represented in a smaller number of bits. Techniques presented in this thesis include descriptions of Lempel-Ziv encoders in use today as well as several new techniques involving the model of transformation and encoding mentioned previously. Example compression ratios achieved by each technique when applied to a sample set of gray-scale cardiac images are provided for comparison.
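    As a toy illustration of the transformation-then-encoding model (not the specific techniques developed in the thesis), the Python sketch below decorrelates an image with simple left-neighbour differencing and then hands the residuals to a Lempel-Ziv-family encoder (zlib); the synthetic image and function names are invented for the example.

```python
import zlib
import numpy as np

def transform(img):
    """Stage 1: store each pixel as its difference from the left neighbour
    (first column kept as-is), which makes smooth images highly repetitive."""
    out = img.astype(np.int16)
    out[:, 1:] -= img[:, :-1].astype(np.int16)
    return out

def encode(arr):
    """Stage 2: pass the residuals to a Lempel-Ziv-family encoder."""
    return zlib.compress(arr.tobytes(), level=9)

# Usage: a smooth synthetic 8-bit "image" with a little noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:256, 0:256]
img = np.clip((xx + yy) // 2 + rng.integers(0, 3, size=(256, 256)), 0, 255).astype(np.uint8)
plain = zlib.compress(img.tobytes(), level=9)
two_stage = encode(transform(img))
print(len(plain), "bytes without the transform vs", len(two_stage), "bytes with it")
```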

    Image File Compression with the Walsh-Hadamard Transform Algorithm

    Image compression was developed to facilitate image storage and transmission. Current compression techniques allow an image to be compressed so that its size is much smaller than the original. In general, data compression methods can be divided into two groups, namely lossy and lossless. Lossy compression is image compression in which the compressed image is not identical to the original because some information is lost, but the loss is still tolerable to visual perception, since the eye can barely distinguish such small changes in the image. This method produces a higher compression ratio than the lossless method. In this study, the algorithms used to compare data compression are the Walsh-Hadamard Transform and Run Length Encoding (RLE). Image transformation is a very important subject in image processing. The image resulting from the transformation process can be re-analyzed, interpreted, and used as a reference for further processing. The purpose of applying an image transformation is to obtain clearer information (feature extraction) from an image. The Walsh-Hadamard transform is an orthogonal transformation that decomposes a signal into a set of mutually orthogonal, rectangular waveforms. The Run Length Encoding (RLE) algorithm works on runs of consecutive identical characters, replacing each run of the same byte with a single count and value. This method is used to compress images that have groups of pixels with the same gray level.
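    The two building blocks the abstract compares are easy to sketch: an orthonormal Walsh-Hadamard transform of an image block and a byte-level run-length encoder. The Python below is only a minimal illustration of those two pieces (natural-ordered Hadamard matrix, invented block and function names), not the study's full compression pipeline.

```python
import numpy as np
from scipy.linalg import hadamard

def walsh_hadamard_2d(block):
    """Orthonormal 2D Walsh-Hadamard transform of an N x N block
    (N must be a power of two); rows/columns are +-1 rectangular waves."""
    n = block.shape[0]
    H = hadamard(n) / np.sqrt(n)      # orthonormal, natural (Hadamard) ordering
    return H @ block @ H.T

def run_length_encode(symbols):
    """Replace each run of identical symbols with a (count, symbol) pair."""
    runs, i = [], 0
    while i < len(symbols):
        j = i
        while j < len(symbols) and symbols[j] == symbols[i]:
            j += 1
        runs.append((j - i, symbols[i]))
        i = j
    return runs

# Usage: a flat 8x8 block transforms to one big coefficient plus many zeros,
# which RLE then stores very compactly.
block = np.full((8, 8), 120.0)
coeffs = np.round(walsh_hadamard_2d(block)).astype(int)
print(run_length_encode(coeffs.ravel().tolist()))   # [(1, 960), (63, 0)]
```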

    Comparative Compression of Wavelet Haar Transformation with Discrete Wavelet Transform on Colored Image Compression

    In this research, the algorithms used to compress images are the Haar wavelet transform and the discrete wavelet transform. Image compression based on the Haar wavelet transform uses a calculation scheme of decomposition in the row direction followed by decomposition in the column direction. In discrete-wavelet-transform-based image compression, the size of the compressed image is more optimal because information that is of little use, barely perceived, and barely visible to humans is eliminated, so that the data is still considered usable even though it is compressed. The data used were collected directly, and the tests show that digital image compression based on the Haar wavelet transform achieves a compression ratio of 41%, while the discrete wavelet transform reaches 29.5%. With regard to the research problem of storage-media efficiency, it can be concluded that the appropriate algorithm to choose is the Haar wavelet transform. To improve compression results, it is recommended to use wavelet transforms other than Haar, such as Daubechies, Symlets, and so on.
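    The row-then-column decomposition mentioned above can be illustrated with one level of a 2D Haar transform in a few lines of Python; the example block, the orthonormal scaling, and the thresholding of small detail coefficients are assumptions made for the sketch, not the paper's exact procedure.

```python
import numpy as np

def haar_1d(v):
    """One level of the 1D Haar transform: pairwise averages (approximation)
    followed by pairwise differences (detail), with orthonormal scaling."""
    a = (v[0::2] + v[1::2]) / np.sqrt(2.0)
    d = (v[0::2] - v[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def haar_2d(img):
    """One decomposition level: transform every row, then every column."""
    rows = np.apply_along_axis(haar_1d, 1, img.astype(float))
    return np.apply_along_axis(haar_1d, 0, rows)

# Usage: small detail coefficients are what a lossy coder would discard.
img = np.array([[10, 10, 20, 20],
                [10, 10, 20, 20],
                [30, 30, 40, 40],
                [30, 30, 40, 40]], dtype=float)
coeffs = haar_2d(img)
coeffs[np.abs(coeffs) < 1e-9] = 0.0   # drop the (here exactly zero) detail
print(coeffs)
```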