Efficient LDPC Codes over GF(q) for Lossy Data Compression
In this paper we consider the lossy compression of a binary symmetric source.
We present a scheme that provides a low-complexity lossy compressor with
near-optimal empirical performance. The proposed scheme is based on b-reduced
ultra-sparse LDPC codes over GF(q). Encoding is performed by the Reinforced
Belief Propagation algorithm, a variant of Belief Propagation. The
computational complexity at the encoder is O(⟨k⟩ · n · q · log q), where ⟨k⟩ is the
average degree of the check nodes. For our code ensemble, decoding can be
performed iteratively following the inverse steps of the leaf removal
algorithm. For a sparse parity-check matrix the number of needed operations is
O(n).
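As a rough illustration of the reinforcement idea, here is a minimal sketch in the binary log-likelihood-ratio domain. It is an assumption-laden simplification: the paper's encoder operates on GF(q) messages, and the names gamma, llr_prior, and llr_msgs are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the reinforcement step in Reinforced Belief Propagation,
# written for binary variables as log-likelihood ratios (LLRs). The paper's
# actual algorithm passes messages over GF(q); this only shows the feedback
# idea that distinguishes RBP from plain BP.

def reinforce_priors(llr_prior, llr_msgs, gamma=0.9):
    """One reinforcement update: feed each variable's current belief back
    into its prior, so repeated sweeps progressively polarize the variables
    toward hard decisions.

    llr_prior -- (n,) prior LLRs derived from the source word
    llr_msgs  -- (n,) sums of incoming check-to-variable LLRs
    gamma     -- reinforcement strength in [0, 1]
    """
    belief = llr_prior + llr_msgs          # current marginal estimate
    return llr_prior + gamma * belief      # reinforced prior for next sweep
```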
Data expansion with Huffman codes
The following topics were dealt with: Shannon theory; universal lossless source coding; CDMA; turbo codes; broadband networks and protocols; signal processing and coding; coded modulation; information theory and applications; universal lossy source coding; algebraic geometry codes; modelling, analysis and stability in networks; trellis structures and trellis decoding; channel capacity; recording channels; fading channels; convolutional codes; neural networks and learning; estimation; Gaussian channels; rate distortion theory; constrained channels; 2D channel coding; nonparametric estimation and classification; data compression; synchronisation and interference in communication systems; cyclic codes; signal detection; group codes; multiuser systems; entropy and noiseless source coding; dispersive channels and equalisation; block codes; cryptography; image processing; quantisation; random processes; wavelets; sequences for synchronisation; iterative decoding; optical communications.
Approachable Error Bounded Lossy Compression
Compression is commonly used in HPC applications to move and store data. Traditional lossless compression, however, does not provide adequate compression of the floating-point data often found in scientific codes. Recently, researchers and scientists have turned to lossy compression techniques that approximate the original data rather than reproduce it in order to achieve desired levels of compression. Typical lossy compressors do not bound the errors introduced into the data, which has led to the development of error-bounded lossy compressors (EBLC). These tools provide the desired levels of compression as well as mathematical guarantees on the errors introduced. However, the current state of EBLC leaves much to be desired. Existing EBLCs all have different interfaces, requiring codes to be changed to adopt new techniques; EBLCs have many more configuration options than their predecessors, making them harder to use; and EBLCs typically bound quantities like pointwise errors rather than higher-level metrics such as spectra, p-values, or test statistics that scientists actually use. My dissertation aims to provide a uniform interface to compression and to develop tools that allow application scientists to understand and apply EBLC. This dissertation proposal presents three groups of work: LibPressio, a standard interface for compression and analysis; FRaZ and LibPressio-Opt, frameworks for the automated configuration of compressors using LibPressio; and tools for analyzing compression errors in particular domains.
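To make the notion of a pointwise error bound concrete, here is a minimal sketch of the uniform-quantization idea behind many EBLCs. It is not LibPressio's actual API, only an illustration of the guarantee being bounded: with bin width 2·eps, every reconstructed value lies within eps of the input.

```python
import numpy as np

# Illustrative error-bounded quantizer: round each value to the nearest
# bin center on a grid of width 2*eps, so |x - decompress(compress(x))| <= eps.

def compress(data, eps):
    """Map float data to integer bin indices."""
    return np.round(data / (2.0 * eps)).astype(np.int64)

def decompress(codes, eps):
    """Map bin indices back to bin centers."""
    return codes * (2.0 * eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=1_000_000)
    eps = 1e-3
    q = compress(x, eps)
    # Small tolerance only guards against floating-point rounding at bin edges.
    assert np.max(np.abs(x - decompress(q, eps))) <= eps * (1 + 1e-9)
    # Real EBLCs then pass the integer codes to a lossless entropy coder.
```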
Fractal image compression
Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reaching high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
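The "few succinct rules" can be made concrete with an iterated function system (IFS). The sketch below is hand-built rather than inferred from an image (a real fractal compressor searches for maps whose attractor approximates a given picture): three affine rules suffice to generate the Sierpinski triangle.

```python
import numpy as np

# Three affine contraction maps; their joint attractor is the Sierpinski
# triangle. The "code" for the image is just these three rules.
MAPS = [
    lambda p: p / 2.0,                        # shrink toward (0, 0)
    lambda p: p / 2.0 + np.array([0.5, 0.0]), # shrink toward (1, 0)
    lambda p: p / 2.0 + np.array([0.0, 0.5]), # shrink toward (0, 1)
]

def chaos_game(n_points=100_000, seed=0):
    """Sample the IFS attractor by repeatedly applying randomly chosen maps."""
    rng = np.random.default_rng(seed)
    p = np.array([0.0, 0.0])
    points = np.empty((n_points, 2))
    for i in range(n_points):
        p = MAPS[rng.integers(3)](p)
        points[i] = p
    return points  # scatter-plot these to watch the triangle emerge
```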
Energy Requirements for Quantum Data Compression and 1-1 Coding
By looking at quantum data compression in the second quantisation, we present
a new model for the efficient generation and use of variable length codes. In
this picture lossless data compression can be seen as the minimum energy
required to faithfully represent or transmit the classical information
contained within a quantum state.
In order to represent information we create quanta in some predefined modes
(i.e. frequencies) prepared in one of two possible internal states (the
information-carrying degrees of freedom). Data compression is now seen as the
selective annihilation of these quanta, whose energy is effectively
dissipated into the environment. As any increase in the energy of the
environment is intricately linked to any information loss and is subject to
Landauer's erasure principle, we use this principle to distinguish lossless and
lossy schemes and to suggest bounds on the efficiency of our lossless
compression protocol.
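For reference, Landauer's principle supplies the dissipation floor the abstract appeals to. Under one illustrative reading (our assumption, not the paper's derivation) in which each carrier quantum encodes one code bit, compressing an n-bit source of entropy rate H frees roughly n(1 − H) carriers:

```latex
% Landauer's erasure principle: erasing one bit dissipates at least
E_{\text{erase}} \ge k_B T \ln 2 .
% If one carrier quantum encodes one code bit (an illustrative assumption),
% lossless compression of n bits with entropy rate H annihilates about
% n(1-H) carriers, so the dissipated energy is bounded below by
E_{\text{dissipated}} \gtrsim n \, (1 - H) \, k_B T \ln 2 .
```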
In line with the work of Boström and Felbinger, we also
show that when using variable length codes the classical notions of prefix or
uniquely decipherable codes are unnecessarily restrictive given the structure
of quantum mechanics and that a 1-1 mapping is sufficient. In the absence of
this restriction we translate existing classical results on 1-1 coding to the
quantum domain to derive a new upper bound on the compression of quantum
information. Finally we present a simple quantum circuit to implement our
scheme.
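The classical 1-1 coding fact the abstract builds on is easy to demonstrate. Without the prefix-free constraint, the i-th most probable symbol can be assigned the i-th binary string in the ordering "", "0", "1", "00", ..., giving codeword length floor(log2 i), so the expected length can fall below the Shannon entropy. A small sketch (function names are ours):

```python
import numpy as np

def one_to_one_expected_length(probs):
    """Expected length of the optimal 1-1 code: sort probabilities in
    decreasing order and give the i-th symbol (i >= 1) length floor(log2 i)."""
    p = np.sort(np.asarray(probs, dtype=float))[::-1]
    lengths = np.floor(np.log2(np.arange(1, len(p) + 1)))
    return float(np.dot(p, lengths))

def entropy(probs):
    """Shannon entropy in bits."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.dot(p, np.log2(p)))

if __name__ == "__main__":
    p = [0.5, 0.25, 0.125, 0.125]
    print(one_to_one_expected_length(p))  # 0.625 bits per symbol
    print(entropy(p))                     # 1.75 bits per symbol
```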