New Algorithms and Lower Bounds for Sequential-Access Data Compression
This thesis concerns sequential-access data compression, i.e., compression by
algorithms that read the input one or more times from beginning to end. In one chapter we
consider adaptive prefix coding, for which we must read the input character by
character, outputting each character's self-delimiting codeword before reading
the next one. We show how to encode and decode each character in constant
worst-case time while producing an encoding whose length is worst-case optimal.
In another chapter we consider one-pass compression with memory bounded in
terms of the alphabet size and context length, and prove a nearly tight
tradeoff between the amount of memory we can use and the quality of the
compression we can achieve. In a third chapter we consider compression in the
read/write streams model, which allows us passes and memory both
polylogarithmic in the size of the input. We first show how to achieve
universal compression using only one pass over one stream. We then show that
one stream is not sufficient for achieving good grammar-based compression.
Finally, we show that two streams are necessary and sufficient for achieving
entropy-only bounds.
Comment: draft of PhD thesis
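The self-delimiting codewords mentioned above can be illustrated with a classic example. The sketch below uses Elias gamma codes, a standard prefix-free code for positive integers (not the thesis's own scheme): each codeword carries its own length, so the decoder knows where one character's code ends and the next begins without separators.

```python
def elias_gamma_encode(n: int) -> str:
    """Encode a positive integer as a self-delimiting Elias gamma codeword:
    (len-1) zeros followed by the binary representation of n."""
    assert n >= 1
    binary = bin(n)[2:]                      # binary form, no leading zeros
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits: str, pos: int = 0):
    """Decode one codeword starting at `pos`; return (value, next position).
    The run of leading zeros tells the decoder how many bits follow."""
    zeros = 0
    while bits[pos] == "0":
        zeros += 1
        pos += 1
    value = int(bits[pos:pos + zeros + 1], 2)
    return value, pos + zeros + 1
```

Because every codeword is self-delimiting, a whole stream can be decoded left to right with no framing information, which is exactly the property adaptive prefix coding relies on.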
Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed test and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
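The two-stage idea can be sketched in miniature: the first stage picks the best code from a small family for each block, the second stage codes the block with it. The family below is a toy set of uniform quantizer step sizes and the Lagrangian weight and rate model are invented for illustration; a real weighted universal system trains a collection of vector quantizers, bit allocations, or transforms.

```python
import math

# Toy "family of codes": three quantizer step sizes. Purely illustrative;
# the paper's systems use trained collections of VQ codebooks, JPEG-style
# bit allocations, or transform codes.
STEPS = [2, 8, 32]

def _quantize(block, step):
    return [round(x / step) for x in block]

def _cost(block, step, lam):
    # Lagrangian rate-distortion cost; the bits-per-sample model is a
    # crude stand-in for a real entropy estimate.
    rec = [q * step for q in _quantize(block, step)]
    dist = sum((x - r) ** 2 for x, r in zip(block, rec)) / len(block)
    rate = len(block) * (8 - math.log2(step))
    return dist + lam * rate

def encode_block(block, lam=0.05):
    """First stage: choose the family member with the lowest cost.
    Second stage: quantize the block with it. Returns (code index, data)."""
    idx = min(range(len(STEPS)), key=lambda i: _cost(block, STEPS[i], lam))
    return idx, _quantize(block, STEPS[idx])

def decode_block(idx, qblock):
    return [q * STEPS[idx] for q in qblock]
```

The per-block code index is the "weighted universal" overhead: a few bits per block buy the ability to track source statistics that vary over space.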
Parallel data compression
Data compression schemes remove data redundancy in communicated and stored data and increase the effective capacities of communication and storage devices. Parallel algorithms and implementations for textual data compression are surveyed. Related concepts from parallel computation and information theory are briefly discussed. Static and dynamic methods for codeword construction and transmission on various models of parallel computation are described. Included are parallel methods which boost system speed by coding data concurrently, and approaches which employ multiple compression techniques to improve compression ratios. Theoretical and empirical comparisons are reported and areas for future research are suggested.
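One of the simplest ways to code data concurrently, as the survey describes, is to split the input into independent chunks and compress them in parallel. The sketch below does this with Python's standard zlib and a thread pool (zlib releases the GIL while compressing large buffers); it trades a little compression ratio, since matches cannot cross chunk boundaries, for parallel speedup.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_parallel(data: bytes, chunk_size: int = 1 << 16) -> list:
    """Split the input into fixed-size chunks and compress them concurrently.
    Independent chunks cost some ratio (no cross-chunk matches) but can be
    compressed -- and later decompressed -- in parallel."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_parallel(blocks: list) -> bytes:
    """Decompress the chunks concurrently and reassemble in order."""
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(zlib.decompress, blocks))
```

This is the "boost system speed by coding data concurrently" approach in its most basic form; the surveyed literature also covers parallel codeword construction itself (e.g. building Huffman codes on PRAM models).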
Improvements on stochastic vector quantization of images
A novel nonadaptive fixed-rate vector quantizer encoding scheme is presented, and preliminary results are shown. The design of the codebook has been based on a stochastic approach in order to match a previously defined model for the image to be encoded. Following this approach, the generation of the codebook is made extremely simple in terms of computational load. Good visual results are shown in the range of 0.5-0.8 bit/pixel. Much better performance is expected for adaptive schemes.
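The stochastic codebook idea can be sketched generically: instead of training on image data, codewords are drawn from a source model, which makes codebook generation nearly free. The model and parameters below (a first-order Gauss-Markov process, a common toy image model) are illustrative assumptions, not the paper's actual design.

```python
import random

def make_stochastic_codebook(n_codewords, dim, sigma=1.0, rho=0.9, seed=0):
    """Draw codewords from a first-order Gauss-Markov model (an assumed toy
    image model): no training pass over the image is needed, so codebook
    generation is computationally trivial."""
    rng = random.Random(seed)
    book = []
    for _ in range(n_codewords):
        cw, prev = [], 0.0
        for _ in range(dim):
            prev = rho * prev + rng.gauss(0.0, sigma)
            cw.append(prev)
        book.append(cw)
    return book

def vq_encode(vector, codebook):
    """Fixed-rate encoding: transmit the index of the nearest codeword."""
    def dist(cw):
        return sum((x - c) ** 2 for x, c in zip(vector, cw))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))
```

With, say, 2x2 pixel blocks (dim = 4) and 8 codewords, the fixed rate is log2(8)/4 = 0.75 bit/pixel, inside the 0.5-0.8 bit/pixel range the abstract reports.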
Adaptive spatial mode of space-time and space-frequency OFDM system over fading channels
In this paper we present a two-transmit, one-receive (2 Tx : 1 Rx) adaptive spatial
mode (ASM) of space-time (ST) and space-frequency (SF) orthogonal frequency division
multiplexing (OFDM). At low signal-to-noise ratio (SNR) we employ ST-OFDM and switch
to SF-OFDM at a certain SNR threshold. We determine this threshold from the intersection
of the individual performance curves. Results show that, at a delay spread of 700 ns, a gain
of 9 dB (at a bit error rate of 10^-3) is achieved by the adaptive spatial mode compared to
fixed ST-OFDM, almost 6 dB compared to fixed SF-OFDM, 4 dB compared to coded ST-OFDM,
and 2 dB compared to fixed coded SF-OFDM.
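The switching rule above is simple to state in code: find the SNR where the two modes' BER curves cross, then pick ST-OFDM below it and SF-OFDM above it. The sketch below assumes the curves are available as sampled (SNR, BER) points; all numbers in the usage are hypothetical, not the paper's measurements.

```python
def crossover_snr(snrs, ber_st, ber_sf):
    """Estimate the switching threshold as the first sampled SNR where the
    SF-OFDM BER drops below the ST-OFDM BER (the curves' intersection).
    Assumes the samples are ordered by increasing SNR."""
    for snr, b_st, b_sf in zip(snrs, ber_st, ber_sf):
        if b_sf < b_st:
            return snr
    return snrs[-1]  # SF never wins over the measured range: stay with ST

def select_spatial_mode(snr_db, threshold_db):
    """Adaptive spatial mode: ST-OFDM at low SNR, SF-OFDM at or above
    the threshold."""
    return "ST-OFDM" if snr_db < threshold_db else "SF-OFDM"
```

A real system would interpolate between samples for a finer threshold estimate; the first-crossing rule is the coarsest version of determining the threshold "from the intersection of the individual performance curves."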
A comparison of digital transmission techniques under multichannel conditions at 2.4 GHz in the ISM band
In order to meet the observation quality criteria of micro-UAVs, and particularly in the context of the "Trophée Micro-Drones", ISAE/SUPAERO is studying technical solutions for transmitting a high data rate from a video payload on board a micro-UAV. The laboratory has to consider the impact of multipath and shadowing effects on the emitted signal, so fading-resistant transmission techniques are considered. These techniques have to reveal an optimum trade-off between three parameters, namely: the characteristics of the video stream, the complexity of the modulation and coding scheme, and the efficiency of the transmission, in terms of BER.