Synchronization recovery and state model reduction for soft decoding of variable length codes
Variable length codes exhibit de-synchronization problems when transmitted
over noisy channels. Trellis decoding techniques based on Maximum A Posteriori
(MAP) estimators are often used to minimize the error rate on the estimated
sequence. If the number of symbols and/or bits transmitted is known by the
decoder, termination constraints can be incorporated in the decoding process.
All the paths in the trellis which do not lead to a valid sequence length are
suppressed. This paper presents an analytic method to assess the expected error
resilience of a VLC when trellis decoding with a sequence length constraint is
used. The approach is based on the computation, for a given code, of the amount
of information brought by the constraint. It is then shown that this quantity,
as well as the probability that the VLC decoder does not re-synchronize in a
strict sense, is not significantly altered by appropriate trellis state
aggregation. This proves that the performance obtained by running a
length-constrained Viterbi decoder on aggregated state models approaches the
one obtained with the bit/symbol trellis, with a significantly reduced
complexity. It is then shown that the complexity can be further decreased by
projecting the state model onto two state models of reduced size.
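As a concrete illustration of the termination constraint, the following is a minimal Python sketch of length-constrained Viterbi decoding over a binary symmetric channel. The three-word codebook CODE, the crossover probability p, and the brute-force bit/symbol state enumeration are illustrative assumptions; the paper's contribution is precisely to show that this full trellis can be replaced by aggregated state models with little loss.

```python
import math

CODE = {"a": "0", "b": "10", "c": "11"}  # hypothetical prefix-free VLC

def length_constrained_viterbi(received, n_symbols, p=0.1):
    """Keep only paths that emit exactly n_symbols symbols and consume
    exactly len(received) bits: the termination constraint of the paper."""
    n_bits = len(received)
    best = {0: (0.0, "")}  # bits consumed -> (log-metric, decoded symbols)
    for _ in range(n_symbols):
        nxt = {}
        for i, (metric, seq) in best.items():
            for sym, cw in CODE.items():
                j = i + len(cw)
                if j > n_bits:
                    continue  # path overruns the bit budget: prune it
                m = metric + sum(
                    math.log(1 - p) if b == r else math.log(p)
                    for b, r in zip(cw, received[i:j])
                )
                if j not in nxt or m > nxt[j][0]:
                    nxt[j] = (m, seq + sym)  # survivor for this trellis state
        best = nxt
    # only paths meeting both length constraints are valid
    return best.get(n_bits, (None, None))[1]

print(length_constrained_viterbi("01011", 3))  # "abc" on a clean channel
```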
A vector quantization approach to universal noiseless coding and quantization
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first stage code can be regarded as a vector quantizer that “quantizes” the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^{-1} log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ϵ}) when the universe of sources is infinite-dimensional, under appropriate conditions.
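To make the first-stage design step concrete, here is a toy Python/NumPy sketch of the generalized Lloyd iteration described above, alternating a nearest-code assignment with a centroid-style redesign of each second-stage codebook. The block shape, the use of scalar codebooks in the second stage, and all parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def design_two_stage(blocks, m=4, levels=8, iters=20, seed=0):
    """Toy generalized-Lloyd design of a two-stage code: the first stage
    "quantizes" each length-n block (a row of `blocks`) to one of m
    second-stage scalar codebooks. All sizes are illustrative."""
    rng = np.random.default_rng(seed)
    codebooks = [np.sort(rng.choice(blocks.ravel(), levels)) for _ in range(m)]

    def distortion(block, cb):
        # per-letter squared error of quantizing `block` with codebook `cb`
        q = cb[np.argmin(np.abs(block[:, None] - cb[None, :]), axis=1)]
        return np.mean((block - q) ** 2)

    for _ in range(iters):
        # Step 1 (nearest code): assign each block to its best codebook,
        # the induced-distortion analogue of the nearest-neighbor rule.
        d = np.array([[distortion(b, cb) for cb in codebooks] for b in blocks])
        assign = d.argmin(axis=1)
        # Step 2 (centroid): redesign each codebook on the blocks it won
        for j in range(m):
            cell = blocks[assign == j].ravel()
            if cell.size == 0:
                continue
            idx = np.argmin(np.abs(cell[:, None] - codebooks[j][None, :]), axis=1)
            for l in range(levels):
                if np.any(idx == l):
                    codebooks[j][l] = cell[idx == l].mean()
    return codebooks, assign

# e.g.: 200 blocks of length 16 from a toy source
cbs, labels = design_two_stage(np.random.default_rng(1).normal(size=(200, 16)))
```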
Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
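The two-stage encoder described above can be sketched in a few lines: for each block, try every code in the family and transmit the index of the winner along with its description. In this hypothetical sketch the family is a set of uniform quantizer step sizes standing in for the paper's VQ, bit-allocation, and transform-code families, and the rate term is a crude proxy.

```python
import numpy as np

def encode_block(block, steps, lam=0.02):
    """Pick the family member (here: a uniform quantizer step size) with the
    best Lagrangian rate-distortion cost, and return (first-stage index,
    second-stage data). `steps` and `lam` are illustrative parameters."""
    best = None
    for idx, step in enumerate(steps):
        q = np.round(block / step)                   # second-stage description
        dist = np.mean((block - q * step) ** 2)
        # crude rate proxy: first-stage index cost plus a magnitude term
        rate = np.log2(len(steps)) + np.mean(np.abs(q)) + 1.0
        cost = dist + lam * rate                     # Lagrangian R-D cost
        if best is None or cost < best[0]:
            best = (cost, idx, q)
    return best[1], best[2]

idx, data = encode_block(np.random.default_rng(2).normal(size=64), [0.1, 0.5, 1.0])
```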
Design of Finite-Length Irregular Protograph Codes with Low Error Floors over the Binary-Input AWGN Channel Using Cyclic Liftings
We propose a technique to design finite-length irregular low-density
parity-check (LDPC) codes over the binary-input additive white Gaussian noise
(AWGN) channel with good performance in both the waterfall and the error floor
region. The design process starts from a protograph which embodies a desirable
degree distribution. This protograph is then lifted cyclically to a certain
block length of interest. The lift is designed carefully to satisfy a certain
approximate cycle extrinsic message degree (ACE) spectrum. The target ACE
spectrum is one with extremal properties, implying a good error floor
performance for the designed code. The proposed construction results in
quasi-cyclic codes which are attractive in practice due to simple encoder and
decoder implementation. Simulation results are provided to demonstrate the
effectiveness of the proposed construction in comparison with similar existing
constructions.
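The cyclic lifting step can be illustrated with a short NumPy sketch: each nonzero entry of the protograph base matrix is replaced by a cyclically shifted Z x Z identity block, yielding a quasi-cyclic parity-check matrix. This sketch assumes a base matrix with 0/1 entries and makes no attempt at the ACE-spectrum optimization that drives the paper's shift selection; the base matrix, lifting factor, and shifts below are arbitrary illustrations.

```python
import numpy as np

def cyclic_lift(base, shifts, Z):
    """Lift a 0/1 protograph base matrix to a quasi-cyclic parity-check
    matrix: each 1 becomes a Z x Z identity cyclically shifted by the
    matching entry of `shifts`; each 0 becomes the Z x Z zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
    return H

# toy 2 x 4 protograph lifted by Z = 5 with arbitrary (not ACE-optimized) shifts
B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
S = np.array([[0, 1, 3, 0],
              [0, 2, 4, 1]])
H = cyclic_lift(B, S, 5)   # 10 x 20 quasi-cyclic parity-check matrix
```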
Quantization as Histogram Segmentation: Optimal Scalar Quantizer Design in Network Systems
An algorithm for scalar quantizer design on discrete-alphabet sources is proposed. The proposed algorithm can be used to design fixed-rate and entropy-constrained conventional scalar quantizers, multiresolution scalar quantizers, multiple description scalar quantizers, and Wyner–Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for conventional fixed-rate scalar quantizers and entropy-constrained scalar quantizers. For the other coding scenarios, the algorithm yields the best code among all codes that meet a given convexity constraint. In all cases, the algorithm run-time is polynomial in the size of the source alphabet. The algorithm derivation arises from a demonstration of the connection between scalar quantization, histogram segmentation, and the shortest path problem in a certain directed acyclic graph.
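For the simplest case the abstract mentions, a fixed-rate conventional scalar quantizer, the shortest-path formulation reduces to the dynamic program sketched below: nodes are candidate cell boundaries in the sorted source histogram, and an arc from i to j costs the within-cell distortion of grouping points i through j-1. The code is an illustrative reconstruction of that view, not the paper's algorithm verbatim.

```python
import numpy as np

def optimal_scalar_quantizer(points, probs, K):
    """Globally optimal fixed-rate K-level scalar quantizer for a sorted
    discrete alphabet, via shortest path in a DAG whose nodes are candidate
    cell boundaries (the histogram-segmentation view)."""
    pts, p = np.asarray(points, float), np.asarray(probs, float)
    n = len(pts)

    def cell_cost(i, j):
        # expected squared error of one cell covering points i..j-1,
        # reproduced by its centroid
        w = p[i:j]
        if w.sum() == 0:
            return 0.0
        c = (w * pts[i:j]).sum() / w.sum()
        return float((w * (pts[i:j] - c) ** 2).sum())

    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(K + 1)]  # D[k][j]: best cost, k cells, first j points
    B = [[0] * (n + 1) for _ in range(K + 1)]    # backpointers along the DAG
    D[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):            # last cell is points i..j-1
                c = D[k - 1][i] + cell_cost(i, j)
                if c < D[k][j]:
                    D[k][j], B[k][j] = c, i
    bounds, j = [], n                            # recover boundaries by backtracking
    for k in range(K, 0, -1):
        bounds.append(B[k][j])
        j = B[k][j]
    return D[K][n], sorted(bounds)[1:]           # distortion, interior cell boundaries

dist, cuts = optimal_scalar_quantizer(np.linspace(0, 1, 16), np.full(16, 1/16), K=4)
```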
Rewriting Codes for Joint Information Storage in Flash Memories
Memories whose storage cells transit irreversibly between
states have been common since the beginning of data storage
technology. In recent years, flash memories have become a very
important family of such memories. A flash memory cell has q
states, denoted 0, 1, ..., q-1, and can only transit from a lower
state to a higher state before the expensive erasure operation takes
place. We study rewriting codes that enable the data stored in a
group of cells to be rewritten by only shifting the cells to higher
states. Since the considered state transitions are irreversible, the
number of rewrites is bounded. Our objective is to maximize the
number of times the data can be rewritten. We focus on the joint
storage of data in flash memories, and study two rewriting codes
for two different scenarios. The first code, called floating code, is for
the joint storage of multiple variables, where every rewrite changes
one variable. The second code, called buffer code, is for remembering
the most recent data in a data stream. Many of the codes
presented here are either optimal or asymptotically optimal. We
also present bounds on the performance of general codes. The results
show that rewriting codes can integrate a flash memory’s
rewriting capabilities for different variables to a high degree.
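To illustrate the rewriting model (though not the joint-storage gains that are the paper's point), here is a deliberately naive Python baseline: each binary variable occupies its own q-level cell and is decoded as the cell level modulo 2, so every rewrite only raises a cell. The paper's floating and buffer codes share cells across variables to guarantee many more rewrites than this one-cell-per-variable scheme.

```python
class NaiveFloatingCode:
    """Naive baseline, not the paper's code: k binary variables in k cells
    of q levels each; variable i is cell i's level mod 2. Cells only ever
    move upward, matching the irreversible flash state transitions."""

    def __init__(self, k, q):
        self.q = q
        self.cells = [0] * k          # cell levels, only ever increase

    def decode(self):
        return [c % 2 for c in self.cells]

    def rewrite(self, i, value):
        """Change variable i to `value` by raising cell i one level."""
        if self.cells[i] % 2 == value:
            return True               # already stores the requested value
        if self.cells[i] + 1 >= self.q:
            return False              # cell exhausted: erasure needed
        self.cells[i] += 1
        return True

code = NaiveFloatingCode(k=2, q=4)
code.rewrite(0, 1)                    # v0: 0 -> 1, cell 0 -> level 1
print(code.decode())                  # [1, 0]
```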