Spectrum of Sizes for Perfect Deletion-Correcting Codes
One peculiarity with deletion-correcting codes is that perfect
t-deletion-correcting codes of the same length over the same alphabet can
have different numbers of codewords, because the balls of radius t with
respect to the Levenshteĭn distance may be of different sizes. There is
interest, therefore, in determining all possible sizes of a perfect
t-deletion-correcting code, given the length n and the alphabet size q.
In this paper, we determine completely the spectrum of possible sizes for
perfect q-ary 1-deletion-correcting codes of length three for all q, and
perfect q-ary 2-deletion-correcting codes of length four for almost all q,
leaving only a small finite number of cases in doubt.
Comment: 23 pages
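As a small illustration of the phenomenon the abstract describes (not a construction from the paper), the following Python sketch computes deletion balls and checks a perfect binary 1-deletion-correcting code of length three; the example words and the code {011, 100} are chosen here purely for demonstration:

```python
from itertools import combinations

def deletion_ball(word, t=1):
    """All distinct subsequences obtained by deleting t symbols from word."""
    n = len(word)
    return {"".join(word[i] for i in range(n) if i not in drop)
            for drop in combinations(range(n), t)}

# Balls of radius 1 around length-3 words can have different sizes:
print(len(deletion_ball("aaa")))  # 1  (only "aa")
print(len(deletion_ball("aba")))  # 3  ("ba", "aa", "ab")

# A perfect binary 1-deletion-correcting code of length 3: the balls of
# its codewords partition the set of all length-2 binary words.
code = ["011", "100"]
balls = [deletion_ball(w) for w in code]
assert balls[0] | balls[1] == {"00", "01", "10", "11"}
assert balls[0].isdisjoint(balls[1])
```

Because ball sizes vary with the codeword (1 versus 3 above), two perfect codes over the same alphabet and length need not contain the same number of codewords, which is exactly why the spectrum of sizes is nontrivial.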
Near-capacity dirty-paper code design: a source-channel coding approach
This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs by combining trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; but unlike previous approaches, we emphasize the role of strong source coding to achieve as much granular gain as possible using TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
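The modulo operation at the heart of dirty-paper coding can be sketched with a scalar modulo precoder, a toy stand-in for the TCQ/IRA construction in the paper; the cell size `delta`, the message point `v`, and the use of alpha = 1 in the noiseless demonstration are illustrative assumptions, not values from the paper:

```python
import numpy as np

def centered_mod(x, delta):
    """Reduce x into the base cell [-delta/2, delta/2)."""
    return (x + delta / 2) % delta - delta / 2

def dpc_transmit(v, s, alpha, delta):
    """Scalar dirty-paper precoding: pre-subtract the scaled known
    interference s, then fold the result back into the base cell."""
    return centered_mod(v - alpha * s, delta)

def dpc_receive(y, alpha, delta):
    """Scale and fold the channel output; the interference cancels."""
    return centered_mod(alpha * y, delta)

delta = 4.0
rng = np.random.default_rng(0)
v = 0.7                      # message point inside the base cell
s = rng.normal(0, 10.0)      # strong interference known to the transmitter
x = dpc_transmit(v, s, alpha=1.0, delta=delta)
y = x + s                    # noiseless channel, for demonstration only
r = dpc_receive(y, alpha=1.0, delta=delta)
print(abs(r - v))  # ~0: the known interference is fully cancelled
```

In the noisy setting one would use Costa's inflation factor alpha = SNR/(1 + SNR) instead of 1, and the residual error after the receiver's modulo is what the paper decomposes into packing loss plus modulo loss.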
Coding Theory and Algebraic Combinatorics
This chapter introduces and elaborates on the fruitful interplay of coding
theory and algebraic combinatorics, with most of the focus on the interaction
of codes with combinatorial designs, finite geometries, simple groups, sphere
packings, kissing numbers, lattices, and association schemes. In particular,
special interest is devoted to the relationship between codes and combinatorial
designs. We describe and recapitulate important results in the development of
the state of the art. In addition, we give illustrative examples and
constructions, and highlight recent advances. Finally, we provide a collection
of significant open problems and challenges concerning future research.
Comment: 33 pages; handbook chapter, to appear in: "Selected Topics in Information and Coding Theory", ed. by I. Woungang et al., World Scientific, Singapore, 201
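A concrete instance of the codes-designs interplay surveyed above: the supports of the weight-4 codewords of the extended [8,4,4] Hamming code form a Steiner system S(3,4,8), i.e., every 3-subset of the 8 coordinates lies in exactly one support. The generator matrix below is one standard choice, used here only for illustration:

```python
from itertools import combinations, product

# A generator matrix of the extended [8,4,4] Hamming code over GF(2).
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

# Enumerate all 16 codewords as GF(2) linear combinations of the rows.
codewords = []
for coeffs in product([0, 1], repeat=4):
    word = [sum(c * g for c, g in zip(coeffs, col)) % 2
            for col in zip(*G)]
    codewords.append(tuple(word))

# The supports of the weight-4 codewords are the blocks of the design.
blocks = [frozenset(i for i, b in enumerate(w) if b)
          for w in codewords if sum(w) == 4]
print(len(blocks))  # 14

# Steiner property: every 3-subset of coordinates lies in exactly one block.
for triple in combinations(range(8), 3):
    assert sum(set(triple) <= b for b in blocks) == 1
```

The count works out exactly: 14 blocks each contain 4 triples, giving 56 = C(8,3) incidences, so no triple can repeat without another being missed; minimum distance 4 rules out two blocks sharing three points.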
Throughput Scaling Of Convolution For Error-Tolerant Multimedia Applications
Convolution and cross-correlation are the basis of filtering and pattern or
template matching in multimedia signal processing. We propose two throughput
scaling options for any one-dimensional convolution kernel in programmable
processors by adjusting the imprecision (distortion) of computation. Our
approach is based on scalar quantization, followed by two forms of tight
packing in floating-point (one of which is proposed in this paper) that allow
for concurrent calculation of multiple results. We illustrate how our approach
can operate as an optional pre- and post-processing layer for off-the-shelf
optimized convolution routines. This is useful for multimedia applications that
are tolerant to processing imprecision and for cases where the input signals
are inherently noisy (error-tolerant multimedia applications). Indicative
experimental results with a digital music matching system and an MPEG-7 audio
descriptor system demonstrate that the proposed approach offers up to 175%
increase in processing throughput against optimized (full-precision)
convolution with virtually no effect on the accuracy of the results. Based on
marginal statistics of the input data, it is also shown how the throughput and
distortion can be adjusted per input block of samples under constraints on the
signal-to-noise ratio against the full-precision convolution.
Comment: IEEE Trans. on Multimedia, 201
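The tight-packing idea can be illustrated with a simple two-in-one scheme; this is a generic sketch of packing quantized operands into floating-point, not the paper's exact method, and the offset 2^20 is an illustrative choice. Two small-integer signals share one float64 stream, a single convolution is run on the packed stream, and both results are recovered exactly because every intermediate integer fits in the 53-bit mantissa:

```python
import numpy as np

K = 1 << 20  # packing offset; valid while |results| stay well below K/2

s1 = np.array([1, 2, 3, 4], dtype=np.int64)
s2 = np.array([5, 6, 7, 8], dtype=np.int64)
kernel = np.array([1, -1, 2], dtype=np.int64)

# Pack both signals into one float64 stream and convolve once.
packed = (s1 + K * s2).astype(np.float64)
out = np.convolve(packed, kernel.astype(np.float64))

# Unpack: the high part carries conv(s2), the low part conv(s1).
hi = np.round(out / K).astype(np.int64)
lo = (out - hi * K).astype(np.int64)

assert np.array_equal(lo, np.convolve(s1, kernel))
assert np.array_equal(hi, np.convolve(s2, kernel))
```

One convolution call thus produces two filtered outputs, which is the source of the throughput gain; the admissible dynamic range of the inputs (here bounded so results stay below K/2) is the knob the paper trades against distortion.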