Least Reliable Bits Coding (LRBC) for high data rate satellite communications
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency while maintaining power efficiency equivalent to or better than that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
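As a quick check of the quoted 2.67 bps/Hz figure, the spectral efficiency of a rate-R code over M-ary modulation is log2(M) times R. A minimal sketch (function name illustrative, not from the paper):

```python
import math

def spectral_efficiency(modulation_order: int, code_rate: float) -> float:
    """Information bits per second per hertz: log2(M) * R."""
    return math.log2(modulation_order) * code_rate

# Rate-8/9 coded 8PSK, as in the LRB RS-encoded formats described above.
eta = spectral_efficiency(modulation_order=8, code_rate=8 / 9)
print(f"{eta:.2f} bps/Hz")  # → 2.67 bps/Hz
```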
Order Statistics Based List Decoding Techniques for Linear Binary Block Codes
The order statistics based list decoding techniques for linear binary block codes of small to medium block length are investigated. The construction of the list of test error patterns is considered. The original order statistics decoding is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of order statistics based decoding is further reduced by assuming a partial ordering of the received bits in order to avoid the complex Gaussian elimination. The probability of the test error patterns in the decoding list is derived. The bit error rate performance and the decoding complexity trade-off of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both bit error rate performance and decoding complexity.

Comment: 17 pages, 2 tables, 6 figures, submitted to IEEE Transactions on Information Theory
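The reliability ordering and test-error-pattern construction at the heart of order statistics decoding can be illustrated with a minimal sketch. All names and values below are illustrative; the generator-matrix permutation and Gaussian elimination steps of the full algorithm are omitted:

```python
from itertools import combinations

def most_reliable_positions(soft_values, k):
    """Indices of the k received positions with largest magnitude (most reliable)."""
    return sorted(range(len(soft_values)), key=lambda i: -abs(soft_values[i]))[:k]

def test_error_patterns(k, max_order):
    """Yield all binary error patterns of weight <= max_order on k positions."""
    for weight in range(max_order + 1):
        for support in combinations(range(k), weight):
            pattern = [0] * k
            for i in support:
                pattern[i] = 1
            yield pattern

# Example soft channel outputs (signs carry hard decisions, magnitudes reliability).
r = [0.9, -0.1, 1.4, -0.3, 0.05, -1.1, 0.7]
print(most_reliable_positions(r, k=4))          # → [2, 5, 0, 6]
print(len(list(test_error_patterns(4, 1))))     # order-1 list: 1 + 4 = 5 patterns
```

In order statistics decoding, each test error pattern is added to the hard decisions on the most reliable independent positions and re-encoded; the candidate closest to the received sequence is selected.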
Update-Efficiency and Local Repairability Limits for Capacity Approaching Codes
Motivated by distributed storage applications, we investigate the degree to which capacity-achieving encodings can be efficiently updated when a single information bit changes, and the degree to which such encodings can be efficiently (i.e., locally) repaired when a single encoded bit is lost. Specifically, we first develop conditions under which optimum error-correction and update-efficiency are possible, and establish that the number of encoded bits that must change in response to a change in a single information bit must scale logarithmically in the block length of the code if we are to achieve any nontrivial rate with vanishing probability of error over the binary erasure or binary symmetric channels. Moreover, we show there exist capacity-achieving codes with this scaling.
With respect to local repairability, we develop tight upper and lower bounds on the number of remaining encoded bits that are needed to recover a single lost bit of the encoding. In particular, we show that if the code rate is ε less than the capacity, then for optimal codes, the maximum number of codeword symbols required to recover one lost symbol must scale as log(1/ε). Several variations on---and extensions of---these results are also developed.

Comment: Accepted to appear in JSAC
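For a binary linear code with generator matrix G, flipping one information bit flips exactly the codeword positions where the corresponding row of G is nonzero, so the update cost of a single information-bit change is bounded by the maximum row weight of G. A minimal sketch, using the (7,4) Hamming code purely as an illustration (not an example from the paper):

```python
def update_efficiency(G):
    """Max number of encoded bits changed by a single information-bit change,
    i.e. the maximum row weight of the binary generator matrix G."""
    return max(sum(row) for row in G)

# Systematic generator matrix of the (7,4) Hamming code.
G_hamming = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
print(update_efficiency(G_hamming))  # → 4
```

The result quoted above says that for capacity-approaching codes this quantity cannot stay bounded: it must grow logarithmically with the block length.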
Iterative H.264 Source and Channel Decoding Using Sphere Packing Modulation Aided Layered Steered Space-Time Codes
The conventional two-stage turbo-detection schemes generally suffer from a Bit Error Rate (BER) floor. In this paper we circumvent this deficiency by proposing a three-stage turbo-detected Sphere Packing (SP) modulation aided Layered Steered Space-Time Coding (LSSTC) scheme for H.264 coded video transmission over correlated Rayleigh fading channels. The soft-bit assisted H.264 coded bit-stream is protected using low-complexity short block codes (SBCs), combined with a rate-1 recursive inner precoder, which is employed as an intermediate code that has an infinite impulse response and hence beneficially spreads the extrinsic information across the constituent decoders. This allows us to avoid having a BER floor. Additionally, the convergence behaviour of this serially concatenated scheme is investigated with the aid of Extrinsic Information Transfer (EXIT) charts. The proposed system exhibits an Eb/N0 gain of about 12 dB in comparison to the benchmark scheme carrying out iterative source-channel decoding as well as Layered Steered Space-Time Coding (LSSTC) aided Sphere Packing (SP) demodulation, but dispensing with the optimised SBCs.
Codes on Graphs and More
Modern communication systems strive to achieve reliable and efficient information transmission and storage with affordable complexity. Hence, efficient low-complexity channel codes providing low probabilities for erroneous receptions are needed. Interpreting codes as graphs and graphs as codes opens new perspectives for constructing such channel codes. Low-density parity-check (LDPC) codes are one of the most recent examples of codes defined on graphs, providing a better bit error probability than other block codes, given the same decoding complexity. After an introduction to coding theory, different graphical representations for channel codes are reviewed. Based on ideas from graph theory, new algorithms are introduced to iteratively search for LDPC block codes with large girth and to determine their minimum distance. In particular, new LDPC block codes of different rates and with girth up to 24 are presented.

Woven convolutional codes are introduced as a generalization of graph-based codes and an asymptotic bound on their free distance, namely, the Costello lower bound, is proven. Moreover, promising examples of woven convolutional codes are given, including a rate 5/20 code with overall constraint length 67 and free distance 120.

The remaining part of this dissertation focuses on basic properties of convolutional codes. First, a recurrent equation to determine a closed form expression of the exact decoding bit error probability for convolutional codes is presented. The obtained closed form expression is evaluated for various realizations of encoders, including rate 1/2 and 2/3 encoders, of as many as 16 states. Moreover, MacWilliams-type identities are revisited and a recursion for sequences of spectra of truncated as well as tailbitten convolutional codes and their duals is derived.

Finally, the dissertation is concluded with exhaustive searches for convolutional codes of various rates with either optimum free distance or optimum distance profile, extending previously published results.
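The girth of a Tanner graph, the quantity the LDPC search algorithms above seek to maximize, can be computed generically by breadth-first search from every node. The following is an illustrative sketch under that generic approach, not the dissertation's own search algorithm:

```python
from collections import deque

def girth(H):
    """Shortest cycle length in the bipartite Tanner graph of the binary
    parity-check matrix H; returns infinity if the graph is acyclic."""
    m, n = len(H), len(H[0])
    # Nodes 0..n-1 are variable nodes; nodes n..n+m-1 are check nodes.
    adj = [[] for _ in range(n + m)]
    for r in range(m):
        for c in range(n):
            if H[r][c]:
                adj[c].append(n + r)
                adj[n + r].append(c)
    best = float("inf")
    for start in range(n + m):
        dist = [-1] * (n + m)
        parent = [-1] * (n + m)
        dist[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[v] == -1:          # tree edge: first discovery of v
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    queue.append(v)
                elif parent[u] != v:       # non-tree edge: closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Small parity-check matrix whose Tanner graph contains a length-4 cycle
# (columns 0 and 1 both participate in checks 0 and 1).
H = [
    [1, 1, 0],
    [1, 1, 1],
]
print(girth(H))  # → 4
```

Since Tanner graphs are bipartite, every girth is even; short cycles (especially length 4) are known to degrade iterative decoding, which motivates the search for codes with girth as large as 24.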