Efficient Linear Programming Decoding of HDPC Codes
We propose several improvements for Linear Programming (LP) decoding
algorithms for High Density Parity Check (HDPC) codes. First, we use the
automorphism group of the code to create parity check matrix diversity and to
generate valid cuts from redundant parity checks. Second, we propose an
efficient mixed integer decoder utilizing the branch and bound method. We
further enhance the proposed decoders by removing inactive constraints and by
adapting the parity check matrix prior to decoding according to the channel
observations. Simulation results show that the proposed decoders achieve near-ML
performance with reasonable complexity. Comment: Submitted to the IEEE Transactions on Communications, November 200
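The mixed integer, branch-and-bound idea can be illustrated at toy scale. Below is a minimal sketch, assuming per-bit costs gamma derived from channel LLRs (a negative cost favors setting the bit to 1) and using the small (7,4) Hamming code purely for illustration rather than the HDPC codes studied in the paper:

```python
# Parity check matrix of the (7,4) Hamming code (illustrative choice,
# not one of the HDPC codes considered in the paper).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def satisfies_checks(x, H):
    """True iff H x = 0 (mod 2), i.e. x is a codeword."""
    return all(sum(h[i] * x[i] for i in range(len(x))) % 2 == 0 for h in H)

def bb_decode(gamma, H):
    """Branch-and-bound search for argmin_x sum_i gamma[i]*x[i] over
    codewords x of H.  gamma[i] is the cost of setting bit i to 1."""
    n = len(gamma)
    best = [None, float("inf")]  # [incumbent codeword, incumbent cost]

    def bound(prefix):
        # Optimistic completion: each still-free bit takes value 1
        # only if that lowers the cost.
        fixed = sum(g * b for g, b in zip(gamma, prefix))
        free = sum(min(g, 0.0) for g in gamma[len(prefix):])
        return fixed + free

    def recurse(prefix):
        if bound(prefix) >= best[1]:
            return  # prune: this subtree cannot beat the incumbent
        if len(prefix) == n:
            if satisfies_checks(prefix, H):
                best[0] = list(prefix)
                best[1] = sum(g * b for g, b in zip(gamma, prefix))
            return
        for b in (0, 1):
            recurse(prefix + [b])

    recurse([])
    return best[0]
```

With all-positive costs (a received word close to the all-zero codeword), the search returns the all-zero codeword after pruning almost the entire tree, e.g. `bb_decode([1.0] * 7, H)` returns `[0, 0, 0, 0, 0, 0, 0]`.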
Adaptive Linear Programming Decoding of Polar Codes
Polar codes are high density parity check codes and hence the sparse factor
graph, instead of the parity check matrix, has been used to practically
represent an LP polytope for LP decoding. Although LP decoding on this polytope
has the ML-certificate property, it performs poorly over a BAWGN channel. In
this paper, we propose modifications to adaptive cut generation based LP
decoding techniques and apply the modified-adaptive LP decoder to short
blocklength polar codes over a BAWGN channel. The proposed decoder provides
significant FER performance gain compared to the previously proposed LP decoder
and its performance approaches that of ML decoding at high SNRs. We also
present an algorithm to obtain a smaller factor graph from the original sparse
factor graph of a polar code. This reduced factor graph preserves the small
check node degrees needed to represent the LP polytope in practice. We show
that the fundamental polytope of the reduced factor graph can be obtained from
the projection of the polytope represented by the original sparse factor graph
and the frozen bit information. Thus, the LP decoding time complexity is
decreased without changing the FER performance by using the reduced factor
graph representation. Comment: 5 pages, 8 figures, to be presented at the IEEE International Symposium on Information Theory (ISIT) 201
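Adaptive cut generation alternates between solving an LP and adding the parity-polytope inequalities that the current fractional solution violates. The following is a minimal sketch of the cut-search step for a single check, assuming the standard Feldman-style "forbidden set" inequalities (the LP solve itself, and the paper's modifications, are omitted):

```python
def most_violated_cut(x, check):
    """For one parity check (a list of variable indices), return the
    odd-sized subset V maximizing
        sum_{i in V} x[i] - sum_{i in check \ V} x[i],
    together with its violation.  A positive violation means the
    inequality  sum_{V} x_i - sum_{check\V} x_i <= |V| - 1  is violated
    by the fractional LP solution x, so it can be added as a cut."""
    V = [i for i in check if x[i] > 0.5]
    if len(V) % 2 == 0:
        # Flip membership of the coordinate closest to 1/2 to make |V| odd
        # at the smallest possible loss in the left-hand side.
        i_star = min(check, key=lambda i: abs(x[i] - 0.5))
        if i_star in V:
            V.remove(i_star)
        else:
            V.append(i_star)
    lhs = sum(x[i] for i in V) - sum(x[i] for i in check if i not in V)
    return V, lhs - (len(V) - 1)
```

For example, the fractional point (0.9, 0.9, 0.9) on a degree-3 check violates the cut with V = {0, 1, 2}, while (0.5, 0.5, 0.5) lies inside the parity polytope and yields no cut.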
Improved Reception Schemes for Digital Video Broadcasting Based on Hierarchical Modulation
In this paper, we first provide an overview of Hierarchical Modulation (HM) along with the opportunities offered by this modulation in the context of the recent Digital Video Broadcasting standard for Satellite to Handheld devices (DVB-SH). With HM, the binary data is partitioned into a “high-priority” (HP) and a “low-priority” (LP) bit stream that are separately and independently encoded before being mapped onto non-uniformly spaced constellation points. We show that the robustness of the HP stream is obtained at the expense of a performance degradation of the less protected LP stream with respect to non-hierarchical modulation. To overcome this inherent drawback of HM, we propose two different reception schemes that improve the bit error rate performance of the less protected LP stream while keeping the HP decoding performance unchanged. Notably, one of the proposed reception schemes achieves this performance improvement together with a reduction of the receiver’s complexity.
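The non-uniform mapping at the heart of HM can be sketched for a 16-QAM constellation. This is an illustrative toy mapping, not the DVB-SH Gray labeling, and the parameter `alpha` is an assumed knob for the quadrant offset (uniform 16-QAM at `alpha = 1`, stronger HP protection as it grows):

```python
def hm_16qam_point(hp_bits, lp_bits, alpha=2.0):
    """Map 2 high-priority and 2 low-priority bits to one point of a
    non-uniform 16-QAM constellation.  The HP pair selects the quadrant;
    the LP pair selects the point inside it.  alpha (assumed here: the
    per-axis quadrant offset, alpha >= 1) pushes the quadrants apart,
    protecting the HP stream at the LP stream's expense."""
    def axis(hp, lp):
        sign = 1 if hp == 0 else -1        # quadrant half-plane (HP bit)
        inner = 1 if lp == 0 else -1       # position inside quadrant (LP bit)
        return sign * (alpha + inner * 0.5)
    return complex(axis(hp_bits[0], lp_bits[0]),
                   axis(hp_bits[1], lp_bits[1]))
```

Raising `alpha` widens the gap between quadrants (the HP decision) while the four points inside a quadrant keep unit spacing, which is exactly the HP-robustness versus LP-degradation trade-off described above.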
Design of a fault tolerant airborne digital computer. Volume 1: Architecture
This volume is concerned with the architecture of a fault tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be performed by this digital computer. Among the important qualities of the computer are the following: (1) the capacity is to be matched to the aircraft environment; (2) the reliability is to be selectively matched to the criticality and deadline requirements of each of the computations; (3) the system is to be readily expandable and contractible; and (4) the design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to these qualities; in addition, SIFT is particularly simple and credible. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in their use of redundancy but are otherwise not as attractive.
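The core SIFT idea, masking hardware faults purely in software by running each critical task on several processors and voting on the results, can be sketched as follows. This is an illustrative majority vote, not the executive described in the report:

```python
from collections import Counter

def sift_vote(replica_outputs):
    """Majority-vote over the outputs of the same task run on several
    processors.  Returns (value, agreed), where agreed is False when no
    strict majority exists.  Illustrative of software-implemented fault
    masking only; the SIFT report's voting and scheduling are richer."""
    counts = Counter(replica_outputs)
    value, n = counts.most_common(1)[0]
    return value, n > len(replica_outputs) // 2
```

With triple replication, one faulty processor is masked: `sift_vote([42, 42, 7])` returns `(42, True)`, whereas three disagreeing replicas yield `agreed == False`.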
Analysis and Design of Tuned Turbo Codes
It has been widely observed that there exists a fundamental trade-off between
the minimum (Hamming) distance properties and the iterative decoding
convergence behavior of turbo-like codes. While capacity achieving code
ensembles typically are asymptotically bad in the sense that their minimum
distance does not grow linearly with block length, and they therefore exhibit
an error floor at moderate-to-high signal-to-noise ratios, asymptotically good
codes usually converge further away from channel capacity. In this paper, we
introduce the concept of tuned turbo codes, a family of asymptotically good
hybrid concatenated code ensembles, where asymptotic minimum distance growth
rates, convergence thresholds, and code rates can be traded off using two
tuning parameters, {\lambda} and {\mu}. By decreasing {\lambda}, the asymptotic
minimum distance growth rate is reduced in exchange for improved iterative
decoding convergence behavior, while increasing {\lambda} raises the asymptotic
minimum distance growth rate at the expense of worse convergence behavior, and
thus the code performance can be tuned to fit the desired application. By
decreasing {\mu}, a similar tuning behavior can be achieved for higher rate
code ensembles. Comment: Accepted for publication in the IEEE Transactions on Information Theory
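The role of a single tuning fraction can be illustrated with a deliberately simplified toy: route a fraction lam of the outer codeword through an inner rate-1 accumulator and send the rest directly. This assumed structure is far simpler than the paper's hybrid concatenated ensembles and is meant only to show how one parameter can interpolate between two encoding behaviors:

```python
def tuned_concatenate(bits, lam):
    """Toy tuning knob (assumed structure, not the paper's construction):
    the first round(lam * n) outer bits pass through a rate-1
    accumulator 1/(1+D); the remaining bits are sent unchanged.
    lam = 0 reduces to the identity; lam = 1 re-encodes everything."""
    k = round(lam * len(bits))
    inner_in, direct = bits[:k], bits[k:]
    acc, out = 0, []
    for b in inner_in:
        acc ^= b          # running mod-2 sum (accumulator state)
        out.append(acc)
    return out + list(direct)
```

For example, `tuned_concatenate([1, 0, 1], 0.0)` returns the input unchanged, while larger `lam` re-encodes a growing prefix, mimicking (very loosely) how the paper's parameters shift mass between the concatenation branches.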