Interior Point Decoding for Linear Vector Channels
In this paper, a novel decoding algorithm for low-density parity-check (LDPC)
codes based on convex optimization is presented. The decoding algorithm, called
interior point decoding, is designed for linear vector channels. Linear
vector channels include many practically important channels, such as
intersymbol interference channels and partial response channels. It is shown that
the maximum likelihood decoding (MLD) rule for a linear vector channel can be
relaxed to a convex optimization problem, which is called the relaxed MLD
problem. The proposed decoding algorithm is based on a numerical optimization
technique, the interior point method with a barrier function. Approximate
variants of the gradient descent and Newton methods are used to solve the
convex optimization problem. During the decoding process of the proposed algorithm,
the search point always lies in the fundamental polytope defined by the
low-density parity-check matrix. Compared with a conventional joint message-passing
decoder, the proposed decoding algorithm achieves better BER
performance with lower complexity over partial response channels in many cases.
Comment: 18 pages, 17 figures. The paper has been submitted to IEEE
Transactions on Information Theory
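The barrier-function idea underlying interior point methods can be illustrated with a toy example. The sketch below is my own illustration, not the paper's decoder: it minimizes a linear cost over the open box (0,1)^n with a logarithmic barrier and plain gradient descent, whereas the actual algorithm works over the fundamental polytope with approximate gradient and Newton steps.

```python
import numpy as np

def barrier_minimize(c, t=10.0, lr=1e-3, iters=5000):
    """Minimize c^T x over the open box (0,1)^n with a log-barrier.

    Toy sketch of the barrier idea: the box constraints stand in for
    the fundamental polytope used by interior point decoding.
    """
    n = len(c)
    x = np.full(n, 0.5)                  # strictly interior starting point
    for _ in range(iters):
        # gradient of t*c^T x - sum(log x_i) - sum(log(1 - x_i))
        grad = t * c - 1.0 / x + 1.0 / (1.0 - x)
        x -= lr * grad
        x = np.clip(x, 1e-9, 1 - 1e-9)   # stay strictly inside the box
    return x

# components with negative cost are driven toward 1, positive toward 0,
# but the barrier keeps the search point away from the boundary
x = barrier_minimize(np.array([1.0, -1.0, 0.5]))
```

As the barrier weight 1/t is decreased across outer iterations (not shown), the minimizer approaches a vertex of the feasible region.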
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Direction Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
We present numerical results for LDPC codes of block lengths greater than 1000. The
waterfall region of LP decoding is seen to initiate at a slightly higher
signal-to-noise ratio than for sum-product BP; however, no error floor is
observed for LP decoding, which is not the case for BP. Our implementation of
LP decoding using ADMM executes as fast as our baseline sum-product BP decoder,
is fully parallelizable, and can be seen to implement a type of message passing
with a particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the
49th Annual Allerton Conference, September 2011. This version to appear in
IEEE Transactions on Information Theory
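The ADMM splitting at the heart of this approach alternates cheap projections with dual-variable updates. The toy below is a generic illustration, not the paper's two-slice parity-polytope projection: it projects a point onto the intersection of the unit box with a halfspace. In LP decoding, the halfspace projection would be replaced by the Euclidean projection onto the parity polytope of each check.

```python
import numpy as np

def proj_box(v):
    """Euclidean projection onto the unit box [0,1]^n."""
    return np.clip(v, 0.0, 1.0)

def proj_halfspace(v, a, b):
    """Euclidean projection onto the halfspace {x : a.x <= b}."""
    viol = a @ v - b
    return v if viol <= 0 else v - (viol / (a @ a)) * a

def admm_project(v, a, b, rho=1.0, iters=500):
    """Project v onto box ∩ halfspace via ADMM splitting.

    Solves  min 0.5*||x - v||^2 + I_box(x) + I_halfspace(z)  s.t. x = z,
    alternating an x-update, a z-update, and a dual update on u.
    """
    n = len(v)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = proj_box((v + rho * (z - u)) / (1.0 + rho))  # prox of the quadratic + box
        z = proj_halfspace(x + u, a, b)                  # projection onto the other set
        u += x - z                                       # scaled dual update
    return z
```

Each subproblem is a simple closed-form projection, which is exactly what makes the decomposition attractive at large block lengths.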
Efficient implementation of linear programming decoding
While linear programming (LP) decoding provides more flexibility for
finite-length performance analysis than iterative message-passing (IMP)
decoding, it is computationally more complex to implement in its original form,
due to both the large size of the relaxed LP problem, and the inefficiency of
using general-purpose LP solvers. This paper explores ideas for fast LP
decoding of low-density parity-check (LDPC) codes. We first prove, by modifying
the previously reported Adaptive LP decoding scheme to allow removal of
unnecessary constraints, that LP decoding can be performed by solving a number
of LP problems that contain at most one linear constraint derived from each of
the parity-check constraints. By exploiting this property, we study a sparse
interior-point implementation for solving this sequence of linear programs.
Since the most complex part of each iteration of the interior-point algorithm
is the solution of a (usually ill-conditioned) system of linear equations for
finding the step direction, we propose a preconditioning algorithm to
facilitate iterative solution of such systems. The proposed preconditioning
algorithm is similar to the encoding procedure of LDPC codes, and we
demonstrate its effectiveness via both analytical methods and computer
simulation results.
Comment: 44 pages, submitted to IEEE Transactions on Information Theory, Dec.
200
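The role of preconditioning in the inner linear solves can be illustrated with a generic preconditioned conjugate gradient routine. This is a standard Jacobi-preconditioned CG sketch, not the paper's LDPC-encoding-like preconditioner; the test matrix, right-hand side, and preconditioner below are illustrative choices.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD systems A x = b.

    M_inv_diag holds the inverse of a diagonal (Jacobi) preconditioner;
    applying it reshapes the spectrum so CG converges in few iterations
    even when A itself is badly conditioned.
    """
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# badly conditioned SPD system: diagonal entries span six orders of magnitude
d = np.logspace(0, 6, 50)
A = np.diag(d) + 0.5 * np.ones((50, 50))
b = np.ones(50)
x = pcg(A, b, 1.0 / np.diag(A))   # Jacobi preconditioner from diag(A)
```

Without the preconditioner the same routine would need far more iterations on this system, which is the bottleneck the paper's construction targets.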
Local Decoders for the 2D and 4D Toric Code
We analyze the performance of decoders for the 2D and 4D toric codes which are
local by construction. The 2D decoder is a cellular automaton decoder
formulated by Harrington which explicitly has a finite speed of communication
and computation. For a model of independent X and Z errors and faulty
syndrome measurements with identical probability, we report a threshold
for this Harrington decoder. We implement a decoder for the 4D toric
code which is based on a decoder by Hastings (arXiv:1312.2546). Incorporating a
method for handling faulty syndromes, we estimate a threshold for
the same noise model as in the 2D case. We compare the performance of this
decoder with a decoder based on a 4D version of Toom's cellular automaton rule,
as well as the decoding method suggested by Dennis et al.
(arXiv:quant-ph/0110143).
Comment: 22 pages, 21 figures; fixed typos, updated Figures 6, 7, 8
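As minimal background for the 2D case, the sketch below computes the vertex syndrome of bit-flip errors on a toric code lattice. The edge-indexing convention is my own, and this is not the Harrington or Hastings decoder itself, only the syndrome map that such decoders take as input.

```python
import numpy as np

def vertex_syndrome(eh, ev):
    """Vertex syndrome of bit-flip errors on an L x L toric code.

    eh[i, j] / ev[i, j] hold the error bit on the horizontal / vertical
    edge attached to vertex (i, j). Each vertex check is the parity of
    its four incident edges, with periodic boundary conditions.
    """
    return (eh + np.roll(eh, 1, axis=1)        # edges to the right and left
            + ev + np.roll(ev, 1, axis=0)) % 2  # edges below and above

L = 4
eh = np.zeros((L, L), dtype=int)
ev = np.zeros((L, L), dtype=int)
eh[1, 1] = 1                       # a single error excites its two endpoints
s = vertex_syndrome(eh, ev)
```

A single edge error produces a syndrome pair at its endpoints, while any closed loop of errors is invisible to the checks; local decoders work by pairing up and annihilating such syndrome excitations.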
Power allocation and linear precoding for wireless communications with finite-alphabet inputs
This dissertation proposes a new approach to maximizing the data rate/throughput of practical communication systems/networks through linear precoding and power allocation.
First, the mutual information or capacity region is derived for finite-alphabet inputs such as phase-shift keying (PSK), pulse-amplitude modulation (PAM), and quadrature amplitude modulation (QAM) signals. This approach, without the commonly used Gaussian input assumption, complicates the mutual information analysis and precoder design but improves performance when the designed precoders are applied to practical systems and networks.
Second, several numerical optimization methods are developed for multiple-input multiple-output (MIMO) multiple access channels, dual-hop relay networks, and point-to-point MIMO systems. In MIMO multiple access channels, an iterative weighted sum rate maximization algorithm is proposed that utilizes an alternating optimization strategy and gradient descent updates. In dual-hop relay networks, the structure of the optimal precoder is exploited to develop a two-step iterative algorithm based on convex optimization and optimization on the Stiefel manifold. The proposed algorithm is insensitive to the choice of initial point and is able to achieve a near-globally-optimal precoder solution. The gradient descent method is also used to obtain the optimal power allocation scheme which maximizes the mutual information between the source node and destination node in dual-hop relay networks. For point-to-point MIMO systems, a low-complexity precoding design method is proposed, which maximizes a lower bound on the mutual information with a discretized power allocation vector in a non-iterative fashion, thus reducing complexity.
Finally, the performance of the proposed power allocation and linear precoding schemes is evaluated in terms of both mutual information and bit error rate (BER). Numerical results show that at the same target mutual information or sum rate, the proposed approaches achieve 3-10 dB gains over existing methods in the medium signal-to-noise ratio region. Similar significant gains are also observed in the coded BER results. --Abstract, page iv-v
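The finite-alphabet mutual information that drives these designs can be estimated numerically. The sketch below is an illustration, not one of the dissertation's algorithms: it Monte Carlo-estimates I(X;Y) for equiprobable BPSK over a real AWGN channel; extending it to PSK/PAM/QAM constellations only changes the symbol set.

```python
import numpy as np

def bpsk_mutual_info(snr_linear, n_samples=200000, seed=0):
    """Monte Carlo estimate of I(X;Y) in bits for equiprobable BPSK
    over a real AWGN channel y = sqrt(snr)*x + n with n ~ N(0, 1).

    Averages log2(p(y|x) / p(y)) over channel realizations; the common
    Gaussian normalization constant cancels inside the ratio.
    """
    rng = np.random.default_rng(seed)
    a = np.sqrt(snr_linear)
    x = rng.choice([-1.0, 1.0], size=n_samples)
    y = a * x + rng.standard_normal(n_samples)
    # unnormalized log-likelihoods under each symbol hypothesis
    ll_pos = -0.5 * (y - a) ** 2
    ll_neg = -0.5 * (y + a) ** 2
    ll_x = np.where(x > 0, ll_pos, ll_neg)
    log_py = np.logaddexp(ll_pos, ll_neg) - np.log(2.0)
    return np.mean(ll_x - log_py) / np.log(2.0)
```

Unlike the Gaussian-input capacity, this quantity saturates at 1 bit per channel use at high SNR, which is precisely why finite-alphabet precoder designs differ from Gaussian-input ones.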
Sparse Probabilistic Models: Phase Transitions and Solutions via Spatial Coupling
This thesis is concerned with a number of novel uses of spatial coupling, applied to a class of probabilistic graphical models. These models include error correcting codes, random constraint satisfaction problems (CSPs), and statistical physics models called diluted spin systems. Spatial coupling is a technique initially developed for channel coding, which provides a recipe to transform a class of sparse linear codes into codes that are longer but more robust at high noise levels. In fact, it was observed that for coupled codes there are efficient algorithms whose decoding threshold is the optimal one, a phenomenon called threshold saturation. The main aim of this thesis is to explore alternative applications of spatial coupling. The goal is to study properties of uncoupled probabilistic models (not just codes) through the use of the corresponding spatially coupled models. The methods employed range from the mathematically rigorous to the purely experimental.
We first explore spatial coupling as a proof technique in the realm of LDPC codes. The Maxwell conjecture states that for arbitrary BMS channels the optimal (MAP) threshold of standard (uncoupled) LDPC codes is given by the Maxwell construction. We are able to prove the Maxwell conjecture for any smooth family of BMS channels by using (i) the previously proved fact that coupled codes perform optimally and (ii) the fact that the optimal thresholds of the coupled and uncoupled LDPC codes coincide. The method is used to derive two more results, namely the equality of GEXIT curves above the MAP threshold and the exactness of the averaged Bethe free energy formula derived under the RS cavity method from statistical physics.
As a second application of spatial coupling, we show how to derive novel bounds on the phase transitions in random constraint satisfaction problems, and possibly in a general class of diluted spin systems. In the case of coloring, we investigate what happens to the dynamic and freezing thresholds. The phenomenon of threshold saturation is also present in this case, with the dynamic threshold moving to the condensation threshold and the freezing threshold moving to the colorability threshold. These claims are supported by experimental evidence, but in some cases, such as the saturation of the freezing threshold, it is possible to make part of the claim more rigorous. This in principle allows the computation of thresholds by use of spatial coupling. The proof is in the spirit of the potential method introduced by Kumar, Young, Macris, and Pfister for LDPC codes.
Finally, we explore how to find solutions in (uncoupled) probabilistic models. To test this, we start with a typical instance of random K-SAT (the base problem), and we build a spatially coupled structure that locally inherits the structure of the base problem. The goal is to run an algorithm that finds a suitable solution in the coupled structure and then "project" this solution to obtain a solution for the base problem. Experimental evidence points to the fact that it is indeed possible to use a form of unit-clause propagation (UCP), a simple algorithm, to achieve this goal. This approach also works in regimes where standard UCP fails on the base problem.
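Unit-clause propagation, the simple algorithm mentioned above, can be sketched as follows; the clause and literal conventions (DIMACS-style signed integers) are my own choice for the illustration.

```python
def unit_propagate(clauses, assignment=None):
    """Repeatedly satisfy unit clauses until a fixed point or a conflict.

    clauses: list of clauses, each a list of nonzero ints; literal k means
    variable |k| is True if k > 0, False if k < 0 (DIMACS style).
    Returns (assignment, conflict) where assignment maps variable -> bool
    and conflict is True if an empty clause was derived.
    """
    assignment = dict(assignment or {})
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var = abs(lit)
                if var in assignment:
                    if assignment[var] == (lit > 0):
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return assignment, True        # all literals falsified: conflict
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0  # unit clause forces this value
                changed = True
    return assignment, False
```

On the coupled structure, such forced assignments propagate along the coupling chain, which is what allows this weak algorithm to succeed where it fails on the uncoupled base problem.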