Low-Complexity LP Decoding of Nonbinary Linear Codes
Linear Programming (LP) decoding of Low-Density Parity-Check (LDPC) codes has
attracted much attention in the research community in the past few years. LP
decoding has been derived for both binary and nonbinary linear codes. The chief
obstacle in both cases, however, is that the complexity of standard LP solvers,
such as the simplex algorithm, remains prohibitively large for codes of
moderate to large block length. To address this problem, Vontobel and Koetter
proposed two low-complexity LP (LCLP) decoding algorithms for binary linear
codes, henceforth called the basic LCLP decoding algorithm and the subgradient
LCLP decoding algorithm.
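For context, the optimization problem that such LCLP algorithms approximately solve is the standard LP decoding relaxation, stated below for the binary case in the formulation of Feldman et al.; the notation (gamma_i for the log-likelihood ratio of bit i, C_j for the set of assignments satisfying check j) is added here for illustration and does not appear in the abstract.

```latex
% Standard binary LP decoding relaxation (Feldman et al.);
% notation added for illustration.
% gamma_i : log-likelihood ratio of bit i
% f_i     : relaxed (fractional) bit value
% C_j     : set of binary assignments satisfying parity check j
\begin{aligned}
  \min_{f \in [0,1]^n} \quad & \sum_{i=1}^{n} \gamma_i f_i \\
  \text{subject to} \quad    & f \in \bigcap_{j \in \mathcal{J}} \operatorname{conv}(C_j)
\end{aligned}
```

The intersection of the conv(C_j) is the fundamental polytope; its description grows quickly with block length, which is why generic LP solvers become impractical and iterative low-complexity schemes are attractive.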
In this paper, we generalize these LCLP decoding algorithms to nonbinary
linear codes. The computational complexity per iteration of the proposed
nonbinary LCLP decoding algorithms scales linearly with the block length of the
code. We also propose a modified BCJR algorithm for efficient check-node
calculations in the nonbinary basic LCLP decoding algorithm, whose complexity
is linear in the check node degree.
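The abstract only states that the modified BCJR algorithm is linear in the check node degree; the sketch below is not the paper's algorithm but a minimal illustration of the standard trick such algorithms build on: a forward-backward (BCJR-style) dynamic program over the trellis of a single check node, here in the min-sum semiring over Z_q. The function name and the min-cost semantics are our illustrative assumptions.

```python
import numpy as np

def check_node_min_costs(costs, q):
    """Forward-backward dynamic program on the trellis of one check node
    enforcing x_0 + ... + x_{d-1} = 0 (mod q).  (Illustrative sketch, not
    the paper's modified BCJR algorithm.)

    costs[i, a] is the cost of assigning value a in Z_q to the i-th
    variable on the check.  Returns out[i, a]: the minimum total cost of
    the other d-1 variables, given x_i = a and a satisfied check.
    Runtime is O(d * q^2), i.e. linear in the check degree d.
    """
    d = costs.shape[0]
    INF = np.inf

    # fwd[i, s]: min cost of x_0..x_{i-1} whose partial sum is s (mod q).
    fwd = np.full((d + 1, q), INF)
    fwd[0, 0] = 0.0
    for i in range(d):
        for s in range(q):
            for a in range(q):
                t = (s + a) % q
                fwd[i + 1, t] = min(fwd[i + 1, t], fwd[i, s] + costs[i, a])

    # bwd[i, s]: min cost of x_i..x_{d-1} whose sum is s (mod q).
    bwd = np.full((d + 1, q), INF)
    bwd[d, 0] = 0.0
    for i in range(d - 1, -1, -1):
        for s in range(q):
            for a in range(q):
                bwd[i, s] = min(bwd[i, s], costs[i, a] + bwd[i + 1, (s - a) % q])

    # Combine: prefix sum s plus x_i = a forces the suffix sum to -(s + a).
    out = np.full((d, q), INF)
    for i in range(d):
        for a in range(q):
            for s in range(q):
                out[i, a] = min(out[i, a], fwd[i, s] + bwd[i + 1, (-(s + a)) % q])
    return out
```

Swapping (min, +) for (sum, product) in these recursions yields the usual BCJR a-posteriori computation; either way, each of the d trellis sections is visited once per pass, which is where the linear dependence on the check degree comes from.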
Several simulation results are presented for nonbinary LDPC codes defined
over Z_4 and GF(4) using quaternary phase-shift keying, and over GF(8) using
8-phase-shift keying, over the AWGN channel. It is shown that for some
group-structured LDPC codes, the error-correcting performance of the
nonbinary LCLP decoding algorithms is similar to or better than that of the
min-sum decoding algorithm.

Comment: To appear in IEEE Transactions on Communications, 201
Catalyst Acceleration for Gradient-Based Non-Convex Optimization
We introduce a generic scheme for solving non-convex optimization problems
with gradient-based algorithms originally designed to minimize convex
functions. Even though these methods require convexity to operate as designed,
the proposed approach allows one to apply them to weakly convex objectives, a
class that covers a large range of non-convex functions typically appearing in
machine learning and signal processing. In general, the scheme is guaranteed
to produce a stationary point with a worst-case efficiency typical of
first-order methods, and when the objective turns out to be convex, it
automatically accelerates in the sense of Nesterov and achieves a near-optimal
convergence rate in function values. These properties are obtained without
assuming any knowledge about the convexity of the objective, by automatically
adapting to the unknown weak convexity constant. We conclude the paper with
promising experimental results obtained by applying our approach to
incremental algorithms such as SVRG and SAGA for sparse matrix factorization
and for learning neural networks.
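The abstract describes the scheme only at a high level; the sketch below shows a generic Catalyst-style outer loop under our own simplifying assumptions, not the paper's adaptive variant: a fixed smoothing parameter kappa, plain gradient descent as the inner solver, and a basic Nesterov extrapolation schedule. The key point it illustrates is grounded in the abstract: a function f is rho-weakly convex when f + (rho/2)||.||^2 is convex, so adding a quadratic proximal term with kappa >= rho makes each subproblem convex even though f itself is not. All names and parameter choices here are illustrative.

```python
import numpy as np

def catalyst(grad_f, x0, kappa, lr, n_outer=100, n_inner=50):
    """Sketch of a Catalyst-style outer loop (simplified; see lead-in).

    Each outer iteration inexactly minimizes the proximal model
        g(x) = f(x) + (kappa / 2) * ||x - y||^2,
    which is convex whenever f is kappa-weakly convex, and then applies a
    Nesterov-style extrapolation step to the subproblem solution.
    """
    x = x0.copy()   # previous outer solution
    y = x0.copy()   # extrapolated prox center
    alpha = 1.0
    for _ in range(n_outer):
        # Inner loop: a few gradient steps on the (strongly) convex model.
        z = y.copy()
        for _ in range(n_inner):
            z -= lr * (grad_f(z) + kappa * (z - y))  # grad of proximal model
        # Extrapolation schedule for the non-strongly-convex case:
        # alpha_{k+1}^2 = (1 - alpha_{k+1}) * alpha_k^2.
        alpha_next = 0.5 * (np.sqrt(alpha**4 + 4 * alpha**2) - alpha**2)
        beta = alpha * (1.0 - alpha) / (alpha**2 + alpha_next)
        y = z + beta * (z - x)
        x, alpha = z, alpha_next
    return x
```

In the paper's setting, the inner solver would instead be an incremental method such as SVRG or SAGA, kappa would adapt to the unknown weak convexity constant, and the inner loop would terminate on an accuracy criterion rather than a fixed step count.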