Hardware Based Projection onto The Parity Polytope and Probability Simplex
This paper is concerned with the adaptation to hardware of methods for
Euclidean norm projection onto the parity polytope and the probability simplex.
We first refine recent efforts to develop efficient methods of projection onto
the parity polytope. The resulting algorithm can be configured to favor either
average-case or worst-case computational complexity on a serial processor, as a
function of the dimension of the projection space. We show how to adapt our
projection routine to hardware. Our projection method uses a sub-routine that
involves another Euclidean projection, onto the probability simplex. We
therefore also explain how to adapt to hardware a well-known simplex projection
algorithm. The hardware implementations of both projection algorithms achieve
favorable area and delay scaling. Finally, we present numerical results in
which we evaluate the fixed-point accuracy and resource scaling of these
algorithms when targeting a modern FPGA.
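In the serial setting, the well-known simplex projection mentioned above is typically the classic sort-and-threshold method. A minimal NumPy sketch of that algorithm, as our own illustration rather than the paper's hardware formulation:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} via the classic sort-and-threshold
    method; the sort makes this O(d log d) on a serial processor."""
    d = v.size
    u = np.sort(v)[::-1]                                 # sort descending
    css = np.cumsum(u)                                   # running sums
    # largest index rho with u_rho > (css_rho - 1) / rho (1-indexed)
    rho = np.nonzero(u * np.arange(1, d + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)                 # shift amount
    return np.maximum(v - theta, 0.0)                    # clip at zero

print(project_to_simplex(np.array([2.0, 0.0])))          # prints [1. 0.]
```

The thresholding step is what makes a fixed-point hardware mapping delicate: the comparison and the division by `rho + 1` both depend on the sorted prefix sums.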
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Direction Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
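The ADMM structure described above, in which the hard constraint set enters the iteration only through a Euclidean projection, can be sketched generically. In the scaled form below the projection is a pluggable argument; the toy run substitutes a [0,1] box for the parity-polytope projection, so this illustrates the iteration pattern only, not the paper's decoder:

```python
import numpy as np

def admm_linear_over_set(c, proj, n, rho=1.0, iters=200):
    """Schematic scaled-form ADMM for  min c^T x  s.t.  x in C,
    where C is accessed only through its Euclidean projection `proj`.
    In LP decoding, `proj` would be the parity-polytope projection."""
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = z - u - c / rho    # closed-form x-update for a linear objective
        z = proj(x + u)        # projection onto the constraint set
        u = u + x - z          # dual (running residual) update
    return z

# Toy check: a [0,1]^n box stands in for the codeword polytope.
box = lambda v: np.clip(v, 0.0, 1.0)
c = np.array([1.0, -2.0, 0.5])
print(admm_linear_over_set(c, box, 3))   # prints [0. 1. 0.]
```

With a linear objective the x-update is closed-form, so each iteration reduces to vector arithmetic plus one projection per constraint, which is what makes the scheme parallelizable across check nodes.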
We present numerical results for LDPC codes of lengths greater than 1000. The
waterfall region of LP decoding is seen to initiate at a slightly higher
signal-to-noise ratio than for sum-product BP; however, no error floor is
observed for LP decoding, in contrast to BP. Our implementation of LP decoding
using ADMM executes as fast as our baseline sum-product BP decoder, is fully
parallelizable, and can be seen to implement a type of message-passing with a
particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the
49th Annual Allerton Conference, September 2011. This version to appear in
IEEE Transactions on Information Theory.
Low-Complexity LP Decoding of Nonbinary Linear Codes
Linear Programming (LP) decoding of Low-Density Parity-Check (LDPC) codes has
attracted much attention in the research community in the past few years. LP
decoding has been derived for binary and nonbinary linear codes. However, the
most important problem with LP decoding for both binary and nonbinary linear
codes is that the complexity of standard LP solvers such as the simplex
algorithm remains prohibitively large for codes of moderate to large block
length. To address this problem, two low-complexity LP (LCLP) decoding
algorithms for binary linear codes have been proposed by Vontobel and Koetter,
henceforth called the basic LCLP decoding algorithm and the subgradient LCLP
decoding algorithm.
In this paper, we generalize these LCLP decoding algorithms to nonbinary
linear codes. The computational complexity per iteration of the proposed
nonbinary LCLP decoding algorithms scales linearly with the block length of the
code. A modified BCJR algorithm for efficient check-node calculations in the
nonbinary basic LCLP decoding algorithm is also proposed, which has complexity
linear in the check node degree.
Several simulation results are presented for nonbinary LDPC codes defined
over Z_4, GF(4), and GF(8) using quaternary phase-shift keying and
8-phase-shift keying, respectively, over the AWGN channel. It is shown that for
some group-structured LDPC codes, the error-correcting performance of the
nonbinary LCLP decoding algorithms is similar to or better than that of the
min-sum decoding algorithm.
Comment: To appear in IEEE Transactions on Communications, 201
Single-Shot Decoding of Linear Rate LDPC Quantum Codes with High Performance
We construct and analyze a family of low-density parity check (LDPC) quantum
codes with a linear encoding rate, polynomial scaling distance and efficient
decoding schemes. The code family is based on tessellations of closed,
four-dimensional, hyperbolic manifolds, as first suggested by Guth and
Lubotzky. The main contribution of this work is the construction of suitable
manifolds via finite presentations of Coxeter groups, their linear
representations over Galois fields and topological coverings. We establish a
lower bound on the encoding rate k/n of 13/72 = 0.180... and we show that the
bound is tight for the examples that we construct. Numerical simulations give
evidence that parallelizable decoding schemes of low computational complexity
suffice to obtain high performance. These decoding schemes can deal with
syndrome noise, so that parity check measurements do not have to be repeated to
decode. Our data is consistent with a threshold of around 4% in the
phenomenological noise model with syndrome noise in the single-shot regime.
Comment: 15 pages, 6 figures
Dynamic Neuromechanical Sets for Locomotion
Most biological systems employ multiple redundant actuators, which is a complicated problem of controls and analysis. Unless assumptions about how the brain and body work together, and assumptions about how the body prioritizes tasks are applied, it is not possible to find the actuator controls. The purpose of this research is to develop computational tools for the analysis of arbitrary musculoskeletal models that employ redundant actuators. Instead of relying primarily on optimization frameworks and numerical methods or task prioritization schemes used typically in biomechanics to find a singular solution for actuator controls, tools for feasible sets analysis are instead developed to find the bounds of possible actuator controls. Previously in the literature, feasible sets analysis has been used in order analyze models assuming static poses. Here, tools that explore the feasible sets of actuator controls over the course of a dynamic task are developed. The cost-function agnostic methods of analysis developed in this work run parallel and in concert with other methods of analysis such as principle components analysis, muscle synergies theory and task prioritization. Researchers and healthcare professionals can gain greater insights into decision making during behavioral tasks by layering these other tools on top of feasible sets analysis
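The per-actuator bounds of a feasible set can be computed with a pair of small linear programs. A minimal sketch, assuming a linear actuation model `A @ a = tau` with activations in [0, 1]; the model, function name, and use of `scipy.optimize.linprog` are our own illustration, not the paper's tooling:

```python
import numpy as np
from scipy.optimize import linprog

def feasible_activation_bounds(A, tau):
    """For a linear actuation model A @ a = tau with activations a in
    [0, 1], bound each actuator's feasible range by solving two small
    LPs per actuator: minimize and maximize that activation subject to
    satisfying the task constraint. Generic sketch, not the paper's code."""
    n = A.shape[1]
    lo, hi = np.zeros(n), np.zeros(n)
    for i in range(n):
        c = np.zeros(n); c[i] = 1.0
        # min a_i  s.t.  A a = tau, 0 <= a <= 1
        lo[i] = linprog(c, A_eq=A, b_eq=tau, bounds=[(0, 1)] * n).fun
        # max a_i is min of -a_i, negated back
        hi[i] = -linprog(-c, A_eq=A, b_eq=tau, bounds=[(0, 1)] * n).fun
    return lo, hi
```

Sweeping `tau` along a trajectory of task demands would trace the bounds over a dynamic task, which is the spirit of the analysis described above.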
Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems
This article presents numerical recipes for simulating high-temperature and
non-equilibrium quantum spin systems that are continuously measured and
controlled. The notion of a spin system is broadly conceived, in order to
encompass macroscopic test masses as the limiting case of large-j spins. The
simulation technique has three stages: first the deliberate introduction of
noise into the simulation, then the conversion of that noise into an equivalent
continuous measurement and control process, and finally, projection of the
trajectory onto a state-space manifold having reduced dimensionality and
possessing a Kähler potential of multi-linear form. The resulting simulation
formalism is used to construct a positive P-representation for the thermal
density matrix. Single-spin detection by magnetic resonance force microscopy
(MRFM) is simulated, and the data statistics are shown to be those of a random
telegraph signal with additive white noise. Larger-scale spin-dust models are
simulated, having no spatial symmetry and no spatial ordering; the
high-fidelity projection of numerically computed quantum trajectories onto
low-dimensionality Kähler state-space manifolds is demonstrated. The
reconstruction of quantum trajectories from sparse random projections is
demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity
limit is observed, a deterministic construction for sampling matrices is given,
and methods for quantum state optimization by Dantzig selection are given.
Comment: 104 pages, 13 figures, 2 tables