    Highly Robust Error Correction by Convex Programming

    This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when, in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors). We show that if one encodes the information as Ax where A ∈ ℝ^(m×n) (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
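    The linear-programming route admits a compact sketch. The following is a minimal illustration, not the paper's experimental setup: the dimensions, corruption levels, and the use of the cvxpy modeling package are assumptions made here for concreteness. It recovers x from a received word corrupted by both sparse gross errors and small dense noise by minimizing the ℓ1 norm of the residual y − Ax, which is a linear program.

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes: encode n pieces of information into m > n real samples.
n, m = 128, 512
rng = np.random.default_rng(0)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random coding matrix
x_true = rng.standard_normal(n)                # block of information

y = A @ x_true
bad = rng.choice(m, size=m // 10, replace=False)
y[bad] += 10 * rng.standard_normal(bad.size)   # arbitrary gross errors on 10% of entries
y += 0.01 * rng.standard_normal(m)             # small dense noise (quantization-like)

# LP decoder: minimize the l1 norm of the residual y - Ax.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(y - A @ x))).solve()

print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

    The second-order cone variant mentioned in the abstract additionally accounts for the small-error level; the sketch above exercises only the linear-programming decoder.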

    Measurement-free topological protection using dissipative feedback

    Protecting quantum information from decoherence due to environmental noise is vital for fault-tolerant quantum computation. To this end, standard quantum error correction employs parallel projective measurements of individual particles, which makes the system extremely complicated. Here we propose measurement-free topological protection in two dimensions without any selective addressing of individual particles. We make use of engineered dissipative dynamics and feedback operations to reduce the entropy generated by decoherence in such a way that quantum information is topologically protected. We calculate an error threshold, below which quantum information is protected, without assuming selective addressing, projective measurements, or instantaneous classical processing. All physical operations are local and translationally invariant, and no parallel projective measurement is required, which implies high scalability. Furthermore, since the engineered dissipative dynamics we utilize have been well studied in quantum simulation, the proposed scheme is a promising route from quantum simulation to fault-tolerant quantum information processing. Comment: 17 pages, 6 figures

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum-likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the alternating direction method of multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, which is the convex hull of all codewords of a single parity-check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean-norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for LDPC codes of length greater than 1000. The waterfall region of LP decoding is seen to initiate at a slightly higher signal-to-noise ratio than for sum-product BP; however, no error floor is observed for LP decoding, in contrast to BP. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message passing with a particularly simple schedule. Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory
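    The computational core of the ADMM decoder is the Euclidean projection onto the parity polytope. The sketch below is a brute-force stand-in, assuming cvxpy and a tiny check degree d: it enumerates the even-weight vertices and solves a small QP over their convex hull, purely to make the geometry concrete. The paper's "two-slice" characterization is what removes the need for this enumeration and makes the projection efficient at the block lengths reported above.

```python
import itertools
import numpy as np
import cvxpy as cp

def project_parity_polytope(v):
    """Euclidean projection of v onto conv{c in {0,1}^d : sum(c) even}.

    Brute force for illustration only: enumerates the 2^(d-1) even-weight
    vertices and solves a small QP over their convex hull.
    """
    d = len(v)
    vertices = np.array([c for c in itertools.product([0, 1], repeat=d)
                         if sum(c) % 2 == 0])
    lam = cp.Variable(len(vertices), nonneg=True)   # convex-combination weights
    x = vertices.T @ lam                            # a point in the polytope
    cp.Problem(cp.Minimize(cp.sum_squares(x - v)), [cp.sum(lam) == 1]).solve()
    return vertices.T @ lam.value

# Example: a point pulled toward the odd-parity vertex (1, 1, 0) is projected
# onto the nearest face of the parity polytope instead.
print(project_parity_polytope(np.array([0.9, 0.8, 0.1])))
```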

    Replacing the Soft FEC Limit Paradigm in the Design of Optical Communication Systems

    The FEC limit paradigm is the prevalent practice for designing optical communication systems to attain a certain bit-error rate (BER) without forward error correction (FEC). This practice assumes that there is an FEC code that will reduce the BER after decoding to the desired level. In this paper, we challenge this practice and show that the concept of a channel-independent FEC limit is invalid for soft-decision bit-wise decoding. It is shown that for low code rates and high-order modulation formats, the use of the soft FEC limit paradigm can underestimate the spectral efficiencies by up to 20%. A better predictor for the BER after decoding is the generalized mutual information (GMI), which is shown to give consistent post-FEC BER predictions across different channel conditions and modulation formats. Extensive optical full-field simulations and experiments are carried out in both the linear and nonlinear transmission regimes to confirm the theoretical analysis.
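    The GMI is straightforward to estimate from transmitted bits and their soft-decision LLRs. The Monte Carlo sketch below is an illustration under assumed conditions (BPSK over AWGN at an arbitrary SNR), not the paper's optical setup; for BPSK the per-bit GMI estimate is 1 − E[log2(1 + exp(−(1 − 2b)·L))], where b is the transmitted bit and L its LLR.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr_db = 100_000, 2.0
sigma2 = 10 ** (-snr_db / 10)          # noise variance for unit-energy BPSK

b = rng.integers(0, 2, n)              # transmitted bits
s = 1.0 - 2.0 * b                      # BPSK mapping: 0 -> +1, 1 -> -1
y = s + np.sqrt(sigma2) * rng.standard_normal(n)

llr = 2.0 * y / sigma2                 # LLR = log p(y | b=0) / p(y | b=1)

# GMI estimate in bits per channel use, and the pre-FEC (hard-decision) BER.
gmi = 1.0 - np.mean(np.log2(1.0 + np.exp(-(1.0 - 2.0 * b) * llr)))
pre_fec_ber = np.mean((llr < 0) != (b == 1))

print(f"pre-FEC BER = {pre_fec_ber:.4f}, GMI = {gmi:.3f} bit/symbol")
```

    Two channels tuned to the same pre-FEC BER can yield different GMI values, which is the failure mode of the channel-independent soft FEC limit that the abstract describes.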

    Near-Optimal Noisy Group Testing via Separate Decoding of Items

    The group testing problem consists of determining a small set of defective items from a larger set of items based on a number of tests, and is relevant in applications such as medical testing, communication protocols, pattern matching, and more. In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the special cases of no noise and symmetric noise, we find that the asymptotic number of tests required for vanishing error probability is within a factor log 2 ≈ 0.7 of the information-theoretic optimum at low sparsity levels, and that with a small fraction of allowed incorrectly decoded items, this guarantee extends to all sublinear sparsity levels. In addition, we provide a converse bound showing that if one tries to move slightly beyond our low-sparsity achievability threshold using separate decoding of items and i.i.d. randomized testing, the average number of items decoded incorrectly approaches that of a trivial decoder. Comment: Submitted to IEEE Journal of Selected Topics in Signal Processing
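    A minimal sketch of separate decoding of items, here for the noiseless case with an i.i.d. Bernoulli test design (the sizes, design density, and threshold rule are illustrative assumptions): each item j accumulates, over tests t, the information density log P(y_t | x_{j,t}) − log P(y_t) under the hypothesis that j is defective with the other defectives marginalized over the random design, and is declared defective when its sum is large. No joint decoding over items is performed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k, n_tests = 500, 5, 120
p = 1.0 / k                                # Bernoulli test-inclusion probability

defective = rng.choice(n_items, size=k, replace=False)
X = rng.random((n_tests, n_items)) < p     # X[t, j]: item j is in test t
y = X[:, defective].any(axis=1)            # noiseless outcome: OR of defectives

q1 = 1.0 - (1.0 - p) ** (k - 1)            # P(y=1 | x_j=0, j defective)
q = 1.0 - (1.0 - p) ** k                   # P(y=1) unconditionally

# log P(y | x_j) under "j defective": an included defective forces y=1,
# so the (x=1, y=0) combination has probability zero (score -inf).
log_cond = np.where(X,
                    np.where(y[:, None], 0.0, -np.inf),
                    np.where(y[:, None], np.log(q1), np.log(1.0 - q1)))
log_marg = np.where(y[:, None], np.log(q), np.log(1.0 - q))

scores = (log_cond - log_marg).sum(axis=0)  # information-density sum per item
estimate = np.flatnonzero(scores >= 0.5 * scores.max())  # illustrative threshold

print("true:", np.sort(defective))
print("estimated:", np.sort(estimate))
```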

    Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    We study, analytically and numerically, the decoding properties of finite-rate hypergraph-product quantum LDPC codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several non-trivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific-heat calculations in associated Ising models, and a minimum-weight decoding threshold of approximately 7%. Comment: 14 pages, 5 figures
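    The hypergraph-product construction itself is short enough to sketch. Assuming a small Gallager-style (3,4)-regular check matrix H built from permuted banded layers (the size and permutations below are illustrative, without the ensemble conditioning a careful construction would use), the CSS pair H_X = [H ⊗ I, I ⊗ Hᵀ], H_Z = [I ⊗ H, Hᵀ ⊗ I] satisfies the commutation condition H_X H_Zᵀ = 0 (mod 2) by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def gallager_34(n):
    """(3,4)-regular Gallager-style parity-check matrix: one banded layer of
    row weight 4, stacked with two random column permutations of itself."""
    assert n % 4 == 0
    base = np.kron(np.eye(n // 4, dtype=int), np.ones((1, 4), dtype=int))
    return np.vstack([base] + [base[:, rng.permutation(n)] for _ in range(2)])

H = gallager_34(16)                        # m x n classical check matrix
m, n = H.shape

# Hypergraph product of H with itself: X- and Z-type check matrices.
HX = np.hstack([np.kron(H, np.eye(n, dtype=int)),
                np.kron(np.eye(m, dtype=int), H.T)])
HZ = np.hstack([np.kron(np.eye(n, dtype=int), H),
                np.kron(H.T, np.eye(m, dtype=int))])

# CSS condition: every X-type check commutes with every Z-type check.
assert not ((HX @ HZ.T) % 2).any()
print("physical qubits:", HX.shape[1])     # n*n + m*m = 400
```

    With independent X and Z errors, as in the model above, the two check matrices can be treated as two separate classical decoding problems.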