221 research outputs found

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.

    Noisy Gradient Descent Bit-Flip Decoding for LDPC Codes

    A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low-Density Parity-Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm is proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information, which must be obtained from a signal-to-noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed. Comment: 16 pages, 22 figures, 2 tables.
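
    As a concrete illustration of the update rule described above, the following Python sketch shows one possible multi-bit NGDBF iteration. It assumes BPSK signalling, a dense integer parity-check matrix H, and illustrative values for the flip threshold, syndrome weight, and perturbation scale; the parameter choices and scaling used in the paper may differ.

```python
import numpy as np

def ngdbf_decode(y, H, theta=-0.6, w=0.75, eta=0.9, sigma=0.8, max_iter=100):
    """Illustrative multi-bit NGDBF sketch (parameter values are assumptions).

    y     : received BPSK samples over AWGN, shape (n,)
    H     : binary parity-check matrix as an int array, shape (m, n)
    theta : flip threshold, w : syndrome weight,
    eta * sigma : std. dev. of the per-symbol noise perturbation.
    """
    x = np.where(y >= 0, 1.0, -1.0)        # hard decisions in {+1, -1}
    for _ in range(max_iter):
        b = ((1 - x) / 2).astype(int)      # bipolar -> binary
        s = 1 - 2 * (H @ b % 2)            # +1 for satisfied checks, -1 otherwise
        if np.all(s == 1):
            break                          # valid codeword found
        # local inversion metric: channel term + weighted check sums + noise
        q = np.random.normal(0.0, eta * sigma, size=x.shape)
        E = x * y + w * (H.T @ s) + q
        flip = E < theta                   # flip every symbol below threshold
        x[flip] = -x[flip]
    return ((1 - x) / 2).astype(int)
```
    Every operation in the loop is local to a symbol or check node, which reflects the property highlighted above: no global objective function or sort over symbol metrics is needed.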

    Some new results on majority-logic codes for correction of random errors

    The main advantages of random error-correcting majority-logic codes, and of majority-logic decoding in general, are well known and two-fold. Firstly, they offer a partial solution to a classical coding theory problem, that of decoder complexity. Secondly, a majority-logic decoder inherently corrects many more random error patterns than the minimum distance of the code implies is possible. The solution to the decoder complexity problem is only a partial one because there are circumstances under which a majority-logic decoder is too complex and expensive to implement. [Continues.]
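
    To make the decoding principle concrete, here is a hedged Python sketch of one-step majority-logic decoding from orthogonal parity-check sums; the way the orthogonal check sets are supplied is an assumption for illustration and not taken from the text.

```python
import numpy as np

def one_step_majority_decode(r, orthogonal_checks):
    """Illustrative one-step majority-logic decoder.

    r                 : received binary word, shape (n,)
    orthogonal_checks : orthogonal_checks[i] is a list of index sets whose
                        parity-check sums are orthogonal on position i
                        (position i is in every set, any other position in
                        at most one set).
    """
    r = np.asarray(r, dtype=int) % 2
    decoded = r.copy()
    for i, checks in enumerate(orthogonal_checks):
        # each orthogonal check sum casts a vote on whether bit i is in error
        votes = [int(np.sum(r[list(idx)]) % 2) for idx in checks]
        if sum(votes) > len(votes) / 2:    # a clear majority says "error"
            decoded[i] ^= 1                # flip the suspect bit
    return decoded
```
    With J check sums orthogonal on each position, the vote is guaranteed to correct any pattern of up to floor(J/2) errors, and many heavier patterns are also corrected whenever a majority of the sums at each erroneous position remain uncorrupted.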

    Algebraic Methods in Computational Complexity

    From 11.10. to 16.10.2009, the Dagstuhl Seminar 09421 “Algebraic Methods in Computational Complexity” was held at Schloss Dagstuhl-Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Some results on arithmetic codes of composite length

    In this paper we present a new upper bound on the minimum distance of binary cyclic arithmetic codes of composite length. Two new classes of binary cyclic codes of composite length are introduced.

    Low-Complexity Belief Propagation Decoding by Approximations with Lookup-Tables

    Belief propagation decoding of low-density parity-check codes or one-step majority-logic decodable codes has been proven to be a very powerful coding scheme. In this paper an approximation of the belief propagation algorithm, also known as sum-product decoding, is presented which uses correction functions, implemented as precomputed lookup tables, to significantly reduce the computational complexity. The new lookup-sum algorithm requires no multiplications, divisions, exponential or logarithmic operations in the iterative process. Simulation results show that even with lookup tables containing a single entry, the performance of non-approximated belief propagation can be approached to within 0.1 dB in Eb/N0. With slightly larger tables, performance not noticeably different from non-approximated belief propagation is achieved.
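
    To illustrate the correction-function idea, the following Python sketch implements a pairwise check-node ("box-plus") update in which the two transcendental terms ln(1 + e^-x) are replaced by a small precomputed lookup table. The table size and quantization step are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Precomputed lookup table for the correction term f(x) = ln(1 + e^-x), x >= 0.
# Step size and range are illustrative assumptions.
_STEP = 0.25
_TABLE = np.log1p(np.exp(-np.arange(0.0, 8.0, _STEP)))

def _corr(x):
    """Table lookup replacing ln(1 + exp(-x)) for x >= 0."""
    idx = int(x / _STEP)
    return _TABLE[idx] if idx < len(_TABLE) else 0.0

def box_plus(a, b):
    """Pairwise check-node update on two log-likelihood ratios.

    Exact identity:  sign(a)*sign(b)*min(|a|,|b|)
                     + ln(1 + e^-|a+b|) - ln(1 + e^-|a-b|)
    The two ln terms are the only transcendental operations and are read
    from the table above.
    """
    m = np.sign(a) * np.sign(b) * min(abs(a), abs(b))
    return m + _corr(abs(a + b)) - _corr(abs(a - b))

def check_node(llrs):
    """Combine all incoming LLRs pairwise; a full decoder would compute
    extrinsic messages that exclude each target edge's own input."""
    acc = llrs[0]
    for l in llrs[1:]:
        acc = box_plus(acc, l)
    return acc
```
    Once the correction terms come from a table, the iterative process reduces to comparisons, additions, and lookups, which is the complexity reduction claimed above.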

    Mathematical structures for decoding projective geometry codes

