
    Noisy Gradient Descent Bit-Flip Decoding for LDPC Codes

    A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low-Density Parity-Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm is proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information, which must be obtained from a signal-to-noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed. Comment: 16 pages, 22 figures, 2 tables
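    As a rough illustration of the update the abstract describes, the sketch below implements a parallel GDBF-style iteration with a Gaussian perturbation added to each symbol metric. The threshold, syndrome weight, and perturbation scale are illustrative placeholders, not the optimized values or heuristics from the paper.

```python
import numpy as np

def ngdbf_decode(y, H, sigma2, theta=-0.6, w=0.75, eta=0.9, max_iter=100, rng=None):
    """Minimal NGDBF-style sketch (parameters are illustrative assumptions).

    y      : received BPSK channel values over AWGN, shape (n,)
    H      : parity-check matrix with entries in {0, 1}, shape (m, n)
    sigma2 : channel noise variance (e.g. from an SNR estimator)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.sign(y)                        # initial bipolar hard decision
    x[x == 0] = 1
    for _ in range(max_iter):
        bits = ((1 - x) // 2).astype(int)
        s = 1 - 2 * (H @ bits % 2)        # +1 if a check is satisfied, -1 otherwise
        if np.all(s == 1):
            break                         # valid codeword found
        # local inversion metric per symbol, plus a Gaussian perturbation
        q = rng.normal(0.0, np.sqrt(eta * sigma2), size=x.shape)
        e = x * y + w * (H.T @ s) + q
        x = np.where(e < theta, -x, x)    # flip symbols whose metric falls below threshold
    return ((1 - x) // 2).astype(int)     # back to binary
```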

    Two-Bit Bit Flipping Decoding of LDPC Codes

    In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information that is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows the potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity. Comment: 6 pages. Submitted to IEEE International Symposium on Information Theory 201
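    The sketch below shows one plausible way a second "strength" bit per variable node can be folded into a parallel bit-flipping iteration over the BSC. The flip/weaken rules here are only an illustration of the idea, not the specific rule set analyzed in the paper.

```python
import numpy as np

def two_bit_flip_decode(r, H, b_thresh=2, max_iter=50):
    """Illustrative two-bit bit-flipping decoder for the BSC (rules are assumptions).

    Each variable node keeps (value, strength): a strong variable with many
    unsatisfied checks is first weakened, a weak one is flipped.
    r : received hard-decision bits, shape (n,)
    H : parity-check matrix, shape (m, n)
    """
    x = r.copy().astype(int)
    strong = np.ones_like(x, dtype=bool)     # the extra "strength" bit
    for _ in range(max_iter):
        syn = H @ x % 2                      # unsatisfied checks
        if not syn.any():
            break
        u = H.T @ syn                        # unsatisfied-check count per variable
        hit = u >= b_thresh
        flip = hit & ~strong                 # weak variables with many bad checks flip
        weaken = hit & strong                # strong ones are only weakened this round
        x[flip] ^= 1
        strong[weaken] = False
        strong[flip] = False                 # flipped variables re-enter weak
        strong[~hit] = True                  # well-supported variables regain strength
    return x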

    Strong connections between quantum encodings, non-locality and quantum cryptography

    Encoding information in quantum systems can offer surprising advantages, but at the same time there are limitations that arise from the fact that measuring an observable may disturb the state of the quantum system. In our work, we provide an in-depth analysis of a simple question: what happens when we perform two measurements sequentially on the same quantum system? This question touches upon some fundamental properties of quantum mechanics, namely the uncertainty principle and the complementarity of quantum measurements. Our results have interesting consequences; for example, they provide a simple proof of the optimal quantum strategy in the famous Clauser-Horne-Shimony-Holt (CHSH) game. Moreover, we show that the way information is encoded in quantum systems can provide a different perspective for understanding other fundamental aspects of quantum information, like non-locality and quantum cryptography. We prove some strong equivalences between these notions and provide a number of applications in all areas. Comment: Version 3. Previous title: "Oblivious transfer, the CHSH game, and quantum encodings"
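    As a concrete anchor for the CHSH result mentioned above, the following sketch evaluates the winning probability of the standard optimal quantum strategy (a shared EPR pair, with Alice measuring at angles 0 and π/2 and Bob at ±π/4 in the X-Z plane) and compares it with the optimal value cos²(π/8) ≈ 0.854, versus 0.75 for any classical strategy. This is a numerical check of a well-known result, not the proof technique of the paper.

```python
import numpy as np
from itertools import product

def proj(t, outcome):
    """Projector onto the +/-1 eigenvector of cos(t)Z + sin(t)X."""
    v = np.array([np.cos(t / 2), np.sin(t / 2)])          # +1 eigenvector
    if outcome == 1:
        v = np.array([-np.sin(t / 2), np.cos(t / 2)])      # -1 eigenvector
    return np.outer(v, v)

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
alice = {0: 0.0, 1: np.pi / 2}                              # Alice's measurement angles
bob = {0: np.pi / 4, 1: -np.pi / 4}                         # Bob's measurement angles

win = 0.0
for x, y in product((0, 1), repeat=2):                      # uniform referee questions
    for a, b in product((0, 1), repeat=2):
        if (a ^ b) == (x & y):                              # CHSH winning condition
            P = np.kron(proj(alice[x], a), proj(bob[y], b))
            win += 0.25 * (phi_plus @ P @ phi_plus)

print(win, np.cos(np.pi / 8) ** 2)    # both ~0.8536, above the classical bound 0.75
```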

    Decryption Failure Attacks on Post-Quantum Cryptography

    This dissertation mainly discusses new cryptanalytical results related to securely implementing the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as it has been deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved in polynomial time on a quantum computer using Peter Shor's algorithm. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of Post-Quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers, and it is well suited to replace the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, both on the theoretical security and on the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are known as side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates, and 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to practical software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytical results.

    Error-correction on non-standard communication channels

    Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1 dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes, formed from low-density parity-check codewords with regular markers inserted within them, are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel, some error correction is often needed quickly, whilst the code should be able to correct the remaining errors later. A new code is formed from the intersection of a convolutional code with a high-rate low-density parity-check code. The convolutional code gives good early decoding performance, and the high-rate low-density parity-check code efficiently cleans up remaining errors after the entire block has been received. Simulations of the block code show a gain of 1.5 dB over a standard NASA code.
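    As a small illustration of the marker-code construction mentioned above, the sketch below inserts a fixed marker pattern at regular intervals into an LDPC codeword. The marker pattern and period are illustrative assumptions, and the iterative decoder that actually resynchronizes against insertions and deletions is not reproduced here.

```python
import numpy as np

def insert_markers(codeword, marker=(0, 1), period=10):
    """Transmit side of a marker code sketch: a known marker pattern is placed
    after every `period` code bits so the receiver can re-synchronize after
    insertion/deletion events (pattern and period are illustrative)."""
    out = []
    for i, bit in enumerate(codeword):
        out.append(int(bit))
        if (i + 1) % period == 0:
            out.extend(marker)
    return np.array(out, dtype=int)

def strip_markers(received, marker_len=2, period=10):
    """Inverse operation for a channel with no sync errors; the interesting case,
    realigning against shifted markers, requires the iterative decoder."""
    out = [bit for i, bit in enumerate(received)
           if i % (period + marker_len) < period]
    return np.array(out, dtype=int)
```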

    Experimental Realization of A Two Bit Phase Damping Quantum Code

    Using nuclear magnetic resonance techniques, we experimentally investigated the effects of applying a two-bit phase error detection code to preserve quantum information in nuclear spin systems. Input states were stored with and without coding, and the resulting output states were compared with the originals and with each other. The theoretically expected result, a net reduction of distortion and of conditional error probabilities to second order, was indeed observed, despite imperfect coding operations which increased the error probabilities by approximately 5%. Systematic study of the deviations from the ideal behavior provided quantitative measures of different sources of error, and good agreement was found with a numerical model. Theoretical questions in quantum error correction in bulk nuclear spin systems, including fidelity measures, signal strength and syndrome measurements, are discussed. Comment: 21 pages, 17 figures, mypsfig2, revtex. Minor changes made to appear in PR
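    The "reduction to second order" can be illustrated with a toy classical Monte Carlo model of a two-qubit phase error detection code (logical states |++⟩ and |−−⟩, parity check X⊗X): a single phase flip is detected and discarded, and only a double flip passes undetected. This is purely a scaling illustration under that simplified error model, not a simulation of the NMR experiment.

```python
import numpy as np

def conditional_error(p, trials=1_000_000, rng=None):
    """Toy model: each of two qubits suffers an independent phase flip with
    probability p. A single flip trips the X(x)X parity check and is discarded;
    a double flip slips through as an undetected logical error.
    Returns (error rate without coding, conditional error rate with coding)."""
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random((trials, 2)) < p
    detected = flips.sum(axis=1) == 1      # exactly one flip -> detected, discarded
    logical_error = flips.all(axis=1)      # both flipped -> undetected logical error
    kept = ~detected
    return p, logical_error[kept].mean()

print(conditional_error(0.05))   # ~ (0.05, 0.0028): first order vs second order in p
```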

    Image pre-processing to improve data matrix barcode read rates

    The main goal of this study is to research image processing methods in an attempt to develop a robust approach to pre-processing Data Matrix barcode images that will improve barcode read rates in an open-source fashion. This is demonstrated by element state classification to re-create the ideal binary matrix corresponding to the intended barcode layout through pattern recognition theory. The research consisted of implementing and evaluating the effectiveness of many types of image processing algorithms, as well as evaluating key features that clearly delineate different element states. The algorithms developed highlight the use of morphological erosion and region growing for object segmentation and edge analysis, and Fisher's Linear Discriminant as a means of element classification. The results demonstrate successful barcode binarization for ideal barcodes, with improved read rates in most cases. The techniques developed here provide groundwork for a test bed environment to continue improvements by analyzing non-ideal barcodes for additional robustness.
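    Below is a minimal sketch of the kind of pipeline the abstract describes, assuming OpenCV for the morphological erosion and scikit-learn's Fisher discriminant for module classification. The per-cell features, the cell grid handling, and the training data `labelled_examples` are assumptions for illustration, not the features or workflow selected in the study.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_modules(gray, cell_size, labelled_examples):
    """Sketch: erode a grayscale Data Matrix image to suppress speckle, then use
    Fisher's Linear Discriminant over simple per-cell features to decide whether
    each module is dark or light. `labelled_examples` = (features, labels) is a
    hypothetical training set of already-classified cells."""
    # Morphological erosion with a small kernel to clean up noise
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.erode(gray, kernel)

    # Cut the image into cells and compute simple per-cell features
    h, w = cleaned.shape
    feats, cells = [], []
    for r in range(0, h - cell_size + 1, cell_size):
        for c in range(0, w - cell_size + 1, cell_size):
            cell = cleaned[r:r + cell_size, c:c + cell_size]
            feats.append([cell.mean(), cell.std()])   # illustrative feature choice
            cells.append((r // cell_size, c // cell_size))

    # Fisher's Linear Discriminant separates "dark" and "light" modules
    X_train, y_train = labelled_examples
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    states = lda.predict(np.array(feats))

    # Re-create the binary matrix of module states
    rows = max(r for r, _ in cells) + 1
    cols = max(c for _, c in cells) + 1
    matrix = np.zeros((rows, cols), dtype=int)
    for (r, c), s in zip(cells, states):
        matrix[r, c] = s
    return matrix
```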