36 research outputs found

    Time-Invariant Spatially Coupled Low-Density Parity-Check Codes with Small Constraint Length

    Full text link
    We consider a special family of SC-LDPC codes, namely time-invariant LDPCC codes, which have been known in the literature for a long time. Codes of this kind are usually designed by starting from QC block codes and applying suitable unwrapping procedures. We show that, by directly designing the LDPCC code syndrome former matrix without the constraints of the underlying QC block code, it is possible to achieve smaller constraint lengths than the best solutions available in the literature. We also find theoretical lower bounds on the syndrome former constraint length for codes with a specified minimum length of the local cycles in their Tanner graphs. For this purpose, we exploit a new approach based on a numerical representation of the syndrome former matrix, which generalizes a technique we previously used to study a special subclass of the codes considered here. Comment: 5 pages, 4 figures, to be presented at IEEE BlackSeaCom 201

    Design and Analysis of Time-Invariant SC-LDPC Convolutional Codes With Small Constraint Length

    Full text link
    In this paper, we deal with time-invariant spatially coupled low-density parity-check convolutional codes (SC-LDPC-CCs). Classic design approaches usually start from quasi-cyclic low-density parity-check (QC-LDPC) block codes and exploit suitable unwrapping procedures to obtain SC-LDPC-CCs. We show that the direct design of the SC-LDPC-CC syndrome former matrix or, equivalently, the symbolic parity-check matrix, leads to codes with smaller syndrome former constraint lengths than the best solutions available in the literature. We provide theoretical lower bounds on the syndrome former constraint length for the most relevant families of SC-LDPC-CCs, under constraints on the minimum length of cycles in their Tanner graphs. We also propose new code design techniques that approach or achieve these theoretical limits. Comment: 30 pages, 5 figures, accepted for publication in IEEE Transactions on Communication
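    The abstract's key quantities can be made concrete for the common case where the symbolic parity-check matrix H(D) has monomial entries, so it is fully described by an exponent matrix P. A minimal sketch, assuming the usual conventions in this line of work (syndrome former memory m_h = largest exponent, constraint length nu_s = a·(m_h + 1) with a the number of columns, and the standard 2×2-submatrix condition for length-4 cycles); the function names and the example matrix are illustrative, not taken from the paper:

```python
from itertools import combinations

def syndrome_former_constraint_length(P):
    # P[i][j] is the exponent of the monomial D^P[i][j] in entry (i, j)
    # of H(D); m_h is the syndrome former memory, a the number of columns.
    m_h = max(max(row) for row in P)
    a = len(P[0])
    return a * (m_h + 1)

def has_length4_cycle(P):
    # A length-4 cycle in the convolutional Tanner graph of a monomial
    # H(D) exists iff some 2x2 submatrix of exponents satisfies
    # P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1] == 0.
    c, a = len(P), len(P[0])
    for i1, i2 in combinations(range(c), 2):
        for j1, j2 in combinations(range(a), 2):
            if P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1] == 0:
                return True
    return False

# Example: a 2x3 exponent matrix whose Tanner graph is free of 4-cycles
P = [[0, 0, 0],
     [0, 1, 2]]
```

    For this P, m_h = 2 and nu_s = 3·(2+1) = 9; minimizing nu_s subject to the cycle condition is exactly the kind of trade-off the paper's bounds quantify.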

    Quantum serial turbo-codes

    Get PDF
    We present a theory of quantum serial turbo-codes, describe their iterative decoding algorithm, and study their performance numerically on a depolarization channel. Our construction offers several advantages over quantum LDPC codes. First, the Tanner graph used for decoding is free of the 4-cycles that deteriorate the performance of iterative decoding. Second, the iterative decoder makes explicit use of the code's degeneracy. Finally, there is complete freedom in the code design in terms of length, rate, memory size, and interleaver choice. We define a quantum analogue of a state diagram that provides an efficient way to verify the properties of a quantum convolutional code, in particular its recursiveness and the presence of catastrophic error propagation. We prove that all recursive quantum convolutional encoders have catastrophic error propagation. In our constructions, the convolutional codes have therefore been chosen to be non-catastrophic and non-recursive. While the resulting families of turbo-codes have bounded minimum distance, from a pragmatic point of view the effective minimum distances of the codes that we have simulated are large enough not to degrade the iterative decoding performance up to reasonable word error rates and block sizes. With well-chosen constituent convolutional codes, we observe an important reduction of the word error rate as the code length increases. Comment: 24 pages, 15 figures, Published versio
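    The catastrophic-propagation property mentioned above has a well-known classical counterpart, which may help build intuition: a rate-1/2 feedforward convolutional encoder with generator polynomials g1(D), g2(D) over GF(2) is catastrophic iff gcd(g1, g2) is not a power of D. A minimal sketch of that classical test (the quantum state-diagram criterion in the paper is different; this is only the classical analogue):

```python
def gf2_poly_mod(a, m):
    # Remainder of polynomial a(x) modulo m(x) over GF(2),
    # with polynomials represented as integer bit masks.
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def gf2_poly_gcd(a, b):
    # Euclidean algorithm over GF(2)[x].
    while b:
        a, b = b, gf2_poly_mod(a, b)
    return a

def is_catastrophic(g1, g2):
    # Catastrophic iff gcd(g1, g2) is not a power of D, i.e. the gcd
    # bit mask has more than one bit set.
    return bin(gf2_poly_gcd(g1, g2)).count("1") > 1

# The classic (7, 5)-octal encoder: g1 = D^2 + D + 1, g2 = D^2 + 1
# has gcd 1, hence is non-catastrophic.
```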

    Analysis and Error Performances of Convolutional Doubly Orthogonal Codes with Non-Binary Alphabets

    Get PDF
    Recently, the self-orthogonal codes due to Massey were adapted to the realm of modern decoding techniques. Specifically, the self-orthogonal characteristics of this set of codes were expanded to doubly orthogonal conditions in order to accommodate iterative decoding algorithms, giving rise to the convolutional doubly orthogonal (CDO) codes. In addition to the belief propagation (BP) algorithm, the CDO codes also lend themselves to iterative threshold decoding, developed from the threshold decoding algorithm introduced by Massey, which offers a lower-complexity alternative to BP decoding. The convolutional doubly orthogonal codes are categorized into two subgroups: non-recursive CDO codes, built on shift-register structures without feedback, and recursive CDO (RCDO) codes, constructed from shift registers with feedback connections from the outputs. The non-recursive CDO codes demonstrate competitive error performance under iterative threshold decoding in the moderate Eb/N0 region, providing another set of low-density parity-check convolutional (LDPCC) codes with outstanding error performance. The recursive CDO codes, on the other hand, achieve exceptional error performance under BP decoding, with waterfall performance close to the Shannon limit. Additionally, in the study of LDPC codes, the use of finite fields GF(q) with q>2 as code alphabets has proved to improve error performance under the BP algorithm, giving rise to the q-ary LDPC codes.
    Inspired by the success of GF(q) alphabets for LDPC codes, we focus our attention on CDO codes with alphabets generalized to finite fields; in particular, we investigate the effects of this generalization on the error performance of CDO codes and its underlying causes. In this thesis, both recursive and non-recursive CDO codes are extended to finite fields GF(q) with q>2, referred to as q-ary CDO codes. Their error performance is examined through computer simulations using both the iterative threshold decoding and BP decoding algorithms. While the threshold decoding algorithm suffers some performance loss compared to the BP algorithm, it substantially reduces the decoding complexity, mainly owing to the fast convergence of the messages. The q-ary CDO codes demonstrate superior error performance compared to their binary counterparts under both iterative threshold decoding and BP decoding, most pronounced in the high Eb/N0 region; however, these improvements come at the price of higher decoding complexity, evaluated through the number of different operations needed in the decoding process. In order to facilitate the implementation of the q-ary CDO codes, we examined the effect of quantized message alphabets in the decoding process on the error performance of the codes. It is shown that the decoding process requires a finer quantization than in the case of binary codes.
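    The threshold-decoding idea the abstract refers to rests on Massey's majority-logic rule: an information bit is re-estimated from a set of parity-check sums that are "orthogonal" on it (each check involves that bit's error plus otherwise disjoint error bits), and the bit is flipped when a majority of the checks are violated. A toy, binary, one-step sketch of that rule on a hypothetical systematic construction (this is not the thesis's actual CDO codes; the encoding and names are illustrative only):

```python
def encode(b, a):
    # Toy systematic code: transmit the info bit b, helper bits a[k],
    # and parities p[k] = b XOR a[k]. The check sums
    # s[k] = r_b XOR r_a[k] XOR r_p[k] are orthogonal on b.
    return [b] + list(a) + [b ^ ak for ak in a]

def threshold_decode_b(r, J):
    # Massey-style majority decision: flip the received info bit
    # if more than half of the J orthogonal check sums are violated.
    rb, ra, rp = r[0], r[1:1 + J], r[1 + J:]
    syndromes = [rb ^ ra[k] ^ rp[k] for k in range(J)]
    return rb ^ (sum(syndromes) > J // 2)
```

    With J = 3 checks, any single channel error is corrected as far as the info bit is concerned: an error on b violates all three checks and triggers a flip, while an error on any helper or parity bit violates only one.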

    Near-capacity fixed-rate and rateless channel code constructions

    No full text
    Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that lend themselves to practical implementation whilst offering good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which have a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of having potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, that are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, without any explicit channel knowledge at the transmitter.
    Additionally, a generalised transmit preprocessing aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme, in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage, as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
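    The degree-distribution shaping that the rateless construction above performs builds on the classic LT-code machinery: each output symbol is the XOR of a random subset of input bits, with subset size drawn from a soliton-type distribution, and decoding proceeds by peeling degree-1 symbols. A generic sketch of that baseline (ideal soliton distribution and peeling decoder; this is Luby's standard scheme, not the thesis's specific reconfigurable codes):

```python
import random

def ideal_soliton_weights(k):
    # rho(1) = 1/k and rho(d) = 1/(d(d-1)) for d = 2..k.
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(data, rng):
    # One rateless output symbol: XOR of a random subset of the k input
    # bits, with the subset size drawn from the degree distribution.
    k = len(data)
    d = rng.choices(range(1, k + 1), weights=ideal_soliton_weights(k))[0]
    idx = rng.sample(range(k), d)
    val = 0
    for i in idx:
        val ^= data[i]
    return idx, val

def lt_peel_decode(k, symbols):
    # Peeling decoder: repeatedly resolve any symbol with exactly one
    # still-unknown neighbour, until no progress is possible.
    known = {}
    syms = [(set(i), v) for i, v in symbols]
    progress = True
    while progress and len(known) < k:
        progress = False
        for idx, v in syms:
            unknown = idx - known.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                for j in idx - {i}:
                    v ^= known[j]
                known[i] = v
                progress = True
    return known
```

    A reconfigurable rateless code in the abstract's sense would replace the fixed soliton weights with a distribution adapted to the near-instantaneous channel conditions.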

    Some Notes on Code-Based Cryptography

    Get PDF
    This thesis presents new cryptanalytic results in several areas of code-based cryptography. In addition, we investigate the possibility of using convolutional codes in code-based public-key cryptography. The first algorithm that we present is an information-set decoding algorithm, addressing the problem of decoding random linear codes. We apply the generalized birthday technique to information-set decoding, improving the computational complexity over previous approaches. Next, we present a new version of the McEliece public-key cryptosystem based on convolutional codes. The original construction uses Goppa codes, an algebraic code family admitting a well-defined code structure. In the two constructions proposed, large parts of randomly generated parity checks are used. By increasing the entropy of the generator matrix, this presumably makes structured attacks more difficult. Following this, we analyze a McEliece variant based on quasi-cyclic MDPC codes. We show that when the underlying code construction has an even dimension, the system is susceptible to what we call a squaring attack. Our results show that the new squaring attack allows for great complexity improvements over previous attacks on this particular McEliece construction. Then, we introduce two new techniques for finding low-weight polynomial multiples. First, we propose a general technique based on a reduction to the minimum-distance problem in coding, which increases the multiplicity of the low-weight codeword by extending the code. We use this algorithm to break some of the instances used by the TCHo cryptosystem. Second, we propose an algorithm for finding weight-4 polynomial multiples. By using the generalized birthday technique in conjunction with increasing the multiplicity of the low-weight polynomial multiple, we obtain a much better complexity than previously known algorithms. Lastly, two new algorithms for the learning parity with noise (LPN) problem are proposed.
The first is a general algorithm, applicable to any instance of LPN. The algorithm performs favorably compared to previously known algorithms, breaking the 80-bit security of the widely used (512, 1/8) instance. The second focuses on LPN instances over a polynomial ring, where the generator polynomial is reducible. Using this algorithm, we break an 80-bit security instance of the Lapin cryptosystem.
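    The weight-4 polynomial-multiple problem mentioned above has a simple collision-based core: if x^i + x^j and x^k + x^l have the same residue modulo P(x), then x^i + x^j + x^k + x^l is a multiple of P(x). A plain-birthday sketch of that idea (the thesis's generalized-birthday algorithm is more refined; function names and the search bound are illustrative):

```python
def gf2_mod(a, m):
    # Remainder of a(x) modulo m(x) over GF(2), ints as bit masks.
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def weight4_multiple(P, max_exp=64):
    # Collide pairwise sums of residues x^t mod P: two index pairs
    # (i, j) and (k, l) with equal residue sums yield the weight-4
    # multiple x^i + x^j + x^k + x^l.
    res = [gf2_mod(1 << t, P) for t in range(max_exp)]
    seen = {}
    for i in range(max_exp):
        for j in range(i + 1, max_exp):
            s = res[i] ^ res[j]
            if s in seen:
                k, l = seen[s]
                if {i, j}.isdisjoint({k, l}):
                    return (1 << i) | (1 << j) | (1 << k) | (1 << l)
            else:
                seen[s] = (i, j)
    return None
```

    Since the residues live in a space of size 2^deg(P), collisions appear once roughly 2^(deg(P)/2) pairs have been examined, which is the source of the birthday-style speed-up.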

    Sparse graph-based coding schemes for continuous phase modulations

    Get PDF
    The use of continuous phase modulation (CPM) is attractive when the channel exhibits strong non-linearity and when the spectral support is limited; in particular for the uplink, where the satellite has one amplifier per carrier, and for downlinks where the terminal equipment operates very close to the saturation region. Numerous studies have addressed this issue, but the proposed solutions use iterative CPM demodulation/decoding concatenated with convolutional or block error-correcting codes. The use of LDPC codes has not yet been introduced. In particular, to our knowledge, no work has been done on the optimization of sparse graph-based codes adapted to the context described here. In this study, we propose to perform the asymptotic analysis and the design of turbo-CPM systems based on the optimization of sparse graph-based codes. An analysis of the corresponding receiver is also carried out.
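    The constant-envelope, phase-continuity property that makes CPM robust to amplifier non-linearity is easy to see in the simplest member of the family, full-response CPFSK, where each symbol contributes a linear phase ramp. A minimal sketch (the modulation index and sampling rate are illustrative parameters, not values from the study):

```python
import math

def cpfsk_phase(symbols, h=0.5, sps=8):
    # Phase samples of full-response CPFSK (a special case of CPM):
    # each symbol a in {-1, +1} contributes a linear phase ramp of
    # pi * h * a over one symbol interval, so the phase trajectory is
    # continuous across symbol boundaries.
    phase, out = 0.0, []
    for a in symbols:
        for s in range(1, sps + 1):
            out.append(phase + math.pi * h * a * s / sps)
        phase += math.pi * h * a
    return out
```

    Because only the phase carries information, the transmitted signal cos(2*pi*f*t + phi(t)) has a constant envelope and can be driven through a saturated amplifier without distortion, which is precisely the satellite scenario described above.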

    Introduction to Mathematical Programming-Based Error-Correction Decoding

    Full text link
    Decoding error-correcting codes by methods of mathematical optimization, most importantly linear programming, has become an important alternative approach to both algebraic and iterative decoding methods since its introduction by Feldman et al. At first celebrated mainly for its analytical power, LP decoding is now within reach of real-world application thanks to recent research. This document gives an elaborate introduction to both mathematical optimization and coding theory, as well as a review of the contributions by which these two areas have found common ground. Comment: LaTeX sources maintained here: https://github.com/supermihi/lpdintr
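    The heart of Feldman's LP decoding is the relaxation of each parity check into linear "forbidden set" inequalities: for every check node with neighbourhood N(j) and every odd-sized subset S of N(j), a codeword must satisfy sum over S of x_i minus sum over N(j)\S of x_i <= |S| - 1. A sketch building these inequalities for the [7,4] Hamming code and verifying that all codewords satisfy them, while non-codewords violate some (no LP solver is invoked; this only illustrates the polytope construction):

```python
from itertools import combinations, product

# Parity-check matrix of the [7,4] Hamming code
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def feldman_inequalities(H):
    # For each check j and each odd-sized subset S of its neighbourhood
    # N(j):  sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1.
    ineqs = []
    for row in H:
        Nj = [i for i, h in enumerate(row) if h]
        for t in range(1, len(Nj) + 1, 2):
            for S in combinations(Nj, t):
                ineqs.append((set(S), set(Nj) - set(S)))
    return ineqs

def satisfies(x, ineqs):
    return all(sum(x[i] for i in S) - sum(x[i] for i in T) <= len(S) - 1
               for S, T in ineqs)

# Enumerate the 16 codewords by brute force
codewords = [x for x in product([0, 1], repeat=7)
             if all(sum(x[i] for i, h in enumerate(row) if h) % 2 == 0
                    for row in H)]
```

    LP decoding then minimizes the channel-derived linear cost over the polytope cut out by these inequalities (plus 0 <= x_i <= 1); the codewords are among its vertices, and integral LP optima are guaranteed ML codewords.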