119 research outputs found

    Analysis and Error Performances of Convolutional Doubly Orthogonal Codes with Non-Binary Alphabets

    Recently, the self-orthogonal codes due to Massey were adapted to the realm of modern decoding techniques. Specifically, the self-orthogonality conditions of this class of codes were extended to doubly orthogonal conditions in order to accommodate modern iterative decoding algorithms, giving rise to the convolutional doubly orthogonal (CDO) codes.
In addition to the belief propagation (BP) algorithm, the CDO codes also lend themselves to iterative threshold decoding, developed from the threshold decoding algorithm proposed by Massey, which offers a lower-complexity alternative to the BP decoding algorithm. The convolutional doubly orthogonal codes fall into two subgroups: the non-recursive CDO codes, which use shift-register encoding structures without feedback, and the recursive CDO (RCDO) codes, which are constructed from shift registers with feedback connections from the outputs. The non-recursive CDO codes demonstrate competitive error performance under iterative threshold decoding in the moderate Eb/N0 region, providing another set of low-density parity-check convolutional (LDPCC) codes with outstanding error performance. On the other hand, the recursive CDO codes achieve exceptional error performance under BP decoding, with waterfall performance close to the Shannon limit. Additionally, in the study of LDPC codes, the use of the finite fields GF(q) with q>2 as the code alphabets has been shown to improve error performance under the BP algorithm, giving rise to the q-ary LDPC codes. Inspired by the success of GF(q) alphabets for LDPC codes, we focus our attention on CDO codes whose alphabets are generalized to finite fields; in particular, we investigate the effects of this generalization on the error performance of the CDO codes and its underlying causes. In this thesis, both the recursive and the non-recursive CDO codes are extended to the finite fields GF(q) with q>2, and are referred to as q-ary CDO codes. Their error performances are examined through simulations using both the iterative threshold decoding and the BP decoding algorithms.
Whilst the threshold decoding algorithm suffers some performance loss compared with the BP algorithm, it substantially reduces the decoding complexity, mainly owing to the fast convergence of the messages. The q-ary CDO codes demonstrate superior error performance compared with their binary counterparts under both the iterative threshold decoding and the BP decoding algorithms, an advantage that is most pronounced in the high Eb/N0 region; however, these improvements come at the cost of an increase in decoding complexity, evaluated through the number of different operations needed in the decoding process. In order to facilitate the implementation of the q-ary CDO codes, we examined the effect of quantized message alphabets in the decoding process on the error performance of the codes; it is shown that the decoding process requires a finer quantization than in the binary case.
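The move from binary to q-ary alphabets means every decoder operation works in GF(q). As a minimal illustration (not taken from this thesis), the sketch below builds GF(4) arithmetic from the irreducible polynomial x^2 + x + 1; q-ary CDO/LDPC decoders perform their check-node algebra with exactly this kind of finite-field add and multiply.

```python
# Minimal GF(2^m) arithmetic sketch of the kind q-ary decoders rely on.
# Field: GF(4) built from the irreducible polynomial x^2 + x + 1 (0b111).
# Elements are m-bit integers; addition is XOR; multiplication is carry-less
# polynomial multiplication reduced modulo the field polynomial.

POLY = 0b111   # x^2 + x + 1, irreducible over GF(2)
M = 2          # GF(2^M) = GF(4)

def gf_add(a, b):
    # In characteristic 2, addition and subtraction are both XOR.
    return a ^ b

def gf_mul(a, b):
    prod = 0
    while b:                      # carry-less (XOR) multiplication
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    for shift in range(prod.bit_length() - 1, M - 1, -1):
        if prod & (1 << shift):   # reduce modulo POLY
            prod ^= POLY << (shift - M)
    return prod
```

For instance, gf_mul(2, 2) returns 3, i.e. x * x = x + 1 in GF(4), and every nonzero element has a multiplicative inverse.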

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm

    Channel Estimation Architectures for Mobile Reception in Emerging DVB Standards

    Throughout this work, channel estimation techniques have been analyzed and proposed for moderate- and very-high-mobility DVB (digital video broadcasting) receivers, focusing on the DVB-T2 (Digital Video Broadcasting - Terrestrial 2) framework and the forthcoming DVB-NGH (Digital Video Broadcasting - Next Generation Handheld) standard. Mobility support is one of the key features of these DVB specifications, which address the challenge of enabling HDTV (high definition television) delivery at high vehicular speeds. In high-mobility scenarios, the channel response varies within an OFDM (orthogonal frequency-division multiplexing) block and the subcarriers are no longer orthogonal, which leads to the so-called ICI (inter-carrier interference) and makes the system performance drop severely. Therefore, in order to successfully decode the transmitted data, ICI-aware detectors are necessary and accurate CSI (channel state information), including the ICI terms, is required at the receiver. With the aim of reducing the number of parameters required for such channel estimation while ensuring accurate CSI, BEM (basis expansion model) techniques have been analyzed and proposed for the high-mobility DVB-T2 scenario. A suitable clustered pilot structure has been proposed and its performance has been compared to the pilot patterns defined in the standard. Different reception schemes that effectively cancel ICI in combination with BEM channel estimation have been proposed, including a Turbo scheme that combines a BP (belief propagation) based ICI canceler, a soft-input decision-directed BEM channel estimator and the LDPC (low-density parity check) decoder.
Numerical results have been presented for the most common channel models, showing that the proposed receiver schemes allow good reception even in receivers with extremely high mobility (up to 0.5 of normalized Doppler frequency).
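The BEM idea above can be illustrated with a toy example: a time-varying channel tap across one block is compressed into a couple of basis coefficients. The sketch below uses a 2-term polynomial basis and a closed-form least-squares fit; this is an illustrative choice only, as DVB-grade estimators use richer bases (e.g. complex exponentials) fitted from pilots.

```python
# Toy basis expansion model (BEM) sketch: approximate a time-varying channel
# tap h[n] over one OFDM block by 2 coefficients of a polynomial basis {1, n}
# instead of N per-sample values.  Illustrative only; practical BEM estimators
# use more basis functions and estimate the coefficients from pilot symbols.

def bem_fit_linear(h):
    """Closed-form least-squares fit of h[n] ~ c0 + c1 * n, n = 0..N-1."""
    N = len(h)
    n_mean = (N - 1) / 2
    h_mean = sum(h) / N
    den = sum((n - n_mean) ** 2 for n in range(N))
    c1 = sum((n - n_mean) * (h[n] - h_mean) for n in range(N)) / den
    c0 = h_mean - c1 * n_mean
    return c0, c1

def bem_reconstruct(c0, c1, N):
    """Rebuild the N per-sample tap values from the 2 BEM coefficients."""
    return [c0 + c1 * n for n in range(N)]
```

A slowly drifting complex tap that is (near-)linear across the block is captured exactly by the two coefficients, which is the parameter reduction BEM is after.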

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. The two methods generate, in addition to some new cyclic iteratively decodable codes, the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes converge considerably well under iterative decoding for short block lengths. It is also shown that, for some of these codes, maximum-likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously best known linear codes have been found.
It is shown that, by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance and the number of codewords of a given Hamming weight of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed, and in conjunction with this it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived and a novel CRC-less error detection mechanism, which offers much better throughput and performance than the conventional CRC scheme, is described. Using the same method, it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental-redundancy communications system, and it is shown that sequences of good error correction codes, suitable for use in incremental-redundancy communications systems, may be obtained using Constructions X and XX. Examples are given and their performances presented in comparison to conventional CRC schemes.
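The exhaustive distance computations described above ultimately boil down to enumerating codewords and tracking the minimum weight. The following brute-force sketch does this for the textbook [7,4] Hamming code (an illustrative example, not a code from this study); the thesis-scale searches rely on revolving-door enumeration, multithreading and circulant structure to make far larger codes tractable.

```python
from itertools import product

# Brute-force minimum Hamming distance: enumerate all 2^k nonzero codewords
# of a small binary linear code from its generator matrix and record the
# smallest Hamming weight.  Illustrative only; infeasible beyond small k.

# Generator matrix [I4 | P] of the [7,4] Hamming code (a standard example).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def min_distance(G):
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the all-zero codeword
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))  # Hamming weight of this codeword
    return best

print(min_distance(G))  # → 3 for the [7,4] Hamming code
```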

    Advanced receivers for distributed cooperation in mobile ad hoc networks

    Mobile ad hoc networks (MANETs) are rapidly deployable wireless communications systems, operating with minimal coordination in order to avoid spectral efficiency losses caused by overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact is further amplified by the need to push the limits of energy and spectral efficiency. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design exploit approximate Bayesian inference and derived message-passing techniques to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures can be cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse-response decision feedback equalizers are designed; they achieve significant improvements over more conventional turbo-equalization techniques, especially in high spectral efficiency applications, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed, in order to obtain detection architectures with low computational complexity. This framework is theoretically and numerically analysed with a focus on channel equalization, and it is then extended to handle detection for time-varying channels and multiple-antenna systems.
The design of multiple-user detectors and the impact of channel estimation are also explored to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, thanks to a flexible MAC-level simulator.
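The EP-based equalizers discussed above refine classical linear filtering with iterative soft information. As a point of reference, here is a minimal linear MMSE FIR equalizer, the conventional baseline rather than the thesis's EP receiver; the channel taps, filter length and delay below are arbitrary illustrative choices.

```python
# Minimal linear MMSE FIR equalizer sketch: the conventional baseline that
# EP-based turbo equalizers refine.  Channel, length and delay are toy values.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mmse_fir(h, L, delay, noise_var):
    """Length-L MMSE equalizer taps for FIR channel h, targeting a delayed symbol."""
    m = len(h)
    ncols = L + m - 1
    # Channel convolution matrix: row i holds h shifted by i positions.
    H = [[h[j - i] if 0 <= j - i < m else 0.0 for j in range(ncols)]
         for i in range(L)]
    # Normal equations: (H H^T + noise_var * I) w = H e_delay
    R = [[sum(H[i][k] * H[j][k] for k in range(ncols))
          + (noise_var if i == j else 0.0) for j in range(L)] for i in range(L)]
    p = [H[i][delay] for i in range(L)]
    return solve(R, p)
```

With h = [1.0, 0.4] the resulting taps approximate the delayed inverse of 1 + 0.4 z^-1, so filtering the received samples recovers the transmitted symbols up to a small residual.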

    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly high-performance ones, and on the related decoding algorithms, especially those with low computational complexity. The work is composed of different pieces, but the main components are developed within the following two main themes. Ideas of message passing are applied to recover the erasures remaining after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the BEC (binary erasure channel) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, especially for recovering the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with a reduced computational complexity. A further study on the marginal number of correctable erasures by the In-place algorithm determines a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach ML performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose the product packetisation structure to keep the computational complexity of the In-place algorithm manageable. Combined with the proposed product packetisation structure, the computational complexity is less than the quadratic complexity bound. We then extend this to the Rayleigh fading channel, where both errors and erasures must be handled. By concatenating an outer code, such as a BCH code, the product-packetised RS codes with the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
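Message-passing erasure recovery on the BEC reduces to a simple loop: find a parity check with exactly one erased position, solve it, and repeat. The sketch below shows this peeling process on a toy parity-check matrix (an invented example, not a code from this work).

```python
# Peeling (message-passing) erasure recovery on the binary erasure channel:
# repeatedly find a parity check with exactly one erased position and solve it.
# Toy 3x6 parity-check matrix; illustrative only.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def peel(H, word):
    """word: list of 0/1 values or None (erased).  Returns the word with every
    erasure the peeling process could resolve filled in."""
    word = word[:]
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j, v in enumerate(word) if row[j] and v is None]
            if len(erased) == 1:
                j = erased[0]
                # The erased bit must make the check sum to 0 mod 2.
                word[j] = sum(word[k] for k in range(len(word))
                              if row[k] and k != j) % 2
                progress = True
    return word
```

When two or more erasures fall on every involved check, peeling stalls; that is exactly the regime the guess/multi-guess and maximum-likelihood In-place algorithms described above are designed to handle.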

    Near-capacity fixed-rate and rateless channel code constructions

    No full text
    Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that benefit from practical implementations whilst offering good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes with a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, that are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, without any explicit channel knowledge at the transmitter.
Additionally, a generalised transmit-preprocessing-aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
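The rateless principle above can be illustrated with a toy LT-style fountain code: the encoder emits XOR combinations of source packets for as long as needed, and a peeling decoder recovers the sources once enough symbols arrive. This is a generic sketch with a toy degree distribution and hypothetical packet values, not the reconfigurable construction proposed in this work.

```python
import random

# Toy LT-style fountain (rateless) encoder/decoder sketch.  The encoder can
# emit coded symbols indefinitely; the degree distribution is a toy choice,
# not an optimized (e.g. robust soliton) one.

def lt_encode_symbol(src, rng):
    """Emit one coded symbol: (set of source indices, XOR of those packets)."""
    k = len(src)
    d = 1 if rng.random() < 0.3 else rng.randint(2, k)
    idx = set(rng.sample(range(k), d))
    val = 0
    for i in idx:
        val ^= src[i]
    return idx, val

def lt_decode(k, symbols):
    """Peeling decoder: strip known sources, then use degree-1 symbols."""
    out = [None] * k
    syms = [[set(idx), val] for idx, val in symbols]
    changed = True
    while changed:
        changed = False
        for s in syms:
            for i in list(s[0]):          # remove already-recovered sources
                if out[i] is not None:
                    s[1] ^= out[i]
                    s[0].discard(i)
            if len(s[0]) == 1:            # degree-1 symbol reveals a source
                out[s[0].pop()] = s[1]
                changed = True
    return out
```

There is no fixed code rate: the receiver simply collects symbols until the peeling process completes, which is what makes such codes attractive when the channel quality is unknown at the transmitter.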

    Multi-carrier CDMA using convolutional coding and interference cancellation

    SIGLE. Available from British Library Document Supply Centre (DSC:DXN016251) / BLDSC - British Library Document Supply Centre, United Kingdom.

    Étude des propriétés des codes convolutionnels récursifs doublement-orthogonaux [Study of the properties of recursive convolutional doubly orthogonal codes]

    This thesis presents recursive convolutional doubly orthogonal (RCDO) codes. These new error-correcting codes represent a class of convolutional Low-Density Parity-Check (LDPC) codes that can easily be decoded iteratively. The doubly orthogonal conditions of RCDO codes allow the decoder to estimate a symbol with a set of equations that are independent over two successive iterations. This reduces error propagation throughout the iterative decoding process and therefore improves the error performance. The foundation of this research lies in work presented over the last decade at École Polytechnique de Montréal. As presented in this document, the error performances of RCDO codes are near the Shannon capacity for the additive white Gaussian noise and binary erasure channels considered. To achieve these error performances, only a simple recursive convolutional encoder built from several shift registers is required. Moreover, the iterative decoder is realized by concatenating the same simple threshold decoder a certain number of times, the number of concatenated decoders being the number of decoding iterations; the complete decoder is thus a cascade of identical threshold decoders, so only one simple threshold decoder needs to be designed to construct the complete iterative decoder. The implementation simplicity of the encoder and the iterative decoder of RCDO codes is advantageous compared with the implementation complexity of error-correcting techniques that achieve similar error performances, and makes the scheme well suited to sources delivering information symbols at high rates while requiring high error performance.
This thesis has several objectives. First of all, this work establishes a bridge between the family of LDPC block codes and the RCDO codes; indeed, both families are constructed from their parity-check matrices. From this correspondence, it becomes possible to identify an asymptotic convergence threshold for a family of RCDO codes, the limit below which the probability of a decoding error does not converge to zero. Moreover, we also present the complexity analysis associated with the encoding and decoding of RCDO codes. From this analysis, it becomes possible to impose hardware design criteria and to search for an ensemble of RCDO codes that meet all the hardware requirements while offering the best theoretical convergence threshold. The question that motivates this thesis is the following: is it possible to approach the theoretical Shannon limits of error-correcting codes over binary memoryless symmetric channels using the doubly orthogonal conditions imposed on convolutional codes? In this thesis, the symmetric channels considered are the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel.
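The encoding simplicity claimed above rests on recursive shift registers. As a minimal illustration, here is a generic rate-1/2 recursive systematic convolutional encoder with the textbook generator pair g1 = 1 + D + D^2 (feedback) and g2 = 1 + D^2; this is a standard stand-in to show the feedback structure, not an actual RCDO construction.

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder, generators
    (1, g2/g1) with g1 = 1 + D + D^2 and g2 = 1 + D^2 (the textbook
    octal 7/5 pair; an illustrative stand-in, not an RCDO code)."""
    s1 = s2 = 0                # two memory bits of the shift register
    out = []
    for u in bits:
        a = u ^ s1 ^ s2        # feedback bit: division by g1
        p = a ^ s2             # parity from the taps of g2 = 1 + D^2
        out.append((u, p))     # systematic output: (input bit, parity bit)
        s2, s1 = s1, a         # shift the register
    return out
```

Because the code is systematic, the first bit of each output pair is the input bit itself; the feedback makes the impulse response infinite, which is the "recursive" property the RCDO construction also exploits.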