
    Low-Density Arrays of Circulant Matrices: Rank and Row-Redundancy Analysis, and Quasi-Cyclic LDPC Codes

    This paper is concerned with a general analysis of the rank and row-redundancy of an array of circulants whose null space defines a QC-LDPC code. Based on the Fourier transform and the properties of conjugacy classes and Hadamard products of matrices, we derive tight upper bounds on the rank and row-redundancy of a general array of circulants, which make it possible to take row-redundancy into account when constructing QC-LDPC codes to achieve better performance. We further investigate the rank of two types of QC-LDPC code constructions, based on Vandermonde matrices and on Latin squares, and give combinatorial expressions for the exact rank in some specific cases, demonstrating the tightness of the bounds we derive. Moreover, several new constructions of QC-LDPC codes with large row-redundancy are presented and analyzed.
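
The rank results above rest on a classical fact: an n x n binary circulant generated by c(x) has rank n - deg gcd(c(x), x^n + 1) over GF(2). A minimal sketch (not the paper's Fourier-transform machinery) that checks this identity against brute-force Gaussian elimination, with polynomials stored as integer bitmasks:

```python
def gf2_deg(p):
    # degree of a GF(2)[x] polynomial stored as a bitmask (-1 for the zero polynomial)
    return p.bit_length() - 1

def gf2_mod(a, b):
    # remainder of a divided by b over GF(2)
    while a and gf2_deg(a) >= gf2_deg(b):
        a ^= b << (gf2_deg(a) - gf2_deg(b))
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def circulant_rank(c, n):
    # rank over GF(2) of the n x n circulant generated by c(x):
    # n - deg gcd(c(x), x^n + 1)
    return n - gf2_deg(gf2_gcd(c, (1 << n) | 1))

def brute_force_rank(c, n):
    # build the circulant rows (cyclic shifts of c) and row-reduce over GF(2)
    mask = (1 << n) - 1
    rows = [((c << i) | (c >> (n - i))) & mask for i in range(n)]
    rank = 0
    for bit in reversed(range(n)):
        piv = next((i for i in range(rank, n) if rows[i] >> bit & 1), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(n):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank
```

For example, with n = 7 and c(x) = 1 + x + x^3 (a factor of x^7 + 1), both routines give rank 4.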

    Hierarchical and High-Girth QC LDPC Codes

    We present a general approach to designing capacity-approaching, high-girth low-density parity-check (LDPC) codes that are friendly to hardware implementation. Our methodology starts by defining a new class of "hierarchical" quasi-cyclic (HQC) LDPC codes that generalizes the structure of quasi-cyclic (QC) LDPC codes. Whereas the parity check matrices of QC LDPC codes are composed of circulant sub-matrices, those of HQC LDPC codes are composed of a hierarchy of circulant sub-matrices that are in turn constructed from circulant sub-matrices, and so on, through some number of levels. We show how to map any class of codes defined using a protograph into a family of HQC LDPC codes. Next, we present a girth-maximizing algorithm that optimizes the degrees of freedom within the family of codes to yield a high-girth HQC LDPC code. Finally, we discuss how certain characteristics of a code protograph lead to inevitable short cycles, and show that these short cycles can be eliminated using a "squashing" procedure that results in a high-girth QC LDPC code, although not a hierarchical one. We illustrate our approach with designed examples of girth-10 QC LDPC codes obtained from protographs of one-sided spatially-coupled codes. Comment: Submitted to IEEE Transactions on Information Theory.
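
For ordinary QC LDPC codes, short cycles can be read directly off the exponent (shift) matrix: by Fossorier's well-known condition, a length-4 cycle exists iff the alternating sum of shifts around some 2x2 submatrix vanishes modulo the circulant size. A small sketch of that check, a building block any girth-maximizing search needs (the hierarchical generalization in the paper is not reproduced here):

```python
from itertools import combinations

def has_4cycle(P, N):
    """True iff the QC-LDPC code with exponent matrix P (circulant size N)
    contains a length-4 cycle, i.e. for some row pair (i, l) and column
    pair (j, k): P[i][j] - P[i][k] + P[l][k] - P[l][j] == 0 (mod N)."""
    for i, l in combinations(range(len(P)), 2):
        for j, k in combinations(range(len(P[0])), 2):
            if (P[i][j] - P[i][k] + P[l][k] - P[l][j]) % N == 0:
                return True
    return False
```

For instance, the exponent matrix [[0, 0, 0], [0, 1, 2]] with N = 5 passes the check (girth at least 6), while [[0, 0], [1, 1]] with N = 4 does not.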

    SIGNAL PROCESSING TECHNIQUES AND APPLICATIONS

    As technology scales down, more transistors can be fabricated in the same area, enabling the integration of many components on the same substrate, referred to as a system-on-chip (SoC). The components of an SoC are connected by on-chip global interconnects. The International Technology Roadmap for Semiconductors (ITRS) has shown that as feature sizes shrink, gate delay decreases but global interconnect delay increases due to crosstalk. Interconnect delay has become a bottleneck for overall system performance. Many techniques have been proposed to address crosstalk, such as shielding, buffer insertion, and crosstalk avoidance codes (CACs). CACs are a promising technique due to their good crosstalk reduction, lower power consumption, and smaller area. In this dissertation, I present analytical delay models for on-chip interconnects with improved accuracy. These models enable more accurate control of the delays of transition patterns and lead to a more efficient CAC, whose worst-case delay is 30-40% smaller than the best previously proposed CACs. As clock frequencies approach multiple gigahertz, the parasitic inductance of on-chip interconnects becomes significant, and its detrimental effects, including increased delay, voltage overshoots and undershoots, and increased crosstalk noise, can no longer be ignored. We introduce new CACs that address capacitive and inductive coupling simultaneously.

    Quantum computers are more powerful than classical computers in solving some NP problems. However, quantum computers suffer greatly from unwanted interactions with the environment. Quantum error correction codes (QECCs) are needed to protect quantum information against noise and decoherence. Given their good error-correcting performance, it is desirable to adapt existing iterative decoding algorithms for LDPC codes to obtain LDPC-based QECCs. Several QECCs based on nonbinary LDPC codes have been proposed with much better error-correcting performance than existing quantum codes over a qubit channel. In this dissertation, I present stabilizer codes based on nonbinary QC-LDPC codes for qubit channels. The results confirm the observation that QECCs based on nonbinary LDPC codes appear to achieve better performance than QECCs based on binary LDPC codes.

    As technology scales down further to the nanoscale, CMOS devices suffer greatly from quantum mechanical effects. Some emerging nanodevices, such as resonant tunneling diodes (RTDs), quantum cellular automata (QCA), and single electron transistors (SETs), do not have these issues and are promising candidates to replace traditional CMOS devices. Threshold gates, which can implement complex Boolean functions within a single gate, can easily be realized with these devices. Several applications dealing with real-valued signals have already been realized using nanotechnology-based threshold gates. Unfortunately, applications over finite fields, such as error correcting coding and cryptography, have not. The main obstacle is that they require a great number of exclusive-ORs (XORs), which cannot be realized in a single threshold gate. Moreover, the fan-in of a threshold gate in RTD nanotechnology must be bounded for both reliability and performance. In this dissertation, I present a majority-class threshold architecture for XORs with bounded fan-in and compare it with a Boolean-class architecture. I show an application of the proposed XORs to finite field multiplication. The analysis shows that the majority-class architecture outperforms the Boolean-class architecture in terms of hardware complexity and latency. I also introduce a sort-and-search algorithm, which can be used to implement any symmetric function. Since XOR is a symmetric function, it can be implemented via the sort-and-search algorithm. To leverage the power of multi-input threshold functions, I generalize the previously proposed sort-and-search algorithm from a fan-in of two to arbitrary fan-ins, and propose an architecture for multi-input XORs with bounded fan-ins.
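
As a toy illustration of why XOR needs a multi-gate construction in a threshold-gate technology: a two-input XOR, which no single threshold gate can realize, can be composed from three 3-input majority gates plus one inverter, using MAJ(a, b, 0) = AND and MAJ(a, b, 1) = OR. This is a generic textbook construction for illustration only, not the bounded-fan-in architecture proposed in the dissertation:

```python
def maj(a, b, c):
    # 3-input majority: a single threshold gate with unit weights and threshold 2
    return int(a + b + c >= 2)

def xor2(a, b):
    # XOR(a, b) = (a OR b) AND NOT(a AND b), with AND/OR as biased majorities
    return maj(maj(a, b, 1), 1 - maj(a, b, 0), 0)
```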

    ADVANCED SIGNAL PROCESSING FOR MAGNETIC RECORDING ON PERPENDICULARLY MAGNETIZED MEDIA

    In magnetic recording channels (MRCs), the readback signal is corrupted by many kinds of impairments, such as electronic noise, media noise, intersymbol interference (ISI), inter-track interference (ITI), and different types of erasures. The growing demand for information storage leads to the continuing pursuit of higher recording density, which magnifies the impact of noise contamination and makes recovering user data from magnetic media more challenging. In this dissertation, we develop advanced signal processing techniques to mitigate these impairments in MRCs. We focus on magnetic recording on perpendicularly magnetized media, from state-of-the-art continuous media to bit-patterned media, a possible choice for the next generation of products. We propose novel techniques for soft-input soft-output channel detection, soft iterative decoding of low-density parity-check (LDPC) codes, and LDPC code designs for MRCs.

    First, we apply the optimal subblock-by-subblock detector (OBBD) to nonbinary LDPC coded perpendicular magnetic recording channels (PMRCs) and derive a symbol-based detector to perform turbo equalization exactly. Second, we propose improved belief-propagation (BP) decoders for both binary and nonbinary LDPC coded PMRCs, which provide significant gains over the standard BP decoder. Third, we introduce novel LDPC code design techniques to construct LDPC codes with fewer short cycles; performance improvement is achieved by applying the new codes to PMRCs. Fourth, we carry out a substantial investigation of Reed-Solomon (RS) plus LDPC coded PMRCs. Finally, we extend our research to bit-patterned magnetic recording (BPMR) channels at extremely high recording densities. A multi-track detection technique is proposed to mitigate the severe ITI in BPMR channels. Multi-track detection with both joint-track and two-dimensional (2D) equalization provides significant performance improvement over conventional equalization and detection methods.
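
The severity of ITI in bit-patterned media can be pictured with a simple linear readback model: each read sample mixes the target bit with its along-track (ISI) and cross-track (ITI) neighbours through a 2-D response mask, which is what joint-track/2-D equalization must undo. A minimal sketch with an illustrative, not measured, response mask:

```python
def readback_2d(bits, mask):
    """Linear 2-D ISI/ITI model: sample (t, x) is the mask-weighted sum of
    neighbouring bits (rows = tracks, columns = along-track positions),
    with zero padding outside the array."""
    T, X = len(bits), len(bits[0])
    ct, cx = len(mask) // 2, len(mask[0]) // 2
    out = [[0.0] * X for _ in range(T)]
    for t in range(T):
        for x in range(X):
            out[t][x] = sum(
                mask[i][j] * bits[t + i - ct][x + j - cx]
                for i in range(len(mask))
                for j in range(len(mask[0]))
                if 0 <= t + i - ct < T and 0 <= x + j - cx < X
            )
    return out

# illustrative symmetric response: centre tap 1.0, ISI taps 0.3, ITI taps 0.2
MASK = [[0.0, 0.2, 0.0],
        [0.3, 1.0, 0.3],
        [0.0, 0.2, 0.0]]
```

A single written bit then leaks 0.3 of its amplitude into adjacent along-track samples and 0.2 into the neighbouring tracks, which a single-track detector simply treats as noise.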

    Analysis and Error Performances of Convolutional Doubly Orthogonal Codes with Non-Binary Alphabets

    Recently, the self-orthogonal convolutional codes due to Massey were adapted to modern decoding techniques. Specifically, the self-orthogonality properties of this class of codes were extended to doubly orthogonal conditions in order to accommodate iterative decoding algorithms, giving rise to convolutional doubly orthogonal (CDO) codes. In addition to the belief propagation (BP) algorithm, CDO codes also lend themselves to iterative threshold decoding, developed from Massey's threshold decoding algorithm, which offers a lower-complexity alternative to BP decoding. Convolutional doubly orthogonal codes fall into two subgroups: non-recursive CDO codes, built on shift-register structures without feedback, and recursive CDO (RCDO) codes, constructed from shift registers with feedback connections from the outputs. Non-recursive CDO codes demonstrate competitive error performance under iterative threshold decoding in the moderate Eb/N0 region, providing another family of low-density parity-check convolutional (LDPCC) codes with outstanding error performance. Recursive CDO codes, on the other hand, achieve exceptional error performance under BP decoding, with waterfall performance close to the Shannon limit. Additionally, in the study of LDPC codes, using the finite fields GF(q) with q>2 as code alphabets has been shown to improve error performance under the BP algorithm, giving rise to q-ary LDPC codes.

    Inspired by the success of GF(q) alphabets for LDPC codes, we focus our attention on CDO codes whose alphabets are generalized to finite fields; in particular, we investigate the effects of this generalization on the error performance of CDO codes and its underlying causes. In this thesis, both recursive and non-recursive CDO codes are extended to the finite fields GF(q) with q>2, referred to as q-ary CDO codes. Their error performance is examined through computer simulations using both the iterative threshold decoding and BP decoding algorithms. While threshold decoding suffers some performance loss compared to the BP algorithm, it greatly reduces decoding complexity, mainly due to the fast convergence of the messages. The q-ary CDO codes demonstrate superior error performance compared to their binary counterparts under both iterative threshold decoding and BP decoding, most pronounced in the high Eb/N0 region; however, these improvements come at the price of increased decoding complexity, evaluated through the number of different operations needed in the decoding process. To facilitate the implementation of q-ary CDO codes, we examine the effect of quantized message alphabets in the decoding process on error performance. It is shown that the decoding process requires finer quantization than in the binary case.
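
The q-ary generalization replaces bit-level XORs with arithmetic in GF(q). As a small, self-contained illustration (standard field arithmetic, not the thesis's decoder), here is GF(4) = GF(2)[x]/(x^2 + x + 1) with elements encoded as 2-bit masks:

```python
def gf4_add(a, b):
    # addition in GF(4) is bitwise XOR of the coefficient vectors
    return a ^ b

def gf4_mul(a, b):
    # carry-less multiply of the two coefficient vectors...
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    # ...then reduce modulo x^2 + x + 1 (bitmask 0b111)
    if r & 0b100:
        r ^= 0b111
    return r
```

For example, with 2 and 3 encoding x and x+1: gf4_mul(2, 2) = 3 (x^2 = x + 1) and gf4_mul(2, 3) = 1, so every nonzero element has an inverse, as a field requires.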

    Modulation, Coding, and Receiver Design for Gigabit mmWave Communication

    While wireless communication has become a ubiquitous part of our daily lives and the world around us, it has not yet been able to deliver the multi-gigabit throughput required for applications like high-definition video transmission or cellular backhaul communication. The throughput limitation of current wireless systems is mainly the result of a shortage of spectrum and the problem of congestion. Recent advances in circuit design allow the realization of analog frontends for mmWave frequencies between 30 GHz and 300 GHz, making abundant unused spectrum accessible. However, the transition to mmWave carrier frequencies and GHz bandwidths brings new challenges for wireless receiver design: large variations in channel conditions and high symbol rates require flexible but power-efficient receivers. This thesis investigates receiver algorithms and architectures that enable multi-gigabit mmWave communication. Using a system-level approach, the design options between low-power time-domain and power-hungry frequency-domain signal processing are explored.

    The system discussion starts with an analysis of the problem of parameter synchronization in mmWave systems and its impact on system design. The proposed synchronization architecture extends known synchronization techniques to provide greater flexibility regarding operating environments and for system efficiency optimization. For frequency-selective environments, versatile single-carrier frequency-domain equalization (SC-FDE) offers not only excellent channel equalization but also the possibility of integrating additional baseband tasks without overhead. Hence, the high initial complexity of SC-FDE must be weighed against the complexity savings in the other parts of the baseband. Furthermore, an extension to the SC-FDE architecture is proposed that adapts the equalization complexity by switching between a cyclic-prefix mode and a reduced-block-length overlap-save mode based on the delay spread.

    Approaching the problem of complexity adaptation from the time domain, a high-speed hardware architecture for the delayed decision feedback sequence estimation (DDFSE) algorithm is presented. DDFSE uses decision feedback to reduce the complexity of sequence estimation and allows the system performance to be set between that of full maximum-likelihood detection and pure decision feedback equalization. An implementation of the DDFSE architecture is demonstrated as part of an all-digital IEEE 802.11ad baseband ASIC manufactured in 40 nm CMOS.

    A flexible architecture for wideband mmWave receivers based on complex sub-sampling is presented. Complex sub-sampling combines the design advantages of sub-sampling receivers with the flexibility of direct-conversion receivers using a single passive component and a digital compensation scheme. The feasibility of the architecture is proven with a 16 Gb/s hardware demonstrator. The demonstrator is used to explore the potential gain of non-equidistant constellations for high-throughput mmWave links. A specifically crafted amplitude phase-shift keying (APSK) modulation achieves a 1 dB average mutual information (AMI) advantage over quadrature amplitude modulation (QAM) in simulation and on the testbed hardware. The AMI advantage of APSK can be leveraged for practical transmission using polar codes trained specifically for the constellation.
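
The cyclic-prefix mode of SC-FDE reduces to per-bin operations: with a cyclic prefix, the channel acts as a circular convolution, so equalization is an FFT, a per-subcarrier division (zero-forcing here for simplicity; a practical receiver would use MMSE coefficients), and an inverse FFT. A minimal, dependency-free sketch using a naive DFT:

```python
import cmath

def dft(x, inverse=False):
    # naive O(n^2) DFT, sufficient for a small illustration
    n = len(x)
    s = 2j * cmath.pi / n * (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * i * k) for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def sc_fde_zf(rx_block, h, n):
    # zero-forcing SC-FDE for one cyclic-prefix block: divide each frequency
    # bin by the channel response, then return to the time domain
    H = dft(h + [0.0] * (n - len(h)))
    Y = dft(rx_block)
    return dft([Y[i] / H[i] for i in range(n)], inverse=True)
```

For a two-tap channel h = [1.0, 0.5] and block [1, -1, 1, 1], circularly convolving and then equalizing recovers the transmitted block to numerical precision.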

    Algorithms for 5G physical layer

    There is great activity in the research community towards investigating the various aspects of 5G at different protocol layers and parts of the network. Among these, physical layer design plays a very important role in satisfying the high demands on data rates, latency, reliability, and number of connected devices for 5G deployment. This thesis addresses the latest developments in physical layer algorithms for channel coding, signal detection, frame synchronization, and multiple access in light of 5G use cases. These developments are governed by the requirements of the different use case scenarios envisioned to be the driving force of 5G. Chapters 2 to 5 are each developed around the need for physical layer algorithms dedicated to 5G use cases. In brief, this thesis focuses on the design, analysis, simulation, and advancement of the following physical layer aspects:
    1. Reliability-based decoding of short linear block codes (LBCs) with very good minimum Hamming distance properties, for applications requiring very low latency. In this context, we enlarge the grid of possible candidates by considering, in particular, short LBCs (especially extended BCH codes) with soft-decision decoding.
    2. Efficient synchronization of the preamble/postamble in a short bursty frame using a modified Massey correlator.
    3. Detection of primary user activity using semiblind spectrum sensing algorithms, and analysis of such algorithms under practical imperfections.
    4. Design of an optimal spreading matrix for a low-density spreading (LDS) technique in the context of non-orthogonal multiple access. In such a spreading matrix, only a small number of elements in each spreading sequence are nonzero, allowing each user to spread its data over a small number of chips (tones), thus simplifying the decoding procedure using the message passing algorithm (MPA).
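
For the frame synchronization item, a plain sliding correlator is the starting point that the modified Massey correlator refines with a data-dependent energy-correction term; that correction is omitted in this illustrative sketch:

```python
def detect_preamble(rx, preamble):
    # slide the known preamble over the received samples and return the
    # offset with the largest correlation metric
    L = len(preamble)
    metrics = [
        sum(rx[d + i] * preamble[i] for i in range(L))
        for d in range(len(rx) - L + 1)
    ]
    return max(range(len(metrics)), key=metrics.__getitem__)
```

On a noise-free burst with the preamble embedded at offset 3, the metric peaks at 3; under noise and frequency offset the Massey-style correction becomes important.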

    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems, and revolutionary reversible logic gates and circuits, but these can only be realized and have a lasting effect if conceptual and firm theoretical foundations are established first.
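
A concrete taste of the paradigm: the Toffoli (CCNOT) gate is a universal reversible logic gate and is its own inverse, so any circuit built from it can be run backwards step by step. A minimal sketch:

```python
def toffoli(a, b, c):
    # CCNOT: flip the target bit c iff both control bits a and b are 1
    return a, b, c ^ (a & b)
```

Applying the gate twice returns the original triple, and the map is a bijection on the eight three-bit states: no information is erased, which is the heart of reversibility.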