
    Polar Coding for the Large Hadron Collider: Challenges in Code Concatenation

    In this work, we present a concatenated repetition-polar coding scheme aimed at applications requiring highly unbalanced unequal bit-error protection, such as the Beam Interlock System of the Large Hadron Collider at CERN. Although this concatenation scheme is simple, it reveals significant challenges that may be encountered when designing a concatenated scheme that uses a polar code as an inner code, such as error correlation and unusual decision log-likelihood ratio distributions. We explain and analyze these challenges and propose two ways to overcome them.
    Comment: Presented at the 51st Asilomar Conference on Signals, Systems, and Computers, November 2017
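
The outer repetition stage and the inner polar transform described above can be sketched in a few lines. This is an illustrative toy, not the actual Beam Interlock System design: the block length, repetition factor, and reliability ordering below are hypothetical assumptions.

```python
# Toy sketch of a concatenated repetition-polar encoder, in the spirit of the
# scheme above. Block length, repetition factor, and the reliability order
# are illustrative assumptions, not the actual Beam Interlock System design.

def polar_transform(u):
    """Apply the Arikan transform (n-fold Kronecker power of F = [[1,0],[1,1]])
    in natural order to a bit list whose length is a power of two
    (bit-reversal permutation omitted)."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # Combine the halves as (a XOR b, b), then recurse on each half.
    top = polar_transform([u[i] ^ u[i + half] for i in range(half)])
    bot = polar_transform([u[i + half] for i in range(half)])
    return top + bot

def repetition_outer(bits, r):
    """Outer repetition code: repeat each information bit r times."""
    return [b for b in bits for _ in range(r)]

# Protect 2 critical bits with a rate-1/2 repetition outer code, map the 4
# resulting bits onto an assumed reliability order of an N=8 polar code, and
# freeze the remaining positions to 0.
info = [1, 0]
outer = repetition_outer(info, 2)       # -> [1, 1, 0, 0]
reliable_positions = [3, 5, 6, 7]       # hypothetical reliability order
u = [0] * 8
for pos, bit in zip(reliable_positions, outer):
    u[pos] = bit
codeword = polar_transform(u)
```

With this placement, each critical bit appears twice among the polar code's (assumed) most reliable bit channels, giving it the highly unbalanced protection the abstract targets; over GF(2) the transform is its own inverse, which makes the sketch easy to check.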

    Concatenated Polar Codes and Joint Source-Channel Decoding

    In this dissertation, we mainly address two issues: 1. improving the finite-length performance of capacity-achieving polar codes; 2. using polar codes to efficiently exploit source redundancy to improve the reliability of data storage systems. In the first part of the dissertation, we propose interleaved concatenation schemes of polar codes with outer binary BCH and convolutional codes to improve the finite-length performance of polar codes. For asymptotically long blocklengths, we show that our schemes achieve an exponential error decay rate, which is much larger than the sub-exponential decay rate of stand-alone polar codes. In practice, we show by simulation that our schemes outperform stand-alone polar codes decoded with successive cancellation or belief propagation decoding. The performance of concatenated polar and convolutional codes can be comparable to that of stand-alone polar codes with list decoding in the high signal-to-noise ratio regime. In addition, we show that the proposed concatenation schemes require less memory and lower decoding complexity than belief propagation and list decoding of polar codes. With the proposed schemes, polar codes are able to strike a good balance between performance, memory, and decoding complexity. The second part of the dissertation is devoted to improving the decoding performance of polar codes when there is leftover redundancy after source compression. We focus on language-based sources and propose a joint source-channel decoding scheme for polar codes. We show that if the language decoder is modeled as an erasure-correcting outer block code, the rate of the inner polar code can be improved while still guaranteeing a vanishing probability of error. The improved rate depends on the frozen-bit distribution of polar codes, and we provide a formal proof of the convergence of that distribution. Both a lower bound and a maximum-improved-rate analysis are provided.
To compare with the non-iterative joint list decoding scheme for polar codes, we study a joint iterative decoding scheme with graph codes. In particular, irregular repeat-accumulate codes are exploited because of their low encoding/decoding complexity and their capacity-achieving property for the binary erasure channel. We show how to design optimal irregular repeat-accumulate codes for different models of the language decoder, and we show that our scheme achieves improved decoding thresholds. A comparison of joint polar decoding and joint irregular repeat-accumulate decoding is given.
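
The interleaved concatenation idea above, spreading the correlated errors of an inner polar successive-cancellation decoder across several outer BCH or convolutional codewords, can be made concrete with a simple row/column block interleaver (an illustrative construction, not necessarily the dissertation's exact design):

```python
# Illustrative block interleaver: write bits row-wise into a rows x cols
# array and read them out column-wise, so a burst of adjacent errors after
# inner decoding lands in different outer codewords (one per row).

def block_interleave(bits, rows, cols):
    """Row-wise write, column-wise read."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse of block_interleave: column-wise write, row-wise read."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

After deinterleaving, a length-`rows` error burst contributes at most one error to each of the `rows` outer codewords, which is what lets a modest outer code clean up the inner decoder's correlated failures.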

    Constructing Linear Encoders with Good Spectra

    Linear encoders with good joint spectra are suitable candidates for optimal lossless joint source-channel coding (JSCC), where the joint spectrum is a variant of the input-output complete weight distribution and is considered good if it is close to the average joint spectrum of all linear encoders (of the same coding rate). In spite of their existence, little is known about how to construct such encoders in practice. This paper is devoted to their construction. In particular, two families of linear encoders are presented and proved to have good joint spectra. The first family is derived from Gabidulin codes, a class of maximum-rank-distance codes. The second family is constructed using a serial concatenation of an encoder of a low-density parity-check code (as outer encoder) with a low-density generator-matrix encoder (as inner encoder). In addition, criteria for good linear encoders are defined for three coding applications: lossless source coding, channel coding, and lossless JSCC. In the framework of the code-spectrum approach, these three scenarios correspond to the problems of constructing linear encoders with good kernel spectra, good image spectra, and good joint spectra, respectively. Good joint spectra imply both good kernel spectra and good image spectra, and for every linear encoder having a good kernel (resp., image) spectrum, it is proved that there exists a linear encoder not only with the same kernel (resp., image) but also with a good joint spectrum. Thus a good joint spectrum is the most important feature of a linear encoder.
    Comment: v5.5.5, no. 201408271350, 40 pages, 3 figures, extended version of the paper to be published in IEEE Transactions on Information Theory
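
For intuition, the joint (input-output weight) spectrum of a small binary linear encoder can be enumerated by brute force as a histogram of (input weight, output weight) pairs. This toy enumeration, practical only for tiny dimensions, is our own illustration of the object being studied, not the paper's code-spectrum machinery:

```python
# Brute-force joint weight spectrum of a binary linear encoder given by a
# k x n generator matrix G (list of k rows of n bits). Exponential in k, so
# this is strictly an illustration for small codes.
from collections import Counter
from itertools import product

def joint_spectrum(G):
    """Count (Hamming weight of input, Hamming weight of output) pairs
    over all 2^k encoder inputs."""
    k, n = len(G), len(G[0])
    spec = Counter()
    for x in product([0, 1], repeat=k):
        y = [sum(x[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        spec[(sum(x), sum(y))] += 1
    return spec
```

For the [3, 2] single-parity-check encoder with G = [[1,0,1],[0,1,1]], the spectrum is {(0,0): 1, (1,2): 2, (2,2): 1}; "good" encoders in the paper's sense are those whose normalized histogram stays close to the ensemble average of all rate-k/n linear encoders.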

    Sparse graph-based coding schemes for continuous phase modulations

    The use of continuous phase modulation (CPM) is interesting when the channel exhibits strong non-linearity and when spectral support is limited; particularly for the uplink, where the satellite has one amplifier per carrier, and for downlinks where the terminal equipment operates very close to the saturation region. Numerous studies have been conducted on this issue, but the proposed solutions use iterative CPM demodulation/decoding concatenated with convolutional or block error-correcting codes. The use of LDPC codes has not yet been introduced in this context. In particular, to our knowledge, no work has been done on the optimization of sparse graph-based codes adapted to the context described here. In this study, we propose to perform the asymptotic analysis and the design of turbo-CPM systems based on the optimization of sparse graph-based codes. Moreover, an analysis of the corresponding receiver is carried out.

    Implementation of LDPC codes in OFDM and SC-FDE

    Developments in wireless communication systems point toward high-speed, high-quality-of-service transmission with efficient use of energy. Spectral efficiency can be obtained through multilevel modulations, while improvements in power efficiency can be provided by the use of error-correcting codes. Low-Density Parity-Check (LDPC) codes, owing to their performance close to the Shannon limit and their low implementation and decoding complexity, are well suited to future wireless communication systems. On the other hand, the use of multilevel modulations imposes limitations on amplification. However, efficient amplification can be ensured by transmitter structures in which multilevel modulations are decomposed into constant-envelope sub-modulations that can be amplified by nonlinear amplifiers operating in the saturation region. In this type of structure, phase and gain imbalances arise, producing distortions in the constellation that results from the sum of all amplified signals. This work focuses on the use of LDPC codes in multicarrier and single-carrier schemes, with special emphasis on the performance of iterative equalization implemented in the frequency domain by an Iterative Block-Decision Feedback Equalizer (IB-DFE). Aspects such as the impact of the number of decoding iterations within the equalization iterations are analyzed. LDPC codes are also used to compensate for phase imbalances in iterative receivers for systems based on transmitters with several amplification branches. A study is made of how these codes can increase tolerance to phase errors, including a complexity analysis and an algorithm for estimating the phase imbalances.
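
As a point of reference for the single-carrier frequency-domain processing discussed above, a one-tap MMSE frequency-domain equalizer, the linear stage that an IB-DFE then refines with iterative block feedback, can be sketched as follows. This is a minimal illustration with a naive DFT; the function names and parameters are ours, not the thesis's implementation:

```python
# Minimal SC-FDE sketch: one-tap MMSE equalization in the frequency domain,
# X_hat_k = conj(H_k) * Y_k / (|H_k|^2 + noise_var). A naive O(N^2) DFT is
# used for self-containment; a real receiver would use an FFT.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def mmse_fde(y, h, noise_var):
    """Equalize a received block y given the channel impulse response h.
    The noise_var term regularizes the per-bin inversion (MMSE rather
    than zero-forcing), avoiding noise enhancement in deep fades."""
    Y = dft(y)
    H = dft(list(h) + [0] * (len(y) - len(h)))  # zero-padded channel
    X = [Hk.conjugate() * Yk / (abs(Hk) ** 2 + noise_var)
         for Hk, Yk in zip(H, Y)]
    return idft(X)
```

An IB-DFE wraps this stage in a loop: after each pass, hard or soft symbol decisions (here aided by the LDPC decoder) are fed back to cancel residual inter-symbol interference before the next equalization pass.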

    Multiple Parallel Concatenated Gallager Codes and Their Applications

    Due to the increasing data-rate demands of modern wireless communications, there is significant interest in error control coding, which now plays a major role in digital communication systems in overcoming the weaknesses of communication channels. This thesis presents a comprehensive investigation of a class of error control codes known as Multiple Parallel Concatenated Gallager Codes (MPCGCs), obtained by the parallel concatenation of well-designed LDPC codes. MPCGCs are constructed by breaking a long, high-complexity conventional single LDPC code into three or four smaller, lower-complexity LDPC codes. The design of MPCGCs is simplified by selecting the component codes at random based on a single parameter, the Mean Column Weight (MCW). MPCGCs offer flexibility and scope for improving coding performance in theoretical and practical implementations. The performance of MPCGCs is explored by evaluating these codes on both AWGN and flat Rayleigh fading channels and by investigating the puncturing of these codes with novel, efficient puncturing methods proposed for improving coding performance. The deployment of MPCGCs is also investigated for enhancing the performance of WiMAX systems: bit error performances are compared, and the results confirm that the proposed MPCGC-based IEEE 802.16 WiMAX physical-layer system provides better gain than the conventional single-LDPC WiMAX system. The incorporation of quasi-cyclic (QC-)LDPC codes in the MPCGC structure (called QC-MPCGC) is shown to improve the overall BER performance of MPCGCs, with reduced overall decoding complexity and improved flexibility, by using layered belief propagation decoding instead of the sum-product algorithm (SPA). A MIMO-MPCGC structure with both 2x2 and 2x4 MIMO configurations is also developed in this thesis and shown to improve the BER performance over fading channels relative to the conventional LDPC structure.
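
Since the component codes of an MPCGC are chosen at random subject to a single Mean Column Weight parameter, that selection criterion is easy to make concrete. The sketch below (our hypothetical illustration, not the thesis's construction) computes the MCW of a parity-check matrix and draws a random sparse matrix with a prescribed column weight:

```python
# Illustrative sketch of the Mean Column Weight (MCW) selection parameter
# for component LDPC codes: MCW = (total number of ones in H) / (columns).
import random

def mean_column_weight(H):
    """Mean column weight of a binary parity-check matrix H (list of rows)."""
    return sum(sum(row) for row in H) / len(H[0])

def random_parity_check(rows, cols, col_weight, rng=random):
    """Draw a random sparse parity-check matrix with exactly col_weight
    ones per column, so its mean column weight equals col_weight."""
    H = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        for r in rng.sample(range(rows), col_weight):
            H[r][c] = 1
    return H
```

A component code drawn this way is (column-)regular; in general the MCW averages over irregular column weights, which is what makes it a single scalar knob for picking component codes at random.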

    Sociophysics & Sociocybernetics: An Essay on the Natural Roots & Limits of Political Control

    One of the critical problems of sociocybernetics is to determine the necessity, possibility, and desirability of social control by political institutions. This conundrum has been tackled repeatedly in history with various responses, some of which have been tried and failed, while others are still going on locally and temporally. Although the problem of social control is pervasive and continuing, changing circumstances make all solutions parochial and ephemeral at best. On this assumption, the question is how much further this issue can be pursued in a more general or theoretical manner. Given the complexity, extensity, and intensity of contemporary social systems, can some general sociocybernetic principles be found that apply here and now, as well as everywhere and always? It is fortunate that recent scientific discoveries give new insights into old puzzles. The latest advances in General Systems, Complexity, Quantum, and Chaos Theories emphasize the multiplicity of reality and thereby show great promise for various social applications. Combining these theories, this paper applies the Sociophysics paradigm, which is particularly suitable here because it renders explicit the already implicit metaphors and fundamental isometries between the natural and social sciences, thus contributing to their mutual consolidation and convergence. The central hypothesis is that some measure of social control is necessary, possible, and desirable; the practical question then becomes when, where, and how it can be optimized. On the thesis that complex natural and cultural systems are difficult to know and understand, trying to manipulate them is precarious; so any attempt to control them must be conceived and carried out in conformity with nature: humbly, carefully, and responsibly.
Under the circumstances, human interference with the fragile or chaotic systems found in both nature and culture should be based on the principles of minimizing environmental disturbance and maximizing holistic balance. The best policy would then seem to be a postmodern sociocybernetic strategy that approaches a golden mean between the libertarian and totalitarian extremes.

    Understanding Quantum Technologies 2022

    Understanding Quantum Technologies 2022 is a Creative Commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including the quantum annealing and quantum simulation paradigms: history, science, research, implementation, and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience is computer science engineers, developers, and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021.
    Comment: 1132 pages, 920 figures, Letter format