
    Absorbing Set Analysis and Design of LDPC Codes from Transversal Designs over the AWGN Channel

    In this paper we construct low-density parity-check (LDPC) codes with low error floors over the additive white Gaussian noise (AWGN) channel from transversal designs. The constructed codes are based on transversal designs that arise from sets of mutually orthogonal Latin squares (MOLS) with cyclic structure. To lower the error floors, our approach is twofold. First, we give an exhaustive classification of the so-called absorbing sets that may occur in the factor graphs of the given codes; these purely combinatorial substructures are known to be the main cause of decoding errors in the error-floor region over the AWGN channel under standard sum-product algorithm (SPA) decoding. Second, based on this classification, we exploit the specific structure of the presented codes to eliminate the most harmful absorbing sets and derive powerful constraints on the choice of code parameters that yield codes with optimized error-floor performance.
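
    The design-based construction lends itself to a compact illustration. Below is a minimal Python sketch (my own function names and parameterization, not the paper's exact construction) that builds cyclic MOLS of prime order q via L_k(i, j) = (k*i + j) mod q and assembles the point-block incidence matrix of the resulting transversal design TD(s+2, q) as an LDPC parity-check matrix.

```python
import numpy as np

def cyclic_mols(q, s):
    """Return s mutually orthogonal Latin squares of prime order q,
    built cyclically as L_k(i, j) = (k*i + j) mod q for k = 1..s."""
    assert 1 <= s < q, "a prime q admits at most q-1 cyclic MOLS"
    return [[[(k * i + j) % q for j in range(q)] for i in range(q)]
            for k in range(1, s + 1)]

def transversal_design_H(q, s):
    """Point-block incidence matrix of the transversal design TD(s+2, q):
    rows (check nodes) are the (s+2)*q points, columns (variable nodes)
    are the q^2 blocks, one per cell (i, j). The resulting LDPC code has
    column weight s+2 and row weight q."""
    squares = cyclic_mols(q, s)
    H = np.zeros(((s + 2) * q, q * q), dtype=np.uint8)
    for i in range(q):                       # row coordinate of the cell
        for j in range(q):                   # column coordinate of the cell
            b = i * q + j                    # block (column) index
            H[i, b] = 1                      # point group 0: rows
            H[q + j, b] = 1                  # point group 1: columns
            for k, L in enumerate(squares):  # groups 2..s+1: symbols
                H[(2 + k) * q + L[i][j], b] = 1
    return H

H = transversal_design_H(q=7, s=2)           # 28 x 49 parity-check matrix
assert set(H.sum(axis=0)) == {4} and set(H.sum(axis=1)) == {7}
```

    Because any two blocks of a transversal design share at most one point, two columns of H overlap in at most one row, so the factor graph is automatically free of 4-cycles; this is one reason design-based constructions start from a favourable position in the error-floor analysis.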

    Novel irregular LDPC codes and their application to iterative detection of MIMO systems

    Low-density parity-check (LDPC) codes are among the best-performing error correction codes currently known. For the better-performing irregular LDPC codes, degree distributions have been found that produce codes with optimum performance in the infinite block length case, but significant performance degradation is seen at more practical short block lengths. A major focus in the search for practical LDPC codes is therefore to find a construction method which minimises this loss in performance as codes approach short lengths. In this work, a novel irregular LDPC code is proposed which makes use of the SPA decoder at the design stage in order to make the best choice of edge placement with respect to iterative decoding performance in the presence of noise. This method, a modification of the progressive edge growth (PEG) algorithm for edge placement in parity-check matrix (PCM) construction, is named the DOPEG algorithm. The DOPEG design algorithm is highly flexible in that the decoder optimisation stage may be applied to any modification or extension of the original PEG algorithm with relative ease. To illustrate this fact, the decoder optimisation step was applied to the IPEG modification of the PEG algorithm, which produces codes with comparatively excellent performance; this extension of the DOPEG is called the DOIPEG. A spatially multiplexed, iteratively detected and decoded coded multiple-input multiple-output (MIMO) system is then considered. The MIMO system is developed through theory and a number of results are presented which illustrate its performance characteristics. The novel DOPEG code is tested for the MIMO system under consideration and a significant performance gain is achieved.
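
    For context, the baseline that DOPEG modifies can be stated compactly. The sketch below (function names are mine) implements the core PEG edge-placement rule: each new edge of a variable node goes to a check node that is unreachable in a breadth-first search of the current graph or, failing that, one at maximum BFS depth, breaking ties by lowest current check degree, which greedily maximizes the local girth created by each edge.

```python
import math

def check_depths(v, var_nbrs, check_nbrs, n_checks):
    """BFS from variable node v over the current bipartite graph;
    returns the depth of every check node (math.inf if unreachable)."""
    depth = [math.inf] * n_checks
    seen_v, frontier, d = {v}, set(var_nbrs[v]), 0
    while frontier:
        for c in frontier:
            depth[c] = d
        nxt_v = {u for c in frontier for u in check_nbrs[c]} - seen_v
        seen_v |= nxt_v
        frontier = {c for u in nxt_v for c in var_nbrs[u]
                    if depth[c] == math.inf}
        d += 1
    return depth

def peg_construct(n_checks, var_degrees):
    """Core PEG rule: place each new edge of variable node v on a check
    node unreachable from v (or, failing that, at maximum BFS depth),
    breaking ties by lowest current check degree."""
    var_nbrs = [set() for _ in var_degrees]
    check_nbrs = [set() for _ in range(n_checks)]
    for v, deg in enumerate(var_degrees):
        for _ in range(deg):
            depth = check_depths(v, var_nbrs, check_nbrs, n_checks)
            d_max = max(depth)               # inf while some check unseen
            cand = [c for c in range(n_checks) if depth[c] == d_max]
            best = min(cand, key=lambda c: len(check_nbrs[c]))
            var_nbrs[v].add(best)
            check_nbrs[best].add(v)
    return var_nbrs                          # variable -> set of checks

# Usage: a small (3, 6)-regular toy code, 12 checks and 24 variable nodes.
graph = peg_construct(12, [3] * 24)
```

    DOPEG's contribution sits on top of this loop: among candidate checks that the PEG criterion leaves tied, the SPA decoder itself is consulted to pick the placement with the best iterative decoding behaviour in noise.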

    An Efficient Maximum-Likelihood Decoding of LDPC Codes Over the Binary Erasure Channel


    Characterization and Efficient Search of Non-Elementary Trapping Sets of LDPC Codes with Applications to Stopping Sets

    In this paper, we propose a characterization of non-elementary trapping sets (NETSs) of low-density parity-check (LDPC) codes. The characterization is based on viewing a NETS as a hierarchy of embedded graphs starting from an elementary trapping set (ETS). The characterization corresponds to an efficient search algorithm that, under certain conditions, is exhaustive. As an application of the proposed characterization/search, we obtain lower and upper bounds on the stopping distance s_min of LDPC codes. We examine a large number of regular and irregular LDPC codes, and demonstrate the efficiency and versatility of our technique in finding lower and upper bounds on, and in many cases the exact value of, s_min. Finding s_min, or establishing search-based lower or upper bounds, is beyond the reach of any existing algorithm for many of the examined codes.
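
    As a point of reference for the stopping-distance bounds, the defining property of a stopping set is easy to state in code. The following sketch (function names are mine) checks the property and computes s_min exhaustively; the brute-force sweep is exponential and feasible only for toy codes, which is exactly why search techniques like the one above matter.

```python
from itertools import combinations
import numpy as np

def is_stopping_set(H, S):
    """S (column indices of H) is a stopping set iff no check (row of H)
    has exactly one neighbor inside S; over the BEC, iterative decoding
    stalls exactly when the erased positions contain such a set."""
    counts = H[:, sorted(S)].sum(axis=1)
    return len(S) > 0 and not np.any(counts == 1)

def stopping_distance_bruteforce(H):
    """Exact s_min by exhaustive sweep over all column subsets,
    smallest first -- only feasible for very small codes."""
    n = H.shape[1]
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return size
    return None                   # no stopping set (degenerate H)
```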

    Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes

    The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message passing algorithms. From then on, it has become a focal point of research to develop powerful LDPC and turbo-like codes. Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs. Such codes are typically characterized by high code rates already at small to moderate code lengths and good code properties such as the avoidance of harmful 4-cycles in the code's factor graph. Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity as opposed to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication.

    This thesis greatly contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes as a subclass of turbo-like codes by presenting new combinatorial construction techniques and algebraic methods for an improved code design. More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms. A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, which form another subclass of combinatorial designs. When properly configured, these codes exhibit excellent decoding performance under iterative decoding, in particular with very low error floors.

    The approach for lowering these error floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to calculate conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of those code instances with the best error-floor performances. The resulting codes can additionally be encoded with low complexity and thus are ideally suited for practical high-speed applications.

    Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we generate new theoretical insights about the decoding failures of such codes under iterative decoding. These examinations finally help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
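
    As an illustration of the combinatorial entities the thesis classifies, the sketch below checks a candidate variable-node set against the usual (a, b) absorbing-set definition. This is a generic checker (function name is mine), not the thesis's structure-specific classification.

```python
import numpy as np

def absorbing_set_profile(H, D):
    """Check a candidate set D of variable nodes against the standard
    (a, b) absorbing-set definition: with O(D) the checks seeing an odd
    number of nodes of D, D is absorbing iff every node of D has strictly
    more neighbors among the even-degree checks of D than among O(D).
    Returns (a, b, is_absorbing)."""
    cols = sorted(D)
    deg_in_D = H[:, cols].sum(axis=1)          # per check: neighbors in D
    odd = deg_in_D % 2 == 1                    # the "unsatisfied" checks
    even = (deg_in_D % 2 == 0) & (deg_in_D > 0)
    for v in cols:
        nbrs = H[:, v].astype(bool)
        if (nbrs & odd).sum() >= (nbrs & even).sum():
            return len(cols), int(odd.sum()), False
    return len(cols), int(odd.sum()), True
```

    Intuitively, the majority condition is what lets such a set trap the SPA decoder on the AWGN channel: each node in D hears more reinforcement from its (wrongly) satisfied checks than correction from the b unsatisfied ones.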

    Optimizing the Decoding Complexity of PEG-Based Methods with an Improved Hybrid Iterative/Gaussian Elimination Decoding Algorithm

    This paper focuses on optimizing the decoding complexity of the progressive-edge-growth-based (PEG-based) method for the extended grouping of radio frequency identification (RFID) tags using a hybrid iterative/Gaussian elimination decoding algorithm. To further reduce the decoding time, the hybrid decoding is improved with an early stopping criterion that avoids unnecessary iterations of iterative decoding on undecodable blocks. Various simulations have been carried out to analyse and assess the performance of the PEG-based method under the improved hybrid decoding, in terms of both missing-tag recovery capability and decoding complexity. Simulation results are presented, demonstrating that the improved hybrid decoding achieves the optimal missing-tag recovery capability of full Gaussian elimination decoding at a lower complexity, as some of the missing tag identifiers are recovered iteratively.
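
    To make the decoding pipeline concrete, here is a hedged sketch of a hybrid iterative/Gaussian-elimination erasure decoder with the early-stopping idea: cheap peeling runs until one full sweep recovers nothing, and only the residual erased positions enter GF(2) elimination. The function name and interface are assumptions, not the paper's implementation.

```python
import numpy as np

def hybrid_erasure_decode(H, y):
    """Hybrid decoding sketch: iterative peeling first, with an early
    stop as soon as a sweep makes no progress, then GF(2) Gaussian
    elimination restricted to the residual erased columns.
    y holds 0/1 for known symbols and -1 for erasures."""
    y = y.copy()
    progress = True
    while progress:                            # --- iterative peeling ---
        progress = False
        for row in H:
            erased = np.where((row == 1) & (y == -1))[0]
            if len(erased) == 1:               # check pins down one symbol
                known = (row == 1) & (y != -1)
                y[erased[0]] = y[known].sum() % 2
                progress = True
    E = np.where(y == -1)[0]                   # residual erasures
    if len(E) == 0:
        return y                               # peeling alone sufficed
    K = np.where(y != -1)[0]                   # --- GF(2) elimination ---
    A = (H[:, E] % 2).astype(np.uint8)
    b = (H[:, K] @ y[K] % 2).astype(np.uint8)  # syndrome of known part
    r, pivots = 0, []
    for c in range(len(E)):
        hits = np.where(A[r:, c] == 1)[0]
        if len(hits) == 0:
            continue
        p = hits[0] + r
        A[[r, p]], b[[r, p]] = A[[p, r]], b[[p, r]]
        for q in range(A.shape[0]):
            if q != r and A[q, c] == 1:
                A[q] ^= A[r]
                b[q] ^= b[r]
        pivots.append((r, c))
        r += 1
    if r < len(E):                             # rank deficit: ML failure
        return None
    for rr, c in pivots:
        y[E[c]] = int(b[rr])
    return y
```

    The division of labour is the point: peeling costs a few passes over the sparse rows, while elimination is cubic in the worst case but optimal, so running it only on what peeling leaves behind recovers full Gaussian-elimination performance at a fraction of the average cost.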

    Video over DSL with LDGM Codes for Interactive Applications

    Digital Subscriber Line (DSL) network access is subject to error bursts which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity-check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
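
    Two ingredients of the proposal are easy to sketch: systematic LDGM encoding, whose cost is linear in the number of ones of the sparse parity part, and burst spreading by rearrangement. Note that the paper rearranges the columns of the parity-check matrix itself; the block interleaver below is my own illustrative stand-in that captures the same burst-spreading intuition on the codeword.

```python
import numpy as np

def ldgm_encode(msg, P):
    """Systematic LDGM encoding: the codeword is [msg | msg @ P mod 2].
    With a sparse parity part P, the cost is linear in the number of 1s
    of P, which keeps encoding latency low for interactive video."""
    return np.concatenate([msg, msg @ P % 2])

def burst_interleave(codeword, depth):
    """Illustrative block interleaver: write the codeword row-wise into
    a (depth x width) array and transmit column-wise, so a channel burst
    of up to `depth` consecutive losses hits codeword positions that are
    `width` apart."""
    assert len(codeword) % depth == 0
    width = len(codeword) // depth
    return codeword.reshape(depth, width).T.ravel()

rng = np.random.default_rng(0)
P = (rng.random((16, 8)) < 0.25).astype(np.uint8)  # sparse 16x8 parity part
cw = ldgm_encode(rng.integers(0, 2, 16, dtype=np.uint8), P)
tx = burst_interleave(cw, depth=4)                 # burst-spread codeword
```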