29 research outputs found

    Construction of multiple-rate QC-LDPC codes using hierarchical row-splitting

    No full text
    In this letter, we propose an improved method called hierarchical row-splitting with edge variation for designing multiple-rate quasi-cyclic low-density parity-check (QC-LDPC) codes, which constructs lower-rate codes from a high-rate mother code via row-splitting operations. Consequently, the obtained QC-LDPC codes of various rates have the same blocklength and can share common hardware resources, reducing implementation complexity. Compared with conventional row-combining-based algorithms, a wider range of code rates is supported. Moreover, the code at each individual rate can be optimized separately, making it easier to find a set of multiple-rate QC-LDPC codes with good performance at all rates. Simulation results demonstrate that the obtained codes outperform their counterparts from the digital video broadcasting-second generation terrestrial (DVB-T2) standard.
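
    To make the row-splitting operation concrete, the sketch below splits one row of a toy binary base matrix into two rows, which raises the check count and hence lowers the design rate while the blocklength stays fixed. This is only an illustration of the basic operation with assumed toy parameters, not the letter's hierarchical algorithm or its edge-variation rules; the function name split_row is hypothetical.

        import numpy as np

        def split_row(H, r, first_cols):
            """Split row r of a binary parity-check matrix into two rows.

            Nonzero entries whose column index is in first_cols go to the
            first new row; the remaining nonzeros go to the second. One
            extra check row lowers the design rate (n - m) / n while the
            blocklength n is unchanged, the core of row-splitting designs.
            """
            m, n = H.shape
            top = np.zeros(n, dtype=int)
            bottom = np.zeros(n, dtype=int)
            for c in np.flatnonzero(H[r]):
                (top if c in first_cols else bottom)[c] = 1
            return np.vstack([H[:r], top, bottom, H[r + 1:]])

        # toy mother code: 2 x 8 base matrix, design rate (8 - 2) / 8 = 0.75
        H = np.array([[1, 1, 1, 1, 0, 1, 0, 1],
                      [1, 0, 1, 1, 1, 0, 1, 1]])
        H_low = split_row(H, 0, {0, 1, 2, 3})  # 3 x 8, design rate 0.625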

    Sparse graph-based coding schemes for continuous phase modulations

    Get PDF
    The use of continuous phase modulation (CPM) is attractive when the channel exhibits strong non-linearity and when the spectral support is limited; in particular for the uplink, where the satellite carries one amplifier per carrier, and for downlinks where the terminal equipment operates very close to the saturation region. Numerous studies have addressed this issue, but the proposed solutions use iterative CPM demodulation/decoding concatenated with convolutional or block error-correcting codes; the use of LDPC codes has not yet been introduced. In particular, to our knowledge, no work has been done on the optimization of sparse graph-based codes adapted to the context described here. In this study, we perform the asymptotic analysis and the design of turbo-CPM systems based on the optimization of sparse graph-based codes. An analysis of the corresponding receiver is also carried out.
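
    As a concrete illustration of why CPM suits amplifiers driven at saturation, the sketch below generates a constant-envelope CPM baseband waveform for the full-response rectangular (1REC) frequency pulse; with modulation index h = 0.5 and binary symbols this reduces to MSK. It is a minimal signal-model sketch under assumed parameters, not part of the authors' turbo-CPM design.

        import numpy as np

        def cpm_baseband(symbols, h=0.5, sps=8):
            """Constant-envelope CPM waveform with a 1REC frequency pulse.

            The phase is the running integral of the frequency pulse,
            phi(t) = 2*pi*h * sum_k a_k * q(t - kT), where q ramps linearly
            from 0 to 1/2 over one symbol. Only the phase carries
            information, so |s(t)| = 1 and the waveform is insensitive to
            amplitude distortion near amplifier saturation.
            """
            ramp = np.arange(1, sps + 1) / sps
            phase = np.zeros(len(symbols) * sps)
            acc = 0.0
            for k, a in enumerate(symbols):
                phase[k * sps:(k + 1) * sps] = acc + np.pi * h * a * ramp
                acc += np.pi * h * a      # phase accumulated per symbol
            return np.exp(1j * phase)

        s = cpm_baseband(np.array([1, -1, 1, 1, -1]))  # MSK when h = 0.5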

    Research and developments of distributed video coding

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is suited to applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical realisation of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises improving coding performance but neglects the huge complexity incurred at the decoder, which has a direct influence on the system output.

    The first period of this research aims to optimise the decoder in pixel-domain WZ video coding (PDWZ) while maintaining similar compression performance. More specifically, four issues are addressed: the input block size, the side information generation, the side information refinement process and the feedback channel.

    Transform-domain WZ video coding (TDWZ) performs distinctly better than PDWZ because spatial redundancy is exploited during encoding. However, since there is no motion estimation at the encoder in WZ video coding, temporal correlation is not exploited at the encoder in any current WZ video coding scheme. In the middle period of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and the temporal directions, providing even higher coding performance. In the next step, the performance of transform-domain Distributed Multiview Video Coding (DMVC) is investigated. In particular, three transform-domain DMVC frameworks are studied: transform-domain DMVC using TDWZ based on the 2D DCT, transform-domain DMVC using TDWZ based on the 3D DCT, and transform-domain residual DMVC using TDWZ based on the 3D DCT.

    One important application of the WZ coding principle is error resilience. There have been several attempts to apply WZ error-resilient coding to current video coding standards, e.g. H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises protection of the Region of Interest (ROI) area; efficient bandwidth utilisation is achieved through the joint efforts of WZ coding and sacrificing the quality of unimportant areas.

    In summary, this research contributes several advances in WZ video coding: an efficient PDWZ codec with an optimised decoder; an advanced TDWZ codec based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform-domain DMVC; and an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
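
    The 3D DCT step can be illustrated with a toy transform coder: applying the DCT across the temporal axis as well as the two spatial axes decorrelates a group of frames without any motion estimation at the encoder, which is the property the thesis exploits in its TDWZ codec. The sketch below is a toy under assumed parameters (simple coefficient thresholding stands in for real quantisation and WZ coding); dct3d_compress is a hypothetical name.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct3d_compress(gop, keep=0.05):
            """Toy 3-D DCT coder over a group of pictures shaped (T, H, W).

            A separable DCT over time as well as space concentrates both
            temporal and spatial redundancy into a few large coefficients,
            with no motion search at the encoder. Here we simply zero all
            but the largest `keep` fraction of coefficients and invert.
            """
            coeffs = dctn(gop, norm='ortho')
            thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
            coeffs[np.abs(coeffs) < thresh] = 0.0
            return idctn(coeffs, norm='ortho')

        gop = np.random.rand(8, 64, 64)   # 8 frames of 64 x 64 "video"
        approx = dct3d_compress(gop)      # reconstruction from 5% of coeffs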

    Hypergraph product codes: a bridge to scalable quantum computers

    Get PDF
    A physical machine for storage and manipulation of information, being physical, will always be subject to noise and failure. For this reason, the design of fault-tolerant architectures is of prime importance for building a working quantum computer. Quantum error correction codes offer a possible elegant framework for fault-tolerance when provided with methods to operate qubits without corrupting the information stored therein. This work specialises in hypergraph product (HGP) codes and seeks to lay the groundwork for a quantum computer architecture based on them. The leading approach to fault-tolerant quantum computation is, today, based on the planar code. A planar-code-based quantum computer, however, would require dramatic qubit overhead, and we believe that good low-density parity-check (LDPC) codes are necessary to attain the full potential of quantum computing. The HGP codes, of which the planar code is an instance, are not, strictly speaking, good LDPC codes. Still, they are an efficient alternative. On the one hand, the best HGP codes improve upon the planar code as they can store multiple logical qubits. On the other, they are not considered good because their noise robustness is sub-optimal. Nonetheless, we see the design of an HGP-based quantum computer as a bridge between the currently favoured planar-code design and the gold standard of good LDPC codes. An HGP-based architecture would inform our knowledge of how to design fault-tolerant protocols when a code stores multiple logical qubits, which is, to a large extent, still an open question. Our first original contribution is a decoding algorithm for all families of two-fold HGP codes. Second, we exhibit a constructive method to implement some logical encoded operations, given HGP codes with particular symmetries. Last, we propose the concept of confinement as an essential characteristic for a code family to be robust against syndrome measurement errors. Importantly, we show that both expander and three-dimensional HGP codes have the desired confinement property.
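
    The hypergraph product construction itself is compact enough to state in a few lines: given two classical parity-check matrices, the Kronecker-product arrangement below yields a CSS pair whose stabilisers commute by construction. This is the standard construction shown as a minimal sketch; taking both inputs to be repetition-code checks recovers the planar code mentioned in the abstract.

        import numpy as np

        def hypergraph_product(H1, H2):
            """CSS pair (HX, HZ) from the hypergraph product of H1 and H2.

            HX = [H1 (x) I | I (x) H2^T],  HZ = [I (x) H2 | H1^T (x) I]
            over GF(2). The two Kronecker-product halves make every X check
            commute with every Z check, since
            HX @ HZ^T = 2 * (H1 (x) H2^T) = 0 mod 2.
            """
            m1, n1 = H1.shape
            m2, n2 = H2.shape
            HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                            np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
            HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                            np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
            assert not (HX @ HZ.T % 2).any()  # CSS commutation condition
            return HX, HZ

        # repetition-code checks in both arguments give the 13-qubit
        # distance-3 planar (surface) code
        Hrep = np.array([[1, 1, 0], [0, 1, 1]])
        HX, HZ = hypergraph_product(Hrep, Hrep)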

    Coding and Probabilistic Inference Methods for Data-Dependent Two-Dimensional Channels

    Get PDF
    Recent advances in magnetic recording systems, optical recording devices and flash memory drives necessitate the study of two-dimensional (2-D) coding techniques for reliable storage and retrieval of information. Most channels in such systems introduce errors in response to certain data patterns, and messages containing these patterns are more prone to errors than others. For example, in a single-level-cell flash memory channel, inter-cell interference (ICI) is at its maximum when 101 patterns are programmed over adjacent cells in either the horizontal or the vertical direction. As another example, in two-dimensional magnetic recording channels, 2-D isolated-bits patterns are shown empirically to be the dominant error event, and during the read-back process inter-symbol interference (ISI) and inter-track interference (ITI) arise when these patterns are recorded on the magnetic medium.

    Shannon, in his seminal work "A Mathematical Theory of Communication," presented two techniques for reliable transmission of messages over noisy channels: error correction coding and constrained coding. In the first method, messages are protected via an error correction code (ECC) from random errors that are independent of the input data. The theory of ECCs is well studied, and efficient code construction methods have been developed for simple binary channels, additive white Gaussian noise (AWGN) channels and partial-response channels. Constrained coding, on the other hand, reduces the likelihood of corruption by removing problematic patterns before transmission over data-dependent channels. Prominent examples include the family of binary one-dimensional (1-D) and 2-D (d,k)-run-length-limited (RLL) constraints, which improve resilience to ISI and aid timing recovery and synchronization for bandwidth-limited partial-response channels; here d and k denote the minimum and maximum numbers of admissible zeros between two successive ones in any direction of the array.

    In principle, the ultimate coding approach for such data-dependent channels is to design a set of sufficiently distinct error correction codewords that also satisfy the channel constraints. Designing channel codewords satisfying both ECC and channel constraints is important, as it would achieve the channel capacity. In practice, however, this is difficult, and we rely on sub-optimal methods such as the forward concatenation method (standard concatenation), the reverse concatenation method (modified concatenation), and combinations of these approaches. This dissertation focuses on the reliable transmission of binary messages over data-dependent 2-D communication channels and addresses several challenges in this regard.

    Design of a Two-Dimensional Magnetic Recording (TDMR) Detector and Decoder: TDMR achieves high areal densities by reducing the size of a bit to be comparable to the size of the magnetic grains, resulting in 2-D ISI and very high media noise. It is therefore critical to handle the media noise along with 2-D ISI detection. In this work, we tune the Generalized Belief Propagation (GBP) algorithm to handle the media noise seen in TDMR, and we provide an intuition into the nature of the hard decisions produced by the GBP algorithm.

    Investigation into Harmful Patterns for TDMR Channels: This work investigates the Voronoi-based media model to study harmful patterns over multi-track shingled recording systems. Through realistic quasi-micromagnetic simulation studies, we identify 2-D data patterns that contribute to high media noise. We examine the generic Voronoi model and present an analysis of multi-track detection with constrained coded data, showing that 2-D constraints imposed on input patterns yield an order-of-magnitude improvement in the bit error rate for TDMR systems.

    Understanding of Constraint Gain for TDMR Channels: We study the performance gains of constrained codes in TDMR channels using the notion of constraint gain. We consider Voronoi-based TDMR channels with realistic grain, bit, track and magnetic-head dimensions, and specifically investigate the constraint gain of the 2-D no-isolated-bits constraint. We focus on schemes that employ the GBP algorithm to obtain information rate estimates for TDMR channels.

    Design of Novel Constrained Coding Methods: We present a deliberate bit flipping (DBF) coding scheme for binary 2-D channels in which specific patterns in the channel inputs are the significant cause of errors. The idea is to eliminate the constrained encoder and instead embed the constraint into an error correction codeword, arranged into a 2-D array, by deliberately flipping the bits that violate the constraint. The DBF method relies on the error correction capability of the code in use, which must correct both the deliberate errors and the channel errors; it is therefore crucial to flip the minimum number of bits so as not to overburden the error correction decoder. We devise a constrained combinatorial formulation for minimizing the number of flipped bits for a given set of harmful patterns, and we use the GBP algorithm to find an approximate solution.

    Devising Reduced-Complexity Probabilistic Inference Methods: We propose a reduced-complexity GBP that propagates messages in the Log-Likelihood Ratio (LLR) domain. The key novelties of the proposed LLR-GBP are: (i) reduced fixed-point precision for messages instead of the computationally complex floating-point format; (ii) operations performed in the logarithm domain, eliminating the need for multiplications and divisions; and (iii) the use of message ratios, which leads to simple hard-decision mechanisms.
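
    As a simplified illustration of the deliberate bit flipping idea, the sketch below scans a 2-D array for isolated bits (a bit differing from all four horizontal and vertical neighbours, the harmful pattern named above) and flips them, counting the deliberate errors an ECC decoder would later have to absorb. The dissertation minimises the flip count with a combinatorial formulation solved by GBP; this greedy, interior-only scan is only a toy stand-in, and deliberate_bit_flip is a hypothetical name.

        import numpy as np

        def deliberate_bit_flip(arr):
            """Greedy toy version of deliberate bit flipping (DBF).

            A bit is 'isolated' if it differs from all four horizontal and
            vertical neighbours. Flipping it enforces the no-isolated-bits
            constraint; each flip is a deliberate error the ECC must later
            correct, so the real scheme minimises the flip count (this
            greedy sweep does not).
            """
            a = arr.copy()
            flips = 0
            for i in range(1, a.shape[0] - 1):
                for j in range(1, a.shape[1] - 1):
                    nbrs = (a[i - 1, j], a[i + 1, j], a[i, j - 1], a[i, j + 1])
                    if all(n != a[i, j] for n in nbrs):
                        a[i, j] ^= 1
                        flips += 1
            return a, flips

        codeword = np.random.randint(0, 2, size=(16, 16))
        written, n_flips = deliberate_bit_flip(codeword)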

    Iterative message-passing-based algorithms to detect spreading codes

    Get PDF
    This thesis tackles the problem of rapid acquisition of spreading codes in Direct-Sequence Spread-Spectrum (DS/SS) communication systems. In particular, a new algorithm is proposed that exploits experience from the iterative decoding of modern codes (LDPC and turbo codes) to detect these sequences. The new method is a Message-Passing-based algorithm: instead of correlating the received signal with local replicas of the transmitted linear feedback shift register (LFSR) sequence, an iterative Message-Passing algorithm is run on a loopy graph. These graphical models are designed by manipulating the generating-polynomial structure of the considered LFSR sequence. The contribution is thus a detailed analysis of the Message-Passing-based detection technique for acquiring m-sequences and Gold codes. In more detail, a unified treatment for designing and implementing a specific set of graphical models for these codes is reported, together with a theoretical study of the acquisition-time performance and a comparison with the standard algorithms (full-parallel, simple-serial, and hybrid searches). A preliminary architectural design is also provided. Finally, the analysis is enriched by comparing the new technique with the standard algorithms in terms of computational complexity and (missed/wrong/correct) acquisition probabilities derived from simulations.
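
    To show where the graphical model comes from, the sketch below generates an m-sequence with a Fibonacci LFSR: every output bit satisfies a sparse parity check inherited from the generating polynomial, and it is these checks that define the loopy graph on which the Message-Passing acquisition algorithm runs. This is a minimal sketch with an assumed toy polynomial; lfsr_msequence is a hypothetical name.

        def lfsr_msequence(lags, seed, length):
            """Fibonacci LFSR: s[n] = XOR of s[n - d] for each lag d.

            lags=(2, 3) with any nonzero 3-bit seed realises the primitive
            polynomial x^3 + x + 1 and yields an m-sequence of period 7.
            Each bit obeys the sparse check s[n] ^ s[n-2] ^ s[n-3] = 0;
            stacking these checks over the received window gives the loopy
            factor graph that replaces brute-force correlation with
            local replicas.
            """
            s = list(seed)
            while len(s) < length:
                bit = 0
                for d in lags:
                    bit ^= s[-d]          # s[-d] is s[n - d] before append
                s.append(bit)
            return s

        seq = lfsr_msequence((2, 3), [1, 0, 0], 14)  # 1 0 0 1 0 1 1, repeated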

    Pertanika Journal of Science & Technology

    Get PDF