13 research outputs found

    A 2.0 Gb/s Throughput Decoder for QC-LDPC Convolutional Codes

    This paper proposes a decoder architecture for low-density parity-check convolutional codes (LDPCCCs). Specifically, the LDPCCC is derived from a quasi-cyclic (QC) LDPC block code. By making use of the quasi-cyclic structure, the proposed LDPCCC decoder adopts dynamic message storage in memory and uses a simple address controller. The decoder efficiently combines the memories in the pipelining processors into a large memory block so as to take advantage of the data width of the embedded memory in a modern field-programmable gate array (FPGA). A rate-5/6 QC-LDPCCC has been implemented on an Altera Stratix FPGA. It achieves up to 2.0 Gb/s throughput at a clock frequency of 100 MHz. Moreover, the decoder displays an excellent error performance of lower than 10^{-13} at a bit-energy-to-noise-power-spectral-density ratio (E_b/N_0) of 3.55 dB. Comment: accepted to IEEE Transactions on Circuits and Systems.
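    To make the quasi-cyclic addressing idea concrete, here is a minimal Python sketch (illustrative names and sizes, not the paper's hardware design) of how a shifted-identity sub-block lets a decoder fetch messages from a single memory bank with simple circular address arithmetic instead of explicit routing:

        # Sketch: circular address generation for a Z x Z shifted-identity
        # sub-block of a QC-LDPC code (hypothetical values, not the paper's RTL).
        Z = 8          # sub-block (expansion) size, illustrative
        shift = 3      # cyclic shift of the identity sub-block

        def message_address(row_in_subblock, shift, Z):
            """Memory address of the message needed by check-node row
            'row_in_subblock' when the sub-block is cyclically shifted by 'shift'."""
            return (row_in_subblock + shift) % Z

        for r in range(Z):
            print(f"check row {r} reads address {message_address(r, shift, Z)}")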

    Configurable LDPC Decoder Architecture for Regular and Irregular Codes

    Low-Density Parity-Check (LDPC) codes are among the best error-correcting codes and enable future generations of wireless devices to achieve higher data rates with excellent quality of service. This paper presents two novel flexible decoder architectures. The first supports (3, 6)-regular codes of rate 1/2 and can be used for different block lengths. The second decoder is more general and supports both regular and irregular LDPC codes with twelve combinations of code lengths (648, 1296, and 1944 bits) and code rates (1/2, 2/3, 3/4, and 5/6) based on the IEEE 802.11n standard. All codes correspond to a block-structured parity-check matrix in which the sub-blocks are either a shifted identity matrix or a zero matrix. Prototype architectures for both LDPC decoders have been implemented and tested on a Xilinx field-programmable gate array. Funding: Nokia, National Science Foundation.
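    As an illustration of the block-structured parity-check matrices described above, the following Python sketch expands a small base matrix of shift values into a binary parity-check matrix, where -1 denotes a zero sub-block and a non-negative entry denotes a cyclically shifted identity (the base matrix here is made up, not taken from the 802.11n standard):

        import numpy as np

        def expand_base_matrix(base, Z):
            """Expand a base matrix of shift values into a binary parity-check matrix.
            Entry -1 -> Z x Z zero block; entry s >= 0 -> identity shifted right by s."""
            rows, cols = len(base), len(base[0])
            H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(rows):
                for j in range(cols):
                    if base[i][j] >= 0:
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, base[i][j], axis=1)
            return H

        base = [[0, 2, -1, 1],      # illustrative 2 x 4 base matrix
                [-1, 1, 3, 0]]
        H = expand_base_matrix(base, Z=4)
        print(H.shape)              # (8, 16)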

    Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes

    The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message passing algorithms. From then on, it has become a focal point of research to develop powerful LDPC and turbo-like codes. Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs. Such codes are typically characterized by high code rates already at small to moderate code lengths and good code properties such as the avoidance of harmful 4-cycles in the code's factor graph. Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity as opposed to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication. This thesis greatly contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes as a subclass of turbo-like codes by presenting new combinatorial construction techniques and algebraic methods for an improved code design. More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms. A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, which form another subclass of combinatorial designs. By a proper configuration of these codes, they reveal an excellent decoding performance under iterative decoding, in particular, with very low error-floors. The approach for lowering these error-floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to calculate conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of those code instances with the best error-floor performances. The resulting codes can additionally be encoded with low complexity and thus are ideally suited for practical high-speed applications. Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we generate new theoretical insights about the decoding failures of such codes under iterative decoding. These examinations finally help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
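    Since the avoidance of 4-cycles is a recurring design criterion above, it may help to note that a 4-cycle exists exactly when two columns of the parity-check matrix share a 1 in two or more rows; a minimal Python check (illustrative only, not the thesis's construction machinery) is:

        import numpy as np
        from itertools import combinations

        def has_4_cycle(H):
            """True if the Tanner graph of binary parity-check matrix H has a 4-cycle,
            i.e. some pair of columns overlaps in at least two rows."""
            H = np.asarray(H, dtype=int)
            for c1, c2 in combinations(range(H.shape[1]), 2):
                if H[:, c1] @ H[:, c2] >= 2:
                    return True
            return False

        H_bad = [[1, 1, 0],      # columns 0 and 1 overlap in rows 0 and 1 -> 4-cycle
                 [1, 1, 1],
                 [0, 0, 1]]
        print(has_4_cycle(H_bad))   # True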

    Channel Detection and Decoding With Deep Learning

    In this thesis, we investigate the design of pragmatic data detectors and channel decoders with the assistance of deep learning. We focus on three emerging and fundamental research problems: the design of message passing algorithms for data detection in faster-than-Nyquist (FTN) signalling, soft-decision decoding algorithms for high-density parity-check codes, and user identification for massive machine-type communications (mMTC). These wireless communication research problems are addressed by the employment of deep learning, and an outline of the main contributions is given below. In the first part, we study a deep learning-assisted sum-product detection algorithm for FTN signalling. The proposed data detection algorithm works on a modified factor graph which concatenates a neural network function node to the variable nodes of the conventional FTN factor graph to compensate for any detrimental effects that degrade the detection performance. By investigating the maximum-likelihood bit-error rate performance of a finite-length coded FTN system, we show that the error performance of the proposed algorithm approaches the maximum a posteriori performance, which might not be approachable by employing the sum-product algorithm on the conventional FTN factor graph. After investigating the deep learning-assisted message passing algorithm for data detection, we move to the design of an efficient channel decoder. Specifically, we propose a node-classified redundant decoding algorithm based on the received sequence's channel reliability for Bose-Chaudhuri-Hocquenghem (BCH) codes. Two preprocessing steps are proposed prior to decoding to mitigate unreliable information propagation and to improve the decoding performance. On top of the preprocessing, we propose a list decoding algorithm to augment the decoder's performance. Moreover, we show that the node-classified redundant decoding algorithm can be transformed into a neural network framework, where multiplicative tuneable weights are attached to the decoding messages to optimise the decoding performance. We show that the node-classified redundant decoding algorithm provides a performance gain compared to the random redundant decoding algorithm. Additional decoding performance gain can be obtained by both the list decoding method and the neural network "learned" node-classified redundant decoding algorithm. Finally, we consider one of the practical services provided by fifth-generation (5G) wireless communication networks, mMTC. Two separate system models for mMTC are studied. The first model assumes that the devices in mMTC are equipped with low-resolution digital-to-analog converters. The second model assumes that the devices' activities are correlated. In the first system model, two rounds of signal recovery are performed. A neural network is employed to identify a suspicious device, namely the device most likely to be a false alarm during the first round of signal recovery. The suspicious device is forced to be inactive in the second round of signal recovery. The proposed scheme can effectively combat the interference caused by the suspicious device and thus improve the user identification performance. In the second system model, two deep learning-assisted algorithms are proposed to exploit the user activity correlation to facilitate channel estimation and user identification. We propose a deep learning-modified orthogonal approximate message passing algorithm to exploit the correlation structure among devices. In addition, we propose a neural network framework dedicated to user identification. More specifically, the neural network aims to minimise the missed detection probability under a pre-determined false alarm probability. The proposed algorithms substantially reduce the mean squared error between the estimate and the unknown sequence, and largely improve the trade-off between the missed detection probability and the false alarm probability compared to the conventional orthogonal approximate message passing algorithm. All three parts of the research demonstrate that deep learning is a powerful tool for the physical-layer design of wireless communications.
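    As a rough illustration of attaching multiplicative tuneable weights to decoding messages (a NumPy sketch with hypothetical parameter values, not the thesis code), a single min-sum check-node update can be scaled by a weight that a neural decoder would learn per edge or per iteration:

        import numpy as np

        def weighted_min_sum_check_update(v_msgs, weight=0.8):
            """One check-node update of weighted min-sum decoding.
            v_msgs: incoming variable-to-check LLRs; weight: multiplicative
            tuneable factor (learned by backpropagation in a neural decoder)."""
            v_msgs = np.asarray(v_msgs, dtype=float)
            out = np.empty_like(v_msgs)
            for i in range(len(v_msgs)):
                others = np.delete(v_msgs, i)
                out[i] = weight * np.prod(np.sign(others)) * np.min(np.abs(others))
            return out

        print(weighted_min_sum_check_update([1.5, -0.7, 2.3, 0.4]))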

    Synchronization in digital communication systems: performance bounds and practical algorithms

    Communication channels often carry signals from different transmitters. To avoid interference, the available frequency spectrum is divided into non-overlapping frequency bands (bandpass channels) and each transmitter is assigned a different bandpass channel. The transmission of a signal over a bandpass channel requires a shift of its frequency content to a frequency range that is compatible with the designated frequency band (modulation). At the receiver, the modulated signal is demodulated (frequency-shifted back to the original frequency band) in order to recover the original signal. The modulation/demodulation process requires a locally generated sinusoidal signal at both the transmitter and the receiver. To enable reliable information transfer, it is imperative that these two sinusoids be accurately synchronized. Recently, several powerful channel codes have been developed which enable reliable communication at a very low signal-to-noise ratio (SNR). A by-product of these developments is that synchronization must now be performed at an SNR that is lower than ever before. Of course, this imposes stringent requirements on the synchronizer design. This doctoral thesis investigates to what extent (performance bounds) and in what way (practical algorithms) the structure that the channel code enforces upon the transmitted signal can be exploited to improve the synchronization accuracy at low SNR.
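    A small numerical illustration (a Python sketch with made-up offset values) of why the two sinusoids must be synchronized: in the complex-baseband view, a residual carrier frequency offset and phase offset rotate the received symbols, and the rotation grows over time, which degrades detection at low SNR.

        import numpy as np

        fs = 1.0e4                       # symbol rate in Hz (illustrative)
        df, phi = 20.0, np.deg2rad(30)   # residual frequency and phase offsets
        n = np.arange(5)                 # first few symbol instants
        tx = np.ones(5, dtype=complex)   # transmitted symbols (all 1+0j)
        rx = tx * np.exp(1j * (2 * np.pi * df * n / fs + phi))   # rotated by the offsets

        for k, r in enumerate(rx):
            print(f"symbol {k}: phase error {np.degrees(np.angle(r)):.2f} degrees")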

    Codes on Graphs and More

    Modern communication systems strive to achieve reliable and efficient information transmission and storage with affordable complexity. Hence, efficient low-complexity channel codes providing low probabilities of erroneous reception are needed. Interpreting codes as graphs and graphs as codes opens new perspectives for constructing such channel codes. Low-density parity-check (LDPC) codes are one of the most recent examples of codes defined on graphs, providing a better bit error probability than other block codes, given the same decoding complexity. After an introduction to coding theory, different graphical representations for channel codes are reviewed. Based on ideas from graph theory, new algorithms are introduced to iteratively search for LDPC block codes with large girth and to determine their minimum distance. In particular, new LDPC block codes of different rates and with girth up to 24 are presented. Woven convolutional codes are introduced as a generalization of graph-based codes, and an asymptotic bound on their free distance, namely the Costello lower bound, is proven. Moreover, promising examples of woven convolutional codes are given, including a rate 5/20 code with overall constraint length 67 and free distance 120. The remaining part of this dissertation focuses on basic properties of convolutional codes. First, a recurrent equation to determine a closed-form expression of the exact decoding bit error probability for convolutional codes is presented. The obtained closed-form expression is evaluated for various realizations of encoders, including rate 1/2 and 2/3 encoders with as many as 16 states. Moreover, MacWilliams-type identities are revisited and a recursion for sequences of spectra of truncated as well as tailbitten convolutional codes and their duals is derived. Finally, the dissertation is concluded with exhaustive searches for convolutional codes of various rates with either optimum free distance or optimum distance profile, extending previously published results.
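    The girth searches mentioned above amount to finding the shortest cycle in a code's Tanner graph; a compact (and deliberately unoptimised) way to compute the girth is a breadth-first search from every vertex, as in the following Python sketch (illustrative, not the dissertation's algorithm):

        from collections import deque

        def girth(adj):
            """Shortest cycle length in an undirected graph given as an adjacency
            map {vertex: set of neighbours}; returns None if the graph is acyclic."""
            best = None
            for root in adj:
                dist, parent, q = {root: 0}, {root: None}, deque([root])
                while q:
                    u = q.popleft()
                    for w in adj[u]:
                        if w not in dist:
                            dist[w], parent[w] = dist[u] + 1, u
                            q.append(w)
                        elif w != parent[u]:          # non-tree edge closes a cycle
                            cycle = dist[u] + dist[w] + 1
                            best = cycle if best is None else min(best, cycle)
            return best

        # Tanner graph of a tiny parity-check matrix containing a 4-cycle
        adj = {"c0": {"v0", "v1"}, "c1": {"v0", "v1"},
               "v0": {"c0", "c1"}, "v1": {"c0", "c1"}}
        print(girth(adj))   # 4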

    Compute-and-Forward Relay Networks with Asynchronous, Mobile, and Delay-Sensitive Users

    We consider a wireless network consisting of multiple source nodes, a set of relays, and a destination node. Suppose the sources transmit their messages simultaneously to the relays and the destination aims to decode all the messages. At the physical layer, a conventional approach would be for each relay to decode the individual messages one at a time while treating the rest of the messages as interference. Compute-and-forward is a novel strategy which attempts to turn the situation around by treating the interference as a constructive phenomenon. In compute-and-forward, each relay attempts to directly compute a combination of the transmitted messages and then forwards it to the destination. Upon receiving the combinations of messages from the relays, the destination can recover all the messages by solving the received equations. When identical lattice codes are employed at the sources, error correction on an integer combination of messages becomes viable by exploiting the algebraic structure of lattice codes. Therefore, compute-and-forward with lattice codes enables the relay to manage interference and perform error correction concurrently. It has been shown that compute-and-forward exhibits a substantial improvement in achievable rate compared with other state-of-the-art schemes in the medium-to-high signal-to-noise ratio regime. Despite several results that show the excellent performance of compute-and-forward, there are still important challenges to overcome before we can utilize compute-and-forward in practice. Some important challenges are the assumptions of "perfect timing synchronization" and "quasi-static fading", since these assumptions rarely hold in realistic wireless channels. So far, there are no conclusive answers to whether compute-and-forward can still provide substantial gains when these assumptions are removed. When lattice codewords are misaligned and mixed up, decoding an integer combination of messages is not straightforward, since the linearity of lattice codes is generally not invariant to time shifts. When the channel exhibits time selectivity, it poses challenges to compute-and-forward, since the linearity of lattice codes does not suit the time-varying nature of the channel. Another challenge comes from the emerging technologies for future 5G communication, e.g., autonomous driving and virtual reality, where low-latency communication with high reliability is necessary. In this regard, powerful short channel codes with reasonable encoding/decoding complexity are indispensable. Although there are fruitful results on designing short channel codes for point-to-point communication, studies on short code design specifically for compute-and-forward are rarely found. The objective of this dissertation is threefold. First, we study compute-and-forward with timing-asynchronous users. Second, we consider the problem of compute-and-forward over block-fading channels. Finally, the problem of compute-and-forward for low-latency communication is studied. Throughout the dissertation, the research methods and proposed remedies center around the design of lattice codes in order to facilitate the use of compute-and-forward in the presence of these challenges.
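    To make the "compute an integer combination" idea concrete, the toy Python sketch below (purely illustrative; actual compute-and-forward uses nested lattice codes over noisy channels) shows that when two sources use the same linear code over a finite field, a mod-q integer combination of their codewords is itself the codeword of the combined message, so a relay can decode the combination directly and the destination can solve the resulting equations:

        import numpy as np

        q = 5                                    # illustrative prime field size
        G = np.array([[1, 0, 2],                 # generator matrix of a small code over GF(q)
                      [0, 1, 3]])

        def encode(msg):
            return (np.array(msg) @ G) % q

        m1, m2 = [1, 4], [3, 2]                  # the two sources' messages
        c1, c2 = encode(m1), encode(m2)

        a1, a2 = 2, 3                            # integer coefficients chosen by the relay
        combo = (a1 * c1 + a2 * c2) % q          # combination of the codewords
        direct = encode((a1 * np.array(m1) + a2 * np.array(m2)) % q)
        print(np.array_equal(combo, direct))     # True: the combination is again a codeword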

    Capacity-based parameter optimization of bandwidth-constrained CPM

    Continuous phase modulation (CPM) is an attractive modulation choice for bandwidth-limited systems due to its small side lobes, fast spectral decay, and ability to be detected noncoherently. Furthermore, the constant-envelope property of CPM permits highly power-efficient amplification. The design of bit-interleaved coded continuous phase modulation is characterized by the code rate, modulation order, modulation index, and pulse shape. This dissertation outlines a methodology for determining the optimal values of these parameters under bandwidth and receiver-complexity constraints. The cost function used to drive the optimization is the information-theoretic minimum ratio of energy per bit to noise spectral density, found by evaluating the constrained channel capacity. The capacity can be reliably estimated using Monte Carlo integration. A search for optimal parameters is conducted over a range of coded CPM parameters, bandwidth efficiencies, and channels. Results are presented for a system employing a trellis-based coherent detector. To constrain complexity and allow any modulation index to be considered, a soft-output differential phase detector has also been developed. Building upon the capacity results, extrinsic information transfer (EXIT) charts are used to analyze a system that iterates between demodulation and decoding. Convergence thresholds are determined for the iterative system for different outer convolutional codes, alphabet sizes, modulation indices, and constellation mappings. These are used to identify the code and modulation parameters with the best energy efficiency at different spectral efficiencies for the AWGN channel. Finally, bit error rate curves are presented to corroborate the capacity and EXIT chart designs.
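    As a rough illustration of capacity estimation by Monte Carlo integration (a Python sketch using plain BPSK over AWGN as a stand-in for coded CPM, purely to show the technique), the mutual information of an equiprobable input can be estimated by averaging a log-likelihood-ratio term over simulated noise samples:

        import numpy as np

        def bpsk_awgn_capacity_mc(ebn0_db, rate=0.5, n_samples=200_000, seed=0):
            """Monte Carlo estimate of I(X;Y) in bits/symbol for equiprobable BPSK
            over AWGN (illustrative stand-in for the constrained CPM capacity)."""
            rng = np.random.default_rng(seed)
            ebn0 = 10 ** (ebn0_db / 10)
            sigma2 = 1 / (2 * rate * ebn0)       # noise variance for unit symbol energy
            x = rng.choice([-1.0, 1.0], size=n_samples)
            y = x + rng.normal(scale=np.sqrt(sigma2), size=n_samples)
            return 1.0 - np.mean(np.log2(1.0 + np.exp(-2.0 * x * y / sigma2)))

        for snr_db in (0.0, 2.0, 4.0):
            print(f"Eb/N0 = {snr_db} dB: I(X;Y) ~ {bpsk_awgn_capacity_mc(snr_db):.3f} bits/symbol")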

    Coded-OFDM for PLC systems in non-Gaussian noise channels

    PhD Thesis. Nowadays, power line communication (PLC) is a technology that uses the power line grid for communication purposes alongside the transmission of electrical energy, providing broadband services to homes and offices such as high-speed data, audio, video, and multimedia applications. The advantages of this technology are that it eliminates the need for new wiring and AC outlet plugs by using an existing infrastructure, is easy to install, and reduces the network deployment cost. However, the power line grid was originally designed for the transmission of electric power at low frequencies, i.e. 50/60 Hz. Therefore, the PLC channel appears as a harsh medium for low-power, high-frequency communication signals. The development of PLC systems for providing high-speed communication needs precise knowledge of the channel characteristics such as the attenuation, non-Gaussian noise, and selective fading. Non-Gaussian noise in PLC channels can be classified into Nakagami-m background interference (BI) noise and asynchronous impulsive noise (IN) modelled by a Bernoulli-Gaussian mixture (BGM) model or a Middleton class A (MCA) model. Besides the effects of the multipath PLC channel, asynchronous impulsive noise is the main cause of performance degradation in PLC channels. Binary/non-binary low-density parity-check (B/NB-LDPC) codes and turbo codes (TC) with soft iterative decoders have been proposed for orthogonal frequency-division multiplexing (OFDM) systems to mitigate the bit error rate (BER) performance degradation by exploiting frequency diversity. The performances are investigated utilizing high-order quadrature amplitude modulation (QAM) in the presence of non-Gaussian noise over multipath broadband power-line communication (BBPLC) channels. OFDM usually spreads the effect of IN over multiple sub-carriers after the discrete Fourier transform (DFT) operation at the receiver; hence, it requires only a simple single-tap zero-forcing (ZF) equalizer at the receiver. The thesis focuses on improving the performance of iterative decoders by deriving the effective, complex-valued, ratio distributions of the noise samples at the ZF equalizer output, considering the frequency-selective multipath PLC channels, background interference noise, and impulsive noise, and utilizing the outcome for computing the a priori log-likelihood ratios (LLRs) required for soft decoding algorithms. On the other hand, physical-layer network coding (PLNC) is introduced to help the PLC system extend its range of operation for exchanging information between two users (devices) via an intermediate relay (hub) node in two time slots in the presence of non-Gaussian noise over multipath PLC channels. A novel detection scheme is proposed that transforms the transmit signal constellation based on the frequency-domain channel coefficients to optimize detection at the relay node, with newly derived noise PDFs at the relay and end nodes. Additionally, conditions for optimum detection utilizing a high-order constellation are derived. Closed-form expressions of the BER and the average BER upper bound (AUB) are derived for a point-to-point system, and for a PLNC system for the end-node-to-relay, relay-to-end-node, and end-to-end links. Moreover, the convergence behaviour of iterative decoders is evaluated using EXtrinsic Information Transfer (EXIT) chart analysis and upper-bound analyses. Furthermore, an optimization of the threshold determination for clipping and blanking impulsive-noise mitigation methods is derived. The proposed systems are compared in performance using MATLAB simulations and analytical methods. Funding: Ministry of Higher Education in Ira
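    As a concrete illustration of computing a priori LLRs under a Bernoulli-Gaussian mixture impulsive-noise model (a Python sketch with made-up parameters; the thesis derives the exact post-equalizer ratio distributions), the LLR of a BPSK symbol uses the mixture density in place of a single Gaussian, which naturally limits the influence of large impulsive samples:

        import numpy as np

        def gaussian_pdf(x, var):
            return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

        def bgm_pdf(n, p_imp=0.05, var_bg=0.1, var_imp=10.0):
            """Bernoulli-Gaussian mixture noise density: background Gaussian with
            probability 1 - p_imp, background plus impulse with probability p_imp.
            Parameter values are illustrative only."""
            return ((1 - p_imp) * gaussian_pdf(n, var_bg)
                    + p_imp * gaussian_pdf(n, var_bg + var_imp))

        def bpsk_llr(y):
            """A priori LLR ln p(y|x=+1)/p(y|x=-1) for BPSK in BGM noise."""
            return np.log(bgm_pdf(y - 1.0)) - np.log(bgm_pdf(y + 1.0))

        for y in (0.9, -0.3, 4.0):
            print(f"y = {y:+.1f} -> LLR = {bpsk_llr(y):+.3f}")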

    CONVERGENCE IMPROVEMENT OF ITERATIVE DECODERS

    Iterative decoding techniques shook up the field of error correction and communications in general. Their remarkable compromise between complexity and performance offered much more freedom in code design and made highly complex codes, which until recently were considered undecodable, part of almost any communication system. Nevertheless, iterative decoding is a sub-optimum decoding method and, as such, it has attracted huge research interest. But the iterative decoder still hides many of its secrets, as it has not yet been possible to fully describe its behaviour and its cost function. This work presents the convergence problem of iterative decoding from various angles and explores methods for reducing any sub-optimalities in its operation. The decoding algorithms for both LDPC and turbo codes were investigated, and aspects that contribute to convergence problems were identified. A new algorithm was proposed, capable of providing considerable coding gain in any iterative scheme. Moreover, it was shown that for some codes the proposed algorithm is sufficient to eliminate any sub-optimality and perform maximum-likelihood decoding. Its performance and efficiency were compared to those of other convergence-improvement schemes. Various conditions that can be considered critical to the outcome of the iterative decoder were also investigated, and the decoding algorithm of LDPC codes was followed analytically to verify the experimental results.