
    Cross-layer hybrid automatic repeat request error control with turbo processing for wireless system

    The increasing demand for wireless communication requires efficient system design. One of the main challenges is designing an error control mechanism for the noisy wireless channel. Forward Error Correction (FEC) and Automatic Repeat reQuest (ARQ) are the two main error control mechanisms, and Hybrid ARQ allows either to be used when required. The issues with existing Hybrid ARQ schemes are reliability, complexity and inefficient design, so the design of Hybrid ARQ needs to be further improved in order to achieve performance close to the Shannon capacity. The objective of this research is to develop a Cross-Layer Design Hybrid ARQ, termed CLD_ARQ, to further minimize errors in wireless communication systems. CLD_ARQ comprises three main stages. First, a low-complexity FEC scheme termed IRC_FEC is developed for error detection and correction using an Irregular Repetition Code (IRC) with turbo processing. Second, IRC_FEC is enhanced into EM_IRC_FEC to improve the reliability of error detection by adopting extended mapping. The last stage is the development of the efficient CLD_ARQ itself, which combines EM_IRC_FEC with ARQ retransmission for error correction. In the proposed design, serial and parallel iterative decoding are deployed for error detection and correction. The performance of CLD_ARQ is evaluated over the Additive White Gaussian Noise (AWGN) channel using EXtrinsic Information Transfer (EXIT) charts, bit error rate (BER) and throughput analysis. The results show significant Signal-to-Noise Ratio (SNR) gains from the theoretical limit at a BER of 10^-5. IRC_FEC outperforms the Recursive Systematic Convolutional Code (RSCC) with an SNR gain of up to 7% due to the use of IRC as a simple channel code. CLD_ARQ improves the SNR gain by 53% compared to operation without ARQ, owing to the retransmission feedback, and the adoption of extended mapping in CLD_ARQ improves the SNR gain by up to 50% due to the enhanced error detection. In general, the proposed CLD_ARQ achieves low BER and performance close to the Shannon capacity even in poor channel conditions.
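    As a concrete picture of the staged behaviour described above (FEC decoding, error detection, retransmission on failure), the sketch below implements a minimal stop-and-wait hybrid ARQ loop in Python. It is illustrative only: a repetition code over a binary symmetric channel stands in for the turbo-processed IRC_FEC stage, a toy CRC-8 stands in for the extended-mapping error detection of EM_IRC_FEC, and all parameter values are assumptions rather than figures from the thesis.

        import random

        REP = 3          # repetition factor (simple stand-in for the IRC_FEC code)
        P_FLIP = 0.05    # binary symmetric channel crossover probability
        MAX_RETX = 4     # ARQ retransmission limit

        def crc8(bits):
            # Toy CRC-8 (polynomial 0x07) acting as the error-detection stage.
            reg = 0
            for b in bits:
                reg ^= b << 7
                reg = ((reg << 1) ^ 0x07) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
            return [(reg >> i) & 1 for i in range(7, -1, -1)]

        def encode(bits):
            # FEC stage: append the CRC, then repeat every bit REP times.
            coded = bits + crc8(bits)
            return [b for b in coded for _ in range(REP)]

        def channel(bits, p=P_FLIP):
            # Binary symmetric channel: flip each bit with probability p.
            return [b ^ (random.random() < p) for b in bits]

        def decode(rx):
            # Majority-vote decoding followed by the CRC check.
            est = [int(sum(rx[i:i + REP]) > REP // 2) for i in range(0, len(rx), REP)]
            data, tail = est[:-8], est[-8:]
            return data, crc8(data) == tail

        def harq_send(bits):
            # Stop-and-wait hybrid ARQ: retransmit while the CRC check fails.
            tx = encode(bits)
            for attempt in range(1, MAX_RETX + 1):
                data, ok = decode(channel(tx))
                if ok:
                    return data, attempt
            return data, MAX_RETX      # deliver best effort after exhausting retries

        random.seed(1)
        msg = [random.randint(0, 1) for _ in range(64)]
        est, tries = harq_send(msg)
        print("delivered correctly:", est == msg, "attempts:", tries)

    A type-II hybrid ARQ would instead send additional redundancy on each retransmission and combine it with what was already received; the stop-and-wait control structure stays the same.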

    Recent Trends and Considerations for High Speed Data in Chips and System Interconnects

    This paper discusses key issues related to the design of chip architectures with large processing volumes and high-speed system interconnects. Design methodologies and techniques are discussed, and recent trends and considerations are highlighted.

    ENSURE: A Time Sensitive Transport Protocol to Achieve Reliability Over Wireless in Petrochemical Plants

    As society becomes more reliant on the resources extracted in petroleum refinement, the production demand on petrochemical plants increases. A key element is producing efficiently while maintaining safety through constant monitoring of equipment feedback. Currently, temperature and flow sensors are deployed at various points of production, and 10/100 Ethernet cable is installed to connect them to a master control unit. This comes at a great monetary cost, not only at the time of implementation but also when repairs are required. The capability to provide plant-wide wireless networks would decrease both investment cost and the downtime needed for repairs. However, the current state of wireless networks does not provide any guarantee of reliability, which is critical to the industry. When factoring in the need for real-time information, network reliability decreases further. This work presents the design and development of a series of transport layer protocols (coined ENSURE) to provide time-sensitive reliability. More specifically, three versions were developed to meet the specific needs of the data being sent: ENSURE 1.0 addresses reliability, 2.0 enforces a time limit, and the final version, 3.0, provides a balance of the two. A network engineer can set each specific area of the plant to use a different version of ENSURE based on the network performance needs of the data it produces. The end result is a plant-wide wireless network that performs in a timely and reliable fashion.
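    Because the abstract only outlines the three ENSURE variants, the sketch below shows one plausible shape of the ENSURE 3.0 trade-off: unacknowledged packets are retransmitted (reliability), but only until a per-packet deadline expires (timeliness). The class name, timing constants and exact policy are illustrative assumptions, not the protocol as specified in the work.

        import time

        class EnsureLikeSender:
            # Deadline-aware retransmission: keep retransmitting lost packets,
            # but give up once the data is too old to be useful to the master
            # control unit.

            def __init__(self, send_fn, deadline_s=0.050, retx_interval_s=0.010):
                self.send_fn = send_fn            # callable(seq, payload) -> None
                self.deadline_s = deadline_s      # per-packet usefulness deadline
                self.retx_interval_s = retx_interval_s
                self.pending = {}                 # seq -> (payload, first_tx, last_tx)
                self.next_seq = 0

            def send(self, payload):
                now = time.monotonic()
                seq, self.next_seq = self.next_seq, self.next_seq + 1
                self.pending[seq] = (payload, now, now)
                self.send_fn(seq, payload)
                return seq

            def on_ack(self, seq):
                # Receiver acknowledged the packet; the reliability goal is met.
                self.pending.pop(seq, None)

            def tick(self):
                # Call periodically: retransmit overdue packets, expire stale ones.
                now = time.monotonic()
                for seq in list(self.pending):
                    payload, first_tx, last_tx = self.pending[seq]
                    if now - first_tx > self.deadline_s:
                        del self.pending[seq]       # too old: stop trying (time limit, as in ENSURE 2.0)
                    elif now - last_tx > self.retx_interval_s:
                        self.send_fn(seq, payload)  # still useful: retransmit (reliability, as in ENSURE 1.0)
                        self.pending[seq] = (payload, first_tx, now)

        sender = EnsureLikeSender(send_fn=lambda seq, data: None)  # plug in the real radio here
        sender.send(b"temperature=371K")
        sender.tick()

    A plant engineer would tune deadline_s per production area, which mirrors the per-area version selection described above.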

    Forward Error Correcting Codes for 100 Gbit/s Optical Communication Systems


    Bit flipping decoding for binary product codes

    Error control coding has been used to mitigate the impact of noise on the wireless channel. Today, wireless communication systems include Forward Error Correction (FEC) techniques in their design to help reduce the amount of retransmitted data. When designing a coding scheme, three challenges need to be addressed: the error-correcting capability of the code, its decoding complexity, and the delay introduced by the coding scheme. While it is easy to design coding schemes with a large error-correcting capability, it is a challenge to find practical decoding algorithms for them. Generally, increasing the length of a block code increases both its error-correcting capability and its decoding complexity. Product codes have been identified as a means to increase the block length of simpler codes yet keep their decoding complexity low. Bit flipping decoding has been identified as a simple-to-implement decoding algorithm, and research has generally focused on improving bit flipping decoding for Low-Density Parity-Check (LDPC) codes. In this study we develop a new decoding algorithm for binary product codes based on syndrome checking and bit flipping, addressing the major challenge of coding systems: developing codes with a large error-correcting capability yet a low decoding complexity. Simulated results show that the proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P. Elias in BER and, more significantly, in WER performance. The algorithm offers complexity comparable to the conventional algorithm in the Rayleigh fading channel.
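    As a concrete illustration of syndrome-based bit flipping on a product code (not the study's proposed algorithm, whose details are beyond the abstract), the sketch below forms the product of two single-parity-check codes and flips the bits at the intersection of a failing row check and a failing column check, which corrects any single error in the block.

        import numpy as np

        def spc_encode(bits):
            # Single-parity-check (SPC) encode: append an even-parity bit.
            return np.append(bits, bits.sum() % 2)

        def product_encode(msg, k=4):
            # Encode a k*k message block with an SPC x SPC product code -> (k+1) x (k+1) array.
            m = msg.reshape(k, k)
            rows = np.array([spc_encode(r) for r in m])          # row parities
            return np.array([spc_encode(c) for c in rows.T]).T   # column parities (incl. checks on checks)

        def bit_flip_decode(rx):
            # Syndrome-based bit flipping: flip positions where both the row and
            # column parity checks fail; this corrects any single bit error.
            word = rx.copy()
            row_syn = word.sum(axis=1) % 2     # 1 where a row parity check fails
            col_syn = word.sum(axis=0) % 2     # 1 where a column parity check fails
            for i in np.flatnonzero(row_syn):
                for j in np.flatnonzero(col_syn):
                    word[i, j] ^= 1            # intersection of failing checks -> flip
            return word

        rng = np.random.default_rng(0)
        codeword = product_encode(rng.integers(0, 2, 16))
        rx = codeword.copy()
        rx[2, 3] ^= 1                          # inject a single bit error
        print("corrected:", np.array_equal(bit_flip_decode(rx), codeword))

    With stronger component codes and several flipping iterations the same idea extends to heavier error patterns, which is exactly the error-correcting-capability versus decoding-complexity trade-off discussed above.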

    100 Gb/s Data Link Layer - from a Simulation to FPGA Implementation, Journal of Telecommunications and Information Technology

    In this paper, a simulation and a hardware implementation of a data link layer for 100 Gb/s terahertz wireless communications are presented. In such a solution, the overhead of protocols and coding should be reduced to a minimum; this is especially important for high-speed networks, where even a small degradation of efficiency lowers the user data throughput by several gigabits per second. The following aspects are explained: acknowledgement frame compression, optimal frame segmentation and aggregation, Reed-Solomon forward error correction, an algorithm to control the transmitted data redundancy (link adaptation), and the FPGA implementation of a demonstrator. The most important conclusions are that changing the segment size mostly influences the uncoded transmissions, and that the FPGA memory footprint can be significantly reduced when hybrid automatic repeat request type II is replaced by type I with link adaptation. Additionally, an algorithm for controlling the Reed-Solomon redundancy is presented. The hardware implementation is demonstrated, and the device achieves a net data rate of 97 Gb/s.
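    The redundancy-control idea can be pictured as a simple feedback loop that widens or narrows the Reed-Solomon parity field according to the frame error rate observed over a measurement window. The thresholds, window length and parity steps below are assumptions made for illustration; the paper's actual control algorithm and its integration with HARQ type I are not reproduced here.

        import random

        class RsLinkAdaptation:
            # Tune the number of Reed-Solomon parity symbols from the observed
            # frame error rate (FER), trading throughput against robustness.

            PARITY_LEVELS = (4, 8, 16, 32)    # candidate 2t values for RS(255, 255 - 2t)

            def __init__(self, window=1000, raise_fer=1e-2, lower_fer=1e-4):
                self.window = window          # frames per measurement interval
                self.raise_fer = raise_fer    # FER above this -> add redundancy
                self.lower_fer = lower_fer    # FER below this -> remove redundancy
                self.level = 1                # start at RS(255, 247), i.e. 8 parity symbols
                self.frames = 0
                self.errors = 0

            @property
            def parity_symbols(self):
                return self.PARITY_LEVELS[self.level]

            def report_frame(self, had_error):
                # Feed the per-frame CRC/decoder outcome; returns the parity setting to use.
                self.frames += 1
                self.errors += int(had_error)
                if self.frames >= self.window:
                    fer = self.errors / self.frames
                    if fer > self.raise_fer and self.level < len(self.PARITY_LEVELS) - 1:
                        self.level += 1       # channel got worse: spend more overhead on FEC
                    elif fer < self.lower_fer and self.level > 0:
                        self.level -= 1       # channel is clean: reclaim throughput
                    self.frames = self.errors = 0
                return self.parity_symbols

        adapt = RsLinkAdaptation()
        random.seed(2)
        for _ in range(5000):                 # toy feed in which 2% of frames fail
            adapt.report_frame(had_error=random.random() < 0.02)
        print("parity symbols now:", adapt.parity_symbols)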

    Enhanced Machine Learning Techniques for Early HARQ Feedback Prediction in 5G

    We investigate Early Hybrid Automatic Repeat reQuest (E-HARQ) feedback schemes enhanced by machine learning techniques as a path towards ultra-reliable and low-latency communication (URLLC). To this end, we propose machine learning methods to predict the outcome of the decoding process ahead of the end of the transmission. We discuss different input features and classification algorithms, ranging from traditional methods to newly developed supervised autoencoders. These methods are evaluated based on their prospects of complying with the URLLC requirement of effective block error rates below 10^-5 at small latency overheads. We provide realistic performance estimates in a system model incorporating scheduling effects to demonstrate the feasibility of E-HARQ across different signal-to-noise ratios, subcode lengths, channel conditions and system loads, and show the benefit over regular HARQ and existing E-HARQ schemes without machine learning.
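    To make the prediction step concrete, the sketch below trains a plain binary classifier to guess the eventual decoding outcome from features of a partially received transmission, so that a NACK can be issued before decoding completes. The synthetic LLR-statistics features and the logistic-regression model are stand-ins chosen for brevity; the paper evaluates richer feature sets and classifiers, including supervised autoencoders, against URLLC error-rate targets.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_samples, n_features = 20_000, 8

        # Synthetic stand-in data: transmissions that will decode correctly tend to
        # have a larger mean |LLR| and a smaller LLR variance than those that fail.
        decode_ok = rng.integers(0, 2, n_samples)
        mean_abs_llr = rng.normal(3.0 + 2.0 * decode_ok, 1.0, n_samples)
        llr_var = rng.normal(4.0 - 1.5 * decode_ok, 1.0, n_samples)
        noise = rng.normal(0.0, 1.0, (n_samples, n_features - 2))
        X = np.column_stack([mean_abs_llr, llr_var, noise])

        X_tr, X_te, y_tr, y_te = train_test_split(X, decode_ok, test_size=0.25, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

        # In a URLLC setting the decision threshold would be tuned so that the rate of
        # "predicted success but decoding fails" stays below the target error rate.
        p_success = clf.predict_proba(X_te)[:, 1]
        early_nack = p_success < 0.5
        print(f"accuracy: {clf.score(X_te, y_te):.3f}, early-NACK rate: {early_nack.mean():.3f}")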

    Reconfigurable architectures for beyond 3G wireless communication systems


    Capacity-Achieving Coding Mechanisms: Spatial Coupling and Group Symmetries

    The broad theme of this work is the construction of optimal transmission mechanisms for a wide variety of communication systems. In particular, this dissertation provides a proof of threshold saturation for spatially-coupled codes, low-complexity capacity-achieving coding schemes for side-information problems, a proof that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels, and a mathematical framework to design delay-sensitive communication systems. Spatially-coupled codes are a class of codes on graphs that achieve capacity universally over binary memoryless symmetric (BMS) channels under belief-propagation decoding. The underlying phenomenon behind spatial coupling, known as "threshold saturation via spatial coupling", turns out to be general, and the technique has been applied to a wide variety of systems. In this work, a proof of the threshold saturation phenomenon is provided for irregular low-density parity-check (LDPC) and low-density generator-matrix (LDGM) ensembles on BMS channels. This proof is far simpler than published alternative proofs and remains the only technique that handles irregular and LDGM codes. Also, low-complexity capacity-achieving codes are constructed for three coding problems via spatial coupling: 1) rate distortion with side-information, 2) channel coding with side-information, and 3) the write-once memory system. All these schemes are based on spatially coupled compound LDGM/LDPC ensembles. Reed-Muller and Bose-Chaudhuri-Hocquenghem (BCH) codes are well-known algebraic codes introduced more than 50 years ago. While these codes have been studied extensively in the literature, it was not known whether they achieve capacity. This work introduces a technique to show that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels under maximum a posteriori (MAP) decoding. Instead of relying on the weight enumerators or other precise details of these codes, this technique requires only that the codes have highly symmetric permutation groups. In fact, any sequence of linear codes with increasing blocklengths, whose rates converge to a number between 0 and 1 and whose permutation groups are doubly transitive, achieves capacity on erasure channels under bit-MAP decoding. This provides a rare example in information theory where symmetry alone is sufficient to achieve capacity. While the channel capacity provides a useful benchmark for practical design, today's communication systems also demand small latency and good link-layer metrics. Such delay-sensitive communication systems are also studied in this work, and a mathematical framework is developed to provide insights into their optimal design.
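    One ingredient of the erasure-channel result is elementary enough to demonstrate directly: on the binary erasure channel, MAP decoding of any linear code reduces to solving the parity-check system H x = 0 over GF(2) for the erased coordinates, and an erased bit is recovered exactly when its value is uniquely determined. The toy below does this for the self-dual (8, 4) Reed-Muller code RM(1, 3); it illustrates the decoding model only, not the dissertation's symmetry-based capacity argument.

        import itertools
        import numpy as np

        # Generator of RM(1, 3): the all-ones row plus the three coordinate functions on F_2^3.
        pts = np.array(list(itertools.product([0, 1], repeat=3)))
        G = np.vstack([np.ones(8, dtype=int), pts.T]) % 2      # 4 x 8 generator matrix
        H = G                                                  # RM(1, 3) is self-dual, so H = G

        def bec_map_decode(rx, erased, H):
            # Solve H x = 0 over GF(2) for the erased positions by Gaussian elimination.
            # An erased bit is filled in only if it is uniquely determined; otherwise -1.
            n_e = len(erased)
            known = [i for i in range(H.shape[1]) if i not in erased]
            rhs = H[:, known] @ rx[known] % 2                  # move known bits to the right-hand side
            aug = np.column_stack([H[:, erased] % 2, rhs])     # augmented system [A | b]
            row, pivots = 0, {}
            for col in range(n_e):                             # reduced row echelon form over GF(2)
                hits = [r for r in range(row, aug.shape[0]) if aug[r, col]]
                if not hits:
                    continue
                aug[[row, hits[0]]] = aug[[hits[0], row]]
                for r in range(aug.shape[0]):
                    if r != row and aug[r, col]:
                        aug[r] ^= aug[row]
                pivots[col], row = row, row + 1
            word = rx.copy()
            for j, e in enumerate(erased):
                r = pivots.get(j)
                if r is not None and aug[r, :n_e].sum() == 1:
                    word[e] = aug[r, -1]                       # uniquely determined erased bit
                else:
                    word[e] = -1                               # ambiguous: the erasure remains
            return word

        codeword = G.T @ np.array([1, 0, 1, 1]) % 2            # encode a 4-bit message
        rx, erased = codeword.copy(), [1, 4, 6]
        rx[erased] = 0                                         # erase three positions (values are ignored)
        print("recovered:", np.array_equal(bec_map_decode(rx, erased, H), codeword))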