195 research outputs found

    Improving decoding speed for parallel distributed video coding architectures

    The research work disclosed in this publication is partially funded by the Strategic Educational Pathways Scholarship Scheme (Malta). The scholarship is part-financed by the European Union – European Social Fund (ESF 1.25). The Distributed Video Coding (DVC) paradigm is suitable for devices with limited encoding capabilities. However, it is characterized by excessive decoding delays which compromise its application to time-constrained services. This limitation can be mitigated by adopting parallel DVC architectures. Yet, the traditional Gray-code or binary-code representations have a non-uniform distribution of mismatch across bit-planes, resulting in uneven decoding times which hinder parallel decoding. This work proposes an alternative indexing scheme in which mismatch is distributed more uniformly amongst bit-planes, so that comparable decoding delays are expected, facilitating parallel implementations. This method reduces decoding time by up to 32% compared to architectures using simple parallel techniques, at a slight loss of 0.06 dB in RD performance.
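To make the bit-plane mismatch argument concrete, here is a small illustrative sketch (not the paper's indexing scheme): it counts, per bit-plane, how often source indices and noisy side-information indices disagree under plain binary and Gray labellings, showing the non-uniform spread that leaves some bit-planes much harder to decode than others. The 4-bit indices, the ±1 index noise, and all names are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's method): compare how index mismatch
# between source and side information spreads across bit-planes when indices
# are labelled with plain binary vs. Gray codes. Synthetic data, 4-bit indices.
import numpy as np

rng = np.random.default_rng(0)
BITS = 4

def gray(x):
    """Standard binary-reflected Gray code of an integer (elementwise on arrays)."""
    return x ^ (x >> 1)

def bitplane_mismatch(a, b, bits):
    """Fraction of differing bits in each bit-plane (MSB first)."""
    return [float(np.mean(((a >> p) & 1) != ((b >> p) & 1)))
            for p in range(bits - 1, -1, -1)]

# Source indices and a noisy side-information version (small index offsets).
src = rng.integers(0, 2**BITS, size=100_000)
side = np.clip(src + rng.integers(-1, 2, size=src.size), 0, 2**BITS - 1)

print("binary:", [round(f, 3) for f in bitplane_mismatch(src, side, BITS)])
print("gray  :", [round(f, 3) for f in bitplane_mismatch(gray(src), gray(side), BITS)])
```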

    Turbo multiuser detection with integrated channel estimation for differentially coded CDMA systems.


    Advanced Coding And Modulation For Ultra-wideband And Impulsive Noises

    The ever-growing demand for higher-quality and faster multimedia content delivery over short distances in home environments drives the quest for higher data rates in wireless personal area networks (WPANs). One of the candidate IEEE 802.15.3a WPAN proposals supports data rates up to 480 Mbps by using punctured convolutional codes with quadrature phase shift keying (QPSK) modulation for a multi-band orthogonal frequency-division multiplexing (MB-OFDM) system over ultra-wideband (UWB) channels. In the first part of this dissertation, we combine more powerful near-Shannon-limit turbo codes with bandwidth-efficient trellis coded modulation, i.e., turbo trellis coded modulation (TTCM), to further improve the data rates up to 1.2 Gbps. A modified iterative decoder for this TTCM-coded MB-OFDM system is proposed, and its bit error rate performance under various impulsive noises over both Gaussian and UWB channels is extensively investigated, especially in mismatched scenarios. A robust decoder which is immune to noise mismatch is provided, based on a comparison of impulsive noises in the time domain and the frequency domain.

    Accurate estimation of the dynamic noise model can be very difficult or impossible at the receiver, so significant performance degradation may occur due to noise mismatch. In the second part of this dissertation, we prove that the minimax decoder in \cite, which aims at minimizing the worst-case bit error probability instead of the average bit error probability, is optimal and robust for certain noise models with unknown prior probabilities in two and higher dimensions.

    Besides turbo codes, another class of error-correcting codes that approaches the Shannon capacity is low-density parity-check (LDPC) codes. In the last part of this dissertation, we extend the density evolution method for sum-product decoding with mismatched noise. We prove that as long as the true noise type and the estimated noise type used in the decoder are both binary-input memoryless output-symmetric channels, the output of the mismatched log-likelihood ratio (LLR) computation is also symmetric. We show that the Shannon capacity can be evaluated for mismatched LLR computation and that it is reduced if the mismatched LLR computation is not a one-to-one mapping. We derive the Shannon capacity, threshold, and stability condition of LDPC codes for mismatched BIAWGN and BIL noise types. The results show that noise variance estimation errors do not affect the Shannon capacity or the stability condition, but they do reduce the threshold. The mismatch in noise type only reduces the Shannon capacity when the LLR computation is based on BIL.
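As a concrete illustration of the mismatched-LLR idea discussed above, the sketch below computes BPSK LLRs when the receiver's noise model or parameter differs from the true channel. The formulas are standard (2y/σ² for AWGN; (|y+1| − |y−1|)/b for additive Laplacian noise); the Laplacian stands in here only as an example of an alternative noise family and is not necessarily the BIL model of the dissertation, and all parameter values are made up.

```python
# Minimal sketch of mismatched LLR computation for BPSK (+1/-1) signalling:
# the decoder's LLR formula assumes one noise model/parameter while the
# channel actually uses another. Not the dissertation's code; values are
# illustrative only.
import numpy as np

def llr_biawgn(y, sigma):
    """LLR for BPSK over AWGN with noise standard deviation sigma: 2*y / sigma^2."""
    return 2.0 * y / sigma**2

def llr_laplace(y, b):
    """LLR for BPSK over additive Laplacian noise with scale b."""
    return (np.abs(y + 1.0) - np.abs(y - 1.0)) / b

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=10)
x = 1.0 - 2.0 * bits                           # BPSK mapping: 0 -> +1, 1 -> -1
sigma_true = 0.8
y = x + sigma_true * rng.normal(size=x.size)   # true channel: AWGN

# Matched vs. mismatched metrics at the receiver.
llr_matched    = llr_biawgn(y, sigma_true)     # correct model and parameter
llr_var_mism   = llr_biawgn(y, 0.5)            # AWGN model, wrong noise variance
llr_model_mism = llr_laplace(y, 0.8)           # wrong noise family

print(np.round(llr_matched[:5], 2))
print(np.round(llr_var_mism[:5], 2))
print(np.round(llr_model_mism[:5], 2))
```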

    Raptor Codes for BIAWGN Channel: SNR Mismatch and the Optimality of the Inner and Outer Rates

    Fountain codes are a class of rateless codes with two interesting properties: first, they can generate a potentially limitless number of encoded symbols from a finite set of source symbols, and second, the source symbols can be recovered from any subset of encoded symbols whose cardinality is greater than the number of source symbols. Raptor codes are the first implementation of fountain codes with linear complexity and vanishing error floors on noisy channels. They are designed by the serial concatenation of an inner Luby transform (LT) code, the first practical realization of fountain codes, and an outer low-density parity-check (LDPC) code. Raptor codes were originally designed to operate on the binary erasure channel (BEC); since their invention, however, they have received considerable attention aimed at improving their performance on noisy channels, especially additive white Gaussian noise (AWGN) channels. This dissertation considers two issues that face Raptor codes on the binary-input additive white Gaussian noise (BIAWGN) channel: inaccurate estimation of the signal-to-noise ratio (SNR) and the optimality of the inner and outer rates.

    First, for codes that use a belief propagation algorithm (BPA) in decoding, such as Raptor codes on the BIAWGN channel, accurate estimation of the channel SNR is crucial to achieving optimal decoder performance. A difference between the estimated SNR and the actual channel SNR is known as signal-to-noise ratio mismatch (SNRM). Using asymptotic analysis and simulation, we show the degrading effects of SNRM on Raptor codes and observe that, if the mismatch is large enough, it can cause decoding to fail. Using the discretized density evolution (DDE) algorithm, with the modifications required to simulate the asymptotic performance under SNRM, we determine the decoding threshold of Raptor codes for different values of the SNRM ratio. Determining the threshold under SNRM lets us quantify its effects, which in turn can be used to reach important conclusions about the impact of SNRM on Raptor codes and to compare Raptor codes with different designs in terms of their tolerance to SNRM. Based on the threshold response to SNRM, we observe that SNR underestimation is slightly less detrimental to Raptor codes than SNR overestimation at lower mismatch ratios; however, as the mismatch increases, underestimation becomes more detrimental. Further, the threshold analysis can help estimate the tolerance to SNRM of a Raptor code with given code parameters transmitted at some SNR value, or, equivalently, estimate the SNR needed for a given code to achieve a certain level of tolerance to SNRM. Using our observations about the performance of Raptor codes under SNRM, we propose an optimization method to design output degree distributions for the LT part that yield Raptor codes with more tolerance to high levels of SNRM.

    Second, we study the effects of choosing different inner and outer code rate pairs on the decoding threshold and performance of Raptor codes on the BIAWGN channel. For concatenated codes such as Raptor codes, given any instance of the overall code rate R, different inner (Ri) and outer (Ro) code rate combinations can be used to share the available redundancy as long as R = Ri × Ro. Determining the optimal inner and outer rate pair can improve the threshold and performance of Raptor codes. Using asymptotic analysis, we show the effect of the rate pair choice on the threshold of Raptor codes on the BIAWGN channel and how the optimal rate pair is decided. We also show that Raptor codes with different output degree distributions can have different optimal rate pairs; therefore, by identifying the optimal rate pair we can further improve performance and avoid suboptimal use of the code. We observe that as the outer rate of a Raptor code increases, the potential for achieving a better threshold increases, and we explain why the optimal outer rate cannot occur at lower values. Finally, we present an optimization method that considers the optimality of the inner and outer rates in designing the output degree distribution of the inner LT part of Raptor codes. The designed distributions show improvement in both the decoding threshold and performance compared to other code designs that do not consider the optimality of the inner and outer rates.
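As a small illustration of the rate-splitting constraint above, the sketch below enumerates a few (inner, outer) rate pairs that realize the same overall rate R = Ri × Ro. The candidate outer rates are invented for illustration; which pair is actually optimal is what the dissertation's asymptotic analysis determines.

```python
# Illustrative sketch of the rate-splitting constraint for a serially
# concatenated code (e.g. a Raptor code's outer LDPC and inner LT parts):
# a target overall rate R can be realized by many (R_inner, R_outer) pairs
# with R = R_inner * R_outer. Candidate outer rates below are made up.
R_target = 0.5

for R_outer in (0.90, 0.95, 0.98, 0.99):
    R_inner = R_target / R_outer
    print(f"R_outer = {R_outer:.2f}  ->  R_inner = {R_inner:.3f}  "
          f"(overall {R_inner * R_outer:.2f})")
```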

    ProductAE: Toward Deep Learning Driven Error-Correction Codes of Large Dimensions

    While decades of theoretical research have led to the invention of several classes of error-correction codes, the design of such codes is an extremely challenging task, mostly driven by human ingenuity. Recent studies demonstrate that such designs can be effectively automated and accelerated via tools from machine learning (ML), thus enabling ML-driven classes of error-correction codes with promising performance gains compared to classical designs. A fundamental challenge, however, is that it is prohibitively complex, if not impossible, to design and train fully ML-driven encoder and decoder pairs for large code dimensions. In this paper, we propose Product Autoencoder (ProductAE) -- a computationally efficient family of deep learning driven (encoder, decoder) pairs -- aimed at enabling the training of relatively large codes (both encoder and decoder) with a manageable training complexity. We build upon ideas from classical product codes and propose constructing large neural codes from smaller code components. ProductAE boils down the complex problem of training the encoder and decoder for a large code dimension k and blocklength n to less complex sub-problems of training encoders and decoders for smaller dimensions and blocklengths. Our training results show successful training of ProductAEs of dimensions as large as k = 300 bits with meaningful performance gains compared to state-of-the-art classical and neural designs. Moreover, we demonstrate excellent robustness and adaptivity of ProductAEs to channel models different from the ones used for training.
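For context on the classical product-code idea that ProductAE builds on, the sketch below assembles a longer code from two short component codes by encoding the rows and then the columns of a message matrix. The component code here is a simple single-parity-check code chosen purely for illustration; ProductAE replaces the component encoders and decoders with small neural networks, which this sketch does not attempt to reproduce.

```python
# Sketch of a classical two-dimensional product code: arrange k1*k2 message
# bits in a matrix, encode each row with a short component code, then encode
# each column. Component code here is a (k+1, k) single parity check, used
# only to illustrate how a large code is built from small components.
import numpy as np

def spc_encode(rows):
    """Apply a (k+1, k) single-parity-check code to each row of `rows`."""
    parity = rows.sum(axis=1, keepdims=True) % 2
    return np.hstack([rows, parity])

k1, k2 = 4, 3                                 # component dimensions -> overall k = 12
rng = np.random.default_rng(2)
msg = rng.integers(0, 2, size=(k2, k1))

row_encoded = spc_encode(msg)                 # encode each row:    shape (k2, k1+1)
codeword = spc_encode(row_encoded.T).T        # encode each column: shape (k2+1, k1+1)

print("message (k =", k1 * k2, "bits):\n", msg)
print("product codeword (n =", codeword.size, "bits):\n", codeword)
```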