    The complexity of information set decoding

    Information set decoding is an algorithm for decoding any linear code. Expressions for the complexity of the procedure that are logarithmically exact for virtually all codes are presented. The expressions cover the cases of complete minimum-distance decoding and bounded hard-decision decoding, as well as the important case of bounded soft-decision decoding. It is demonstrated that these results are vastly better than those for the trivial algorithms of searching through all codewords or through all syndromes, and are significantly better than those for any other general algorithm currently known. For codes over large symbol fields, the procedure tends towards a complexity that is subexponential in the symbol size.
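    The abstract treats information set decoding only at the level of its complexity; as a concrete reference point, here is a minimal sketch of the plain (Prange-style) variant in the syndrome formulation for binary codes. The interface (H, s, t), the iteration cap, and the Hamming-code example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prange_isd(H, s, t, max_iters=100_000, rng=None):
    """Plain (Prange-style) information set decoding over GF(2) -- a sketch.

    H : (n-k, n) parity-check matrix (0/1 entries)
    s : length n-k syndrome
    t : target error weight
    Returns e with H @ e = s (mod 2) and weight <= t, or None if the cap is hit.
    """
    rng = np.random.default_rng() if rng is None else rng
    r, n = H.shape
    for _ in range(max_iters):
        perm = rng.permutation(n)
        # Augment the column-permuted matrix with the syndrome and eliminate.
        A = np.concatenate([H[:, perm], s.reshape(-1, 1)], axis=1) % 2
        ok = True
        for col in range(r):
            pivots = np.flatnonzero(A[col:, col])
            if pivots.size == 0:
                ok = False                # first r permuted columns not invertible
                break
            piv = col + pivots[0]
            A[[col, piv]] = A[[piv, col]]
            mask = A[:, col].copy()
            mask[col] = 0
            A = (A + np.outer(mask, A[col])) % 2   # clear the pivot column elsewhere
        if not ok:
            continue
        s_prime = A[:, -1]
        if s_prime.sum() <= t:
            # Guess: all errors lie in the r chosen positions.
            e = np.zeros(n, dtype=int)
            e[perm[:r]] = s_prime
            return e
    return None

if __name__ == "__main__":
    # (7,4) Hamming code, single-bit error (t = 1).
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    e_true = np.zeros(7, dtype=int); e_true[2] = 1
    print(prange_isd(H, (H @ e_true) % 2, t=1))
```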

    Critical phenomena in complex networks

    The combination of the compactness of networks, featuring small diameters, and their complex architectures results in a variety of critical effects dramatically different from those in cooperative systems on lattices. In the last few years, researchers have made important steps toward understanding the qualitatively new critical phenomena in complex networks. We review the results, concepts, and methods of this rapidly developing field. Here we mostly consider two closely related classes of these critical phenomena, namely structural phase transitions in the network architectures and transitions in cooperative models on networks as substrates. We also discuss systems where a network and interacting agents on it influence each other. We overview a wide range of critical phenomena in equilibrium and growing networks, including the birth of the giant connected component, percolation, k-core percolation, phenomena near epidemic thresholds, condensation transitions, critical phenomena in spin models placed on networks, synchronization, and self-organized criticality effects in interacting systems on networks. We also discuss strong finite-size effects in these systems and highlight open problems and perspectives.
    Comment: Review article, 79 pages, 43 figures, 1 table, 508 references, extended
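    As a toy illustration of the first phenomenon listed above, the birth of the giant connected component, the following sketch measures the largest-component fraction of an Erdős–Rényi G(n, p) graph as the mean degree c = p(n-1) crosses 1. The graph model, sizes, and union-find implementation are illustrative choices, not taken from the review.

```python
import random

def largest_component_fraction(n, p, seed=0):
    """Fraction of nodes in the largest connected component of G(n, p)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:          # each edge present independently
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = {}
    for v in range(n):
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

if __name__ == "__main__":
    n = 2000
    for c in (0.5, 1.0, 1.5, 2.0):        # mean degree; giant component appears near c = 1
        print(c, round(largest_component_fraction(n, c / (n - 1)), 3))
```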

    Advanced Coding And Modulation For Ultra-wideband And Impulsive Noises

    The ever-growing demand for higher quality and faster multimedia content delivery over short distances in home environments drives the quest for higher data rates in wireless personal area networks (WPANs). One of the candidate IEEE 802.15.3a WPAN proposals supports data rates up to 480 Mbps by using punctured convolutional codes with quadrature phase shift keying (QPSK) modulation for a multi-band orthogonal frequency-division multiplexing (MB-OFDM) system over ultra-wideband (UWB) channels. In the first part of this dissertation, we combine more powerful near-Shannon-limit turbo codes with bandwidth-efficient trellis coded modulation, i.e., turbo trellis coded modulation (TTCM), to further improve the data rates up to 1.2 Gbps. A modified iterative decoder for this TTCM-coded MB-OFDM system is proposed, and its bit error rate performance under various impulsive noises over both Gaussian and UWB channels is extensively investigated, especially in mismatched scenarios. Because accurate estimation of the dynamic noise model can be very difficult or impossible at the receiver, significant performance degradation may occur due to noise mismatch; a robust decoder that is immune to noise mismatch is therefore provided, based on a comparison of impulsive noises in the time domain and the frequency domain. In the second part of this dissertation, we prove that the minimax decoder in \cite, which instead of minimizing the average bit error probability aims at minimizing the worst bit error probability, is optimal and robust to certain noise models with unknown prior probabilities in two and higher dimensions. Besides turbo codes, another class of error-correcting codes that approaches the Shannon capacity is low-density parity-check (LDPC) codes. In the last part of this dissertation, we extend the density evolution method to sum-product decoding with mismatched noise. We prove that as long as the true noise type and the estimated noise type used in the decoder are both binary-input memoryless output-symmetric channels, the output of the mismatched log-likelihood ratio (LLR) computation is also symmetric. We show that the Shannon capacity can be evaluated for mismatched LLR computation and that it is reduced if the mismatched LLR computation is not a one-to-one mapping. We derive the Shannon capacity, threshold, and stability condition of LDPC codes for mismatched BIAWGN and BIL noise types. The results show that noise variance estimation errors do not affect the Shannon capacity or the stability condition, but they do reduce the threshold. A mismatch in noise type only reduces the Shannon capacity when the LLR computation is based on BIL.
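    One piece of the LDPC discussion above is concrete enough to illustrate directly: for BPSK over a binary-input AWGN channel, the LLR is 2y/σ², so using an estimated variance in place of the true one only rescales the LLR. A rescaling is a one-to-one map, consistent with the abstract's claim that variance mismatch leaves the capacity and stability condition intact. The sigma values below are made up for the demonstration.

```python
import numpy as np

def bpsk_awgn_llr(y, sigma):
    """LLR of bit 0 (sent as +1) vs bit 1 (sent as -1) for y = x + n, n ~ N(0, sigma^2)."""
    return 2.0 * y / sigma**2

rng = np.random.default_rng(0)
sigma_true, sigma_est = 0.8, 1.0          # true vs. (mismatched) estimated noise std
x = rng.choice([-1.0, 1.0], size=5)
y = x + sigma_true * rng.normal(size=5)

llr_matched    = bpsk_awgn_llr(y, sigma_true)
llr_mismatched = bpsk_awgn_llr(y, sigma_est)

# The mismatched LLRs differ from the matched ones by a constant factor
# sigma_true**2 / sigma_est**2, i.e. a one-to-one (order-preserving) mapping.
print(llr_mismatched / llr_matched)
```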

    Efficient Maximum-Likelihood Soft-Decision Decoding of Linear Block Codes Using Algorithm A*

    In this report we present a novel and efficient maximum-likelihood soft-decision decoding algorithm for linear block codes. The approach used here is to convert the decoding problem into a search problem through a graph that is a trellis for an equivalent code of the transmitted code. Algorithm A*, which uses a priority-first search strategy, is employed to search through this graph. This search is guided by an evaluation function f defined to take advantage of the information provided by the received vector and the inherent properties of the transmitted code. This function f is used to drastically reduce the search space and to make the decoding effort of the algorithm adaptable to the noise level. Simulation results for the (48, 24) and (72, 36) binary extended quadratic residue codes and the (128, 64) binary extended BCH code are given to substantiate the above claim.
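    To make the search formulation concrete, here is a much-simplified priority-first (A*) decoding sketch: nodes are codeword prefixes, the state is the partial syndrome, and the heuristic is just the sum of the cheapest per-position costs of the undecided bits. This heuristic is admissible but deliberately generic; it is not the evaluation function f of the report, and the (7, 4) Hamming code and received values are made up for illustration.

```python
import heapq
import numpy as np

def a_star_ml_decode(H, y):
    """ML soft-decision decoding of a binary linear code by priority-first (A*) search.

    H : (n-k, n) parity-check matrix; y : real received vector, bit b sent as 1 - 2b.
    """
    r, n = H.shape
    # Squared-distance cost of deciding bit 0 / bit 1 at each position.
    cost = np.array([[(y[i] - (1 - 2 * b)) ** 2 for b in (0, 1)] for i in range(n)])
    # Admissible lower bound on the remaining cost from position i onward.
    suffix = np.concatenate([np.cumsum(cost.min(axis=1)[::-1])[::-1], [0.0]])

    heap = [(suffix[0], 0.0, 0, (0,) * r, ())]    # (f, g, depth, partial syndrome, prefix)
    expanded = {}
    while heap:
        f, g, i, state, prefix = heapq.heappop(heap)
        if i == n:
            if not any(state):                    # zero syndrome -> valid codeword
                return np.array(prefix)
            continue
        if expanded.get((i, state), float("inf")) <= g:
            continue
        expanded[(i, state)] = g
        col = H[:, i]
        for b in (0, 1):
            new_state = tuple(int(s) ^ (int(col[j]) & b) for j, s in enumerate(state))
            ng = g + cost[i, b]
            heapq.heappush(heap, (ng + suffix[i + 1], ng, i + 1, new_state, prefix + (b,)))
    return None

if __name__ == "__main__":
    H = np.array([[1, 0, 1, 0, 1, 0, 1],          # (7, 4) Hamming code
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    y = np.array([0.9, 1.1, -0.8, 1.2, 0.2, 0.7, -1.1])
    print(a_star_ml_decode(H, y))
```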