
    Coding with Encoding Uncertainty

    We study the channel coding problem when errors and uncertainty occur in the encoding process. For simplicity, we assume that the channel between the encoder and the decoder is perfect. Focusing on linear block codes, we model the encoding uncertainty as erasures on the edges in the factor graph of the encoder's generator matrix. We first take a worst-case approach and find the maximum tolerable number of erasures for perfect error correction. Next, we take a probabilistic approach and derive a sufficient condition on the rate of a set of codes such that the decoding error probability vanishes as the blocklength tends to infinity. In both scenarios, owing to the inherent asymmetry of the problem, we derive the results from first principles; this indicates that robustness to encoding errors requires code properties different from the classical ones. Comment: 12 pages; a shorter version of this work will appear in the proceedings of ISIT 201
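    The worst-case question above can be made concrete with a small brute-force experiment. The sketch below is a toy reading of the model, not the paper's exact formulation: it assumes an erased edge simply drops the corresponding term from the parity computation (i.e., the nonzero entry of the generator matrix is zeroed), and it measures the largest number of erased edges for which nearest-codeword decoding still recovers every message under every erasure pattern. The [6,3] generator matrix is an arbitrary illustrative choice.

```python
# A minimal toy sketch (not the paper's exact model): encoding uncertainty is
# modelled by zeroing out ("erasing") nonzero entries of the generator matrix,
# i.e. edges of its factor graph, before encoding.  We brute-force the largest
# number of erased edges e such that EVERY message is still recovered by
# nearest-codeword decoding under EVERY erasure pattern of size e.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],          # generator of a small [6,3] code
              [0, 1, 0, 1, 0, 1],          # (chosen only for illustration)
              [0, 0, 1, 0, 1, 1]], dtype=int)
k, n = G.shape
messages = np.array(list(itertools.product([0, 1], repeat=k)), dtype=int)
codebook = messages @ G % 2               # clean codewords, one per message

def decode(y):
    """Nearest-codeword (Hamming distance) decoding to a message index."""
    return int(np.argmin(((codebook + y) % 2).sum(axis=1)))

edges = list(zip(*np.nonzero(G)))          # edges = nonzero entries of G

def tolerates(e):
    """True if every size-e edge-erasure pattern is corrected for every message."""
    for pattern in itertools.combinations(edges, e):
        Ge = G.copy()
        for (i, j) in pattern:
            Ge[i, j] = 0                   # erased edge: term dropped from the sum
        for m_idx, u in enumerate(messages):
            if decode(u @ Ge % 2) != m_idx:
                return False
    return True

e = 0
while tolerates(e + 1):
    e += 1
print("maximum tolerable number of erased edges (this toy model):", e)
```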

    Convexity in source separation: Models, geometry, and algorithms

    Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve them efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
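    As one concrete instance of the convex demixing framework surveyed here, the sketch below separates a sparse component from a low-rank component (the classic background-subtraction style model) using naive alternating proximal steps. It illustrates the general recipe rather than any specific solver from the article; the step size, weighting and iteration count are arbitrary assumptions.

```python
# A minimal demixing sketch, assuming the sparse + low-rank model Y = L + S:
# roughly minimize ||L||_* + lam*||S||_1 while keeping L + S close to Y,
# via naive alternating proximal steps.  An illustration, not a tuned solver.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: prox of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def demix(Y, lam=None, tau=1.0, iters=200):
    lam = lam or 1.0 / np.sqrt(max(Y.shape))   # common default weighting
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        L = svt(Y - S, tau)                    # low-rank update
        S = soft(Y - L, lam * tau)             # sparse update
    return L, S

rng = np.random.default_rng(0)
L_true = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))   # rank 5
S_true = rng.standard_normal((50, 50)) * (rng.random((50, 50)) < 0.05) * 10
L_hat, S_hat = demix(L_true + S_true)
print("relative low-rank error:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```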

    Applications of graph-based codes in networks: analysis of capacity and design of improved algorithms

    The conception of turbo codes by Berrou et al. created renewed interest in modern graph-based codes. Several encouraging results that have come to light since then have reinforced the role these codes can play as potential solutions for present and future communication problems. This work focuses on both practical and theoretical aspects of graph-based codes. The thesis can be broadly categorized into three parts. The first part of the thesis focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rates is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed. The second part of the thesis aims at establishing these codes as a good tool to develop reliable, energy-efficient and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and of unicast in delay-tolerant networks are investigated. In both cases, rateless coding offers an elegant means of achieving the goals of the chosen communication protocols. The ratelessness and the randomness of the encoding process make this scheme particularly well suited to such network applications. The final part of the thesis investigates an application of a specific class of codes, called network codes, to finite-buffer wired networks. This part of the work aims at establishing a framework for the theoretical study and understanding of finite-buffer networks. The proposed framework extends existing results into an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet drop rates on various links of the network. Ph.D. Committee Chair: Fekri, Faramarz; Committee Member: Li, Ye; Committee Member: McLaughlin, Steven; Committee Member: Sivakumar, Raghupathy; Committee Member: Tetali, Prasa
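    The stopping-set sub-optimality mentioned in the first part can be illustrated with a minimal peeling decoder over the binary erasure channel. The sketch below is a generic textbook illustration, not the improved decoder proposed in the thesis: decoding proceeds only while some parity check has exactly one erased neighbour, and it stalls once the remaining erased positions contain a stopping set. The [7,4] Hamming parity-check matrix is only an illustrative choice.

```python
# A minimal peeling (message-passing) decoder for the binary erasure channel,
# illustrating why stopping sets cause the sub-optimality discussed above:
# decoding proceeds only while some check has exactly one erased neighbour,
# and stalls as soon as the remaining erasures form a stopping set.
import numpy as np

def peel(H, y):
    """y: received word with erasures marked as -1. Returns (word, stalled?)."""
    y = y.copy()
    progress = True
    while progress:
        progress = False
        for row in H:
            idx = np.nonzero(row)[0]
            erased = [j for j in idx if y[j] == -1]
            if len(erased) == 1:                    # check solves its lone erasure
                j = erased[0]
                known = [y[t] for t in idx if t != j]
                y[j] = sum(known) % 2
                progress = True
    return y, bool((y == -1).any())

# Parity-check matrix of the [7,4] Hamming code (illustrative choice).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

word = np.zeros(7, dtype=int)                       # all-zero codeword sent
y = word.copy(); y[[0, 4]] = -1                     # two erasures: recoverable
print(peel(H, y))
y = word.copy(); y[[0, 1, 3]] = -1                  # erasures covering a stopping set
print(peel(H, y))
```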

    Density Evolution and Functional Threshold for the Noisy Min-Sum Decoder

    This paper investigates the behavior of the Min-Sum decoder running on noisy devices. The aim is to evaluate the robustness of the decoder in the presence of computation noise, e.g. due to faulty logic in the processing units, which represents a new source of errors that may occur during the decoding process. To this end, we first introduce probabilistic models for the arithmetic and logic units of the finite-precision Min-Sum decoder, and then carry out the density evolution analysis of the noisy Min-Sum decoder. We show that in some particular cases, the noise introduced by the device can help the Min-Sum decoder escape from fixed-point attractors, and may actually result in an increased correction capacity with respect to the noiseless decoder. We also reveal the existence of a specific threshold phenomenon, referred to as the functional threshold. The behavior of the noisy decoder is demonstrated in the asymptotic limit of the code length, by using "noisy" density evolution equations, and is also verified in the finite-length case by Monte-Carlo simulation. Comment: 46 pages (draft version); extended version of the paper with same title, submitted to IEEE Transactions on Communication
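    A minimal sketch of the kind of decoder studied here is given below, assuming a crude fault model (each computed message is occasionally perturbed and then saturated) that merely stands in for the paper's probabilistic models of finite-precision arithmetic and logic units; the constants are arbitrary.

```python
# A minimal sketch of noisy Min-Sum node updates, assuming a crude model in
# which every arithmetic result is perturbed with some probability and then
# saturated, standing in for faults in the processing units.
import numpy as np

rng = np.random.default_rng(1)
MAX_MSG = 15.0                                     # finite-precision saturation

def faulty(x, p_fault=0.05, scale=4.0):
    """Perturb a computed message with probability p_fault, then saturate."""
    x = np.asarray(x, dtype=float)
    hit = rng.random(x.shape) < p_fault
    x = x + hit * rng.normal(0.0, scale, x.shape)
    return np.clip(x, -MAX_MSG, MAX_MSG)

def check_update(incoming):
    """Min-Sum check node: sign product and minimum magnitude, leave-one-out."""
    incoming = np.asarray(incoming, dtype=float)
    out = np.empty_like(incoming)
    for i in range(len(incoming)):
        rest = np.delete(incoming, i)
        out[i] = np.prod(np.sign(rest)) * np.min(np.abs(rest))
    return faulty(out)

def var_update(channel_llr, incoming):
    """Variable node: channel LLR plus all other incoming messages."""
    incoming = np.asarray(incoming, dtype=float)
    out = channel_llr + incoming.sum() - incoming   # leave-one-out sums
    return faulty(out)

print(check_update([2.0, -0.5, 3.0]))
print(var_update(1.2, np.array([0.7, -2.0, 0.4])))
```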

    Capacity Approaching Coding Strategies for Machine-to-Machine Communication in IoT Networks

    Radio access technologies for mobile communications are characterized by multiple access (MA) strategies. Orthogonal MA techniques were a reasonable choice for achieving good performance with single-user detection. With the tremendous growth in the number of mobile users and the new internet of things (IoT) paradigm shift, it is expected that monthly mobile data traffic worldwide will exceed 24.3 exabytes by 2019, that there will be over 100 billion IoT connections by 2025, and that the financial impact of IoT on the global economy will range from 3.9 to 11.1 trillion dollars by 2025. In light of the envisaged exponential growth and new trends, one promising solution to further enhance data rates without increasing the bandwidth is to increase the spectral efficiency of the channel. Non-orthogonal MA techniques are potential candidates for future wireless communications. The two corner points on the boundary of the capacity region of the MA channel are known to be achievable by single-user decoding followed by successive decoding (SD). Other points can also be achieved using time sharing or rate splitting. On the other hand, machine-to-machine (M2M) communication, an enabling technology for the IoT, enables massive numbers of multipurpose networked devices to exchange information among themselves with little or no human intervention. This thesis consists of three main parts. In the first part, we propose new practical encoding and joint belief propagation (BP) decoding techniques for the 2-user MA erasure channel (MAEC) that achieve any rate pair close to the boundary of the capacity region without using time sharing or rate splitting. At the encoders, the corresponding parity-check matrices are randomly built from a half-rate LDPC matrix; the joint BP decoder then employs the associated Tanner graphs of the parity-check matrices to iteratively recover the erasures in the received combined codewords. Specifically, the joint decoder performs two steps in each decoding iteration: 1) it simultaneously and independently runs the BP decoding process on each constituent sub-graph to recover some of the common erasures, and 2) it updates each sub-graph with the erasures newly recovered by the other. When the number of erasures in the received combined codewords is less than or equal to the number of parity-check constraints, the decoder may successfully decode both codewords; otherwise, it declares a decoding failure. Furthermore, we calculate the probability of decoding failure and the outage capacity. Additionally, we show how the erasure probability evolves with the number of decoding iterations and the maximum tolerable loss. Simulations show that any rate pair close to the capacity boundary is achievable without using time sharing. In the second part, we propose a new cooperative joint network and rateless coding strategy for machine-type communication (MTC) devices in multicast settings where three or more MTC devices dynamically form a cluster to disseminate messages among themselves. Specifically, in the basic cluster, three MTC devices transmit their respective messages simultaneously to the relay in the first phase. The relay broadcasts back the combined messages to all MTC devices within the basic cluster in the second phase. Since each MTC device can remove its own message, the received signal in the second phase reduces to the combined messages coming from the other two MTC devices.
    This exploits the interference caused by one message on the other and thereby improves the bandwidth efficiency. Each group of three MTC devices in proximity can form a basic cluster for exchanging messages, and the basic scheme extends to N MTC devices. Furthermore, we propose an efficient algorithm to disseminate messages among a large number of MTC devices. Moreover, we implement the proposed scheme with practical Raptor codes under two relaying schemes, namely amplify-and-forward (AF) and de-noise-and-forward (DNF). We show that, with very little processing at the relay under the DNF scheme, performance can be further enhanced. We also show that the proposed scheme achieves a near-optimal sum-rate performance. In the third part, we present a comparative study of joint channel estimation and decoding of factor graph-based codes over flat fading channels and propose a simple channel approximation scheme that performs close to the optimal technique. Specifically, when channel state information (CSI) is not available at the receiver, a simpler approach is to estimate the channel state over a group of received symbols and then use the approximated channel value, together with the received signal, to compute the log-likelihood ratio. Simulation results show that the proposed scheme exhibits about 0.4 dB loss compared to the optimal solution when perfect CSI is available at the receiver.
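    The channel-approximation idea in the third part can be sketched as follows, assuming real-valued BPSK over a block-wise flat fading channel and a few pilot symbols per block; the pilot-aided least-squares gain estimate is an illustrative stand-in rather than the thesis's exact approximation scheme.

```python
# A minimal sketch of the "approximate the channel, then compute LLRs" idea,
# assuming real-valued BPSK over a block-wise flat fading channel with a few
# known pilot symbols per block (an illustrative stand-in, not the thesis's
# exact approximation scheme).
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 0.5                                     # noise variance
h = 0.8                                          # true fading gain for this block

pilots = np.array([1.0, -1.0, 1.0, -1.0])        # known BPSK pilot symbols
data_bits = rng.integers(0, 2, 16)
data_syms = 1.0 - 2.0 * data_bits                # bit 0 -> +1, bit 1 -> -1

rx_pilots = h * pilots + rng.normal(0, np.sqrt(sigma2), pilots.shape)
rx_data = h * data_syms + rng.normal(0, np.sqrt(sigma2), data_syms.shape)

h_hat = (rx_pilots @ pilots) / (pilots @ pilots)  # least-squares gain estimate
llr = 2.0 * h_hat * rx_data / sigma2              # LLR(bit=0 vs bit=1) for BPSK
hard = (llr < 0).astype(int)                      # negative LLR -> bit 1
print("estimated gain:", round(h_hat, 3), "bit errors:", int((hard != data_bits).sum()))
```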

    A modified belief-propagation decoder for the parallel decoding of product codes

    This dissertation presents a modification to the belief-propagation algorithm that allows for the parallel decoding of product codes. The algorithm leverages the fact that each component code in the product code can be independently decoded because the codewords are encoded by independent and identically distributed (i.i.d.) processes. The algorithm maximises parallelisation by decoding all the component codes in each dimension in parallel. To facilitate this process, we developed new additional stages that are added to the belief-propagation algorithm: the codeword reliability estimation, the belief aggregation and the exit test stages. The parallel product code decoder incurs a 0.2 dB loss in decoding BER performance when compared to the best serial decoder. However, the parallel belief-propagation decoder offers a 7.26-times speedup on an eight-core processor, which is 0.91 of the theoretical maximum of eight.
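    A minimal sketch of the parallelisation structure is shown below: all component codewords along one dimension are decoded concurrently. The component decoder here is a toy single-parity-check corrector standing in for the dissertation's modified belief-propagation stages, and Python threads are used only to show the structure (they will not reproduce the multi-core speedup reported above).

```python
# A minimal sketch of the parallel product-code structure: decode every
# component codeword along one dimension concurrently.  The component
# "decoder" is a toy single-parity-check corrector, not the dissertation's
# modified belief-propagation stage.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def decode_component(llrs):
    """Toy SPC soft decoder: if parity fails, flip the least reliable bit."""
    hard = (llrs < 0).astype(int)
    if hard.sum() % 2 == 1:                       # even-parity check violated
        hard[int(np.argmin(np.abs(llrs)))] ^= 1
    return hard

def decode_rows_in_parallel(llr_matrix, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = list(pool.map(decode_component, llr_matrix))
    return np.array(rows)

rng = np.random.default_rng(3)
llr_matrix = rng.normal(1.0, 1.0, (8, 8))          # noisy LLRs for an 8x8 product code
print(decode_rows_in_parallel(llr_matrix))
```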

    Graphical models and message-passing algorithms for network-constrained decision problems

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. [201]-210). Inference problems, typically posed as the computation of summarizing statistics (e.g., marginals, modes, means, likelihoods), arise in a variety of scientific fields and engineering applications. Probabilistic graphical models provide a scalable framework for developing efficient inference methods, such as message-passing algorithms that exploit the conditional independencies encoded by the given graph. Conceptually, this framework extends naturally to a distributed network setting: by associating to each node and edge in the graph a distinct sensor and communication link, respectively, the iterative message-passing algorithms are equivalent to a sequence of purely-local computations and nearest-neighbor communications. Practically, modern sensor networks can also involve distributed resource constraints beyond those satisfied by existing message-passing algorithms, including, e.g., a fixed small number of iterations, the presence of low-rate or unreliable links, or a communication topology that differs from the probabilistic graph. The principal focus of this thesis is to augment the optimization problems from which existing message-passing algorithms are derived, explicitly taking into account that there may be decision-driven processing objectives as well as constraints or costs on available network resources. The resulting problems continue to be NP-hard, in general, but under certain conditions become amenable to an established team-theoretic relaxation technique by which a new class of efficient message-passing algorithms can be derived. From the academic perspective, this thesis marks the intersection of two lines of active research, namely approximate inference methods for graphical models and decentralized Bayesian methods for multi-sensor detection. (cont.) The respective primary contributions are new message-passing algorithms for (i) "online" measurement processing, in which global decision performance degrades gracefully as network constraints become arbitrarily severe, and for (ii) "offline" strategy optimization, which remains tractable in a larger class of detection objectives and network constraints than previously considered. From the engineering perspective, the analysis and results of this thesis both expose fundamental issues in distributed sensor systems and advance the development of so-called "self-organizing fusion-layer" protocols compatible with emerging concepts in ad-hoc wireless networking. by O. Patrick Kreidl. Ph.D.
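    The equivalence between message-passing and purely-local computation can be illustrated with a minimal sum-product example, not taken from the thesis: on a three-variable chain with made-up pairwise potentials, the marginal of the middle variable obtained from two locally computed messages matches brute-force enumeration.

```python
# A minimal sketch of the local message-passing idea: on a chain x1 - x2 - x3
# with pairwise potentials, the marginal of x2 follows from two messages that
# each use only a neighbour's local table, and it matches brute force.
# The potential tables are made up for illustration.
import numpy as np

psi12 = np.array([[1.0, 0.5],    # psi12[x1, x2]
                  [0.5, 2.0]])
psi23 = np.array([[2.0, 1.0],    # psi23[x2, x3]
                  [0.3, 1.5]])

m1_to_2 = psi12.sum(axis=0)                        # message from x1 to x2 (sum over x1)
m3_to_2 = psi23.sum(axis=1)                        # message from x3 to x2 (sum over x3)
marg_bp = m1_to_2 * m3_to_2
marg_bp /= marg_bp.sum()

# Brute-force check: p(x2) is proportional to the sum over x1, x3 of psi12 * psi23.
joint = psi12[:, :, None] * psi23[None, :, :]      # shape (x1, x2, x3)
marg_bf = joint.sum(axis=(0, 2))
marg_bf /= marg_bf.sum()
print(marg_bp, marg_bf)                            # the two marginals agree
```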

    Unreliable and resource-constrained decoding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student submitted PDF version of thesis. Includes bibliographical references (p. 185-213). Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established. For decoding repetition codes using consensus decoding circuits, the optimal tradeoff between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation to the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated. Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination. The effect of noise on iterative message-passing decoders for low-density parity check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided. Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance. For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line is proven. The capacity-power function is computed for several channels; it is non-increasing and concave. by Lav R. Varshney. Ph.D.
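    The density-evolution analysis for noisy decoders can be sketched with a toy model that is not the thesis's exact formulation: Gallager-A decoding of a (3,6)-regular LDPC code on a binary symmetric channel, with each passed message independently flipped with probability alpha as a crude stand-in for decoder noise. The printout shows how the residual error probability degrades as the decoder noise grows.

```python
# A minimal density-evolution sketch in the spirit of the noisy-decoder
# analysis above: Gallager-A decoding of a (3,6)-regular LDPC code on a BSC,
# with each passed message independently flipped with probability alpha as a
# crude stand-in for decoder noise.  A toy model, not the thesis's exact one.
def flip(x, alpha):
    """A message in error w.p. x is additionally flipped w.p. alpha."""
    return x * (1 - alpha) + (1 - x) * alpha

def final_error(p0, alpha, dv=3, dc=6, iters=200):
    """Density-evolution message error probability after `iters` iterations."""
    x = p0
    for _ in range(iters):
        xn = flip(x, alpha)                              # noisy variable-to-check messages
        q = (1 - (1 - 2 * xn) ** (dc - 1)) / 2           # check-node output error probability
        qn = flip(q, alpha)                              # noisy check-to-variable messages
        x = p0 * (1 - (1 - qn) ** (dv - 1)) + (1 - p0) * qn ** (dv - 1)
    return x

for alpha in (0.0, 0.001, 0.005):                        # decoder-noise levels
    row = ", ".join(f"p0={p0}: {final_error(p0, alpha):.2e}" for p0 in (0.02, 0.03, 0.04))
    print(f"alpha={alpha}:  {row}")
```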