
    Toward the implementation of analog LDPC decoders for long codewords

    Error control codes are used in virtually every digital communication system. Traditionally, decoders have been implemented digitally. Analog decoders have recently been shown to have the potential to outperform digital decoders in area and power/speed ratio, and analog designers have attempted to fully understand and exploit this potential for large decoders; nevertheless, large codes are still generally implemented with digital circuits. This thesis investigates a number of aspects of analog decoder implementation with the aim of enabling the design of large analog decoders. We study and modify the analog circuits used in the sum-product decoding algorithm for implementation in a 90 nm CMOS technology. We apply a current-mode approach at the input nodes of these circuits and show through simulations that the power/speed ratio is improved. To study decoder dynamics, we model an LDPC decoder in MATLAB's Simulink and linearize it about an initial state taken as its solution point; the challenges associated with decoder linearization are discussed. We also design and implement a chip comprising sum-product circuits of different configurations and sizes to study the effect of mismatch on output accuracy. Unfortunately, testing of the chip fails because of errors in either the packaging process or the fabrication.
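
    As background for the sum-product circuits mentioned above, the sketch below shows the standard sum-product (belief-propagation) message updates in the log-likelihood-ratio domain for a small parity-check matrix. It is a minimal digital reference implementation, not the thesis's analog design: the function name, the flooding schedule, and the example matrix and channel LLRs are illustrative assumptions.

    import numpy as np

    def sum_product_decode(H, llr_ch, max_iters=50):
        """Flooding sum-product (belief propagation) decoding in the LLR domain.

        H      : (m, n) binary parity-check matrix
        llr_ch : length-n channel log-likelihood ratios (positive favours bit 0)
        returns a length-n hard-decision codeword estimate
        """
        m, n = H.shape
        msg_vc = H * llr_ch.astype(float)   # variable-to-check messages, one per edge
        msg_cv = np.zeros((m, n))           # check-to-variable messages

        for _ in range(max_iters):
            # Check-node update: tanh rule with a leave-one-out product per row.
            t = np.tanh(np.clip(msg_vc, -30.0, 30.0) / 2.0)
            for i in range(m):
                idx = np.flatnonzero(H[i])
                for j in idx:
                    msg_cv[i, j] = 2.0 * np.arctanh(np.prod(t[i, idx[idx != j]]))

            # Variable-node update: channel LLR plus all incoming messages but one.
            total = llr_ch + msg_cv.sum(axis=0)
            hard = (total < 0).astype(int)
            if not np.any((H @ hard) % 2):  # all parity checks satisfied
                return hard
            for i in range(m):
                idx = np.flatnonzero(H[i])
                for j in idx:
                    msg_vc[i, j] = total[j] - msg_cv[i, j]

        return (llr_ch + msg_cv.sum(axis=0) < 0).astype(int)

    # Hypothetical example: a noisy all-zero word with one unreliable position.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    llr = np.array([2.5, -0.8, 3.1, 1.9, 2.2, 0.4, 2.8])
    print(sum_product_decode(H, llr))   # recovers the all-zero codeword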

    Unreliable and resource-constrained decoding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).

    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated: broadly speaking, communication using decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.

    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.

    Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied within this framework. The ratio between the costs of false alarms and missed detections is shown to fundamentally affect the essential nature of discrimination.

    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.

    Limits on communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance.

    For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.

    by Lav R. Varshney. Ph.D.
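
    The density evolution analysis referenced above tracks the probability that a message is still in error as decoding iterations proceed. As a noiseless-decoder baseline (the thesis extends this to decoders whose own gates are noisy), the sketch below runs standard density evolution for a regular (dv, dc) LDPC ensemble on the binary erasure channel and locates the decoding threshold by bisection; the function names and the (3, 6) example are illustrative choices, not taken from the thesis.

    def bec_density_evolution(eps, dv, dc, iters=5000):
        """Residual erasure probability of a regular (dv, dc) LDPC ensemble on
        BEC(eps) after `iters` rounds of (noiseless) message passing."""
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        return x

    def bec_threshold(dv, dc, tol=1e-5):
        """Bisection estimate of the largest channel erasure probability for
        which density evolution drives the erasure rate to (numerically) zero."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if bec_density_evolution(mid, dv, dc) < 1e-9:
                lo = mid
            else:
                hi = mid
        return lo

    # The regular (3, 6) ensemble's BEC threshold is known to be about 0.4294.
    print(f"(3,6) BEC threshold estimate: {bec_threshold(3, 6):.4f}")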
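
    The capacity-power function mentioned in the last result can be illustrated with a deliberately simple example. Assuming a noiseless binary channel in which transmitting a 1 delivers one unit of received energy (an assumption of this sketch, not a description of the channels treated in the thesis), the maximum information rate under a minimum-energy constraint B is the largest binary entropy H(p) achievable with P(X = 1) = p >= B, which is non-increasing and concave in B, consistent with the properties stated above.

    import numpy as np

    def binary_entropy(p):
        """H2(p) in bits, with the 0 * log 0 = 0 convention."""
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

    def capacity_power(B, grid=200001):
        """Toy capacity-power function: maximise H2(p) over input
        distributions whose average delivered energy P(X = 1) = p is >= B."""
        p = np.linspace(0.0, 1.0, grid)
        return float(binary_entropy(p[p >= B]).max())

    for B in (0.0, 0.5, 0.7, 0.9):
        print(B, round(capacity_power(B), 3))
    # Prints a non-increasing, concave sequence: 1.0, 1.0, 0.881, 0.469.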