6 research outputs found

    Turbo space-time coded modulation : principle and performance analysis

    A breakthrough in coding was achieved with the invention of turbo codes. Turbo codes approach the Shannon capacity by displaying the properties of long random codes while still allowing efficient decoding. Coding alone, however, cannot fully address the problem of the multipath fading channel. Recent advances in information theory have revolutionized the traditional view of the multipath channel as an impairment: new results show that large gains in capacity can be achieved through the use of multiple antennas at the transmitter and the receiver. To take advantage of these results, it is necessary to devise methods that allow communication systems to operate close to the predicted capacity. One such recently invented method is space-time coding, which provides both coding gain and diversity advantage.

    In this dissertation, a new class of codes is proposed that extends the concept of turbo coding to include space-time encoders as constituent building blocks of turbo codes. The codes are referred to as turbo space-time coded modulation (turbo-STCM). The motivation behind the turbo-STCM concept is to fuse the important properties of turbo and space-time codes into a unified design framework. A turbo-STCM encoder is proposed that consists of two space-time codes in recursive systematic form concatenated in parallel. An iterative symbol-by-symbol maximum a posteriori (MAP) algorithm operating in the log domain is developed for decoding turbo-STCM. The decoder employs two a posteriori probability (APP) computing modules concatenated in parallel, one module for each constituent code.

    The analysis of turbo-STCM is demonstrated through simulations and theoretical closed-form expressions. Simulation results are provided for 4-PSK and 8-PSK schemes over the Rayleigh block-fading channel. It is shown that the turbo-STCM scheme features full diversity and full coding rate, and that significant performance gains are obtained over conventional space-time codes of similar complexity. The analytical union bound on the bit error probability is derived for turbo-STCM over the additive white Gaussian noise (AWGN) and Rayleigh block-fading channels; the bound makes it possible to express the performance of turbo-STCM in terms of the properties of the constituent space-time codes. The union bound is demonstrated for 4-PSK and 8-PSK turbo-STCM with two transmit antennas and one or two receive antennas. Information-theoretic bounds such as the Shannon capacity, cutoff rate, outage capacity, and the Fano bound are computed for multiantenna systems over the AWGN and fading channels. These bounds are subsequently used as benchmarks for demonstrating the performance of turbo-STCM.
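    The log-domain MAP decoding described above avoids numerical underflow by combining likelihoods with the Jacobian logarithm (the max-star operation) instead of sums of exponentials. As a minimal illustrative sketch of that standard building block (a generic log-MAP primitive, not the dissertation's actual decoder), in Python:

        import math

        def max_star(a: float, b: float) -> float:
            """Jacobian logarithm: computes log(exp(a) + exp(b)) stably.
            Log-MAP decoders use this to combine path and branch metrics
            while staying entirely in the log domain."""
            return max(a, b) + math.log1p(math.exp(-abs(a - b)))

        # Exact value: log(e**1.2 + e**0.3) = 1.5412...
        print(max_star(1.2, 0.3))

    Approximating max_star(a, b) by max(a, b) alone gives the lower-complexity max-log-MAP variant at a small performance cost.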

    Unreliable and resource-constrained decoding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).

    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources such as material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated; broadly speaking, these concern decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.

    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.

    Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied within this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination.

    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.

    Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at a fixed maximum message error probability are determined. System state feedback is shown not to improve performance.

    For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.

    by Lav R. Varshney. Ph.D.
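    Density evolution, used above to analyze noisy message-passing decoders, tracks how the message error probability evolves across decoding iterations and exhibits a sharp decoding threshold. As a minimal sketch of the standard noiseless baseline recursion for a regular LDPC ensemble on the binary erasure channel (the thesis derives the noisy-decoder generalization, which this sketch does not attempt):

        def bec_density_evolution(eps: float, dv: int, dc: int, iters: int = 500) -> float:
            """Erasure-probability recursion for a regular (dv, dc) LDPC
            ensemble on the binary erasure channel with erasure rate eps:
            x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)."""
            x = eps
            for _ in range(iters):
                x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            return x

        # The (3,6) ensemble threshold is roughly eps = 0.4294:
        print(bec_density_evolution(0.42, dv=3, dc=6))  # converges toward 0
        print(bec_density_evolution(0.44, dv=3, dc=6))  # stays bounded away from 0

    Decoder noise modifies this recursion, typically leaving a residual error floor; the thesis characterizes when arbitrarily small error probability nonetheless remains achievable, as the abstract notes.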

    Sparse graph codes for compression, sensing, and secrecy

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the student PDF version of the thesis. Includes bibliographical references (p. 201-212).

    Sparse graph codes were first introduced by Gallager over 40 years ago. Over the last two decades, such codes have been the subject of intense research, and capacity-approaching sparse graph codes with low-complexity encoding and decoding algorithms have been designed for many channels. Motivated by the success of sparse graph codes for channel coding, we explore the use of sparse graph codes for four other problems related to compression, sensing, and security.

    First, we construct locally encodable and decodable source codes for a simple class of sources. Local encodability refers to the property that when the original source data changes slightly, the compression produced by the source code can be updated easily. Local decodability refers to the property that a single source symbol can be recovered without having to decode the entire source block.

    Second, we analyze a simple message-passing algorithm for compressed sensing recovery, and show that our algorithm provides a nontrivial ℓ1/ℓ1 guarantee. We also show that very sparse matrices, and matrices whose entries must be either 0 or 1, have poor performance with respect to the restricted isometry property for the ℓ2 norm.

    Third, we analyze the performance of a special class of sparse graph codes, LDPC codes, for the problem of quantizing a uniformly random bit string under Hamming distortion. We show that LDPC codes can come arbitrarily close to the rate-distortion bound using an optimal quantizer. This is a special case of a general result showing a duality between lossy source coding and channel coding: if we ignore computational complexity, then good channel codes are automatically good lossy source codes. We also prove a lower bound on the average degree of vertices in an LDPC code as a function of the gap to the rate-distortion bound.

    Finally, we construct efficient, capacity-achieving codes for the wiretap channel, a model of communication that allows one to provide information-theoretic, rather than computational, security guarantees. Our main results include the introduction of a new security criterion that is an information-theoretic analog of semantic security, the construction of capacity-achieving codes possessing strong security with nearly linear-time encoding and decoding algorithms for any degraded wiretap channel, and the construction of capacity-achieving codes possessing semantic security with linear-time encoding and decoding algorithms for erasure wiretap channels.

    Our analysis relies on a relatively small set of tools. One tool is density evolution, a powerful method for analyzing the behavior of message-passing algorithms on long, random sparse graph codes. Another concept we use extensively is the notion of an expander graph. Expander graphs have powerful properties that allow us to prove adversarial, rather than probabilistic, guarantees for message-passing algorithms. Expander graphs are also useful in the context of the wiretap channel because they provide a method for constructing randomness extractors. Finally, we use several well-known isoperimetric inequalities (Harper's inequality, Azuma's inequality, and the Gaussian isoperimetric inequality) in our analysis of the duality between lossy source coding and channel coding.

    by Venkat Bala Chandar. Ph.D.
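    The ℓ1/ℓ1 guarantee referenced above bounds the recovery error in the ℓ1 norm by the signal's distance from its best k-sparse approximation. In its generic form (a standard statement of this type of guarantee; the constant and conditions are illustrative, not the thesis's specific result):

        % \hat{x}: estimate returned by the recovery algorithm
        % x_k: best k-term (sparse) approximation of the true signal x
        % C: a constant independent of x
        \| x - \hat{x} \|_1 \le C \, \| x - x_k \|_1

    In particular, any exactly k-sparse signal (where x = x_k) is recovered exactly.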

    1976-77 General Catalog

    This catalog became effective with summer quarter 1976.

    1977-78 General Catalog

    This catalog became effective with summer quarter 1977.

    1981-82 General Catalog

    This catalog became effective with summer quarter 1981.