
    Optimal information storage : nonsequential sources and neural channels

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. MIT Institute Archives copy: pages 101-163 bound in reverse order. Includes bibliographical references (p. 141-163).
    Information storage and retrieval systems are communication systems from the present to the future and fall naturally into the framework of information theory. The goal of information storage is to preserve as much signal fidelity as possible under resource constraints. The information storage theorem delineates which pairs of average fidelity and average resource cost are achievable and which are not. Moreover, observable properties of optimal information storage systems, and the robustness of optimal systems to parameter mismatch, may be determined. In this thesis, we study the physical properties of a neural information storage channel and also the fundamental bounds on the storage of sources that have nonsequential semantics. Experimental investigations have revealed that synapses in the mammalian brain possess unexpected properties. Adopting the optimization approach to biology, we cast the brain as an optimal information storage system and propose a theoretical framework that accounts for many of these physical properties. Building on previous experimental and theoretical work, we use volume as a limited resource and utilize the empirical relationship between volume and synaptic weight. Our scientific hypotheses are based on maximizing information storage capacity per unit cost. We use properties of the capacity-cost function, ε-capacity-cost approximations, and measure matching to develop optimization principles. We find that capacity-achieving input distributions not only explain existing experimental measurements but also make non-trivial predictions about the physical structure of the brain.
    Numerous information storage applications have semantics such that the order of source elements is irrelevant, so the source sequence can be treated as a multiset. We formulate fidelity criteria that consider asymptotically large multisets and give conclusive, but trivialized, results in rate-distortion theory. For fidelity criteria that consider fixed-size multisets, we give some conclusive results in high-rate quantization theory, low-rate quantization, and rate-distortion theory. We also provide bounds on the rate-distortion function for other nonsequential fidelity criteria. System resource consumption can be significantly reduced by recognizing the correct invariance properties and semantics of the information storage task at hand.
    by Lav R. Varshney. S.M.
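The rate saving from treating an order-irrelevant source as a multiset can be made concrete: a length-n sequence has at most n! orderings, so discarding order saves up to log2(n!) bits. A minimal sketch of this counting argument (illustrative only, not the thesis's fidelity-criterion formulation):

```python
import math
from collections import Counter

def sequence_bits(seq, alphabet_size):
    # Bits to store the sequence verbatim, symbol by symbol.
    return len(seq) * math.log2(alphabet_size)

def multiset_order_savings(seq):
    # Bits saved by discarding order: log2 of the number of distinct
    # permutations of the sequence (the multinomial coefficient).
    n = len(seq)
    counts = Counter(seq)
    distinct_perms = math.factorial(n)
    for c in counts.values():
        distinct_perms //= math.factorial(c)
    return math.log2(distinct_perms)
```

For example, "aab" has 3!/2! = 3 distinct orderings, so about 1.58 of its 3 stored bits carry only ordering information; for long sequences with few repeats the savings approach log2(n!) ≈ n log2(n) bits.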

    Comparison of Channels: Criteria for Domination by a Symmetric Channel

    Full text link
    This paper studies the basic question of whether a given channel V can be dominated (in the precise sense of being more noisy) by a q-ary symmetric channel. The concept of a "less noisy" relation between channels originated in network information theory (broadcast channels) and is defined in terms of mutual information or Kullback-Leibler divergence. We provide an equivalent characterization in terms of χ²-divergence. Furthermore, we develop a simple criterion for domination by a q-ary symmetric channel in terms of the minimum entry of the stochastic matrix defining the channel V. The criterion is strengthened for the special case of additive-noise channels over finite Abelian groups. Finally, it is shown that domination by a symmetric channel implies (via comparison of Dirichlet forms) a logarithmic Sobolev inequality for the original channel.
    Comment: 31 pages, 2 figures. Presented at the 2017 IEEE International Symposium on Information Theory (ISIT).
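The two objects in this comparison are easy to write down. A small sketch showing the q-ary symmetric channel's stochastic matrix and the χ²-divergence used in the paper's characterization (the domination criterion itself involves more than these two ingredients):

```python
def q_ary_symmetric(q, delta):
    # Stochastic matrix of the q-ary symmetric channel: each input is
    # kept with probability 1 - delta and flipped uniformly otherwise.
    off = delta / (q - 1)
    return [[1 - delta if i == j else off for j in range(q)]
            for i in range(q)]

def chi2_divergence(p, r):
    # chi^2(p || r) = sum_i (p_i - r_i)^2 / r_i, assuming r_i > 0.
    return sum((pi - ri) ** 2 / ri for pi, ri in zip(p, r))
```

The χ²-divergence locally dominates KL divergence, which is one reason it yields tractable equivalent characterizations of "less noisy".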

    Information-Theoretic Foundations of Mismatched Decoding

    Full text link
    Shannon's channel coding theorem characterizes the maximal rate of information that can be reliably transmitted over a communication channel when optimal encoding and decoding strategies are used. In many scenarios, however, practical considerations such as channel uncertainty and implementation constraints rule out the use of an optimal decoder. The mismatched decoding problem addresses such scenarios by considering the case that the decoder cannot be optimized, but is instead fixed as part of the problem statement. This problem is not only of direct interest in its own right, but also has close connections with other long-standing theoretical problems in information theory. In this monograph, we survey both classical literature and recent developments on the mismatched decoding problem, with an emphasis on achievable random-coding rates for memoryless channels. We present two widely considered achievable rates known as the generalized mutual information (GMI) and the LM rate, and give an overview of their derivations and properties. In addition, we survey several improved rates via multi-user coding techniques, as well as recent developments and challenges in establishing upper bounds on the mismatch capacity, and an analogous mismatched encoding problem in rate-distortion theory. Throughout the monograph, we highlight a variety of applications and connections with other prominent information theory problems.
    Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3).
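The GMI mentioned above can be evaluated numerically for a discrete memoryless channel. A sketch, assuming a strictly positive decoding metric and optimizing the free parameter s by a simple grid search; with a matched metric (metric equal to the channel law) the GMI reduces to the mutual information:

```python
import math

def gmi(P, W, metric, s_grid=None):
    # Generalized mutual information (bits/use) of a discrete memoryless
    # channel W[x][y] with input distribution P and a strictly positive
    # decoding metric metric[x][y], maximized over s > 0 by grid search.
    if s_grid is None:
        s_grid = [0.1 * k for k in range(1, 51)]
    best = 0.0
    nx, ny = len(P), len(W[0])
    for s in s_grid:
        val = 0.0
        for x in range(nx):
            for y in range(ny):
                if P[x] == 0.0 or W[x][y] == 0.0:
                    continue
                denom = sum(P[xp] * metric[xp][y] ** s for xp in range(nx))
                val += P[x] * W[x][y] * math.log2(metric[x][y] ** s / denom)
        best = max(best, val)
    return best
```

As a sanity check, matched decoding on a BSC(0.1) with uniform inputs recovers the mutual information 1 − h(0.1) ≈ 0.531 bits, with s = 1 as the maximizer.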

    Polarization and Channel Ordering: Characterizations and Topological Structures

    Get PDF
    Information theory is the field in which we study the fundamental limits of communication. Shannon proved in 1948 that there exists a maximum rate, called the capacity, at which we can reliably communicate information through a given channel. However, Shannon did not provide an explicit construction of a practical coding scheme that achieves the capacity. Polar coding, invented by Arikan, is the first low-complexity coding technique that achieves the capacity of binary-input memoryless symmetric channels. The construction of these codes is based on a phenomenon called polarization. The study of polar codes and their generalization to arbitrary channels is the subject of polarization theory, a subfield of information and coding theory.
    This thesis consists of two parts. In the first part, we provide solutions to several open problems in polarization theory. The first open problem that we consider is to determine the binary operations that always lead to polarization when they are used in Arikan-style constructions. In order to solve this problem, we develop an ergodic theory for binary operations. This theory is used to provide a necessary and sufficient condition that characterizes the polarizing binary operations, both in the single-user and the multiple-access settings. We prove that the exponent of a polarizing binary operation cannot exceed 1/2. Furthermore, we show that the exponent of an arbitrary quasigroup operation is exactly 1/2, which implies that quasigroup operations are among the best polarizing binary operations. One drawback of polarization in the multiple-access setting is that it sometimes induces a loss in the symmetric capacity region of a given multiple-access channel (MAC). An open problem in MAC polarization theory is to determine all the MACs that do not lose any part of their capacity region by polarization. Using Fourier analysis, we solve this problem by providing a single-letter necessary and sufficient condition that characterizes all such MACs in the general setting where there is an arbitrary number of users and each user uses an arbitrary Abelian group operation on their input alphabet. We also study the polarization of classical-quantum (cq) channels. The input alphabet is endowed with an arbitrary Abelian group operation, and an Arikan-style transformation is applied using this operation. We show that as the number of polarization steps becomes large, the synthetic cq-channels polarize to deterministic homomorphism channels that project their input onto a quotient group of the input alphabet. This result is used to construct polar codes for arbitrary cq-channels and arbitrary classical-quantum multiple-access channels (cq-MACs).
    In the second part of this thesis, we investigate several problems related to three orderings of communication channels: degradedness, input-degradedness, and the Shannon ordering. We provide several characterizations of input-degradedness and the Shannon ordering. Two channels are said to be equivalent if each is degraded from the other. Input-equivalence and Shannon-equivalence between channels are defined similarly. We construct and study several topologies on the quotients of the spaces of discrete memoryless channels (DMCs) by the equivalence, input-equivalence, and Shannon-equivalence relations. Finally, we prove the continuity of several channel parameters and operations under various DMC topologies.
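For the binary erasure channel, Arikan's transform has a closed form: one polarization step turns a BEC(ε) into a worse BEC(2ε − ε²) and a better BEC(ε²). A sketch of the resulting recursion (the classical special case; the thesis treats far more general operations and channels):

```python
def polarize_bec(eps, levels):
    # Erasure probabilities of the 2**levels synthetic channels obtained
    # by recursively applying Arikan's transform to a BEC(eps).
    rates = [eps]
    for _ in range(levels):
        nxt = []
        for e in rates:
            nxt.append(2 * e - e * e)  # "minus" (worse) synthetic channel
            nxt.append(e * e)          # "plus" (better) synthetic channel
        rates = nxt
    return rates
```

The mean erasure probability is preserved at every step (it is a martingale), while the individual values drift toward 0 or 1; the fraction near 0 approaches the capacity 1 − ε.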

    Performance Analysis of Block Codes over Finite-state Channels in Delay-sensitive Communications

    Get PDF
    As the mobile application landscape expands, wireless networks are tasked with supporting different connection profiles, including real-time traffic and delay-sensitive communications. Among the many ensuing engineering challenges is the need to better understand the fundamental limits of forward error correction in non-asymptotic regimes. This dissertation seeks to characterize the performance of block codes over finite-state channels with memory and to evaluate their queueing performance under different encoding/decoding schemes. In particular, a fading formulation is considered in which a discrete channel with correlation over time introduces errors. For carefully selected channel models and arrival processes, a tractable Markov structure composed of queue length and channel state is identified. This facilitates the analysis of the stationary behavior of the system, leading to evaluation criteria such as bounds on the probability of the queue exceeding a threshold. Specifically, this dissertation focuses on system models with scalable arrival profiles based on Poisson processes and finite-state memory channels. These assumptions permit the rigorous comparison of system performance for codes with arbitrary block lengths and code rates. Based on this characterization, it is possible to optimize code parameters for delay-sensitive applications over various channels. Random codes and BCH codes are then employed as means to study the relationship between code-rate selection and the queueing performance of point-to-point data links. The introduced methodology offers a new perspective on joint queueing-coding analysis for finite-state channels and is supported by numerical simulations. Furthermore, classical results from information theory are revisited in the context of channels with rare transitions, and bounds on the probabilities of decoding failure are derived for random codes.
    An analysis framework is presented in which channel dependencies within and across codewords are preserved. The results are subsequently integrated into a queueing formulation. It is shown that, for the current formulation, a performance analysis based on upper bounds provides a good estimate of both the system performance and the optimum code parameters. Overall, this study offers new insights into the impact of channel correlation on the performance of delay-aware communications and provides novel guidelines for selecting optimum code rates and block lengths.
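The flavor of the correlated-error setting can be illustrated with a two-state Markov (Gilbert-Elliott-style) channel and a bounded-distance code that corrects up to t errors per block. All parameters below are hypothetical, and the model is far simpler than the dissertation's joint queue-channel Markov chain:

```python
import random

def block_failure_rate(n, t, p_good, p_bad, stay, blocks, seed=0):
    # Fraction of n-symbol blocks with more than t symbol errors when
    # errors come from a two-state Markov channel: symbol error
    # probability p_good in the good state, p_bad in the bad state,
    # and probability `stay` of remaining in the current state.
    rng = random.Random(seed)
    state_bad = False
    failures = 0
    for _ in range(blocks):
        errors = 0
        for _ in range(n):
            if rng.random() > stay:
                state_bad = not state_bad
            p = p_bad if state_bad else p_good
            if rng.random() < p:
                errors += 1
        if errors > t:
            failures += 1
    return failures / blocks
```

Because errors cluster in bad-state bursts, the failure rate for a given average error probability depends strongly on `stay`, which is exactly why block-length and code-rate selection interact with channel memory.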

    Unreliable and resource-constrained decoding

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).
    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.
    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated. Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination.
    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided. Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at a fixed maximum message error probability are determined. System state feedback is shown not to improve performance. For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.
    by Lav R. Varshney. Ph.D.
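Density evolution in the noiseless-decoder baseline case has a particularly simple form on the BEC: for a regular (dv, dc) LDPC ensemble, the message erasure probability obeys x ← ε(1 − (1 − x)^(dc−1))^(dv−1). A sketch that locates the decoding threshold by bisection (the noisy-decoder equations studied in the thesis modify this recursion; this is only the classical starting point):

```python
def bec_ldpc_threshold(dv, dc, iters=2000, tol=1e-10):
    # Noiseless-decoder density evolution for a regular (dv, dc) LDPC
    # ensemble on the BEC: iterate
    #   x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)
    # and report, via bisection, the largest channel erasure rate eps
    # for which the iteration is driven to zero.
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3,6) ensemble this recovers the well-known threshold near ε ≈ 0.429, comfortably below the Shannon limit of 0.5 for rate-1/2 codes; noisy decoding would shift such thresholds downward, smoothly in the decoder noise level.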

    Joint source and channel coding

    Get PDF