
    Euclidean network information theory

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 121-123).

    Many network information theory problems face a similar difficulty of single-letterization. We argue that this is due to the lack of a geometric structure on the space of probability distributions. In this thesis, we develop such a structure by assuming that the distributions of interest are all close to each other. Under this assumption, the Kullback-Leibler (K-L) divergence reduces to the squared Euclidean metric in a Euclidean space. In addition, we construct notions of coordinates and inner products, which facilitate solving communication problems. We present applications of this approach to point-to-point channels, general broadcast channels (BC), multiple access channels (MAC) with common sources, interference channels, and multi-hop layered communication networks with or without feedback. With this approach, information theory problems such as single-letterization can be reduced to linear algebra problems.

    By solving these linear algebra problems, we show that for general broadcast channels, transmitting the common message to the receivers can be formulated as a trade-off between linear systems, and we provide an example that visualizes this trade-off geometrically. For the MAC with common sources, we observe a coherent combining gain due to cooperation between transmitters, and this gain can be quantified by applying our technique. The developments for broadcast and multiple access channels further suggest a trade-off, when optimizing network throughput, between generating common messages for multiple users and transmitting them as common sources to exploit the coherent combining gain. To study the structure of this trade-off and understand its role in optimizing network throughput, we use our local approach to construct a deterministic model that captures the critical channel parameters and models the network well. With this deterministic model, we analyze the optimal throughput of multi-hop layered networks and illustrate what kinds of common messages should be generated to achieve it. Our results provide insight into how users in a network should cooperate to transmit information efficiently.

    by Shao-Lun Huang. Ph.D.
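
    The local approximation behind the geometric structure described above can be sketched as a standard second-order expansion of the K-L divergence (a sketch in generic notation, not an excerpt from the thesis). For a perturbed distribution P = Q + εJ with Σₓ J(x) = 0:

        % Second-order (local) expansion of the K-L divergence around Q,
        % assuming P = Q + \epsilon J with \sum_x J(x) = 0.
        \[
          D(P \,\|\, Q)
          = \sum_x P(x) \log \frac{P(x)}{Q(x)}
          = \frac{\epsilon^2}{2} \sum_x \frac{J(x)^2}{Q(x)} + o(\epsilon^2)
          = \frac{1}{2} \|\phi\|_2^2 + o(\epsilon^2),
          \quad \text{where } \phi(x) \triangleq \frac{\epsilon\, J(x)}{\sqrt{Q(x)}}.
        \]

    In the weighted coordinates φ, divergence balls become Euclidean balls, which is what lets the single-letterization questions above be recast as linear algebra.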

    Function Computation over Networks: Efficient Information Processing for Cache and Sensor Applications

    This thesis looks at efficient information processing for two network applications: content delivery with caching, and the collection of summary statistics in wireless sensor networks. Both applications are studied under the same paradigm: function computation over networks, where distributed source nodes cooperatively communicate some functions of individual observations to one or multiple destinations. One approach that always works is to convey all observations and let the destinations compute the desired functions themselves. However, if the available communication resources are limited, then revealing less unwanted information becomes critical. Centered on this goal, this thesis develops new coding schemes using information-theoretic tools.

    The first part of this thesis focuses on content delivery with caching. Caching is a technique that facilitates the reallocation of communication resources in order to avoid network congestion during peak-traffic times. An information-theoretic model, termed sequential coding for computing, is proposed to analyze the potential gains offered by caching. For the single-user case, the proposed framework succeeds in verifying the optimality of some simple caching strategies and in providing guidance towards optimal caching strategies. For the two-user case, five representative subproblems are considered, which draw connections with classic source coding problems including the Gray-Wyner system, successive refinement, and the Kaspi/Heegard-Berger problem. Afterwards, the problem of distributed computing with successive refinement is considered. It is shown that if full data recovery is required in the second stage of successive refinement, then any information acquired in the first stage will be useful later in the second stage.

    The second part of this thesis looks at the collection of summary statistics in wireless sensor networks. Summary statistics include the arithmetic mean, median, standard deviation, etc., and belong to the class of symmetric functions. This thesis develops arithmetic computation coding in order to efficiently perform in-network computation of weighted arithmetic sums and symmetric functions. The developed arithmetic computation coding increases the achievable computation rate from $\Theta((\log L)/L)$ to $\Theta(1/\log L)$, where $L$ is the number of sensors. Finally, this thesis demonstrates that interaction among sensors is beneficial for the computation of type-threshold functions, e.g., the maximum and the indicator function, and that a non-vanishing computation rate is achievable.
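
    As a rough numerical illustration of this rate improvement (a sketch only: the constants hidden in the Θ(·) bounds are unknown, so they are set to 1 here and only the growth orders are meaningful):

        # Compare the two computation-rate scalings as the number of
        # sensors L grows; constants in the Theta bounds are set to 1.
        import math

        for L in (16, 256, 4096, 65536):
            old = math.log(L) / L    # Theta((log L)/L): vanishes almost linearly in L
            new = 1.0 / math.log(L)  # Theta(1/log L): decays only logarithmically
            print(f"L={L:>6}   (log L)/L = {old:.5f}   1/log L = {new:.5f}")

    The point is that under the new scheme the per-sensor rate decays only logarithmically in the network size, rather than nearly linearly.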

    When all information is not created equal

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 191-196).

    Following Shannon's landmark paper, the classical theoretical framework for communication is based on the simplifying assumption that all information is equally important, and thus aims to provide uniform protection to all information. However, this homogeneous view of information is not suitable for a variety of modern-day communication scenarios such as wireless and sensor networks, video transmission, interactive systems, and control applications. For example, an emergency alarm from a sensor network needs more protection than other transmitted information. Similarly, the coarse resolution of an image needs better protection than its finer details. For such heterogeneous information, if providing a uniformly high protection level to all parts of the information is infeasible, it is desirable to provide different protection levels based on the importance of those parts.

    The main objective of this thesis is to extend classical information theory to address this heterogeneous nature of information. Many of the theoretical tools needed are fundamentally different from those of the conventional homogeneous setting. One key issue is that bits are no longer a sufficient measure of information. We develop a general framework for understanding the fundamental limits of transmitting such information, calculate those limits, and provide optimal architectures for achieving them. Our analysis shows that even without sacrificing data rate relative to channel capacity, some crucial parts of the information can be protected with exponential reliability. This research challenges the notion that a set of homogeneous bits should necessarily be viewed as a universal interface to the physical layer, which potentially impacts the design of network architectures.

    This thesis also develops two novel approaches for simplifying such difficult problems in information theory. Our formulations are based on ideas from graphical models and Euclidean geometry and provide canonical examples for network information theory. They offer fresh insights into previously intractable problems and generalize previous related results.

    by Shashibhushan Prataprao Borade. Ph.D.
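
    As a toy illustration of unequal protection levels (a minimal sketch of the general idea, not the coding scheme developed in the thesis): over a binary symmetric channel, crucial bits can be protected with a short repetition code while ordinary bits are sent uncoded, making the crucial-bit error rate much smaller at a modest rate cost.

        # Toy unequal-error-protection sketch over a BSC(p): crucial bits
        # get a 5-fold repetition code with majority-vote decoding, while
        # ordinary bits are sent uncoded. Illustration only; this is not
        # the thesis's scheme.
        import random

        def bsc(bits, p, rng):
            """Flip each bit independently with probability p."""
            return [b ^ (rng.random() < p) for b in bits]

        def send_uep(crucial, ordinary, p, rng, rep=5):
            # Crucial bits: repeat each bit `rep` times, decode by majority vote.
            coded = [b for b in crucial for _ in range(rep)]
            received = bsc(coded, p, rng)
            decoded_crucial = [
                int(sum(received[i * rep:(i + 1) * rep]) > rep // 2)
                for i in range(len(crucial))
            ]
            # Ordinary bits: transmitted uncoded.
            decoded_ordinary = bsc(ordinary, p, rng)
            return decoded_crucial, decoded_ordinary

        rng = random.Random(0)
        p = 0.1
        crucial = [rng.randint(0, 1) for _ in range(2000)]
        ordinary = [rng.randint(0, 1) for _ in range(2000)]
        dc, do = send_uep(crucial, ordinary, p, rng)
        err = lambda a, b: sum(x != y for x, y in zip(a, b)) / len(a)
        print(f"crucial-bit error rate:  {err(crucial, dc):.4f}")   # expected ~ 0.009
        print(f"ordinary-bit error rate: {err(ordinary, do):.4f}")  # expected ~ p = 0.1

    The thesis's point is much stronger than this toy: the crucial bits can enjoy exponentially small error probability without any loss in overall data rate relative to capacity.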