31 research outputs found

    Generating secret in a network

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 247-253) and index.
    This monograph studies the theory of information through the multiuser secret key agreement problem. A general notion of mutual dependence is established for the secrecy capacity, as a natural generalization of Shannon's mutual information to the multivariate case. Under linear-type source models, this capacity can be achieved practically by linear network codes. Beyond being an unusual application of network coding to a secrecy problem, this gives the secrecy capacity an interpretation in terms of network information flow and partition connectivity, further confirming its intuitive meaning as mutual dependence. New identities in submodular function optimization and matroid theory are discovered in proving these results. A framework is also developed for viewing matroids as graphs, allowing certain graph-theoretic results to generalize to matroids. To study cooperation schemes in a network, a general channel model with multiple inputs is formulated. Single-letter upper bounds on the secrecy capacity are derived using a Shearer-type lemma. Lower bounds are obtained with a new cooperation scheme called mixed source emulation. In the same way that mixed strategies may surpass pure strategies in zero-sum games, mixed source emulation outperforms the conventional pure source emulation approach in terms of the achievable key rate. Necessary and sufficient conditions are derived for the tightness of these bounds, showing that the secrecy capacity can be characterized for a larger class of channels than the broadcast-type channels considered in previous work. The mixed source emulation scheme is also shown to be unnecessary for some channels and insufficient for others. The possibility of a better cooperation scheme becomes apparent, but a general scheme remains to be found.
    by Chung Chan. Ph.D.
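
    The abstract does not state the formula, but one commonly cited partition-based form of this multivariate mutual dependence, for a source model with joint entropy function H over subsets of terminals, can be brute-forced for small networks. The sketch below is purely illustrative: the entropy values and helper names are assumptions, not material from the thesis.

```python
def partitions(items):
    """Yield all set partitions of `items` (fine for small node sets)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # put `first` into an existing block
        for i, block in enumerate(part):
            yield part[:i] + [[first] + block] + part[i + 1:]
        # or into a block of its own
        yield [[first]] + part

def mutual_dependence(nodes, H):
    """min over partitions P with |P| >= 2 of (sum_C H(C) - H(V)) / (|P| - 1).

    With two terminals this reduces to Shannon's mutual information I(Z1; Z2).
    """
    total = H[frozenset(nodes)]
    best = float("inf")
    for part in partitions(list(nodes)):
        if len(part) < 2:
            continue
        val = (sum(H[frozenset(block)] for block in part) - total) / (len(part) - 1)
        best = min(best, val)
    return best

# Toy joint entropies (in bits) for three terminals -- made-up numbers for illustration.
H = {
    frozenset({1}): 1.0, frozenset({2}): 1.0, frozenset({3}): 1.0,
    frozenset({1, 2}): 1.5, frozenset({1, 3}): 1.5, frozenset({2, 3}): 1.5,
    frozenset({1, 2, 3}): 2.0,
}
print(mutual_dependence({1, 2, 3}, H))  # partition-based mutual dependence of the toy source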

    Capacity and coding in digital communications

    164 pages; 24 cm.

    Super-orthogonal space-time turbo codes in Rayleigh fading channels.

    Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005.
    The vision of anytime, anywhere communications, coupled with the rapid growth in wireless subscribers and internet users, suggests that the widespread demand for always-on data access is sure to be a major driver for the wireless industry in the years to come. Among many cutting-edge wireless technologies, a new class of transmission techniques, known as Multiple-Input Multiple-Output (MIMO) techniques, has emerged as an important technology, promising link capacity gains with severalfold increases in data rates and spectral efficiency. While the use of MIMO techniques in the third generation (3G) standards is minimal, it is anticipated that these technologies will play an important role in the physical layer of fixed and fourth generation (4G) wireless systems. Concatenated codes, a class of forward error correction codes of which Turbo codes are a classical example, have been shown to achieve reliable performance approaching the Shannon limit. An effective and practical way to approach the capacity of MIMO wireless channels is to employ space-time coding (STC), which introduces joint correlation in the transmitted signals across both the space and time domains. Space-Time Trellis Codes (STTCs) have been shown to provide the best trade-off in terms of coding gain, improved data rates and computational complexity. Super-Orthogonal Space-Time Trellis Coding (SOSTTC) is a recently proposed form of space-time trellis coding which outperforms its predecessor; it has a systematic design method for maximizing the coding gain for a given rate, constellation size and number of states. Simulation and analytical results are provided to justify the improved performance. The main focus of this dissertation is on STTCs, SOSTTCs and their concatenated versions in quasi-static and rapid Rayleigh fading channels. Turbo codes and space-time codes have made a significant impact on theory and practice by closing the gap to the Shannon limit and by exploiting the large capacity gains of the MIMO channel, respectively. A convincing way to exploit the capabilities of a MIMO channel is to build the turbo processing principle into the design of MIMO architectures. Concatenated STTCs have already received much attention and have shown improved performance over conventional STTCs; recently, simple and double concatenated STTC structures have been shown to provide a further performance improvement. Motivated by this, two concatenated SOSTTC structures, called super-orthogonal space-time turbo codes, are proposed. The performance of these new concatenated SOSTTCs is compared with that of concatenated STTCs and conventional SOSTTCs through simulations in Rayleigh fading channels. The SOST-CC system outperforms the ST-CC system in rapid fading channels, while maintaining similar performance in quasi-static fading. The SOST-SC system has improved performance for larger frame lengths and overall maintains performance similar to ST-SC systems. A further investigation of these codes with channel estimation errors is also provided.
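
    The dissertation itself concerns trellis-based and turbo-concatenated codes; as a minimal illustration of the orthogonal building block that super-orthogonal space-time codes extend, the sketch below simulates the 2x2 Alamouti space-time block over quasi-static flat Rayleigh fading with QPSK. It is not taken from the dissertation, and the constellation, noise level and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def alamouti_encode(s1, s2):
    """2x2 orthogonal space-time block: rows are time slots, columns are transmit antennas."""
    return np.array([[s1,           s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at the single-antenna receiver; yields decoupled symbol estimates."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Unit-energy QPSK constellation (illustrative choice).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = qpsk[rng.integers(0, 4, size=2)]

# Quasi-static flat Rayleigh fading gains, one per transmit antenna, plus receiver noise.
h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
n1, n2 = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

X = alamouti_encode(s1, s2)
r1, r2 = X @ np.array([h1, h2]) + np.array([n1, n2])
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)

# With orthogonal combining, ML detection reduces to symbol-wise nearest neighbour
# against the constellation scaled by the channel gain |h1|^2 + |h2|^2.
gain = abs(h1) ** 2 + abs(h2) ** 2

def detect(y):
    return qpsk[np.argmin(np.abs(gain * qpsk - y))]

print((s1, s2), (detect(s1_hat), detect(s2_hat)))
```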

    Information theoretic aspects of the control and the mode estimation of stochastic systems

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (leaves 131-137).
    In this thesis, we investigate three problems: the first broaches control under information constraints in the presence of uncertainty; in the second we derive a new fundamental limitation of performance in the presence of finite capacity feedback; while the third studies the estimation of Hidden Markov Models.
    Problem 1: We study the stabilizability of uncertain stochastic systems in the presence of finite capacity feedback. We consider a stochastic digital link that sends words whose size is governed by a random process. Such a link is used to transmit state measurements between the plant and the controller. We derive necessary and sufficient conditions for internal and external stabilizability of the feedback loop. In addition, stability in the presence of uncertainty in the plant is analyzed using a small-gain argument.
    Problem 2: We address a fundamental limitation of performance for feedback systems in the presence of a communication channel. The feedback loop comprises a discrete-time, linear and time-invariant plant, a channel, an encoder and a decoder which may also embody a controller. We derive an inequality of the form L ≥ Σᵢ max{0, log|λᵢ(A)|} − C_channel, where L is a measure of disturbance rejection, λᵢ(A) are the eigenvalues of the open-loop dynamic matrix A, and C_channel is the Shannon capacity of the channel. Our measure L is non-negative; a smaller L indicates better rejection (attenuation), while L = 0 signifies no rejection. Additionally, we prove that, under a stationarity assumption, L admits a log-sensitivity integral representation.
    Problem 3: We tackle the problem of mode estimation in switching systems. From the theoretical point of view, our contribution is twofold: creating a framework that has a clear parallel with a communication paradigm, and deriving an analysis of performance. In our approach, the switching system is viewed as an encoder of the mode, which is interpreted as the message, while a probing signal establishes a random code. Using a distortion function, we define an uncertainty ball where the estimates are guaranteed to lie with probability arbitrarily close to 1. The radius of the uncertainty ball is directly related to the entropy rate of the switching process.
    by Nuno Miguel Lara Cintra Martins. Ph.D.
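
    The inequality ties achievable disturbance rejection to the channel capacity and to the unstable open-loop eigenvalues. The sketch below, which is not from the thesis and uses a made-up matrix and capacity value, simply evaluates the eigenvalue term Σᵢ max{0, log₂|λᵢ(A)|} and compares it against an assumed channel capacity, in the spirit of data-rate-style conditions.

```python
import numpy as np

def unstable_entropy_rate(A):
    """Sum of log2|lambda_i(A)| over unstable eigenvalues (|lambda| > 1), in bits per sample."""
    eigvals = np.linalg.eigvals(A)
    return sum(max(0.0, np.log2(abs(lam))) for lam in eigvals)

# Illustrative open-loop dynamic matrix (made-up numbers): eigenvalues 2 and 0.5.
A = np.array([[2.0, 0.0],
              [1.0, 0.5]])

rate_term = unstable_entropy_rate(A)   # = log2(2) = 1 bit/sample here
C_channel = 1.5                        # assumed Shannon capacity of the feedback channel, bits/sample

print(f"sum_i max(0, log2|lambda_i(A)|) = {rate_term:.3f} bits/sample")
print(f"resulting lower bound on the rejection measure: {max(0.0, rate_term - C_channel):.3f}")
print("capacity exceeds the eigenvalue term:", C_channel > rate_term)
```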

    Information theoretic bounds for distributed computation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 101-103).
    In this thesis, I explore via two formulations the impact of communication constraints on distributed computation. In both formulations, nodes make partial observations of an underlying source. They communicate in order to compute a given function of all the measurements in the network, to within a desired level of error. Such computation in networks arises in various contexts, like wireless and sensor networks, consensus and belief propagation with bit constraints, and estimation of a slowly evolving process. By utilizing Information Theoretic formulations and tools, I obtain code- or algorithm-independent lower bounds that capture fundamental limits imposed by the communication network.
    In the first formulation, each node samples a component of a source whose values belong to a field of order q. The nodes utilize their knowledge of the joint probability mass function of the components, together with the function to be computed, to efficiently compress their messages, which are then broadcast. The question is: how many bits per sample are necessary and sufficient for each node to broadcast in order for the probability of decoding error to approach zero as the number of samples grows. I find that when there are two nodes in the network seeking to compute the sample-wise modulo-q sum of their measurements, a node compressing so that the other can compute the modulo-q sum is no more efficient than its compressing so that the actual data sequence is decoded. However, when there are more than two nodes, there exists a joint probability mass function for which nodes can compress more efficiently so that the modulo-q sum is decoded with probability of error asymptotically approaching zero: it is both necessary and sufficient for nodes to send a smaller number of bits per sample than they would have to in order for all nodes to acquire all the data sequences in the network.
    In the second formulation, each node has an initial real-valued measurement. Nodes communicate their values via a network with fixed topology and noisy channels between linked nodes. The goal is for each node to estimate a given function of all the initial values in the network, so that the mean square error in the estimate is within a prescribed interval. Here, the nodes do not know the distribution of the source, but have unlimited computation power to run whatever algorithm is needed to ensure the mean square error criterion. The question is: how does the communication network impact the time until the performance criterion is guaranteed. Using Information Theoretic inequalities, I derive an algorithm-independent lower bound on the computation time. The bound is a function of the uncertainty in the function to be estimated, via its differential entropy, and the desired accuracy level, as specified by the mean square error criterion. Next, I demonstrate the use of this bound in a scenario where nodes communicate through erasure channels to learn a linear function of all the nodes' initial values. For this scenario, I describe an algorithm whose running time, until with high probability all nodes' estimates lie within a prescribed interval of the true value, is reciprocally related to the "conductance." Conductance quantifies the information-flow "bottleneck" in the network and hence captures the effect of the topology and capacities. Using the lower bound, I show that the running time of any algorithm that guarantees the aforementioned probability criterion must scale reciprocally with conductance. Thus, the lower bound is tight in capturing the effect of network topology via conductance; conversely, the running time of the proposed algorithm is optimal with respect to its dependence on conductance.
    by Ola Ayaso. Ph.D.
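
    The key network quantity in the second formulation is conductance. The sketch below is not from the thesis: the graph, edge capacities and the particular normalization (cut weight divided by the smaller side's volume, one of several conventions) are illustrative assumptions, and the computation is a brute force over cuts that only makes sense for small networks.

```python
from itertools import combinations

def conductance(nodes, weight):
    """Brute-force conductance of a small weighted, undirected graph.

    weight[frozenset({u, v})] is the capacity of edge {u, v}; missing pairs mean no edge.
    Phi = min over cuts S of  cut(S, V \\ S) / min(vol(S), vol(V \\ S)).
    """
    nodes = list(nodes)

    def w(u, v):
        return weight.get(frozenset({u, v}), 0.0)

    def vol(S):
        return sum(w(u, v) for u in S for v in nodes if v != u)

    best = float("inf")
    for k in range(1, len(nodes)):
        for S in combinations(nodes, k):
            S = set(S)
            T = [v for v in nodes if v not in S]
            cut = sum(w(u, v) for u in S for v in T)
            denom = min(vol(S), vol(T))
            if denom > 0:
                best = min(best, cut / denom)
    return best

# Toy 4-node "barbell": two well-connected pairs joined by one weak link (made-up capacities).
weight = {
    frozenset({"a", "b"}): 1.0,
    frozenset({"c", "d"}): 1.0,
    frozenset({"b", "c"}): 0.1,   # bottleneck edge
}
phi = conductance(["a", "b", "c", "d"], weight)
print(phi)  # small conductance -> the running-time lower bound scales like 1/phi
```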

    Maintaining secrecy when information leakage is unavoidable

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 109-115).
    Sharing and maintaining long, random keys is one of the central problems in cryptography. This thesis is about ensuring the security of a cryptographic key when partial information about it has been, or must be, leaked to an adversary. We consider two basic approaches:
    1. Extracting a new, shorter, secret key from one that has been partially compromised. Specifically, we study the use of noisy data, such as biometrics and personal information, as cryptographic keys. Such data can vary drastically from one measurement to the next. We would like to store enough information to handle these variations, without having to rely on any secure storage; in particular, without storing the key itself in the clear. We solve the problem by casting it in terms of key extraction. We give a precise definition of what "security" should mean in this setting, and design practical, general solutions with rigorous analyses. Prior to this work, no solutions were known with satisfactory provable security guarantees.
    2. Ensuring that whatever is revealed is not actually useful. This is most relevant when the key itself is sensitive, for example when it is based on a person's iris scan or Social Security Number. This second approach requires the user to have some control over exactly what information is revealed, but this is often the case: for example, if the user must reveal enough information to allow another user to correct errors in a corrupted key. How can the user ensure that whatever information the adversary learns is not useful to her? We answer by developing a theoretical framework for separating leaked information from useful information. Our definition strengthens the notion of entropic security, considered before in a few different contexts.
    We apply the framework to get new results, creating (a) encryption schemes with very short keys, and (b) hash functions that leak no information about their input yet, paradoxically, allow testing whether a candidate vector is close to the input. One of the technical contributions of this research is to provide new, cryptographic uses of mathematical tools from complexity theory known as randomness extractors.
    by Adam Davison Smith. Ph.D.
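
    The first approach corresponds to what the later literature calls secure sketches and fuzzy extractors. As an illustration only (not the thesis's construction verbatim), the code-offset idea stores the XOR of the noisy key with a random codeword, so a nearby re-reading can be corrected without storing the key in the clear; the toy repetition code and all parameters below are assumptions.

```python
import secrets

# Toy code-offset secure sketch over bit strings, using a 5x repetition code.
# Real constructions use strong error-correcting codes plus a randomness extractor
# to derive the final key; this sketch only shows the error-correction step.

REP = 5   # repetition factor (assumed parameter)
K = 8     # number of data bits protected

def encode(bits):                       # repetition-code encoder
    return [b for b in bits for _ in range(REP)]

def decode(codeword):                   # majority-vote decoder
    return [int(sum(codeword[i * REP:(i + 1) * REP]) > REP // 2) for i in range(K)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sketch(w):
    """Public helper data: w XOR a random codeword; reveals w only up to that codeword."""
    msg = [secrets.randbelow(2) for _ in range(K)]
    return xor(w, encode(msg))

def recover(w_noisy, helper):
    """Recover the original reading w from a nearby reading w_noisy plus the public sketch."""
    noisy_codeword = xor(w_noisy, helper)      # = codeword XOR (w XOR w_noisy)
    msg = decode(noisy_codeword)               # correct the errors introduced by the re-reading
    return xor(encode(msg), helper)            # re-derive w exactly

# Example: a 40-bit "biometric" reading and a re-reading with a couple of flipped bits.
w = [secrets.randbelow(2) for _ in range(K * REP)]
helper = sketch(w)
w_noisy = w.copy()
w_noisy[3] ^= 1
w_noisy[17] ^= 1
print(recover(w_noisy, helper) == w)   # True as long as errors stay within the code's radius
```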

    When all information is not created equal

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 191-196).
    Following Shannon's landmark paper, the classical theoretical framework for communication is based on the simplifying assumption that all information is equally important, and thus aims to provide uniform protection to all information. However, this homogeneous view of information is not suitable for a variety of modern-day communication scenarios such as wireless and sensor networks, video transmission, interactive systems, and control applications. For example, an emergency alarm from a sensor network needs more protection than other transmitted information. Similarly, the coarse resolution of an image needs better protection than its finer details. For such heterogeneous information, if providing a uniformly high protection level to all parts of the information is infeasible, it is desirable to provide different protection levels based on the importance of those parts. The main objective of this thesis is to extend classical information theory to address this heterogeneous nature of information. Many of the theoretical tools needed are fundamentally different from those of the conventional homogeneous setting; one key issue is that bits are no longer a sufficient measure of information. We develop a general framework for understanding the fundamental limits of transmitting such information, calculate these fundamental limits, and provide optimal architectures for achieving them. Our analysis shows that, even without sacrificing data-rate from channel capacity, some crucial parts of the information can be protected with exponential reliability. This research challenges the notion that a set of homogeneous bits should necessarily be viewed as a universal interface to the physical layer, which potentially impacts the design of network architectures. This thesis also develops two novel approaches for simplifying such difficult problems in information theory. Our formulations are based on ideas from graphical models and Euclidean geometry and provide canonical examples for network information theory. They provide fresh insights into previously intractable problems as well as generalize previous related results.
    by Shashibhushan Prataprao Borade. Ph.D.
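
    To make the idea of heterogeneous protection concrete, the sketch below runs a naive Monte Carlo over a binary symmetric channel where a single "crucial" alarm bit is repeated while ordinary payload bits are sent once. Unlike the thesis's schemes, this toy does pay a rate penalty; it only illustrates that different parts of a message can be given very different reliabilities. The channel parameters and repetition factor are assumptions.

```python
import random

random.seed(1)

P_FLIP = 0.1      # assumed BSC crossover probability
REPEAT = 15       # repetitions for the crucial bit
TRIALS = 20000

def bsc(bit):
    """Pass one bit through a binary symmetric channel with crossover P_FLIP."""
    return bit ^ (random.random() < P_FLIP)

crucial_errors = 0
payload_errors = 0
for _ in range(TRIALS):
    crucial = random.randint(0, 1)
    payload = random.randint(0, 1)
    # Crucial bit: majority vote over REPEAT noisy copies.
    votes = sum(bsc(crucial) for _ in range(REPEAT))
    crucial_errors += (int(votes > REPEAT // 2) != crucial)
    # Payload bit: sent once, no extra protection.
    payload_errors += (bsc(payload) != payload)

print("crucial-bit error rate:", crucial_errors / TRIALS)   # drops exponentially in REPEAT
print("payload-bit error rate:", payload_errors / TRIALS)   # stays near P_FLIP
```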

    Statistical limits of graphical channel models and a semidefinite programming approach

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 205-213).
    Community recovery is a major challenge in data science and computer science. The goal in community recovery is to find the hidden clusters in given relational data, which is often represented as a labeled hypergraph where nodes correspond to items needing to be labeled and edges correspond to observed relations between the items. We investigate the problem of exact recovery in the class of statistical models which can be expressed in terms of graphical channels. In a graphical channel model, we observe noisy measurements of the relations between k nodes while the true labeling is unknown to us, and the goal is to recover the labels correctly. This generalizes both the stochastic block models and the spiked tensor models for principal component analysis, which have gained much interest over the last decade. We focus on two aspects of exact recovery: statistical limits, and efficient algorithms achieving the statistical limit. For the statistical limits, we show that the achievability of exact recovery is essentially determined by whether we can recover the label of one node given the other nodes' labels with fairly high probability. This phenomenon was observed by Abbe et al. for generic stochastic block models and called "local-to-global amplification". We confirm that local-to-global amplification indeed holds for generic graphical channel models, under some regularity assumptions. As a corollary, the threshold for exact recovery is explicitly determined. On the algorithmic side, we consider two examples of graphical channel models: (i) the spiked tensor model with additive Gaussian noise, and (ii) the generalization of the stochastic block model to k-uniform hypergraphs. We propose a strategy which we call "truncate-and-relax", based on a standard semidefinite relaxation technique. We show that in these two models, the algorithm based on this strategy achieves exact recovery up to a threshold which orderwise matches the statistical threshold. We complement this by showing the limitations of the algorithm.
    by Chiheon Kim. Ph.D.
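
    This is not the thesis's truncate-and-relax procedure; it is a minimal sketch of the standard semidefinite relaxation that such strategies build on, for two-community recovery from an ordinary graph, written with cvxpy. The centering by average degree, the eigenvector rounding, and the planted-partition parameters are illustrative choices, not material from the thesis.

```python
import numpy as np
import cvxpy as cp

def sdp_two_communities(A):
    """Standard SDP relaxation for two-community recovery (simplified sketch).

    maximize <A - (d/n) * J, X>  subject to  X PSD, diag(X) = 1,
    then round via the leading eigenvector of the optimal X.
    """
    n = A.shape[0]
    d = A.sum() / n                      # average degree, used to center the adjacency matrix
    B = A - (d / n) * np.ones((n, n))
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]
    cp.Problem(cp.Maximize(cp.trace(B @ X)), constraints).solve()
    vals, vecs = np.linalg.eigh(X.value)
    return np.sign(vecs[:, -1])          # community labels in {-1, +1}

# Toy planted partition: two blocks of 10 nodes, dense within blocks, sparse across.
rng = np.random.default_rng(0)
n, p_in, p_out = 20, 0.8, 0.1
labels = np.array([1] * 10 + [-1] * 10)
P = np.where(np.equal.outer(labels, labels), p_in, p_out)
A = np.triu(rng.random((n, n)) < P, k=1).astype(float)
A = A + A.T

est = sdp_two_communities(A)
agree = np.mean(est == labels)
print(max(agree, 1 - agree))   # agreement with the planted labels, up to global sign flip
```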