
    A new basic theorem of information theory

    "June 1, 1954." "This report is identical with a thesis submitted to the Department of Physics, M.I.T., ... for the degree of Doctor of Philosophy."Bibliography: p. 27-28.Army Signal Corps Contract DA36-039 sc-100 Project 8-102-B-0 Dept. of the Army Project 3-99-10-022Amiel Feinstein

    A Unified Approach for Network Information Theory

    In this paper, we take a unified approach for network information theory and prove a coding theorem, which can recover most of the achievability results in network information theory that are based on random coding. The final single-letter expression has a very simple form, which was made possible by many novel elements such as a unified framework that represents various network problems in a simple and unified way, a unified coding strategy that consists of a few basic ingredients but can emulate many known coding techniques if needed, and new proof techniques beyond the use of standard covering and packing lemmas. For example, in our framework, sources, channels, states and side information are treated in a unified way, and various constraints such as cost and distortion constraints are unified as a single joint-typicality constraint. Our theorem can be useful in proving many new achievability results easily and in some cases gives simpler rate expressions than those obtained using conventional approaches. Furthermore, our unified coding can strictly outperform existing schemes. For example, we obtain a generalized decode-compress-amplify-and-forward bound as a simple corollary of our main theorem and show it strictly outperforms previously known coding schemes. Using our unified framework, we formally define and characterize three types of network duality based on channel input-output reversal and network flow reversal combined with packing-covering duality.
    Comment: 52 pages, 7 figures, submitted to IEEE Transactions on Information Theory; a shorter version will appear in Proc. IEEE ISIT 201
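    For orientation, the joint-typicality constraint mentioned above builds on the standard notion of a robustly typical set; the following textbook definition is included here only as background and is not quoted from the paper. For random variables $(X,Y) \sim p(x,y)$ and the empirical distribution $\hat{p}_{x^n,y^n}$ of a pair of sequences,

        $$\mathcal{T}_\epsilon^{(n)}(X,Y) = \Big\{ (x^n, y^n) : \big|\hat{p}_{x^n,y^n}(a,b) - p(a,b)\big| \le \epsilon\, p(a,b) \ \text{ for all } (a,b) \Big\},$$

    and the covering and packing lemmas bound the probability that independently generated codewords fall inside such a set.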

    Entanglement, space-time and the Mayer-Vietoris theorem

    Entanglement appears to be a fundamental building block of quantum gravity, leading to new principles underlying the nature of quantum space-time. One such principle is the ER-EPR duality. While it is supported by our present intuition, a proof is far from obvious. In this article I present a first step towards such a proof, originating in what is known to algebraic topologists as the Mayer-Vietoris theorem. The main result of this work is the re-interpretation, in terms of quantum information theory, of the various morphisms arising when the Mayer-Vietoris theorem is used to assemble a torus-like topology from more basic subspaces on the torus, resulting in a quantum entangler gate (Hadamard and c-NOT).
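    As a point of reference (not taken from the article), the entangler gate referred to above is the standard Hadamard-plus-CNOT circuit, which maps the product state |00> to the Bell state (|00> + |11>)/sqrt(2). A minimal numerical sketch in Python:

        import numpy as np

        # Single-qubit Hadamard and two-qubit CNOT (control = first qubit, target = second)
        H = np.array([[1, 1],
                      [1, -1]]) / np.sqrt(2)
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]])

        # Entangler circuit: Hadamard on the first qubit, then CNOT
        entangler = CNOT @ np.kron(H, np.eye(2))

        ket00 = np.array([1.0, 0.0, 0.0, 0.0])   # |00>
        bell = entangler @ ket00                  # (|00> + |11>) / sqrt(2)
        print(np.round(bell, 3))                  # [0.707 0.    0.    0.707]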

    Szemerédi's regularity lemma revisited

    Szemerédi's regularity lemma is a basic tool in graph theory, and also plays an important role in additive combinatorics, most notably in proving Szemerédi's theorem on arithmetic progressions. In this note we revisit this lemma from the perspective of probability theory and information theory instead of graph theory, and observe a variant of this lemma which introduces a new parameter F. This stronger version of the regularity lemma was iterated in a recent paper of the author to reprove the analogous regularity lemma for hypergraphs.
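    For context, the graph-theoretic statement being revisited is (in its standard textbook form, not the information-theoretic variant introduced in the note): a pair of vertex sets $(A,B)$ is $\epsilon$-regular if $|d(A',B') - d(A,B)| \le \epsilon$ for all $A' \subseteq A$, $B' \subseteq B$ with $|A'| \ge \epsilon|A|$ and $|B'| \ge \epsilon|B|$, where $d(\cdot,\cdot)$ denotes edge density; the regularity lemma asserts that for every $\epsilon > 0$ there is an $M(\epsilon)$ such that every graph admits a partition of its vertex set into at most $M(\epsilon)$ parts of nearly equal size in which all but an $\epsilon$-fraction of the pairs of parts are $\epsilon$-regular.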

    Large deviation results of Gärtner-Ellis type, with an application to insurance mathematics

    Large deviations theory is a branch of probability theory which studies the exponential decay of probabilities for extremely rare events in the context of sequences of probability distributions. The theory originates from actuaries studying risk and insurance from a mathematical perspective, but today it has become its own field of study and is no longer as tightly linked to insurance mathematics. Large deviations theory is nowadays frequently applied in various fields, such as information theory, queuing theory, statistical mechanics and finance. The connection to insurance mathematics has not grown obsolete, however, and these new results can also be applied to develop new results in the context of insurance. This thesis is split into two main sections. The first presents some basic concepts from large deviations theory as well as the Gärtner-Ellis theorem, the first main topic of this thesis, and then provides a fairly detailed proof of this theorem. The Gärtner-Ellis theorem is an important result in large deviations theory, as it gives upper and lower bounds relating to asymptotic probabilities while allowing for some dependence structure in the sequence of random variables. The second main topic of this thesis is the presentation of two large deviations results developed by H. Nyrhinen, concerning the random time of ruin as a function of the given starting capital. This section begins by introducing the specifics of the insurance setting of Nyrhinen's work as well as the ruin problem, a central topic of risk theory. Following this are the main results and the corresponding proofs, which rely in part on convex analysis and on a continuous version of the Gärtner-Ellis theorem. Recommended preliminary knowledge: Probability Theory, Risk Theory.
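    For orientation, the finite-dimensional Gärtner-Ellis theorem discussed above can be stated in its standard textbook form (paraphrased here rather than quoted from the thesis). For random vectors $Z_n$ in $\mathbb{R}^d$, suppose the limiting logarithmic moment generating function

        $$\Lambda(\lambda) = \lim_{n\to\infty} \tfrac{1}{n} \log \mathbb{E}\big[ e^{n \langle \lambda, Z_n \rangle} \big]$$

    exists in $[-\infty,\infty]$ for every $\lambda$, with the origin in the interior of its effective domain, and that $\Lambda$ is essentially smooth and lower semicontinuous. Then, with $\Lambda^*(x) = \sup_{\lambda} \{ \langle \lambda, x \rangle - \Lambda(\lambda) \}$ its Legendre-Fenchel transform,

        $$\limsup_{n\to\infty} \tfrac{1}{n} \log P(Z_n \in F) \le -\inf_{x \in F} \Lambda^*(x) \ \text{ for closed } F, \qquad \liminf_{n\to\infty} \tfrac{1}{n} \log P(Z_n \in G) \ge -\inf_{x \in G} \Lambda^*(x) \ \text{ for open } G.$$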

    Operator scaling with specified marginals

    The completely positive maps, a generalization of the nonnegative matrices, are a well-studied class of maps from n × n matrices to m × m matrices. The existence of the operator analogues of doubly stochastic scalings of matrices is equivalent to a multitude of problems in computer science and mathematics, such as rational identity testing in non-commuting variables, the noncommutative rank of symbolic matrices, and a basic problem in invariant theory (Garg, Gurvits, Oliveira and Wigderson, FOCS, 2016). We study operator scaling with specified marginals, which is the operator analogue of scaling matrices to specified row and column sums. We characterize the operators which can be scaled to given marginals, much in the spirit of Gurvits' algorithmic characterization of the operators that can be scaled to doubly stochastic (Gurvits, Journal of Computer and System Sciences, 2004). Our algorithm produces approximate scalings in time poly(n, m) whenever scalings exist. A central ingredient in our analysis is a reduction from the specified-marginals setting to the doubly stochastic setting. Operator scaling with specified marginals arises in diverse areas of study such as the Brascamp-Lieb inequalities, communication complexity, eigenvalues of sums of Hermitian matrices, and quantum information theory. Some of the known theorems in these areas, several of which had no effective proof, are straightforward consequences of our characterization theorem. For instance, we obtain a simple algorithm to find, when they exist, a tuple of Hermitian matrices with given spectra whose sum has a given spectrum. We also prove new theorems, such as a generalization of Forster's theorem (Forster, Journal of Computer and System Sciences, 2002) concerning radial isotropic position.
    Comment: 34 pages, 3-page appendix
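    As a concrete point of reference, the classical matrix counterpart mentioned above (scaling a matrix to specified row and column sums) can be computed by Sinkhorn-style alternating normalization. The sketch below only illustrates that matrix case under the assumption of a strictly positive input; it is not the operator-scaling algorithm of the paper.

        import numpy as np

        def scale_to_marginals(A, r, c, iters=1000, tol=1e-9):
            """Alternately rescale rows and columns of a strictly positive matrix A
            so that its row sums approach r and its column sums approach c
            (feasibility requires sum(r) == sum(c))."""
            A = np.array(A, dtype=float)
            for _ in range(iters):
                A *= (r / A.sum(axis=1))[:, None]   # force the row sums to r
                A *= (c / A.sum(axis=0))[None, :]   # force the column sums to c
                if np.allclose(A.sum(axis=1), r, atol=tol):
                    break                            # row and column sums have both converged
            return A

        # Example: scale a 2x2 matrix to row sums (1, 1) and column sums (0.5, 1.5)
        S = scale_to_marginals([[1.0, 2.0], [3.0, 4.0]],
                               np.array([1.0, 1.0]), np.array([0.5, 1.5]))
        print(S.round(4), S.sum(axis=1).round(4), S.sum(axis=0).round(4))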

    Who Cares about Axiomatization? Representation, Invariance, and Formal Ontologies

    The philosophy of science of Patrick Suppes is centered on two important notions that are part of the title of his recent book (Suppes 2002): Representation and Invariance. Representation is important because when we embrace a theory we implicitly choose a way to represent the phenomenon we are studying. Invariance is important because invariants are the only things that remain constant in a theory, so in a way they give the “objective” meaning of that theory. Every scientific theory gives a representation of a class of structures and studies the invariant properties holding in that class of structures. In Suppes’ view, the best way to define this class of structures is via axiomatization. This is because a class of structures is given by a definition, and this same definition establishes which properties a single structure must possess in order to belong to the class. These properties correspond to the axioms of a logical theory. In Suppes’ view, the best way to characterize a scientific structure is by giving a representation theorem for its models and singling out the invariants in the structure. Thus, we can say that the philosophy of science of Patrick Suppes consists in the application of the axiomatic method to scientific disciplines. What I want to argue in this paper is that this application of the axiomatic method is also at the basis of a new approach that is being increasingly applied to the study of computer science and information systems, namely the approach of formal ontologies. The main task of an ontology is that of making explicit the conceptual structure underlying a certain domain. By “making explicit the conceptual structure” we mean singling out the most basic entities populating the domain and writing axioms expressing the main properties of these primitives and the relations holding among them. So, in both cases, the axiomatization is the main tool used to characterize the object of inquiry, whether this object is scientific theories (in Suppes’ approach) or information systems (for formal ontologies). In the following section I will present the view of Patrick Suppes on the philosophy of science and the axiomatic method; in section 3 I will survey the theoretical issues underlying the work being done in formal ontologies; and in section 4 I will compare these two approaches and explore similarities and differences between them.