
    On the Capacity of Networks with Correlated Sources

    Characterizing the capacity region for a network can be extremely difficult. Even with independent sources, determining the capacity region can be as hard as the open problem of characterizing all information inequalities. The majority of computable outer bounds in the literature are relaxations of the Linear Programming bound, which involves entropy functions of random variables related to the sources and link messages. When sources are not independent, the problem is even more complicated. Extension of Linear Programming bounds to networks with correlated sources is largely open. Source dependence is usually specified via a joint probability distribution, and one of the main challenges in extending Linear Programming bounds is the difficulty (or impossibility) of characterizing arbitrary dependencies via entropy functions. This paper tackles the problem by answering the question of how well entropy functions can characterize correlation among sources. We show that by using carefully chosen auxiliary random variables, the characterization can be fairly "accurate".

    Mission impossible: Computing the network coding capacity region

    One of the main theoretical motivations for the emerging area of network coding is the achievability of the max-flow/min-cut rate for single source multicast. This can exceed the rate achievable with routing alone, and is achievable with linear network codes. The multi-source problem is more complicated. Computation of its capacity region is equivalent to determination of the set of all entropy functions $\Gamma^*$, which is non-polyhedral. The aim of this paper is to demonstrate that this difficulty can arise even in single source problems, in particular for single source networks with hierarchical sink requirements and for single source networks with secrecy constraints. In both cases, we exhibit networks whose capacity regions involve $\Gamma^*$. As in the multi-source case, linear codes are insufficient.

    Capacity Bounds for Networks with Correlated Sources and Characterisation of Distributions by Entropies

    Characterising the capacity region for a network can be extremely difficult. Even with independent sources, determining the capacity region can be as hard as the open problem of characterising all information inequalities. The majority of computable outer bounds in the literature are relaxations of the Linear Programming bound, which involves entropy functions of random variables related to the sources and link messages. When sources are not independent, the problem is even more complicated. Extension of Linear Programming bounds to networks with correlated sources is largely open. Source dependence is usually specified via a joint probability distribution, and one of the main challenges in extending Linear Programming bounds is the difficulty (or impossibility) of characterising arbitrary dependencies via entropy functions. This paper tackles the problem by answering the question of how well entropy functions can characterise correlation among sources. We show that by using carefully chosen auxiliary random variables, the characterisation can be fairly "accurate". Using such auxiliary random variables we also give implicit and explicit outer bounds on the capacity of networks with correlated sources. The characterisation of correlation or joint distribution via Shannon entropy functions is also applicable to other information measures such as Renyi entropy and Tsallis entropy. Comment: 24 pages, 1 figure, submitted to IEEE Transactions on Information Theory. arXiv admin note: text overlap with arXiv:1309.151
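
The entropy functions these bounds are built from are simply the Shannon entropies of all subsets of the variables involved. As a minimal illustration (the function and variable names below are for exposition only, not from the paper), the following Python sketch computes the entropic vector of a small joint distribution:

```python
from itertools import combinations
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    """Marginal pmf of the coordinates in idx from a joint pmf over tuples."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def entropic_vector(joint, n):
    """H(X_A) for every non-empty subset A of the n variables."""
    vec = {}
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            vec[subset] = entropy(marginal(joint, subset))
    return vec

# Two perfectly correlated bits: H(X1) = H(X2) = H(X1, X2) = 1 bit,
# a dependence that entropy captures exactly in this simple case.
joint = {(0, 0): 0.5, (1, 1): 0.5}
vec = entropic_vector(joint, 2)
```

For larger alphabets and more variables, the paper's point is that such entropic descriptions of a joint distribution are generally lossy, and auxiliary random variables are needed to tighten them.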

    Dualities Between Entropy Functions and Network Codes

    This paper provides a new duality between entropy functions and network codes. Given a function $g \geq 0$ defined on all proper subsets of $N$ random variables, we provide a construction for a network multicast problem which is solvable if and only if $g$ is entropic. The underlying network topology is fixed and the multicast problem depends on $g$ only through edge capacities and source rates. Relaxing the requirement that the domain of $g$ be subsets of random variables, we obtain a similar duality between polymatroids and the linear programming bound. These duality results provide an alternative proof of the insufficiency of linear (and abelian) network codes, and demonstrate the utility of non-Shannon inequalities to tighten outer bounds on network coding capacity regions.
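
The polymatroid side of this duality is straightforward to check computationally: a set function is a polymatroid rank function if it is normalized, non-negative, monotone, and submodular. The sketch below is a generic brute-force check for small ground sets (it is an illustration of the axioms, not code from the paper):

```python
from itertools import combinations

def is_polymatroid(g, n, tol=1e-9):
    """Check the polymatroid axioms for a set function g on subsets of
    {0, ..., n-1}, given as a dict keyed by frozenset."""
    subsets = [frozenset(s) for r in range(n + 1)
               for s in combinations(range(n), r)]
    # Normalization and non-negativity.
    if abs(g[frozenset()]) > tol or any(g[A] < -tol for A in subsets):
        return False
    for A in subsets:
        for B in subsets:
            # Monotonicity: A subset of B implies g(A) <= g(B).
            if A <= B and g[A] > g[B] + tol:
                return False
            # Submodularity: g(A) + g(B) >= g(A|B) + g(A&B).
            if g[A] + g[B] < g[A | B] + g[A & B] - tol:
                return False
    return True

# A modular function g(A) = |A| satisfies all three axioms.
n = 2
g_mod = {frozenset(s): len(s)
         for r in range(n + 1) for s in combinations(range(n), r)}
```

Deciding whether such a $g$ is additionally *entropic* (realizable by actual random variables) is the hard part, which is exactly what the duality construction translates into a network solvability question.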

    Mapping the Region of Entropic Vectors with Support Enumeration & Information Geometry

    The region of entropic vectors is a convex cone that has been shown to be at the core of many fundamental limits for problems in multiterminal data compression, network coding, and multimedia transmission. This cone has been shown to be non-polyhedral for four or more random variables; however, its boundary remains unknown for four or more discrete random variables. Methods for specifying probability distributions that are in faces and on the boundary of the convex cone are derived, then utilized to map optimized inner bounds to the unknown part of the entropy region. The first method utilizes tools and algorithms from abstract algebra to efficiently determine those supports for the joint probability mass functions for four or more random variables that can, for some appropriate set of non-zero probabilities, yield entropic vectors in the gap between the best known inner and outer bounds. These supports are utilized, together with numerical optimization over non-zero probabilities, to provide inner bounds to the unknown part of the entropy region. Next, information geometry is utilized to parameterize and study the structure of probability distributions on these supports yielding entropic vectors in the faces of the entropy region and in its unknown part.

    Cut-Set Bounds on Network Information Flow

    Explicit characterization of the capacity region of communication networks is a long standing problem. While it is known that network coding can outperform routing and replication, the set of feasible rates is not known in general. Characterizing the network coding capacity region requires determination of the set of all entropic vectors. Furthermore, computing the explicitly known linear programming bound is infeasible in practice due to an exponential growth in complexity as a function of network size. This paper focuses on the fundamental problems of characterization and computation of outer bounds for networks with correlated sources. Starting from the known local functional dependencies induced by the communications network, we introduce the notion of irreducible sets, which characterize implied functional dependencies. We provide recursions for computation of all maximal irreducible sets. These sets act as information-theoretic bottlenecks, and provide an easily computable outer bound. We extend the notion of irreducible sets (and the resulting outer bound) to networks with independent sources. We compare our bounds with existing bounds in the literature. We find that our new bounds are the best among the known graph theoretic bounds for networks with correlated sources and for networks with independent sources. Comment: to appear in IEEE Transactions on Information Theory
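
The basic operation behind such functional-dependency bounds is computing which variables become determined once a given set is known. The following is a generic fixed-point sketch of that closure computation (it illustrates the idea of implied functional dependencies, not the paper's specific recursion for maximal irreducible sets; all labels are hypothetical):

```python
def fd_closure(seed, fds):
    """Closure of a set of labels under functional dependencies.
    fds is a list of (antecedents, consequent) pairs: once every
    antecedent is determined, the consequent is determined too."""
    closed = set(seed)
    changed = True
    while changed:
        changed = False
        for ante, cons in fds:
            if cons not in closed and set(ante) <= closed:
                closed.add(cons)
                changed = True
    return closed

# Toy network: source s determines edge message e1, e1 determines e2,
# and e1 together with e2 determine the sink reconstruction t.
fds = [({"s"}, "e1"), ({"e1"}, "e2"), ({"e1", "e2"}, "t")]
closure_of_source = fd_closure({"s"}, fds)
```

Iterating this closure over candidate sets is what turns local edge-level dependencies into the global, easily computable bottleneck sets the abstract describes.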

    On Multi-source Networks: Enumeration, Rate Region Computation, and Hierarchy

    Recent algorithmic developments have enabled computers to automatically determine and prove the capacity regions of small hypergraph networks under network coding. A structural theory relating network coding problems of different sizes is developed to make best use of this newfound computational capability. A formal notion of network minimality is developed which removes components of a network coding problem that are inessential to its core complexity. Equivalence between different network coding problems under relabeling is formalized via group actions, and an algorithm which can directly list single representatives from each equivalence class of minimal networks up to a prescribed network size is presented. This algorithm, together with rate region software, is leveraged to create a database containing the rate regions for all minimal network coding problems with five or fewer sources and edges, a collection of 744,119 equivalence classes representing more than 9 million networks. In order to best learn from this database, and to leverage it to infer rate regions and their characteristics for networks at scale, a hierarchy between different network coding problems is created with a new theory of combinations and embedding operators. Comment: 20 pages with double column, revision of previous submission arXiv:1507.0572
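
The idea of listing one representative per equivalence class under relabeling can be illustrated on ordinary directed graphs: pick, for each graph, the lexicographically smallest edge list over all node permutations, and keep one graph per canonical form. This brute-force sketch is only an illustration of canonicalization under a group action, far simpler than the paper's algorithm for network coding problems:

```python
from itertools import permutations

def canonical_form(edges, n):
    """Canonical representative of a directed graph on n labelled nodes
    under node relabelling: the lexicographically smallest sorted edge
    list over all permutations. Brute force; fine for tiny graphs."""
    best = None
    for perm in permutations(range(n)):
        relabelled = sorted((perm[u], perm[v]) for u, v in edges)
        if best is None or relabelled < best:
            best = relabelled
    return tuple(best)

def distinct_up_to_relabelling(graphs, n):
    """Number of equivalence classes among the given edge lists."""
    return len({canonical_form(e, n) for e in graphs})
```

Enumerating canonical forms directly, rather than generating all labelled objects and deduplicating, is what makes exhaustive databases like the one described here feasible.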

    Non-Asymptotic and Second-Order Achievability Bounds for Coding With Side-Information

    We present novel non-asymptotic or finite blocklength achievability bounds for three side-information problems in network information theory. These include (i) the Wyner-Ahlswede-Korner (WAK) problem of almost-lossless source coding with rate-limited side-information, (ii) the Wyner-Ziv (WZ) problem of lossy source coding with side-information at the decoder and (iii) the Gel'fand-Pinsker (GP) problem of channel coding with noncausal state information available at the encoder. The bounds are proved using ideas from channel simulation and channel resolvability. Our bounds for all three problems improve on all previous non-asymptotic bounds on the error probability of the WAK, WZ and GP problems, in particular those derived by Verdu. Using our novel non-asymptotic bounds, we recover the general formulas for the optimal rates of these side-information problems. Finally, we also present achievable second-order coding rates by applying the multidimensional Berry-Esseen theorem to our new non-asymptotic bounds. Numerical results show that the second-order coding rates obtained using our non-asymptotic achievability bounds are superior to those obtained using existing finite blocklength bounds. Comment: 32 pages (two column), 8 figures, v2 fixed some minor errors in the WZ problem, v2 included cost constraint in the GP problem, v3 added cardinality bounds, v4 fixed an error of the numerical calculation in the GP problem, v5 is an accepted version for publication
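
To give a sense of what a second-order (Gaussian) approximation looks like, the scalar version for channel coding approximates the best rate at blocklength n and error probability eps as R ≈ C - sqrt(V/n) Q⁻¹(eps), with C the capacity and V the channel dispersion. The sketch below evaluates this textbook approximation; it is not the multidimensional Berry-Esseen machinery or the side-information bounds of the paper:

```python
from statistics import NormalDist

def second_order_rate(capacity, dispersion, n, eps):
    """Scalar normal approximation to the maximal channel coding rate:
    R ~ C - sqrt(V/n) * Qinv(eps), where Qinv is the inverse of the
    complementary Gaussian CDF."""
    q_inv = NormalDist().inv_cdf(1.0 - eps)  # Q^{-1}(eps)
    return capacity - (dispersion / n) ** 0.5 * q_inv

# Illustrative numbers only: C = 0.5 bit/use, V = 0.1, eps = 1e-3.
rate_at_1000 = second_order_rate(0.5, 0.1, 1000, 1e-3)
```

The sqrt(V/n) backoff from capacity shrinks as n grows, which is why second-order terms dominate comparisons between finite blocklength bounds.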

    Deep Learning for the Gaussian Wiretap Channel

    End-to-end learning of communication systems with neural networks, and particularly autoencoders, is an emerging research direction which gained popularity in the last year. In this approach, neural networks learn to simultaneously optimize encoding and decoding functions to establish reliable message transmission. In this paper, this line of thinking is extended to communication scenarios in which an eavesdropper must further be kept ignorant about the communication. The secrecy of the transmission is achieved by utilizing a modified secure loss function based on cross-entropy which can be implemented with state-of-the-art machine-learning libraries. This secure loss function approach is applied in a Gaussian wiretap channel setup, for which it is shown that the neural network learns a trade-off between reliable communication and information secrecy by clustering learned constellations. As a result, an eavesdropper observing the channel at a higher noise level can no longer distinguish between the symbols. Comment: 6 pages, 11 figures
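
The abstract does not spell out the loss function, but one plausible form of such a cross-entropy-based secure loss combines the legitimate receiver's decoding cross-entropy with a term pushing the eavesdropper's posterior toward the uniform distribution. The pure-Python sketch below is a hypothetical illustration of that trade-off, not the paper's actual objective; all names and the weighting scheme are assumptions:

```python
import math

def cross_entropy(targets, preds, eps=1e-12):
    """Average cross-entropy between target distributions and predictions,
    both given as lists of rows of probabilities."""
    total = 0.0
    for t_row, p_row in zip(targets, preds):
        total -= sum(t * math.log(p + eps) for t, p in zip(t_row, p_row))
    return total / len(targets)

def secure_loss(labels, bob_probs, eve_probs, alpha=0.5):
    """Hypothetical security-aware loss: minimize Bob's decoding
    cross-entropy while driving Eve's output toward uniform (the
    'ignorance' target). alpha trades off reliability vs secrecy."""
    k = len(labels[0])
    uniform = [[1.0 / k] * k for _ in eve_probs]
    reliability = cross_entropy(labels, bob_probs)
    ignorance = cross_entropy(uniform, eve_probs)  # minimal when Eve is uniform
    return alpha * reliability + (1.0 - alpha) * ignorance

labels = [[1.0, 0.0], [0.0, 1.0]]          # one-hot messages
bob = [[0.9, 0.1], [0.1, 0.9]]             # Bob decodes well
eve = [[0.5, 0.5], [0.5, 0.5]]             # Eve is maximally confused
loss = secure_loss(labels, bob, eve)
```

Minimizing a weighted objective of this shape is one way a standard autoencoder training loop could express the reliability/secrecy trade-off the paper reports.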

    On Capacity Region of Wiretap Networks

    In this paper we consider the problem of secure network coding where an adversary has access to an unknown subset of links chosen from a known collection of link subsets. We study the capacity region of such networks, commonly called "wiretap networks", subject to weak and strong secrecy constraints, and consider both zero-error and asymptotically zero-error communication. We prove that in general discrete memoryless networks modeled by discrete memoryless channels, the capacity region subject to the strong secrecy requirement and the capacity region subject to the weak secrecy requirement are equal. In particular, this result shows that requiring strong secrecy in a wiretap network with asymptotically zero probability of error does not shrink the capacity region compared to the case of the weak secrecy requirement. We also derive inner and outer bounds on the network coding capacity region of wiretap networks subject to the weak secrecy constraint, for both zero probability of error and asymptotically zero probability of error, in terms of the entropic region.