
    Normalized Entropy Vectors, Network Information Theory and Convex Optimization

    We introduce the notion of normalized entropic vectors -- slightly different from the standard definition in the literature in that we normalize entropy by the logarithm of the alphabet size. We argue that this definition is more natural for determining the capacity region of networks and, in particular, that it smooths out the irregularities of the space of non-normalized entropy vectors and renders the closure of the resulting space convex (and compact). Furthermore, the closure of the space remains convex even under constraints imposed by memoryless channels internal to the network. It therefore follows that, for a large class of acyclic memoryless networks, the capacity region for an arbitrary set of sources and destinations can be found by maximizing a linear function over the convex set of channel-constrained normalized entropic vectors, subject to some linear constraints. While this may not necessarily make the problem simpler, it certainly circumvents the "infinite-letter characterization" issue, as well as the nonconvexity of earlier formulations, and exposes the core of the problem. We show that the approach allows one to obtain the classical cutset bounds via a duality argument. Furthermore, the approach readily shows that, for acyclic memoryless wired networks, one need only consider the space of unconstrained normalized entropic vectors, thus separating channel and network coding -- a result very recently recognized in the literature.
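    As a small illustrative sketch (not from the paper), the normalization described above can be computed directly: assuming a single random variable over a finite alphabet, dividing its Shannon entropy by the log of the alphabet size yields a quantity in [0, 1], with a uniform source attaining exactly 1. The function name and the example distributions are hypothetical.

    ```python
    import numpy as np

    def normalized_entropy(pmf, alphabet_size):
        """H(X) / log2(alphabet_size): entropy normalized so that a uniform
        variable over the alphabet has normalized entropy exactly 1."""
        p = np.asarray(pmf, dtype=float).ravel()
        p = p[p > 0]  # 0 * log(0) is taken as 0
        return float(-(p * np.log2(p)).sum() / np.log2(alphabet_size))

    # Uniform over a 4-letter alphabet attains the maximum of 1.
    print(normalized_entropy([0.25, 0.25, 0.25, 0.25], 4))  # 1.0

    # A biased binary source falls strictly below 1.
    print(normalized_entropy([0.9, 0.1], 2))
    ```

    The same normalization applied entrywise to a joint-entropy vector keeps every coordinate bounded, which is what makes the closure of the space compact.
    
    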

    On the Impact of a Single Edge on the Network Coding Capacity

    In this paper, we study the effect of a single link on the capacity of a network of error-free bit pipes. More precisely, we study the change in network capacity that results when we remove a single link of capacity δ. In a recent result, we proved that if all the sources are directly available to a single super-source node, then removing a link of capacity δ cannot change the capacity region of the network by more than δ in each dimension. In this paper, we extend this result to the case of multi-source, multi-sink networks for some special network topologies. Comment: Originally presented at ITA 2011 in San Diego, CA. The arXiv version contains an updated proof of Theorem

    On the Entropy Region of Discrete and Continuous Random Variables and Network Information Theory

    We show that a large class of network information theory problems can be cast as convex optimization over the convex space of entropy vectors. A vector in 2^(n) - 1 dimensional space is called entropic if each of its entries can be regarded as the joint entropy of a particular subset of n random variables (note that any set of size n has 2^(n) - 1 nonempty subsets). While an explicit characterization of the space of entropy vectors is well-known for n = 2, 3 random variables, it is unknown for n > 3 (which is why most network information theory problems are open). We will construct inner bounds to the space of entropic vectors using tools such as quasi-uniform distributions, lattices, and Cayley's hyperdeterminant.
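    The 2^n - 1 dimensional entropy vector described above can be computed mechanically from a joint distribution: one joint entropy per nonempty subset of the n variables. Below is a minimal sketch (function names are my own, not from the paper), using the classic example of two independent fair bits and their XOR, whose entropy vector has all singleton entries equal to 1 and all larger-subset entries equal to 2.

    ```python
    import itertools
    import numpy as np

    def entropy_vector(pmf):
        """All 2^n - 1 joint entropies H(X_S), one per nonempty subset S of the
        n variables, computed from the joint pmf given as an n-dim array."""
        n = pmf.ndim
        vec = {}
        for r in range(1, n + 1):
            for S in itertools.combinations(range(n), r):
                # Marginalize out the variables not in S, then take the entropy.
                axes = tuple(i for i in range(n) if i not in S)
                p = pmf.sum(axis=axes).ravel()
                p = p[p > 0]
                vec[S] = float(-(p * np.log2(p)).sum())
        return vec

    # X, Y independent fair bits; Z = X XOR Y.
    pmf = np.zeros((2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            pmf[x, y, x ^ y] = 0.25

    v = entropy_vector(pmf)  # 2^3 - 1 = 7 entries
    ```

    For n = 3 the vector has 7 coordinates; the difficulty the abstract refers to is characterizing which points of this 2^n - 1 dimensional space are achievable for n > 3, not computing the vector of a given distribution.
    
    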

    Polymatroids and polyquantoids

    When studying entropy functions of multivariate probability distributions, polymatroids and matroids emerge. Entropy functions of pure multiparty quantum states give rise to analogous notions, called here polyquantoids and quantoids. Polymatroids and polyquantoids are related via linear mappings and duality. Quantum secret sharing schemes that are ideal are described by self-dual matroids. Expansions of integer polyquantoids to quantoids are studied and linked to those of polymatroids.

    Infinitely many constrained inequalities for the von Neumann entropy

    We exhibit infinitely many new, constrained inequalities for the von Neumann entropy, and show that they are independent of each other and of the known inequalities obeyed by the von Neumann entropy (essentially strong subadditivity). The new inequalities were proved originally by Makarychev et al. [Commun. Inf. Syst., 2(2):147-166, 2002] for the Shannon entropy, using properties of probability distributions. Our approach extends the proof of the inequalities to the quantum domain, and establishes their independence in both the quantum and the classical cases. Comment: 11 pages
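    To make the objects in the abstract concrete, here is a small sketch (not from the paper; the helper names are my own) that computes von Neumann entropies of reduced density matrices and numerically checks strong subadditivity, S(ABC) + S(B) <= S(AB) + S(BC), on the three-qubit GHZ state.

    ```python
    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]  # treat 0 * log(0) as 0
        return float(-(evals * np.log2(evals)).sum())

    def partial_trace(rho, dims, keep):
        """Reduced density matrix on the subsystems listed in `keep`."""
        n = len(dims)
        t = rho.reshape(dims + dims)  # one row axis + one column axis per subsystem
        nd = n
        for i in sorted(set(range(n)) - set(keep), reverse=True):
            t = np.trace(t, axis1=i, axis2=i + nd)  # trace out subsystem i
            nd -= 1
        d = int(np.prod([dims[i] for i in keep]))
        return t.reshape(d, d)

    # Three-qubit GHZ state (|000> + |111>)/sqrt(2).
    psi = np.zeros(8)
    psi[0] = psi[7] = 1 / np.sqrt(2)
    rho = np.outer(psi, psi)

    dims = [2, 2, 2]
    S_ABC = von_neumann_entropy(rho)  # 0 for a pure state
    S_B = von_neumann_entropy(partial_trace(rho, dims, [1]))
    S_AB = von_neumann_entropy(partial_trace(rho, dims, [0, 1]))
    S_BC = von_neumann_entropy(partial_trace(rho, dims, [1, 2]))

    # Strong subadditivity: S(ABC) + S(B) <= S(AB) + S(BC).
    assert S_ABC + S_B <= S_AB + S_BC + 1e-9
    ```

    The constrained inequalities of the paper go beyond this check: they hold only on states satisfying certain linear constraints on the entropies, which is what makes them independent of strong subadditivity.
    
    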