
    Rényi Entropy Power Inequalities via Normal Transport and Rotation

    Following a recent proof of Shannon's entropy power inequality (EPI), a comprehensive framework for deriving various EPIs for the Rényi entropy is presented that uses transport arguments from normal densities and a change of variable by rotation. Simple arguments are given to recover the previously known Rényi EPIs and to derive new ones, unifying a multiplicative form with constant c and a modification with exponent α from previous works. In particular, for log-concave densities, we obtain a simple transportation proof of a sharp varentropy bound.
    Comment: 17 pages. Entropy Journal, to appear.
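For context, the objects in the abstract can be written out explicitly (standard definitions; the constant c and the exponent α below stand for the parameters of the works being unified, whose exact values are not reproduced here):

```latex
% Rényi entropy of order \alpha of a density f on \mathbb{R}^n,
% and the associated entropy power:
h_\alpha(X) = \frac{1}{1-\alpha}\,\log \int_{\mathbb{R}^n} f^\alpha(x)\,dx,
\qquad
N_\alpha(X) = \frac{1}{2\pi e}\, e^{2 h_\alpha(X)/n}.

% Shannon's EPI (the case \alpha = 1), for independent X and Y:
N(X+Y) \;\ge\; N(X) + N(Y).

% The Rényi EPIs discussed combine a multiplicative form with constant c,
%   N_\alpha(X+Y) \;\ge\; c \,\bigl( N_\alpha(X) + N_\alpha(Y) \bigr),
% with variants in which the entropy powers carry an extra exponent.
```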

    Two Measures of Dependence

    Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon's mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding.
    Comment: 40 pages; 1 figure; published in Entropy.
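For reference, the Rényi divergence of order α underlying the first family is (standard definition, not specific to this paper):

```latex
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,
\log \sum_x P(x)^{\alpha}\, Q(x)^{1-\alpha},
\qquad
\lim_{\alpha \to 1} D_\alpha(P \,\|\, Q) \;=\; D_{\mathrm{KL}}(P \,\|\, Q),
```

so that applying it to the joint distribution versus the product of the marginals recovers Shannon's mutual information I(X; Y) = D_KL(P_XY ‖ P_X P_Y) at α = 1.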

    Strong converse for the quantum capacity of the erasure channel for almost all codes

    A strong converse theorem for channel capacity establishes that the error probability in any communication scheme for a given channel necessarily tends to one if the rate of communication exceeds the channel's capacity. Establishing such a theorem for the quantum capacity of degradable channels has been an elusive task, with the strongest progress so far being a so-called "pretty strong converse": Morgan and Winter proved that the quantum error of any quantum communication scheme for a given degradable channel converges to a value larger than 1/√2 in the limit of many channel uses if the quantum rate of communication exceeds the channel's quantum capacity. The present paper establishes a theorem that is a counterpart to this "pretty strong converse". We prove that a large fraction of codes with a rate exceeding the erasure channel's quantum capacity have a quantum error tending to one in the limit of many channel uses. Thus, our work adds to the body of evidence that a fully strong converse theorem should hold for the quantum capacity of the erasure channel. As a side result, we prove that the classical capacity of the quantum erasure channel obeys the strong converse property.
    Comment: 15 pages, submission to the 9th Conference on the Theory of Quantum Computation, Communication, and Cryptography (TQC 2014).
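For context, the capacities in question are known in closed form for the qubit erasure channel with erasure probability p (standard results, not derived in this abstract):

```latex
Q \;=\; \max\{0,\; 1 - 2p\} \quad \text{qubits per channel use},
\qquad
C \;=\; 1 - p \quad \text{bits per channel use},
```

and the strong-converse question is how fast the error must approach one at rates above these values.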

    Samplers and Extractors for Unbounded Functions

    Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
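To illustrate the object being constructed (this is not the paper's explicit, randomness-efficient construction): a naive averaging sampler simply averages f over independent uniform samples of {0,1}^m, and for a subgaussian f on the order of log(1/delta)/epsilon^2 samples suffice. A minimal sketch, with an illustrative subgaussian f:

```python
import math
import random

def averaging_sampler(f, m, t, seed=0):
    """Naive averaging sampler: average f over t independent uniform
    samples from {0,1}^m.  The point of explicit constructions is to
    achieve this guarantee with far fewer random bits; full
    independence is used here only for illustration."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(t):
        x = [rng.randint(0, 1) for _ in range(m)]
        total += f(x)
    return total / t

# Illustrative subgaussian f: a standardized sum of bits,
# so f(U_m) has mean 0, variance 1, and subgaussian tails.
m = 64
f = lambda x: (sum(x) - m / 2) / math.sqrt(m / 4)

# On the order of log(1/delta) / eps^2 samples suffice for subgaussian f.
eps, delta = 0.1, 0.01
t = int(2 * math.log(1 / delta) / eps**2)
est = averaging_sampler(f, m, t)
print(abs(est))  # small: within eps of the true mean 0, except w.p. <= delta
```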

    Hypothesis Testing Interpretations and Rényi Differential Privacy

    Differential privacy is a de facto standard in data privacy, with applications in the public and private sectors. One way to explain differential privacy, which is particularly appealing to statisticians and social scientists, is by means of its statistical hypothesis testing interpretation. Informally, one cannot effectively test whether a specific individual has contributed her data by observing the output of a private mechanism: no test can have both high significance and high power. In this paper, we identify conditions under which a privacy definition given in terms of a statistical divergence satisfies a similar interpretation. These conditions are useful for analyzing the distinguishing power of divergences, and we use them to study the hypothesis testing interpretation of some relaxations of differential privacy based on the Rényi divergence. This analysis also yields an improved conversion rule between these definitions and differential privacy.
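The baseline conversion that such analyses improve on is the standard rule from Rényi differential privacy (Mironov, 2017): (α, ε)-RDP implies (ε + log(1/δ)/(α−1), δ)-DP. A minimal sketch using the Gaussian mechanism's known RDP curve (this is the standard rule, not the improved rule derived in the paper):

```python
import math

def rdp_to_dp(alpha, eps_rdp, delta):
    """Standard RDP-to-DP conversion (Mironov, 2017):
    (alpha, eps)-RDP implies (eps + log(1/delta)/(alpha-1), delta)-DP."""
    return eps_rdp + math.log(1 / delta) / (alpha - 1)

# The Gaussian mechanism with noise sigma on a sensitivity-1 query
# satisfies (alpha, alpha / (2 sigma^2))-RDP for every alpha > 1,
# so one optimizes the conversion over alpha.
sigma, delta = 2.0, 1e-5
best_eps = min(rdp_to_dp(a, a / (2 * sigma**2), delta)
               for a in range(2, 200))
print(best_eps)  # ~2.53 for sigma = 2, delta = 1e-5
```

An improved conversion rule, such as the one the paper derives, would return a smaller epsilon for the same mechanism and delta.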

    Generalized distribution based diversity measurement: Survey and unification

    Social and natural sciences employ a number of different measures of diversity. The present paper surveys those that depend on the distribution of abundances among a given set of categories. Characteristic properties of the measures are generalized and a unifying notation is derived. It is argued that such unification enables scientists and decision makers to measure distribution-based diversity in a new, more flexible manner, and represents a useful complement to models of generalized feature-based diversity, such as Nehring and Puppe's (2002) theory of diversity.
    Keywords: Diversity measurement; Generalization; Non-additivity; Concavity; Numbers equivalence
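One concrete family of distribution-based diversity measures of the kind surveyed is the Hill numbers (the "numbers equivalents" the keywords allude to), of which many classical indices are monotone transforms. A minimal sketch using the standard definitions, not the paper's unified notation:

```python
import math

def hill_number(p, q):
    """Hill number (numbers equivalent) of order q for an abundance
    distribution p: D_q = (sum_i p_i^q)^(1/(1-q)).  The limit q -> 1
    is exp(Shannon entropy); q = 0 gives richness; q = 2 gives the
    inverse Simpson index."""
    p = [x for x in p if x > 0]
    if abs(q - 1.0) < 1e-9:  # continuous limit at q = 1
        return math.exp(-sum(x * math.log(x) for x in p))
    return sum(x ** q for x in p) ** (1.0 / (1.0 - q))

# For a uniform distribution over k categories, every order q reports
# the same effective number of categories, k:
uniform = [0.25] * 4
print(hill_number(uniform, 0))  # species richness: 4.0
print(hill_number(uniform, 1))  # exp(Shannon entropy): 4.0
print(hill_number(uniform, 2))  # inverse Simpson index: 4.0
```

The "numbers equivalence" property is exactly this calibration: a distribution with Hill number D is as diverse as a uniform distribution over D categories.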