    The operational meaning of min- and max-entropy

    We show that the conditional min-entropy H_min(A|B) of a bipartite state rho_AB is directly related to the maximum achievable overlap with a maximally entangled state when only local actions on the B-part of rho_AB are allowed. In the special case where A is classical, this overlap corresponds to the probability of guessing A given B. In a similar vein, we connect the conditional max-entropy H_max(A|B) to the maximum fidelity of rho_AB with a product state that is completely mixed on A. In the case where A is classical, this corresponds to the security of A when used as a secret key in the presence of an adversary holding B. Because min- and max-entropies are known to characterize information-processing tasks such as randomness extraction and state merging, our results establish a direct connection between these tasks and basic operational problems. For example, they imply that the (logarithm of the) probability of guessing A given B is a lower bound on the number of uniform secret bits that can be extracted from A relative to an adversary holding B.
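
    A minimal sketch of the two identities described in the abstract, assuming the standard modern formulation for a classical-quantum state rho_AB = \sum_x p_x |x><x|_A \otimes \rho_B^x (the POVM \{E_x\} is an illustrative optimal measurement, \pi_A = \mathbb{1}_A / d_A is the completely mixed state on A, and F denotes the fidelity):

        2^{-H_{\min}(A|B)} = p_{\mathrm{guess}}(A|B) = \max_{\{E_x\}} \sum_x p_x \, \mathrm{Tr}\!\left(E_x \rho_B^x\right)

        2^{H_{\max}(A|B)} = d_A \cdot \max_{\sigma_B} F\!\left(\rho_{AB},\, \pi_A \otimes \sigma_B\right)^2

    The first identity is the guessing-probability interpretation mentioned above; the second makes precise the maximum fidelity with a product state that is completely mixed on A.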

    Min- and Max-Entropy in Infinite Dimensions

    We consider an extension of the conditional min- and max-entropies to infinite-dimensional separable Hilbert spaces. We show that these satisfy the characterizing properties known from the finite-dimensional case and retain their information-theoretic operational interpretations, e.g., the min-entropy as maximum achievable quantum correlation and the max-entropy as decoupling accuracy. We furthermore generalize the smoothed versions of these entropies and prove an infinite-dimensional quantum asymptotic equipartition property. To facilitate these generalizations, we show that the min- and max-entropy can be expressed in terms of convergent sequences of finite-dimensional min- and max-entropies, which provides a convenient technique to extend proofs from the finite- to the infinite-dimensional setting.
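
    A minimal sketch of the asymptotic equipartition property referred to above, in its standard smoothed form for i.i.d. states (H(A|B) is the conditional von Neumann entropy; the paper's contribution is extending this statement to infinite dimensions):

        \lim_{n \to \infty} \frac{1}{n} H_{\min}^{\varepsilon}(A^n|B^n)_{\rho^{\otimes n}} \;=\; \lim_{n \to \infty} \frac{1}{n} H_{\max}^{\varepsilon}(A^n|B^n)_{\rho^{\otimes n}} \;=\; H(A|B)_{\rho}

    for every smoothing parameter \varepsilon \in (0,1).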

    Bottleneck Problems: Information and Estimation-Theoretic View

    Information bottleneck (IB) and privacy funnel (PF) are two closely related optimization problems which have found applications in machine learning, the design of privacy algorithms, capacity problems (e.g., Mrs. Gerber's Lemma), and strong data processing inequalities, among others. In this work, we first investigate the functional properties of IB and PF through a unified theoretical framework. We then connect them to three information-theoretic coding problems, namely hypothesis testing against independence, noisy source coding, and dependence dilution. Leveraging these connections, we prove a new cardinality bound for the auxiliary variable in IB, making its computation more tractable for discrete random variables. In the second part, we introduce a general family of optimization problems, termed bottleneck problems, by replacing the mutual information in IB and PF with other notions of mutual information, namely f-information and Arimoto's mutual information. We then argue that, unlike IB and PF, these problems lead to easily interpretable guarantees in a variety of inference tasks with statistical constraints on accuracy and privacy. Although the underlying optimization problems are non-convex, we develop a technique to evaluate bottleneck problems in closed form by equivalently expressing them in terms of the lower convex or upper concave envelope of certain functions. By applying this technique to the binary case, we derive closed-form expressions for several bottleneck problems.
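
    A minimal sketch of the two underlying optimization problems in their standard form (variable names are illustrative; T is an auxiliary variable obeying the Markov chain Y - X - T, and R, r are the respective constraint levels):

        \mathrm{IB}:\; \max_{P_{T|X} \,:\, I(X;T) \le R} I(Y;T) \qquad\qquad \mathrm{PF}:\; \min_{P_{T|X} \,:\, I(X;T) \ge r} I(Y;T)

    The bottleneck problems studied in the paper arise by replacing these mutual informations with f-information or Arimoto's mutual information.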