11 research outputs found

    The AWGN Red Alert Problem

    Consider the following unequal error protection scenario. One special message, dubbed the "red alert" message, is required to have an extremely small probability of missed detection. The remainder of the messages must keep their average probability of error and probability of false alarm below a certain threshold. The goal is then to design a codebook that maximizes the error exponent of the red alert message while ensuring that the average probability of error and the probability of false alarm go to zero as the blocklength goes to infinity. This red alert exponent has previously been characterized for discrete memoryless channels. This paper completely characterizes the optimal red alert exponent for additive white Gaussian noise channels with block power constraints. Comment: 13 pages, 10 figures. To appear in IEEE Transactions on Information Theory.
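
    As a rough formalization of the goal described above (the notation for the missed-detection, error, and false-alarm probabilities and the power constraint $P$ is introduced here for illustration, not taken from the paper): maximize

    \[
    E_{\mathrm{red}} \;=\; \lim_{n \to \infty} -\frac{1}{n} \log P_{\mathrm{md}}^{(n)}
    \quad \text{subject to} \quad
    P_{e}^{(n)} \to 0, \quad P_{\mathrm{fa}}^{(n)} \to 0, \quad
    \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\!\left[X_i^{2}\right] \le P .
    \]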

    A Coding Theorem for f-Separable Distortion Measures

    In this work we relax the usual separability assumption made in the rate-distortion literature and propose f-separable distortion measures, which are well suited to model non-linear penalties. The main insight behind f-separable distortion measures is to define an n-letter distortion measure to be an f-mean of single-letter distortions. We prove a rate-distortion coding theorem for stationary ergodic sources with f-separable distortion measures, and provide some illustrative examples of the resulting rate-distortion functions. Finally, we discuss connections between f-separable distortion measures and the subadditive distortion measure previously proposed in the literature.
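
    Concretely, taking the f-mean to be the usual quasi-arithmetic mean, the n-letter distortion implied by the abstract reads as follows (with $d$ a single-letter distortion and $f$ continuous and increasing; the notation is introduced here for illustration):

    \[
    d_f^{(n)}(x^n, \hat{x}^n) \;=\; f^{-1}\!\left( \frac{1}{n} \sum_{i=1}^{n} f\bigl( d(x_i, \hat{x}_i) \bigr) \right).
    \]

    Choosing $f(t) = t$ recovers the standard separable (average) distortion, while, for example, $f(t) = e^{\alpha t}$ weights large single-letter distortions more heavily.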

    Second-Order Converse for Rate-Limited Common Randomness Generation

    We employ a recent technique based on a semigroup application of the method of types to improve on a second-order converse for the common randomness (CR) generation problem. The previously known bound led to a correct second-order asymptotic rate, but an incorrect sign on the second-order term for error rates below 1/2. The new bound has both the correct scaling and the correct sign of the second-order term for small enough error rates.
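
    For orientation only, second-order (normal-approximation) results typically take the schematic form below; the precise constants and the direction of the dispersion term for CR generation are not given in the abstract, so this is merely an illustration of why error rates below 1/2 matter:

    \[
    R^{*}(n, \epsilon) \;=\; R_{\infty} \;\pm\; \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon) \;+\; o\!\left( \frac{1}{\sqrt{n}} \right).
    \]

    Since $Q^{-1}(\epsilon) > 0$ exactly when $\epsilon < 1/2$, a bound with the wrong sign on this term is off precisely in that regime.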

    Information Spectrum Converse for Minimum Entropy Couplings and Functional Representations

    Given two jointly distributed random variables $(X, Y)$, a functional representation of $X$ is a random variable $Z$ independent of $Y$ together with a deterministic function $g(\cdot, \cdot)$ such that $X = g(Y, Z)$. The problem of finding a minimum entropy functional representation is known to be equivalent to the problem of finding a minimum entropy coupling where, given a collection of probability distributions $P_1, \dots, P_m$, the goal is to find a coupling $X_1, \dots, X_m$ (with $X_i \sim P_i$) with the smallest entropy $H_\alpha(X_1, \dots, X_m)$. This paper presents a new information spectrum converse and applies it to obtain direct lower bounds on the minimum entropy in both problems. The new results improve on all known lower bounds, including previous lower bounds based on the concept of majorization. In particular, the presented proofs leverage both the information spectrum and the majorization perspectives on minimum entropy couplings and functional representations. Comment: 2023 IEEE International Symposium on Information Theory (ISIT).
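
    In symbols, the two problems tied together in the abstract can be written as follows (the Rényi order $\alpha$ follows the abstract, with $\alpha = 1$ giving the Shannon case; the formulation is a sketch based on the definitions above):

    \begin{align*}
    \text{functional representation:} \quad & \min_{Z,\, g} \; H_{\alpha}(Z)
      \quad \text{s.t.} \quad Z \perp Y, \quad X = g(Y, Z), \\
    \text{minimum entropy coupling:} \quad & \min \; H_{\alpha}(X_1, \dots, X_m)
      \quad \text{s.t.} \quad X_i \sim P_i, \quad i = 1, \dots, m.
    \end{align*}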

    A compression perspective on secrecy measures

    The relationship between secrecy, compression rate, and shared secret key rate is surveyed under perfect secrecy, equivocation, maximal leakage, local differential privacy, and secrecy by design. It is emphasized that the utility cost of jointly compressing and securing data is very sensitive to (a) the adopted secrecy metric and (b) the specifics of the compression setting. That is, although it is well known that the fundamental limits of traditional lossless variable-length compression and almost-lossless fixed-length compression are intimately related, this relationship collapses for many secrecy measures. The asymptotic fundamental limit of almost-lossless fixed-length compression remains the entropy for all secrecy measures studied. However, the fundamental limits of lossless variable-length compression are no longer the entropy under perfect secrecy, secrecy by design, and sometimes under local differential privacy. Moreover, there are significant differences in the secret key/secrecy tradeoffs between lossless and almost-lossless compression under perfect secrecy, secrecy by design, maximal leakage, and local differential privacy.
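
    For reference, two of the secrecy measures surveyed above are commonly defined as follows (standard definitions from the literature, not taken from this work), with $X$ the sensitive data and $Y$ the released output; maximal leakage on the left and the $\epsilon$-local differential privacy constraint on the right:

    \[
    \mathcal{L}(X \to Y) \;=\; \log \sum_{y} \max_{x :\, P_X(x) > 0} P_{Y \mid X}(y \mid x),
    \qquad
    P_{Y \mid X}(y \mid x) \;\le\; e^{\epsilon}\, P_{Y \mid X}(y \mid x') \quad \forall\, x, x', y .
    \]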

    Secrecy by Design With Applications to Privacy and Compression

    Secrecy by design is examined as an approach to information-theoretic secrecy. The main idea behind this approach is to design an information processing system from the ground up to be perfectly secure with respect to an explicit secrecy constraint. The principal technical contributions are decomposition bounds that allow the representation of a random variable X as a deterministic function of (S, Z), where S is a given fixed random variable and Z is constructed to be independent of S. Using the problems of privacy and lossless compression as examples, the utility cost of applying secrecy by design is investigated. Privacy is studied in the setting of the privacy funnel function previously introduced in the literature, and new bounds for the regime of zero information leakage are derived. For the problem of lossless compression, it is shown that strong information-theoretic guarantees can be achieved using a reduced secret key size and a quantifiable penalty on the compression rate. The fundamental limits for both problems are characterized with matching lower and upper bounds when the secret S is a deterministic function of the information source X.
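
    The privacy funnel mentioned above is commonly formulated as the following optimization (a standard definition from the literature, stated here for context; $S$ is the secret, $X$ the data, $Y$ the released output, and $t$ a utility level):

    \[
    \mathrm{PF}(t) \;=\; \min_{P_{Y \mid X} \,:\; I(X;Y) \,\ge\, t} \; I(S; Y).
    \]

    The zero-leakage regime studied in the paper corresponds to the constraint $I(S;Y) = 0$.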

    On mismatched unequal message protection for finite block length joint source-channel coding

    We study the problem of lossless joint source-channel coding (JSCC) in the finite block length regime from an unequal message protection (UMP) perspective. We demonstrate that the problem of lossless JSCC can be cast in terms of UMP codes previously studied. We show that an optimal JSCC can be constructed from a matched UMP code. We further derive a finite block length bound that characterizes the performance of a JSCC constructed from a UMP code not perfectly matched to the source. This bound is evaluated for a binary memoryless source transmitted over a binary symmetric channel. Two-class schemes previously studied in the literature are compared with the proposed scheme. Empirically, the JSCCs based on UMP codes approach the performance of the optimal matched code quickly in the number of classes used.
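
    A schematic reading of the UMP-based construction (not the paper's exact bound; the classes $\mathcal{C}_k$ and per-class error levels $\epsilon_k$ are illustrative): partition source sequences into classes and protect class $k$ at error level $\epsilon_k$, so that

    \[
    P_e \;\le\; \sum_{k} \mathbb{P}\bigl( X^n \in \mathcal{C}_k \bigr)\, \epsilon_k .
    \]

    A matched code tunes $\epsilon_k$ to the class probabilities; the empirical observation in the abstract is that a small number of classes already approaches this matched performance.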