
    How to Quantize $n$ Outputs of a Binary Symmetric Channel to $n-1$ Bits?

    Suppose that $Y^n$ is obtained by observing a uniform Bernoulli random vector $X^n$ through a binary symmetric channel with crossover probability $\alpha$. The "most informative Boolean function" conjecture postulates that the maximal mutual information between $Y^n$ and any Boolean function $\mathrm{b}(X^n)$ is attained by a dictator function. In this paper, we consider the "complementary" case in which the Boolean function is replaced by $f:\{0,1\}^n\to\{0,1\}^{n-1}$, namely, an $(n-1)$-bit quantizer, and show that $I(f(X^n);Y^n)\leq(n-1)\cdot(1-h(\alpha))$ for any such $f$. Thus, in this case, the optimal function is of the form $f(x^n)=(x_1,\ldots,x_{n-1})$.
    Comment: 5 pages, accepted ISIT 201
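    The bound above is easy to check numerically. Below is a minimal brute-force sketch (our illustration, not code from the paper; all function names are our own): it computes $I(f(X^n);Y^n)$ by exhaustive enumeration for small $n$ and confirms that the quantizer $f(x^n)=(x_1,\ldots,x_{n-1})$ meets $(n-1)(1-h(\alpha))$ with equality.

```python
# Minimal numerical sketch (not from the paper): brute-force check that the
# dictator-style quantizer f(x^n) = (x_1, ..., x_{n-1}) attains
# I(f(X^n); Y^n) = (n-1) * (1 - h(alpha)) over a BSC(alpha).
import itertools
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information_quantizer(n, alpha, f):
    """I(f(X^n); Y^n) for X^n uniform on {0,1}^n observed through BSC(alpha)."""
    xs = list(itertools.product((0, 1), repeat=n))
    joint = {}  # (f(x), y) -> probability
    for x in xs:
        for y in xs:
            flips = sum(a != b for a, b in zip(x, y))
            p = (alpha ** flips) * ((1 - alpha) ** (n - flips)) / 2 ** n
            key = (f(x), y)
            joint[key] = joint.get(key, 0.0) + p
    pf, py = {}, {}
    for (fx, y), p in joint.items():
        pf[fx] = pf.get(fx, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (pf[fx] * py[y]))
               for (fx, y), p in joint.items() if p > 0)

n, alpha = 3, 0.11
dictator = lambda x: x[:-1]            # keep the first n-1 bits
lhs = mutual_information_quantizer(n, alpha, dictator)
rhs = (n - 1) * (1 - h(alpha))
print(lhs, rhs)  # both approximately 1.0002 for alpha = 0.11
```

    Equality holds for this $f$ because each retained coordinate passes through an independent use of the BSC, contributing exactly $1-h(\alpha)$ bits.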

    Key Capacity with Limited One-Way Communication for Product Sources

    We show that for product sources, rate splitting is optimal for secret key agreement using limited one-way communication at two terminals. This yields an alternative proof of the tensorization property of a strong data processing inequality originally studied by Erkip and Cover and amended recently by Anantharam et al. We derive a "water-filling" solution of the communication-rate versus key-rate tradeoff for two arbitrarily correlated vector Gaussian sources, for the case with an eavesdropper, and for stationary Gaussian processes.
    Comment: 5 pages, ISIT 201
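    For reference, a minimal LaTeX statement of the strong data processing constant and its tensorization for product sources, in notation we are assuming (following Anantharam et al.; the paper's own formulation may differ):

```latex
% Strong data processing constant of a pair (X, Y):
% supremum over auxiliary U forming the Markov chain U -- X -- Y.
\[
  s^*(X;Y) \;=\; \sup_{\substack{U \,:\, U - X - Y \\ I(U;X) > 0}}
  \frac{I(U;Y)}{I(U;X)}
\]
% Tensorization: for independent pairs (X_1, Y_1) and (X_2, Y_2),
\[
  s^*(X_1 X_2 \,;\, Y_1 Y_2) \;=\;
  \max\bigl\{ s^*(X_1;Y_1),\; s^*(X_2;Y_2) \bigr\}
\]
```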

    Justification of Logarithmic Loss via the Benefit of Side Information

    We consider a natural measure of relevance: the reduction in optimal prediction risk in the presence of side information. For any given loss function, this relevance measure captures the benefit of side information for performing inference on a random variable under this loss function. When such a measure satisfies a natural data processing property, and the random variable of interest has alphabet size greater than two, we show that it is uniquely characterized by the mutual information, and the corresponding loss function coincides with logarithmic loss. In doing so, our work provides a new characterization of mutual information, and justifies its use as a measure of relevance. When the alphabet is binary, we characterize the only admissible forms the measure of relevance can assume while obeying the specified data processing property. Our results naturally extend to measuring causal influence between stochastic processes, where we unify different causal-inference measures in the literature as instantiations of directed information.
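    The following minimal sketch (our illustration, not the paper's code; the joint pmf is arbitrary) verifies the central identity on a toy example: under logarithmic loss, the reduction in optimal prediction risk afforded by side information $X$ equals the mutual information $I(X;Y) = H(Y) - H(Y|X)$.

```python
# Minimal sketch (our illustration): under logarithmic loss, the benefit of
# side information X for predicting Y equals the mutual information I(X;Y).
import numpy as np

p_xy = np.array([[0.3, 0.1],     # arbitrary joint pmf of (X, Y) on {0,1}^2
                 [0.1, 0.5]])

p_y = p_xy.sum(axis=0)           # marginal of Y
p_x = p_xy.sum(axis=1)           # marginal of X

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Optimal log-loss risk without side information: H(Y).
risk_no_side = entropy(p_y)

# Optimal log-loss risk with side information: H(Y|X) = sum_x p(x) H(Y|X=x).
risk_side = sum(p_x[i] * entropy(p_xy[i] / p_x[i]) for i in range(len(p_x)))

reduction = risk_no_side - risk_side
mi = sum(p_xy[i, j] * np.log2(p_xy[i, j] / (p_x[i] * p_y[j]))
         for i in range(2) for j in range(2))
print(reduction, mi)  # identical: the relevance measure is I(X;Y)
```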

    Privacy-Aware MMSE Estimation

    We investigate the problem of the predictability of a random variable $Y$ under a privacy constraint dictated by a random variable $X$, correlated with $Y$, where both predictability and privacy are assessed in terms of the minimum mean-squared error (MMSE). Given that $X$ and $Y$ are connected via a binary-input symmetric-output (BISO) channel, we derive the \emph{optimal} random mapping $P_{Z|Y}$ such that the MMSE of $Y$ given $Z$ is minimized while the MMSE of $X$ given $Z$ is greater than $(1-\epsilon)\mathsf{var}(X)$ for a given $\epsilon\geq 0$. We also consider the case where $(X,Y)$ are continuous and $P_{Z|Y}$ is restricted to be an additive noise channel.
    Comment: 9 pages, 3 figures
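    A minimal numerical sketch of the two quantities in this tradeoff (our illustration; the randomized mapping below is arbitrary, not the paper's optimal $P_{Z|Y}$, and all names are our own): for a discrete Markov chain $X - Y - Z$, it evaluates the utility term $\mathsf{mmse}(Y|Z)$ and checks the privacy constraint $\mathsf{mmse}(X|Z)\geq(1-\epsilon)\mathsf{var}(X)$.

```python
# Minimal sketch (our illustration; the mapping below is arbitrary, not the
# paper's optimal P_{Z|Y}): evaluating both sides of the privacy-aware MMSE
# tradeoff for discrete (X, Y, Z) with X -- Y -- Z a Markov chain.
import numpy as np

p_xy = np.array([[0.35, 0.15],        # joint pmf of (X, Y), values in {0, 1}
                 [0.10, 0.40]])
p_z_given_y = np.array([[0.9, 0.1],   # an arbitrary randomized mapping P_{Z|Y}
                        [0.2, 0.8]])

vals = np.array([0.0, 1.0])           # alphabet, used as real values for MMSE

p_y = p_xy.sum(axis=0)
p_x = p_xy.sum(axis=1)
p_xz = p_xy @ p_z_given_y             # Markov chain: p(x,z) = sum_y p(x,y) p(z|y)
p_yz = p_y[:, None] * p_z_given_y
p_z = p_yz.sum(axis=0)

def mmse(p_vz, p_z):
    """MMSE of V given Z: E[ Var(V | Z) ] for a discrete joint pmf."""
    total = 0.0
    for z in range(len(p_z)):
        cond = p_vz[:, z] / p_z[z]
        mean = (vals * cond).sum()
        total += p_z[z] * ((vals - mean) ** 2 * cond).sum()
    return total

var_x = (vals ** 2 * p_x).sum() - ((vals * p_x).sum()) ** 2
eps = 0.15
print("mmse(Y|Z) =", mmse(p_yz, p_z))              # utility: smaller is better
print("privacy ok:", mmse(p_xz, p_z) >= (1 - eps) * var_x)
```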

    Compressed Secret Key Agreement: Maximizing Multivariate Mutual Information Per Bit

    The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components for generating a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion, by offering a more structured achievability scheme and some simpler conjectures to prove.
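    For reference, the multivariate mutual information that the key rate captures can be written in partition form (a standard expression from the multiterminal secrecy literature; we are assuming this is the quantity intended, and for two users it reduces to the usual $I(Z_1;Z_2)$):

```latex
% Multivariate mutual information of Z_V = (Z_i : i in V), in partition form;
% the minimum ranges over partitions P of V into at least two parts.
\[
  I(Z_V) \;=\;
  \min_{\substack{\mathcal{P}\ \text{partition of } V \\ |\mathcal{P}| \ge 2}}
  \frac{1}{|\mathcal{P}| - 1}
  \left[ \sum_{C \in \mathcal{P}} H(Z_C) \;-\; H(Z_V) \right]
\]
```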