8,720 research outputs found
Estimating the privacy of quantum-random numbers
We analyze the information an attacker can obtain about the numbers generated by
a user through measurements on one subsystem of a system consisting of two
entangled two-level systems. The attacker and the user make measurements only on
their respective subsystems. The density matrix of the user's subsystem alone
completely determines the upper bound on the information accessible to the
attacker. We compare and contrast this information with the corresponding bounds
provided by quantum state discrimination.
Comment: 26 pages, 4 figures
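The role played by the user's reduced density matrix can be illustrated numerically. The sketch below is a generic illustration, not the paper's actual calculation: it builds a partially entangled two-qubit pure state, traces out the attacker's subsystem, and computes the von Neumann entropy of the user's reduced state, which for a pure joint state equals the entanglement entropy and bounds the correlations available to the attacker. The state parameter `t` is an assumed example value.

```python
import numpy as np

def partial_trace_second(rho, d=2):
    """Trace out the second qubit of a two-qubit density matrix."""
    rho = rho.reshape(d, d, d, d)        # indices: (a, b, a', b')
    return np.einsum('abcb->ac', rho)    # sum over b = b'

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i * log2(lambda_i) over nonzero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Partially entangled pure state cos(t)|00> + sin(t)|11> (assumed example)
t = np.pi / 8
psi = np.zeros(4)
psi[0], psi[3] = np.cos(t), np.sin(t)
rho_joint = np.outer(psi, psi)

rho_user = partial_trace_second(rho_joint)
S = von_neumann_entropy(rho_user)
print(f"entropy of user's reduced state: {S:.4f} bits")
```

The entropy ranges from 0 bits for a product state to 1 bit for a maximally entangled (Bell) state, tracking how much the attacker's subsystem can reveal.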
Almost Perfect Privacy for Additive Gaussian Privacy Filters
We study the maximal mutual information about a random variable $Y$
(representing non-private information) displayed through an additive Gaussian
channel when guaranteeing that only $\epsilon$ bits of information are leaked
about a random variable $X$ (representing private information) that is
correlated with $Y$. Denoting this quantity by $g_\epsilon(X;Y)$, we show that
for perfect privacy, i.e., $\epsilon=0$, one has $g_0(X;Y)=0$ for any pair of
absolutely continuous random variables $(X,Y)$ and then derive a second-order
approximation for $g_\epsilon(X;Y)$ for small $\epsilon$. This approximation is
shown to be related to the strong data processing inequality for mutual
information under suitable conditions on the joint distribution $P_{XY}$. Next,
motivated by an operational interpretation of data privacy, we formulate the
privacy-utility tradeoff in the same setup using estimation-theoretic
quantities and obtain explicit bounds for this tradeoff when $\epsilon$ is
sufficiently small using the approximation formula derived for
$g_\epsilon(X;Y)$.
Comment: 20 pages. To appear in Springer-Verlag
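For jointly Gaussian variables the mutual informations in this setup have closed forms, which makes the privacy-utility tension easy to see numerically. The sketch below is a toy illustration with assumed parameters, not the paper's derivation: it passes $Y$ through an additive Gaussian channel $Z = Y + N$ and evaluates both the utility $I(Y;Z)$ and the leakage $I(X;Z)$ as the noise variance grows.

```python
import numpy as np

def gaussian_mi(rho):
    """Mutual information (bits) between jointly Gaussian variables with correlation rho."""
    return -0.5 * np.log2(1.0 - rho**2)

# Assumed toy model: X, Y jointly Gaussian with unit variances and corr rho_xy.
rho_xy = 0.8
var_y = 1.0

for var_n in [0.1, 1.0, 10.0]:
    # Z = Y + N with N ~ N(0, var_n) independent of (X, Y)
    rho_yz = np.sqrt(var_y / (var_y + var_n))   # corr(Y, Z)
    rho_xz = rho_xy * rho_yz                    # corr(X, Z)
    utility = gaussian_mi(rho_yz)               # I(Y; Z)
    leakage = gaussian_mi(rho_xz)               # I(X; Z)
    print(f"var_n={var_n:5.1f}  I(Y;Z)={utility:.3f}  I(X;Z)={leakage:.3f}")
```

Since $|\mathrm{corr}(X,Z)| \le |\mathrm{corr}(Y,Z)|$ here, the leakage is always below the utility, and both vanish as the noise grows; the interesting regime studied in the paper is small $\epsilon$, i.e., large noise.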
Information Extraction Under Privacy Constraints
A privacy-constrained information extraction problem is considered where for
a pair of correlated discrete random variables $(X,Y)$ governed by a given
joint distribution, an agent observes $Y$ and wants to convey to a potentially
public user as much information about $Y$ as possible without compromising the
amount of information revealed about $X$. To this end, the so-called {\em
rate-privacy function} is introduced to quantify the maximal amount of
information (measured in terms of mutual information) that can be extracted
from $Y$ under a privacy constraint between $X$ and the extracted information,
where privacy is measured using either mutual information or maximal
correlation. Properties of the rate-privacy function are analyzed and
information-theoretic and estimation-theoretic interpretations of it are
presented for both the mutual information and maximal correlation privacy
measures. It is also shown that the rate-privacy function admits a closed-form
expression for a large family of joint distributions of $(X,Y)$. Finally, the
rate-privacy function under the mutual information privacy measure is
considered for the case where $(X,Y)$ has a joint probability density function
by studying the problem where the extracted information is a uniform
quantization of $Y$ corrupted by additive Gaussian noise. The asymptotic
behavior of the rate-privacy function is studied as the quantization resolution
grows without bound and it is observed that not all of the properties of the
rate-privacy function carry over from the discrete to the continuous case.
Comment: 55 pages, 6 figures. Improved the organization and added detailed
literature review
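A restricted version of this optimization is easy to explore numerically. The sketch below is an illustration under assumed parameters, not the paper's closed-form result: it takes binary $X$ and $Y$ linked by a binary symmetric channel (BSC), restricts the extracted information $Z$ to be the output of a second BSC applied to $Y$, and sweeps its crossover probability to find the best achievable $I(Y;Z)$ subject to $I(X;Z) \le \epsilon$. Because only one family of mappings is searched, this is a lower bound on the true rate-privacy function.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Assumed toy model: X ~ Bernoulli(1/2), Y = X through BSC(alpha),
# extracted Z = Y through BSC(delta).  The cascade X -> Z is a BSC with
# crossover probability alpha*(1-delta) + (1-alpha)*delta.
alpha = 0.1
eps = 0.05   # privacy budget: require I(X;Z) <= eps bits

best = 0.0
for delta in np.linspace(0.0, 0.5, 2001):
    i_yz = 1.0 - h2(delta)                           # I(Y;Z), uniform Y
    cross = alpha * (1 - delta) + (1 - alpha) * delta
    i_xz = 1.0 - h2(cross)                           # I(X;Z), uniform X
    if i_xz <= eps:
        best = max(best, i_yz)
print(f"restricted rate-privacy value at eps={eps}: {best:.3f} bits")
```

Sweeping `eps` from 0 upward traces out the restricted privacy-utility curve for this joint distribution.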
Privacy-Aware MMSE Estimation
We investigate the problem of the predictability of a random variable $Y$ under
a privacy constraint dictated by a random variable $X$, correlated with $Y$,
where both predictability and privacy are assessed in terms of the minimum
mean-squared error (MMSE). Given that $X$ and $Y$ are connected via a
binary-input symmetric-output (BISO) channel, we derive the \emph{optimal}
random mapping $P_{Z|Y}$ such that the MMSE of $Y$ given $Z$ is minimized while
the MMSE of $X$ given $Z$ is greater than $(1-\epsilon)\mathrm{var}(X)$ for a
given $\epsilon \ge 0$. We also consider the case where $(X,Y)$ are continuous
and $P_{Z|Y}$ is restricted to be an additive noise channel.
Comment: 9 pages, 3 figures
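For the continuous additive-noise case, jointly Gaussian variables give closed-form MMSE expressions. The sketch below is a toy Gaussian illustration with assumed parameters, not the paper's BISO result: it shows how adding noise to $Y$ before release drives the MMSE of the private $X$ up toward its prior variance (more privacy) while also degrading the estimate of $Y$ (less utility).

```python
import numpy as np

# Assumed toy model: (X, Y) jointly Gaussian, unit variances, corr rho_xy;
# release Z = Y + N with N ~ N(0, var_n).  For jointly Gaussian (U, Z),
# mmse(U|Z) = var(U) * (1 - corr(U, Z)**2).
rho_xy = 0.8

for var_n in [0.0, 0.5, 2.0, 10.0]:
    rho_yz2 = 1.0 / (1.0 + var_n)      # corr(Y, Z)^2
    rho_xz2 = rho_xy**2 * rho_yz2      # corr(X, Z)^2
    mmse_y = 1.0 - rho_yz2             # error in estimating Y (utility loss)
    mmse_x = 1.0 - rho_xz2             # error in estimating X (privacy), var(X)=1
    print(f"var_n={var_n:5.1f}  mmse(Y|Z)={mmse_y:.3f}  mmse(X|Z)={mmse_x:.3f}")
```

The privacy constraint of the abstract, MMSE of $X$ given $Z$ at least $(1-\epsilon)\mathrm{var}(X)$, corresponds here to choosing `var_n` large enough that `mmse_x >= 1 - eps`.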
Bounds on inference
Lower bounds for the average probability of error of estimating a hidden
variable X given an observation of a correlated random variable Y, and Fano's
inequality in particular, play a central role in information theory. In this
paper, we present a lower bound for the average estimation error based on the
marginal distribution of X and the principal inertias of the joint distribution
matrix of X and Y. Furthermore, we discuss an information measure based on the
sum of the largest principal inertias, called k-correlation, which generalizes
maximal correlation. We show that k-correlation satisfies the Data Processing
Inequality and is convex in the conditional distribution of Y given X. Finally,
we investigate how to answer a fundamental question in inference and privacy:
given an observation Y, can we estimate a function f(X) of the hidden random
variable X with an average error below a certain threshold? We provide a
general method for answering this question using an approach based on
rate-distortion theory.
Comment: Allerton 2013 with extended proof, 10 pages
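Fano's inequality provides the simplest instance of the error lower bounds discussed in this abstract. The sketch below is a generic illustration, not the principal-inertia-based bound of the paper: it evaluates the weakened Fano bound $P_e \ge (H(X|Y) - 1)/\log_2|\mathcal{X}|$ on a small assumed joint distribution.

```python
import numpy as np

def cond_entropy(p_xy):
    """H(X|Y) in bits for a joint pmf with rows indexed by x, columns by y."""
    p_y = p_xy.sum(axis=0)
    h = 0.0
    for y in range(p_xy.shape[1]):
        p_x_given_y = p_xy[:, y] / p_y[y]
        nz = p_x_given_y[p_x_given_y > 0]
        h += p_y[y] * (-np.sum(nz * np.log2(nz)))
    return h

# Assumed toy joint distribution over |X| = 4, |Y| = 2.
p_xy = np.array([[0.15, 0.10],
                 [0.10, 0.15],
                 [0.20, 0.05],
                 [0.05, 0.20]])
assert abs(p_xy.sum() - 1.0) < 1e-12

h_x_given_y = cond_entropy(p_xy)
fano_bound = max(0.0, (h_x_given_y - 1.0) / np.log2(p_xy.shape[0]))
print(f"H(X|Y) = {h_x_given_y:.3f} bits; Fano lower bound on error: {fano_bound:.3f}")
```

Any estimator of $X$ from $Y$ must have average error probability at least `fano_bound`; the paper's principal-inertia bounds refine this by using spectral structure of the joint distribution matrix beyond $H(X|Y)$.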