1,982 research outputs found
Contraction of Locally Differentially Private Mechanisms
We investigate the contraction properties of locally differentially private mechanisms. More specifically, we derive tight upper bounds on the divergence between the output distributions $P\mathsf{K}$ and $Q\mathsf{K}$ of an $\varepsilon$-LDP mechanism $\mathsf{K}$ in terms of a divergence between the corresponding input distributions $P$ and $Q$, respectively. Our first main technical result presents a sharp upper bound on the $\chi^2$-divergence $\chi^2(P\mathsf{K} \| Q\mathsf{K})$ in terms of $\chi^2(P \| Q)$ and $\varepsilon$. We also show that the same result holds for a large family of divergences, including KL-divergence and squared Hellinger distance. The second main technical result gives an upper bound on $\chi^2(P\mathsf{K} \| Q\mathsf{K})$ in terms of the total variation distance $\mathsf{TV}(P, Q)$ and $\varepsilon$. We then utilize these bounds to establish locally private versions of the van Trees inequality, Le Cam's, Assouad's, and the mutual information methods, which are powerful tools for bounding minimax estimation risks. These results are shown to lead to better privacy analyses than the state of the art in several statistical problems, such as entropy and discrete distribution estimation, non-parametric density estimation, and hypothesis testing.
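To make the objects in this abstract concrete, here is a minimal Python sketch (not from the paper) that pushes two input distributions $P$ and $Q$ through binary randomized response, a standard $\varepsilon$-LDP mechanism, and compares $\chi^2(P\mathsf{K} \| Q\mathsf{K})$ with $\chi^2(P \| Q)$. The channel, the example distributions, and the $\tanh^2(\varepsilon/2)$ comparison (a classical contraction factor for this particular binary channel, not the paper's bound) are illustrative assumptions.

```python
import numpy as np

def randomized_response(p, eps):
    """Push a distribution p = (p0, p1) over {0, 1} through the binary
    randomized-response channel K, a standard eps-LDP mechanism that
    reports the true bit with probability e^eps / (1 + e^eps)."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    K = np.array([[keep, 1.0 - keep],
                  [1.0 - keep, keep]])  # row-stochastic channel matrix
    return p @ K

def chi2_divergence(p, q):
    """chi^2(p || q) = sum_x (p(x) - q(x))^2 / q(x)."""
    return np.sum((p - q) ** 2 / q)

eps = 1.0
P = np.array([0.9, 0.1])
Q = np.array([0.5, 0.5])

chi2_in = chi2_divergence(P, Q)
chi2_out = chi2_divergence(randomized_response(P, eps),
                           randomized_response(Q, eps))

print(f"chi^2(P || Q)   = {chi2_in:.4f}")
print(f"chi^2(PK || QK) = {chi2_out:.4f}")
# For this particular binary channel the classical chi^2 contraction
# factor is tanh(eps/2)^2, so the output divergence is strictly smaller:
print(f"tanh(eps/2)^2 * chi^2(P || Q) = {np.tanh(eps / 2) ** 2 * chi2_in:.4f}")
```

Running it shows the output divergence shrinking by roughly the factor above; this contraction phenomenon is what the paper quantifies sharply for general $\varepsilon$-LDP mechanisms and general $f$-divergences.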
Minimax Estimation of Kernel Mean Embeddings
In this paper, we study the minimax estimation of the Bochner integral $\mu_k(P) := \int_{\mathcal{X}} k(\cdot, x)\, dP(x)$, also called the kernel mean embedding, based on random samples drawn i.i.d. from $P$, where $k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is a positive definite kernel. Various estimators (including the empirical estimator) $\hat{\theta}_n$ of $\mu_k(P)$ are studied in the literature, wherein all of them satisfy $\| \hat{\theta}_n - \mu_k(P) \|_{\mathcal{H}_k} = O_P(n^{-1/2})$, with $\mathcal{H}_k$ being the reproducing kernel Hilbert space induced by $k$. The main contribution of the paper is in showing that the above-mentioned rate of $n^{-1/2}$ is minimax in the $\|\cdot\|_{\mathcal{H}_k}$ and $\|\cdot\|_{L^2(\mathbb{R}^d)}$ norms over the class of discrete measures and the class of measures that have an infinitely differentiable density, with $k$ being a continuous translation-invariant kernel on $\mathbb{R}^d$. The interesting aspect of this result is that the minimax rate is independent of the smoothness of the kernel and the density of $P$ (if it exists). This result has practical consequences in statistical applications, as the mean embedding has been widely employed in non-parametric hypothesis testing, density estimation, causal inference and feature selection, through its relation to energy distance (and distance covariance).
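As a concrete illustration of the objects above, the following Python sketch (a toy under my own assumptions, not the paper's code) forms the empirical embedding $\hat{\mu}_n = \frac{1}{n}\sum_i k(\cdot, X_i)$ for a discrete measure $P$ and a Gaussian kernel, computes the RKHS-norm error exactly by expanding the squared norm into kernel sums, and shows it decaying roughly like $n^{-1/2}$; the kernel bandwidth, atoms, and weights are illustrative choices.

```python
import numpy as np

def gauss_kernel(x, y, gamma=1.0):
    """Gaussian (continuous, translation-invariant, positive definite) kernel on R."""
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

def rkhs_error(sample, atoms, probs, gamma=1.0):
    """RKHS-norm error || mu_hat_n - mu_k(P) ||_{H_k} when P is the discrete
    measure sum_j probs[j] * delta_{atoms[j]} and mu_hat_n is the empirical
    embedding (1/n) sum_i k(., X_i)."""
    k_xx = gauss_kernel(sample, sample, gamma).mean()           # (1/n^2) sum_ij k(X_i, X_j)
    k_xa = (gauss_kernel(sample, atoms, gamma) @ probs).mean()  # (1/n) sum_i E_P k(X_i, .)
    k_aa = probs @ gauss_kernel(atoms, atoms, gamma) @ probs    # E_P E_P k(., .)
    return np.sqrt(max(k_xx - 2.0 * k_xa + k_aa, 0.0))

rng = np.random.default_rng(0)
atoms = np.array([-1.0, 0.0, 2.0])
probs = np.array([0.2, 0.5, 0.3])

for n in [100, 400, 1600, 6400]:
    errs = [rkhs_error(rng.choice(atoms, size=n, p=probs), atoms, probs)
            for _ in range(50)]
    err = np.mean(errs)
    print(f"n = {n:5d}   mean RKHS error = {err:.4f}   sqrt(n) * error = {np.sqrt(n) * err:.3f}")
```

The last column staying roughly constant as $n$ grows is the empirical counterpart of the $O_P(n^{-1/2})$ rate discussed in the abstract.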
Convergence of Smoothed Empirical Measures with Applications to Entropy Estimation
This paper studies convergence of empirical measures smoothed by a Gaussian kernel. Specifically, consider approximating $P \ast \mathcal{N}_\sigma$, for $\mathcal{N}_\sigma \triangleq \mathcal{N}(0, \sigma^2 \mathrm{I}_d)$, by $\hat{P}_n \ast \mathcal{N}_\sigma$, where $\hat{P}_n$ is the empirical measure, under different statistical distances. The convergence is examined in terms of the Wasserstein distance, total variation (TV), Kullback-Leibler (KL) divergence, and $\chi^2$-divergence. We show that the approximation error under the TV distance and 1-Wasserstein distance ($\mathsf{W}_1$) converges at rate $e^{O(d)} n^{-\frac{1}{2}}$, in remarkable contrast to a typical $n^{-\frac{1}{d}}$ rate for unsmoothed $\mathsf{W}_1$ (and $d \ge 3$). For the KL divergence, squared 2-Wasserstein distance ($\mathsf{W}_2^2$), and $\chi^2$-divergence, the convergence rate is $n^{-1}$, but only if $P$ achieves finite input-output $\chi^2$ mutual information across the additive white Gaussian noise channel. If the latter condition is not met, the rate changes to $\omega(n^{-1})$ for the KL divergence and $\mathsf{W}_2^2$, while the $\chi^2$-divergence becomes infinite - a curious dichotomy. As a main application we consider estimating the differential entropy $h(P \ast \mathcal{N}_\sigma)$ in the high-dimensional regime. The distribution $P$ is unknown but $n$ i.i.d. samples from it are available. We first show that any good estimator of $h(P \ast \mathcal{N}_\sigma)$ must have sample complexity that is exponential in $d$. Using the empirical approximation results, we then show that the absolute-error risk of the plug-in estimator converges at the parametric rate $e^{O(d)} n^{-\frac{1}{2}}$, thus establishing the minimax rate-optimality of the plug-in. Numerical results that demonstrate a significant empirical superiority of the plug-in approach to general-purpose differential entropy estimators are provided.

Comment: arXiv admin note: substantial text overlap with arXiv:1810.1158
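Below is a minimal sketch of the plug-in construction described in this abstract, under illustrative assumptions (the function name, Monte Carlo sample size, and the $P = \mathcal{N}(0, 1)$ sanity check are mine, not the authors'): it forms the Gaussian mixture $\hat{P}_n \ast \mathcal{N}_\sigma$ from the samples and estimates its differential entropy by Monte Carlo.

```python
import numpy as np
from scipy.special import logsumexp

def plugin_smoothed_entropy(samples, sigma, n_mc=5000, rng=None):
    """Plug-in estimate of h(P * N_sigma): the differential entropy of the
    Gaussian mixture P_hat_n * N_sigma = (1/n) sum_i N(X_i, sigma^2 I_d),
    approximated by Monte Carlo as -E[log q(Z)] with Z drawn from the mixture."""
    rng = rng or np.random.default_rng()
    n, d = samples.shape
    # Draw Z from the mixture: pick a sample uniformly at random, add Gaussian noise.
    idx = rng.integers(n, size=n_mc)
    Z = samples[idx] + sigma * rng.standard_normal((n_mc, d))
    # log q(Z): mixture log-density, evaluated with logsumexp for numerical stability.
    sq_dists = ((Z[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    log_comp = -sq_dists / (2.0 * sigma**2) - 0.5 * d * np.log(2.0 * np.pi * sigma**2)
    log_q = logsumexp(log_comp, axis=1) - np.log(n)
    return -log_q.mean()

# Sanity check in d = 1 with P = N(0, 1): then P * N_sigma = N(0, 1 + sigma^2),
# whose differential entropy is 0.5 * log(2 * pi * e * (1 + sigma^2)).
rng = np.random.default_rng(1)
sigma, n = 1.0, 1000
X = rng.standard_normal((n, 1))
estimate = plugin_smoothed_entropy(X, sigma, rng=rng)
truth = 0.5 * np.log(2.0 * np.pi * np.e * (1.0 + sigma**2))
print(f"plug-in estimate: {estimate:.3f}   h(P * N_sigma): {truth:.3f}")
```

The abstract's rate-optimality result says that, up to the $e^{O(d)}$ factor, the absolute error of this plug-in construction decays at the parametric $n^{-\frac{1}{2}}$ rate.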
- …