On adaptive posterior concentration rates
We investigate the problem of deriving posterior concentration rates under
different loss functions in nonparametric Bayes. We first provide a lower bound
on the posterior coverage of shrinking neighbourhoods that relates the metric or
loss under which the shrinking neighbourhood is considered to an intrinsic
pre-metric linked to frequentist separation rates. In the Gaussian white noise
model, we construct feasible priors based on a spike and slab procedure
reminiscent of wavelet thresholding that achieve adaptive rates of contraction
under such metrics when the underlying parameter belongs to a
collection of Hölder balls, and that moreover achieve our lower bound. We
analyse the consequences in terms of asymptotic behaviour of posterior credible
balls as well as frequentist minimax adaptive estimation. Our results are
appended with an upper bound for the contraction rate under an arbitrary loss
in a generic regular experiment. The upper bound is attained for certain sieve
priors and enables us to extend our results to density estimation.
Comment: Published at http://dx.doi.org/10.1214/15-AOS1341 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
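The spike-and-slab shrinkage this abstract alludes to can be illustrated with a minimal sketch (not the paper's construction): a single coefficient observed in Gaussian noise receives a posterior mean that shrinks small observations towards zero and retains large ones, mimicking wavelet thresholding. The weight `w`, slab scale `tau`, and noise level `sigma` below are illustrative choices.

```python
import numpy as np

def spike_slab_posterior_mean(y, sigma=1.0, w=0.1, tau=3.0):
    """Posterior mean of theta under a spike-and-slab prior:
    theta ~ (1-w)*delta_0 + w*N(0, tau^2), observed y ~ N(theta, sigma^2).
    Small observations are shrunk towards zero (spike dominates), large
    ones are essentially kept (slab dominates), as in thresholding."""
    y = np.asarray(y, dtype=float)
    s2 = sigma**2 + tau**2
    # Marginal densities of y under the spike and slab components.
    spike = np.exp(-y**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    slab = np.exp(-y**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    # Posterior probability that theta is nonzero.
    p_slab = w * slab / (w * slab + (1 - w) * spike)
    # Conditional on the slab, the posterior mean is Gaussian shrinkage.
    return p_slab * (tau**2 / s2) * y

est = spike_slab_posterior_mean(np.array([0.1, -0.3, 5.0, -4.2, 0.05]))
```

Coefficients near zero are pulled almost exactly to zero, while coefficients well above the noise level are only mildly shrunk.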
High-dimensional Gaussian model selection on a Gaussian design
We consider the problem of estimating the conditional mean of a real Gaussian
variable $Y = \sum_{i=1}^p \theta_i X_i + \epsilon$, where the vector of
covariates $(X_1, \ldots, X_p)$ follows a
joint Gaussian distribution. This issue often occurs when one aims at
estimating the graph or the distribution of a Gaussian graphical model. We
introduce a general model selection procedure which is based on the
minimization of a penalized least-squares type criterion. It handles a variety
of problems such as ordered and complete variable selection, allows one to
incorporate prior knowledge on the model, and applies when the number of
covariates is larger than the number of observations. Moreover, it is
shown to achieve a non-asymptotic oracle inequality independently of the
correlation structure of the covariates. We also exhibit various minimax rates
of estimation in the considered framework and hence derive adaptiveness
properties of our procedure.
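A toy version of such a penalized least-squares model selection, assuming a simple penalty proportional to model dimension (a hypothetical stand-in for the paper's more refined penalty), could look like:

```python
import numpy as np
from itertools import combinations

def penalized_model_selection(X, y, pen_const=2.0):
    """Exhaustive complete variable selection: minimize
    RSS(m) + pen_const * |m| * sigma2_hat over all subsets m.
    The penalty shape and the crude variance proxy are illustrative,
    not the paper's criterion."""
    n, p = X.shape
    sigma2 = np.var(y)            # rough noise proxy for the penalty
    best, best_crit = (), np.sum(y**2)  # start from the empty model
    for k in range(1, p + 1):
        for m in combinations(range(p), k):
            Xm = X[:, m]
            beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
            rss = np.sum((y - Xm @ beta) ** 2)
            crit = rss + pen_const * k * sigma2
            if crit < best_crit:
                best, best_crit = m, crit
    return best
```

Exhaustive search is exponential in p; ordered variable selection restricts the candidate subsets to nested models, which is what makes the procedure practical in that setting.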
Adaptive estimation of covariance matrices via Cholesky decomposition
This paper studies the estimation of a large covariance matrix. We introduce
a novel procedure called ChoSelect based on the Cholesky factor of the inverse
covariance. This method uses a dimension reduction strategy by selecting the
pattern of zeros in the Cholesky factor. Alternatively, ChoSelect can be
interpreted as a graph estimation procedure for directed Gaussian graphical
models. Our approach is particularly relevant when the variables under study
have a natural ordering (e.g. time series) or more generally when the Cholesky
factor is approximately sparse. ChoSelect achieves non-asymptotic oracle
inequalities with respect to the Kullback-Leibler entropy. Moreover, it
satisfies various adaptive properties from a minimax point of view. We also
introduce and study a two-stage procedure that combines ChoSelect with the
Lasso. This combined method enables the practitioner to choose their own
trade-off between statistical efficiency and computational complexity.
Moreover, it is consistent under weaker assumptions than the Lasso. The
practical performance of the different procedures is assessed on numerical
examples.
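The sequential-regression view of the Cholesky factor underlying a procedure like ChoSelect can be sketched as follows. This unrestricted version omits the zero-pattern selection step, which would add a penalized model choice for each row.

```python
import numpy as np

def cholesky_regressions(X):
    """Estimate a lower-triangular factor T of the inverse covariance
    via sequential regressions (modified Cholesky decomposition): each
    variable is regressed on all its predecessors in the given ordering,
    so the components of T @ x are uncorrelated. Choosing which entries
    of T to set to zero is the model-selection step a procedure like
    ChoSelect performs; here the factor is left unrestricted."""
    n, p = X.shape
    T = np.eye(p)
    for j in range(1, p):
        beta, *_ = np.linalg.lstsq(X[:, :j], X[:, j], rcond=None)
        T[j, :j] = -beta
    return T
```

With time-series data the ordering is natural, and the rows of T read off each variable's autoregression on its past.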
Optimal rates of convergence for estimating the null density and proportion of nonnull effects in large-scale multiple testing
An important estimation problem that is closely related to large-scale
multiple testing is that of estimating the null density and the proportion of
nonnull effects. A few estimators have been introduced in the literature;
however, several important problems, including the evaluation of the minimax
rate of convergence and the construction of rate-optimal estimators, remain
open. In this paper, we consider optimal estimation of the null density and the
proportion of nonnull effects. Both minimax lower and upper bounds are derived.
The lower bound is established by a two-point testing argument, where at the
core is the novel construction of two least favorable marginal densities. One
density is heavy tailed in both the spatial and frequency domains, and the
other is a perturbation of it such that the characteristic functions
associated with the two densities match each other in low frequencies.
The minimax upper bound is obtained by constructing estimators which rely on
the empirical characteristic function and Fourier analysis. The estimator is
shown to be minimax rate optimal. Compared to existing methods in the
literature, the proposed procedure not only provides more precise estimates of
the null density and the proportion of the nonnull effects, but also yields
more accurate results when used inside some multiple testing procedures which
aim at controlling the False Discovery Rate (FDR). The procedure is easy to
implement and numerical results are given.
Comment: Published at http://dx.doi.org/10.1214/09-AOS696 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
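A bare-bones empirical-characteristic-function estimator of the nonnull proportion, in the spirit of (but not identical to) the construction described above, can be written in a few lines. The mixture model and the fixed frequency `t` below are assumptions for illustration.

```python
import numpy as np

def estimate_nonnull_proportion(z, t=2.0):
    """Fourier-based estimate of the proportion eps of nonnull effects.
    Assumed model: z_i ~ (1-eps)*N(0,1) + eps*N(mu_i, 1) with nonzero
    means mu_i. At frequency t, the null component contributes
    exp(-t^2/2) to Re(phi_n(t)), while the nonnull contribution
    exp(-t^2/2)*mean(cos(t*mu_i)) washes out for spread-out means, so
    eps_hat = 1 - exp(t^2/2) * Re(phi_n(t)).
    The choice of t (here fixed) is a tuning assumption; larger t
    reduces bias but inflates the variance by exp(t^2)."""
    phi = np.mean(np.exp(1j * t * np.asarray(z)))  # empirical c.f.
    return 1.0 - np.exp(t**2 / 2) * phi.real
```

The bias-variance trade-off in `t` is exactly where the minimax analysis of the paper operates: the exponential variance inflation is what drives the slow optimal rates.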
Analysing correlated noise on the surface code using adaptive decoding algorithms
Laboratory hardware is rapidly progressing towards a state where quantum
error-correcting codes can be realised. As such, we must learn how to deal with
the complex nature of the noise that may occur in real physical systems. Single
qubit Pauli errors are commonly used to study the behaviour of error-correcting
codes, but in general we might expect the environment to introduce correlated
errors to a system. Given some knowledge of structures that errors commonly
take, it may be possible to adapt the error-correction procedure to compensate
for this noise, but performing full state tomography on a physical system to
analyse this structure quickly becomes impossible as the size increases beyond
a few qubits. Here we develop and test new methods to analyse blue a particular
class of spatially correlated errors by making use of parametrised families of
decoding algorithms. We demonstrate our method numerically using a diffusive
noise model. We show that information can be learnt about the parameters of the
noise model, and additionally that the logical error rates can be improved. We
conclude by discussing how our method could be utilised in a practical setting
and propose extensions of our work to study more general error models.
Comment: 19 pages, 8 figures, comments welcome; v2 - minor typos corrected, some references added; v3 - accepted to Quantum.
Unequal Error Protection Querying Policies for the Noisy 20 Questions Problem
In this paper, we propose an open-loop unequal-error-protection querying
policy based on superposition coding for the noisy 20 questions problem. In
this problem, a player wishes to successively refine an estimate of the value
of a continuous random variable by posing binary queries and receiving noisy
responses. When the queries are designed non-adaptively as a single block and
the noisy responses are modeled as the output of a binary symmetric channel, the
20 questions problem can be mapped to an equivalent problem of channel coding
with unequal error protection (UEP). A new non-adaptive querying strategy based
on UEP superposition coding is introduced whose estimation error decreases with
an exponential rate of convergence that is significantly better than that of
the UEP repetition coding introduced by Variani et al. (2015). With the
proposed querying strategy, the rate of exponential decrease in the number of
queries matches the rate of a closed-loop adaptive scheme where queries are
sequentially designed with the benefit of feedback. Furthermore, the achievable
error exponent is significantly better than that of random block codes
employing equal error protection.
Comment: To appear in IEEE Transactions on Information Theory.
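For contrast with the superposition strategy, the repetition-coded baseline mentioned above can be simulated in a few lines; the bit budget and channel parameters are illustrative, and equal repetition is used for simplicity where UEP would protect the most significant bits more heavily.

```python
import numpy as np

def noisy_twenty_questions(target, n_bits=8, reps=5, flip_p=0.1, rng=None):
    """Non-adaptive noisy 20 questions with repetition coding: each of
    the n_bits binary-expansion queries 'is bit k of the target 1?' is
    asked reps times through a binary symmetric channel with crossover
    probability flip_p, and decoded by majority vote."""
    rng = np.random.default_rng() if rng is None else rng
    est = 0.0
    for k in range(1, n_bits + 1):
        bit = int(target * 2**k) % 2              # k-th binary digit of target
        noise = rng.random(reps) < flip_p         # channel flips
        answers = (np.full(reps, bit) + noise) % 2
        decoded = int(answers.sum() * 2 > reps)   # majority vote
        est += decoded / 2**k
    return est
```

With a noiseless channel the estimate is simply the n_bits-bit truncation of the target; with noise, the per-bit error probability of majority voting sets the error exponent that the UEP superposition scheme improves on.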