A ridge-parameter approach to deconvolution
Kernel methods for deconvolution have attractive features, and prevail in the
literature. However, they have disadvantages, which include the fact that they
are usually suitable only for cases where the error distribution is infinitely
supported and its characteristic function never vanishes. Even in these
settings, optimal convergence rates are achieved by kernel estimators only when
the kernel is chosen to adapt to the unknown smoothness of the target
distribution. In this paper we suggest alternative ridge methods, not involving
kernels in any way. We show that ridge methods (a) do not require the
assumption that the error-distribution characteristic function is nonvanishing;
(b) adapt themselves remarkably well to the smoothness of the target density,
with the result that the degree of smoothness does not need to be directly
estimated; and (c) give optimal convergence rates in a broad range of settings.
Comment: Published at http://dx.doi.org/10.1214/009053607000000028 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
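The ridge idea can be illustrated with a generic Fourier-domain sketch (not the paper's actual estimator): instead of dividing the empirical characteristic function by the error characteristic function, which explodes where the latter is small, add a ridge term to the denominator. The Gaussian setup, the frequency grid, and the ridge parameter `rho` below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated deconvolution problem: observe Y = X + eps, recover the density of X.
# X ~ N(0, 1), eps ~ N(0, 0.5^2); the Gaussian error char. function is known.
n = 5000
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, 0.5, n)

t = np.linspace(-10, 10, 401)                        # frequency grid
dt = t[1] - t[0]
phi_hat = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical char. fn of Y
phi_eps = np.exp(-0.5 * (0.5 * t) ** 2)              # char. fn of the error

# Ridge inversion: damp small denominators with a ridge term rho instead of
# truncating with a kernel (the choice rho = n^(-1/2) is purely illustrative).
rho = n ** (-0.5)
phi_x = phi_hat * np.conj(phi_eps) / (np.abs(phi_eps) ** 2 + rho)

# Invert the Fourier transform on a grid of x values.
xs = np.linspace(-4, 4, 161)
f_hat = (np.exp(-1j * np.outer(xs, t)) @ phi_x).real * dt / (2 * np.pi)
```

Near zero the estimate should be close to the standard normal density value 1/sqrt(2*pi) ≈ 0.399; the ridge keeps the division stable even at frequencies where the error characteristic function is tiny.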
Goodness-of-fit testing and quadratic functional estimation from indirect observations
We consider the convolution model where i.i.d. random variables X_1, ..., X_n
having unknown density f are observed with additive i.i.d. noise, independent
of the X_i's. We assume that the density f belongs to either a Sobolev class or
a class of supersmooth functions. The noise distribution is known and its
characteristic function decays either polynomially or exponentially
asymptotically. We consider the problem of goodness-of-fit testing in the
convolution model. We prove upper bounds for the risk of a test statistic
derived from a kernel estimator of the quadratic functional ∫f² based on
indirect observations. When the unknown density f is sufficiently smoother than
the noise density, we prove that this estimator is consistent,
asymptotically normal and efficient (for the variance we compute). Otherwise,
we give nonparametric upper bounds for the risk of the same estimator. We give
an approach unifying the proof of nonparametric minimax lower bounds for both
problems. We establish them for Sobolev densities and for supersmooth densities
less smooth than exponential noise. In the two setups we obtain exact testing
constants associated with the asymptotic minimax rates.
Comment: Published at http://dx.doi.org/10.1214/009053607000000118 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
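The Parseval-type identity behind such quadratic-functional estimators can be sketched with a generic U-statistic and a spectral cutoff (this is not the authors' test statistic; the Gaussian noise, sample size, and cutoff T are illustrative assumptions). By Parseval, ∫f² = (1/2π)∫|φ_f(t)|² dt, and |φ_f(t)|² can be estimated without bias from pairs of indirect observations, since E exp(it(Y_j − Y_k)) = |φ_f(t)|²|φ_ε(t)|² for j ≠ k.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, 0.5, n)     # indirect observations Y = X + eps

T = 3.0                              # spectral cutoff (illustrative choice)
t = np.linspace(-T, T, 121)
dt = t[1] - t[0]

# U-statistic estimate of |phi_f(t)|^2: average exp(it(Y_j - Y_k)) over j != k,
# then divide by the known squared error characteristic function.
e = np.exp(1j * np.outer(t, y))                       # shape (len(t), n)
u = (np.abs(e.sum(axis=1)) ** 2 - n) / (n * (n - 1))  # removes diagonal j == k
phi_eps_sq = np.exp(-0.25 * t ** 2)                   # |phi_eps|^2, N(0, 0.5^2)

# Plug into Parseval's identity, integrating over |t| <= T.
theta_hat = (u / phi_eps_sq).sum() * dt / (2 * np.pi)
```

For a standard normal f the target is ∫f² = 1/(2√π) ≈ 0.282; the cutoff T controls the familiar bias–variance trade-off, since the division amplifies noise where the error characteristic function is small.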
Testing the suitability of polynomial models in errors-in-variables problems
A low-degree polynomial model for a response curve is used commonly in
practice. It generally incorporates a linear or quadratic function of the
covariate. In this paper we suggest methods for testing the goodness of fit of
a general polynomial model when there are errors in the covariates. There, the
true covariates are not directly observed, and conventional bootstrap methods
for testing are not applicable. We develop a new approach, in which
deconvolution methods are used to estimate the distribution of the covariates
under the null hypothesis, and a ``wild'' or moment-matching bootstrap argument
is employed to estimate the distribution of the experimental errors (distinct
from the distribution of the errors in covariates). Most of our attention is
directed at the case where the distribution of the errors in covariates is
known, although we also discuss methods for estimation and testing when the
covariate error distribution is estimated. No assumptions are made about the
distribution of experimental error, and, in particular, we depart substantially
from conventional parametric models for errors-in-variables problems.
Comment: Published at http://dx.doi.org/10.1214/009053607000000361 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
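The wild-bootstrap step can be sketched in isolation (the deconvolution step for covariate errors is not shown, and the covariates below are observed without error for brevity). Bootstrap errors are regenerated as ε* = V·ε̂ with weights V whose first moments are fixed, so no distributional form is assumed for the experimental errors; the linear null and Mammen's two-point weight law below are illustrative choices, not the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: test a linear null model for a response curve.
n = 200
w = rng.uniform(-1, 1, n)                 # covariate (error-free here)
y = 1.0 + 2.0 * w + rng.normal(0, 0.3, n)

# Fit the null (linear) model and form residuals.
X = np.column_stack([np.ones(n), w])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Mammen's two-point weights: E V = 0, E V^2 = 1, E V^3 = 1.
a = -(np.sqrt(5) - 1) / 2
b = (np.sqrt(5) + 1) / 2
p = (np.sqrt(5) + 1) / (2 * np.sqrt(5))

stats = []
for _ in range(500):
    v = np.where(rng.random(n) < p, a, b)
    y_star = X @ beta + v * resid          # wild-bootstrap response
    beta_star = np.linalg.lstsq(X, y_star, rcond=None)[0]
    resid_star = y_star - X @ beta_star
    stats.append(np.sum(resid_star ** 2))  # bootstrap distribution of RSS
```

The collected `stats` approximate the null distribution of the residual sum of squares; a goodness-of-fit test would compare the observed statistic against their quantiles.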
Nonparametric estimation of mixing densities for discrete distributions
By a mixture density is meant a density of the form
π_μ(·) = ∫ π_θ(·) μ(dθ), where
(π_θ)_{θ∈Θ} is a family of probability densities and
μ is a probability measure on Θ. We consider the problem of
identifying the unknown part of this model, the mixing distribution μ, from
a finite sample of independent observations from π_μ. Assuming that the
mixing distribution has a density function, we wish to estimate this density
within appropriate function classes. A general approach is proposed and its
scope of application is investigated in the case of discrete distributions.
Mixtures of power series distributions are more specifically studied. Standard
methods for density estimation, such as kernel estimators, are available in
this context, and it has been shown that these methods are rate optimal or
almost rate optimal in balls of various smoothness spaces. For instance, these
results apply to mixtures of the Poisson distribution parameterized by its
mean. Estimators based on orthogonal polynomial sequences have also been
proposed and shown to achieve similar rates. The general approach of this paper
extends and simplifies such results. For instance, it allows us to prove
asymptotic minimax efficiency over certain smoothness classes of the
above-mentioned polynomial estimator in the Poisson case. We also study
discrete location mixtures, or discrete deconvolution, and mixtures of discrete
uniform distributions.
Comment: Published at http://dx.doi.org/10.1214/009053605000000381 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
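For Poisson mixtures, the identifiability that such estimators exploit can be seen from the factorial-moment identity: if X | θ ~ Poisson(θ) and θ ~ μ, then E[X(X−1)⋯(X−k+1)] = E[θ^k], so moments of the mixing distribution are directly estimable. A small simulation check (the Gamma mixing density, sample size, and helper function are arbitrary illustrative choices, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
theta = rng.gamma(2.0, 1.0, n)   # mixing density: Gamma(shape=2, scale=1)
x = rng.poisson(theta)           # Poisson mixture observations

def factorial_moment(x, k):
    """Empirical k-th factorial moment E[X(X-1)...(X-k+1)]."""
    prod = np.ones_like(x, dtype=float)
    for j in range(k):
        prod *= (x - j)
    return prod.mean()

m1 = factorial_moment(x, 1)   # estimates E[theta]   = 2 for Gamma(2, 1)
m2 = factorial_moment(x, 2)   # estimates E[theta^2] = 6 for Gamma(2, 1)
```

Projection-type estimators of the mixing density build on exactly this kind of moment information; the sketch only verifies the identity empirically.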