Bayes and maximum likelihood for $L^1$-Wasserstein deconvolution of Laplace mixtures
We consider the problem of recovering a distribution function on the real
line from observations additively contaminated with errors following the
standard Laplace distribution. Assuming that the latent distribution is
completely unknown leads to a nonparametric deconvolution problem. We begin by
studying the rates of convergence relative to the $L^1$-norm and the Hellinger
metric for the direct problem of estimating the sampling density, which is a
mixture of Laplace densities with a possibly unbounded set of locations: the
rate of convergence for the Bayes' density estimator corresponding to a
Dirichlet process prior over the space of all mixing distributions on the real
line matches, up to a logarithmic factor, the rate for the maximum
likelihood estimator. Then, appealing to an inversion
inequality translating the $L^1$-norm and the Hellinger distance between
general kernel mixtures, with a kernel density having polynomially decaying
Fourier transform, into any $p$-Wasserstein distance, $p \ge 1$, between the
corresponding mixing distributions, provided their Laplace transforms are
finite in some neighborhood of zero, we derive the rates of convergence in the
$L^1$-Wasserstein metric for the Bayes' and maximum likelihood estimators of
the mixing distribution. Merging in the $L^1$-Wasserstein distance between
Bayes and maximum likelihood follows as a by-product, along with an assessment
on the stochastic order of the discrepancy between the two estimation
procedures.
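As a minimal numerical sketch of the sampling model described above (not of the paper's estimators): observations come from a Laplace mixture, i.e. $Y = X + \varepsilon$ with $X$ drawn from an unknown mixing distribution and $\varepsilon$ standard Laplace. The snippet below, with a purely illustrative choice of mixing distribution, simulates such data and computes the empirical 1-Wasserstein distance between the contaminated and latent samples using SciPy; all concrete names and parameters are assumptions for illustration.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 5_000

# Latent draws X_i from an (unknown) mixing distribution G:
# here a two-point location mixture, purely for illustration.
x = np.where(rng.random(n) < 0.3, -2.0, 1.0)

# Additive contamination with standard Laplace errors: Y_i = X_i + eps_i.
eps = rng.laplace(loc=0.0, scale=1.0, size=n)
y = x + eps

# Empirical 1-Wasserstein distance between the contaminated and latent
# samples; deconvolution aims to recover G from the y sample alone.
print(wasserstein_distance(y, x))
```

Since the samples are coupled pairwise, this empirical distance is bounded by the average absolute error, roughly the Laplace scale.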
On asymptotically efficient maximum likelihood estimation of linear functionals in Laplace measurement error models
Maximum likelihood estimation of linear functionals in the inverse problem of
deconvolution is considered. Given a random sample of observations from a
distribution indexed by a (potentially infinite-dimensional) parameter, which
is the distribution of the latent variable in a standard additive Laplace
measurement error model, one wants to estimate a linear functional of this
parameter. Asymptotically efficient maximum
likelihood estimation (MLE) of integral linear functionals of the mixing
distribution in a convolution model with the Laplace kernel density is
investigated. Situations are distinguished in which the functional of interest
can be consistently estimated at $\sqrt{n}$-rate by the plug-in MLE, which is
asymptotically normal and efficient, in the sense of achieving the variance
lower bound, from those in which no integral linear functional can be estimated
at parametric rate, which precludes any possibility for asymptotic efficiency.
The $\sqrt{n}$-convergence of the MLE, valid in the case of a degenerate mixing
distribution at a single location point, fails in general, as does asymptotic
normality. It is shown that there exists no regular estimator sequence for
integral linear functionals of the mixing distribution that, when recentered
about the estimand and $\sqrt{n}$-rescaled, is asymptotically efficient,
viz., has a Gaussian limit distribution with minimum variance. One can
thus only expect estimation with some slower rate and, often, with a
non-Gaussian limit distribution.
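A hedged illustration of one favorable case (an assumed toy setup, not the paper's construction): for the linear functional $\psi(G) = \int x \, dG(x)$, centered Laplace errors leave the mean unchanged, $E[Y] = E[X]$, so the sample mean of the contaminated data is a $\sqrt{n}$-consistent plug-in estimate of this particular functional. All distributions and values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Latent sample from a two-point mixing distribution G with mean
# 0.6 * 1.0 + 0.4 * (-0.5) = 0.4 (purely illustrative).
x = np.where(rng.random(n) < 0.6, 1.0, -0.5)

# Contaminated observations: standard Laplace noise is centered, so
# E[Y] = E[X], and the sample mean of Y estimates the linear
# functional psi(G) = integral of x dG(x) at sqrt(n)-rate.
y = x + rng.laplace(loc=0.0, scale=1.0, size=n)

estimate = y.mean()
print(estimate)  # close to 0.4 up to sampling error
```

By contrast, functionals that are not robust to the convolution in this way are the ones for which, as the abstract notes, no regular $\sqrt{n}$-efficient estimator sequence exists.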