
    Bayes and maximum likelihood for $L^1$-Wasserstein deconvolution of Laplace mixtures

    We consider the problem of recovering a distribution function on the real line from observations additively contaminated with errors following the standard Laplace distribution. Assuming that the latent distribution is completely unknown leads to a nonparametric deconvolution problem. We begin by studying the rates of convergence, relative to the $L^2$-norm and the Hellinger metric, for the direct problem of estimating the sampling density, which is a mixture of Laplace densities with a possibly unbounded set of locations: the rate of convergence for the Bayes density estimator corresponding to a Dirichlet process prior over the space of all mixing distributions on the real line matches, up to a logarithmic factor, the $n^{-3/8}\log^{1/8} n$ rate for the maximum likelihood estimator. Then, appealing to an inversion inequality that translates the $L^2$-norm and the Hellinger distance between general kernel mixtures, for a kernel density with polynomially decaying Fourier transform, into any $L^p$-Wasserstein distance, $p\geq 1$, between the corresponding mixing distributions, provided their Laplace transforms are finite in some neighborhood of zero, we derive the rates of convergence in the $L^1$-Wasserstein metric for the Bayes and maximum likelihood estimators of the mixing distribution. Merging in the $L^1$-Wasserstein distance between Bayes and maximum likelihood follows as a by-product, along with an assessment of the stochastic order of the discrepancy between the two estimation procedures.
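    As a rough illustration of the direct and inverse problems above, the sketch below simulates data contaminated with standard Laplace noise, computes a finite-grid approximation to the nonparametric MLE of the mixing distribution by EM (the grid, sample size, and iteration count are illustrative assumptions, not taken from the paper), and reports the $L^1$-Wasserstein distance between the estimated and true mixing distributions via their CDFs.

        import numpy as np

        rng = np.random.default_rng(0)

        # True mixing distribution F0: a two-point mixture on the real line.
        locs_true = np.array([-1.0, 2.0])
        weights_true = np.array([0.4, 0.6])

        # Observations Y = X + e, with X ~ F0 and e ~ standard Laplace.
        n = 2000
        x = rng.choice(locs_true, size=n, p=weights_true)
        y = x + rng.laplace(loc=0.0, scale=1.0, size=n)

        # Laplace kernel matrix K[i, j] = f(y_i - theta_j), f(u) = exp(-|u|)/2.
        grid = np.linspace(-5.0, 5.0, 201)
        K = 0.5 * np.exp(-np.abs(y[:, None] - grid[None, :]))

        # Finite-grid NPMLE of the mixing weights via the EM
        # (self-consistency) update for mixture proportions.
        w = np.full(grid.size, 1.0 / grid.size)
        for _ in range(500):
            dens = K @ w                      # mixture density at each y_i
            w *= (K.T @ (1.0 / dens)) / n     # w_j <- (1/n) sum_i r_ij

        # W1 distance = integral of |F_hat - F0| over the real line,
        # approximated by a Riemann sum of the CDFs on the grid.
        F_hat = np.cumsum(w)
        F0 = np.array([(weights_true * (locs_true <= t)).sum() for t in grid])
        print("approximate W1 distance:", np.abs(F_hat - F0).sum() * (grid[1] - grid[0]))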

    On asymptotically efficient maximum likelihood estimation of linear functionals in Laplace measurement error models

    Maximum likelihood estimation of linear functionals in the inverse problem of deconvolution is considered. Given observations of a random sample from a distribution $P_0\equiv P_{F_0}$ indexed by a (potentially infinite-dimensional) parameter $F_0$, which is the distribution of the latent variable in a standard additive Laplace measurement error model, one wants to estimate a linear functional of $F_0$. Asymptotically efficient maximum likelihood estimation (MLE) of integral linear functionals of the mixing distribution $F_0$ in a convolution model with the Laplace kernel density is investigated. Situations in which the functional of interest can be consistently estimated at the $n^{-1/2}$-rate by the plug-in MLE, which is then asymptotically normal and efficient in the sense of achieving the variance lower bound, are distinguished from those in which no integral linear functional can be estimated at the parametric rate, which precludes any possibility of asymptotic efficiency. The $\sqrt{n}$-convergence of the MLE, valid when the mixing distribution is degenerate at a single location point, fails in general, as does asymptotic normality. It is shown that there exists no regular estimator sequence for integral linear functionals of the mixing distribution that, when recentered about the estimand and $\sqrt{n}$-rescaled, is asymptotically efficient, viz., has a Gaussian limit distribution with minimum variance. One can thus only expect estimation at some slower rate and, often, with a non-Gaussian limit distribution.
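    The degenerate case mentioned above admits a concrete check: when $F_0$ is a point mass at $\theta_0$, the data are i.i.d. Laplace$(\theta_0,1)$ and the MLE of $\theta_0$ is the sample median, which is $\sqrt{n}$-consistent and asymptotically normal. A minimal Monte Carlo sketch (sample sizes and replication counts are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(1)

        # Degenerate mixing distribution: F0 = point mass at theta0, so the
        # observations are i.i.d. Laplace(theta0, 1) and the MLE is the median.
        theta0 = 0.7
        reps = 1000

        for n in (100, 1000, 10000):
            medians = np.median(theta0 + rng.laplace(size=(reps, n)), axis=1)
            # For standard Laplace noise, f(0) = 1/2, so the asymptotic
            # variance of the median is 1/(4 f(0)^2 n) = 1/n; the rescaled
            # standard deviation below should therefore stabilize near 1.
            print(n, np.sqrt(n) * medians.std())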
