
Bayes and maximum likelihood for $L^1$-Wasserstein deconvolution of Laplace mixtures

Abstract

We consider the problem of recovering a distribution function on the real line from observations additively contaminated with errors following the standard Laplace distribution. Assuming that the latent distribution is completely unknown leads to a nonparametric deconvolution problem. We begin by studying the rates of convergence relative to the $L^2$-norm and the Hellinger metric for the direct problem of estimating the sampling density, which is a mixture of Laplace densities with a possibly unbounded set of locations: the rate of convergence for the Bayes' density estimator corresponding to a Dirichlet process prior over the space of all mixing distributions on the real line matches, up to a logarithmic factor, the $n^{-3/8}\log^{1/8} n$ rate for the maximum likelihood estimator. Then, appealing to an inversion inequality that translates the $L^2$-norm and the Hellinger distance between general kernel mixtures, with a kernel density having a polynomially decaying Fourier transform, into any $L^p$-Wasserstein distance, $p \geq 1$, between the corresponding mixing distributions, provided their Laplace transforms are finite in some neighborhood of zero, we derive the rates of convergence in the $L^1$-Wasserstein metric for the Bayes' and maximum likelihood estimators of the mixing distribution. Merging in the $L^1$-Wasserstein distance between Bayes and maximum likelihood follows as a by-product, along with an assessment of the stochastic order of the discrepancy between the two estimation procedures.
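
To fix ideas, the sampling model above can be written as $Y = X + \varepsilon$, where $X \sim G$ is a draw from the unknown mixing distribution and $\varepsilon$ is standard Laplace noise, so that the sampling density is the mixture $f_G(y) = \int \tfrac{1}{2} e^{-|y-\theta|}\,\mathrm{d}G(\theta)$. The following is a minimal simulation sketch of this setting, not the paper's procedure: it approximates the nonparametric maximum likelihood estimator of $G$ by EM over a fixed grid of support points and compares the fit to the true mixing distribution in $W_1$, using the fact that on the real line $W_1(G, G_0) = \int |G(t) - G_0(t)|\,\mathrm{d}t$. The two-point choice of $G_0$, the grid, and the sample size are all illustrative assumptions.

```python
# Illustrative sketch only: a grid-based EM approximation to the
# nonparametric MLE of the mixing distribution G under standard
# Laplace noise.  True G0, grid, and sample size are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulate Y = X + eps, X ~ G0 (two-point distribution), eps ~ Laplace(0, 1).
n = 2000
x = rng.choice([-2.0, 1.5], size=n, p=[0.4, 0.6])
y = x + rng.laplace(loc=0.0, scale=1.0, size=n)

# Laplace kernel k(y | theta) = 0.5 * exp(-|y - theta|) on a location grid.
theta = np.linspace(-6.0, 6.0, 121)
K = 0.5 * np.exp(-np.abs(y[:, None] - theta[None, :]))   # n x m kernel matrix

# EM for the mixing weights w, starting from the uniform distribution.
w = np.full(theta.size, 1.0 / theta.size)
for _ in range(500):
    post = K * w                             # unnormalized responsibilities
    post /= post.sum(axis=1, keepdims=True)  # E-step: normalize per observation
    w = post.mean(axis=0)                    # M-step: average responsibilities

# W1 between fitted and true mixing distributions via their CDFs:
# on the line, W1 equals the L1 distance between distribution functions.
G_hat = np.cumsum(w)
G0 = np.where(theta >= 1.5, 1.0, np.where(theta >= -2.0, 0.4, 0.0))
w1 = np.sum(np.abs(G_hat - G0)[:-1] * np.diff(theta))
print("approx W1(G_hat, G0):", round(float(w1), 3))
```

The fitted weights concentrate near the true atoms at $-2$ and $1.5$, and the printed $W_1$ value shrinks as $n$ grows. The paper's rate results concern the exact nonparametric MLE and the Dirichlet process posterior, not this discretized surrogate.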
