
    Robust Bayes-Like Estimation: Rho-Bayes estimation

    We consider the problem of estimating the joint distribution $P$ of $n$ independent random variables within the Bayes paradigm from a non-asymptotic point of view. Assuming that $P$ admits some density $s$ with respect to a given reference measure, we consider a density model $\overline S$ for $s$ that we endow with a prior distribution $\pi$ (with support $\overline S$), and we build a robust alternative to the classical Bayes posterior distribution which possesses similar concentration properties around $s$ whenever $s$ belongs to the model $\overline S$. Furthermore, in density estimation, the Hellinger distance between the classical and the robust posterior distributions tends to 0 as the number of observations tends to infinity, under suitable assumptions on the model and the prior, provided that the model $\overline S$ contains the true density $s$. However, unlike what happens with the classical Bayes posterior distribution, we show that the concentration properties of this new posterior distribution are preserved even when the model is misspecified, that is, when $s$ does not belong to $\overline S$ but is close enough to it with respect to the Hellinger distance.
    Comment: 68 pages
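The paper's concentration guarantees are stated in the Hellinger metric, $H(p,q) = \big(1 - \int\sqrt{pq}\,d\mu\big)^{1/2}$. As a reminder of how that distance behaves (a minimal numerical sketch of my own, not code from the paper), the following discretized computation checks against the closed form for two unit-variance Gaussians:

```python
import numpy as np

def hellinger(p, q, dx):
    """Hellinger distance between two densities sampled on a common grid:
    H^2(p, q) = 1 - integral sqrt(p * q); H takes values in [0, 1]."""
    bc = np.sum(np.sqrt(p * q)) * dx      # Bhattacharyya coefficient ~ the integral
    return np.sqrt(max(0.0, 1.0 - bc))

# Two unit-variance Gaussian densities on a fine grid
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
gauss = lambda t, mu: np.exp(-(t - mu) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
p, q = gauss(x, 0.0), gauss(x, 1.0)

# Closed form for N(mu1, 1) vs N(mu2, 1): H^2 = 1 - exp(-(mu1 - mu2)^2 / 8)
print(hellinger(p, q, dx))                  # ~ 0.3428
print(np.sqrt(1.0 - np.exp(-1.0 / 8.0)))   # same value, analytically
```

"Close enough with respect to the Hellinger distance", as in the misspecification result above, means this quantity is small.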

    Bayes and empirical Bayes: do they merge?

    Bayesian inference is attractive for its coherence and good frequentist properties. However, it is a common experience that eliciting an honest prior may be difficult and, in practice, people often take an empirical Bayes approach, plugging empirical estimates of the prior hyperparameters into the posterior distribution. Even if not rigorously justified, the underlying idea is that, when the sample size is large, empirical Bayes leads to "similar" inferential answers. Yet, precise mathematical results seem to be missing. In this work, we give a more rigorous justification in terms of merging of Bayes and empirical Bayes posterior distributions. We consider two notions of merging: Bayesian weak merging and frequentist merging in total variation. Since weak merging is related to consistency, we provide sufficient conditions for consistency of empirical Bayes posteriors. We also show that, under regularity conditions, the empirical Bayes procedure asymptotically selects the value of the hyperparameter for which the prior most favors the "truth". Examples include empirical Bayes density estimation with Dirichlet process mixtures.
    Comment: 27 pages
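A toy illustration of the total-variation notion of merging (my own sketch, not an example from the paper): take a conjugate normal model whose only hyperparameter is the prior mean m. Plugging in the estimate m_hat = x_bar (empirical Bayes) versus using an honest fixed m gives two normal posteriors whose total variation distance has a closed form and vanishes as n grows:

```python
import math, random

random.seed(0)

def posterior(xbar, n, m):
    """X_i | theta ~ N(theta, 1) with prior theta ~ N(m, 1):
    the posterior is N((n*xbar + m)/(n+1), 1/(n+1))."""
    return (n * xbar + m) / (n + 1), 1.0 / math.sqrt(n + 1)

def tv_equal_var_normals(mu1, mu2, sd):
    """Total variation between N(mu1, sd^2) and N(mu2, sd^2):
    TV = 2*Phi(|mu1 - mu2| / (2*sd)) - 1 = erf(|mu1 - mu2| / (2*sd*sqrt(2)))."""
    return math.erf(abs(mu1 - mu2) / (2.0 * sd * math.sqrt(2.0)))

m_true = 0.5                      # hyperparameter an honest Bayesian would use
tv = {}
for n in (10, 100, 10000):
    xs = [random.gauss(m_true, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    mu_bayes, sd = posterior(xbar, n, m_true)   # honest Bayes posterior
    mu_eb, _ = posterior(xbar, n, xbar)         # empirical Bayes: plug in m_hat = xbar
    tv[n] = tv_equal_var_normals(mu_bayes, mu_eb, sd)
    print(n, tv[n])               # total variation shrinks as n grows
```

Here the two posterior means differ by |m - x_bar|/(n+1) while the posterior scale is only 1/sqrt(n+1), which is why the distance collapses; the paper's results concern when this kind of merging holds in general.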

    Consistency of Bayes factor for nonnested model selection when the model dimension grows

    Zellner's $g$-prior is a popular prior choice for model selection problems in the context of normal regression models. Wang and Sun [J. Statist. Plann. Inference 147 (2014) 95-105] recently adopted this prior and placed a special hyper-prior on $g$, which results in a closed-form expression of the Bayes factor for nested linear model comparisons. They have shown that, under very general conditions, the Bayes factor is consistent when the two competing models are of order $O(n^{\tau})$ for $\tau < 1$, and for $\tau = 1$ it is almost consistent except for a small inconsistency region around the null hypothesis. In this paper, we study Bayes factor consistency for nonnested linear models with a growing number of parameters. Some of the proposed results generalize those for the Bayes factor in the case of nested linear models. Specifically, we compare the asymptotic behavior of the proposed Bayes factor with that of the intrinsic Bayes factor in the literature.
    Comment: Published at http://dx.doi.org/10.3150/15-BEJ720 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
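For concreteness, here is a hedged sketch of the closed-form Bayes factor under Zellner's $g$-prior with a fixed $g$ (the standard expression for comparing a linear model against the intercept-only null; note this fixes $g$ rather than using Wang and Sun's hyper-prior, and the comparison here is nested, not the nonnested case the paper studies):

```python
import numpy as np

def bf_gprior_vs_null(y, X, g):
    """Bayes factor for M1: y = alpha + X beta + eps  vs  M0: y = alpha + eps,
    under Zellner's g-prior on beta with fixed g:
        BF_10 = (1 + g)^((n - 1 - p)/2) / (1 + g * (1 - R^2))^((n - 1)/2),
    computed on the log scale for numerical stability."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)          # center covariates; the intercept is common
    yc = y - y.mean()
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss = np.sum((yc - Xc @ beta) ** 2)
    r2 = 1.0 - rss / np.sum(yc ** 2)
    log_bf = 0.5 * (n - 1 - p) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))
    return np.exp(log_bf)

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))
y_signal = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)  # real effects present
y_noise = rng.normal(size=n)                                   # the null model is true
print(bf_gprior_vs_null(y_signal, X, g=n))  # very large: strong evidence for M1
print(bf_gprior_vs_null(y_noise, X, g=n))   # small: evidence for the null
```

The choice g = n above is the unit-information convention; consistency questions of the kind the paper studies concern how such Bayes factors behave as p grows with n.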

    Empirical Bayes and Full Bayes for Signal Estimation

    We consider signals that follow a parametric distribution where the parameter values are unknown. To estimate such signals from noisy measurements in scalar channels, we study the empirical performance of an empirical Bayes (EB) approach and a full Bayes (FB) approach. We then apply EB and FB to solve compressed sensing (CS) signal estimation problems by successively denoising a scalar Gaussian channel within an approximate message passing (AMP) framework. Our numerical results show that FB achieves better performance than EB in scalar channel denoising problems when the signal dimension is small. In the CS setting, the signal dimension must be large enough for AMP to work well; for large signal dimensions, AMP performs similarly with FB and EB.
    Comment: This work was presented at the Information Theory and Application workshop (ITA), San Diego, CA, Feb. 201
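To make the EB idea concrete for a scalar Gaussian channel, here is a toy sketch under assumptions of my own (a zero-mean Gaussian signal prior with unknown variance, not necessarily one of the paper's parametric families): the signal variance is estimated by moments and plugged into the posterior-mean (Wiener) denoiser, which for a large signal dimension nearly matches the oracle that knows the true variance.

```python
import numpy as np

rng = np.random.default_rng(2)

def eb_denoise(y, noise_var=1.0):
    """Empirical-Bayes posterior-mean denoiser for the scalar Gaussian channel
    y = x + w, w ~ N(0, noise_var), assuming x ~ N(0, s2) with s2 unknown.
    s2 is estimated by moments, since E[y^2] = s2 + noise_var."""
    s2_hat = max(float(np.mean(y ** 2)) - noise_var, 0.0)
    shrink = s2_hat / (s2_hat + noise_var)   # posterior-mean (Wiener) shrinkage
    return shrink * y

s2_true = 4.0
n = 10000                                    # large dimension: EB ~ oracle
x = rng.normal(0.0, np.sqrt(s2_true), size=n)
y = x + rng.normal(size=n)

x_eb = eb_denoise(y)
x_oracle = (s2_true / (s2_true + 1.0)) * y   # Bayes with the true variance known

print(np.mean((y - x) ** 2))       # ~ 1.0: raw channel noise level
print(np.mean((x_eb - x) ** 2))    # ~ 0.8: close to the oracle MMSE s2/(s2+1)
print(np.mean((x_oracle - x) ** 2))
```

An FB treatment would instead place a hyperprior on the unknown variance and integrate it out; the abstract's finding is that this extra care matters mainly when the signal dimension is small.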