
    Robust Bayes-Like Estimation: Rho-Bayes estimation

    We consider the problem of estimating the joint distribution P of n independent random variables within the Bayes paradigm from a non-asymptotic point of view. Assuming that P admits some density s with respect to a given reference measure, we consider a density model S̄ for s that we endow with a prior distribution π (with support S̄), and we build a robust alternative to the classical Bayes posterior distribution which possesses similar concentration properties around s whenever s belongs to the model S̄. Furthermore, in density estimation, the Hellinger distance between the classical and the robust posterior distributions tends to 0 as the number of observations tends to infinity, under suitable assumptions on the model and the prior, provided that the model S̄ contains the true density s. However, unlike what happens with the classical Bayes posterior distribution, we show that the concentration properties of this new posterior distribution are preserved even when the model is misspecified, that is, when s does not belong to S̄ but is close enough to it with respect to the Hellinger distance. Comment: 68 pages
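    The abstract measures both robustness and posterior agreement in Hellinger distance. As an illustrative aside (not the paper's estimator), the Hellinger distance between two densities discretized on a common grid can be computed as follows; the two Gaussian densities here are hypothetical stand-ins for a true density s and a nearby misspecified model density:

```python
import numpy as np

def hellinger(p, q, dx):
    """Hellinger distance between two densities sampled on a common grid.

    H(p, q) = sqrt(0.5 * integral (sqrt(p) - sqrt(q))^2 dx), so H = 0 for
    identical densities and H = 1 for densities with disjoint supports.
    """
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx)

# Two Gaussian densities on a grid: a "true" density and a model density
# close to it (illustrative choices only).
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
s = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)          # N(0, 1)
m = np.exp(-0.5 * (x - 0.1)**2) / np.sqrt(2 * np.pi)  # N(0.1, 1)

print(hellinger(s, s, dx))  # ~0.0
print(hellinger(s, m, dx))  # small but nonzero
```

    For two unit-variance Gaussians the distance is available in closed form, H² = 1 − exp(−(Δμ)²/8), which the grid computation reproduces closely.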

    Empirical Bayes and Full Bayes for Signal Estimation

    We consider signals that follow a parametric distribution where the parameter values are unknown. To estimate such signals from noisy measurements in scalar channels, we study the empirical performance of an empirical Bayes (EB) approach and a full Bayes (FB) approach. We then apply EB and FB to solve compressed sensing (CS) signal estimation problems by successively denoising a scalar Gaussian channel within an approximate message passing (AMP) framework. Our numerical results show that FB achieves better performance than EB in scalar channel denoising problems when the signal dimension is small. In the CS setting, the signal dimension must be large enough for AMP to work well; for large signal dimensions, AMP performs similarly with FB and EB. Comment: This work was presented at the Information Theory and Application workshop (ITA), San Diego, CA, Feb. 201
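    A minimal sketch of the scalar-channel setting, under a simplifying assumption not taken from the paper: the signal prior is Gaussian with unknown variance, and empirical Bayes estimates that variance by moment matching before applying the posterior-mean (Wiener shrinkage) denoiser. The paper's EB/FB comparison and the AMP machinery are considerably richer than this.

```python
import numpy as np

def eb_denoise(y, noise_var):
    """Empirical-Bayes posterior-mean denoiser for y = x + w, w ~ N(0, noise_var).

    Assumes x ~ N(0, sig2) with sig2 unknown and estimated by moment
    matching (E[y^2] = sig2 + noise_var). The posterior mean is then the
    shrinkage estimate sig2 / (sig2 + noise_var) * y.
    """
    sig2_hat = max(np.mean(y**2) - noise_var, 0.0)
    return sig2_hat / (sig2_hat + noise_var) * y

rng = np.random.default_rng(0)
n, noise_var = 10_000, 0.5
x = rng.normal(0.0, 1.0, n)                       # true signal, variance 1
y = x + rng.normal(0.0, np.sqrt(noise_var), n)    # scalar Gaussian channel

x_hat = eb_denoise(y, noise_var)
print(np.mean((x_hat - x) ** 2))  # near the oracle MMSE 1*0.5/1.5 = 1/3
```

    A full-Bayes variant would instead place a hyperprior on the signal variance and average the denoiser over it, which matters most (as the abstract notes) when the signal dimension is small and the variance estimate is noisy.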

    Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    Background: Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes.

    Results: We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process.

    Conclusions: We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
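    To make the stepping-stone idea concrete, here is a minimal sketch for a conjugate toy model (a normal mean with a normal prior), not the phylogenetic setting of the paper: the marginal likelihood is bridged from prior (β = 0) to posterior (β = 1) through a ladder of power posteriors, each ratio estimated by direct sampling. The paper's contribution is a direct Bayes-factor version of this scheme; here we estimate a single model's log marginal likelihood and check it against the closed-form answer.

```python
import numpy as np

def stepping_stone_logZ(x, betas, n_draws, rng):
    """Stepping-stone estimate of the log marginal likelihood for the
    conjugate toy model  x_i ~ N(theta, 1),  theta ~ N(0, 1).

    The power posterior at inverse temperature b is available in closed
    form, theta | x, b ~ N(b*sum(x)/(b*n + 1), 1/(b*n + 1)), so each
    ratio E_{b_k}[L(theta)^(b_{k+1}-b_k)] can be estimated by i.i.d.
    sampling (in realistic models MCMC would be needed here).
    """
    n, sx = len(x), np.sum(x)

    def loglik(theta):
        # log prod_i N(x_i | theta, 1), vectorised over the theta draws
        return (-0.5 * n * np.log(2 * np.pi)
                - 0.5 * np.sum((x[None, :] - theta[:, None]) ** 2, axis=1))

    logZ = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        prec = b0 * n + 1.0
        theta = rng.normal(b0 * sx / prec, np.sqrt(1.0 / prec), n_draws)
        ll = (b1 - b0) * loglik(theta)
        # log-mean-exp for numerical stability
        logZ += np.max(ll) + np.log(np.mean(np.exp(ll - np.max(ll))))
    return logZ

rng = np.random.default_rng(1)
x = rng.normal(0.5, 1.0, 10)
betas = np.linspace(0.0, 1.0, 33)   # 32 stepping stones
est = stepping_stone_logZ(x, betas, 5000, rng)

# Exact log marginal likelihood for this conjugate model, for comparison.
n, sx = len(x), np.sum(x)
exact = -0.5 * (n * np.log(2 * np.pi) + np.log(n + 1.0)
                + np.sum(x**2) - sx**2 / (n + 1.0))
print(est, exact)  # the two agree closely
```

    A direct Bayes-factor estimator replaces the prior endpoint with a second model's posterior, so the path runs between the competing models rather than from prior to posterior, which is the construction the abstract evaluates.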

    Asymptotic Accuracy of Bayesian Estimation for a Single Latent Variable

    In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accuracy for observable variables, the theoretical analysis of estimating latent variables has not been thoroughly investigated. In a previous study, we determined the accuracy of a Bayes estimation for the joint probability of the latent variables in a dataset, and we proved that the Bayes method is asymptotically more accurate than the maximum-likelihood method. However, the accuracy of the Bayes estimation for a single latent variable remains unknown. In the present paper, we derive the asymptotic expansions of the error functions, which are defined by the Kullback-Leibler divergence, for two types of single-variable estimations when the statistical regularity is satisfied. Our results indicate that the accuracies of the Bayes and maximum-likelihood methods are asymptotically equivalent and clarify that the Bayes method is only advantageous for multivariable estimations. Comment: 28 pages, 3 figures
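    A small sketch of the single-latent-variable estimation being compared, for a two-component Gaussian mixture (an illustrative setting, not the paper's derivation): given parameters, the latent posterior is the familiar responsibility; the ML-style estimate plugs in a single parameter value, while the Bayes-style estimate averages the responsibility over draws from a parameter posterior (here a hypothetical N(1, 0.1²) posterior on one component mean).

```python
import numpy as np

def phi(u):
    """Standard normal density."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def responsibility(x, a, mu0, mu1):
    """Exact latent posterior P(z = 1 | x) for the mixture
    z ~ Bernoulli(a), x | z ~ N(mu_z, 1), with given parameters."""
    num = a * phi(x - mu1)
    den = num + (1.0 - a) * phi(x - mu0)
    return num / den

# Plug-in (ML-style) single-latent-variable estimate: one parameter value.
r_ml = responsibility(0.8, 0.5, -1.0, 1.0)
print(r_ml)

# Bayes-style estimate: average the responsibility over draws from a
# (hypothetical) posterior of the component mean mu1.
rng = np.random.default_rng(2)
mu1_draws = rng.normal(1.0, 0.1, 1000)
r_bayes = np.mean(responsibility(0.8, 0.5, -1.0, mu1_draws))
print(r_bayes)
```

    The abstract's result says that for a single latent variable these two estimates are asymptotically equally accurate (the parameter posterior concentrates, so the average converges to the plug-in), whereas for joint multivariable estimation the averaging retains an asymptotic advantage.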

    The Empirical Minimum-Variance Hedge

    Decision making under unknown true parameters (estimation risk) is discussed along with Bayes' and parameter certainty equivalent (PCE) criteria. Bayes' criterion incorporates estimation risk in a manner consistent with expected utility maximization. The PCE method, which is the most commonly used, is not consistent with expected utility maximization. Bayes' criterion is employed to solve for the minimum-variance hedge ratio. Empirical application of Bayes' minimum-variance hedge ratio is addressed and illustrated. Simulations show that discrepancies between prior and sample parameters may lead to substantial differences between Bayesian and PCE minimum-variance hedges.
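    A schematic sketch of the two objects being compared, under assumptions that are mine and not the paper's: the sample (PCE-style) minimum-variance hedge ratio is the covariance-to-variance ratio, and a Bayes-style alternative is caricatured here as a precision-weighted blend of that sample ratio with a normal prior on the hedge ratio. The paper's Bayes criterion works through expected utility; this blend only illustrates how prior/sample discrepancies move the hedge.

```python
import numpy as np

def mv_hedge_ratio(spot, fut):
    """Sample minimum-variance hedge ratio: Cov(dS, dF) / Var(dF)."""
    return np.cov(spot, fut, ddof=1)[0, 1] / np.var(fut, ddof=1)

def shrunk_hedge_ratio(h_sample, n, resid_var, fut_var, h_prior, prior_var):
    """Schematic Bayes-style ratio: precision-weighted average of a normal
    prior on h and the OLS sampling distribution of the sample ratio,
    whose variance is approximately resid_var / (n * fut_var)."""
    samp_var = resid_var / (n * fut_var)
    w = prior_var / (prior_var + samp_var)   # weight on the data
    return w * h_sample + (1.0 - w) * h_prior

# Simulated spot/futures returns with a true hedge ratio of 0.9.
rng = np.random.default_rng(3)
n, true_h = 250, 0.9
fut = rng.normal(0.0, 1.0, n)
spot = true_h * fut + rng.normal(0.0, 0.3, n)

h = mv_hedge_ratio(spot, fut)
h_bayes = shrunk_hedge_ratio(h, n, 0.3**2, np.var(fut, ddof=1),
                             h_prior=1.0, prior_var=0.01)
print(h, h_bayes)  # the blend sits between the sample ratio and the prior
```

    As in the abstract's simulations, the gap between h and h_bayes grows when the prior mean disagrees with the sample estimate or when the sample is short (large sampling variance shifts the weight toward the prior).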

    Bayesian estimation of one-parameter qubit gates

    We address the estimation of one-parameter unitary gates for qubit systems and seek optimal probes and measurements. Single- and two-qubit probes are analyzed in detail, focusing on the precision and stability of the estimation procedure. Bayesian inference is employed and compared with the ultimate quantum limits to precision, taking into account the biased nature of the Bayes estimator in the non-asymptotic regime. Besides, through the evaluation of the asymptotic a posteriori distribution for the gate parameter and comparison with the results of Monte Carlo simulated experiments, we show that the asymptotic optimality of the Bayes estimator is actually achieved after a limited number of runs. The robustness of the estimation procedure against fluctuations of the measurement settings is investigated, and the use of entanglement to improve the overall stability of the estimation scheme is also analyzed in some detail. Comment: 10 pages, 5 figures
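    A minimal grid-based sketch of Bayesian estimation of a one-parameter qubit gate, in a standard toy configuration chosen here for illustration (the paper analyzes optimal probes, biasedness, and stability in much more depth): a |+⟩ probe is rotated by exp(-i·φ·σz/2) and measured in the x basis, so P(+ | φ) = cos²(φ/2), and the posterior over φ is updated shot by shot from a flat prior.

```python
import numpy as np

rng = np.random.default_rng(4)
true_phi = 1.0
n_shots = 2000

# Simulate measurement outcomes: P(outcome "+" | phi) = cos(phi/2)^2.
p_plus = np.cos(true_phi / 2) ** 2
outcomes = rng.random(n_shots) < p_plus   # True means the "+" outcome

# Grid posterior on [0, pi] (restricting to avoid the phi <-> -phi
# ambiguity of this likelihood), starting from a flat prior.
grid = np.linspace(0.0, np.pi, 2001)
posterior = np.ones_like(grid)
lik_plus = np.cos(grid / 2) ** 2
for o in outcomes:
    posterior *= lik_plus if o else (1.0 - lik_plus)
    posterior /= posterior.sum()          # renormalise for stability

phi_hat = np.sum(grid * posterior) / np.sum(posterior)
print(phi_hat)  # posterior mean, close to true_phi = 1.0
```

    For this likelihood the per-shot Fisher information is constant, so the posterior standard deviation shrinks like 1/sqrt(n_shots), illustrating the asymptotic regime in which the abstract shows the Bayes estimator attains optimality after a limited number of runs.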