
    Recent advances on Bayesian inference for P(X < Y)

    We address the statistical problem of evaluating R = P(X < Y), where X and Y are two independent random variables. Bayesian parametric inference about R, based on the marginal posterior density of R, has been widely discussed under various distributional assumptions on X and Y. This classical approach requires both elicitation of a prior on the complete parameter and numerical integration in order to derive the marginal distribution of R. In this paper, we discuss and apply recent advances in Bayesian inference based on higher-order asymptotics and on pseudo-likelihoods, and related matching priors, which allow one to perform accurate inference on the parameter of interest only. The proposed approach has the advantages of avoiding elicitation of a prior on the nuisance parameters and the computation of multidimensional integrals. The accuracy of the proposed methodology is illustrated both by numerical studies and by real-life data concerning clinical studies.
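    A minimal worked special case, added here for orientation (not necessarily one of the distributional assumptions treated in the paper): if X and Y are independent exponentials with rates \lambda_X and \lambda_Y, then

    \[
    R = P(X < Y) = \int_0^\infty \lambda_X e^{-\lambda_X x}\, e^{-\lambda_Y x}\, dx = \frac{\lambda_X}{\lambda_X + \lambda_Y},
    \]

    so R is a scalar function of the full parameter (\lambda_X, \lambda_Y), and the orthogonal direction plays the role of a nuisance parameter.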

    On interval and point estimators based on a penalization of the modified profile likelihood

    Various modifications of the profile likelihood have been proposed in the literature. Although modified profile likelihood methods have better properties than those based on the profile likelihood, the signed likelihood ratio statistic based on the modified profile likelihood has a standard normal distribution only to first order, and it can be inaccurate, in particular in models with many nuisance parameters. In this paper we propose an adjustment of the profile likelihood from a new perspective. The idea is to resort to suitable default priors on the parameter of interest only, to be used as non-negative weight functions in order to modify the modified profile likelihood. In particular, we focus on matching priors, i.e. priors on the parameter of interest only for which there is agreement between frequentist and Bayesian inference, derived from modified profile likelihoods. The proposed modified profile likelihood has desirable inferential properties: the corresponding signed likelihood ratio statistic is standard normal to second order, and the corresponding maximizer is a refinement of the maximum likelihood estimator which improves its small-sample properties. Examples illustrate the proposed modified profile likelihood and outline its improvement over its counterparts.
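    For reference, the standard quantities the abstract refers to (generic notation, not the paper's new adjustment): with interest parameter \psi and nuisance parameter \lambda, the profile log-likelihood and the associated signed likelihood ratio statistic are

    \[
    \ell_p(\psi) = \ell(\psi, \hat\lambda_\psi), \qquad
    r_p(\psi) = \mathrm{sign}(\hat\psi - \psi)\,\bigl[2\{\ell_p(\hat\psi) - \ell_p(\psi)\}\bigr]^{1/2},
    \]

    and replacing \ell_p by a modified profile log-likelihood gives the statistic whose first-order standard normal approximation the paper seeks to improve.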

    On the use of pseudo-likelihoods in Bayesian variable selection.

    In the presence of nuisance parameters, we discuss a one-parameter Bayesian analysis based on a pseudo-likelihood, assuming a default prior distribution for the parameter of interest only. Although this way of proceeding cannot always be considered orthodox from the Bayesian perspective, it is of interest to evaluate whether suitable pseudo-likelihoods may be proposed for Bayesian inference. Attention is focused on regression models, in particular on inference about a scalar regression coefficient in various multiple regression models, i.e. scale and regression models with non-normal errors, non-linear normal heteroscedastic regression models, and log-linear models for count data with overdispersion. Some interesting conclusions emerge.
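    In symbols, the construction described in the abstract above amounts to a pseudo-posterior of the form (a sketch in generic notation introduced here)

    \[
    \tilde\pi(\psi \mid y) \;\propto\; \pi(\psi)\,\tilde L(\psi; y),
    \]

    where \psi is the scalar regression coefficient of interest, \pi(\psi) is a default prior on \psi only, and \tilde L is a pseudo-likelihood (e.g. a profile, modified profile or quasi-likelihood) that eliminates the nuisance parameters.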

    Default prior distributions from quasi- and quasi-profile likelihoods.

    In some problems of practical interest, a standard Bayesian analysis can be difficult to perform. This is true, for example, when the class of sampling parametric models is unknown or when robustness with respect to data or to model misspecifications is required. These situations can be usefully handled by using a posterior distribution for the parameter of interest which is based on a pseudo-likelihood function derived from estimating equations, i.e. on a quasi-likelihood, and on a suitable prior distribution. The aim of this paper is to propose and discuss the construction of a default prior distribution for a scalar parameter of interest to be used together with a quasi-likelihood function. We show that the proposed default prior can be interpreted as a Jeffreys-type prior, since it is proportional to the square root of the expected information derived from the quasi-likelihood. The frequentist coverage of the credible regions based on the proposed procedure is studied through Monte Carlo simulations in the context of robustness theory and of generalized linear models with overdispersion.
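    In symbols (notation introduced here for illustration): if \tilde\ell_Q(\psi; y) denotes the quasi-log-likelihood obtained from the estimating equation, the proposed default prior is of Jeffreys type,

    \[
    \pi_Q(\psi) \;\propto\; I_Q(\psi)^{1/2}, \qquad
    I_Q(\psi) = E\!\left[-\frac{\partial^2 \tilde\ell_Q(\psi; Y)}{\partial \psi^2}\right],
    \]

    i.e. proportional to the square root of the expected information derived from the quasi-likelihood, in direct analogy with the usual Jeffreys prior.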

    A note on approximate Bayesian credible sets based on modified loglikelihood ratios

    Asymptotic arguments are widely used in Bayesian inference, and in recent years there have been considerable developments in the so-called higher-order asymptotics. This theory provides very accurate approximations to posterior distributions, and to related quantities, in a variety of parametric statistical problems, even for small sample sizes. The aim of this contribution is to discuss recent advances in approximate Bayesian computations based on the asymptotic theory of modified loglikelihood ratios, from both theoretical and practical points of view. Results on third-order approximations for univariate posterior distributions, also in the presence of nuisance parameters, are reviewed, and a new formula for a vector parameter of interest is presented. All these approximations may routinely be applied in practice for Bayesian inference, since they require little more than standard likelihood quantities for their implementation, and hence they may be available at little additional computational cost over simple first-order approximations. Moreover, these approximations give rise to a simple simulation scheme, alternative to MCMC, for Bayesian computation of marginal posterior distributions for a scalar parameter of interest. In addition, they can be used for testing precise null hypotheses and for defining accurate Bayesian credible sets. Some illustrative examples are discussed, with particular attention to the use of matching priors.
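    One standard form of the scalar-parameter tail-area result reviewed here (generic notation, a sketch rather than the paper's exact expressions): the marginal posterior distribution function of \psi admits the third-order approximation

    \[
    P(\psi \le \psi_0 \mid y) \;\doteq\; \Phi\bigl(r^*_B(\psi_0)\bigr), \qquad
    r^*_B(\psi) = r(\psi) + \frac{1}{r(\psi)}\,\log\frac{q_B(\psi)}{r(\psi)},
    \]

    where r(\psi) is the signed log-likelihood ratio root, \Phi is the standard normal distribution function, and q_B(\psi) is a correction term built from observed likelihood quantities and the prior; only routine likelihood output is required, which is why the approximation adds little computational cost over first-order methods.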

    A new Bayesian discrepancy measure

    A Bayesian Discrepancy Test (BDT) is proposed to evaluate the distance between a given hypothesis and the available information (prior law and data). The proposed measure of evidence has properties of consistency and invariance. After presenting the similarities and differences between the BDT and other Bayesian tests, we proceed with the analysis of some multiparametric case studies, showing the properties of the BDT, among them conceptual and interpretative simplicity and the ability to deal with complex case studies.

    Recent advances on Bayesian inference for P(X < Y)

    We address the statistical problem of evaluating R = P(X < Y), where X and Y are two independent random variables. Bayesian parametric inference is based on the marginal posterior density of R and has been widely discussed under various distributional assumptions on X and Y. This classical approach requires both elicitation of a prior on the complete parameter and numerical integration in order to derive the marginal distribution of R. In this paper, we discuss and apply recent advances in Bayesian inference based on higher-order asymptotics and on pseudo-likelihoods, and related matching priors, which allow one to perform accurate inference on the parameter of interest R only, even for small sample sizes. The proposed approach has the advantages of avoiding elicitation of a prior on the nuisance parameters and the computation of multidimensional integrals. From a theoretical point of view, we show that the prior used is a strong matching prior. From an applied point of view, the accuracy of the proposed methodology is illustrated both by numerical studies and by real-life data concerning clinical studies.
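    For orientation, a minimal Monte Carlo sketch of the classical approach that the paper aims to simplify, under an exponential model with conjugate gamma priors chosen here purely for illustration (this is the baseline the paper improves on, not its pseudo-likelihood/matching-prior method):

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated stress-strength data: X ~ Exp(rate 1.0), Y ~ Exp(rate 0.5),
        # so the true value is R = P(X < Y) = 1.0 / (1.0 + 0.5) = 2/3.
        x = rng.exponential(scale=1.0, size=20)
        y = rng.exponential(scale=2.0, size=25)

        # Conjugate Gamma(a, b) priors on both rates give Gamma posteriors.
        a, b = 1.0, 1.0
        rate_x = rng.gamma(a + x.size, 1.0 / (b + x.sum()), size=100_000)
        rate_y = rng.gamma(a + y.size, 1.0 / (b + y.sum()), size=100_000)

        # Induced marginal posterior of R = rate_x / (rate_x + rate_y).
        r_post = rate_x / (rate_x + rate_y)

        print("posterior mean of R:", r_post.mean())
        print("95% credible interval:", np.quantile(r_post, [0.025, 0.975]))

    Even in this simple conjugate case the marginal posterior of R has no standard form and is obtained by simulation or numerical integration over the full parameter, which is exactly the burden the pseudo-likelihood and matching-prior route avoids.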

    A new Bayesian discrepancy measure

    The aim of this article is to make a contribution to the Bayesian procedure of testing precise hypotheses for parametric models. For this purpose, we define the Bayesian Discrepancy Measure, which allows one to evaluate the suitability of a given hypothesis with respect to the available information (prior law and data). To summarise this information, the posterior median is employed, allowing a simple assessment of the discrepancy from a fixed hypothesis. The Bayesian Discrepancy Measure assesses the compatibility of a single hypothesis with the observed data, as opposed to the more common comparative approach in which a hypothesis is rejected in favour of a competing hypothesis. The proposed measure of evidence has properties of consistency and invariance. After presenting the definition of the measure for a parameter of interest, both in the absence and in the presence of nuisance parameters, we illustrate some examples showing its conceptual and interpretative simplicity. Finally, we compare a test procedure based on the Bayesian Discrepancy Measure with the Full Bayesian Significance Test, a well-known Bayesian testing procedure for sharp hypotheses.
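    A rough computational sketch of such a discrepancy from posterior draws, under the reading of the abstract that the evidence against a point hypothesis theta = theta0 is twice the posterior mass lying between the posterior median and theta0; this reading, and every name below, is an assumption made here, not code from the paper:

        import numpy as np

        def bayesian_discrepancy(posterior_draws, theta0):
            # Assumed form: twice the posterior probability of the interval
            # between the posterior median and the hypothesised value theta0.
            # Values near 0 suggest theta0 is central; values near 1 suggest
            # it sits far in the tail of the posterior.
            draws = np.asarray(posterior_draws)
            med = np.median(draws)
            lo, hi = min(med, theta0), max(med, theta0)
            return 2.0 * np.mean((draws >= lo) & (draws <= hi))

        # Example: draws from a normal posterior for a mean, testing theta0 = 0.
        rng = np.random.default_rng(1)
        draws = rng.normal(loc=0.8, scale=0.4, size=50_000)
        print(bayesian_discrepancy(draws, theta0=0.0))  # about 0.95: strong discrepancy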
