51 research outputs found

    Compatibility of Prior Specifications Across Linear Models

    Bayesian model comparison requires the specification of a prior distribution on the parameter space of each candidate model. In this connection two concerns arise: on the one hand, the elicitation task rapidly becomes prohibitive as the number of models increases; on the other hand, numerous prior specifications can only exacerbate the well-known sensitivity to prior assignments, thus producing less dependable conclusions. Within the subjective framework, both difficulties can be counteracted by linking priors across models in order to achieve simplification and compatibility; we discuss links with related objective approaches. Given an encompassing, or full, model together with a prior on its parameter space, we review and summarize a few procedures for deriving priors under a submodel, namely marginalization, conditioning, and Kullback--Leibler projection. These techniques are illustrated and discussed with reference to variable selection in linear models adopting a conventional g-prior; comparisons with existing standard approaches are provided. Finally, the relative merits of each procedure are evaluated through simulated and real data sets. Published at http://dx.doi.org/10.1214/08-STS258 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
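    As a hedged illustration of the marginalization step only (all numbers below are synthetic, not from the paper): a multivariate normal g-prior on the encompassing model can be marginalized to a submodel by extracting the relevant covariance sub-block, which in general differs from the conventional g-prior built from the submodel's own design matrix.

```python
import numpy as np

# Hypothetical design matrix for an encompassing linear model with 3 predictors.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
g = 50.0          # g-prior scale (often set to n)
sigma2 = 1.0      # error variance, fixed here for illustration

# Zellner g-prior on the full model: beta ~ N(0, g * sigma2 * (X'X)^{-1})
Sigma_full = g * sigma2 * np.linalg.inv(X.T @ X)

# Marginalization: the induced prior covariance for a submodel keeping
# predictors {0, 2} is the corresponding sub-block of the full covariance,
# because marginals of a multivariate normal are obtained by sub-selection.
keep = [0, 2]
Sigma_sub_marg = Sigma_full[np.ix_(keep, keep)]

# The conventional submodel g-prior is instead built from the submodel's own
# design matrix; unless the columns of X are orthogonal, the two differ.
X_sub = X[:, keep]
Sigma_sub_conv = g * sigma2 * np.linalg.inv(X_sub.T @ X_sub)
```

    The gap between the two covariances is one concrete way to see why compatibility of priors across models is a non-trivial requirement.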

    Non parametric mixture priors based on an exponential random scheme

    We propose a general procedure for constructing nonparametric priors for Bayesian inference. Under very general assumptions, the proposed prior selects absolutely continuous distribution functions, hence it can be useful with continuous data. We use the notion of Feller-type approximation, with a random scheme based on the natural exponential family, in order to construct a large class of distribution functions. We show how one can assign a probability to such a class and discuss the main properties of the proposed prior, named the Feller prior. Feller priors are related to mixture models with an unknown number of components or, more generally, to mixtures with an unknown weight distribution. Two illustrations, relative to the estimation of a density and of a mixing distribution, are carried out on well-known data sets in order to evaluate the performance of our procedure. Computations are performed using a modified version of an MCMC algorithm, which is briefly described.
    Keywords: Bernstein polynomials, density estimation, Feller operators, hierarchical models, mixture models, nonparametric Bayesian inference
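    A rough sketch of the Bernstein-polynomial smoothing that underlies such constructions (this is the classical Bernstein density estimator built from the empirical CDF, not the Feller prior or the paper's MCMC scheme):

```python
import math

def bernstein_density(x, data, m):
    """Bernstein-polynomial density estimate of order m on [0, 1].

    f_hat(x) = sum_{k=1..m} [F_n(k/m) - F_n((k-1)/m)] * Beta(x; k, m-k+1),
    where F_n is the empirical CDF of `data` (assumed to lie in (0, 1]).
    """
    n = len(data)

    def ecdf(t):
        return sum(1 for d in data if d <= t) / n

    total = 0.0
    for k in range(1, m + 1):
        w = ecdf(k / m) - ecdf((k - 1) / m)          # empirical bin mass
        # Beta(k, m - k + 1) density: x^{k-1} (1-x)^{m-k} / B(k, m-k+1)
        coef = math.factorial(m) / (math.factorial(k - 1) * math.factorial(m - k))
        total += w * coef * x ** (k - 1) * (1 - x) ** (m - k)
    return total
```

    Increasing the order m trades smoothness for fidelity to the data, which is the role the unknown number of mixture components plays in the Bayesian formulation.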

    Covid-19 and ex-smokers: an underestimated prognostic factor?

    Dear Editor, the recent and explosive worldwide outbreak of Covid-19 has led many scientists and clinicians to seek the risk factors most responsible for triggering the disease in individuals without comorbidities, as well as potential prognostic factors. A notable line of research has addressed the role of smoking, which was initially hypothesized to be a protective factor for Covid-19...

    Chisini means and rational decision making: Equivalence of investment criteria

    A plethora of tools are used for investment decisions and performance measurement, including Net Present Value (NPV), Internal Rate of Return (IRR), Profitability Index (PI), Modified Internal Rate of Return (MIRR), and Average Accounting Rate of Return (AARR). All these and other known metrics are generally considered non-equivalent, and some of them are regarded as unreliable or even naive. Building upon the Average Internal Rate of Return (AIRR) of Magni (2010a, 2013), we show that the notion of Chisini mean enables these tools to be used as rational decision criteria. Specifically, we focus on 11 metrics and show that, if properly used, they all provide equivalent accept-reject decisions and equivalent project rankings. Therefore, the intuitive notion of mean is the founding basis of investment decision criteria.
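    A minimal sketch of the idea, with hypothetical numbers: the AIRR can be read as a Chisini mean of period rates, the invariant being the total income generated by the invested capitals.

```python
# Chisini-mean reading of the AIRR (all figures hypothetical).
# Invariant: total income sum(r_t * c_t) must be unchanged when every period
# rate r_t is replaced by a single constant rate r_bar, so
#   r_bar * sum(c_t) = sum(r_t * c_t)  =>  r_bar = sum(r_t c_t) / sum(c_t).
capitals = [100.0, 80.0, 60.0]   # capital invested in each period
rates = [0.05, 0.10, -0.02]      # period rates of return

income = sum(r * c for r, c in zip(rates, capitals))   # 11.8
airr = income / sum(capitals)                          # capital-weighted mean rate
print(round(airr, 4))  # 0.0492
```

    The same invariance argument, applied with different invariant quantities, is what lets the other metrics be reconciled into equivalent decision rules.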

    CMS physics technical design report : Addendum on high density QCD with heavy ions

    Peer reviewed

    Due approcci alla costruzione del modello statistico: confronti ed osservazioni (Two approaches to the construction of the statistical model: comparisons and remarks)

    Lauritzen, in his book "Statistical models as extremal families", criticizes the usual procedure of specifying the statistical model in advance and then applying the various inference principles, and argues that the statistical analysis and the statistical model ought to be considered jointly. The technique he suggests for the construction of a statistical model can also be traced, in a simpler context, to a theorem by Diaconis and Freedman. From a predictive viewpoint the same problem has also been dealt with by Cifarelli and Regazzini. We present a new, and practically more useful, version of Diaconis and Freedman's theorem and analyse it in relation to the predictive viewpoint. Finally, we describe a way to combine the two above-mentioned approaches into a unified framework.

    Some Remarks on the Use of Improper Priors for the Analysis of Exponential Regression Models

    We consider Bayesian inference on the exponential regression model. It is known that standard improper priors on the parameters of this model lead to marginal posterior distributions, on one of the parameters involved, which are also improper on the interval (0, 1). This has sometimes been interpreted as meaning that the observations are irrelevant. We re-analyse this problem using a finitely additive approach and show that the above conclusion is not generally correct. This result emphasizes, once again, the dangers of a routine approach to Bayesian inference using improper priors.

    Confidence distribution for the ability parameter of the Rasch model

    In this paper we consider the Rasch model and suggest novel point estimators and confidence intervals for the ability parameter. They are based on a proposed confidence distribution (CD), whose construction required overcoming some difficulties essentially due to the discrete nature of the model. When the number of items is large, the combinatorial computations involved become heavy, and we therefore provide first- and second-order approximations of the CD. Simulation studies show the good behavior of our estimators and intervals when compared with those obtained through other standard frequentist and weakly informative Bayesian procedures. Finally, using the expansion of the expected length of the suggested interval, we are able to identify reasonable values of the sample size which lead to a desired length of the interval.
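    For orientation, the sketch below estimates the Rasch ability parameter by plain maximum likelihood (not the paper's confidence-distribution construction); the item difficulties and responses are made up for illustration.

```python
import math

def ability_mle(responses, difficulties, iters=50):
    """Newton-Raphson ML estimate of theta in the Rasch model,
    where P(X_j = 1) = 1 / (1 + exp(-(theta - b_j))).
    Assumes a raw score strictly between 0 and the number of items."""
    theta = 0.0
    for _ in range(iters):
        probs = [1 / (1 + math.exp(-(theta - b))) for b in difficulties]
        score = sum(x - p for x, p in zip(responses, probs))  # log-lik gradient
        info = sum(p * (1 - p) for p in probs)                # Fisher information
        theta += score / info
    return theta

# Hypothetical 5-item test: 3 correct answers, symmetric difficulties.
theta_hat = ability_mle([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0])
```

    The score equation sets the expected raw score equal to the observed one; the degenerate boundary cases (all-correct or all-wrong response patterns) are precisely where the discreteness difficulties mentioned in the abstract bite.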

    How to Compute a Mean? The Chisini Approach and Its Applications

    Scholars often consider the arithmetic mean as the only mean available. This gives rise to several mistakes. Thus, in a first course in statistics, it is necessary to introduce students to a more general concept of mean. In this work we present the notion of mean suggested by Oscar Chisini in 1929, which has a double advantage. It focuses students' minds on the substance of the problem for which a mean is required, thus discouraging any automatic procedure, and it does not require a preliminary list of the different mean formulas. Advantages and limits of the Chisini mean are discussed by means of examples.
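    A minimal numerical illustration of Chisini's definition (numbers hypothetical): the mean M of x_1, ..., x_n with respect to an invariant f is the value satisfying f(M, ..., M) = f(x_1, ..., x_n). For a trip driven at different speeds over equal distances, the invariant is the total travel time, and the definition yields the harmonic mean rather than the arithmetic one.

```python
# Chisini mean for average speed over two equal legs (hypothetical trip).
speeds = [60.0, 40.0]            # km/h over two equal 120 km legs
distance = 120.0                 # km per leg
total_time = sum(distance / v for v in speeds)      # invariant: 2 + 3 = 5 hours
mean_speed = len(speeds) * distance / total_time    # harmonic mean of the speeds
print(mean_speed)  # 48.0, not the arithmetic mean 50.0
```

    Choosing the invariant first, and only then solving for the mean, is exactly the reasoning habit the Chisini approach is meant to instill.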