
    Discussion of "Is Bayes Posterior just Quick and Dirty Confidence?" by D. A. S. Fraser

    Discussion of "Is Bayes Posterior just Quick and Dirty Confidence?" by D. A. S. Fraser [arXiv:1112.5582]. Comment: Published at http://dx.doi.org/10.1214/11-STS352B in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Combining information from independent sources through confidence distributions

    This paper develops new methodology, together with related theories, for combining information from independent studies through confidence distributions. A formal definition of a confidence distribution and its asymptotic counterpart (i.e., asymptotic confidence distribution) are given and illustrated in the context of combining information. Two general combination methods are developed: the first along the lines of combining p-values, with some notable differences with regard to optimality in the sense of Bahadur-type efficiency; the second by multiplying and normalizing confidence densities. The latter approach is inspired by the common practice of multiplying likelihood functions to combine parametric information. The paper also develops adaptive combining methods, with supporting asymptotic theory that should be of practical interest. The key point of the adaptive development is that the methods attempt to combine only the correct information, downweighting or excluding studies that contain little or wrong information about the true parameter of interest. The combination methodologies are illustrated on simulated and real data in a variety of applications. Comment: Published at http://dx.doi.org/10.1214/009053604000001084 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
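    The second combination route above, multiplying and normalizing confidence densities, can be made concrete in the simplest setting where each independent study reports an approximately normal confidence density for a common parameter. The sketch below is an illustration under that normality assumption, with hypothetical estimates and standard errors rather than numbers from the paper; the product of normal confidence densities normalizes to another normal whose mean is the precision-weighted average of the study estimates.

```python
# Minimal sketch (not the authors' code): combine confidence densities from
# independent studies by multiplication and normalization, assuming study i
# reports a normal confidence density N(theta_hat_i, se_i^2) for a common theta.
import numpy as np
from scipy import stats

theta_hat = np.array([1.2, 0.8, 1.0])   # hypothetical study estimates
se = np.array([0.30, 0.25, 0.40])       # hypothetical standard errors

# The product of normal densities, once normalized, is again normal with
# precision-weighted mean and pooled precision.
w = 1.0 / se**2
theta_comb = np.sum(w * theta_hat) / np.sum(w)
se_comb = np.sqrt(1.0 / np.sum(w))

# Combined confidence distribution and a 95% interval read off its quantiles.
H_comb = lambda t: stats.norm.cdf(t, loc=theta_comb, scale=se_comb)
ci_95 = stats.norm.ppf([0.025, 0.975], loc=theta_comb, scale=se_comb)
print(theta_comb, se_comb, ci_95)
```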

    Confidence distribution (CD) -- distribution estimator of a parameter

    The notion of a confidence distribution (CD), an entirely frequentist concept, is in essence a Neymanian interpretation of Fisher's fiducial distribution. It contains information relevant to every kind of frequentist inference. In this article, a CD is viewed as a distribution estimator of a parameter. This leads naturally to consideration of the information contained in a CD, comparison of CDs and optimal CDs, and the connection of the CD concept to the (profile) likelihood function. A formal development of a multiparameter CD is also presented. Comment: Published at http://dx.doi.org/10.1214/074921707000000102 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
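    The canonical example behind this definition is the normal mean with known standard deviation, where H_n(theta) = Phi(sqrt(n) * (theta - xbar) / sigma) is a confidence distribution and the usual frequentist outputs can all be read off it. The sketch below assumes exactly that textbook setting and is not taken from the article.

```python
# Minimal sketch of a confidence distribution as a distribution estimator,
# assuming the normal-mean model with known sigma (a textbook example, not
# code from the article).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, n = 2.0, 50
x = rng.normal(loc=1.0, scale=sigma, size=n)   # simulated data with true mean 1.0
xbar = x.mean()

# Confidence distribution H_n(theta) = Phi(sqrt(n) * (theta - xbar) / sigma).
H = lambda theta: stats.norm.cdf(np.sqrt(n) * (theta - xbar) / sigma)

point_estimate = xbar                                                # median of the CD
ci_95 = xbar + sigma / np.sqrt(n) * stats.norm.ppf([0.025, 0.975])   # CD quantiles
p_one_sided = H(0.0)                                                 # p-value for H0: theta <= 0
print(point_estimate, ci_95, p_one_sided)
```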

    Bridging Bayesian, frequentist and fiducial (BFF) inferences using confidence distribution

    Bayesian, frequentist and fiducial (BFF) inferences are much more congruous than the scientific community has historically perceived them to be (cf. Reid and Cox 2015; Kass 2011; Efron 1998). Most practitioners are probably more familiar with the two dominant inferential paradigms, Bayesian inference and frequentist inference. The third, lesser-known fiducial inference paradigm was pioneered by R. A. Fisher in an attempt to define an inversion procedure for inference as an alternative to Bayes' theorem. Although each paradigm has its own strengths and limitations rooted in its philosophical underpinnings, this article aims to bridge these inferential methodologies through the lens of confidence distribution theory and Monte Carlo simulation procedures. It examines how the three distinct paradigms can be unified and compared on a foundational level, thereby increasing the range of techniques available to statistical theorists and practitioners across all fields. Comment: 30 pages, 5 figures, Handbook on Bayesian Fiducial and Frequentist (BFF) Inference.
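    As a small illustration of the kind of Monte Carlo comparison the article describes, the sketch below draws from a flat-prior Bayesian posterior, Fisher's fiducial distribution, and the confidence distribution in the normal-mean model with known variance, where the three coincide. The model and numbers are chosen for illustration and are not the article's examples.

```python
# Hedged illustration (not from the article): in the normal-mean model with
# known sigma, Monte Carlo draws from a flat-prior posterior, the fiducial
# distribution, and the confidence distribution all target N(xbar, se^2).
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 1.0, 30
x = rng.normal(0.5, sigma, size=n)
xbar, se = x.mean(), sigma / np.sqrt(n)

B = 100_000
posterior = rng.normal(xbar, se, size=B)         # Bayes with a flat prior on theta
fiducial = xbar - se * rng.standard_normal(B)    # invert xbar = theta + se * Z
cd_draws = xbar + se * rng.standard_normal(B)    # sample from the CD N(xbar, se^2)

for name, draws in [("posterior", posterior), ("fiducial", fiducial), ("CD", cd_draws)]:
    print(name, draws.mean().round(3), np.quantile(draws, [0.025, 0.975]).round(3))
```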

    Repro Samples Method for a Performance Guaranteed Inference in General and Irregular Inference Problems

    Rapid advancements in data science require fundamentally new frameworks for prevalent but highly non-trivial "irregular" inference problems, to which the large-sample central limit theorem does not apply; typical examples involve discrete or non-numerical parameters or non-numerical data. In this article, we present an innovative, wide-reaching, and effective approach, called the "repro samples method," to conduct statistical inference for these irregular problems and more. The development relates to, but improves upon, several existing simulation-inspired inference approaches, and we provide both exact and approximate theories to support it. Moreover, the proposed approach is broadly applicable and subsumes the classical Neyman-Pearson framework as a special case. For the often-seen irregular inference problems that involve both discrete/non-numerical and continuous parameters, we propose an effective three-step procedure to make inferences for all parameters. We also develop a unique matching scheme that turns the discreteness of discrete/non-numerical parameters from an obstacle to forming inferential theories into a beneficial attribute for improving computational efficiency. We demonstrate the effectiveness of the proposed general methodology using various examples, including a case study of a Gaussian mixture model with an unknown number of components. This case study provides a solution to a long-standing open inference question in statistics: how to quantify the estimation uncertainty for the unknown number of components and other associated parameters. Real-data and simulation studies, with comparisons to existing approaches, demonstrate the far superior performance of the proposed method.
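    A toy version of the repro-samples logic, sketched under simplifying assumptions of my own choosing rather than taken from the paper, may help fix ideas: data are generated as y = G(theta, U) with U from a known distribution, and a candidate theta is retained in the level-(1 - alpha) confidence set if some "typical" realization u, lying in a fixed Borel set of probability 1 - alpha, could have reproduced the observed y.

```python
# Toy sketch of the repro-samples idea (an illustration, not the paper's
# algorithm): y = theta + U with theta an unknown integer and U ~ N(0, 1).
# A candidate theta stays in the confidence set if some u in the "typical"
# set B_alpha = {u : |u| <= z} could reproduce the observed y.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta_true = 7                                  # unknown integer-valued parameter
y = theta_true + rng.standard_normal()          # observed data from y = theta + U

alpha = 0.05
z = stats.norm.ppf(1 - alpha / 2)               # B_alpha has probability 1 - alpha

candidates = np.arange(0, 21)                   # candidate discrete parameter values
confidence_set = [int(t) for t in candidates if abs(y - t) <= z]
print(round(y, 2), confidence_set)
```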

    A Bias Correction Method in Meta-analysis of Randomized Clinical Trials with no Adjustments for Zero-inflated Outcomes

    Many clinical endpoint measures, such as the number of standard drinks consumed per week or the number of days that patients stayed in the hospital, are count data with excessive zeros. However, the zero-inflated nature of such outcomes is often ignored in analyses, which leads to biased estimates within individual trials and, consequently, a biased estimate of the overall intervention effect in a meta-analysis. The current study proposes a novel statistical approach, the Zero-inflation Bias Correction (ZIBC) method, which accounts for the bias introduced when a Poisson regression model is used for randomized clinical trials despite a high rate of zeros in the outcome distribution. The correction uses summary information from individual studies to adjust intervention-effect estimates as if they had been appropriately estimated in zero-inflated Poisson regression models. Simulation studies and real-data analyses show that the ZIBC method performs well in correcting zero-inflation bias in many situations. The method offers a methodological solution for improving the accuracy of meta-analysis results, which is important for evidence-based medicine.
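    The ZIBC correction formula itself is not reproduced here, but the bias it targets can be illustrated with a simple simulated trial: fit a plain Poisson regression to a zero-inflated outcome and compare it with the count-part estimate from a zero-inflated Poisson (ZIP) model. The data-generating values below are hypothetical, and the ZIP fit stands in for the "appropriately estimated" model referred to above.

```python
# Hedged illustration (not the ZIBC method itself): the intervention-effect
# estimate from a plain Poisson regression mixes the zero and count processes,
# whereas the ZIP count-part coefficient recovers the effect on the Poisson
# mean among non-structural zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 2000
arm = rng.integers(0, 2, size=n)                 # 0 = control, 1 = intervention
lam = np.exp(1.0 - 0.4 * arm)                    # count-part means (true effect -0.4)
pi_zero = 0.35 + 0.15 * arm                      # structural-zero probability by arm
y = np.where(rng.random(n) < pi_zero, 0, rng.poisson(lam))

X = sm.add_constant(arm.astype(float))
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=X, inflation='logit').fit(disp=0)

# Parameter order in the ZIP fit: inflation (const, arm), then count (const, arm).
print("Poisson log rate ratio:       ", poisson_fit.params[1].round(3))
print("ZIP count-part log rate ratio:", zip_fit.params[3].round(3))
```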

    A Simulation Study of the Performance of Statistical Models for Count Outcomes with Excessive Zeros

    Background: Outcome measures that are count variables with excessive zeros are common in health behaviors research. There is a lack of empirical evidence about the relative performance of prevailing statistical models when outcomes are zero-inflated, particularly compared with recently developed approaches. Methods: The current simulation study examined five commonly used analytical approaches for count outcomes, including two linear models (with outcomes on the raw and log-transformed scales, respectively) and three count-distribution-based models (i.e., Poisson, negative binomial, and zero-inflated Poisson (ZIP) models). We also considered the marginalized zero-inflated Poisson (MZIP) model, a novel alternative that estimates effects on the overall mean while adjusting for zero-inflation. Extensive simulations were conducted to evaluate their statistical power and Type I error rates across various data conditions. Results: Under zero-inflation, the Poisson model failed to control the Type I error rate, producing more false positive results than expected. When the intervention effects on the zero (vs. non-zero) and count parts were in the same direction, the MZIP model had the highest statistical power, followed by the linear model on the raw scale, the negative binomial model, and the ZIP model. The performance of the linear model with a log-transformed outcome variable was unsatisfactory. When an effect existed on only one of the zero (vs. non-zero) part and the count part, the ZIP model had the highest statistical power. Conclusions: The MZIP model demonstrated better statistical properties for detecting true intervention effects and controlling false positive results for zero-inflated count outcomes. The MZIP model may therefore serve as an appealing analytical approach for evaluating overall intervention effects in studies with count outcomes marked by excessive zeros.
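    A miniature version of such a simulation, covering only two of the compared approaches (Poisson regression and the linear model on the raw scale) and using illustrative data-generating values rather than the study's actual conditions, is sketched below. Under the null with zero-inflated outcomes, the Poisson model's rejection rate exceeds the nominal 5% level while the raw-scale linear model stays close to it.

```python
# Small simulation sketch (not the authors' code): Type I error of a Poisson
# regression versus a linear model on the raw scale when the outcome is a
# zero-inflated count and there is no true intervention effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, reps, alpha = 200, 1000, 0.05
reject = {"poisson": 0, "linear_raw": 0}

for _ in range(reps):
    arm = rng.integers(0, 2, size=n).astype(float)
    structural_zero = rng.random(n) < 0.4               # 40% excess zeros in both arms
    y = np.where(structural_zero, 0, rng.poisson(np.exp(0.8), size=n))  # null: equal means
    X = sm.add_constant(arm)

    reject["poisson"] += sm.GLM(y, X, family=sm.families.Poisson()).fit().pvalues[1] < alpha
    reject["linear_raw"] += sm.OLS(y, X).fit().pvalues[1] < alpha

# Empirical Type I error: Poisson is inflated by the extra-Poisson variation
# induced by zero-inflation; the raw-scale linear model stays near 0.05.
print({k: v / reps for k, v in reject.items()})
```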