1,134 research outputs found
The radial plot in meta-analysis: approximations and applications
Fixed-effects meta-analysis can be thought of as a least-squares analysis of the radial plot: the plot of standardized treatment effect against precision (reciprocal of the standard deviation) for the studies in a systematic review. For example, the least-squares slope through the origin estimates the treatment effect, and a widely used test for publication bias is equivalent to testing the significance of the regression intercept. However, the usual theory assumes that the within-study variances are known, whereas in practice they are estimated. This introduces extra variability into the points of the radial plot, which can markedly distort inferences derived from these regression calculations. This is illustrated by a clinical-trials example from the Cochrane database. We derive approximations to the sampling properties of the radial plot and suggest bias corrections to some of the commonly used methods of meta-analysis. A simulation study suggests that these bias corrections are effective in controlling significance levels of tests and coverage of confidence intervals.
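The equivalences described in the abstract can be sketched numerically. The following is a minimal illustration with made-up study data (not taken from the paper or the Cochrane example): the least-squares slope through the origin of the radial plot reproduces the inverse-variance-weighted pooled estimate, and an Egger-type bias test examines the intercept of the unconstrained fit.

```python
import numpy as np
from scipy import stats

# Hypothetical data: treatment-effect estimates and standard errors
# for five studies (illustrative numbers only).
effects = np.array([0.30, 0.15, 0.42, 0.05, 0.25])
se = np.array([0.10, 0.20, 0.15, 0.25, 0.12])

# Radial (Galbraith) plot coordinates.
precision = 1.0 / se          # x-axis: reciprocal standard error
z = effects / se              # y-axis: standardized treatment effect

# Least-squares slope through the origin equals the usual
# inverse-variance-weighted (fixed-effects) pooled estimate.
slope = np.sum(precision * z) / np.sum(precision ** 2)
ivw = np.sum(effects / se ** 2) / np.sum(1.0 / se ** 2)

# Egger-type test for publication bias: significance of the intercept
# in the unconstrained regression of z on precision.
fit = stats.linregress(precision, z)
t_int = fit.intercept / fit.intercept_stderr
p_int = 2 * stats.t.sf(abs(t_int), len(se) - 2)

print(f"pooled effect = {slope:.4f} (IVW check: {ivw:.4f})")
print(f"Egger intercept p-value = {p_int:.3f}")
```

Note that this first-order calculation treats the standard errors as known; the paper's point is precisely that their estimation adds variability this sketch ignores.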
A comparison of approximate interval estimators for the Bernoulli parameter
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
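The two interval types compared in the abstract can be sketched as follows. This is a generic construction with made-up counts, not the paper's own charts or recommendations: the Wald interval uses the normal approximation directly, while the Poisson-approximation interval treats the success count as Poisson(n*p) and applies the standard chi-square bounds for a Poisson mean.

```python
import numpy as np
from scipy import stats

def wald_ci(x, n, alpha=0.05):
    """Normal-approximation (Wald) interval for the Bernoulli parameter p."""
    p_hat = x / n
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

def poisson_ci(x, n, alpha=0.05):
    """Poisson-approximation interval: treat x as Poisson(n*p) and use the
    standard chi-square (Garwood) bounds for the Poisson mean."""
    lo = 0.0 if x == 0 else 0.5 * stats.chi2.ppf(alpha / 2, 2 * x) / n
    hi = 0.5 * stats.chi2.ppf(1 - alpha / 2, 2 * (x + 1)) / n
    return lo, min(1.0, hi)

# Illustrative counts: 3 successes in 50 trials.
print("Wald:   ", wald_ci(3, 50))
print("Poisson:", poisson_ci(3, 50))
```

The Poisson form is typically the better of the two when p is small and n moderate, which is the regime where the normal approximation's symmetry hurts.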
Modern Likelihood-Frequentist Inference
We offer an exposition of modern higher-order likelihood inference and introduce software to implement it in a quite general setting. The aim is to make more accessible an important development in statistical theory and practice. The software, implemented in an R package, requires only that the user provide code to compute the likelihood function and to specify extra-likelihood aspects of the model, such as a stopping rule or censoring model, through a function generating a dataset under the model. The exposition charts a narrow course through the developments, intending thereby to make them more widely accessible. It includes the likelihood ratio approximation to the distribution of the maximum likelihood estimator, i.e. the p* formula, and a transformation of this yielding a second-order approximation to the distribution of the signed likelihood ratio test statistic, based on a modified signed likelihood ratio statistic r*. This follows developments of Barndorff-Nielsen and others. The software utilizes the approximation to the required Jacobians developed by Skovgaard, which is included in the exposition. Several examples of using the software are provided.
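For orientation, the two approximations named in the abstract can be written compactly; the notation below follows standard accounts of Barndorff-Nielsen's theory rather than this paper's own presentation. The p* approximation to the density of the maximum likelihood estimator, given an ancillary a, is

```latex
p^{*}(\hat\theta \mid a;\, \theta)
  = c(\theta, a)\, \lvert j(\hat\theta) \rvert^{1/2}
    \exp\{\ell(\theta) - \ell(\hat\theta)\},
```

where \(\ell\) is the log-likelihood and \(j\) the observed information, and the modified signed likelihood ratio statistic is

```latex
r^{*} = r + \frac{1}{r}\log\frac{u}{r},
\qquad
r = \operatorname{sign}(\hat\theta - \theta)\,
    \sqrt{2\{\ell(\hat\theta) - \ell(\theta)\}},
```

with u an adjustment involving sample-space derivatives; Skovgaard's contribution, as the abstract notes, is an approximation to the Jacobians that u requires. r* is standard normal to second order, whereas r is standard normal only to first order.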
Statistical validation of simulation models: A case study
Rigorous statistical validation requires that the responses of the model and the real system have the same expected values. However, the modeled and actual responses are not comparable if they are obtained under different scenarios (environmental conditions). Moreover, data on the real system may be unavailable; sensitivity analysis can then be applied to find out whether the model inputs have effects on the model outputs that agree with the experts' intuition. Not only the total model, but also its modules, may be submitted to such sensitivity analyses. This article illustrates these issues through a case study, namely a simulation model for the use of sonar to search for mines on the sea bottom. The methodology, however, applies to models in general.
Keywords: simulation models; statistical validation; statistics
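The equal-expected-values requirement in the first sentence suggests a simple statistical check. The sketch below uses entirely synthetic data standing in for responses of the simulation model and the real system under the same scenario (the abstract stresses that differing scenarios make the comparison invalid); it is not the article's own test procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical responses obtained under the SAME scenario: e.g. search
# times from the simulation model and from trials of the real system.
model_resp = rng.normal(loc=10.0, scale=2.0, size=40)
real_resp = rng.normal(loc=10.3, scale=2.5, size=25)

# Welch's two-sample t-test of equal expected values.  Failing to
# reject is consistent with (but does not by itself prove) validity.
t, p = stats.ttest_ind(model_resp, real_resp, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")
```

When real-system data are unavailable, this comparison is impossible, which is exactly the situation in which the article turns to sensitivity analysis against expert intuition instead.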
The Jeffreys-Lindley Paradox and Discovery Criteria in High Energy Physics
The Jeffreys-Lindley paradox displays how the use of a p-value (or number of standard deviations z) in a frequentist hypothesis test can lead to an inference that is radically different from that of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930s and common today. The setting is the test of a well-specified null hypothesis (such as the Standard Model of elementary particle physics, possibly with "nuisance parameters") versus a composite alternative (such as the Standard Model plus a new force of nature of unknown strength). The p-value, as well as the ratio of the likelihood under the null hypothesis to the maximized likelihood under the alternative, can strongly disfavor the null hypothesis, while the Bayesian posterior probability for the null hypothesis can be arbitrarily large. The academic statistics literature contains many impassioned comments on this paradox, yet there is no consensus either on its relevance to scientific communication or on its correct resolution. The paradox is quite relevant to frontier research in high energy physics. This paper is an attempt to explain the situation to both physicists and statisticians, in the hope that further progress can be made.
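The paradox admits a compact numerical illustration in the textbook normal setting. The numbers and the choice of a normal prior of scale tau under the alternative are illustrative assumptions, not a construction from the paper: with a very large sample, a result several standard deviations from the null can have a small p-value while the Bayes factor still favors the null.

```python
import numpy as np
from scipy import stats

# Test H0: theta = 0 against H1: theta ~ N(0, tau^2), with the sample
# mean xbar ~ N(theta, sigma^2 / n).  Illustrative numbers only.
sigma, tau = 1.0, 1.0
n = 1_000_000
z = 3.29                      # observed number of standard deviations

# Two-sided frequentist p-value: small, so H0 is "rejected".
p_value = 2 * stats.norm.sf(z)

# Bayes factor BF01 = N(xbar; 0, sigma^2/n) / N(xbar; 0, sigma^2/n + tau^2),
# written in closed form in terms of z and the variance ratio.
ratio = n * tau ** 2 / sigma ** 2
bf01 = np.sqrt(1 + ratio) * np.exp(-0.5 * z ** 2 * ratio / (1 + ratio))

print(f"p = {p_value:.4f}, BF01 = {bf01:.1f}")
```

The sqrt(1 + ratio) factor grows without bound in n, so for any fixed z the Bayes factor can be made to favor the null arbitrarily strongly, which is the paradox in miniature.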
Accurate Parametric Inference for Small Samples
We outline how modern likelihood theory, which provides essentially exact inferences in a variety of parametric statistical problems, may routinely be applied in practice. Although the likelihood procedures are based on analytical asymptotic approximations, the focus of this paper is not on theory but on implementation and applications. Numerical illustrations are given for logistic regression, nonlinear models, and linear non-normal models, and we describe a sampling approach for the third of these classes. In the case of logistic regression, we argue that approximations are often more appropriate than `exact' procedures, even when these exist. (Published in Statistical Science, http://www.imstat.org/sts/, by the Institute of Mathematical Statistics, http://dx.doi.org/10.1214/08-STS273.)
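As a taste of the first-order likelihood machinery that this line of work refines to higher order, here is a minimal sketch for a single binomial proportion with made-up counts (not the paper's logistic-regression examples): confidence limits obtained by inverting the signed likelihood root r, compared with the Wald interval.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical data: x successes in n Bernoulli trials.
x, n = 3, 20
p_hat = x / n

def loglik(p):
    return x * np.log(p) + (n - x) * np.log(1 - p)

def r(p):
    """First-order signed likelihood root; N(0,1) to first order."""
    return np.sign(p_hat - p) * np.sqrt(2 * (loglik(p_hat) - loglik(p)))

# 95% likelihood-ratio limits: solve r(p) = +/- 1.96 on each side of p_hat.
z = stats.norm.ppf(0.975)
lo = optimize.brentq(lambda p: r(p) - z, 1e-6, p_hat - 1e-9)
hi = optimize.brentq(lambda p: r(p) + z, p_hat + 1e-9, 1 - 1e-6)

# Wald interval for comparison; it is symmetric and can behave poorly
# for small samples, which motivates the likelihood-based alternatives.
half = z * np.sqrt(p_hat * (1 - p_hat) / n)
print(f"LR interval   = ({lo:.3f}, {hi:.3f})")
print(f"Wald interval = ({p_hat - half:.3f}, {p_hat + half:.3f})")
```

Higher-order methods of the kind the paper implements replace r with the modified statistic r*, tightening the normal approximation from first to second order.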