113 research outputs found

    Implementing and Documenting Random Number Generators

    As simulation and Monte Carlo continue to play an increasing role in statistical research, careful attention must be given to problems that arise in implementing and documenting collections of random number generators. This paper examines the value of theoretical as well as empirical evidence in establishing the quality of generators, the selection of generators to comprise a good basic set, the techniques and efficiency of implementation, and the extent of documentation. Illustrative examples are drawn from various current sources.

    Monte Carlo Techniques in Studying Robust Estimators

    Recent work on robust estimation has led to many procedures that are easy to formulate and straightforward to program but difficult to study analytically. In such circumstances experimental sampling is quite attractive, but the variety and complexity of both estimators and sampling situations make effective Monte Carlo techniques essential. This discussion examines problems, techniques, and results, and draws on examples in studies of robust location and robust regression.
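
    The kind of sampling experiment described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual study design: it compares the Monte Carlo variance of the mean, median, and a 10% trimmed mean under a contaminated normal distribution (all parameter values here are illustrative assumptions).

```python
import random
import statistics

def contaminated_sample(n, eps=0.1, scale=10.0, rng=random):
    # Draw from N(0, 1) with probability 1 - eps, from N(0, scale^2) otherwise:
    # a classic "contaminated normal" sampling situation for robustness studies.
    return [rng.gauss(0.0, scale if rng.random() < eps else 1.0) for _ in range(n)]

def trimmed_mean(xs, trim=0.1):
    # Drop the smallest and largest trim-fraction of the sample, average the rest.
    xs = sorted(xs)
    k = int(len(xs) * trim)
    core = xs[k:len(xs) - k]
    return sum(core) / len(core)

def mc_variance(estimator, n=20, reps=2000, seed=1):
    # Monte Carlo estimate of the sampling variance of an estimator.
    # Fixing the seed gives every estimator the same simulated samples,
    # a common variance-reduction device in such comparisons.
    rng = random.Random(seed)
    est = [estimator(contaminated_sample(n, rng=rng)) for _ in range(reps)]
    m = sum(est) / reps
    return sum((e - m) ** 2 for e in est) / (reps - 1)

for name, f in [("mean", statistics.mean),
                ("median", statistics.median),
                ("10% trimmed mean", trimmed_mean)]:
    print(f"{name:>18}: MC variance = {mc_variance(f):.4f}")
```

    Under contamination the mean's variance is inflated by the wide component, while the median and trimmed mean remain far more stable.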

    Notes on Automating Stem and Leaf Displays

    The stem-and-leaf display is a natural semi-graphic technique to include in statistical computing systems. This paper discusses the choices involved in implementing both automated and flexible versions of the display, develops an algorithm for the automated version, examines various implementation considerations, and presents a set of semi-portable FORTRAN subroutines for producing stem-and-leaf displays.
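
    The core of an automated stem-and-leaf display can be sketched briefly. This is a simplified illustration of the general idea (stems from the leading digits, one-digit leaves, one line per stem), not the paper's algorithm or its FORTRAN subroutines; it assumes nonnegative data and a fixed leaf unit.

```python
from collections import defaultdict

def stem_and_leaf(data, leaf_unit=1):
    # Split each nonnegative value into a stem (tens of leaf units) and a
    # one-digit leaf, then render one line per stem with sorted leaves.
    stems = defaultdict(list)
    for x in sorted(data):
        q = int(x // leaf_unit)          # value expressed in leaf units
        stems[q // 10].append(q % 10)    # stem = leading digits, leaf = last digit
    lines = []
    for stem in range(min(stems), max(stems) + 1):  # include empty stems
        leaves = "".join(str(d) for d in sorted(stems.get(stem, [])))
        lines.append(f"{stem:>3} | {leaves}")
    return "\n".join(lines)

print(stem_and_leaf([12, 15, 21, 24, 24, 30, 31, 33, 47]))
```

    A full implementation would also choose the leaf unit automatically from the data range and handle negative stems, which are among the design choices the paper discusses.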

    Estimation in meta-analyses of response ratios

    BACKGROUND: For outcomes that studies report as the means in the treatment and control groups, some medical applications and nearly half of meta-analyses in ecology express the effect as the ratio of means (RoM), also called the response ratio (RR), analyzed in the logarithmic scale as the log-response-ratio, LRR. METHODS: In random-effects meta-analysis of LRR, with normal and lognormal data, we studied the performance of estimators of the between-study variance, τ² (measured by bias and coverage), in assessing heterogeneity of study-level effects, and also the performance of related estimators of the overall effect in the log scale, λ. We obtained additional empirical evidence from two examples. RESULTS: The results of our extensive simulations showed several challenges in using LRR as an effect measure. Point estimators of τ² had considerable bias or were unreliable, and interval estimators of τ² seldom had the intended 95% coverage for small to moderate-sized samples (n < 40). Results for estimating λ differed between lognormal and normal data. CONCLUSIONS: For lognormal data, we can recommend only SSW, a weighted average in which a study's weight is proportional to its effective sample size (when n ≥ 40), and its companion interval (when n ≥ 10). Normal data posed greater challenges. When the means were far enough from 0 (more than one standard deviation; 4 in our simulations), SSW was practically unbiased, and its companion interval was the only option.
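
    The point estimator recommended above is simple to state in code. The sketch below is my own reading of the abstract's description of SSW (a weighted average of study-level LRRs with weights proportional to the effective sample size n_T·n_C/(n_T + n_C)); the function names and the example data are illustrative, not from the paper.

```python
import math

def lrr(mean_t, mean_c):
    # Log-response-ratio: log of the ratio of treatment mean to control mean.
    return math.log(mean_t / mean_c)

def ssw_estimate(studies):
    # SSW as described in the abstract: a weighted average of study LRRs
    # with weights proportional to the effective sample size
    # n_t * n_c / (n_t + n_c); the weights use no estimated variances.
    num = den = 0.0
    for mean_t, mean_c, n_t, n_c in studies:
        w = n_t * n_c / (n_t + n_c)
        num += w * lrr(mean_t, mean_c)
        den += w
    return num / den

# (treatment mean, control mean, n_t, n_c) -- illustrative values only
studies = [
    (12.0, 10.0, 25, 25),
    (15.0, 11.0, 40, 38),
    (9.5, 10.2, 60, 55),
]
print(f"SSW estimate of the overall log effect: {ssw_estimate(studies):.4f}")
```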

    Network Meta-Analysis of Ulcerative Colitis Pharmacotherapies: Carryover Effects from Induction and Bias of the Results

    Dear Editor, In their network meta-analyses (NMAs) of treatments for ulcerative colitis (UC), Singh et al.¹ did not take into account a complication associated with studies that re-randomized patients for the maintenance phase: differential carryover effects from induction can bias the results. In those studies, patients who responded to induction were re-randomized to maintenance treatments that included placebo. If, however, carryover effects from induction differ substantially among active treatments, the effects of those treatments, relative to placebo, are not comparable.

    On the Q statistic with constant weights for standardized mean difference

    Cochran's Q statistic is routinely used for testing heterogeneity in meta-analysis. Its expected value is also used in several popular estimators of the between-study variance, τ². Those applications generally have not considered the implications of its use of estimated variances in the inverse-variance weights. Importantly, those weights make approximating the distribution of Q (more explicitly, Q_IV) rather complicated. As an alternative, we investigate a new Q statistic, Q_F, whose constant weights use only the studies' effective sample sizes. For the standardized mean difference as the measure of effect, we study, by simulation, approximations to distributions of Q_IV and Q_F, as the basis for tests of heterogeneity and for new point and interval estimators of τ². These include new DerSimonian-Kacker-type moment estimators based on the first moment of Q_F, and novel median-unbiased estimators. The results show that: an approximation based on an algorithm of Farebrother follows both the null and the alternative distributions of Q_F reasonably well, whereas the usual chi-squared approximation for the null distribution of Q_IV and the Biggerstaff-Jackson approximation to its alternative distribution are poor; in estimating τ², our moment estimator based on Q_F is almost unbiased, the Mandel-Paule estimator has some negative bias in some situations, and the DerSimonian-Laird and restricted maximum likelihood estimators have considerable negative bias; and all 95% interval estimators have coverage that is too high when τ² = 0, but otherwise the Q-profile interval performs very well.
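
    The constant-weights statistic described above has a very compact form. This sketch is my own reading of the abstract: a Cochran-type Q whose weights are the studies' effective sample sizes rather than estimated inverse variances (function names and example numbers are illustrative, not from the paper).

```python
def q_constant_weights(effects, n1, n2):
    # A Cochran-type Q with constant weights equal to the studies'
    # effective sample sizes n1*n2/(n1+n2), instead of the usual
    # estimated inverse-variance weights.
    w = [a * b / (a + b) for a, b in zip(n1, n2)]
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))

effects = [0.2, 0.5, 0.1, 0.4]   # standardized mean differences (illustrative)
n1 = [20, 35, 50, 25]            # group sample sizes per study
n2 = [22, 30, 48, 27]
print(f"Q with constant weights = {q_constant_weights(effects, n1, n2):.3f}")
```

    Because the weights do not depend on estimated variances, the statistic's distribution is easier to approximate, which is the motivation the abstract gives for studying it.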

    Exploring consequences of simulation design for apparent performance of methods of meta-analysis

    Contemporary statistical publications rely on simulation to evaluate performance of new methods and compare them with established methods. In the context of random-effects meta-analysis of log-odds-ratios, we investigate how choices in generating data affect such conclusions. The choices we study include the overall log-odds-ratio, the distribution of probabilities in the control arm, and the distribution of study-level sample sizes. We retain the customary normal distribution of study-level effects. To examine the impact of the components of simulations, we assess the performance of the best available inverse-variance-weighted two-stage method, a two-stage method with constant sample-size-based weights, and two generalized linear mixed models. The results show no important differences between fixed and random sample sizes. In contrast, we found differences among data-generation models in estimation of heterogeneity variance and overall log-odds-ratio. This sensitivity to design poses challenges for use of simulation in choosing methods of meta-analysis.
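
    A data-generation scheme of the general kind discussed above can be sketched as follows. This is a minimal illustration of one possible design, not the paper's actual simulation protocol: study effects are drawn from the customary normal distribution, the control-arm probability is fixed, and study sizes are drawn from a user-supplied list (all parameter values are assumptions for the example).

```python
import math
import random

def simulate_meta(k, overall_lor, tau, p_control, n_sizes, seed=0):
    # Generate k two-arm studies for a random-effects meta-analysis of
    # log-odds-ratios: theta_i ~ N(overall_lor, tau^2), control-arm
    # probability p_control, per-arm sample size drawn from n_sizes.
    rng = random.Random(seed)
    studies = []
    logit_c = math.log(p_control / (1 - p_control))
    for _ in range(k):
        theta_i = rng.gauss(overall_lor, tau)         # study-level true LOR
        n = rng.choice(n_sizes)                        # random study size
        p_t = 1 / (1 + math.exp(-(logit_c + theta_i))) # treatment-arm probability
        x_c = sum(rng.random() < p_control for _ in range(n))  # control events
        x_t = sum(rng.random() < p_t for _ in range(n))        # treatment events
        studies.append((x_t, n, x_c, n))               # (events, n) per arm
    return studies

print(simulate_meta(k=5, overall_lor=0.5, tau=0.3,
                    p_control=0.2, n_sizes=[50, 100, 200]))
```

    Varying the control-arm probability distribution and the sample-size distribution in such a generator is exactly the kind of design choice whose consequences the paper examines.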
