135 research outputs found

    A Socratic Dialogue

    Socrates has found some aspects of medical biostatistics a bit confusing, and wishes to discuss some of these issues with Simplicio, a prominent medical researcher. This Socratic dialogue will shed some light on the errant use of parametric analyses in clinical trials.

    Training Statisticians To Be Alert To The Dangers Of Misapplying Statistical Methods

    Statisticians are faced with a variety of challenges. Their ability to cope successfully with these challenges depends, in large part, on the quality of their training. It is not the purpose of this article to present a comprehensive training plan that will overhaul the standard curriculum a statistician might follow under current training regimens (i.e., in a degree program). Rather, the objective is to point out important areas that appear to be under-represented in standard curricula and correspondingly overlooked too often in practice. The hope is that these areas might be better integrated into the training of the next generation of statisticians.

    An Empirical Demonstration of the Need for Exact Tests

    The robustness of parametric analyses is rarely questioned or qualified. Generally understood, robustness means that the exact and approximate p-values will lie on the same side of alpha for any reasonable data set, where 1) any data set would qualify as reasonable and 2) robustness holds universally, for all alpha levels and approximations. For this to be true, the approximation would need to be perfect all of the time: any discrepancy between the approximate and the exact p-value, for any combination of alpha level and data set, would constitute a violation. Clearly, this is not the case, and when confronted with this reality, the “No True Scotsman” fallacy is often invoked with the declaration that it must have been a pathological data set, as if this would obviate the responsibility to select an appropriate research method. Ideally, a method would be selected because it is optimal, or at least appropriate, without the need for special pleading, but judging by how often approximations are used when the exact values they are trying to approximate are readily available, current practice does not come close to this ideal. One possible explanation is that little information is available on data sets for which the approximations fail miserably. Examples are presented in an effort to clarify the need for exact analyses.
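    The discrepancy is easy to exhibit. The sketch below uses a small, hypothetical 3-versus-3 table (made up for illustration): the chi-squared approximation and the exact hypergeometric p-value land on opposite sides of alpha = 0.05, so the two analyses reach opposite conclusions.

```python
from math import comb, erfc, sqrt

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables (with the same
    margins) that are no more probable than the observed one."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def p(x):
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

def chi2_approx_p(a, b, c, d):
    """Pearson chi-squared p-value (1 df), the usual large-sample approximation."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return erfc(sqrt(stat / 2))  # survival function of chi-squared with 1 df

# Sparse 3-vs-3 table [[3, 0], [0, 3]]:
print(fisher_exact_p(3, 0, 0, 3))  # exact p = 0.10, above alpha = 0.05
print(chi2_approx_p(3, 0, 0, 3))   # approximate p is about 0.014, below alpha
```

With both p-values available at negligible cost, the approximation has nothing to recommend it here.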

    Adaptive Tests for Ordered Categorical Data

    Consider testing for independence against stochastic order in an ordered 2xJ contingency table, under product multinomial sampling. In applications one may wish to exploit prior information concerning the direction of the treatment effect, yet ultimately end up with a testing procedure with good frequentist properties. As such, a reasonable objective may be to simultaneously maximize power at a specified alternative and ensure reasonable power for all other alternatives of interest. For this objective, none of the available testing approaches are completely satisfactory. A new class of admissible adaptive tests is derived. Each test in this class strictly preserves the Type I error rate and strikes a balance between good global power and nearly optimal (envelope) power to detect a specific alternative of most interest. Prior knowledge of the direction of the treatment effect, the level of confidence in this prior information, and possibly the marginal totals might be used to select a specific test from this class.
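    As a minimal sketch of the exact conditional machinery such tests build on (this is a plain linear trend test, not the adaptive class itself), one can condition on all margins of the 2xJ table and enumerate the multiple hypergeometric distribution; the column scores are one place where prior directional information can enter, and conditioning strictly preserves the Type I error rate. The table and scores below are hypothetical.

```python
from itertools import product
from math import comb, prod

def exact_trend_p(x_obs, col_totals, scores):
    """One-sided exact conditional p-value for the linear trend statistic
    T = sum_j scores[j] * x[j], where x[j] is the row-1 count in column j
    of a 2xJ table. Conditions on all margins (multiple hypergeometric)."""
    r1, n = sum(x_obs), sum(col_totals)
    t_obs = sum(s * x for s, x in zip(scores, x_obs))
    denom = comb(n, r1)
    p = 0.0
    for x in product(*(range(c + 1) for c in col_totals)):
        if sum(x) != r1:
            continue  # row margin must match
        if sum(s * xi for s, xi in zip(scores, x)) >= t_obs - 1e-12:
            p += prod(comb(c, xi) for c, xi in zip(col_totals, x)) / denom
    return p

# Hypothetical 2x3 table with row-1 counts (0, 1, 3) out of column totals
# (3, 3, 3); scores (0, 1, 2) encode a prior belief in an increasing effect.
print(exact_trend_p((0, 1, 3), (3, 3, 3), (0, 1, 2)))
```

With flat scores the statistic is constant over all tables and the p-value is 1, which makes the role of the score choice, and hence of the prior information, concrete.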

    Quantifying The Proportion Of Cases Attributable To An Exposure

    The attributable fraction and the average attributable fractions, which are commonly used to assess the relative effect of several exposures on the prevalence of a disease, do not represent the proportion of cases caused by each exposure. Furthermore, the sum of attributable fractions over all exposures generally exceeds not only the attributable fraction for all exposures taken together, but also 100%. Other measures are discussed here, including the directly attributable fraction and the confounding fraction, which may be more suitable for defining the fraction directly attributable to an exposure.
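    A toy calculation (hypothetical population fractions and risks, chosen only to illustrate the arithmetic) shows how this happens when cases require both exposures: removing either exposure alone eliminates the same jointly caused cases, so those cases are counted twice in the sum.

```python
# Hypothetical population strata: fraction of people with each (E1, E2)
# exposure pattern, and the disease risk in each stratum. Here the disease
# requires BOTH exposures (risk 0.5) on top of a background risk of 0.05.
strata = {(0, 0): 0.25, (1, 0): 0.25, (0, 1): 0.25, (1, 1): 0.25}

def risk(e1, e2):
    return 0.5 if (e1 and e2) else 0.05

def prevalence(remove=()):
    """Disease prevalence after (counterfactually) removing the listed exposures."""
    return sum(w * risk(0 if 1 in remove else e1, 0 if 2 in remove else e2)
               for (e1, e2), w in strata.items())

p = prevalence()
af1 = (p - prevalence(remove=(1,))) / p        # attributable fraction for E1
af2 = (p - prevalence(remove=(2,))) / p        # attributable fraction for E2
af_joint = (p - prevalence(remove=(1, 2))) / p  # both exposures together

# The sum of the single-exposure fractions exceeds both the joint
# attributable fraction and 100%.
print(af1 + af2, af_joint)
```

Here af1 = af2 = af_joint (each is about 0.69), so the sum is about 1.38: neither single-exposure fraction is the proportion of cases caused by that exposure alone.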

    Randomization Technique, Allocation Concealment, Masking, And Susceptibility Of Trials To Selection Bias

    It is widely believed that baseline imbalances in randomized clinical trials must necessarily be random. Yet even among masked randomized trials conducted with allocation concealment, there are mechanisms by which patients with specific covariates may be selected for inclusion into a particular treatment group. This selection bias would force imbalance in those covariates, measured or unmeasured, that are used for the patient selection. Unfortunately, few trials provide adequate information to determine whether there was allocation concealment, how the randomization was conducted, and how successful the masking may have been, let alone whether selection bias was adequately controlled. In this article we reinforce the message that allocation details should be presented in full. We also facilitate such reporting by identifying and clarifying the role of specific reportable design features. Because the designs that eliminate all selection bias are rarely feasible in practice, our development has important implications not only for the implementation, but also for the reporting and interpretation, of randomized clinical trials.
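    One such mechanism can be simulated in a few lines (a minimal sketch with made-up parameters, not the authors' model): under permuted-block randomization, the tail of each block is perfectly predictable once the remaining slots all carry the same assignment, and an investigator who steers healthier patients into predictable treatment slots forces a covariate imbalance that the randomization cannot repair.

```python
import random

def covariate_imbalance(n_blocks=2000, block=4, biased=True, seed=1):
    """Simulate permuted-block randomization of a baseline covariate
    (standard normal). If `biased`, a healthier patient (hypothetical
    covariate shift of +1) is enrolled whenever the upcoming allocation
    is predictable and is treatment. Returns the T-minus-C mean difference."""
    rng = random.Random(seed)
    cov = {"T": [], "C": []}
    for _ in range(n_blocks):
        seq = ["T"] * (block // 2) + ["C"] * (block // 2)
        rng.shuffle(seq)
        for i, arm in enumerate(seq):
            # The next allocation is certain once every remaining slot
            # in the block carries the same assignment.
            predictable = len(set(seq[i:])) == 1
            shift = 1.0 if (biased and predictable and arm == "T") else 0.0
            cov[arm].append(rng.gauss(shift, 1.0))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cov["T"]) - mean(cov["C"])

print(covariate_imbalance(biased=True))   # clear baseline imbalance
print(covariate_imbalance(biased=False))  # near zero, as randomization intends
```

The imbalance arises even though every allocation was generated by the randomization procedure, which is exactly why reporting the randomization technique and concealment details matters.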

    Accuracy of the Berger-Exner test for detecting third-order selection bias in randomised controlled trials: a simulation-based investigation

    BACKGROUND: Randomised controlled trials (RCT) are highly influential upon medical decisions. Thus RCTs must not distort the truth. One threat to internal trial validity is the correct prediction of future allocations (selection bias). The Berger-Exner test detects such bias but has not been widely utilized in practice. One reason for this non-utilisation may be a lack of information regarding its test accuracy. The objective of this study is to assess the accuracy of the Berger-Exner test on the basis of relevant simulations for RCTs with dichotomous outcomes. METHODS: Simulated RCTs with various parameter settings were generated, using R software, and subjected to bias-free and selection bias scenarios. The effect size inflation due to bias was quantified. The test was applied in both scenarios and the pooled sensitivity and specificity, with 95% confidence intervals for alpha levels of 1%, 5%, and 20%, were computed. Summary ROC curves were generated and the relationships of parameters with test accuracy were explored. RESULTS: An effect size inflation of 71%-99% was established. Test sensitivity was 1.00 (95% CI: 0.99-1.00) for alpha levels of 1%, 5%, and 20%; test specificity was 0.94 (95% CI: 0.93-0.96), 0.82 (95% CI: 0.80-0.84), and 0.56 (95% CI: 0.54-0.58) for alpha 1%, 5%, and 20%, respectively. Test accuracy was best with the maximal procedure used with a maximum tolerated imbalance (MTI) = 2 as the randomisation method at alpha 1%. CONCLUSIONS: The results of this simulation study suggest that the Berger-Exner test is generally accurate for identifying third-order selection bias. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/1471-2288-14-114) contains supplementary material, which is available to authorized users.

    Per Family Error Rates: A Response

    As the authors note, the familywise error rate (FWER) is used rather often, whereas the per-family error rate (PFER) is not. Is this as it should be? It would seem that no universal answer is possible, as context determines which is more appropriate in any given application. In the general scenario of testing the benefit of an intervention, one might ideally want an error rate that aligns with the decision for benefit. In most cases the FWER does this pretty well, while allowing one to identify those endpoints for which benefit exists. The PFER does not seem to have any advantage over the FWER in this general testing scenario. Perhaps in some other scenarios the PFER might have some reasonable role.
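    The relationship between the two rates is easy to make concrete. For m independent tests of true null hypotheses at level alpha (a standard calculation, not specific to the exchange above), the PFER is the expected number of false rejections and the FWER is the probability of at least one:

```python
def error_rates(m, alpha):
    """For m independent tests of true null hypotheses at level alpha:
    PFER = m * alpha (expected number of false rejections),
    FWER = 1 - (1 - alpha)^m (probability of at least one)."""
    pfer = m * alpha
    fwer = 1 - (1 - alpha) ** m
    return pfer, fwer

pfer, fwer = error_rates(10, 0.05)
print(pfer, fwer)  # PFER 0.5 vs FWER of about 0.40

# Since FWER <= PFER always, a Bonferroni adjustment (testing each
# hypothesis at alpha / m) caps the PFER at alpha, and therefore
# controls the FWER at alpha as well.
```

The PFER can exceed 1 for large families, whereas the FWER is a probability; which quantity aligns with the decision at hand is exactly the contextual question raised above.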

    Parametric Analyses In Randomized Clinical Trials

    One salient feature of randomized clinical trials is that patients are randomly allocated to treatment groups, but not randomly sampled from any target population. Without random sampling, parametric analyses are inexact, yet they are still often used in clinical trials. Given the availability of an exact test, it would still be conceivable to argue convincingly that for technical reasons (upon which we elaborate) a parametric test might be preferable in some situations. Having acknowledged this possibility, we point out that such an argument cannot be convincing without supporting facts concerning the specifics of the problem at hand. Moreover, we have never seen these arguments made in practice. We conclude that the frequent preference for parametric analyses over exact analyses is without merit. In this article we briefly present the scientific basis for preferring exact tests, and refer the interested reader to the vast literature backing up these claims. We also refute the assertions offered in some recent publications promoting parametric analyses as being superior in some general sense to exact analyses. In asking the reader to keep an open mind to our arguments, we are suggesting the possibility that numerous researchers have published incorrect advice, which has then been taught extensively in schools. We ask the reader to consider the relative merits of the arguments, but not the frequency with which each argument is made.
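    The randomization itself supplies the reference distribution that a parametric model only approximates. A minimal sketch (hypothetical data, difference-of-means statistic): enumerate every treatment/control split the randomization could have produced and compare the observed statistic against that exact distribution.

```python
from itertools import combinations

def randomization_p(treat, ctrl):
    """Exact two-sided randomization test on the difference of means:
    re-assign the observed responses to every possible treatment/control
    split of the same sizes and count the splits at least as extreme
    as the one actually observed."""
    pooled = list(treat) + list(ctrl)
    n_t = len(treat)
    obs = abs(sum(treat) / n_t - sum(ctrl) / len(ctrl))
    extreme = total = 0
    for idx in combinations(range(len(pooled)), n_t):
        chosen = set(idx)
        t = [pooled[i] for i in chosen]
        c = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        total += 1
        if abs(sum(t) / len(t) - sum(c) / len(c)) >= obs - 1e-12:
            extreme += 1
    return extreme / total

# Hypothetical responses: 2 of the 20 possible splits are as extreme
# as the observed one, so the exact p-value is 0.1.
print(randomization_p([5, 6, 7], [1, 2, 3]))
```

No sampling model or normality assumption is invoked: the p-value is exact under the physical act of randomization alone, which is precisely the scientific basis referred to above.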

    SNP Haplotype Mapping in a Small ALS Family

    The identification of genes for monogenic disorders has proven to be highly effective for understanding disease mechanisms, pathways and gene function in humans. Nevertheless, while thousands of Mendelian disorders have not yet been mapped, there has been a trend away from studying single-gene disorders. In part, this is due to the fact that many of the remaining single-gene families are not large enough to map the disease locus to a single site in the genome. New tools and approaches are needed to allow researchers to effectively tap into this genetic gold-mine. Towards this goal, we have used haploid cell lines to experimentally validate the use of high-density single nucleotide polymorphism (SNP) arrays to define genome-wide haplotypes and candidate regions, using a small amyotrophic lateral sclerosis (ALS) family as a prototype. Specifically, we used haploid cell lines to determine if high-density SNP arrays accurately predict haplotypes across entire chromosomes and show that haplotype information significantly enhances the genetic information in small families. Panels of haploid cell lines were generated and a 5 centimorgan (cM) short tandem repeat polymorphism (STRP) genome scan was performed. Experimentally derived haplotypes for entire chromosomes were used to directly identify regions of the genome identical-by-descent in 5 affected individuals. Comparisons between experimentally determined and in silico haplotypes predicted from SNP arrays demonstrate that SNP analysis of diploid DNA accurately predicted chromosomal haplotypes. These methods precisely identified 12 candidate intervals, which are shared by all 5 affected individuals. Our study illustrates how genetic information can be maximized using readily available tools as a first step in mapping single-gene disorders in small families.
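    The final step, finding regions shared identical-by-descent by every affected individual, amounts to intersecting per-person candidate intervals along each chromosome. A sketch with hypothetical coordinates (positions could be cM or SNP indices; the data below are invented for illustration):

```python
from functools import reduce

def intersect(a, b):
    """Intersect two lists of half-open (start, end) intervals."""
    out = []
    for s1, e1 in a:
        for s2, e2 in b:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:  # non-empty overlap
                out.append((s, e))
    return sorted(out)

def shared_regions(per_person):
    """Candidate regions carried by every individual: the running
    intersection of each person's candidate haplotype intervals."""
    return reduce(intersect, per_person)

# Hypothetical candidate intervals for three affected individuals:
people = [[(0, 50), (70, 100)], [(10, 60)], [(20, 40), (45, 55)]]
print(shared_regions(people))  # [(20, 40), (45, 50)]
```

Adding individuals can only shrink or split the shared regions, which is why even a handful of affected relatives narrows the candidate intervals substantially.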