
    Immortal person-time in studies of cancer outcomes

    Immortal person-time arises in an observational study when follow-up time is included in person-time at risk for the study outcome, even though that time precedes the last event required for entry into the study population or satisfaction of an exposure definition [1,2]. Immune person-time is similar, but it pertains to outcomes other than death. If a study patient were to have incurred the outcome or been censored during immortal or immune person-time, then the patient would not have satisfied the requirements for inclusion in the study or exposure category. A study or exposure category that includes immortal or immune person-time yields a downwardly biased outcome rate and an upwardly biased survival curve. This bias occurs because the accumulated person-time exceeds the person-time actually at risk. When comparing rates or survival curves among exposure categories, the net effect of immortal or immune person-time bias may be in any direction.
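    The downward bias described above can be sketched with hypothetical numbers (not taken from the article): if patients must survive a qualification period before entering an exposure category, counting that period as time at risk inflates the denominator of the rate.

    ```python
    # Hypothetical illustration of immortal person-time bias.
    # 10 patients must survive a 1-year qualification period to enter the
    # exposure category, then each contributes 2 person-years at risk,
    # during which 4 events occur.
    events = 4
    time_at_risk = 10 * 2.0   # person-years actually at risk after qualification
    immortal_time = 10 * 1.0  # person-years during which no event was possible

    true_rate = events / time_at_risk                      # 0.20 per person-year
    biased_rate = events / (time_at_risk + immortal_time)  # ~0.13 per person-year
    ```

    Including the immortal person-years lowers the apparent rate even though no additional events could have occurred during that time.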

    Reflection on modern methods: five myths about measurement error in epidemiological research

    Epidemiologists are often confronted with datasets that contain measurement error due to, for instance, mistaken data entries, inaccurate recordings, and measurement-instrument or procedural errors. If the effect of measurement error is misjudged, the data analyses are hampered and the validity of the study's inferences may be affected. In this paper, we describe five myths that contribute to misjudgments about measurement error, regarding its expected structure, its impact, and solutions to mitigate the problems resulting from mismeasurements. The aim is to clarify these measurement error misconceptions. We show that the influence of measurement error in an epidemiological data analysis can play out in ways that go beyond simple heuristics, such as heuristics about whether or not to expect attenuation of the effect estimates. While we encourage epidemiologists to deliberate about the structure and potential impact of measurement error in their analyses, we also recommend exercising restraint when making claims about the magnitude or even direction of the effect of measurement error if those claims are not accompanied by statistical measurement error corrections or quantitative bias analysis. Suggestions for alleviating the problems or investigating the structure and magnitude of measurement error are given.
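    The best-known heuristic the abstract alludes to, attenuation toward the null, can be sketched with a minimal simulation (an illustration of classical measurement error in general, not of the paper's own examples); the point is that the attenuation factor follows from the error variance, and other error structures need not behave this way.

    ```python
    import numpy as np

    # Classical (random, nondifferential) error in the exposure attenuates a
    # simple regression slope. Simulated data; all numbers are illustrative.
    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)                      # true exposure
    y = 2.0 * x + rng.normal(size=n)            # outcome, true slope = 2
    x_err = x + rng.normal(scale=1.0, size=n)   # exposure measured with error

    slope_true = np.cov(x, y)[0, 1] / np.var(x)
    slope_err = np.cov(x_err, y)[0, 1] / np.var(x_err)
    # Expected attenuation factor: var(x) / (var(x) + var(error)) = 0.5,
    # so slope_err is close to 1.0 rather than 2.0.
    ```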

    The Importance of Making Assumptions in Bias Analysis

    Quantitative bias analyses allow researchers to adjust for uncontrolled confounding, given specification of certain bias parameters. When researchers are concerned about unknown confounders, plausible values for these bias parameters will be difficult to specify. Ding and VanderWeele developed bounding factor and E-value approaches that require the user to specify only some of the bias parameters. We describe the mathematical meaning of bounding factors and E-values and the plausibility of these methods in an applied context. We encourage researchers to pay particular attention to the assumption made when using E-values: that the prevalence of the uncontrolled confounder among the exposed is 100% (or, equivalently, that the prevalence of the exposure among those without the confounder is 0%). We contrast methods that attempt to bound biases or effects with alternative approaches such as quantitative bias analysis. We provide an example where failure to make this distinction led to erroneous statements. If the primary concern in an analysis is with known but unmeasured potential confounders, then E-values are not needed and may be misleading. In cases where the concern is with unknown confounders, the E-value assumption of an extreme possible prevalence of the confounder limits its practical utility.
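    For context, the standard E-value computation (VanderWeele and Ding's formula for a risk ratio, sketched here independently of this article's argument) is a one-liner; the article's point is about the assumptions behind it, not the arithmetic.

    ```python
    import math

    def e_value(rr: float) -> float:
        """E-value for an observed risk ratio: the minimum strength of
        association (risk-ratio scale) that an uncontrolled confounder would
        need with both treatment and outcome to explain the estimate away.
        Protective estimates (rr < 1) are handled by taking the reciprocal."""
        if rr < 1:
            rr = 1 / rr
        return rr + math.sqrt(rr * (rr - 1))

    # e_value(2.0) = 2 + sqrt(2), roughly 3.41
    ```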

    Pragmatic considerations for negative control outcome studies to guide non-randomized comparative analyses: A narrative review

    Purpose: This narrative review describes the application of negative control outcome (NCO) methods to assess potential bias due to unmeasured or mismeasured confounders in non-randomized comparisons of drug effectiveness and safety. An NCO is assumed to have no causal relationship with a treatment under study while being subject to the same confounding structure as the treatment and outcome of interest; an association between treatment and NCO then reflects the potential for uncontrolled confounding between treatment and outcome. Methods: We focus on two recently completed NCO studies that assessed the comparability of outcome risk for patients initiating different osteoporosis medications and lipid-lowering therapies, illustrating several ways in which confounding may arise. In these studies, NCO methods were implemented in claims-based data sources, with the results used to guide the decision to proceed with comparative effectiveness or safety analyses. Results: Based on this research, we provide recommendations for future NCO studies, including considerations for the identification of confounding mechanisms in the target patient population, the selection of NCOs expected to satisfy the required assumptions, the interpretation of NCO effect estimates, and the mitigation of uncontrolled confounding detected in NCO analyses. We propose the use of NCO studies prior to initiating comparative effectiveness or safety research, providing information on the potential presence of uncontrolled confounding in those comparative analyses. Conclusions: Given the increasing use of non-randomized designs for regulatory decision-making, the application of NCO methods will strengthen the study design, analysis, and interpretation of real-world data and the credibility of the resulting real-world evidence.
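    The NCO logic described above can be sketched with a toy simulation (entirely hypothetical, not one of the review's case studies): when an unmeasured confounder drives both treatment and a negative control outcome, treatment and NCO are associated even though treatment has no causal effect on the NCO, flagging uncontrolled confounding.

    ```python
    import numpy as np

    # Toy data-generating process: U confounds treatment; the NCO depends on U
    # but, by construction, not on treatment.
    rng = np.random.default_rng(1)
    n = 200_000
    u = rng.normal(size=n)                         # unmeasured confounder
    treated = (u + rng.normal(size=n)) > 0         # treatment driven partly by U
    nco = 0.5 * u + rng.normal(size=n)             # NCO: zero treatment effect

    # A clearly nonzero treated-vs-untreated difference in the NCO signals
    # uncontrolled confounding in any treatment-outcome comparison sharing U.
    diff = nco[treated].mean() - nco[~treated].mean()
    ```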

    On Compulsory Preregistration of Protocols (Response)


    Should Preregistration of Epidemiologic Study Protocols Become Compulsory? Reflections and a Counterproposal
