
    Methods to assess seasonal effects in epidemiological studies of infectious diseases—exemplified by application to the occurrence of meningococcal disease

    Seasonal variation in occurrence is a common feature of many diseases, especially those of infectious origin. Studies of seasonal variation contribute to healthcare planning and to the understanding of the aetiology of infections. In this article, we provide an overview of statistical methods for the assessment and quantification of seasonality of infectious diseases, as exemplified by their application to meningococcal disease in Denmark in 1995–2011. Additionally, we discuss the conditions under which seasonality should be considered as a covariate in studies of infectious diseases. The methods considered range from the simplest comparison of disease occurrence between the extremes of summer and winter, through modelling of the intensity of seasonal patterns by use of a sine curve, to more advanced generalized linear models. All three classes of method have advantages and disadvantages. The choice among analytical approaches should ideally reflect the research question of interest. Simple methods are compelling, but may overlook important seasonal peaks that would have been identified if more advanced methods had been applied. For most studies, we suggest the use of methods that allow estimation of the magnitude and timing of seasonal peaks and valleys, ideally with a measure of the intensity of seasonality, such as the peak-to-low ratio. Seasonality may be a confounder in studies of infectious disease occurrence when it fulfils the three primary criteria for being a confounder, i.e. when both the disease occurrence and the exposure vary seasonally without seasonality being a step in the causal pathway. In these situations, confounding by seasonality should be controlled as for any confounder.
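The sine-curve approach mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: with twelve equally spaced monthly counts, the least-squares fit of a single annual harmonic has a closed form (because the sine and cosine terms are orthogonal on that grid), and the peak-to-low ratio follows directly from the fitted amplitude. The function name and the simulated counts are invented for the example.

```python
import math

def fit_annual_harmonic(monthly_counts):
    """Least-squares fit of counts[t] ~ a + b*sin(2*pi*t/12) + c*cos(2*pi*t/12).
    For 12 equally spaced months the harmonic terms are orthogonal,
    so the fit has a closed form (a classic cosinor-style model)."""
    n = len(monthly_counts)
    a = sum(monthly_counts) / n                      # mean level
    b = 2.0 / n * sum(y * math.sin(2 * math.pi * t / 12)
                      for t, y in enumerate(monthly_counts))
    c = 2.0 / n * sum(y * math.cos(2 * math.pi * t / 12)
                      for t, y in enumerate(monthly_counts))
    amplitude = math.hypot(b, c)                     # size of the seasonal swing
    # b*sin(x) + c*cos(x) = amplitude*sin(x + phi) with phi = atan2(c, b);
    # the peak occurs where the argument equals pi/2.
    peak_month = ((math.pi / 2 - math.atan2(c, b)) * 12 / (2 * math.pi)) % 12
    peak_to_low = (a + amplitude) / (a - amplitude)  # intensity of seasonality
    return a, amplitude, peak_month, peak_to_low

# Hypothetical monthly case counts with a clear annual peak (illustration only).
counts = [20 + 6 * math.sin(2 * math.pi * t / 12) for t in range(12)]
level, amp, peak, ratio = fit_annual_harmonic(counts)
```

Here the recovered peak-to-low ratio is (20 + 6)/(20 − 6) ≈ 1.86. The generalized linear models the abstract mentions extend this idea by adding trend terms and a count likelihood (e.g. Poisson), rather than plain least squares.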

    Insights into different results from different causal contrasts in the presence of effect-measure modification

    Purpose: Both propensity score (PS) matching and inverse probability of treatment weighting (IPTW) allow causal contrasts, albeit different ones. In the presence of effect-measure modification, different analytic approaches produce different summary estimates. Methods: We present a spreadsheet example that assumes a dichotomous exposure, covariate, and outcome. The covariate can be a confounder or not and a modifier of the relative risk (RR) or not. Based on expected cell counts, we calculate RR estimates using five summary estimators: Mantel-Haenszel (MH), maximum likelihood (ML), the standardized mortality ratio (SMR), PS matching, and a common implementation of IPTW. Results: Without effect-measure modification, all approaches produce identical results. In the presence of effect-measure modification and regardless of the presence of confounding, results from the SMR and PS are identical, but IPTW can produce strikingly different results (e.g., RR = 0.83 vs. RR = 1.50). In such settings, MH and ML do not estimate a population parameter and results for those measures fall between PS and IPTW. Conclusions: Discrepancies between PS and IPTW reflect different weighting of stratum-specific effect estimates. SMR and PS matching assign weights according to the distribution of the effect-measure modifier in the exposed subpopulation, whereas IPTW assigns weights according to the distribution of the entire study population. In pharmacoepidemiology, contraindications to treatment that also modify the effect might be prevalent in the population, but would be rare among the exposed. In such settings, estimating the effect of exposure in the exposed rather than the whole population is preferable.
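The two weighting schemes described above can be made concrete with a small worked example. The sketch below is illustrative, not the paper's spreadsheet: it standardizes stratum-specific risks using either the exposed subpopulation (the SMR-type contrast that PS matching targets) or the whole study population (the IPTW-type contrast). The strata and risks are invented to show how strongly the two contrasts can diverge under effect-measure modification.

```python
def standardized_rr(strata, weights):
    """Standardized risk ratio over covariate strata.
    Each stratum: r1/r0 = risks in exposed/unexposed,
    n1/n0 = numbers of exposed/unexposed subjects.
    weights='exposed' -> SMR-type contrast (effect in the exposed);
    weights='total'   -> IPTW-type contrast (effect in the whole population)."""
    num = den = 0.0
    for s in strata:
        w = s["n1"] if weights == "exposed" else s["n1"] + s["n0"]
        num += w * s["r1"]   # standardized risk under exposure
        den += w * s["r0"]   # standardized risk under non-exposure
    return num / den

# Hypothetical strata with effect-measure modification: the stratum where
# treatment appears harmful (e.g. a relative contraindication) is rare
# among the exposed but common in the population.
strata = [
    {"r1": 0.1, "r0": 0.2, "n1": 900, "n0": 100},  # stratum RR = 0.5
    {"r1": 0.4, "r0": 0.2, "n1": 100, "n0": 900},  # stratum RR = 2.0
]
rr_exposed = standardized_rr(strata, "exposed")  # effect in the exposed
rr_total = standardized_rr(strata, "total")      # effect in the population
```

With these invented numbers the SMR-type contrast is 0.65 while the IPTW-type contrast is 1.25, on opposite sides of the null, which mirrors the kind of discrepancy the abstract reports.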

    Statistical inference in abstracts of major medical and epidemiology journals 1975–2014: a systematic review

    Since its introduction in the twentieth century, null hypothesis significance testing (NHST), a hybrid of significance testing (ST) advocated by Fisher and null hypothesis testing (NHT) developed by Neyman and Pearson, has become widely adopted but has also been a source of debate. The principal alternative to such testing is estimation with point estimates and confidence intervals (CI). Our aim was to estimate time trends in NHST, ST, NHT and CI reporting in abstracts of major medical and epidemiological journals. We reviewed 89,533 abstracts in five major medical journals and seven major epidemiological journals, 1975–2014, and estimated time trends in the proportions of abstracts containing statistical inference. In those abstracts, we estimated time trends in the proportions relying on NHST and its major variants, ST and NHT, and in the proportions reporting CIs without explicit use of NHST (CI-only approach). The CI-only approach rose monotonically during the study period in the abstracts of all journals. In Epidemiology abstracts, as a result of the journal’s editorial policy, the CI-only approach has always been the most common approach. In the other 11 journals, the NHST approach started out more common, but by 2014, this disparity had narrowed, disappeared or reversed in 9 of them. The exceptions were JAMA, New England Journal of Medicine, and Lancet abstracts, where the predominance of the NHST approach prevailed over time. In 2014, the CI-only approach was as popular as the NHST approach in the abstracts of 4 of the epidemiology journals: the American Journal of Epidemiology (48%), the Annals of Epidemiology (55%), Epidemiology (79%) and the International Journal of Epidemiology (52%). The reporting of CIs without explicitly interpreting them as statistical tests is becoming more common in abstracts, particularly in epidemiology journals. Although NHST is becoming less popular in abstracts of most epidemiology journals studied and some widely read medical journals, it is still very common in the abstracts of other widely read medical journals, especially in the hybrid form of ST and NHT in which p values are reported numerically along with declarations of the presence or absence of statistical significance.

    Performance of propensity score calibration - A simulation study

    Confounding can be a major source of bias in nonexperimental research. The authors recently introduced propensity score calibration (PSC), which combines propensity scores and regression calibration to address confounding by variables unobserved in the main study by using variables observed in a validation study. Here, the authors assess the performance of PSC using simulations in settings with and without violation of the key assumption of PSC: that the error-prone propensity score estimated in the main study is a surrogate for the gold-standard propensity score (i.e., it contains no additional information on the outcome). The assumption can be assessed if data on the outcome are available in the validation study. If data are simulated allowing for surrogacy to be violated, results depend largely on the extent of violation. If surrogacy holds, PSC leads to bias reduction between 32% and 106% (>100% representing overcorrection). If surrogacy is violated, PSC can lead to an increase in bias. Surrogacy is violated when the direction of confounding of the exposure-disease association caused by the unobserved variable(s) differs from that of the confounding due to observed variables. When surrogacy holds, PSC is a useful approach to adjust for unmeasured confounding using validation data.
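The regression-calibration logic behind PSC can be sketched in a deliberately simplified scalar form. This is not the published method, which corrects the exposure and propensity-score coefficients jointly via a calibration matrix: here, the gold-standard propensity score is regressed on the error-prone one in the validation study, and, under the surrogacy assumption, the resulting attenuation factor rescales the main-study estimate. All scores and coefficients are invented for illustration.

```python
def attenuation_factor(error_prone_ps, gold_ps):
    """Slope from regressing the gold-standard PS on the error-prone PS
    in the validation study:
    lambda = cov(gold, error-prone) / var(error-prone)."""
    n = len(error_prone_ps)
    mean_ep = sum(error_prone_ps) / n
    mean_gold = sum(gold_ps) / n
    cov = sum((x - mean_ep) * (y - mean_gold)
              for x, y in zip(error_prone_ps, gold_ps)) / n
    var = sum((x - mean_ep) ** 2 for x in error_prone_ps) / n
    return cov / var

# Hypothetical validation-study scores where the error-prone PS is an
# attenuated version of the gold standard: gold = 0.05 + 0.5 * error-prone.
ep = [0.10, 0.20, 0.30, 0.40]
gold = [0.05 + 0.5 * x for x in ep]
lam = attenuation_factor(ep, gold)   # 0.5 in this toy example
beta_naive = 0.40                    # hypothetical log-RR adjusted for the
                                     # error-prone PS in the main study
beta_corrected = beta_naive / lam    # scalar regression-calibration correction
```

The surrogacy caveat in the abstract matters here: if the error-prone score carries outcome information beyond the gold-standard score, this rescaling can move the estimate in the wrong direction.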

    Analytic strategies to adjust confounding using exposure propensity scores and disease risk scores: Nonsteroidal antiinflammatory drugs and short-term mortality in the elderly

    Little is known about optimal application and behavior of exposure propensity scores (EPS) in small studies. In a cohort of 103,133 elderly Medicaid beneficiaries in New Jersey, the effect of nonsteroidal antiinflammatory drug use on 1-year all-cause mortality was assessed (1995-1997) based on the assumption that there is no protective effect and that the preponderance of any observed effect would be confounded. To study the comparative behavior of EPS, disease risk scores, and "conventional" disease models, the authors randomly resampled 1,000 subcohorts of 10,000, 1,000, and 500 persons. The number of variables was limited in disease models, but not EPS and disease risk scores. Estimated EPS were used to adjust for confounding by matching, inverse probability of treatment weighting, stratification, and modeling. The crude rate ratio of death was 0.68 for users of nonsteroidal antiinflammatory drugs. "Conventional" adjustment resulted in a rate ratio of 0.80 (95% confidence interval: 0.77, 0.84). The rate ratio closest to 1 (0.85) was achieved by inverse probability of treatment weighting (95% confidence interval: 0.82, 0.88). With decreasing study size, estimates remained further from the null value, which was most pronounced for inverse probability of treatment weighting (n = 500: rate ratio = 0.72, 95% confidence interval: 0.26, 1.68). In this setting, analytic strategies using EPS or disease risk scores were not generally superior to "conventional" models. Various ways to use EPS and disease risk scores behaved differently with smaller study size.

    A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods

    Objective: Propensity score (PS) analyses attempt to control for confounding in nonexperimental studies by adjusting for the likelihood that a given patient is exposed. Such analyses have been proposed to address confounding by indication, but there is little empirical evidence that they achieve better control than conventional multivariate outcome modeling. Study Design and Methods: Using PubMed and Science Citation Index, we assessed the use of propensity scores over time and critically evaluated studies published through 2003. Results: Use of propensity scores increased from a total of 8 reports before 1998 to 71 in 2003. Most of the 177 published studies abstracted assessed medications (N = 60) or surgical interventions (N = 51), mainly in cardiology and cardiac surgery (N = 90). Whether PS methods or conventional outcome models were used to control for confounding had little effect on results in those studies in which such comparison was possible. Only 9 of 69 studies (13%) had an effect estimate that differed by more than 20% from that obtained with a conventional outcome model in all PS analyses presented. Conclusions: Publication of results based on propensity score methods has increased dramatically, but there is little evidence that these methods yield substantially different estimates compared with conventional multivariable methods.

    Pragmatic considerations for negative control outcome studies to guide non-randomized comparative analyses: A narrative review

    Purpose: This narrative review describes the application of negative control outcome (NCO) methods to assess potential bias due to unmeasured or mismeasured confounders in non-randomized comparisons of drug effectiveness and safety. An NCO is assumed to have no causal relationship with a treatment under study while subject to the same confounding structure as the treatment and outcome of interest; an association between treatment and NCO then reflects the potential for uncontrolled confounding between treatment and outcome. Methods: We focus on two recently completed NCO studies that assessed the comparability of outcome risk for patients initiating different osteoporosis medications and lipid-lowering therapies, illustrating several ways in which confounding may result. In these studies, NCO methods were implemented in claims-based data sources, with the results used to guide the decision to proceed with comparative effectiveness or safety analyses. Results: Based on this research, we provide recommendations for future NCO studies, including considerations for the identification of confounding mechanisms in the target patient population, the selection of NCOs expected to satisfy required assumptions, the interpretation of NCO effect estimates, and the mitigation of uncontrolled confounding detected in NCO analyses. We propose the use of NCO studies prior to initiating comparative effectiveness or safety research, providing information on the potential presence of uncontrolled confounding in those comparative analyses. Conclusions: Given the increasing use of non-randomized designs for regulatory decision-making, the application of NCO methods will strengthen study design, analysis, and interpretation of real-world data and the credibility of the resulting real-world evidence.
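A minimal version of the check this review describes is easy to sketch. The function below is an illustration, not the authors' protocol: it computes the treatment-NCO risk ratio from a 2x2 table with a Wald confidence interval on the log scale, and flags estimates far from the null as a possible signal of uncontrolled confounding. The flagging threshold and the counts are invented for the example.

```python
import math

def nco_check(a, b, c, d, tol=1.25):
    """Treatment vs negative control outcome (NCO), 2x2 table:
    a = treated with NCO,   b = treated without NCO,
    c = untreated with NCO, d = untreated without NCO.
    Because the NCO is assumed to have no causal relationship with
    treatment, a risk ratio far from 1 suggests uncontrolled confounding."""
    rr = (a / (a + b)) / (c / (c + d))
    # Wald standard error of log(RR) for a cohort-style 2x2 table
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    ci = (rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se))
    flagged = rr > tol or rr < 1 / tol   # invented cutoff for illustration
    return rr, ci, flagged

# Hypothetical counts: the NCO is 1.5 times as frequent among the treated,
# which should prompt scrutiny before the comparative analysis proceeds.
rr, ci, flagged = nco_check(a=30, b=970, c=20, d=980)
```

In practice a suite of NCOs is examined rather than a single table, and a flagged result leads to revisiting the confounder set or the design rather than to a mechanical stop.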

    Atrial fibrillation and comorbidities: clinical characteristics and antithrombotic treatment in GLORIA-AF

    Background: Patients with AF often have multimorbidity (the presence of ≥2 concomitant chronic conditions). Objective: To describe baseline characteristics, patterns of antithrombotic therapy, and factors associated with oral anticoagulant (OAC) prescription in patients with AF and ≥2 concomitant, chronic, comorbid conditions. Methods: Phase III of the GLORIA-AF Registry enrolled consecutive patients from January 2014 through December 2016 with recently diagnosed AF and a CHA₂DS₂-VASc score ≥1 to assess the safety and effectiveness of antithrombotic treatment. Results: Of 21,241 eligible patients, 15,119 (71.2%) had ≥2 concomitant, chronic, comorbid conditions. The proportions of patients with multimorbidity receiving non-vitamin K antagonist oral anticoagulants (NOACs) and vitamin K antagonists (VKAs) were 60.2% and 23.6%, respectively. The proportion with paroxysmal AF was 57.0% in the NOAC group and 45.4% in the VKA group. Multivariable log-binomial regression analysis found the following factors to be associated with no OAC prescription: pattern of AF (paroxysmal, persistent, or permanent), coronary artery disease, myocardial infarction, prior bleeding, smoking status, and region (Asia, North America, or Europe). Factors associated with OAC prescription were age, body mass index, renal function, hypertension, history of cerebral ischemic symptoms, and AF ablation. Conclusion: Multimorbid AF patients prescribed NOACs have fewer comorbidities than those prescribed VKAs. Age, AF pattern, comorbidities, and renal function are associated with OAC prescription. (Thrombosis and Haemostasis)
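The log-binomial regression used in this study reports prevalence ratios directly because the link function is the log. As a hedged illustration of why (not a re-analysis of GLORIA-AF data): with a single binary covariate the model is saturated, its maximum-likelihood estimate has a closed form, and the exponentiated coefficient is simply the ratio of the two observed proportions. The counts below are invented.

```python
import math

def log_binomial_ratio(events_x1, n_x1, events_x0, n_x0):
    """Saturated log-binomial model log P(Y=1|X) = b0 + b1*X with binary X.
    MLE: exp(b0) = p0 and exp(b1) = p1/p0, the prevalence ratio,
    which is why a log link (unlike the logit) yields ratios directly."""
    p1 = events_x1 / n_x1
    p0 = events_x0 / n_x0
    b1 = math.log(p1 / p0)   # fitted coefficient for the binary covariate
    return math.exp(b1)      # prevalence ratio

# Hypothetical data: OAC prescribed in 120/1000 patients with a factor
# present vs 80/1000 with it absent.
pr = log_binomial_ratio(120, 1000, 80, 1000)
```

Multivariable versions, as in the abstract, fit the same model over all covariates by iterative maximum likelihood; non-convergence of the log link when fitted probabilities approach 1 is a known practical issue with this model family.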

    Changes in anticoagulant prescription patterns over time for patients with atrial fibrillation around the world

    Background: Prescribing patterns for stroke prevention in atrial fibrillation (AF) patients evolved with the approval of non-vitamin K antagonist oral anticoagulants (NOACs) over time. Objectives: To assess changes in anticoagulant prescription patterns in various geographical regions upon first approval of a NOAC and to analyze the evolution of oral anticoagulant (OAC) use over time in relation to CHA₂DS₂-VASc and HAS-BLED risk profiles. Methods: The Global Registry on Long-Term Oral Antithrombotic Treatment in Patients with Atrial Fibrillation (GLORIA-AF) Phases II and III reported data on antithrombotic therapy for patients with newly diagnosed AF and ≥1 stroke risk factor. We focused on sites enrolling patients in both phases and reported treatment patterns for the first 4 years after initial NOAC approval. Results: From GLORIA-AF Phases II and III, 27,432 patients were eligible for this analysis. When contrasting the first year with the fourth year of enrolment, the proportion of NOAC prescriptions increased in Asia from 29.2% to 60.8%, in Europe from 53.4% to 75.8%, in North America from 49.0% to 73.9%, and in Latin America from 55.7% to 71.1%. The proportion of vitamin K antagonist (VKA) use decreased across all regions over time: in Asia from 26.0% to 9.8%, in Europe from 35.5% to 16.8%, in North America from 28.9% to 12.1%, and in Latin America from 32.4% to 17.8%. In the multivariable analysis, factors associated with NOAC prescription were enrolment year, type of site, region, stroke and bleeding risk scores, and type and categorization of AF. Conclusions: During the 4 years after approval of the first NOAC, NOAC use increased, while VKA use decreased, across all regions. (Thrombosis and Haemostasis)