
    A case-control study of the effect of infant feeding on celiac disease

    Aims: The aim of this study was to investigate the association between the duration of breast-feeding, the age at first gluten introduction into the infant diet, and the incidence and age at onset of celiac disease. Methods: In a case-control study, 143 children with celiac disease and 137 randomly recruited gender- and age-matched control children were administered a standardized questionnaire. Multivariate-adjusted odds ratios (OR) as estimates of the relative risk and corresponding 95% confidence intervals (95% CI) were calculated. Results: The risk of developing celiac disease decreased significantly, by 63%, for children breast-fed for more than 2 months (OR 0.37, 95% CI 0.21-0.64) compared with children breast-fed for 2 months or less. The age at first gluten introduction had no significant influence on the incidence of celiac disease (OR 0.72, 95% CI 0.29-1.79 for first gluten introduction at >3 months vs. ≤3 months). Conclusions: The duration of breast-feeding (partial as well as exclusive) showed a significant protective effect on the incidence of celiac disease. The data did not support an influence of the age at first dietary gluten exposure on the incidence of celiac disease. However, the age at first gluten exposure appeared to affect the age at onset of symptoms.
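    As a minimal sketch of the effect measure used here, the snippet below computes a crude odds ratio with a Woolf 95% confidence interval from a 2x2 table. The counts are hypothetical placeholders, and the study itself reports multivariate-adjusted ORs, so this illustrates only the unadjusted calculation.

```python
# Crude odds ratio and Woolf 95% CI from a 2x2 case-control table.
# Counts below are hypothetical, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = exposed/unexposed cases; c/d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical: 40 of 143 cases vs. 75 of 137 controls breast-fed >2 months
or_, (lo, hi) = odds_ratio_ci(40, 103, 75, 62)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```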

    Propensity score methodology for confounding control in health care utilization databases

    Propensity score (PS) methodology is a common approach to control for confounding in nonexperimental studies of treatment effects using health care utilization databases. This methodology offers researchers many advantages compared with conventional multivariate models: it focuses directly on the determinants of treatment choice, helping the researcher understand the clinical decision-making process; it allows graphical comparison of the distribution of propensity scores and trimming of subjects without overlapping PS, which indicates a lack of equipoise; it allows transparent assessment of the confounder balance achieved by the PS at baseline; and it offers a straightforward way to reduce the dimensionality of the sometimes large arrays of potential confounders in utilization databases, directly addressing the “curse of dimensionality” in the context of rare events. This article provides an overview of the use of propensity score methodology for pharmacoepidemiologic research with large health care utilization databases, covering recent discussions on covariate selection, the role of automated techniques for addressing unmeasured confounding via proxies, strategies to maximize clinical equipoise at baseline, and the potential of machine-learning algorithms for optimized propensity score estimation. The appendix discusses the available software packages for PS methodology. Propensity scores are a frequently used and versatile tool for transparent and comprehensive adjustment of confounding in pharmacoepidemiology with large health care databases.
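    The sketch below illustrates the core PS workflow described above on simulated data: fit a treatment model, trim subjects outside the region of common support, and check covariate balance with standardized mean differences. All names and data are placeholders, not a prescribed implementation.

```python
# Minimal PS workflow sketch: estimate, trim to common support, check balance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                         # baseline confounders
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # toy treatment model

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Trim to the region of common support (no clinical equipoise outside it)
lo, hi = ps[treated == 1].min(), ps[treated == 0].max()
keep = (ps >= lo) & (ps <= hi)

# Standardized mean difference as a simple baseline balance diagnostic
def smd(x, t):
    m1, m0 = x[t == 1].mean(), x[t == 0].mean()
    s = np.sqrt((x[t == 1].var() + x[t == 0].var()) / 2)
    return (m1 - m0) / s

print([round(smd(X[keep, j], treated[keep]), 3) for j in range(3)])
```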

    Quantifying the Role of Adverse Events in the Mortality Difference between First- and Second-Generation Antipsychotics in Older Adults: Systematic Review and Meta-Synthesis

    Background: Observational studies have reported higher mortality among older adults treated with first-generation antipsychotics (FGAs) than with second-generation antipsychotics (SGAs). A few studies have examined the risk of medical events, including stroke, ventricular arrhythmia, venous thromboembolism, myocardial infarction, pneumonia, and hip fracture. Objectives: (1) review robust epidemiologic evidence comparing mortality and medical event risk between FGAs and SGAs in older adults; (2) quantify how much these medical events explain the observed mortality difference between FGAs and SGAs. Data sources: PubMed and Science Citation Index. Study eligibility criteria, participants, and interventions: Studies of antipsychotic users that (1) evaluated mortality or the medical events specified above; (2) were restricted to populations with a mean age of 65 years or older; (3) compared FGAs to SGAs, or both to a non-user group; (4) employed a “new user” design; (5) adjusted for confounders assessed prior to antipsychotic initiation; and (6) did not require survival after antipsychotic initiation. A separate search was performed for mortality estimates associated with the specified medical events. Study appraisal and synthesis methods: For each medical event, we used a non-parametric model to estimate lower and upper bounds for the proportion of the mortality difference, comparing FGAs to SGAs, mediated by the difference in risk for the medical event. Results: We provide a brief, updated summary of the included studies and the biological plausibility of these mechanisms. Of the 1122 unique citations retrieved, we reviewed 20 observational cohort studies that reported 28 associations. We identified hip fracture, stroke, myocardial infarction, and ventricular arrhythmia as potential intermediaries on the causal pathway from antipsychotic type to death. However, these events did not appear to explain the entire mortality difference. Conclusions: The current literature suggests that hip fracture, stroke, myocardial infarction, and ventricular arrhythmia partially explain the mortality difference between FGAs and SGAs.
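    As a hedged back-of-envelope illustration of the mediation question (not the paper's non-parametric bounding model), the share of the mortality difference explained by one medical event can be approximated as the excess event risk times the event's case fatality, divided by the total mortality difference. All inputs below are hypothetical.

```python
# Rough mediated-proportion arithmetic; placeholder inputs, not the paper's estimates.
def proportion_mediated(rd_event, fatality_given_event, rd_mortality):
    """Excess deaths occurring via the event, as a share of the total mortality difference.

    rd_event: risk difference in event incidence (FGA minus SGA)
    fatality_given_event: probability of death given the event
    rd_mortality: total mortality risk difference (FGA minus SGA)
    """
    return (rd_event * fatality_given_event) / rd_mortality

# Hypothetical: +1% hip-fracture risk, 20% case fatality, +2% mortality difference
print(f"{proportion_mediated(0.01, 0.20, 0.02):.0%}")   # -> 10%
```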

    Collective excitation and decay of waveguide-coupled atoms: from timed Dicke states to inverted ensembles

    The collective absorption and emission of light by an ensemble of atoms is at the heart of many fundamental quantum optical effects and the basis for numerous applications. However, beyond weak excitation, both experiment and theory become increasingly challenging. Here, we explore the regimes from weak excitation to inversion with ensembles of up to one thousand atoms that are trapped and optically interfaced using the evanescent field surrounding an optical nanofiber. We realize strong inversion, with about 80% of the atoms excited, and study their subsequent radiative decay into the guided modes. The data are very well described by a simple model that assumes a cascaded interaction of the guided light with the atoms. Our results contribute to the fundamental understanding of the collective interaction of light and matter and are relevant for applications ranging from quantum memories to sources of nonclassical light and optical frequency standards.

    Dimension reduction and shrinkage methods for high dimensional disease risk scores in historical data

    Background: Multivariable confounder adjustment in comparative studies of newly marketed drugs can be limited by small numbers of exposed patients and even fewer outcomes. Disease risk scores (DRSs) developed in historical comparator drug users before the new drug entered the market may improve adjustment. However, in a high-dimensional data setting, empirical selection of hundreds of potential confounders and modeling of the DRS even in the historical cohort can lead to over-fitting and reduced predictive performance in the study cohort. We propose combinations of dimension reduction and shrinkage methods to overcome this problem, and compared the performance of these modeling strategies for implementing high-dimensional (hd) DRSs from historical data in two empirical studies of newly marketed drugs versus comparator drugs after the new drugs' market entry: dabigatran versus warfarin for the outcome of major hemorrhagic events, and cyclooxygenase-2 inhibitors (coxibs) versus nonselective non-steroidal anti-inflammatory drugs (nsNSAIDs) for gastrointestinal bleeds. Results: Historical hdDRSs that included predefined and empirical outcome predictors with dimension reduction (principal component analysis; PCA) and shrinkage (lasso and ridge regression) approaches had higher c-statistics (0.66 for the PCA model, 0.64 for the PCA + ridge model, and 0.65 for the PCA + lasso model in the warfarin users) than an unreduced model (c-statistic, 0.54) in the dabigatran example. The odds ratio (OR) from PCA + lasso hdDRS stratification (OR 0.64; 95% confidence interval (CI) 0.46-0.90) was closer to the benchmark estimate (0.93) from a randomized trial than that from the model without empirical predictors (OR 0.58; 95% CI 0.41-0.81). In the coxibs example, c-statistics of the hdDRSs in the nsNSAID initiators were 0.66 for the PCA model, 0.67 for the PCA + ridge model, and 0.67 for the PCA + lasso model; these were higher than for the unreduced model (c-statistic, 0.45) and comparable to the demographics + risk score model (c-statistic, 0.67). Conclusions: hdDRSs using historical data with dimension reduction and shrinkage were feasible and improved confounding adjustment in two studies of newly marketed medications.
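    A minimal sketch of the modeling strategy described above, assuming scikit-learn and simulated placeholder data: fit the DRS in a historical comparator cohort with PCA for dimension reduction and a lasso-penalized logistic model for shrinkage, then score and stratify the study cohort. Cohort sizes, component counts, and penalties are illustrative assumptions.

```python
# hdDRS sketch: PCA (dimension reduction) + lasso logistic regression (shrinkage),
# fit in a historical cohort and applied to the study cohort.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X_hist = rng.normal(size=(20000, 500))   # empirical covariates, historical cohort
y_hist = rng.binomial(1, 0.05, 20000)    # outcome among historical comparator users
X_study = rng.normal(size=(5000, 500))   # new-drug-era study cohort

hd_drs = make_pipeline(
    PCA(n_components=100),                                              # dimension reduction
    LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000),  # shrinkage
)
hd_drs.fit(X_hist, y_hist)

# Predicted outcome risk in the study cohort, used for DRS stratification
drs = hd_drs.predict_proba(X_study)[:, 1]
strata = np.digitize(drs, np.quantile(drs, [0.2, 0.4, 0.6, 0.8]))
```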

    Costs of Measuring Outcomes of Acute Hospital Care in a Longitudinal Outcomes Measurement System

    It is widely acknowledged that the measurement of outcomes of care, the comparison of outcomes over time within health care providers, and risk-adjusted comparisons among providers are important parts of improving the quality and cost-effectiveness of care. However, few studies have assessed the costs of measuring outcomes of care. We sought to evaluate the personnel and financial resources spent on a prospective assessment of outcomes of acute hospital care by health professionals in internal medicine. The study included 15 primary care hospitals participating in a longitudinal outcomes measurement program and 2005 patients over an assessment period with an average duration of 6 months. Each hospital project manager participated in a previously tested, structured 30-minute telephone interview. Outcome measures included the time spent by the individual job titles in implementing and running the outcomes measurement program. Job-title-specific times were used to calculate costs from the hospitals' perspective. One-time costs (€2132 ± 1352) and administrative costs (€95 ± 97 per week) varied substantially. Costs per patient were fairly stable at around €20. We estimated that the total cost for each hospital to assess outcomes of care for accreditation (10 tracer diagnoses over 6 months) would be €9700 and that continuous monitoring of outcomes (5 tracer diagnoses) would cost €12,400 per year. This study suggests that outcomes of acute hospital care can be assessed with limited resources and that standardized training programs would reduce variability in overall costs. This study should help hospital decision makers estimate the necessary funding for outcomes measurement initiatives.
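    The reported cost structure decomposes into one-time, weekly administrative, and per-patient components. The sketch below recombines the abstract's point figures under assumed values for assessment length and patient volume; those two inputs are hypothetical, so the total will not reproduce the study's estimates exactly.

```python
# Simple cost-model arithmetic: setup + weekly administration + per-patient costs.
# The 2132/95/20 figures come from the abstract; weeks and patient count are assumed.
def program_cost(one_time, weekly_admin, per_patient, weeks, n_patients):
    return one_time + weekly_admin * weeks + per_patient * n_patients

# e.g., a 6-month (26-week) assessment of 350 patients per hospital
total = program_cost(one_time=2132, weekly_admin=95, per_patient=20,
                     weeks=26, n_patients=350)
print(f"~EUR {total:,.0f}")   # -> ~EUR 11,602
```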

    Instrumental variable methods in comparative safety and effectiveness research

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, review instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be useful in studies of intended effects where uncontrolled confounding may be substantial.
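    A minimal two-stage least squares (2SLS) sketch of the IV logic on simulated data, assuming a binary instrument and an unmeasured confounder; variable names and the data-generating process are illustrative only. In real comparative effectiveness work the instrument might be, for example, physician prescribing preference.

```python
# 2SLS sketch: stage 1 projects treatment onto the instrument; stage 2 regresses
# the outcome on the predicted treatment, bypassing the unmeasured confounder.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
z = rng.binomial(1, 0.5, n)                                     # instrument
u = rng.normal(size=n)                                          # unmeasured confounder
x = (0.3 * z + 0.5 * u + rng.normal(size=n) > 0).astype(float)  # treatment
y = 1.0 * x + 1.0 * u + rng.normal(size=n)                      # true effect = 1

# Stage 1: fitted treatment from the instrument
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: outcome on predicted treatment
X2 = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
print(f"2SLS estimate: {beta[1]:.2f}")   # close to 1 despite confounding by u
```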