
    Half-life of cost-of-illness estimates: the case of Spina Bifida

    Neural tube defects, which include spina bifida, are one of the most frequent and important categories of birth defects. Accordingly, there has been considerable interest in studying the impact of spina bifida as a public health problem. This impact can be measured in various ways, including disease-specific mortality, morbidity, functional limitation or disability, and quality-of-life impairment. Each of these measures captures one component of the total burden of disease. Such measures of impact are important because they allow public health agencies, researchers, and health care providers to understand the effects of preventive or diagnostic interventions, changes in disease incidence or prevalence, and new technologies.

    Analytic distribution of the optimal cross-correlation statistic for stochastic gravitational-wave-background searches using pulsar timing arrays

    We show via both analytical calculation and numerical simulation that the optimal cross-correlation statistic (OS) for stochastic gravitational-wave-background (GWB) searches using data from pulsar timing arrays follows a generalized chi-squared (GX2) distribution, i.e., a linear combination of chi-squared distributions with coefficients given by the eigenvalues of the quadratic form defining the statistic. This observation is particularly important for calculating the frequentist statistical significance of a possible GWB detection, which depends on the exact form of the distribution of the OS signal-to-noise ratio $\hat\rho \equiv \hat A_{\rm gw}^2/\sigma_0$ in the absence of GW-induced cross correlations (i.e., the null distribution). Previous discussions of the OS have incorrectly assumed that the analytic null distribution of $\hat\rho$ is well approximated by a zero-mean, unit-variance Gaussian distribution. Empirical calculations show that the null distribution of $\hat\rho$ has "tails" that differ significantly from those of a Gaussian distribution but follow (exactly) a GX2 distribution. A correct analytical assessment of the statistical significance of a potential detection therefore requires the use of a GX2 distribution. Comment: 13 pages, 3 figures
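The key property the abstract relies on can be illustrated with a small simulation. The sketch below uses an arbitrary 2x2 symmetric matrix Q (a hypothetical stand-in, not the actual pulsar-timing statistic): sampling the quadratic form x^T Q x with Gaussian x, and sampling the linear combination of independent chi-squared variables weighted by the eigenvalues of Q, yield the same GX2 distribution.

```python
import math
import random

random.seed(0)

# Hypothetical 2x2 symmetric matrix Q = [[a, b], [b, c]] defining the
# quadratic form s = x^T Q x (illustrative only, not the PTA statistic).
a, b, c = 1.5, 0.8, 0.5

# Closed-form eigenvalues of Q: the null distribution of s is
# lam1 * chi2_1 + lam2 * chi2_1, i.e. a generalized chi-squared (GX2).
center, half = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
lam1, lam2 = center + half, center - half

N = 200_000
s_direct, s_gx2 = [], []
for _ in range(N):
    # Draw from the quadratic form directly...
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    s_direct.append(a * x * x + 2 * b * x * y + c * y * y)
    # ...and from the eigenvalue-weighted sum of chi2_1 variables.
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    s_gx2.append(lam1 * z1 * z1 + lam2 * z2 * z2)

# Both samples share mean tr(Q) = a + c (and variance 2 * tr(Q^2)),
# but the distribution is skewed, unlike a Gaussian with matched moments.
print(sum(s_direct) / N, sum(s_gx2) / N, a + c)
```

Because the weights lam1, lam2 are generally unequal, the resulting distribution has heavier tails than a moment-matched Gaussian, which is exactly why the Gaussian approximation understates the significance thresholds.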

    How many diagnosis fields are needed to capture safety events in administrative data? Findings and recommendations from the WHO ICD-11 Topic Advisory Group on Quality and Safety

    Objective As part of the WHO ICD-11 development initiative, the Topic Advisory Group on Quality and Safety explores meta-features of morbidity data sets, such as the optimal number of secondary diagnosis fields. Design The Health Care Quality Indicators Project of the Organization for Economic Co-operation and Development collected Patient Safety Indicator (PSI) information from administrative hospital data of 19-20 countries in 2009 and 2011. We investigated whether three countries that expanded their data systems to include more secondary diagnosis fields showed increased PSI rates compared with six countries that did not. Furthermore, administrative hospital data from six of these countries and two American states, California (2011) and Florida (2010), were analysed for distributions of coded patient safety events across diagnosis fields. Results Among the participating countries, increasing the number of diagnosis fields was not associated with any overall increase in PSI rates. However, high proportions of PSI-related diagnoses appeared beyond the sixth secondary diagnosis field. The distribution of three PSI-related ICD codes was similar in California and Florida: 89-90% of central venous catheter infections and 97-99% of retained foreign bodies and accidental punctures or lacerations were captured within 15 secondary diagnosis fields. Conclusions Six to nine secondary diagnosis fields are inadequate for comparing complication rates using hospital administrative data; at least 15 (and perhaps more with ICD-11) are recommended to fully characterize clinical outcomes. Increasing the number of fields should improve the international and intra-national comparability of data for epidemiologic and health services research, utilization analyses and quality of care assessment.
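The cumulative-capture analysis behind these conclusions can be sketched as follows. The per-field counts below are illustrative placeholders, not the study's data; the point is the mechanics of computing the percentage of safety events captured within the first k secondary diagnosis fields.

```python
# Hypothetical counts of PSI-related diagnosis codes by secondary
# diagnosis field position (illustrative numbers, not the study's data).
counts_by_field = [310, 120, 75, 50, 38, 30, 24, 20, 16, 13,
                   11, 9, 8, 6, 5, 4, 3, 2, 2, 1]

total = sum(counts_by_field)

# Cumulative percentage of events captured within the first k fields.
cum_pct = []
running = 0
for n in counts_by_field:
    running += n
    cum_pct.append(100.0 * running / total)

# Compare capture at the field limits discussed in the abstract.
for k in (6, 9, 15):
    print(f"within {k:2d} secondary fields: {cum_pct[k - 1]:5.1f}% captured")
```

With a long-tailed distribution like this one, truncating at six or nine fields leaves a noticeable fraction of events uncoded, which is the abstract's argument for allowing at least 15.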

    Evidence of Two Functionally Distinct Ornithine Decarboxylation Systems in Lactic Acid Bacteria

    Biogenic amines are low-molecular-weight organic bases whose presence in food can result in health problems. The biosynthesis of biogenic amines in fermented foods mostly proceeds through amino acid decarboxylation carried out by lactic acid bacteria (LAB), but not all systems leading to biogenic amine production by LAB have been thoroughly characterized. Here, putative ornithine decarboxylation pathways consisting of a putative ornithine decarboxylase and an amino acid transporter were identified in LAB by strain collection screening and database searches. The decarboxylases were produced in heterologous hosts and purified and characterized in vitro, whereas transporters were heterologously expressed in Lactococcus lactis and functionally characterized in vivo. Amino acid decarboxylation by whole cells of the original hosts was determined as well. We concluded that two distinct types of ornithine decarboxylation systems exist in LAB. One is composed of an ornithine decarboxylase coupled to an ornithine/putrescine transmembrane exchanger. Their combined activities result in the extracellular release of putrescine. This typical amino acid decarboxylation system is present in only a few LAB strains and may contribute to metabolic energy production and/or pH homeostasis. The second system is widespread among LAB. It is composed of a decarboxylase active on ornithine and L-2,4-diaminobutyric acid (DABA) and a transporter that mediates unidirectional transport of ornithine into the cytoplasm. Diamines that result from this second system are retained within the cytosol.

    Application of patient safety indicators internationally: a pilot study among seven countries

    Objective To explore the potential for international comparison of patient safety as part of the Health Care Quality Indicators project of the Organization for Economic Co-operation and Development (OECD) by evaluating patient safety indicators originally published by the US Agency for Healthcare Research and Quality (AHRQ). Design A retrospective cross-sectional study. Setting Acute care hospitals in the USA, UK, Sweden, Spain, Germany, Canada and Australia in 2004 and 2005/2006. Data sources Routine hospitalization-related administrative data from seven countries were analyzed. Using algorithms adapted to the diagnosis and procedure coding systems in place in each country, authorities in each of the participating countries reported summaries of the distribution of hospital-level and overall (national) rates for each AHRQ Patient Safety Indicator to the OECD project secretariat. Results Each country's vector of national indicator rates and the vector of American patient safety indicator rates published by AHRQ (and re-estimated as part of this study) were highly correlated (0.821-0.966). However, there was substantial systematic variation in rates across countries. Conclusions This pilot study reveals that AHRQ Patient Safety Indicators can be applied to international hospital data. However, the analyses suggest that certain indicators (e.g. 'birth trauma', 'complications of anesthesia') may be too unreliable for international comparisons. Data quality varies across countries; undercoding may be a systematic problem in some countries. Efforts at international harmonization of hospital discharge data sets as well as improved accuracy of documentation should facilitate future comparative analyses of routine databases.
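The headline result (correlations of 0.821-0.966 between national rate vectors) is a Pearson correlation across indicators. A minimal sketch of that computation, using made-up indicator rates rather than the OECD data, could look like this:

```python
import math

# Hypothetical national PSI rates (per 1,000 discharges) for six
# indicators in two countries -- illustrative values, not OECD data.
rates_a = [0.42, 1.10, 0.07, 2.30, 0.55, 0.90]
rates_b = [0.51, 1.35, 0.05, 2.80, 0.60, 1.10]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sxx = sum((u - mx) ** 2 for u in x)
    syy = sum((v - my) ** 2 for v in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson(rates_a, rates_b)
print(f"correlation of indicator rate vectors: r = {r:.3f}")
```

Note that a high correlation across indicators is compatible with the abstract's second finding: two countries can rank indicators almost identically while one systematically reports higher absolute rates.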

    Teaching Hospital Five-Year Mortality Trends in the Wake of Duty Hour Reforms

    Background The Accreditation Council for Graduate Medical Education (ACGME) implemented duty hour regulations for residents in 2003 and again in 2011. While previous studies showed no systematic impacts in the first 2 years post-reform, the impact on mortality in subsequent years has not been examined. Objective To determine whether duty hour regulations were associated with changes in mortality among Medicare patients in hospitals of different teaching intensity after the first 2 years post-reform. Design Observational study using interrupted time series analysis with data from July 1, 2000 to June 30, 2008. Logistic regression was used to examine the change in mortality for patients in more versus less teaching-intensive hospitals before (2000–2003) and after (2003–2008) duty hour reform, adjusting for patient comorbidities, time trends, and hospital site. Patients Medicare patients (n = 13,678,956) admitted to short-term acute care non-federal hospitals with principal diagnoses of acute myocardial infarction (AMI), gastrointestinal bleeding, or congestive heart failure (CHF); or a diagnosis-related group (DRG) classification of general, orthopedic, or vascular surgery. Main Measure All-location mortality within 30 days of hospital admission. Key Results In medical and surgical patients, there were no consistent changes in the odds of mortality at more vs. less teaching-intensive hospitals in post-reform years 1–3. However, there were significant relative improvements in mortality for medical patients in the fourth and fifth years post-reform: Post4 (OR 0.88, 95% CI [0.93–0.94]); Post5 (OR 0.87, [0.82–0.92]); and for surgical patients in the fifth year post-reform: Post5 (OR 0.91, [0.85–0.96]). Conclusions Duty hour reform was associated with no significant change in mortality in the early years after implementation, and with a trend toward improved mortality among medical patients in the fourth and fifth years. It is unclear whether improvements in outcomes long after implementation can be attributed to the reform, but concerns about worsening outcomes seem unfounded.
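The study's effect sizes are reported as odds ratios with 95% confidence intervals from logistic regression. The basic arithmetic of an unadjusted odds ratio with a Woolf-type (log-OR) confidence interval can be sketched from a hypothetical 2x2 table; the counts below are illustrative, not the study's data, and the full analysis additionally adjusts for comorbidities, time trends, and hospital site.

```python
import math

# Hypothetical 30-day mortality counts (illustrative, not study data):
# deaths / survivors at more vs. less teaching-intensive hospitals.
deaths_more, alive_more = 900, 9100
deaths_less, alive_less = 1000, 9000

# Unadjusted odds ratio from the 2x2 table.
or_ = (deaths_more * alive_less) / (deaths_less * alive_more)

# Woolf method: the standard error of log(OR) is the square root of
# the sum of reciprocal cell counts.
se = math.sqrt(1 / deaths_more + 1 / alive_more
               + 1 / deaths_less + 1 / alive_less)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)

print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

An OR below 1 with a confidence interval excluding 1 would indicate significantly lower mortality odds in the more teaching-intensive group; an interval straddling 1, as in this sketch, would not.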