
    Systematic review of emergency medicine clinical practice guidelines: Implications for research and policy

    Introduction: Over 25 years, emergency medicine in the United States has amassed a large evidence base that has been systematically assessed and interpreted through ACEP Clinical Policies. While not previously studied in emergency medicine, prior work has shown that nearly half of all recommendations in medical specialty practice guidelines may be based on limited or inconclusive evidence. We sought to describe the proportion of clinical practice guideline recommendations in emergency medicine that are based on expert opinion and low-level evidence.

    Methods: Systematic review of clinical practice guidelines (Clinical Policies) published by the American College of Emergency Physicians from January 1990 to January 2016. Standardized data were abstracted from each Clinical Policy, including the number and level of recommendations as well as the reported class of evidence. Primary outcomes were the proportions of Level C equivalent recommendations and Class III equivalent evidence. The primary analysis was limited to current Clinical Policies; the secondary analysis included all Clinical Policies.

    Results: A total of 54 Clinical Policies were included, comprising 421 recommendations and 2,801 cited references, an average of 7.8 recommendations and 52 references per guideline. The 19 current Clinical Policies contained 141 recommendations, of which 13 (9.2%) were Level A, 57 (40.4%) Level B, and 71 (50.4%) Level C. Of 845 references in current Clinical Policies, 67 (7.9%) were Class I, 272 (32.3%) Class II, and 506 (59.9%) Class III equivalent. Among all Clinical Policies, 200 (47.5%) recommendations were Level C equivalent, and 1,371 (48.9%) of references were Class III equivalent.

    Conclusions: Emergency medicine clinical practice guidelines are largely based on lower classes of evidence, and a majority of recommendations are based on expert opinion. Emergency medicine appears to suffer from an evidence gap that should be prioritized in the national research agenda and considered by policymakers before future quality standards are developed.
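    For illustration only, the tabulation behind the primary outcome can be sketched as below; the table layout and column names are hypothetical stand-ins, not the authors' abstraction form.

```python
# Hypothetical sketch of the tabulation step; the "policy" and "level" columns
# are illustrative placeholders, not the study's actual abstraction fields.
import pandas as pd

# One row per abstracted recommendation.
recs = pd.DataFrame({
    "policy": ["Seizures", "Seizures", "Headache", "Syncope"],
    "level":  ["A", "C", "B", "C"],
})

# Share of recommendations at each level (primary outcome: proportion of Level C).
counts = recs["level"].value_counts()
props = recs["level"].value_counts(normalize=True).round(3)
print(pd.DataFrame({"n": counts, "proportion": props}))

# Average number of recommendations per Clinical Policy.
print(recs.groupby("policy").size().mean())
```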

    Discovery of temporal and disease association patterns in condition-specific hospital utilization rates

    Identifying temporal variation in hospitalization rates may provide insights about disease patterns and thereby inform research, policy, and clinical care. However, the majority of medical conditions have not been studied for potential seasonal variation. The objective of this study was to apply a data-driven approach to characterize temporal variation in condition-specific hospitalizations. Using a dataset of 34 million inpatient discharges from New York State hospitals from 2008–2011, we grouped all discharges into 263 clinical conditions based on the principal discharge diagnosis using Clinical Classification Software, mitigating the limitation that administrative claims data capture clinical conditions with varying specificity. After applying Seasonal-Trend decomposition by LOESS (STL), we estimated the periodicity of the seasonal component using spectral analysis and applied harmonic regression to calculate the amplitude and phase of each condition's seasonal utilization pattern. We also introduced four new indices of temporal variation: mean oscillation width, seasonal coefficient, trend coefficient, and linearity of the trend. Finally, K-means clustering was used to group conditions across these four indices and identify common temporal variation patterns.

    Of the 263 clinical conditions considered, 164 demonstrated statistically significant seasonality. Notably, we identified conditions for which seasonal variation has not been previously described, such as ovarian cancer, tuberculosis, and schizophrenia. Clustering analysis yielded three distinct groups of conditions based on multiple measures of seasonal variation.

    Our study was limited to New York State, and results may not directly apply to other regions with distinct climates and health burdens. A substantial proportion of medical conditions, larger than previously described, exhibit seasonal variation in hospital utilization. Moreover, the application of clustering tools yields groups of clinically heterogeneous conditions with similar seasonal phenotypes. Further investigation is necessary to uncover the common etiologies underlying these shared seasonal phenotypes.
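    As a rough illustration of the pipeline described above (STL decomposition, spectral estimation of periodicity, harmonic regression, and K-means on indices of temporal variation), the sketch below runs the same sequence of off-the-shelf tools on simulated monthly counts; the four indices are simplified stand-ins, not the paper's exact definitions.

```python
# Illustrative sketch: STL -> spectral estimate of periodicity -> harmonic
# regression -> K-means on temporal-variation indices, on simulated data.
import numpy as np
import pandas as pd
from scipy.signal import periodogram
from statsmodels.tsa.seasonal import STL
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
months = pd.date_range("2008-01", "2011-12", freq="MS")
t = np.arange(len(months))

def temporal_profile(counts):
    series = pd.Series(counts, index=months)
    stl = STL(series, period=12).fit()

    # Spectral estimate of the seasonal component's dominant periodicity.
    freqs, power = periodogram(stl.seasonal)
    period = 1.0 / freqs[np.argmax(power[1:]) + 1]   # ~12 for an annual cycle

    # Harmonic regression at the annual frequency gives amplitude and phase.
    X = np.column_stack([np.ones_like(t, dtype=float),
                         np.sin(2 * np.pi * t / 12),
                         np.cos(2 * np.pi * t / 12)])
    y = (stl.seasonal + stl.trend).to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude, phase = np.hypot(beta[1], beta[2]), np.arctan2(beta[2], beta[1])
    print(f"period={period:.1f} months, amplitude={amplitude:.0f}, phase={phase:.2f}")

    # Simplified indices of temporal variation (placeholders for the paper's four).
    return [
        stl.seasonal.max() - stl.seasonal.min(),                    # mean oscillation width
        stl.seasonal.std() / series.mean(),                         # seasonal coefficient
        (stl.trend.iloc[-1] - stl.trend.iloc[0]) / series.mean(),   # trend coefficient
        np.corrcoef(t, stl.trend)[0, 1] ** 2,                       # linearity of the trend
    ]

# Two toy "conditions": one winter-peaking, one summer-peaking.
winter = 1000 + 5 * t + 200 * np.cos(2 * np.pi * t / 12) + rng.normal(0, 30, len(t))
summer = 800 - 2 * t + 150 * np.cos(2 * np.pi * (t - 6) / 12) + rng.normal(0, 30, len(t))

features = np.array([temporal_profile(winter), temporal_profile(summer)])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster labels:", clusters)
```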

    Describing the performance of U.S. hospitals by applying big data analytics

    Public reporting of measures of hospital performance is an important component of quality improvement efforts in many countries. However, it can be challenging to provide an overall characterization of hospital performance because there are many measures of quality. In the United States, the Centers for Medicare and Medicaid Services reports over 100 measures that describe various domains of hospital quality, such as outcomes, the patient experience, and whether established processes of care are followed. Although individual quality measures provide important insight, it is difficult to understand hospital performance as characterized by many measures at once.

    Accordingly, we developed a novel approach for characterizing hospital performance that highlights the similarities and differences between hospitals and identifies common patterns of performance. Specifically, we built a semi-supervised machine learning algorithm and applied it to the publicly available quality measures for 1,614 U.S. hospitals to graphically and quantitatively characterize hospital performance. In the resulting visualization, the varying density of hospitals shows that key clusters of hospitals share specific performance profiles, while other performance profiles are rare. Several popular hospital rating systems aggregate some of the quality measures included in our study into a composite score; however, hospitals that were top-ranked by such systems were scattered across our visualization, indicating that these top-ranked hospitals excel in many different ways. Our application of a novel graph analytics method to data describing U.S. hospitals revealed nuanced differences in performance that are obscured in existing hospital rating systems.
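    The abstract does not disclose the semi-supervised algorithm itself, so the sketch below substitutes standard tools (a nearest-neighbor-graph spectral embedding followed by K-means, applied to a simulated hospital-by-measure matrix) to illustrate the general idea of a graph-based map of hospital performance; it is not the authors' method.

```python
# Illustrative stand-in for a graph-based visualization of hospital quality
# measures: off-the-shelf SpectralEmbedding + KMeans on simulated data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated hospital-by-measure matrix (1,614 hospitals x 100 quality measures).
X = rng.normal(size=(1614, 100))

# Standardize each measure so no single metric dominates the similarity graph.
X_std = StandardScaler().fit_transform(X)

# Embed hospitals in two dimensions via a k-nearest-neighbor similarity graph.
embedding = SpectralEmbedding(
    n_components=2, affinity="nearest_neighbors", n_neighbors=15, random_state=0
).fit_transform(X_std)

# Group hospitals into performance profiles; dense clusters correspond to common
# profiles, sparse regions of the embedding to rare ones.
profiles = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(profiles))
```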

    STL decomposition for monthly condition-specific hospitalization rates.

    (a) Pneumonia (CCS 122) peaks in the winter. (b) Poisoning by non-medicinal substances (CCS 243) peaks in the summer. Horizontal axis: time of hospital utilization. Top panel: original time series. For the seasonal and residual components, the vertical axis shows variation in admissions; for the trend component, it shows a count of admissions. The shaded bar on the right of each component panel indicates how much the plot must be shrunk to display that component on the same scale as the raw data: a larger bar means more shrinking was required, and therefore that variation in that component is small relative to variation in the data. Bottom panel: harmonic regression model fit of the seasonal component summed with the trend component (dashed black line), plotted over the original time series.
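    A minimal sketch of producing a decomposition plot of this kind with statsmodels' STL, using a simulated winter-peaking monthly admission series in place of the CCS 122 data:

```python
# Sketch of an STL decomposition figure (observed, trend, seasonal, residual
# panels) for a simulated winter-peaking monthly admission series.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
months = pd.date_range("2008-01", "2011-12", freq="MS")
t = np.arange(len(months))
admissions = pd.Series(
    1000 + 3 * t + 200 * np.cos(2 * np.pi * t / 12) + rng.normal(0, 30, len(t)),
    index=months,
)

res = STL(admissions, period=12).fit()
fig = res.plot()  # panels: observed, trend, seasonal, residual
fig.suptitle("STL decomposition of monthly admissions")
plt.tight_layout()
plt.show()
```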