2,331 research outputs found

    Accuracy of the discharge destination field in administrative data for identifying transfer to a long-term acute care hospital

Background: Long-term acute care hospitals (LTACs) provide specialized care for patients recovering from severe acute illness. To facilitate research into LTAC utilization and outcomes, we studied whether the discharge destination field in administrative data accurately identifies patients transferred to an LTAC following acute care hospitalization.

Findings: We used the 2006 hospitalization claims for United States Medicare beneficiaries to examine the performance characteristics of the discharge destination field in the administrative record, compared to the reference standard of directly observing LTAC transfers in the claims. We found that the discharge destination field was highly specific (99.7%, 95% CI: 99.7%-99.8%) but modestly sensitive (77.3%, 95% CI: 77.0%-77.6%), with correspondingly low positive predictive value (72.6%, 95% CI: 72.3%-72.9%) and high negative predictive value (99.8%, 95% CI: 99.8%-99.8%). Sensitivity and specificity were similar when limiting the analysis to intensive care unit patients and mechanically ventilated patients, two groups with higher rates of LTAC utilization. Performance characteristics were slightly better when limiting the analysis to Pennsylvania, a state with relatively high LTAC penetration.

Conclusions: The discharge destination field in administrative data can result in misclassification when used to identify patients transferred to long-term acute care hospitals. Directly observing transfers in the claims is the preferable method, although this approach is only feasible in identified data.
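The reported performance characteristics all follow from a 2x2 comparison of the administrative field against the reference standard. A minimal sketch of that computation, with Wilson score intervals; the counts below are hypothetical placeholders, not the study's data:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical 2x2 counts: administrative field vs. observed LTAC transfer.
tp, fp, fn, tn = 7_730, 2_920, 2_270, 990_000  # placeholders only

metrics = {
    "sensitivity": (tp, tp + fn),  # flagged among true transfers
    "specificity": (tn, tn + fp),  # unflagged among non-transfers
    "PPV": (tp, tp + fp),          # true transfers among flagged
    "NPV": (tn, tn + fn),          # non-transfers among unflagged
}
for name, (k, n) in metrics.items():
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```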

    Statistical Basis for Predicting Technological Progress

Forecasting technological progress is of great interest to engineers, policy makers, and private investors. Several models have been proposed for predicting technological improvement, but how well do these models perform? An early hypothesis made by Theodore Wright in 1936 is that cost decreases as a power law of cumulative production. An alternative hypothesis is Moore's law, which can be generalized to say that technologies improve exponentially with time. Other alternatives were proposed by Goddard, Sinclair et al., and Nordhaus. These hypotheses have not previously been rigorously tested. Using a new database on the cost and production of 62 different technologies, which is the most expansive of its kind, we test the ability of six different postulated laws to predict future costs. Our approach involves hindcasting and developing a statistical model to rank the performance of the postulated laws. Wright's law produces the best forecasts, but Moore's law is not far behind. We discover a previously unobserved regularity that production tends to increase exponentially. A combination of an exponential decrease in cost and an exponential increase in production would make Moore's law and Wright's law indistinguishable, as originally pointed out by Sahal. We show for the first time that these regularities are observed in data to such a degree that the performance of these two laws is nearly tied. Our results show that technological progress is forecastable, with the square root of the logarithmic error growing linearly with the forecasting horizon at a typical rate of 2.5% per year. These results have implications for theories of technological change, and assessments of candidate technologies and policies for climate change mitigation.
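The Sahal observation mentioned above can be stated compactly. A sketch using generic symbols (C for unit cost, x for cumulative production, t for time; the paper's own notation may differ):

```latex
% Wright's law: cost falls as a power law of cumulative production
C(x) = C_0 \, x^{-w}
% Generalized Moore's law: cost falls exponentially in time
C(t) = C_0 \, e^{-m t}
% If production itself grows exponentially, x(t) = x_0 e^{g t}, then
% Wright's law implies
C(x(t)) = C_0 \, x_0^{-w} \, e^{-w g t},
% i.e. the two laws coincide with m = w g.
```

This is why an observed exponential growth in production makes the two laws nearly indistinguishable in hindcasting tests.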

    A cluster randomized trial evaluating electronic prescribing in an ambulatory care setting

Background: Medication errors, adverse drug events and potential adverse drug events are common and serious in terms of the harms and costs that they impose on the health system and those who use it. Errors resulting in preventable adverse drug events have been shown to occur most often at the stages of ordering and administration. This paper describes the protocol for a pragmatic trial of electronic prescribing to reduce prescription error. The trial was designed to overcome the limitations associated with traditional study design.

Design: This study was designed as a 65-week, cluster randomized, parallel study.

Methods: The trial was conducted within ambulatory outpatient clinics in an academic tertiary care centre in Ontario, Canada. The electronic prescribing software for the study is a Canadian package that provides physician prescription entry with decision support at the point of care. Using a handheld computer (PDA), the physician selects medications from an error-minimising, menu-based pick list drawn from a comprehensive drug database, creates specific prescription instructions, and then transmits the prescription directly and electronically to a participating pharmacy via facsimile or to the physician's printer using local area wireless technology. The unit of allocation and randomization is the 'week', i.e. the system is "on" or "off" according to the randomization scheme, and the unit of analysis is the prescription, with adjustment for clustering of patients within practitioners.

Discussion: This paper describes the protocol for a pragmatic cluster randomized trial of point-of-care electronic prescribing, specifically designed to overcome the limitations associated with traditional study design.

Trial Registration: This trial has been registered with clinicaltrials.gov (ID: NCT00252395).
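A minimal sketch of what week-level cluster randomization might look like, with the prescription as the unit of analysis (illustrative only; the trial's actual allocation scheme and block structure are not described at this level of detail in the abstract):

```python
import random

def randomize_weeks(n_weeks: int, seed: int = 42) -> list[str]:
    """Assign each study week to 'on' (e-prescribing) or 'off' (usual care)
    in randomly ordered balanced blocks of two, keeping the arms balanced."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_weeks // 2):
        block = ["on", "off"]
        rng.shuffle(block)
        schedule.extend(block)
    if len(schedule) < n_weeks:      # odd week count: randomize the remainder
        schedule.append(rng.choice(["on", "off"]))
    return schedule

# 65-week schedule, matching the study's stated duration
schedule = randomize_weeks(65)
print(schedule[:10])
```

Because consecutive prescriptions within a week share the same exposure, the analysis must adjust for clustering, as the protocol notes.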

    A method for encoding clinical datasets with SNOMED CT

Background: Over the past decade there has been a growing body of literature on how the Systematised Nomenclature of Medicine Clinical Terms (SNOMED CT) can be implemented and used in different clinical settings. Yet, for those charged with incorporating SNOMED CT into their organisation's clinical applications and vocabulary systems, there are few detailed encoding instructions and examples available to show how this can be done and the issues involved. This paper describes a heuristic method that can be used to encode clinical terms in SNOMED CT, and illustrates how it was applied to encode an existing palliative care dataset.

Methods: The encoding process involves: identifying input data items; cleaning the data items; encoding the cleaned data items; and exporting the encoded terms as output term sets. Four outputs are produced: the SNOMED CT reference set; the interface terminology set; the SNOMED CT extension set; and the unencodeable term set.

Results: The original palliative care database contained 211 data elements, 145 coded values and 37,248 free text values. We were able to encode ~84% of the terms; another ~8% required further encoding and verification, while terms with a frequency of fewer than five were not encoded (~7%).

Conclusions: From the pilot, it would seem our SNOMED CT encoding method has the potential to become a general-purpose terminology encoding approach that can be used in different clinical systems.
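A minimal sketch of the four-output pipeline structure the Methods describe (the lookup tables and matching logic here are hypothetical stand-ins; a real encoding workflow would query a SNOMED CT terminology server):

```python
from collections import Counter

# Hypothetical stand-in for a SNOMED CT lookup. Keys are cleaned terms,
# values are concept IDs; a real system would query a terminology server.
SNOMED_INDEX = {"pain": "22253000", "nausea": "422587007"}
LOCAL_EXTENSION = {"comfort care pathway"}  # organisation-specific concepts

def clean(term: str) -> str:
    return " ".join(term.lower().split())

def encode_dataset(raw_terms, min_frequency=5):
    """Sort cleaned terms into the four output sets from the paper:
    reference set, interface terminology set, extension set, unencodeable."""
    counts = Counter(clean(t) for t in raw_terms)
    reference, interface, extension, unencodeable = {}, {}, set(), set()
    for term, n in counts.items():
        if n < min_frequency:
            continue  # low-frequency terms were not encoded in the pilot
        if term in SNOMED_INDEX:
            reference[SNOMED_INDEX[term]] = term  # standard SNOMED CT concept
            interface[term] = SNOMED_INDEX[term]  # local display term -> concept
        elif term in LOCAL_EXTENSION:
            extension.add(term)                   # needs a local extension concept
        else:
            unencodeable.add(term)                # flag for manual review
    return reference, interface, extension, unencodeable
```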

    Increasing burden of community-acquired pneumonia leading to hospitalisation, 1998-2014

BACKGROUND: Community-acquired pneumonia (CAP) is a major cause of mortality and morbidity in many countries, but few recent large-scale studies have examined trends in its incidence.

METHODS: Incidence of CAP leading to hospitalisation in one UK region (Oxfordshire) was calculated over calendar time using routinely collected diagnostic codes, and modelled using piecewise-linear Poisson regression. Further models considered other related diagnoses, typical administrative outcomes, and blood and microbiology test results at admission to determine whether CAP trends could be explained by changes in case-mix, coding practices or admission procedures.

RESULTS: CAP increased by 4.2%/year (95% CI 3.6 to 4.8) from 1998 to 2008, and subsequently much faster at 8.8%/year (95% CI 7.8 to 9.7) from 2009 to 2014. Pneumonia-related conditions also increased significantly over this period. Length of stay and 30-day mortality decreased slightly in later years, but the proportions with abnormal neutrophils, urea and C reactive protein (CRP) did not change (p>0.2). The proportion with severely abnormal CRP (>100 mg/L) decreased slightly in later years. Trends were similar in all age groups. Streptococcus pneumoniae was the most common causative organism found; however, other organisms, particularly Enterobacteriaceae, increased in incidence over the study period (p<0.001).

CONCLUSIONS: Hospitalisations for CAP have been increasing rapidly in Oxfordshire, particularly since 2008. There is little evidence that this is due only to changes in pneumonia coding, an ageing population or patients with substantially less severe disease being admitted more frequently. Healthcare planning to address potential further increases in admissions and consequent antibiotic prescribing should be a priority.
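A minimal sketch of the piecewise-linear Poisson regression described in the Methods, with a single slope change placed at the reported trend break (the data below are synthetic and the variable names illustrative; the study used routinely collected admission counts):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration: ~4%/year growth to 2008, faster growth after.
rng = np.random.default_rng(0)
years = np.arange(1998, 2015)
rate = 500 * np.exp(0.042 * (years - 1998) + 0.046 * np.clip(years - 2008, 0, None))
admissions = rng.poisson(rate)

# Piecewise-linear Poisson model: log E[count] = b0 + b1*t + b2*max(0, t - break)
t = years - 1998
hinge = np.clip(years - 2008, 0, None)   # extra slope applying from 2009 onward
X = sm.add_constant(np.column_stack([t, hinge]))
fit = sm.GLM(admissions, X, family=sm.families.Poisson()).fit()

# exp(slope) - 1 gives the %/year change in incidence on each segment
print(f"1998-2008: {np.expm1(fit.params[1]):+.1%}/year")
print(f"post-2008: {np.expm1(fit.params[1] + fit.params[2]):+.1%}/year")
```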

    Shortcuts to adiabaticity in a time-dependent box

A method is proposed to drive the ultrafast non-adiabatic dynamics of an ultracold gas trapped in a box potential. The resulting state is free from spurious excitations associated with the breakdown of adiabaticity, and preserves the quantum correlations of the initial state up to a scaling factor. The process relies on the existence of an adiabatic invariant and the inversion of the dynamical self-similar scaling law dictated by it. Its physical implementation generally requires the use of an auxiliary expulsive potential analogous to those used in soliton control. The method is extended to a broad family of many-body systems. As illustrative examples we consider the ultrafast expansion of a Tonks-Girardeau gas and of Bose-Einstein condensates in different dimensions, where the method exhibits excellent robustness against different interaction regimes and the features of an experimentally realizable box potential.

Comment: 6 pp, 4 figures, typo in Eq. (6) fixed
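A compact sketch of the scaling structure such methods rely on, in the standard shortcuts-to-adiabaticity form for a box of time-dependent length L(t) = b(t)L(0); this is generic notation recalled from the shortcuts-to-adiabaticity literature, and the paper's exact expressions, including its Eq. (6), may differ:

```latex
% Self-similar scaling ansatz for each box eigenstate \phi_n:
\psi_n(x,t) = \frac{1}{\sqrt{b}}
  \exp\!\Big(\frac{i m \dot{b}\, x^2}{2\hbar b}\Big)
  \exp\!\Big(-\frac{i E_n(0)}{\hbar}\int_0^t \frac{dt'}{b^2(t')}\Big)\,
  \phi_n\!\Big(\frac{x}{b}\Big),
% which solves the time-dependent problem provided an auxiliary
% harmonic potential is applied inside the box:
U(x,t) = -\frac{m \ddot{b}}{2 b}\, x^2 .
% U is expulsive (inverted) whenever \ddot{b} > 0, e.g. early in a fast
% expansion; the adiabatic invariant is E_n(t)\, b^2(t).
```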

    PPLook: an automated data mining tool for protein-protein interaction

Background: Extracting and visualizing protein-protein interactions (PPIs) from the text literature is a meaningful topic in protein science, as it assists the identification of interactions among proteins. There is a lack of tools that extract PPIs, visualize them and classify the results.

Results: We developed a PPI search system, termed PPLook, which automatically extracts and visualizes protein-protein interactions (PPIs) from text. Given a query protein name, PPLook can search a dataset for other proteins interacting with it using a keywords-dictionary pattern-matching algorithm, and display topological parameters such as the number of nodes, edges, and connected components. The visualization component of PPLook enables users to view the interaction relationships among proteins in a three-dimensional space based on the OpenGL graphics interface technology. PPLook can also select a protein semantic class, count the number of proteins of that semantic class which interact with the query protein, and count the number of articles in which the query protein's interaction relationships appear. Moreover, PPLook provides heterogeneous search and a user-friendly graphical interface.

Conclusions: PPLook is an effective tool for biologists and biosystem developers who need to access PPI information from the literature. PPLook is freely available for non-commercial users at http://meta.usc.edu/softs/PPLook.
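A minimal sketch of the kind of keyword-dictionary pattern matching the abstract describes (the dictionaries and sentence pattern here are tiny hypothetical examples; PPLook's actual algorithm and lexicons are not specified in the abstract):

```python
import re

# Hypothetical dictionaries: known protein names and interaction verbs.
PROTEINS = {"BRCA1", "TP53", "MDM2"}
INTERACTION_VERBS = {"binds", "interacts with", "phosphorylates", "inhibits"}

def find_interactions(sentence: str, query: str):
    """Return (query, verb, partner) triples found in one sentence by
    simple pattern matching against the keyword dictionaries."""
    hits = []
    for verb in INTERACTION_VERBS:
        for partner in PROTEINS - {query}:
            pattern = rf"\b{re.escape(query)}\b.*\b{re.escape(verb)}\b.*\b{re.escape(partner)}\b"
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                hits.append((query, verb, partner))
    return hits

print(find_interactions("BRCA1 directly interacts with TP53 in vivo.", "BRCA1"))
# [('BRCA1', 'interacts with', 'TP53')]
```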