Accuracy of the discharge destination field in administrative data for identifying transfer to a long-term acute care hospital
Background: Long-term acute care hospitals (LTACs) provide specialized care for patients recovering from severe acute illness. To facilitate research into LTAC utilization and outcomes, we studied whether the discharge destination field in administrative data accurately identifies patients transferred to an LTAC following acute care hospitalization. Findings: We used 2006 hospitalization claims for United States Medicare beneficiaries to examine the performance characteristics of the discharge destination field in the administrative record, compared to the reference standard of directly observing LTAC transfers in the claims. We found that the discharge destination field was highly specific (99.7%, 95% CI: 99.7%-99.8%) but only modestly sensitive (77.3%, 95% CI: 77.0%-77.6%), with correspondingly low positive predictive value (72.6%, 95% CI: 72.3%-72.9%) and high negative predictive value (99.8%, 95% CI: 99.8%-99.8%). Sensitivity and specificity were similar when limiting the analysis to intensive care unit patients and mechanically ventilated patients, two groups with higher rates of LTAC utilization. Performance characteristics were slightly better when limiting the analysis to Pennsylvania, a state with relatively high LTAC penetration. Conclusions: The discharge destination field in administrative data can result in misclassification when used to identify patients transferred to long-term acute care hospitals. Directly observing transfers in the claims is the preferable method, although this approach is only feasible in identified data.
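As a minimal sketch (not the paper's code), the four performance characteristics reported above can be computed from a two-by-two comparison of the administrative field against the reference standard; the counts below are hypothetical:

```python
import math

def test_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV with normal-approximation 95% CIs."""
    def prop_ci(k, n):
        p = k / n
        half = 1.96 * math.sqrt(p * (1 - p) / n)  # Wald interval
        return p, max(0.0, p - half), min(1.0, p + half)

    return {
        "sensitivity": prop_ci(tp, tp + fn),  # true transfers flagged by the field
        "specificity": prop_ci(tn, tn + fp),  # non-transfers correctly not flagged
        "ppv":         prop_ci(tp, tp + fp),  # flagged discharges that are real transfers
        "npv":         prop_ci(tn, tn + fn),  # unflagged discharges that are truly not
    }

# Hypothetical counts, not the study's data:
metrics = test_performance(tp=34_000, fp=12_800, fn=10_000, tn=4_500_000)
for name, (p, lo, hi) in metrics.items():
    print(f"{name}: {p:.3%} (95% CI {lo:.3%} - {hi:.3%})")
```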
Statistical Basis for Predicting Technological Progress
Forecasting technological progress is of great interest to engineers, policy
makers, and private investors. Several models have been proposed for predicting
technological improvement, but how well do these models perform? An early
hypothesis made by Theodore Wright in 1936 is that cost decreases as a power
law of cumulative production. An alternative hypothesis is Moore's law, which
can be generalized to say that technologies improve exponentially with time.
Other alternatives were proposed by Goddard, Sinclair et al., and Nordhaus.
These hypotheses have not previously been rigorously tested. Using a new
database on the cost and production of 62 different technologies, which is the
most expansive of its kind, we test the ability of six different postulated
laws to predict future costs. Our approach involves hindcasting and developing
a statistical model to rank the performance of the postulated laws. Wright's
law produces the best forecasts, but Moore's law is not far behind. We discover
a previously unobserved regularity that production tends to increase
exponentially. A combination of an exponential decrease in cost and an
exponential increase in production would make Moore's law and Wright's law
indistinguishable, as originally pointed out by Sahal. We show for the first
time that these regularities are observed in data to such a degree that the
performance of these two laws is nearly tied. Our results show that
technological progress is forecastable, with the square root of the logarithmic
error growing linearly with the forecasting horizon at a typical rate of 2.5%
per year. These results have implications for theories of technological change,
and assessments of candidate technologies and policies for climate change
mitigation.
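As a hedged illustration (not the authors' code), the two leading hypotheses can both be fit by ordinary least squares in log space: Wright's law regresses log cost on log cumulative production, while generalized Moore's law regresses log cost on time. The data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic technology history: production grows exponentially and
# cost follows Wright's law with noise (illustrative only).
years = np.arange(1970, 2011)
cum_production = 100.0 * np.exp(0.10 * (years - years[0]))
true_w = 0.3  # learning exponent
cost = 50.0 * cum_production ** -true_w * np.exp(rng.normal(0, 0.05, years.size))

# Wright's law: log y = a - w * log x  (x = cumulative production)
w_slope, w_inter = np.polyfit(np.log(cum_production), np.log(cost), 1)

# Moore's law (generalized): log y = b - m * t
m_slope, m_inter = np.polyfit(years, np.log(cost), 1)

print(f"Wright exponent w = {-w_slope:.3f}")
print(f"Moore rate m = {-m_slope:.3f} per year")

# When production grows exponentially at rate g, the two laws coincide:
# m = w * g, which is Sahal's observation cited in the abstract.
g = np.polyfit(years, np.log(cum_production), 1)[0]
print(f"w * g = {-w_slope * g:.3f}  (compare with m)")
```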
A cluster randomized trial evaluating electronic prescribing in an ambulatory care setting
Background: Medication errors, adverse drug events and potential adverse drug events are common and serious in terms of the harms and costs they impose on the health system and those who use it. Errors resulting in preventable adverse drug events have been shown to occur most often at the ordering and administration stages. This paper describes the protocol for a pragmatic trial of electronic prescribing to reduce prescription error, designed to overcome the limitations associated with traditional study design. Design: This study was designed as a 65-week, cluster randomized, parallel study. Methods: The trial was conducted within ambulatory outpatient clinics in an academic tertiary care centre in Ontario, Canada. The electronic prescribing software for the study is a Canadian package that provides physician prescription entry with decision support at the point of care. Using a handheld computer (PDA), the physician selects medications from an error-minimising, menu-based pick list drawn from a comprehensive drug database, creates specific prescription instructions, and then transmits the prescription directly and electronically to a participating pharmacy via facsimile or to the physician's printer using local area wireless technology. The unit of allocation and randomization is the week, i.e. the system is "on" or "off" according to the randomization scheme, and the unit of analysis is the prescription, with adjustment for clustering of patients within practitioners. Discussion: This paper describes the protocol for a pragmatic cluster randomized trial of point-of-care electronic prescribing, designed specifically to overcome the limitations associated with traditional study design. Trial Registration: This trial has been registered with clinicaltrials.gov (ID: NCT00252395).
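A hedged sketch of the kind of cluster-adjusted analysis the protocol calls for (prescriptions as the unit of analysis, practitioners as clusters); the variable names and data are hypothetical, and the protocol does not specify this exact model:

```python
# Possible analysis: logistic GEE with an exchangeable working
# correlation to adjust for clustering within practitioners.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "error":        [0, 1, 0, 0, 1, 0, 1, 0],   # prescription contained an error
    "system_on":    [1, 0, 1, 1, 0, 1, 0, 0],   # e-prescribing active that week
    "practitioner": [1, 1, 2, 2, 3, 3, 4, 4],   # clustering unit
})

model = smf.gee(
    "error ~ system_on",
    groups="practitioner",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),  # within-practitioner correlation
)
result = model.fit()
print(result.summary())
```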
A method for encoding clinical datasets with SNOMED CT
Background: Over the past decade there has been a growing body of literature on how the Systematised Nomenclature of Medicine Clinical Terms (SNOMED CT) can be implemented and used in different clinical settings. Yet, for those charged with incorporating SNOMED CT into their organisation's clinical applications and vocabulary systems, there are few detailed encoding instructions and examples available to show how this can be done and what issues are involved. This paper describes a heuristic method that can be used to encode clinical terms in SNOMED CT, and illustrates how it was applied to encode an existing palliative care dataset. Methods: The encoding process involves identifying input data items; cleaning the data items; encoding the cleaned data items; and exporting the encoded terms as output term sets. Four outputs are produced: the SNOMED CT reference set, the interface terminology set, the SNOMED CT extension set, and the unencodeable term set. Results: The original palliative care database contained 211 data elements, 145 coded values and 37,248 free-text values. We were able to encode ~84% of the terms; another ~8% require further encoding and verification, while terms with a frequency of fewer than five were not encoded (~7%). Conclusions: From the pilot, our SNOMED CT encoding method appears to have the potential to become a general-purpose terminology encoding approach that can be used in different clinical systems.
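A minimal sketch of the four-way partition the Methods describe, assuming a simple exact-match lookup against a SNOMED CT description table; the paper's heuristics are richer, and the vocabularies below are hypothetical:

```python
# Sketch of the clean/encode/export steps producing the four output
# term sets (exact-match lookup only; illustrative vocabularies).
from collections import Counter

snomed_ct = {"pain": 22253000, "nausea": 422587007}        # hypothetical map
local_extension = {"breakthrough discomfort": "LOCAL-0001"}  # hypothetical

def encode_terms(raw_terms, min_frequency=5):
    counts = Counter(t.strip().lower() for t in raw_terms)  # cleaning step
    reference, interface, extension, unencodeable = {}, {}, {}, []
    for term, freq in counts.items():
        if freq < min_frequency:
            continue  # rare terms left unencoded, as in the paper
        if term in snomed_ct:
            reference[term] = snomed_ct[term]   # SNOMED CT reference set
            interface[term] = term              # interface terminology set
        elif term in local_extension:
            extension[term] = local_extension[term]  # SNOMED CT extension set
        else:
            unencodeable.append(term)           # unencodeable term set
    return reference, interface, extension, unencodeable
```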
Turbulent flow at 190 m height above London during 2006-2008: A climatology and the applicability of similarity theory
Flow and turbulence above urban terrain are more complex than above rural terrain, due to the different momentum and heat transfer characteristics that are affected by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested using observations of flow from a sonic anemometer located at 190.3 m height in London, U.K., using about 6500 h of data. Turbulence statistics (dimensionless wind speed and temperature, standard deviations, and correlation coefficients for momentum and heat transfer) were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35 respectively, similar to canonical values. Second, analysis of urban mixed-layer formulations during daytime convective conditions over London was undertaken, showing that atmospheric turbulence at high altitude over large cities might not behave dissimilarly from that over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness sublayer depth.
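For reference, a standard definition of the local Obukhov length used in such similarity analyses, together with the neutral-limit velocity-variance ratios quoted above (the paper's exact notation may differ):

```latex
% Local Obukhov length and neutral-limit sigma_i/u_* values
\[
  \Lambda \;=\; -\,\frac{u_*^{3}\,\overline{\theta_v}}
                      {k\, g\, \overline{w'\theta_v'}},
  \qquad
  \frac{\sigma_u}{u_*} \approx 2.3,\quad
  \frac{\sigma_v}{u_*} \approx 1.85,\quad
  \frac{\sigma_w}{u_*} \approx 1.35
  \quad (z/\Lambda \to 0).
\]
```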
Increasing burden of community-acquired pneumonia leading to hospitalisation, 1998-2014
BACKGROUND: Community-acquired pneumonia (CAP) is a major cause of mortality and morbidity in many countries, but few recent large-scale studies have examined trends in its incidence. METHODS: The incidence of CAP leading to hospitalisation in one UK region (Oxfordshire) was calculated over calendar time using routinely collected diagnostic codes, and modelled using piecewise-linear Poisson regression. Further models considered other related diagnoses, typical administrative outcomes, and blood and microbiology test results at admission, to determine whether CAP trends could be explained by changes in case mix, coding practices or admission procedures. RESULTS: CAP increased by 4.2%/year (95% CI 3.6 to 4.8) from 1998 to 2008, and subsequently much faster, at 8.8%/year (95% CI 7.8 to 9.7), from 2009 to 2014. Pneumonia-related conditions also increased significantly over this period. Length of stay and 30-day mortality decreased slightly in later years, but the proportions with abnormal neutrophils, urea and C-reactive protein (CRP) did not change (p>0.2). The proportion with severely abnormal CRP (>100 mg/L) decreased slightly in later years. Trends were similar in all age groups. Streptococcus pneumoniae was the most common causative organism found; however, other organisms, particularly Enterobacteriaceae, increased in incidence over the study period (p<0.001). CONCLUSIONS: Hospitalisations for CAP have been increasing rapidly in Oxfordshire, particularly since 2008. There is little evidence that this is due only to changes in pneumonia coding, an ageing population, or patients with substantially less severe disease being admitted more frequently. Healthcare planning to address potential further increases in admissions and consequent antibiotic prescribing should be a priority.
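A hedged sketch of a piecewise-linear Poisson trend model of the kind the Methods describe, with a slope change at an assumed knot year; the counts and knot placement below are illustrative, not the study's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1998, 2015)
knot = 2009

# Synthetic counts: ~4%/year growth before the knot, faster after.
log_rate = np.log(500) + 0.04 * (years - 1998) + 0.05 * np.clip(years - knot, 0, None)
counts = rng.poisson(np.exp(log_rate))

# Design matrix: intercept, linear year term, and a hinge term that
# lets the slope change at the knot (piecewise-linear trend).
X = np.column_stack([
    np.ones_like(years),
    years - 1998,
    np.clip(years - knot, 0, None),
])
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

pre = np.exp(fit.params[1]) - 1                    # annual growth before knot
post = np.exp(fit.params[1] + fit.params[2]) - 1   # annual growth after knot
print(f"growth before {knot}: {pre:.1%}/year; after: {post:.1%}/year")
```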
Shortcuts to adiabaticity in a time-dependent box
A method is proposed to drive an ultrafast non-adiabatic dynamics of an
ultracold gas trapped in a box potential. The resulting state is free from
spurious excitations associated with the breakdown of adiabaticity, and
preserves the quantum correlations of the initial state up to a scaling factor.
The process relies on the existence of an adiabatic invariant and the inversion
of the dynamical self-similar scaling law dictated by it. Its physical
implementation generally requires the use of an auxiliary expulsive potential
analogous to those used in soliton control. The method is extended to a broad
family of many-body systems. As illustrative examples we consider the ultrafast
expansion of a Tonks-Girardeau gas and of Bose-Einstein condensates in
different dimensions, where the method exhibits an excellent robustness against
different regimes of interactions and the features of an experimentally
realizable box potential. Comment: 6 pp, 4 figures, typo in Eq. (6) fixed
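A hedged sketch of the generic scaling-law structure such methods rely on, reconstructed here from standard scale-invariance arguments rather than taken from the paper itself; for a box expanding as L(t) = b(t) L_0:

```latex
% Self-similar scaling solution for a time-dependent box (sketch only;
% the paper's exact expressions may differ).
\[
  \Psi_n(x,t) \;=\; \frac{1}{\sqrt{b}}\,
    \exp\!\Big(\frac{i m \dot{b}\, x^2}{2\hbar b}\Big)\,
    \exp\!\Big(-\frac{i}{\hbar}\, E_n(0) \!\int_0^t \frac{dt'}{b^2(t')}\Big)\,
    \phi_n\!\big(x/b\big),
\]
% which solves the dynamics inside the moving box provided the auxiliary
% (generally expulsive) potential
\[
  U(x,t) \;=\; -\,\frac{m \ddot{b}}{2\, b}\, x^2
\]
% is applied; choosing b(t) with vanishing first and second derivatives at
% t = 0 and t_f leaves the final state excitation-free up to the scaling
% factor b(t_f).
```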
Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline
From medical charts to national census, healthcare has traditionally operated
under a paper-based paradigm. However, the past decade has marked a long and
arduous transformation bringing healthcare into the digital age. Ranging from
electronic health records, to digitized imaging and laboratory reports, to
public health datasets, healthcare today generates an incredible amount of
digital information. Such a wealth of data presents an exciting opportunity for
integrated machine learning solutions to address problems across multiple
facets of healthcare practice and administration. Unfortunately, the ability to
derive accurate and informative insights requires more than the ability to
execute machine learning models. Rather, a deeper understanding of the data on
which the models are run is imperative for their success. While a significant
effort has been undertaken to develop models able to process the volume of data
obtained during the analysis of millions of digitized patient records, it is
important to remember that volume represents only one aspect of the data. In
fact, drawing on data from an increasingly diverse set of sources, healthcare
data presents an incredibly complex set of attributes that must be accounted
for throughout the machine learning pipeline. This chapter focuses on
highlighting such challenges, and is broken down into three distinct
components, each representing a phase of the pipeline. We begin with attributes
of the data accounted for during preprocessing, then move to considerations
during model building, and end with challenges to the interpretation of model
output. For each component, we present a discussion around data as it relates
to the healthcare domain and offer insight into the challenges each may impose
on the efficiency of machine learning techniques. Comment: Healthcare Informatics, Machine Learning, Knowledge Discovery: 20 pages, 1 figure
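As a hedged illustration of the preprocessing concerns the chapter raises (mixed types, missingness, incomparable scales), a minimal sketch with hypothetical fields; none of this comes from the chapter itself:

```python
# Minimal preprocessing sketch for heterogeneous clinical records
# (hypothetical column names; illustrates issues the chapter raises).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

records = pd.DataFrame({
    "age":        [64, 71, None, 55],      # numeric, with missingness
    "sex":        ["F", "M", "F", None],   # categorical, with missingness
    "creatinine": [1.1, None, 2.3, 0.9],   # lab value on its own scale
})

preprocess = ColumnTransformer([
    ("numeric", Pipeline([
        ("impute", SimpleImputer(strategy="median")),  # robust to outliers
        ("scale",  StandardScaler()),                  # comparable scales
    ]), ["age", "creatinine"]),
    ("categorical", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), ["sex"]),
])

X = preprocess.fit_transform(records)
print(X.shape)
```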