
    An assessment of failure to rescue derived from routine NHS data as a nursing sensitive patient safety indicator (report to Policy Research Programme)

    Objectives: This study aims to assess the potential for deriving two mortality-based failure to rescue indicators and a proxy measure, based on exceptionally long length of stay, from English hospital administrative data by exploring change in coding practice over time and measuring associations between failure to rescue and factors which would suggest indicators derived from these data are valid. Design: Cross-sectional observational study of routinely collected administrative data. Setting: 146 general acute hospital trusts in England. Participants: Discharge data from 66,100,672 surgical admissions (1997 to 2009). Results: The median percentage of surgical admissions with at least one secondary diagnosis recorded increased from 26% in 1997/8 to 40% in 2008/9. The failure to rescue rate for a hospital appears to be relatively stable over time: inter-year correlations between 2007/8 and 2008/9 ranged from r=0.92 to r=0.94. No failure to rescue indicator was significantly correlated with the average number of secondary diagnoses coded per hospital. Regression analyses showed that failure to rescue was significantly associated (p<0.05) with several hospital characteristics previously associated with quality, including staffing levels. Higher medical staffing (doctors + nurses) per bed and more doctors relative to the number of nurses were associated with lower failure to rescue. Conclusion: Coding practice has improved, and failure to rescue can be derived from English administrative data. The suggestion that it is particularly sensitive to nursing is not clearly supported. Although the patient population is more homogeneous than for other mortality measures, risk adjustment is still required
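
    As an illustrative sketch of the kind of indicator described above, the snippet below computes a hospital-level failure to rescue rate (deaths among admissions with a recorded complication, divided by all such admissions) and the inter-year correlation of hospital rates. The column names (hospital, year, complication, died) are hypothetical stand-ins for fields that would be derived from administrative discharge data, not the study's actual variables.

```python
import pandas as pd

def failure_to_rescue_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Hospital-year failure to rescue rate: deaths among surgical admissions
    with at least one recorded complication, per admission with a complication."""
    at_risk = df[df["complication"] == 1]                   # hypothetical 0/1 complication flag
    return (
        at_risk.groupby(["hospital", "year"])["died"]       # hypothetical 0/1 in-hospital death flag
        .mean()
        .rename("ftr_rate")
        .reset_index()
    )

def inter_year_correlation(rates: pd.DataFrame, year_a, year_b) -> float:
    """Pearson correlation of hospital failure to rescue rates between two years,
    i.e. the kind of stability check reported between 2007/8 and 2008/9."""
    wide = rates.pivot(index="hospital", columns="year", values="ftr_rate")
    return wide[year_a].corr(wide[year_b])
```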

    Approaches to canine health surveillance

    Effective canine health surveillance systems can be used to monitor disease in the general population, prioritise disorders for strategic control and focus clinical research, and to evaluate the success of these measures. The key attributes for optimal data collection systems that support canine disease surveillance are representativeness of the general population, validity of disorder data and sustainability. Limitations in these areas present as selection bias, misclassification bias and discontinuation of the system respectively. Canine health data sources are reviewed to identify their strengths and weaknesses for supporting effective canine health surveillance. Insurance data benefit from large and well-defined denominator populations but are limited by selection bias relating to the clinical events claimed and animals covered. Veterinary referral clinical data offer good reliability for diagnoses but are limited by referral bias for the disorders and animals included. Primary-care practice data have the advantage of excellent representation of the general dog population and recording at the point of care by veterinary professionals but may encounter misclassification problems and technical difficulties related to management and analysis of large datasets. Questionnaire surveys offer speed and low cost but may suffer from low response rates, poor data validation, recall bias and ill-defined denominator population information. Canine health scheme data benefit from well-characterised disorder and animal data but reflect selection bias during the voluntary submissions process. Formal UK passive surveillance systems are limited by chronic under-reporting and selection bias. It is concluded that active collection systems using secondary health data provide the optimal resource for canine health surveillance

    What does validation of cases in electronic record databases mean? The potential contribution of free text

    Electronic health records are increasingly used for research. The definition of cases or endpoints often relies on the use of coded diagnostic data, using a pre-selected group of codes. Validation of these cases, as ‘true’ cases of the disease, is crucial. There are, however, ambiguities in what is meant by validation in the context of electronic records. Validation usually implies comparison of a definition against a gold standard of diagnosis and the ability to identify false negatives (‘true’ cases which were not detected) as well as false positives (detected cases which did not have the condition). We argue that two separate concepts of validation are often conflated in existing studies: first, whether the GP thought the patient was suffering from a particular condition (which we term confirmation, or internal validation); and second, whether the patient really had the condition (external validation). Few studies have the ability to detect false negatives who have not received a diagnostic code. Natural language processing is likely to open up the use of free text within the electronic record, which will facilitate both the validation of the coded diagnosis and searching for false negatives
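
    To make the distinction concrete, the sketch below shows the arithmetic of external validation against a gold standard: once both false positives and false negatives can be counted, positive predictive value and sensitivity can both be reported. The boolean inputs are invented for illustration and do not represent any particular record system.

```python
def validation_metrics(flagged_by_codes, has_condition):
    """Compare a code-based case definition against a gold-standard diagnosis.
    Both inputs are hypothetical parallel lists of booleans, one entry per patient."""
    tp = sum(f and t for f, t in zip(flagged_by_codes, has_condition))      # true cases detected
    fp = sum(f and not t for f, t in zip(flagged_by_codes, has_condition))  # detected, but not true cases
    fn = sum(t and not f for f, t in zip(flagged_by_codes, has_condition))  # true cases with no code
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")             # requires knowing false negatives
    return {"positive_predictive_value": ppv, "sensitivity": sensitivity}
```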

    Performance Measures Using Electronic Health Records: Five Case Studies

    Presents the experiences of five provider organizations in developing, testing, and implementing four types of electronic quality-of-care indicators based on EHR data. Discusses challenges, and compares results with those from traditional indicators

    Extracting information from the text of electronic medical records to improve case detection: a systematic review

    Background: Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods: A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results: Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic curve in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic curve 95% (codes + text) vs 88% (codes), P = .025). Conclusions: Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall)
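
    A rough illustration of the "codes + text" idea reviewed above: a patient is flagged as a case if a pre-selected diagnostic code is present or a simple keyword rule (with crude negation handling) fires on the free text. The code list, keywords and negation pattern are invented for this sketch and are not taken from the review.

```python
import re

CASE_CODES = {"I21", "I22"}  # hypothetical pre-selected ICD-10 codes for the condition
KEYWORDS = re.compile(r"\bmyocardial infarction\b|\bheart attack\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(no|denies|without)\b[^.]{0,40}(myocardial infarction|heart attack)", re.IGNORECASE)

def detect_case(coded_diagnoses: set, note_text: str) -> bool:
    """Flag a patient as a case using structured codes, free text, or both."""
    code_hit = bool(coded_diagnoses & CASE_CODES)
    text_hit = bool(KEYWORDS.search(note_text)) and not NEGATION.search(note_text)
    return code_hit or text_hit

# e.g. detect_case({"E11"}, "Admitted with chest pain; heart attack confirmed on ECG.") -> True
```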

    Deepr: A Convolutional Net for Medical Records

    Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space
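
    A minimal PyTorch sketch of the architecture described above (not the authors' released code): the record is represented as a sequence of integer tokens for clinical codes, discretised time gaps and transfers; an embedding plus a 1-D convolution detects local motifs, max-pooling aggregates them over the whole record, and a linear layer outputs a risk score. All sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class DeeprSketch(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64,
                 n_filters: int = 100, kernel_size: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size, padding=1)
        self.out = nn.Linear(n_filters, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer-coded events, time-gap and transfer symbols
        x = self.embed(token_ids).transpose(1, 2)       # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))                    # local clinical-motif detectors
        x = x.max(dim=2).values                         # max-pool over the sequence
        return torch.sigmoid(self.out(x)).squeeze(-1)   # predicted risk, e.g. unplanned readmission

# e.g. DeeprSketch(vocab_size=5000)(torch.randint(1, 5000, (8, 120)))
```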