
    Classification of postoperative surgical site infections from blood measurements with missing data using recurrent neural networks

    Clinical measurements that can be represented as time series constitute an important fraction of the electronic health record and are often both uncertain and incomplete. Recurrent neural networks (RNNs) are a class of neural networks particularly well suited to processing time series data but, in their original formulation, cannot explicitly deal with missing data. In this paper, we explore imputation strategies for handling missing values in RNN-based classifiers and apply a recently proposed recurrent architecture, the Gated Recurrent Unit with Decay, specifically designed to handle missing data. We focus on the problem of detecting surgical site infection in patients by analyzing time series of their blood sample measurements, and we compare the results obtained with different RNN-based classifiers
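The simplest imputation strategy compared in work like this can be sketched in a few lines (function name and the zero fill-value are illustrative choices, not taken from the paper): carry the last observed value forward and keep a binary observation mask, which decay-based architectures such as GRU-D consume alongside the imputed values.

```python
import numpy as np

def forward_fill_impute(series, fill_value=0.0):
    """Impute missing values (NaN) in a (time, features) array by carrying
    the last observed value forward; leading gaps get fill_value. Also
    returns the binary observation mask that mask-aware RNNs take as an
    extra input channel."""
    series = np.asarray(series, dtype=float)
    mask = ~np.isnan(series)                      # True where observed
    filled = np.where(mask, series, fill_value)   # provisional fill
    for t in range(1, series.shape[0]):
        # where this step is missing, keep the previous (already filled) value
        filled[t] = np.where(mask[t], filled[t], filled[t - 1])
    return filled, mask.astype(float)
```

A GRU-D-style model additionally uses the time since the last observation to decay the carried-forward value toward an empirical mean; the mask returned here is the ingredient both approaches share.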

    Time series cluster kernels to exploit informative missingness and incomplete label information

    The time series cluster kernel (TCK) provides a powerful tool for analysing multivariate time series subject to missing data. TCK is designed using an ensemble learning approach in which Bayesian mixture models form the base models. Because of the Bayesian approach, TCK can naturally deal with missing values without resorting to imputation, and the ensemble strategy ensures robustness to hyperparameters, making it particularly well suited for unsupervised learning. However, TCK assumes the data are missing at random and that the underlying missingness mechanism is ignorable, i.e. uninformative, an assumption that does not hold in many real-world applications, such as medicine. To overcome this limitation, we present a kernel capable of exploiting the potentially rich information in the missing values and patterns, as well as the information from the observed data. In our approach, we create a representation of the missing pattern, which is incorporated into mixed mode mixture models in such a way that the information provided by the missing patterns is effectively exploited. We also propose a semi-supervised kernel, capable of taking advantage of incomplete label information to learn more accurate similarities. Experiments on benchmark data, as well as a real-world case study of patients described by longitudinal electronic health record data who potentially suffer from hospital-acquired infections, demonstrate the effectiveness of the proposed method
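The core idea of treating missingness as informative, rather than imputing it away, can be illustrated with a toy similarity measure (this is an invented sketch, not the TCK algorithm: the function name, RBF form, and blending weight are all illustrative): blend a kernel on jointly observed values with agreement between the two missingness patterns.

```python
import numpy as np

def missingness_kernel(X, Y, gamma=1.0, alpha=0.5):
    """Toy similarity between two equally-shaped multivariate time series
    with NaNs. Blends an RBF kernel on jointly observed entries with the
    fraction of entries whose observed/missing status agrees, so two
    patients with the same *pattern* of missing lab values score as more
    similar even before comparing the values themselves."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    mx, my = ~np.isnan(X), ~np.isnan(Y)
    both = mx & my
    if both.any():
        d2 = np.mean((X[both] - Y[both]) ** 2)
        k_values = np.exp(-gamma * d2)
    else:
        k_values = 0.0
    k_pattern = np.mean(mx == my)  # agreement of the missingness patterns
    return alpha * k_values + (1 - alpha) * k_pattern
```

In the paper the missingness representation is folded into mixed-mode mixture models inside an ensemble; this sketch only shows why the missing pattern carries usable signal at all.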

    Automated Detection Of Surgical Adverse Events From Retrospective Clinical Data

    University of Minnesota Ph.D. dissertation. August 2017. Major: Health Informatics. Advisors: Genevieve Melton-Meaux, Gyorgy Simon. 1 computer file (PDF); iv, 101 pages. The detection of surgical adverse events has become increasingly important with the growing demand for quality improvement and public health surveillance in surgery. Event reporting is one of the key steps in determining the impact of postoperative complications from a variety of perspectives and is an integral component of improving transparency around surgical care and, ultimately, of addressing complications. Manual chart review is the most commonly used method for identifying adverse events and is considered the “gold standard” in many patient safety studies, but it is labor-intensive and time-consuming, and many hospitals have found it too expensive to use routinely. In this dissertation, aiming to accelerate the process of extracting postoperative outcomes from medical charts, an automated postoperative adverse event detection application has been developed using structured electronic health record (EHR) data and unstructured clinical notes. First, pilot studies were conducted to test feasibility using only completed EHR data, focusing on three types of surgical site infection (SSI). The resulting models have high specificity as well as very high negative predictive values, reliably eliminating the vast majority of patients without SSI and thereby significantly reducing the chart reviewers' burden. Practical missing-data treatments have also been explored and compared. To address modeling challenges such as high-dimensional data and imbalanced class distribution, several machine learning methods have been applied. In particular, one single-task and five multi-task learning methods are developed and compared on their detection performance.
The models demonstrated high detection performance, which supports the feasibility of accelerating the manual process of extracting postoperative outcomes from medical charts. Finally, the use of structured EHR data, clinical notes, and the combination of these data types has been investigated separately, and models using the different types of data were compared on their detection performance. The very high AUC scores of the resulting models demonstrate that supervised machine learning methods can be effective for automated detection of surgical adverse events
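The emphasis on specificity and negative predictive value for safely ruling patients out can be made concrete (the confusion-matrix counts below are invented for illustration, not taken from the dissertation):

```python
def screening_metrics(tp, fp, tn, fn):
    """Specificity (how well true negatives are recognised) and negative
    predictive value (how trustworthy a negative prediction is) -- the two
    quantities that let a screening model safely exclude patients without
    SSI before any manual chart review."""
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return specificity, npv

# Hypothetical review of 1000 charts: 60 false alarms, 20 missed infections
spec, npv = screening_metrics(tp=40, fp=60, tn=880, fn=20)
```

With these invented counts, specificity is about 0.94 and NPV about 0.98: nearly every chart the model clears is genuinely free of SSI, so reviewers can skip it.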

    National survey of variations in practice in the prevention of surgical site infections in adult cardiac surgery, United Kingdom and Republic of Ireland

    Background: Currently no national standards exist for the prevention of surgical site infection (SSI) in cardiac surgery, and SSI rates range from 1% to 8% between centres. Aim: The aim of this study was to explore and characterize variation in approaches to SSI prevention in the UK and the Republic of Ireland (ROI). Methods: Cardiac surgery centres were surveyed using electronic web-based questionnaires to identify variation in SSI prevention at the level of both institutions and consultant teams. Surveys were developed and undertaken through collaboration between the Cardiothoracic Interdisciplinary Research Network (CIRN), Public Health England (PHE) and the National Cardiac Benchmarking Collaborative (NCBC) to encompass routine pre-, intra- and postoperative practice. Findings: Nineteen of the 38 centres approached provided data, comprising responses from 139 consultant teams; there were no missing data from the centres that responded. The results demonstrated substantial variation in over 40 aspects of SSI prevention, including SSI surveillance, reporting of SSI rates to external bodies, utilization of SSI risk prediction tools, and the use of interventions such as sternal support devices and gentamicin-impregnated sponges. Conclusion: The measured variation in SSI prevention in cardiac centres across the UK and ROI is evidence of clinical uncertainty as to best practice, and has identified areas for quality improvement as well as knowledge gaps to be addressed by future research

    Impact of the WHO Surgical Safety Checklist implementation on perioperative work and risk perceptions: A process evaluation by use of quantitative and qualitative methods

    Background: Human performance deficiencies account for a large proportion of adverse surgical events. The World Health Organization (WHO) Surgical Safety Checklist (SSC) was launched to improve teamwork and patient outcome. Its introduction in hospitals worldwide has been associated with beneficial impacts on a range of patient and team outcomes. However, both the implementation quality and the comprehensive inclusion of all parts of the checklist are reported to differ among hospitals, surgical specialties and surgical staff members. To understand and engage with these differences, studies were warranted to investigate both perioperative work processes and process indicators associated with positive SSC outcomes. Aims: To investigate the impact of WHO SSC implementation on perioperative care processes and patient outcome. To explore perioperative work processes in the provision of surgical antibiotic prophylaxis (SAP) following the SSC implementation. To explore how the WHO SSC fits with existing perioperative risk management strategies among the multidisciplinary team members. Methods: A combination of quantitative and qualitative methods was used in the studies for this thesis, including data from patients, healthcare personnel and perioperative teamwork observations. In Study 1, we performed a secondary analysis of a WHO SSC stepped wedge cluster randomised controlled trial. A total of 3,708 surgical procedures were analysed from three surgical units (neurosurgery, cardiothoracic, and orthopaedic) at Haukeland University Hospital. We examined how the SSC implementation quality affected perioperative work processes and patient outcome. In Study 2 and Study 3, we used a prospective ethnographic design, combining 40 hours of observations and 22 individual face-to-face interviews of key informants, conducted at Haraldsplass Deaconess Hospital, Førde Central Hospital and Haukeland University Hospital.
We explored perioperative work processes in relation to SSC utilisation. In Study 2, we outlined the provision of surgical antibiotic prophylaxis, and in Study 3, we analysed the integration of the SSC in local and professional perioperative risk management. Results: In Study 1, the results showed that high-quality SSC implementation, i.e., all 3 checklist parts used, was significantly associated with improved perioperative work processes (preoperative site marking, normothermia protection, and timely provision of SAP pre-incision) and reduction of complications (surgical infections, wound rupture, perioperative bleeding, and cardiac and respiratory complications). In Study 2, we identified that the provision of SAP was a complex process and outlined the linked perioperative work processes. This involved several interacting factors related to preparation and administration, prescription accuracy and systems, patient specific conditions and changes in the operating theatre schedules. The timeframe of 60 minutes described in the SSC was a prominent mechanism in facilitating administration of SAP before incision. In Study 3, we identified three dominant strategies: “assessing utility”, “customising SSC implementation”, and “interactive micro-team communication”. Each of these reflected on how the SSC was integrated into risk management strategies in daily surgical practice. Each strategy had corresponding categories describing how SSC utility assessment was carried out and how performance of SSC was customized, mainly according to actual presence of team members and barriers of performance. The strategy of “interactive micro-team communication” included formal and informal micro-team formations where detailed, and specific risk assessments unfolded. Conclusion: Utilisation of all 3 parts of the SSC was significantly associated with improved processes and outcomes of care. 
Overall improvement of SAP administration is likely to have been influenced by the SSC timeframe of “60 minutes prior to incision”, acting either as a cognitive reminder of timely administration and/or as an educational intervention. Although use of the SSC has had a significant impact on specific perioperative work processes, the identified norms of behaviour and communication indicate that the SSC was not fully integrated into the existing day-to-day perioperative risk management strategies of the multidisciplinary team members

    The design and application of surveillance systems in improving health outcomes and identifying risk factors for healthcare associated infections

    The risks of patients acquiring an infection as a result of healthcare are considerable, with between 6.4% and 9.1% of patients in hospital found to have a healthcare associated infection (HCAI). These infections account for a considerable burden of disease; they are associated with significant morbidity and mortality, and incur costs to the patient, healthcare organisations and society. There is considerable evidence for measures that are effective in preventing HCAI; however, there are challenges in ensuring that healthcare workers are aware of the risks and adhere to recommended practice. Surveillance systems that systematically capture, analyse and feed back data on rates of HCAI have been found to be a key component of effective infection control strategies, especially when they incorporate benchmarking. The large datasets captured by national surveillance systems also provide a unique opportunity to explore the epidemiology of HCAI, the factors that contribute to their occurrence and their impact on public health. This thesis concerns the design and application of surveillance systems for infections associated with healthcare. It reflects the programme of research originating from my involvement with the development and delivery of national HCAI surveillance systems in England from the mid-1990s. This research has addressed my underpinning hypothesis that: 'there are real differences in rates of HCAI which reflect variation in clinical practice and indicate where improvement may prevent these infections'. The thesis includes eight primary publications focused on two key types of HCAI, surgical site infections (SSI) and bloodstream infections (BSI).
The publications related to SSI describe my work on: the risks of SSI in terms of mortality and increased length of hospital stay; significant independent risk factors for SSI following hip prosthesis; the relationship between duration of operations and risk of SSI; inter-country comparisons of rates; an innovative approach to performance monitoring based on funnel plots; and the impact of post-discharge surveillance on benchmarking. They are based on the analysis of data contributed to the national SSI surveillance system. A further two publications related to BSI explored trends in causative pathogens and sources of methicillin-resistant Staphylococcus aureus. The thesis describes the main methods and findings of these studies, their contribution to contemporary knowledge and subsequent contributions to the field, illustrating my contribution to each of the works and my professional development as a researcher. The body of work has identified important trends in pathogens causing BSI, in particular the emergence of Escherichia coli as a major cause of these infections, and provided evidence of possible contributory factors. It has also identified factors contributing to the reduction of methicillin-resistant Staphylococcus aureus as a cause of BSI. It has added to the body of knowledge on outcomes of SSI, demonstrating that SSI doubles the length of hospital stay and that more severe infections significantly increase the risk of mortality in some types of surgery. It has informed the design and delivery of SSI surveillance systems in England and Europe through identifying the impact of key risk factors, such as the duration of operation and type of hip replacement procedure, and exploring the impact of variation in the application of surveillance methods, in particular post-discharge surveillance, on rates of SSI.
It has enhanced the value of surveillance as a performance monitoring tool through the application of innovative approaches to adjusting and comparing rates, such as the use of funnel plots for the detection of outliers. In conclusion, these analyses of data on HCAI have informed the development of national surveillance systems, improved understanding of variation in rates, and identified factors that may influence them. Further work is required to enhance and develop surveillance systems so that they can continue to support the evaluation of effective infection prevention strategies in a rapidly changing healthcare environment
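The funnel-plot approach to outlier detection described above can be sketched as follows. This is a simplified normal-approximation version with invented numbers; national schemes typically use exact binomial limits and risk-adjusted rates.

```python
import math

def funnel_limits(p_bar, n, z=1.96):
    """Approximate funnel-plot control limits for an infection rate: the
    overall expected proportion p_bar with binomial standard-error bands
    that narrow as the number of operations n grows, so small centres
    need a larger deviation before they are flagged."""
    se = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)

def is_outlier(events, n, p_bar, z=1.96):
    """Flag a centre whose observed rate falls outside the funnel."""
    lo, hi = funnel_limits(p_bar, n, z)
    rate = events / n
    return rate < lo or rate > hi
```

For example, against a national rate of 5%, a centre with 40 infections in 400 operations (10%) falls outside the 95% band, while one with 20 infections (5%) does not.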

    Groin surgical site infection incidence in vascular surgery with intradermal suture versus metallic stapling skin closure. A study protocol for a pragmatic open-label parallel-group randomized clinical trial (VASC-INF trial)

    Background: Surgical site infection is one of the most feared complications in vascular surgery because of its high morbidity and mortality. The use of intradermal sutures for skin closure might be associated with a reduction in infection incidence. However, the data available in the literature are scarce and primarily built on low-evidence studies. To our knowledge, no multicenter clinical trial has been published assessing whether intradermal suture is associated with a lower surgical site infection incidence than metallic staples in patients undergoing revascularization surgery requiring a femoral approach. Methods: VASC-INF is a pragmatic, multicenter, multistate (Spain, Italy, and Greece), randomized, open-label clinical trial assessing surgical site infection incidence in patients undergoing revascularization surgery requiring a femoral approach. Patients will be randomized in a 1:1 ratio to intradermal suture closure (experimental group) or metallic staple closure (control group). The primary outcome is the number (percentage) of patients with a surgical site infection (superficial and/or deep) associated with a femoral approach up to 28 (±2) days after surgery. Secondary outcomes include the number (percentage) of patients with other surgical wound complications; the number (percentage) of patients with surgical site infections who develop sepsis; the type of antibiotic therapy used; the species of microorganisms isolated; and a description of surgical site infection risk factors. Discussion: Intradermal suture closure may be beneficial in patients undergoing revascularization surgery requiring a femoral approach. Our working hypothesis is that intradermal suture closure reduces the incidence of surgical site infection compared with metallic staple closure
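The 1:1 allocation described in the protocol is commonly implemented with permuted-block randomisation; the sketch below is a hypothetical illustration of that technique (the protocol does not state its actual randomisation mechanism, and the block size and seed are invented).

```python
import random

def permuted_block_allocation(n_patients, block_size=4, seed=None):
    """Hypothetical 1:1 permuted-block randomisation: each block of
    block_size contains equal numbers of 'experimental' (intradermal
    suture) and 'control' (metallic staples) slots, shuffled, which keeps
    the two arms balanced throughout recruitment."""
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_patients:
        block = (["experimental"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)
        arms.extend(block)
    return arms[:n_patients]
```

Multicentre trials usually stratify such blocks by centre so each hospital stays balanced on its own.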

    Predicting infections using computational intelligence – A systematic review

    Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health and even cause death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful for predicting the risk of infection in patients, raising early alarms that can help medical teams respond as quickly as possible. In this paper, we survey the state of the art in infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents were found in the period between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting, to the best of our knowledge, in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, among which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works offer useful ideas on the problems of small data and class imbalance. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature
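The class-imbalance problem the review highlights (infections are rare relative to uninfected patients) is most often countered with class weighting. Below is a minimal pure-NumPy sketch of logistic regression with inverse-frequency class weights; it is an illustrative implementation, not code from any surveyed paper, which would typically use a library such as scikit-learn.

```python
import numpy as np

def weighted_logistic_regression(X, y, lr=0.5, epochs=1000):
    """Logistic regression fitted by gradient descent, with each sample
    weighted inversely to its class frequency so the few positives
    (infections) count as much as the many negatives."""
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])  # bias column
    y = np.asarray(y, float)
    w_pos = len(y) / (2.0 * max(y.sum(), 1.0))
    w_neg = len(y) / (2.0 * max((1 - y).sum(), 1.0))
    sample_w = np.where(y == 1, w_pos, w_neg)
    beta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (sample_w * (p - y)) / len(y)  # weighted log-loss gradient
        beta -= lr * grad
    return beta

def predict_proba(beta, X):
    """Predicted infection probability for new feature rows."""
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])
    return 1.0 / (1.0 + np.exp(-X @ beta))
```

Without the weights, a model trained on heavily imbalanced data can minimise its loss by predicting "no infection" for everyone; the weighting makes that shortcut expensive.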