
    Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare

    Challenges in interpreting high-performing models complicate the application of machine learning (ML) techniques to healthcare problems. Research on model interpretability has grown rapidly; however, approaches to explaining complex ML models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking, especially in healthcare. This makes it challenging to determine which explanation approaches might enable providers to understand model predictions in a comprehensible and useful way. Therefore, I aimed to use clinician perspectives to inform the design of explanations for ML-based prediction tools and to improve the adoption of these systems in practice. In this dissertation, I propose a new theoretical framework for designing user-centered explanations for ML-based systems. I then applied the framework to propose explanation designs for predictions from a pediatric in-hospital mortality risk model. I conducted focus groups with healthcare providers to obtain feedback on the proposed designs, which informed the design of a user-centered explanation. The user-centered explanation was evaluated in a laboratory study to assess its effect on healthcare providers' perceptions of the model and on their decision-making processes. The results demonstrated that the user-centered explanation design improved provider perceptions of using the predictive model in practice but had no significant effect on provider accuracy, confidence, or efficiency in decision-making. Limitations of the evaluation study design, including a small sample size, may have affected the ability to detect an impact on decision-making. Nonetheless, the predictive model with the user-centered explanation was positively received by healthcare providers and demonstrates a viable approach to explaining ML model predictions in healthcare. Future work is required to address the limitations of this study and to further explore the potential benefits of user-centered explanation designs for predictive models in healthcare. This work contributes a new theoretical framework for user-centered explanation design for ML-based systems that generalizes beyond the healthcare domain. Moreover, it provides meaningful insights into the role of model interpretability and explanation in healthcare while advancing the discussion of how to effectively communicate ML model information to healthcare providers.
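
    As a concrete illustration of the kind of explanation being designed, the sketch below produces a per-patient list of signed feature contributions from a linear risk model. It is a minimal sketch only: the feature names and data are hypothetical, and the dissertation's actual user-centered design was derived from clinician feedback rather than this particular attribution technique.

        # Minimal sketch: per-patient signed feature contributions from a
        # linear risk model. Features and data are hypothetical placeholders.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        features = ["heart_rate", "lactate", "gcs_score", "age_months"]
        X = rng.normal(size=(500, len(features)))
        y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(size=500) > 1).astype(int)
        model = LogisticRegression().fit(X, y)

        def explain(patient):
            """Rank features by signed contribution to the log-odds."""
            contrib = model.coef_[0] * (patient - X.mean(axis=0))
            order = np.argsort(-np.abs(contrib))
            return [(features[i], float(round(contrib[i], 3))) for i in order]

        print(explain(X[0]))  # largest-magnitude contributors listed first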

    Network Alignment in Healthcare: A Socio-Technical Approach to System-Wide Improvement and Patient Safety.

    Local process improvement efforts have permeated the healthcare industry, yet extending these improvements across an entire system remains a challenge. Coordinating services, or patient care, across organizational boundaries can be difficult and can limit leadership's ability to enable widespread organizational change. This research presents a socio-technical approach to cross-unit coordination and system-wide improvement by proposing a network alignment methodology that can aid in identifying gaps throughout a system. The proposed model examines the alignment of patient or diagnostic information flow (the technical flow network) with the ability to clearly define customer requirements and problem-solve with suppliers (the safety control network). This research uses a case study approach to assess the current situation and demonstrate an improvement approach for coordinating across organizational boundaries to improve quality in health care. Using both qualitative and quantitative data, we empirically observe a relationship between unit coordination and quality, safety culture, and process improvement efforts. This work provides a method for analyzing value streams that differs from the linear, sequential value stream mapping techniques commonly employed in manufacturing, and introduces a coordination assessment measurement approach to quantify mismatches between technical flow and organizational structure. Leadership's ability to understand where breakdowns occur and to develop countermeasures can affect the effectiveness of system-wide problem solving, which, in turn, becomes the basis for continuous organizational learning and improvement.
    Ph.D. Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91582/1/ballards_1.pd
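
    To make the alignment idea concrete, the sketch below compares a hypothetical technical flow network against a coordination (safety control) network and flags flow edges with no matching coordination link. This is an illustrative reading of the methodology, not the dissertation's actual measurement procedure; the units and edges are invented.

        # Minimal sketch: flag patient-flow edges that lack a coordination link.
        import networkx as nx

        flow = nx.DiGraph([("ED", "Radiology"), ("Radiology", "ICU"), ("ED", "ICU")])
        coordination = nx.Graph([("ED", "Radiology"), ("Radiology", "ICU")])

        # A gap is a hand-off in the technical flow with no coordination tie.
        gaps = [(u, v) for u, v in flow.edges if not coordination.has_edge(u, v)]
        alignment = 1 - len(gaps) / flow.number_of_edges()
        print(f"alignment = {alignment:.2f}, gaps = {gaps}")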

    Layout Evaluation by Simulation Protocol for Identifying Potential Inefficiencies Created by Medical Building Configuration

    With the healthcare industry in a state of change, one focus is efficiency in the healthcare environment. Architects are increasingly adopting an evidence-based design decision-making process, and in this context simulation is gaining acceptance as a source of evidence. This research developed the Layout Evaluation by Simulation (LES) protocol to evaluate the design of a healthcare facility layout. The approach comprises a systems-of-systems analysis for developing a healthcare delivery (HD) model, and a computer model and simulation of an existing medical facility validated against existing data. Simulations are then run through the validated model with the proposed facility design inserted, to evaluate the efficiency of the new spatial layout. Through a real-world case study, the research evaluates the predictive capacity of the LES protocol. A purely agent-based model and simulation, a purely discrete-event simulation, and a hybrid of the two were investigated. As detail was added to each model, simulations were run, creating a matrix of results for comparison with existing data. The LES protocol was confirmed to be effective. The results demonstrate that the healthcare delivery (HD) model provides a sufficient basis from which to develop the computer model and simulation. The LES protocol is a valuable tool for evaluating situations for emergent behavior. The research also confirmed the need for some degree of agent-based modeling to detect emergent behavior.
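
    The sketch below conveys the core of the LES idea in miniature: the same discrete-event patient-flow model is run twice, with walking times standing in for two alternative spatial layouts, and the mean time in system is compared. All parameters are invented; the protocol itself combines agent-based and discrete-event elements validated against real facility data.

        # Minimal sketch: compare two layouts by running one DES model with
        # layout-dependent walking times. All numbers are hypothetical.
        import random
        import simpy

        def patient(env, walk, exam_room, results):
            start = env.now
            yield env.timeout(walk[("entry", "exam")])     # travel set by layout
            with exam_room.request() as req:
                yield req
                yield env.timeout(random.expovariate(1 / 15))  # 15-min mean exam
            yield env.timeout(walk[("exam", "exit")])
            results.append(env.now - start)

        def run(walk, seed=1):
            random.seed(seed)
            env = simpy.Environment()
            exam_room = simpy.Resource(env, capacity=2)
            results = []
            def arrivals():
                while True:
                    yield env.timeout(random.expovariate(1 / 10))  # ~6/hour
                    env.process(patient(env, walk, exam_room, results))
            env.process(arrivals())
            env.run(until=480)  # one 8-hour day, in minutes
            return sum(results) / len(results)

        current = {("entry", "exam"): 2, ("exam", "exit"): 2}
        proposed = {("entry", "exam"): 5, ("exam", "exit"): 1}
        print(run(current), run(proposed))  # mean minutes in system per layout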

    Common data elements for pediatric traumatic brain injury: Recommendations from the working group on demographics and clinical assessment

    The Common Data Elements (CDEs) initiative is a National Institutes of Health (NIH) interagency effort to standardize naming, definitions, and data structure for clinical research variables. Comparisons of the results of clinical studies of neurological disorders have been hampered by variability in data coding, definitions, and procedures for sample collection. The CDE project's objective is to enable comparison of future clinical trial results in major neurological disorders, including traumatic brain injury (TBI), stroke, multiple sclerosis, and epilepsy. As part of this effort, recommendations for CDEs for research on TBI were developed through a 2009 multi-agency initiative. Following the initial recommendations of the Working Group on Demographics and Clinical Assessment, a separate workgroup developed recommendations on the coding of clinical and demographic variables specific to pediatric TBI studies for subjects younger than 18 years. This article summarizes the selection of measures by the Pediatric TBI Demographics and Clinical Assessment Working Group. The variables are grouped into modules, which are in turn grouped into categories. For consistency with other CDE working groups, each variable was classified by priority (core, supplemental, or emerging). Templates were produced to summarize coding formats, guide selection of data points, and provide procedural recommendations. This proposed standardization, together with the products of the other pediatric TBI working groups in imaging, biomarkers, and outcome assessment, will facilitate multi-center studies, comparison of results across studies, and high-quality meta-analyses of individual patient data.
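
    The sketch below shows one plausible way such a template could be represented in software, with each element carrying a name, definition, coding format, and priority tier. The elements shown are invented examples, not the working group's actual recommendations.

        # Minimal sketch: a CDE template as a typed record with a priority tier.
        from dataclasses import dataclass
        from enum import Enum

        class Priority(Enum):
            CORE = "core"
            SUPPLEMENTAL = "supplemental"
            EMERGING = "emerging"

        @dataclass
        class CommonDataElement:
            name: str
            definition: str
            coding: str        # permissible values / format
            priority: Priority

        elements = [
            CommonDataElement("GCS_total", "Glasgow Coma Scale total score",
                              "integer 3-15", Priority.CORE),
            CommonDataElement("injury_mechanism", "Mechanism of injury",
                              "coded category", Priority.SUPPLEMENTAL),
        ]
        print([e.name for e in elements if e.priority is Priority.CORE])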

    Basic Science to Clinical Research: Segmentation of Ultrasound and Modelling in Clinical Informatics

    The world of basic science is a world of minutiae; it boils down to improving even a fraction of a percent over the baseline standard. It is a domain of peer-reviewed fractions of a second, of squeezing every last ounce of efficiency from a processor, a storage medium, or an algorithm. The field of health data is based on extracting knowledge from segments of data that may improve some clinical process or practice guideline, improving the timeliness and quality of care. Clinical informatics and knowledge translation provide this information in order to reveal insights for improving patient treatments, regimens, and overall outcomes. In my world of minutiae, or basic science, the movement of blood served an integral role. The novel detection of sound reverberations maps out the landscape for my research. I have applied my algorithms to the various anatomical structures of the heart and arterial system. This serves as a basis for segmentation, active contouring, and shape priors. The algorithms presented leverage novel applications in segmentation, using anatomical features of the heart as shape priors and integrating optical flow models to improve tracking. The presented techniques show improvements over traditional methods in the estimation of left ventricular size and function, along with plaque estimation in the carotid artery. In my clinical world of data understanding, I have endeavoured to decipher trends in Alzheimer's disease, sepsis in hospital patients, and the burden of melanoma using mathematical modelling methods. The use of decision trees, Markov models, and various clustering techniques provides insights into data sets that are otherwise hidden. Finally, I demonstrate how efficient data capture from providers can achieve rapid results and actionable information on patient medical records. This culminated in studies on the burden of illness and its associated costs. A selection of published works from my research, spanning basic science to clinical informatics, has been included in this thesis to detail my transition. This is my journey from one contented realm to a turbulent one.
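
    As a toy illustration of the segmentation family this work builds on, the sketch below fits an active contour (snake) to a synthetic bright disc standing in for a ventricle cross-section. The real algorithms add anatomical shape priors and optical-flow tracking, which this minimal example omits.

        # Minimal sketch: active-contour segmentation of a synthetic disc.
        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        img = np.zeros((200, 200))
        rr, cc = np.mgrid[:200, :200]
        img[(rr - 100) ** 2 + (cc - 100) ** 2 < 40 ** 2] = 1.0  # bright disc

        s = np.linspace(0, 2 * np.pi, 100)  # circular contour, starts outside
        init = np.column_stack([100 + 60 * np.sin(s), 100 + 60 * np.cos(s)])

        snake = active_contour(gaussian(img, sigma=3), init, alpha=0.015, beta=10)
        print(snake.shape)  # (100, 2): contour points drawn toward the disc edge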

    ARIANA: Adaptive Robust and Integrative Analysis for finding Novel Associations

    The effective mining of biological literature can provide a range of services, such as hypothesis generation, semantic-sensitive information retrieval, and knowledge discovery, which can be important for understanding the confluence of different diseases, genes, and risk factors. Furthermore, integration of different tools at specific levels can be valuable. The main focus of this dissertation is developing and integrating tools for finding networks of semantically related entities. The key contribution is the design and implementation of an Adaptive Robust and Integrative Analysis for finding Novel Associations (ARIANA). ARIANA is a software architecture and web-based system for efficient and scalable knowledge discovery. It integrates semantic-sensitive analysis of text data through ontology mapping with database search technology to ensure the required specificity. ARIANA was prototyped using the Medical Subject Headings ontology and the PubMed database, and has demonstrated great success as a dynamic-data-driven system. ARIANA has five main components: (i) Data Stratification, (ii) Ontology-Mapping, (iii) Parameter Optimized Latent Semantic Analysis, (iv) Relevance Model, and (v) Interface and Visualization. The other contribution is the integration of ARIANA with the Online Mendelian Inheritance in Man database and the Medical Subject Headings ontology to provide gene-disease associations. Empirical studies produced some exciting knowledge discovery instances. Among them was the connection between hexamethonium and pulmonary inflammation and fibrosis. In 2001, a research study at Johns Hopkins administered the drug hexamethonium to a healthy volunteer, which ended in a tragic death due to pulmonary inflammation and fibrosis. This accident might have been prevented if the researcher had known of a published case report; since the original case report in 1955, there had been no publications regarding the association. ARIANA extracted this knowledge even though its database contains publications only from 1960 to 2012. Out of 2,545 concepts, ARIANA ranked “Scleroderma, Systemic”, “Neoplasms, Fibrous Tissue”, “Pneumonia”, “Fibroma”, and “Pulmonary Fibrosis” as the 13th, 16th, 38th, 174th, and 257th concepts, respectively. Had the researcher had access to such knowledge, this drug would likely not have been used on healthy subjects. In today's world, where data and knowledge are moving away from each other, semantic-sensitive tools such as ARIANA can bridge that gap and advance the dissemination of knowledge.
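
    The sketch below gives a minimal flavor of the latent semantic analysis at ARIANA's core: documents are embedded via truncated SVD over TF-IDF and ranked by cosine similarity to a query document. The toy corpus is invented; ARIANA itself operates over PubMed abstracts mapped to MeSH concepts with optimized parameters.

        # Minimal sketch: LSA-style ranking of documents against a query.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        docs = [
            "hexamethonium induced pulmonary inflammation in a healthy volunteer",
            "pulmonary fibrosis following inhaled ganglionic blockade",
            "systemic scleroderma with fibrous tissue neoplasms",
            "pneumonia and fibrosis of the lung parenchyma",
            "insulin regulation of serum glucose in intensive care",
            "glucose metabolism and diabetes treatment outcomes",
        ]
        tfidf = TfidfVectorizer().fit_transform(docs)
        embedding = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

        scores = cosine_similarity(embedding[0:1], embedding)[0]  # query: doc 0
        ranked = sorted(range(len(docs)), key=lambda i: -scores[i])
        print(ranked)  # fibrosis-related documents should outrank the glucose ones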

    Doctor of Philosophy

    Temporal reasoning denotes the modeling of causal relationships between different variables across different instances of time, and the prediction of future events or the explanation of past events. Temporal reasoning helps in modeling and understanding interactions between human pathophysiological processes, and in predicting future outcomes such as response to treatment or complications. Dynamic Bayesian Networks (DBNs) support modeling changes in patients' condition over time due to both diseases and treatments, using probabilistic relationships between different clinical variables, both within and across different points in time. We describe temporal reasoning and representation in general, and DBNs in particular, with special attention to DBN parameter learning and inference. We also describe temporal data preparation (aggregation, consolidation, and abstraction) techniques applicable to the medical data used in our research, and we describe and evaluate various data discretization methods applicable to medical data. Projeny, an open-source probabilistic temporal reasoning toolkit developed as part of this research, is also described. We apply these methods, techniques, and algorithms to two disease processes modeled as Dynamic Bayesian Networks. The first test case is hyperglycemia due to severe illness in patients treated in the Intensive Care Unit (ICU). We model the patients' serum glucose and insulin drip rates using Dynamic Bayesian Networks, and recommend insulin drip rates to maintain the patients' serum glucose within a normal range. The model's safety and efficacy are demonstrated by comparing it to the current gold standard. The second test case is the early prediction of sepsis in the emergency department. Sepsis is an acute, life-threatening condition that requires timely diagnosis and treatment. We present various DBN models and data preparation techniques that detect sepsis with very high accuracy within two hours after the patients' admission to the emergency department. We also discuss factors affecting the computational tractability of the models and appropriate optimization techniques. In this dissertation, we present a guide to temporal reasoning; an evaluation of various data preparation, discretization, learning, and inference methods; demonstrations on two test cases using real clinical data; and an open-source toolkit. We also recommend methods and techniques for temporal reasoning in medicine.
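
    A two-slice DBN reduces, for a single discretized variable, to rolling a belief vector through a transition matrix conditioned on the previous slice's parents. The numpy sketch below illustrates this for the glucose test case; the states and probabilities are invented for illustration, not the dissertation's learned parameters.

        # Minimal sketch: forward prediction in a two-slice DBN for glucose,
        # with the transition distribution conditioned on the insulin rate.
        import numpy as np

        states = ["low", "normal", "high"]
        # transition[rate][i, j] = P(glucose_t = j | glucose_{t-1} = i, rate)
        transition = {
            "off":  np.array([[0.8, 0.2, 0.0],
                              [0.1, 0.6, 0.3],
                              [0.0, 0.2, 0.8]]),
            "high": np.array([[0.9, 0.1, 0.0],
                              [0.4, 0.5, 0.1],
                              [0.1, 0.5, 0.4]]),
        }

        def predict(belief, rates):
            """Roll the belief state forward one time slice per insulin rate."""
            for rate in rates:
                belief = belief @ transition[rate]
            return belief

        belief = np.array([0.0, 0.2, 0.8])  # current belief: likely hyperglycemic
        print(dict(zip(states, predict(belief, ["high", "high"]).round(3))))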

    Deep Risk Prediction and Embedding of Patient Data: Application to Acute Gastrointestinal Bleeding

    Acute gastrointestinal bleeding is a common and costly condition, accounting for over 2.2 million hospital days and 19.2 billion dollars of medical charges annually. Risk stratification is a critical part of the initial assessment of patients with acute gastrointestinal bleeding. Although all national and international guidelines recommend the use of risk-assessment scoring systems, they are not commonly used in practice, have sub-optimal performance, may be applied incorrectly, and are not easily updated. With the advent of widespread electronic health record adoption, longitudinal clinical data captured during the clinical encounter are now available. However, these data are often noisy, sparse, and heterogeneous. Unsupervised machine learning algorithms may be able to identify structure within electronic health record data while accounting for key issues with the data generation process: measurements missing not at random and information captured in unstructured clinical note text. Deep learning tools can create electronic health record-based models that perform better than clinical risk scores for gastrointestinal bleeding and are well suited to learning from new data. Furthermore, these models can be used to predict risk trajectories over time, leveraging the longitudinal nature of the electronic health record. The foundation of creating relevant tools is the definition of a relevant outcome measure; in acute gastrointestinal bleeding, a composite outcome of red blood cell transfusion, hemostatic intervention, and all-cause 30-day mortality is a relevant, actionable outcome that reflects the need for hospital-based intervention. However, epidemiological trends may affect the relevance and effectiveness of the outcome measure when applied across multiple settings and patient populations. Understanding the trends in practice, potential areas of disparities, and the value proposition for using risk stratification in patients presenting to the Emergency Department with acute gastrointestinal bleeding is important for understanding how best to implement a robust, generalizable risk stratification tool. Key findings include a decrease in the rate of red blood cell transfusion since 2014 and disparities in access to upper endoscopy for patients with upper gastrointestinal bleeding by race/ethnicity across urban and rural hospitals. Projected accumulated savings from consistent implementation of risk stratification tools for upper gastrointestinal bleeding total approximately $1 billion five years after implementation. Most current risk scores were designed for use based on the location of the bleeding source: the upper or lower gastrointestinal tract. However, the location of the bleeding source is not always clear at presentation. I develop and validate electronic health record-based deep learning and machine learning tools for patients presenting with symptoms of acute gastrointestinal bleeding (e.g., hematemesis, melena, hematochezia), which is more relevant and useful in clinical practice. I show that they outperform the leading clinical risk scores for upper and lower gastrointestinal bleeding, the Glasgow Blatchford Score and the Oakland score. While the best performing gradient boosted decision tree model has overall performance equivalent to the fully connected feedforward neural network model, at the very low risk threshold of 99% sensitivity the deep learning model identifies more very low risk patients.
Using another deep learning model suited to longitudinal risk, the long short-term memory recurrent neural network, the need for red blood cell transfusion can be predicted at every 4-hour interval during the first 24 hours of an intensive care unit stay for high-risk patients with acute gastrointestinal bleeding. Finally, for implementation it is important to find patients with symptoms of acute gastrointestinal bleeding in real time and to characterize patients by risk using available data in the electronic health record. A decision rule-based electronic health record phenotype has performance, as measured by positive predictive value, equivalent to deep learning and natural language processing-based models, and after live implementation it appears to have increased use of the Acute Gastrointestinal Bleeding Clinical Care pathway. Patients with acute gastrointestinal bleeding and patients with other groups of disease concepts can be differentiated by directly mapping unstructured clinical text to a common ontology and treating the vector of concepts as signals on a knowledge graph; these patients can be separated using unbalanced diffusion earth mover’s distances on the graph. For electronic health record data with values missing not at random, MURAL, an unsupervised random forest-based method, handles data with missing values and generates visualizations that characterize patients with gastrointestinal bleeding. This thesis forms a basis for understanding the potential of machine learning and deep learning tools to characterize risk for patients with acute gastrointestinal bleeding. In the future, these tools may be critical in implementing integrated risk assessment to keep low-risk patients out of the hospital and to guide resuscitation and timely endoscopic procedures for patients at higher risk of clinical decompensation.
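
    The sketch below shows the shape of the longitudinal model described above: an LSTM over 4-hour windows of ICU measurements that emits a transfusion-risk probability at each interval. Feature counts, layer sizes, and inputs are placeholders, not the thesis's actual architecture or data.

        # Minimal sketch: an LSTM emitting a per-interval transfusion risk.
        import torch
        import torch.nn as nn

        class RiskLSTM(nn.Module):
            def __init__(self, n_features=16, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):          # x: (batch, time, features)
                h, _ = self.lstm(x)
                return torch.sigmoid(self.head(h)).squeeze(-1)

        model = RiskLSTM()
        vitals = torch.randn(8, 6, 16)  # 8 patients, six 4-hour windows, 16 features
        print(model(vitals).shape)      # torch.Size([8, 6]): risk per interval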

    Modeling Clinicians’ Cognitive and Collaborative Work in Post-Operative Hospital Care

    Clinicians confront formidable challenges with information management and coordination activities. When not properly integrated into clinical workflow, technologies can further burden clinicians' cognitive resources, which is associated with medical errors and risks to patient safety. An understanding of workflow is necessary to redesign information technologies (IT) that better support clinical processes. This is particularly important in surgical care, which is among the most clinically and resource-intensive settings in healthcare and is associated with a high rate of adverse events. There is a growing number of tools to study workflow; however, few produce the kinds of in-depth analyses needed to understand health IT-mediated workflow. The goals of this research are to: (1) investigate and model workflow and communication processes across technologies and care team members in post-operative hospital care; (2) introduce a mixed-method framework; and (3) demonstrate the framework by examining two health IT-mediated tasks. This research draws on distributed cognition and cognitive engineering theories to develop a micro-analytic strategy in which workflow is broken down into constituent people, artifacts, and information, and the interactions between them. It models the interactions that enable information flow across people and artifacts, and identifies dependencies between them. This research found that clinicians manage information in particular ways to facilitate planned and emergent decision-making and coordination processes. Barriers to information flow include frequent information transfers, clinical reasoning absent from documents, conflicting and redundant data across documents and applications, and the burden placed on clinicians as information managers. This research also shows that there is enormous variation in how clinicians interact with electronic health records (EHRs) to complete routine tasks. Variation is best evidenced by patterns that occur for only one patient case and patterns that contain repeated events. Variation is associated with the user's experience (EHR and clinical), patient case complexity, and a lack of cognitive support provided by the system to help the user find and synthesize information. The methodology is used to assess how health IT can be improved to better support clinicians' information management and coordination processes (e.g., context-sensitive design), and to inform how resources can best be allocated for clinician observation and training.
    Dissertation/Thesis. Doctoral Dissertation, Biomedical Informatics, 201
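
    The sketch below renders the micro-analytic idea in miniature: workflow as a graph whose nodes are people and artifacts and whose edges are observed information transfers, from which transfer-heavy hubs and dependencies can be read off. The nodes and edges are hypothetical, and the dissertation's framework captures far richer interaction detail than a bare graph.

        # Minimal sketch: people-artifact interactions as a workflow graph.
        import networkx as nx

        g = nx.Graph()
        g.add_nodes_from(["nurse", "surgeon", "resident"], kind="person")
        g.add_nodes_from(["EHR", "paper_list", "whiteboard"], kind="artifact")
        g.add_edges_from([
            ("nurse", "EHR"), ("nurse", "paper_list"), ("resident", "paper_list"),
            ("resident", "EHR"), ("surgeon", "whiteboard"), ("nurse", "whiteboard"),
        ])

        # Artifacts every clinician funnels through are candidate bottlenecks.
        print(sorted(g.degree, key=lambda kv: -kv[1])[:2])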

    Best-subset Selection for Complex Systems using Agent-based Simulation

    Complex systems are difficult to analyze and control because of their inherent complexity. The complex interactions among elements make it difficult to develop and test decision makers' intuition about how the system will behave under different policies. Computer models are often used to simulate the system and to observe both the direct and indirect effects of alternative interventions. However, many decision makers are unwilling to cede complete control to a computer model because of the abstractions in the model and the factors that cannot be modeled, such as physical, human, social, and organizational relationship constraints. This dissertation develops an agent-based simulation (ABS) model to analyze a complex system and its policy alternatives, and contributes a best-subset selection (BSS) procedure that provides a group of well-performing alternatives to which decision makers can then apply their subject and context knowledge in making a final decision for implementation. As a specific example of a complex system, a mass casualty incident (MCI) response system was simulated using an ABS model consisting of three interrelated sub-systems. The model was then validated by a series of sensitivity analysis experiments, and it provides a good test bed for evaluating various evacuation policies. To find the best policy minimizing overall mortality, two ranking-and-selection (R&S) procedures from the literature (Rinott (1978) and Kim and Nelson (2001)) were implemented and compared. A new best-subset selection (BSS) procedure was then developed to efficiently select, with a pre-specified probability guarantee, a best subset containing all alternatives that are close enough to the best one. Extensive numerical experiments were conducted to establish the effectiveness and demonstrate the performance of the BSS procedure. The BSS procedure was then implemented in conjunction with the MCI ABS model to select the best evacuation policies. The experimental results demonstrate the feasibility and effectiveness of our agent-based optimization methodology for complex-system policy evaluation and selection.
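
    The numpy sketch below conveys the flavor of subset selection: each policy is simulated repeatedly, and every alternative whose sample mean falls within an indifference zone (plus a sampling-error allowance) of the best is retained. The constants are illustrative and do not reproduce the dissertation's statistical guarantee or the published R&S procedures.

        # Minimal sketch: retain all policies statistically close to the best.
        import numpy as np

        rng = np.random.default_rng(0)
        true_mortality = [0.12, 0.10, 0.11, 0.18]  # hypothetical policy means
        n = 200                                    # replications per policy

        samples = [rng.normal(m, 0.03, n) for m in true_mortality]
        means = np.array([s.mean() for s in samples])
        stderr = np.array([s.std(ddof=1) / np.sqrt(n) for s in samples])

        delta = 0.02                               # indifference zone
        b = means.argmin()
        subset = [i for i in range(len(means))
                  if means[i] <= means[b] + delta + 2 * (stderr[i] + stderr[b])]
        print(subset)  # policies 0-2 typically retained; policy 3 screened out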