
    Integrated Block Sharing: A Win–Win Strategy for Hospitals and Surgeons

    We consider the problem of balancing two competing objectives in the pursuit of efficient management of operating rooms in a hospital: providing surgeons with predictable, reliable access to the operating room and maintaining high utilization of capacity. The common solution to the first problem (in practice) is to grant exclusive “block time,” in which a portion of the week in an operating room is designated to a particular surgeon, barring other surgeons from using this room/time. As a major improvement over this existing approach, we model the possibility of “shared” block time, which need only satisfy capacity constraints in expectation. We reduce the computational difficulty of the resulting NP-hard block-scheduling problem by implementing a column-generation approach and demonstrate the efficacy of this technique using simulation, calibrated to a real hospital’s historical data and objectives. Our simulations illustrate substantial benefits to hospitals under a variety of circumstances and demonstrate the advantages of our new approach relative to a benchmark method taken from the recent literature.
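The core relaxation described above can be illustrated with a minimal sketch (not the paper's column-generation model; all function names, demand histories, and capacities here are invented for illustration): a shared block is feasible whenever the surgeons' *expected* total demand fits the block, whereas exclusive block time reserves a full block per surgeon.

```python
# Minimal sketch of "shared" vs. exclusive block time (illustrative only;
# the numbers and names below are assumptions, not the paper's data).

def expected_demand(hist_hours):
    """Empirical mean weekly OR demand for one surgeon (hours)."""
    return sum(hist_hours) / len(hist_hours)

def shared_block_feasible(surgeon_histories, block_capacity_hours):
    """A shared block only needs total demand to fit *in expectation*."""
    total = sum(expected_demand(h) for h in surgeon_histories)
    return total <= block_capacity_hours

def exclusive_blocks_needed(surgeon_histories, block_capacity_hours):
    """Exclusive block time reserves a whole block per surgeon, so slack in
    one surgeon's block cannot absorb another surgeon's overflow."""
    return len(surgeon_histories)

# Two surgeons who each average 4 h/week can share one 8 h block...
histories = [[3, 5, 4, 4], [5, 3, 4, 4]]
print(shared_block_feasible(histories, 8))    # True
# ...whereas exclusive scheduling would reserve two blocks.
print(exclusive_blocks_needed(histories, 8))  # 2
```

The gap between the two answers is the utilization the paper's approach recovers: pooling demand variability across surgeons frees capacity that exclusive blocks leave idle.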

    Integrated Machine Learning and Optimization Frameworks with Applications in Operations Management

    Incorporation of contextual inference into the optimality analysis of operational problems is a canonical characteristic of data-informed decision making that requires interdisciplinary research. In an attempt to achieve individualization in operations management, we design rigorous yet practical mechanisms that boost efficiency, restrain uncertainty, and elevate real-time decision making by integrating ideas from the machine learning and operations research literatures. In our first study, we investigate the decision of whether to admit a patient to a critical care unit, a crucial operational problem with significant influence on both hospital performance and patient outcomes. Hospitals currently lack a methodology for selectively admitting patients to these units that incorporates each patient’s individual health metrics while respecting the hospital’s operational constraints. We model the problem as a complex loss queueing network with a stochastic model of how long risk-stratified patients spend in particular units and how they transition between units. A data-driven optimization methodology then approximates an optimal admission control policy for the network of units. While enforcing low levels of patient blocking, we optimize a monotonic dual-threshold admission policy. Our methodology captures utilization and accessibility in a network model of care pathways while supporting the personalized allocation of scarce care resources to the neediest patients. We also examine the benefits of admission thresholds that vary by day of week.
    In the second study, we analyze the efficiency of surgical unit operations in the era of big data. The accuracy of surgical case duration predictions is a crucial element of hospital operational performance. We propose a comprehensive methodology that incorporates both structured and unstructured data to generate individualized predictions of the overall distribution of surgery durations, and we investigate methods to incorporate such individualized predictions into operational decision making. We introduce novel prescriptive models that address optimization under uncertainty in the fundamental surgery appointment scheduling problem by utilizing the multi-dimensional data features available prior to surgery. Electronic medical records systems provide detailed patient features that enable the prediction of individualized case time distributions; however, existing approaches in this context usually employ only limited, aggregate information and do not take advantage of these detailed features. We show how the quantile regression forest can be integrated into three common optimization formulations that capture the stochasticity of this problem: stochastic optimization, robust optimization, and distributionally robust optimization.
    In the last part of this dissertation, we provide the first study of online learning problems under stochastic constraints that are "soft," i.e., need to be satisfied with high likelihood. Under a Bayesian framework, we propose and analyze a scheme that provides statistical feasibility guarantees throughout the learning horizon by using posterior Monte Carlo samples to form sampled constraints that generalize the scenario generation approach commonly used in chance-constrained programming. We demonstrate how our scheme can be integrated into Thompson sampling and illustrate it with an application in online advertisement.
    PHD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145936/1/meisami_1.pd
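A standard way such distributional predictions feed scheduling decisions is the newsvendor critical-ratio rule: book each case at a quantile of its predicted duration distribution determined by the relative costs of overtime and idle time. The sketch below is a generic illustration of that link (the duration samples and cost values are invented, and a quantile regression forest would supply the samples in practice):

```python
# Hedged sketch: given a predicted distribution of case durations (e.g. samples
# from a quantile regression forest), the newsvendor result books OR time at
# the critical-ratio quantile. Sample durations and costs are assumptions.

def quantile(samples, q):
    """Empirical q-quantile by sorting (nearest-rank style)."""
    s = sorted(samples)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

def scheduled_allowance(predicted_durations, overtime_cost, idle_cost):
    """Book time at the quantile c_over / (c_over + c_idle)."""
    critical_ratio = overtime_cost / (overtime_cost + idle_cost)
    return quantile(predicted_durations, critical_ratio)

# Predicted duration samples (minutes) for one case.
pred = [60, 70, 75, 80, 85, 90, 100, 120, 140, 180]
# Overtime is 3x as costly as idle time -> book at the 0.75 quantile.
print(scheduled_allowance(pred, overtime_cost=3.0, idle_cost=1.0))  # 120
```

Individualized predictions matter here precisely because the chosen quantile, not just the mean, drives the booking: a case with a long right tail gets a larger allowance even when its mean duration is ordinary.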

    TRADE-OFF BALANCING FOR STABLE AND SUSTAINABLE OPERATING ROOM SCHEDULING

    The implementation of the mandatory alternative payment model (APM) guarantees savings for Medicare regardless of participating hospitals’ ability to reduce spending, shifting the cost-minimization burden from insurers onto hospital administrators. Surgical interventions account for more than 30% of hospitals’ total cost and 40% of their total revenue, with a cost structure consisting of nearly 56% direct cost; thus, large cost reductions are possible through efficient operations management. However, optimizing operating room (OR) schedules is extraordinarily challenging due to the complexities involved in the process. We present new algorithms and managerial guidelines to address the problem of OR planning and scheduling with disturbances in demand and case times, and inconsistencies among the performance measures. We also present an extension of these algorithms that addresses production scheduling for sustainability. We demonstrate the effectiveness and efficiency of these algorithms via simulation and statistical analyses.

    Discrete Event Simulations

    Discrete event simulation (DES) is regarded by many authors as a technique for modelling stochastic, dynamic, discretely evolving systems, and it has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about it: each author describes how the technique is understood and applied within their context of work, providing an extensive understanding of what DES is. The name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods, and applications in a wide range of sectors and problem areas, categorised into five groups. Beyond this variety of perspectives, one additional thing stands out about this book: its richness in actual data and analyses based on actual data. While much academic work lacks application cases, roughly half of the chapters in this book deal with actual problems or are at least based on actual data. The editor therefore firmly believes that this book will be interesting for both beginners and practitioners in the area of DES.
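The common core shared by every DES variant the book surveys is a simulation clock plus a time-ordered event queue. A minimal generic skeleton (an illustration, not taken from any chapter; the event labels and service time are assumptions) looks like this:

```python
# Minimal discrete event simulation core: a clock advanced event-by-event
# through a priority queue ordered by event time. Illustrative sketch only.
import heapq

def simulate(events, horizon):
    """events: list of (time, label) seeds; handlers may schedule more."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, label = heapq.heappop(queue)
        if time > horizon:
            break
        clock = time                       # jump the clock to the next event
        log.append((clock, label))
        if label == "arrival":             # each arrival schedules a departure
            heapq.heappush(queue, (clock + 2.0, "departure"))
    return log

trace = simulate([(1.0, "arrival"), (4.0, "arrival")], horizon=10.0)
print(trace)  # events come out in time order, departures 2.0 after arrivals
```

The defining trait of DES, as opposed to time-stepped simulation, is visible in the loop: the clock jumps directly from one event time to the next, so nothing is computed for the idle intervals in between.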

    Development of a Human Reliability Analysis (HRA) model for break scheduling management in human-intensive working activities

    2016 - 2017
    Human factors play an inevitable role in working contexts, and the occurrence of human errors affects system reliability and safety, equipment performance, and economic results. While human fallibility contributes to the majority of incidents and accidents in high-risk systems, it mainly affects quality and productivity in low-risk systems. Due to the prevalence of human error and its huge, often costly consequences, considerable effort has been devoted to the field of Human Reliability Analysis (HRA), leading to methods with the common purpose of predicting the human error probability (HEP) and enabling safer and more productive designs. The purpose of each HRA method should be HEP quantification, so as to reduce and prevent possible error conditions in a working context. However, existing HRA methods do not always pursue this aim efficiently, focusing instead on qualitative error evaluation and on high-risk contexts. Moreover, several aspects of work have been considered to prevent accidents and improve human performance in human-intensive working contexts, for example the selection of adequate work-rest policies. It is well known that introducing breaks is a key intervention to provide recovery after fatiguing physical work, prevent the growth of accident risks, and improve human reliability and productivity for individuals engaged in either mental or physical tasks. This is a very efficient approach, even if it is not widely applied. ... [edited by Author]
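The HEP quantification the abstract calls for is often done, in methods such as SPAR-H, by scaling a nominal error probability with performance shaping factor (PSF) multipliers. The sketch below illustrates that general pattern only; it is not the dissertation's model, and the multiplier values for fatigue and time pressure are invented:

```python
# Illustrative HEP quantification in the PSF-multiplier style: a nominal human
# error probability scaled by context factors (e.g. fatigue without breaks).
# The nominal value and multipliers below are assumptions.

def hep(nominal, psf_multipliers):
    """HEP = nominal HEP * product of PSF multipliers, capped at 1.0."""
    p = nominal
    for m in psf_multipliers:
        p *= m
    return min(p, 1.0)

# Nominal HEP 0.001; fatigue (x5) and time pressure (x2) with no break taken.
print(hep(0.001, [5, 2]))
# After a rest break, the fatigue multiplier drops back to 1.
print(hep(0.001, [1, 2]))
```

Under this kind of model, a break-scheduling policy acts directly on the fatigue multiplier, which is what makes work-rest policies a lever on predicted reliability rather than only on comfort.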

    Modeling Clinicians’ Cognitive and Collaborative Work in Post-Operative Hospital Care

    Clinicians confront formidable challenges with information management and coordination activities. When not properly integrated into clinical workflow, technologies can further burden clinicians’ cognitive resources, which is associated with medical errors and risks to patient safety. An understanding of workflow is necessary to redesign information technologies (IT) that better support clinical processes. This is particularly important in surgical care, which is among the most clinically and resource-intensive settings in healthcare and is associated with a high rate of adverse events. There are a growing number of tools to study workflow; however, few produce the kind of in-depth analyses needed to understand health IT-mediated workflow. The goals of this research are to: (1) investigate and model workflow and communication processes across technologies and care team members in post-operative hospital care; (2) introduce a mixed-method framework; and (3) demonstrate the framework by examining two health IT-mediated tasks. This research draws on distributed cognition and cognitive engineering theories to develop a micro-analytic strategy in which workflow is broken down into constituent people, artifacts, information, and the interactions between them. It models the interactions that enable information flow across people and artifacts, and identifies dependencies between them. This research found that clinicians manage information in particular ways to facilitate planned and emergent decision-making and coordination processes. Barriers to information flow include frequent information transfers, clinical reasoning absent from documents, conflicting and redundant data across documents and applications, and the burden placed on clinicians as information managers. This research also shows enormous variation in how clinicians interact with electronic health records (EHRs) to complete routine tasks. Variation is best evidenced by patterns that occur for only one patient case and by patterns that contain repeated events. Variation is associated with users’ experience (EHR and clinical), patient case complexity, and a lack of cognitive support provided by the system to help the user find and synthesize information. The methodology is used to assess how health IT can be improved to better support clinicians’ information management and coordination processes (e.g., context-sensitive design), and to inform how resources can best be allocated for clinician observation and training.
    Dissertation/Thesis. Doctoral Dissertation, Biomedical Informatics, 201
