
    Process Mining Dashboard in Operating Rooms: Analysis of Staff Expectations with Analytic Hierarchy Process

    Full text link
    [EN] The widespread adoption of real-time location systems is boosting the development of software applications to track persons and assets in hospitals. Among the many such applications, real-time location systems in operating rooms have the advantage of grounding advanced data analysis techniques, such as process mining, to improve surgical processes. However, these applications still face entrance barriers in the clinical context. In this paper, we aim to evaluate the preferred features of a process mining-based dashboard deployed in the operating rooms of a hospital equipped with a real-time location system. The dashboard makes it possible to discover and enhance patient flows based on the location data of patients undergoing an intervention. The analytic hierarchy process was applied to quantify the prioritization of the dashboard features (data filtering, enhancement, node selection, statistics, etc.), distinguishing the priorities that each of the different roles in the operating room service assigned to each feature. The operating room staff (n = 10) were classified into three groups according to their responsibilities: technical, clinical, and managerial. Results showed different feature weights for each group, suggesting that a flexible process mining dashboard is needed to realize its potential in the management of clinical interventions in operating rooms. This paper is an extension of a communication presented at the Process-Oriented Data Science for Health Workshop at the Business Process Management Conference 2018. This project received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 812386.

    Martinez-Millana, A.; Lizondo, A.; Gatta, R.; Vera, S.; Traver Salcedo, V.; Fernández Llatas, C. (2019). Process Mining Dashboard in Operating Rooms: Analysis of Staff Expectations with Analytic Hierarchy Process. International Journal of Environmental Research and Public Health, 16(2), 1-14. https://doi.org/10.3390/ijerph16020199
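
    To make the prioritization step concrete, below is a minimal sketch of how analytic hierarchy process weights can be derived from a pairwise comparison matrix using Saaty's principal-eigenvector method. The four feature names and the example judgments are hypothetical placeholders, not the study's actual data.

```python
import numpy as np

# Hypothetical AHP pairwise comparison matrix for four dashboard
# features: filtering, enhancement, node selection, statistics.
# Entry A[i, j] says how strongly feature i is preferred over
# feature j on Saaty's 1-9 scale; the matrix is reciprocal
# (A[j, i] = 1 / A[i, j]).
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 2.0, 1/2],
    [1/5, 1/2, 1.0, 1/3],
    [1/2, 2.0, 3.0, 1.0],
])

# Priority weights are the principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio: CR < 0.1 is the conventional acceptance threshold.
# RI = 0.90 is Saaty's random index for a 4x4 matrix.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
cr = ci / 0.90

features = ["filtering", "enhancement", "node selection", "statistics"]
for name, w in zip(features, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```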

    Proactive Assessment of Accident Risk to Improve Safety on a System of Freeways, Research Report 11-15

    Get PDF
    This report describes the development and evaluation of real-time crash risk-assessment models for four freeway corridors: U.S. Route 101 NB (northbound) and SB (southbound) and Interstate 880 NB and SB. Crash data for these freeway segments for the 16-month period from January 2010 through April 2011 are used to link historical crash occurrences with real-time traffic patterns observed through loop-detector data. The crash risk-assessment models are based on a binary classification approach (crash and non-crash outcomes), with traffic parameters measured at surrounding vehicle detection station (VDS) locations as the independent variables. The analysis techniques used in this study are logistic regression and classification trees. Prior to developing the models, some data-related issues such as data cleaning and aggregation were addressed. The modeling efforts revealed that the turbulence resulting from speed variation is significantly associated with crash risk on the U.S. 101 NB corridor. The models estimated with data from U.S. 101 NB were evaluated on the basis of their classification performance, not only on U.S. 101 NB but also on the other three freeway segments, to assess transferability. It was found that a predictive model derived from one freeway can be readily applied to other freeways, although the classification performance decreases. The models that transfer best to other roadways were determined to be those that use the fewest VDSs, that is, one upstream or downstream station rather than two or three. The classification accuracy of the models is discussed in terms of how the models can be used for real-time crash risk assessment. The models can be applied to developing and testing variable speed limits (VSLs) and ramp-metering strategies that proactively attempt to reduce crash risk.
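
    As an illustration of the binary classification setup described above, the sketch below fits a logistic regression separating crash from non-crash observations using loop-detector-style features. The feature names and synthetic data are stand-ins for demonstration, not the report's actual variables or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-in for VDS observations in the minutes before each
# (crash or non-crash) window: mean speed, occupancy, and the
# coefficient of variation of speed.
n = 2000
X = np.column_stack([
    rng.normal(60, 10, n),     # mean speed (mph)
    rng.normal(12, 4, n),      # occupancy (%)
    rng.normal(0.15, 0.05, n), # speed coefficient of variation
])
# Toy labeling rule: higher speed variation raises crash odds.
p = 1 / (1 + np.exp(-(-4 + 15 * X[:, 2])))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```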

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Get PDF
    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
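
    To illustrate the kind of computation a probabilistic model checker performs for a property like "the probability of both sensors failing is less than 0.001", here is a minimal sketch that solves the standard linear-equation characterization of reachability in a discrete-time Markov chain. The two-sensor model and its failure probabilities are invented for illustration.

```python
import numpy as np

# Tiny discrete-time Markov chain for two redundant sensors.
# States: 0 = both OK, 1 = one failed, 2 = both failed (absorbing),
# 3 = repaired/safe (absorbing). Per-step failure probability p,
# repair probability r from the one-failed state.
p, r = 0.01, 0.2
P = np.array([
    [1 - 2 * p, 2 * p,     0.0, 0.0],
    [0.0,       1 - p - r, p,   r  ],
    [0.0,       0.0,       1.0, 0.0],
    [0.0,       0.0,       0.0, 1.0],
])

# Probability of eventually reaching "both failed" from each transient
# state: solve x = P_tt @ x + b, i.e. (I - P_tt) @ x = b, where b holds
# the one-step probabilities of entering the target state.
transient = [0, 1]
target = 2
A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
b = P[transient, target]
x = np.linalg.solve(A, b)
print(f"P(eventually both sensors fail | start OK) = {x[0]:.4f}")
```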

    An information assistant system for the prevention of tunnel vision in crisis management

    Get PDF
    In the crisis management environment, tunnel vision is a set of biases in decision makers' cognitive processes that often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges of the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the ongoing crisis event. All information goes through the system before it arrives at the user. The system enhances data quality, reduces data quantity, and presents crisis information in a manner that prevents or repairs the user's cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, remain open-minded to possibilities, and make proper decisions.

    Bridges Structural Health Monitoring and Deterioration Detection Synthesis of Knowledge and Technology

    Get PDF
    INE/AUTC 10.0

    Estimating Uncertainty of Bus Arrival Times and Passenger Occupancies

    Get PDF
    Travel time reliability and the availability of seating and boarding space are important indicators of bus service quality and strongly influence users' satisfaction and attitudes towards bus transit systems. With Automated Vehicle Location (AVL) and Automated Passenger Counter (APC) units becoming common on buses, some agencies have begun to provide real-time bus location and passenger occupancy information as a means to improve perceived transit reliability. Travel time prediction models have also been established based on AVL and APC data. However, existing travel time prediction models fail to provide an indication of the uncertainty associated with these estimates. This can cause a false sense of precision, which can lead to experiences associated with unreliable service. Furthermore, no existing models are available to predict individual bus occupancies at downstream stops to help travelers understand if there will be space available to board. The purpose of this project was to develop modeling frameworks to predict travel times (and associated uncertainties) as well as individual bus passenger occupancies. For travel times, accelerated failure-time survival models were used to predict the entire distribution of expected travel times. The survival models were found to be just as accurate as models developed using traditional linear regression techniques. However, the survival models were found to have smaller variances associated with predictions. For passenger occupancies, linear and count regression models were compared. The linear regression models were found to outperform count regression models, perhaps due to the additive nature of the passenger boarding process. Various modeling frameworks were tested and the best frameworks were identified for predictions at near stops (within five stops downstream) and far stops (farther than eight stops). Overall, these results can be integrated into existing real-time transit information systems to improve the quality of information provided to passengers.
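
    A minimal sketch of the distribution-prediction idea: with no censoring, a lognormal accelerated failure-time model reduces to linear regression on log travel time, from which any quantile of the predicted travel-time distribution can be read off. The covariates and data below are invented for illustration and are not the project's actual model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic link travel times: log-time depends on segment length and
# a peak-period indicator, with lognormal noise.
n = 500
dist = rng.uniform(0.5, 3.0, n)   # segment length (km)
peak = rng.integers(0, 2, n)      # 1 = peak period
log_t = 1.0 + 0.5 * dist + 0.3 * peak + rng.normal(0, 0.2, n)

# Fit the lognormal AFT by least squares on log travel time.
X = np.column_stack([np.ones(n), dist, peak])
beta, *_ = np.linalg.lstsq(X, log_t, rcond=None)
sigma = np.std(log_t - X @ beta, ddof=X.shape[1])

# Predicted travel-time distribution for a 2 km segment in the peak:
# report the median and a 90% prediction interval, not just a point.
x_new = np.array([1.0, 2.0, 1.0])
mu = x_new @ beta
lo, med, hi = np.exp(mu + sigma * stats.norm.ppf([0.05, 0.5, 0.95]))
print(f"median {med:.1f} min, 90% interval [{lo:.1f}, {hi:.1f}] min")
```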

    Estimation Of Hybrid Models For Real-time Crash Risk Assessment On Freeways

    Get PDF
    The relevance of reactive traffic management strategies such as freeway incident detection has been diminishing with advancements in mobile phone usage and video surveillance technology. On the other hand, the capacity to collect, store, and analyze traffic data from underground loop detectors has witnessed enormous growth in the recent past. These two facts together provide the motivation as well as the means to shift the focus of freeway traffic management toward proactive strategies that would involve anticipating incidents such as crashes. The primary element of a proactive traffic management strategy would be model(s) that can separate 'crash prone' conditions from 'normal' traffic conditions in real-time. The aim of this research is to establish relationship(s) between historical crashes of specific types and corresponding loop detector data, which may be used as the basis for classifying real-time traffic conditions into 'normal' or 'crash prone' in the future. In this regard, traffic data in this study were also collected for cases which did not lead to crashes (non-crash cases) so that the problem may be set up as a binary classification. A thorough review of the literature suggested that existing real-time crash 'prediction' models (classification or otherwise) are generic in nature, i.e., a single model has been used to identify all crashes (such as rear-end, sideswipe, or angle), even though traffic conditions preceding crashes are known to differ by type of crash. Moreover, a generic model would yield no information about the collision most likely to occur. To be able to analyze different groups of crashes independently, a large database of crashes reported during the 5-year period from 1999 through 2003 on the Interstate-4 corridor in Orlando was collected. The 36.25-mile instrumented corridor is equipped with 69 dual loop detector stations in each direction (eastbound and westbound), located approximately every half mile. These stations report speed, volume, and occupancy data every 30 seconds from the three through lanes of the corridor. Geometric design parameters for the freeway were also collected and collated with the historical crash and corresponding loop detector data. The first group of crashes to be analyzed were the rear-end crashes, which account for about 51% of the total crashes. Based on preliminary explorations of average traffic speeds, rear-end crashes were grouped into two mutually exclusive groups: those occurring under extended congestion (referred to as regime 1 traffic conditions) and those occurring under relatively free-flow conditions (referred to as regime 2 traffic conditions) prevailing 5-10 minutes before the crash. Simple rules to separate these two groups of rear-end crashes were formulated based on the classification tree methodology. It was found that the first group of rear-end crashes can be attributed to parameters measurable through loop detectors, such as the coefficient of variation in speed and average occupancy at stations in the vicinity of the crash location. For the second group of rear-end crashes (regime 2), traffic parameters such as average speed and occupancy at stations downstream of the crash location were significant, along with off-line factors such as the time of day and the presence of an on-ramp in the downstream direction. It was found that regime 1 traffic conditions make up only about 6% of the traffic conditions on the freeway.
Almost half of the rear-end crashes occurred under regime 1 even with such little exposure. This observation led to the conclusion that freeway locations operating under regime 1 traffic may be flagged for (rear-end) crashes without any further investigation. MLP (multilayer perceptron) and NRBF (normalized radial basis function) neural network architectures were explored to identify regime 2 rear-end crashes. The performance of individual neural network models was improved by hybridizing their outputs. Individual and hybrid PNN (probabilistic neural network) models were also explored, along with matched case-control logistic regression. The stepwise selection procedure yielded a matched logistic regression model indicating the difference between average speeds upstream and downstream as significant. Even though the model provided good interpretation, its classification accuracy over the validation dataset was far inferior to that of the hybrid MLP/NRBF and PNN models. Hybrid neural network models, along with the classification tree model (developed to identify the traffic regimes), were able to identify about 60% of the regime 2 rear-end crashes in addition to all regime 1 rear-end crashes, with a reasonable number of positive decisions (warnings). This translates into identification of more than three quarters (77%) of all rear-end crashes. Classification models were then developed for the next most frequent type, i.e., lane-change related crashes. Based on preliminary analysis, it was concluded that location-specific characteristics, such as the presence of ramps, mile-post location, etc., were not significantly associated with these crashes. The average difference between occupancies of adjacent lanes and the average speeds upstream and downstream of the crash location were found significant. The significant variables were then provided as inputs to MLP and NRBF based classifiers. The best models in each category were hybridized by averaging their respective outputs. The hybrid model significantly improved on the crash identification achieved through individual models, and 57% of the crashes in the validation dataset could be identified with 30% warnings. Although the hybrid models in this research were developed with corresponding data for rear-end and lane-change related crashes only, it was observed that about 60% of the historical single-vehicle crashes (other than rollovers) could also be identified using these models. The majority of the identified single-vehicle crashes, according to the crash reports, were caused by evasive actions by drivers trying to avoid another vehicle in front or in the other lane. Vehicle rollover crashes were found to be associated with speeding and the curvature of the freeway section; the established relationship, however, was not sufficient to identify the occurrence of these crashes in real-time. Based on the results of the modeling procedure, a framework for parallel real-time application of these two sets of models (rear-end and lane-change) in the form of a system was proposed. To identify rear-end crashes, the data are first subjected to classification tree based rules to identify traffic regimes. If traffic patterns belong to regime 1, a rear-end crash warning is issued for the location. If the patterns are identified as regime 2, they are subjected to the hybrid MLP/NRBF model employing traffic data from five surrounding traffic stations.
If the model identifies the patterns as crash prone, the location may be flagged for a rear-end crash; otherwise, a final check for a regime 2 rear-end crash is applied to the data through the hybrid PNN model. If data from five stations are not available due to intermittent loop failures, the system is provided with the flexibility to switch to models with more tolerant data requirements (i.e., models using traffic data from only one station or three stations). To assess the risk of a lane-change related crash, if all three lanes at the immediate upstream station are functioning, the hybrid of the two best individual neural network models (NRBF with three hidden neurons and MLP with four hidden neurons) is applied to the input data. A warning for a lane-change related crash may be issued based on its output. The proposed strategy is demonstrated over a complete day of loop data in a virtual real-time application. It was shown that the system of models may be used to continuously assess and update the risk for rear-end and lane-change related crashes. The system developed in this research should be perceived as the primary component of a proactive traffic management strategy. Output of the system, along with the knowledge of variables critically associated with the specific types of crashes identified in this research, can be used to formulate ways of avoiding impending crashes. However, specific crash prevention strategies, e.g., variable speed limits and warnings to commuters, demand separate attention and should be addressed through thorough future research.
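
    The parallel decision flow described above can be summarized in a short Python sketch. The regime rule, model objects, and thresholds are hypothetical placeholders standing in for the dissertation's fitted classification tree and hybrid MLP/NRBF/PNN models.

```python
# Sketch of the proposed parallel real-time screening loop. All
# thresholds and feature names are placeholder assumptions.

def is_regime1(window):
    # Stand-in for the classification-tree rules: extended congestion
    # shows up as high occupancy with highly variable speeds.
    return window["avg_occupancy"] > 30 and window["speed_cv"] > 0.25

def assess_rear_end(window, hybrid_mlp_nrbf, hybrid_pnn, threshold=0.5):
    if is_regime1(window):
        return "rear-end warning (regime 1)"
    # Regime 2: run the hybrid MLP/NRBF first, then the PNN as a
    # final check, mirroring the two-stage flow in the abstract.
    if hybrid_mlp_nrbf(window) > threshold:
        return "rear-end warning (regime 2, MLP/NRBF)"
    if hybrid_pnn(window) > threshold:
        return "rear-end warning (regime 2, PNN)"
    return "no rear-end warning"

def assess_lane_change(window, hybrid_lc, threshold=0.5):
    if not window["all_lanes_reporting"]:
        return "lane-change check skipped (detector outage)"
    return ("lane-change warning" if hybrid_lc(window) > threshold
            else "no lane-change warning")
```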

    Great East Japan Earthquake, JR East Mitigation Successes, and Lessons for California High-Speed Rail, MTI Report 12-37

    Get PDF
    California and Japan both experience frequent seismic activity, which is often damaging to infrastructure. Seismologists have developed systems for detecting and analyzing earthquakes in real-time. JR East has developed systems to mitigate the damage to its facilities and personnel, including an early earthquake detection system, retrofitting of existing facilities for seismic safety, development of more seismically resistant designs for new facilities, and earthquake response training and exercises for staff members. These systems demonstrated their value in the Great East Japan Earthquake of 2011 and have been further developed based on that experience. Researchers in California are developing an earthquake early warning system for the state, and the private sector has seismic sensors in place. These technologies could contribute to the safety of the system being developed by the California High-Speed Rail Authority, which could emulate the best practices demonstrated in Japan in the construction of the Los Angeles-to-San Jose segment.