280 research outputs found

    Executable clinical models for acute care

    Medical errors are the third leading cause of death in the U.S., after heart disease and cancer, causing at least 250,000 deaths every year. These errors are often caused by slips and lapses, which include, but are not limited to, delayed diagnosis, delayed or ineffective therapeutic interventions, and unintended deviation from best practice guidelines. These situations may occur more often in acute care settings, where the staff are overloaded, under stress, and must make quick decisions based on the best available evidence. An integrated clinical guidance system can reduce such medical errors by helping medical staff track and assess patient state more accurately and adapt the care plan according to best practice guidelines. A key prerequisite for developing such a guidance system, however, is creating computer-interpretable representations of the clinical knowledge. The main focus of this thesis is to develop executable clinical models for acute care. We propose an organ-centric, pathophysiology-based modeling paradigm in which we translate medical text into executable, interactive disease and organ state machines. We formally verify the correctness and safety of the developed models. Afterward, we integrate the models into a best practice guidance system. We use cardiac arrest and sepsis case studies to demonstrate the applicability of the proposed modeling paradigm. We validate the clinical correctness and usefulness of our model-driven cardiac arrest guidance system in an ACLS training class. We have also conducted a preliminary clinical simulation of our model-driven sepsis screening system.
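    The idea of an executable, organ-centric state machine can be pictured with a small sketch. The example below is not the thesis's model: the state names, vital-sign inputs, and transition thresholds are invented purely for illustration of how a disease or organ state machine might be expressed in code.

```python
# Minimal illustrative sketch (not the thesis implementation): a hypothetical
# "organ state machine" whose cardiovascular state advances based on observed
# vitals. States, inputs, and thresholds are placeholders, not clinical criteria.

from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int   # beats per minute
    systolic_bp: int  # mmHg

class CardiovascularStateMachine:
    """Tracks a coarse cardiovascular state: NORMAL -> COMPENSATING -> SHOCK."""

    def __init__(self) -> None:
        self.state = "NORMAL"

    def step(self, vitals: Vitals) -> str:
        # Transition rules are invented for illustration only.
        if vitals.systolic_bp < 90:
            self.state = "SHOCK"
        elif vitals.heart_rate > 100:
            self.state = "COMPENSATING"
        else:
            self.state = "NORMAL"
        return self.state

if __name__ == "__main__":
    machine = CardiovascularStateMachine()
    for v in [Vitals(85, 120), Vitals(115, 105), Vitals(120, 82)]:
        print(v, "->", machine.step(v))
```

    A real model of this kind would be derived from guideline text and formally verified; the value of the state-machine form is that each transition can be checked and executed mechanically.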

    Low complexity system architecture design for medical Cyber-Physical-Human Systems (CPHS)

    Cyber-Physical-Human Systems (CPHS) are safety-critical systems in which the interaction between cyber components and physical components can be influenced by the human operator. Guaranteeing correctness and safety in these highly interactive systems is challenging. In particular, the interactions among the three components must be coordinated collectively in order to conduct safe and effective operations. These interactions, however, increase complexity by orders of magnitude and prevent formal verification techniques, such as model checking, from thoroughly verifying the safety and correctness properties of such systems. They can also significantly increase human operators' cognitive load and lead to human errors. In this thesis, we focus on medical CPHS and examine the complexity from a safety angle. Medical CPHS are both safety-critical and highly complex, because medical staff need to coordinate with distributed medical devices and supervisory controllers to monitor and control multiple aspects of the patient's physiology. Our goal is to reduce and control this complexity by introducing novel architectural patterns, coordination protocols, and a user-centric guidance system. This thesis makes three major contributions to improving the safety of medical CPHS. (1) Reducing verification complexity: formal verification is a promising technique for guaranteeing correctness and safety, but high complexity significantly increases the verification cost, a problem known as state-space explosion. We propose two architectural patterns, an Interruptible Remote Procedure Call (RPC) pattern and a Consistent View Generation and Coordination (CVGC) protocol, to properly handle asynchronous communication and exceptions with low complexity. (2) Reducing cyber-medical treatment complexity: cyber-medical treatment complexity is defined as the number of steps and the time required to perform a treatment and monitor the corresponding physiological responses. We propose treatment and workflow adaptation and validation protocols that semi-autonomously validate preconditions and adapt workflows to patient conditions, which reduces the complexity of performing treatments and following best practice workflows. (3) Reducing human cognitive load complexity: cognitive load (also called mental workload) complexity measures the memory and mental computation demanded of humans performing tasks. We first model individual medical staff members' responsibilities and team interactions in cardiac arrest resuscitation and decompose their overall task into a set of distinct cognitive tasks that must be specifically supported to achieve successful human-centered system design. We then prototype a medical Best Practice Guidance (BPG) system to reduce medical staff's cognitive load and foster adherence to best practice workflows. Our BPG system transforms the implementation of best practice medical workflow
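    The interruptible-call idea can be pictured with a small sketch in which a caller waits on a remote operation but remains free to abandon the wait on a timeout or an explicit cancel signal. This is a generic illustration of such a pattern under assumed semantics, not the thesis's RPC or CVGC protocol; the function names and polling interval are hypothetical.

```python
# Minimal sketch of an "interruptible remote call": the caller waits on a worker
# thread but can abandon the wait when a timeout expires or a cancel event fires.

import threading
import time

def interruptible_call(remote_fn, timeout_s, cancel_event):
    """Run remote_fn in a worker thread; return its result, or None if the
    call is cancelled or times out before completing."""
    result = {}

    def worker():
        result["value"] = remote_fn()

    t = threading.Thread(target=worker, daemon=True)
    t.start()

    deadline = time.monotonic() + timeout_s
    while t.is_alive() and time.monotonic() < deadline:
        if cancel_event.is_set():
            return None            # caller interrupted the call
        t.join(timeout=0.05)       # poll briefly, stay responsive

    return result.get("value")     # None if the call never finished

if __name__ == "__main__":
    cancel = threading.Event()
    slow_device_read = lambda: (time.sleep(0.2), 72)[1]   # stand-in for a device call
    print(interruptible_call(slow_device_read, timeout_s=1.0, cancel_event=cancel))
```

    The point of such a pattern is that a supervisory controller never blocks indefinitely on a device: every outstanding call has a bounded, analysable wait, which keeps the verified state space small.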

    Design of low complexity fault tolerance for life critical situation awareness systems

    In cyber-human-medical environments, coordinating supervisory medical systems and medical staff to perform treatment in accordance with best practice is essential for patient safety. However, the dynamics of patient conditions and the non-deterministic nature of potential treatment side effects pose significant challenges. This work covers my contributions to one such system: the development of its low-complexity workflow, which enhances situation awareness, and the design and implementation of its fault tolerance. In the first part of this document, we cover a validation protocol that enforces the correct execution sequence of treatments, validates preconditions, monitors side effects, and checks expected responses based on pathophysiological models. The proposed protocol organizes medical information concisely and comprehensively to help medical staff validate treatments, and it dynamically adapts to patient conditions and the side effects of treatments. A cardiac arrest scenario is used as a case study to verify the safety properties of the proposed protocol. In the second part of this document, we describe the integration of some well-understood fault tolerance strategies in the context of safety-critical systems. We list the requirements of our system and explore the traditional Active/Standby pattern in light of certain guiding design principles, adapting it to fit our specific requirements. As in any software engineering project, we design test suites to ensure QoS. We go a step further and make the design verifiable, using model checking tools such as UPPAAL to demonstrate the correctness of our system architecture under conditions of normal operation and failure.
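    A heartbeat-driven Active/Standby arrangement of the general kind mentioned above can be sketched in a few lines: the standby promotes itself once the active node has missed a configured number of heartbeats. The roles, intervals, and detection rule below are assumptions for illustration, not the actual design of the system described in the thesis.

```python
# Minimal sketch of Active/Standby failover driven by heartbeats.

import time

class StandbyMonitor:
    def __init__(self, max_missed=3, interval_s=1.0):
        self.max_missed = max_missed
        self.interval_s = interval_s
        self.last_heartbeat = time.monotonic()
        self.role = "STANDBY"

    def on_heartbeat(self):
        # Called whenever a heartbeat arrives from the active node.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Promote to ACTIVE if the active node has been silent too long.
        silent_for = time.monotonic() - self.last_heartbeat
        if self.role == "STANDBY" and silent_for > self.max_missed * self.interval_s:
            self.role = "ACTIVE"
        return self.role

if __name__ == "__main__":
    monitor = StandbyMonitor(max_missed=2, interval_s=0.1)
    monitor.on_heartbeat()
    time.sleep(0.3)          # simulate the active node going silent
    print(monitor.check())   # expected: "ACTIVE"
```

    The same promotion rule, expressed as a timed automaton, is the kind of property a tool like UPPAAL can check, e.g. that exactly one node is active after a bounded failover delay.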

    From Resilience-Building to Resilience-Scaling Technologies: Directions -- ReSIST NoE Deliverable D13

    This document is the second product of workpackage WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence. The problem that ReSIST addresses is achieving sufficient resilience in the immense systems of ever-evolving networks of computers and mobile devices, tightly integrated with human organisations and other technology, that are increasingly becoming a critical part of the information infrastructure of our society. This second deliverable, D13, provides a detailed list of research gaps identified by experts from the four working groups related to assessability, evolvability, usability and diversity.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. Seamless interaction between High Performance Computing and Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    On Counterinsurgency: Firepower, Biopower, and the Collateralization of Military Violence

    This dissertation investigates the most recent cycle of North Atlantic expeditionary warfare by addressing the resuscitation of counterinsurgency warfare, with a specific focus on the war in Afghanistan from 2006 to 2014. The project interrogates the lasting aesthetic, epistemological, philosophical, and territorial implications of counterinsurgency, which should be understood as part of wider transformations in military affairs in relation to discourses of adaptation, complexity, and systemic design, and to the repertoire of global contingency and stability operations. Afghanistan served as a counterinsurgency laboratory, and the experiments will shape the conduct of future wars, domestic security practices, and the increasingly indistinct boundary between them. Using work from Michel Foucault and liberal war studies, the project undertakes a genealogy of contemporary population-centred counterinsurgency and interrogates how its conduct is constituted by and as a mixture of firepower and biopower. Insofar as this mix employs force with different speeds, doses, and intensities, the dissertation argues that counterinsurgency unrestricts and collateralizes violence, which is emblematic of liberal war that kills selectively to secure and make life live in ways amenable to local and global imperatives of liberal rule. Contemporary military counterinsurgents, in conducting operations on the edges of liberal rule's jurisdiction and in recursively influencing the domestic spaces of North Atlantic states, fashion biopower as custodial power, a power to conduct the conduct of life, shaping different interventions into the everyday lives of target populations. The 'lesser evil' logic is used to frame counterinsurgency as a type of warfare that is comparatively low-intensity and less harmful, yet this justification actually lowers the threshold for violence by making increasingly indiscriminate the ways in which its employment damages and envelops populations and communities, thereby allowing counterinsurgents to speculate on the practice of expeditionary warfare and on efforts to sustain occupations. Thus, the dissertation argues that counterinsurgency is a communicative process, better understood as mobile military media with an atmospheric-environmental register blending acute and ambient measures that are always-already kinetic. The counterinsurgent gaze enframes a world picture in which everything can be a force amplifier and everywhere is a possible theatre of operations.
