    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis applies a temporal theory to describe and model the patient journey in a hospital accident and emergency (A&E) department. The aim is to introduce a generic yet dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare processes. Current process modelling techniques used in healthcare, such as flowcharts, the unified modelling language activity diagram (UML AD), and business process modelling notation (BPMN), are intuitive but imprecise: they cannot fully capture the complexity of the activities involved or the full extent of the temporal constraints, to a degree where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain. Moreover, current modelling standards offer no formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation and review technique (PERT), which have limitations of their own, e.g. the finish-start barrier. It is imperative to be able to specify temporal constraints between the starts and/or ends of processes, e.g. that the beginning of a process A precedes the start (or end) of a process B; the approaches above fail to provide a mechanism for handling such temporal situations. A formal representation, if provided, can assist in effective knowledge representation and quality enhancement concerning a process; it would also help uncover the complexities of a system and support modelling it consistently, which is not possible with the existing techniques. This thesis addresses these issues by proposing a framework that provides a knowledge base for accurately modelling patient flows, based on point interval temporal logic (PITL), which treats points and intervals as primitives. These objects constitute the knowledge base for the formal description of a system: with the aid of the inference mechanism of the temporal theory presented here, the exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as that knowledge base. The methodological framework adopts a model-theoretic approach in which a theory is developed and regarded as a model, while a corresponding instance is regarded as its application. This approach assists in identifying the core components of a system and their precise operation for a real-life domain suited to the process modelling issues specified in this thesis. Accordingly, I have evaluated the modelling standards for their most-used terminology and constructs to identify their key components; this also supports a generalisation of the critical terms of the process modelling standards based on their ontology. The proposed set of generalised terms serves as an enumeration of the theory and subsumes the core modelling elements of the process modelling standards. The resulting catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, resolution theorem proving is used to establish the structural features of the theory (model), namely that it is sound and complete. Once the theory is shown to be sound and complete, the next step is to instantiate it by mapping its core components to their corresponding instances.
    Additionally, a formal graphical tool termed the point graph (PG) is used to visualise the cases of the proposed axiomatic system. The PG facilitates modelling and scheduling of patient flows and enables analysis of existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL. A transformation is then developed to map the core modelling components of the standards into the extended PG (PG*), based on the semantics given by the axiomatic system. A real-life case, the trauma patient pathway of the King's College Hospital accident and emergency (A&E) department, is used to validate the framework. It is divided into three patient flows depicting the journey of a patient with significant trauma who arrives at A&E, undergoes a procedure and is subsequently discharged. The hospital's staff relied upon UML AD and BPMN to model these patient flows, and an evaluation of their representation is presented to show the shortfalls of the modelling standards for modelling patient flows. The final step is to model the patient flows using the developed approach, which supports enhanced reasoning and scheduling.
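
    To make the point-based reasoning concrete, the sketch below encodes processes as intervals delimited by start and end points and checks a set of strict precedence constraints for consistency via cycle detection. It is a minimal illustration in Python under assumed names (Process, before, consistent); it is not the thesis's actual PITL axiomatisation or point-graph algorithm.

```python
# Hypothetical sketch: points as primitives, processes as intervals,
# and consistency of strict precedence constraints via cycle detection.
from collections import defaultdict

class Process:
    """A process modelled as an interval delimited by two point primitives."""
    def __init__(self, name):
        self.start = f"{name}.start"
        self.end = f"{name}.end"

edges = defaultdict(set)  # precedence graph: point -> points that must follow it

def before(p, q):
    """Assert the strict precedence constraint: point p occurs before point q."""
    edges[p].add(q)

def consistent():
    """A set of strict precedence constraints is satisfiable
    iff the precedence graph is acyclic (DFS cycle detection)."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def dfs(u):
        colour[u] = GREY
        for v in edges[u]:
            if colour[v] == GREY:                  # back edge: a cycle
                return False
            if colour[v] == WHITE and not dfs(v):
                return False
        colour[u] = BLACK
        return True

    return all(dfs(u) for u in list(edges) if colour[u] == WHITE)

triage, treatment = Process("triage"), Process("treatment")
before(triage.start, triage.end)          # every interval has start < end
before(treatment.start, treatment.end)
before(triage.end, treatment.start)       # triage must finish before treatment
print(consistent())                       # True: the flow can be scheduled
before(treatment.end, triage.start)       # add a contradictory constraint
print(consistent())                       # False: the model is inconsistent
```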

    Software dependability modeling using an industry-standard architecture description language

    Performing dependability evaluation alongside other analyses at the architectural level makes it possible both to make architectural tradeoffs and to predict the effects of architectural decisions on the dependability of an application. This paper gives guidelines for building architectural dependability models for software systems using the AADL (Architecture Analysis and Design Language). It presents reusable modeling patterns for fault-tolerant applications and shows how these patterns can be used in the context of a subsystem of a real-life application.
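
    As a flavour of the kind of quantitative result such architectural dependability models feed into, the sketch below computes the steady-state availability of a duplex (1-out-of-2) redundancy pattern from assumed failure and repair rates. The formulas are the standard ones for independent repairable components; the pattern and rates are illustrative assumptions, not the paper's AADL error-model patterns.

```python
# Back-of-envelope dependability calculation for a duplex pattern.
def steady_state_availability(failure_rate, repair_rate):
    """Availability of one repairable component: MTTF / (MTTF + MTTR)."""
    return repair_rate / (failure_rate + repair_rate)

def duplex_availability(failure_rate, repair_rate):
    """1-out-of-2 redundancy with independent failures and repairs:
    the pair is unavailable only when both replicas are down."""
    a = steady_state_availability(failure_rate, repair_rate)
    return 1.0 - (1.0 - a) ** 2

lam, mu = 1e-4, 1e-1          # assumed per-hour failure and repair rates
print(f"single: {steady_state_availability(lam, mu):.6f}")
print(f"duplex: {duplex_availability(lam, mu):.8f}")
```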

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of India's more than one billion citizens.
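
    The abstract does not describe DIGHT's architecture, but a common building block for a scalable, highly available record store is consistent hashing with replication; the sketch below is a generic illustration of that technique, with node names, virtual-node and replica counts chosen arbitrarily.

```python
# Generic sketch of sharding records across nodes via consistent hashing.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=64, replicas=3):
        self.replicas = replicas
        # Each physical node gets several virtual positions on the ring
        # so that load spreads evenly and rebalancing stays incremental.
        self.ring = sorted(
            (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def nodes_for(self, record_id):
        """Return the distinct nodes holding the replicas of a record."""
        idx = bisect.bisect(self.keys, self._h(record_id)) % len(self.ring)
        owners, seen = [], set()
        while len(owners) < self.replicas:
            node = self.ring[idx % len(self.ring)][1]
            if node not in seen:
                seen.add(node)
                owners.append(node)
            idx += 1
        return owners

ring = HashRing(["store-1", "store-2", "store-3", "store-4", "store-5"])
print(ring.nodes_for("ehr:patient:1234567890"))   # three replica holders
```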

    Co-simulation of Event-B and Ptolemy II Models via FMI

    In the framework of model-based design, formal modelling, verification and simulation of safety-critical systems are supported by several methods and tools, but interfacing these tools often becomes challenging for heterogeneous systems. The FMI standard enables the integration of different simulation tools through artefacts called Functional Mock-up Units (FMUs) [1]. The FMI standard is mainly built around scalable simulation, since it must deal with heterogeneous cyber-physical systems; the combination of discrete behaviour with a continuous-time environment is a common case in hybrid simulation. Another aspect of FMI is that it enhances the capabilities of the tools themselves: a collaborative simulation between Rodin [2] and Ptolemy [3] benefits both platforms. While Event-B is enhanced by Ptolemy's models of computation, Ptolemy gains the expressivity and property validation (theorem/invariant proofs) provided by Event-B. The main rationale for co-simulating Event-B and Ptolemy lies in the deliberate dissimilarity and complementarity of the two modelling viewpoints: Event-B provides formal modelling by specifying conditions, actions and properties that govern discrete-event behaviour, whereas Ptolemy gives a structural viewpoint in terms of actors, components or functions related to the behaviour of concern. The association of Ptolemy and Event-B thus brings together structural and formal aspects. This paper focuses on the collaborative simulation of models supported by both Ptolemy II and Event-B; the ongoing work consists of the design of a diagrammatic co-simulation surface and its application to a controller case study.
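
    As a rough illustration of the co-simulation pattern FMI supports, the sketch below hand-rolls a fixed-step master that alternates between a discrete controller (standing in for an Event-B FMU) and a continuous-time plant (standing in for a Ptolemy II FMU). The classes, ports and thermal model are invented for illustration; only the step-and-exchange structure follows the FMI co-simulation idea, and a real master would drive exported FMUs through the FMI API (fmi2DoStep and friends).

```python
# Fixed-step co-simulation master: exchange values, then step each unit.
class DiscreteController:
    """Stands in for an Event-B FMU: guarded discrete transitions."""
    def __init__(self):
        self.heater_on = False
    def do_step(self, temperature):
        # Guarded events: switch the heater with hysteresis.
        if temperature < 19.0:
            self.heater_on = True
        elif temperature > 21.0:
            self.heater_on = False
        return self.heater_on

class PlantModel:
    """Stands in for a Ptolemy II FMU: continuous-time dynamics."""
    def __init__(self):
        self.temperature = 15.0
    def do_step(self, heater_on, dt):
        # Simple first-order thermal model, forward-Euler integrated.
        heat_in = 5.0 if heater_on else 0.0
        self.temperature += dt * (heat_in - 0.5 * (self.temperature - 10.0))
        return self.temperature

controller, plant = DiscreteController(), PlantModel()
t, dt = 0.0, 0.1
while t < 5.0:
    on = controller.do_step(plant.temperature)   # master exchanges outputs
    plant.do_step(on, dt)                        # then advances each unit
    t += dt
print(f"t={t:.1f}s temperature={plant.temperature:.2f}C "
      f"heater={'on' if controller.heater_on else 'off'}")
```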

    Interim Evaluation of the Direct Actions under the Euratom Research and Training Programme (2014 - 2018)

    This report presents an interim evaluation of the direct actions of the Joint Research Centre (JRC) of the European Commission under the Euratom research and training programme (2014 - 2018), conducted halfway through the programme by a panel of high-level independent experts. The panel had extensive knowledge and experience of Euratom research and of the European Commission's wider responsibilities for nuclear safety and security in a European and international context. During the reporting period the JRC showed its ability to lead through coordination, bringing together its own research efforts with those in the Member States. The JRC concentrated its nuclear work in a single directorate and, more generally, followed up successfully on recommendations from previous evaluations. As the European knowledge manager for nuclear safety and security, the European voice for nuclear, and the organisation responsible for the largest single nuclear research effort of the European Atomic Energy Community, the JRC holds a frontline position in this area. As the European Commission's science and knowledge service, the JRC is also excellently placed to communicate reliable information on nuclear matters, not only to nuclear organisations but also to other stakeholders, notably politicians and the public. The positive conclusions and recommendations at the end of this report should help the JRC and the Commission prepare sound proposals for a Council regulation for the Euratom research and training programme 2019 - 2020 and for the next Euratom programme (2021 - 2025).

    Positively Deviant Organizational Performance and the Role of Leadership Values

    Cameron cites the infusion of collaborative values and the restructuring of relationships as primary reasons for the successful clean-up and closure of Rocky Flats, one of the United States' most hazardous and controversial toxic dumps. Success was contingent upon mutual trust and respect among traditionally adversarial groups, achieved through a proactive, sharing orientation and empathetic attitudes. The true leaders in this venture shifted from a profit-first stance to changing the organizational culture, ensuring that individuals (especially leaders and influencers) pursued an abundance-based vision.

    Challenges and Work Directions for Europe

    Embedded systems are components integrating software and hardware that are jointly and specifically designed to provide a given set of functionalities. These components may be used in a huge variety of applications, including transport (avionics, space, automotive, trains), electrical and electronic appliances (cameras, toys, televisions, washers, dryers, audio systems, and cellular phones), process control (energy production and distribution, factory automation), telecommunications (satellites, mobile phones and telecom networks), security (e-commerce, smart cards), and more. We expect that within a short timeframe embedded systems will be part of virtually all equipment designed or manufactured in Europe, the USA, and Asia.

    Rigorous System Design

    The monograph advocates rigorous system design as a coherent and accountable model-based process leading from requirements to correct implementations. It presents the current state of the art in system design, discusses its limitations, and identifies possible avenues for overcoming them. A rigorous system design flow is defined as a formal, accountable and iterative process composed of steps and based on four principles: (1) separation of concerns; (2) component-based construction; (3) semantic coherency; and (4) correctness-by-construction. The combined application of these principles allows the definition of a methodology that clearly identifies where human intervention and ingenuity are needed to resolve design choices, as well as the activities that can be supported by tools to automate tedious and error-prone tasks. An implementable system model is progressively derived by automated source-to-source transformations within a single host component-based language rooted in well-defined semantics; using a single modeling language throughout the design flow enforces semantic coherency. Correct-by-construction techniques overcome well-known limitations of a posteriori verification and ensure accountability: at each design step it is possible to explain which requirements are satisfied and which may not be. The presented view of rigorous system design has been implemented in the BIP (Behavior, Interaction, Priority) component framework and substantiated by numerous experimental results showing both its relevance and its feasibility. The monograph concludes with a discussion advocating a system-centric vision for computing, identifying possible links with other disciplines, and emphasizing the centrality of system design.
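
    To give a feel for BIP's three layers, the toy sketch below separates component behaviour (labelled transition systems over ports), interactions (multiparty rendezvous that fire only when every participant enables its port), and priorities (a static order resolving conflicts between enabled interactions). The mutual-exclusion example and all names are invented for illustration and bear no relation to the BIP framework's actual API.

```python
# Toy layering of Behaviour, Interaction and Priority, BIP-style.
class Component:
    """Behaviour layer: a labelled transition system over ports."""
    def __init__(self, name, transitions, state):
        self.name, self.transitions, self.state = name, transitions, state
    def enabled(self, port):
        return (self.state, port) in self.transitions
    def fire(self, port):
        self.state = self.transitions[(self.state, port)]

# One server guarding a resource, two competing clients.
server  = Component("s",  {("free", "grant"): "busy", ("busy", "free"): "free"}, "free")
client1 = Component("c1", {("idle", "grant"): "use",  ("use", "free"): "idle"},  "idle")
client2 = Component("c2", {("idle", "grant"): "use",  ("use", "free"): "idle"},  "idle")

# Interaction layer: multiparty rendezvous; an interaction is enabled
# only when every participating component enables its port.
interactions = {
    "grant1": [(server, "grant"), (client1, "grant")],
    "grant2": [(server, "grant"), (client2, "grant")],
    "free1":  [(server, "free"),  (client1, "free")],
    "free2":  [(server, "free"),  (client2, "free")],
}

# Priority layer: releases dominate grants; client1's grant beats client2's.
priority = ["free1", "free2", "grant1", "grant2"]

def step():
    """Fire the highest-priority enabled interaction, if any."""
    for name in priority:
        parts = interactions[name]
        if all(c.enabled(p) for c, p in parts):
            for c, p in parts:
                c.fire(p)
            return name
    return None

for _ in range(4):
    print(step(), server.state, client1.state, client2.state)
# Prints grant1/free1 alternately: the static priority always serves
# client1 and starves client2 -- exactly the kind of design decision a
# rigorous flow would have to expose and justify at the priority layer.
```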