
    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis considers an application of a temporal theory to describe and model the patient journey in the hospital accident and emergency (A&E) department. The aim is to introduce a generic yet dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare processes. Current process modelling techniques used in healthcare, such as flowcharts, the unified modelling language activity diagram (UML AD) and the business process modelling notation (BPMN), are intuitive but imprecise: they cannot fully capture the types of activities involved or the full extent of the temporal constraints, to the point where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain. Additionally, current modelling standards offer no formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation and review technique (PERT), which have their own limitations, e.g. the finish-start barrier. It is imperative to be able to specify temporal constraints between the starts and/or ends of processes, e.g. that the beginning of a process A precedes the start (or end) of a process B; however, these approaches fail to provide a mechanism for handling such temporal situations. A formal representation would assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and in modelling it consistently, which is not possible with the existing modelling techniques. This thesis addresses these issues by proposing a framework that provides a knowledge base for accurately modelling patient flows, based on point interval temporal logic (PITL), which treats points and intervals as primitives. These objects constitute the knowledge base for the formal description of a system: with the aid of the inference mechanism of the temporal theory presented here, exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as the knowledge base. The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and treated as a model, while the corresponding instance is treated as its application. This approach assists in identifying the core components of the system and their precise operation, representing a real-life domain suited to the process modelling issues specified in this thesis. To this end, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components, which also supports the generalisation of the critical terms of the process modelling standards based on their ontology. The proposed set of generalised terms serves as an enumeration of the theory and subsumes the core modelling elements of the standards. The resulting catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, a resolution theorem proof is used to show the structural features of the theory (model) and to establish that it is sound and complete. Once the theory is established as sound and complete, the next step is its instantiation, achieved by mapping the core components of the theory to their corresponding instances.
    Additionally, a formal graphical tool termed the point graph (PG) is used to visualise the cases of the proposed axiomatic system. The PG facilitates the modelling and scheduling of patient flows, and enables the analysis of existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL. Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*), based on the semantics presented by the axiomatic system. A real-life case, the trauma patient pathway of the King's College Hospital A&E department, is used to validate the framework. It is divided into three patient flows depicting the journey of a patient with significant trauma: arriving at A&E, undergoing a procedure and subsequently being discharged. The hospital staff had relied upon UML AD and BPMN to model these patient flows, and an evaluation of their representation is presented to show the shortfalls of those standards for modelling patient flows. The last step is to model the patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
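
    To make the flavour of such constraints concrete, the sketch below encodes an interval as an ordered pair of points and expresses a few of the start/end relations mentioned above. It is a minimal illustration in Haskell under assumed names (Point, Interval, finishStart, etc.), not the thesis's axiomatisation of PITL.

```haskell
-- Minimal sketch: points as rationals, an interval as an ordered pair
-- of points (ibegin < iend). Names are illustrative, not the thesis's.
type Point = Rational

data Interval = Interval { ibegin :: Point, iend :: Point }

-- "The beginning of a process A precedes the start of a process B."
beginsBeforeBegin :: Interval -> Interval -> Bool
beginsBeforeBegin a b = ibegin a < ibegin b

-- "... or precedes the end of a process B."
beginsBeforeEnd :: Interval -> Interval -> Bool
beginsBeforeEnd a b = ibegin a < iend b

-- The finish-start barrier of CPM/PERT is only one such relation:
-- B may not start until A has finished.
finishStart :: Interval -> Interval -> Bool
finishStart a b = iend a <= ibegin b
```

    CPM and PERT admit only the last of these relations; treating points and intervals as primitives makes the other start/end combinations expressible and open to inference.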

    Instruction scheduling in micronet-based asynchronous ILP processors


    Integrated timing verification for distributed embedded real-time systems

    More and more parts of our lives are controlled by software systems that are usually not recognised as such, because they are embedded in non-computer systems such as washing machines or cars. A modern car, for example, is controlled by up to 80 electronic control units (ECUs). Most of these ECUs do not just have to fulfil functional correctness requirements but also have to execute a control action within a given time bound: an airbag, for example, does not work correctly if it is triggered a single second too late. These so-called real-time properties have to be verified for safety-critical systems as well as for non-safety-critical real-time systems. The growing distribution of functions over several ECUs increases the number of complex dependencies in the entire automotive system. Therefore, an integrated approach to timing verification is necessary on all development levels (system, ECU, software, etc.) and in all development phases. Today's most frequently used timing analysis method, the timing measurement of a system under test, is insufficient in many respects. First, the actual worst-case response times are very unlikely to be found this way. Furthermore, measurement only detects the consequences of time consumption, not its potentially very complex causes. This complexity of timing behaviour is one reason why timing problems are often detected late, and therefore expensively, in the development process. In contrast to measurement, static timing verification has existed for many years and is available through commercial tools. This thesis studies the current obstacles to industrial application of static timing analysis (effort, imprecision, over-estimation, etc.) and addresses them through process integration and the development of new analysis methods. To show the real benefit of the proposed methods, the approach is demonstrated on an industrial example at every development stage.
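
    As an illustration of what such a static analysis computes, the sketch below implements the classic worst-case response-time recurrence for fixed-priority periodic tasks, R = C + sum over higher-priority tasks j of ceil(R / T_j) * C_j, by fixed-point iteration. This is the textbook analysis (Joseph and Pandya), offered only as a hedged example of a statically derived timing bound; it is not the method developed in the thesis, and the names Task, wcet and period are assumptions.

```haskell
-- Textbook response-time analysis for fixed-priority periodic tasks
-- (a sketch; not the thesis's own method).
data Task = Task
  { wcet   :: Integer  -- worst-case execution time C
  , period :: Integer  -- period T; the deadline is assumed equal to T
  }

-- Worst-case response time of task t given the higher-priority tasks hp,
-- or Nothing if the iteration exceeds the deadline (unschedulable).
responseTime :: [Task] -> Task -> Maybe Integer
responseTime hp t = go (wcet t)
  where
    go r
      | r > period t = Nothing   -- deadline missed
      | r' == r      = Just r    -- fixed point reached
      | otherwise    = go r'
      where
        -- integer ceiling division: ceil(r / T_j) = (r + T_j - 1) `div` T_j
        r' = wcet t
           + sum [ ((r + period j - 1) `div` period j) * wcet j | j <- hp ]
```

    For instance, responseTime [Task 1 4, Task 2 8] (Task 3 20) converges to Just 7, well inside the 20-unit deadline.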

    Safe and scalable parallel programming with session types

    Parallel programming is a technique that coordinates and utilises multiple hardware resources simultaneously to improve overall computation performance. However, reasoning about the communication interactions between the resources is difficult, and scaling an application often increases the number and complexity of those interactions, hence we need a systematic way to ensure the correctness of the communication aspects of parallel programs. In this thesis, we take an interaction-centric view of parallel programming and investigate applying and adapting the theory of session types, a formal typing discipline for structured interaction-based communication, to guarantee the absence of communication mismatches and deadlocks in concurrent systems. We focus on scalable, distributed parallel systems that use message passing for communication, and we explore programming language primitives, tools and frameworks to simplify parallel programming. First, we present the design and implementation of Session C, a programming toolchain for message-passing parallel programming. Session C can ensure deadlock freedom, communication safety and global progress through static type checking, and supports optimisations by refinement through session subtyping. Then we introduce Pabble, a protocol description language for designing parametric interaction protocols. The language can capture scalable interaction patterns found in parallel applications, and guarantees communication safety and deadlock freedom despite the undecidability of the underlying parameterised session type theory. Next, we demonstrate an application of Pabble in a workflow that combines Pabble protocols with computation kernel code describing the sequential computation behaviours to generate a Message Passing Interface (MPI) parallel application. The framework guarantees, by construction, that the generated code is free from communication errors and deadlocks. Finally, we formalise an extension of binary session types and new language primitives for safe and efficient implementations of multiparty parallel applications in a binary server-client programming environment. Our exploration of session-based parallel programming shows that it is a feasible and practical approach to guaranteeing the communication aspects of complex, interaction-based, scalable parallel programming.
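
    The sketch below conveys the core idea of session typing in miniature: a channel endpoint indexed by a phantom "protocol state", so that sending or receiving out of order is rejected at compile time. It is an illustrative Haskell reduction of binary session types, not Session C's or Pabble's actual API, and unlike a real session type system it does not enforce linear use of the endpoint.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

-- Protocol states as empty phantom types: the client must first send
-- an Int, then receive a Bool, then stop.
data SendInt
data RecvBool
data Done

-- An endpoint indexed by the state it expects next (one Chan per direction).
data Sess s = Sess (Chan Int) (Chan Bool)

-- Each operation consumes an endpoint at one state and returns it at the
-- next, so performing the protocol actions out of order fails to type-check.
sendInt :: Sess SendInt -> Int -> IO (Sess RecvBool)
sendInt (Sess toSrv toCli) n = writeChan toSrv n >> return (Sess toSrv toCli)

recvBool :: Sess RecvBool -> IO (Bool, Sess Done)
recvBool (Sess toSrv toCli) = do
  b <- readChan toCli
  return (b, Sess toSrv toCli)

main :: IO ()
main = do
  toSrv <- newChan
  toCli <- newChan
  -- The server runs the dual protocol: receive an Int, reply with a Bool.
  _ <- forkIO (readChan toSrv >>= writeChan toCli . even)
  let s0 = Sess toSrv toCli :: Sess SendInt
  s1 <- sendInt s0 42
  (ok, _s2) <- recvBool s1
  print ok  -- True: 42 is even
```

    Attempting recvBool s0 here would be a type error, the compile-time analogue of the communication-mismatch errors that the thesis rules out.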

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be an annual meeting for exposing and discussing gathered expertise as well as state-of-the-art research around SoC-related topics, through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multi-billion-transistor era, taking into account the emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability.

    Engineering the performance of parallel applications


    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Intelligent Systems

    This book is dedicated to intelligent systems with broad-spectrum applications, such as personal and social biosafety, or the use of intelligent sensory micro- and nanosystems such as the "e-nose", "e-tongue" and "e-eye". In addition, effective information acquisition, knowledge management and improved knowledge transfer in any medium, as well as the modeling of information content using meta- and hyper-heuristics and semantic reasoning, all benefit from the systems covered in this book. Intelligent systems can also be applied in education, in generating intelligent distributed eLearning architectures, and in a large number of technical fields such as industrial design, manufacturing and utilization, e.g., in precision agriculture, cartography, electric power distribution systems, intelligent building management systems and drilling operations. Furthermore, decision making using fuzzy logic models, computational recognition of comprehension uncertainty, the joint synthesis of goals and means of intelligent behavior in biosystems, and diagnostic and human support in the healthcare environment have also been made easier.

    Granularity in Large-Scale Parallel Functional Programming

    This thesis demonstrates how to reduce the runtime of large non-strict functional programs using parallel evaluation. The parallelisation of several programs shows the importance of granularity, i.e. the computation costs of program expressions. The aspect of granularity is studied both on a practical level, by presenting and measuring runtime granularity improvement mechanisms, and at a more formal level, by devising a static granularity analysis. By parallelising several large functional programs, this thesis demonstrates for the first time the advantages of combining lazy and parallel evaluation on a large scale: laziness aids modularity, while parallelism reduces runtime. One of the parallel programs is the Lolita system which, with more than 47,000 lines of code, is the largest existing parallel non-strict functional program. A new mechanism for parallel programming, evaluation strategies, to which this thesis contributes, is shown to be useful in this parallelisation. Evaluation strategies simplify parallel programming by separating algorithmic code from code specifying dynamic behaviour. For large programs the abstraction provided by functions is maintained by using a data-oriented style of parallelism, which defines parallelism over intermediate data structures rather than inside the functions. A highly parameterised simulator, GRANSIM, has been constructed collaboratively and is discussed in detail in this thesis. GRANSIM is a tool for architecture-independent parallelisation and a testbed for implementing runtime-system features of the parallel graph reduction model. By providing an idealised as well as an accurate model of the underlying parallel machine, GRANSIM has proven to be an essential part of an integrated parallel software engineering environment. Several parallel runtime-system features, such as granularity improvement mechanisms, have been tested via GRANSIM. It is publicly available and in active use at several universities worldwide. In order to provide granularity information, this thesis presents an inference-based static granularity analysis. This analysis combines two existing analyses, one for cost and one for size information. It determines an upper bound for the computation costs of evaluating an expression in a simple strict higher-order language. By exposing recurrences during cost reconstruction and using a library of recurrences and their closed forms, it is possible to infer the costs for some recursive functions. The possible performance improvements are assessed by measuring the parallel performance of a hand-analysed and annotated program.
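
    For readers unfamiliar with evaluation strategies, the fragment below shows the separation the thesis describes, using the modern Control.Parallel.Strategies interface descended from the strategies work: the algorithm is an ordinary map-and-sum, the strategy names the parallelism, and the chunk size is exactly the kind of granularity knob the thesis studies. The chunk size of 1000 is an arbitrary illustrative choice.

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- The algorithm: an ordinary sequential map-and-sum.
-- The dynamic behaviour: evaluate the list in parallel, 1000 elements per
-- spark. Too small a chunk and spark-management overhead dominates; too
-- large and the machine is under-utilised: the granularity trade-off.
sumSquares :: [Int] -> Int
sumSquares xs = sum (map (^ 2) xs `using` parListChunk 1000 rdeepseq)

main :: IO ()
main = print (sumSquares [1 .. 1000000])
```

    Compiled with GHC's -threaded flag and run with +RTS -N, the same code uses as many cores as are available; removing the strategy leaves the algorithm untouched and sequential.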
