900 research outputs found

    A compositional method for reliability analysis of workflows affected by multiple failure modes

    We focus on reliability analysis for systems designed as workflow-based compositions of components. Components are characterized by their failure profiles, which take into account possible multiple failure modes. A compositional calculus is provided to evaluate the failure profile of a composite system, given the failure profiles of its components. The calculus is described as a syntax-driven procedure that synthesizes a workflow's failure profile. The method is viewed as a design-time aid that can help software engineers reason about system reliability in the early stages of development. A simple case study is presented to illustrate the proposed approach.
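
    The calculus itself is not given in this abstract; the following minimal sketch (invented names, with independence and fail-stop assumptions of my own) only illustrates the general idea of composing failure profiles with multiple failure modes, here for two components executed in sequence.

```python
# Illustrative sketch only, not the paper's calculus. A "failure profile" is
# modelled as a dict {failure_mode: probability}; the success probability is
# 1 minus the sum of all mode probabilities. Components are assumed
# independent and fail-stop.

def success_prob(profile):
    return 1.0 - sum(profile.values())

def compose_sequence(p1, p2):
    """Failure profile of 'component 1 followed by component 2'."""
    composite = dict(p1)          # component 1 fails with its own modes
    s1 = success_prob(p1)
    for mode, prob in p2.items():
        # component 2 only runs if component 1 succeeded
        composite[mode] = composite.get(mode, 0.0) + s1 * prob
    return composite

if __name__ == "__main__":
    validate = {"timeout": 0.01, "wrong_result": 0.02}
    persist = {"timeout": 0.03}
    workflow = compose_sequence(validate, persist)
    print(workflow)                # {'timeout': ~0.0391, 'wrong_result': 0.02}
    print(success_prob(workflow))  # ~0.9409
```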

    SAFE-FLOW : a systematic approach for safety analysis of clinical workflows

    The increasing use of technology in delivering clinical services brings substantial benefits to the healthcare industry. At the same time, it introduces potential new complications to clinical workflows that generate new risks and hazards with the potential to affect patients’ safety. These workflows are safety critical and can have a damaging impact on all the involved parties if they fail. Due to the large number of processes included in the delivery of a clinical service, it can be difficult to determine the individuals or the processes that are responsible for adverse events. Using methodological approaches and automated tools to carry out an analysis of the workflow can help in determining the origins of potential adverse events and consequently help in avoiding preventable errors. There is a scarcity of studies addressing this problem, which was a partial motivation for this thesis. The main aim of the research is to demonstrate the potential value of computer-science-based dependability approaches to healthcare and, in particular, the appropriateness and benefits of these dependability approaches to overall clinical workflows. A particular focus is to show that model-based safety analysis techniques can be usefully applied to such areas and then to evaluate this application. This thesis develops the SAFE-FLOW approach for safety analysis of clinical workflows in order to establish the relevance of such application. SAFE-FLOW's detailed steps and guidelines for its application are explained. Then, SAFE-FLOW is applied to a case study and is systematically evaluated. The proposed evaluation design provides a generic evaluation strategy that can be used to evaluate the adoption of safety analysis methods in healthcare. It is concluded that the safety of clinical workflows can be significantly improved by performing safety analysis on workflow models. The evaluation results show that SAFE-FLOW is feasible and has the potential to provide various benefits: it provides a mechanism for the systematic identification of both adverse events and safeguards, which helps in identifying the causes of possible adverse events before they happen and can assist in the design of workflows to avoid such occurrences. The clear definition of the workflow, including its processes and tasks, provides a valuable opportunity for the formulation of safety improvement strategies.
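
    As a rough illustration of the kind of workflow-level check this thesis advocates (not SAFE-FLOW itself), the sketch below models a clinical workflow as tasks annotated with potential adverse events and safeguards, and flags any adverse event left without a safeguard. All task and event names are invented.

```python
# Illustrative sketch only (not SAFE-FLOW). A workflow is a list of tasks,
# each annotated with potential adverse events and with the safeguards the
# workflow design puts in place; the check reports unmitigated events.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    adverse_events: set = field(default_factory=set)
    safeguards: dict = field(default_factory=dict)  # adverse event -> safeguard

def unmitigated_events(workflow):
    """Return (task, adverse_event) pairs with no associated safeguard."""
    gaps = []
    for task in workflow:
        for event in task.adverse_events:
            if event not in task.safeguards:
                gaps.append((task.name, event))
    return gaps

workflow = [
    Task("prescribe", {"wrong dose"}, {"wrong dose": "pharmacist double-check"}),
    Task("dispense", {"wrong drug"}, {"wrong drug": "barcode scan"}),
    Task("administer", {"wrong patient", "missed dose"},
         {"wrong patient": "wristband check"}),
]

for task, event in unmitigated_events(workflow):
    print(f"Unmitigated adverse event in '{task}': {event}")
# -> Unmitigated adverse event in 'administer': missed dose
```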

    QoS verification and model tuning @ runtime


    Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets

    A critical system must accomplish its mission despite the presence of security problems. Such systems are usually deployed in heterogeneous environments, where they may be subject to intrusion attempts, theft of confidential information, or other kinds of attacks. Systems in general have to be redesigned after a security incident occurs, which can lead to severe consequences, such as the enormous cost of reimplementing or reprogramming the whole system, as well as potential economic losses. Security must therefore be conceived as an integral part of system development and as an intrinsic requirement on what the system must do (that is, a non-functional requirement of the system). Thus, when designing critical systems it is essential to study the attacks that may occur and to plan how to react to them, in order to keep satisfying the system's functional and non-functional requirements. Even when security problems are considered, it is also necessary to take into account the costs incurred to guarantee a given security level in critical systems. In fact, security costs can be a very relevant factor, since they can span several dimensions, such as budget, performance, and reliability. Many of these critical systems that incorporate fault-tolerance techniques (FT systems) to deal with security issues are complex systems that use resources which may be compromised (i.e., may fail) through the activation of faults and/or errors caused by possible attacks. These systems can be modelled as discrete-event systems in which resources are shared, also called resource-allocation systems. This thesis focuses on FT systems with shared resources modelled by Petri nets (PNs). Such systems are usually so large that the exact computation of their performance becomes a highly complex computational task, due to the state-space explosion problem. As a result, any task that requires an exhaustive exploration of the state space is not computable (within a reasonable time) for large systems. The main contributions of this thesis are threefold. First, it provides different models, using the Unified Modelling Language (UML) and Petri nets, that help bring security and fault-tolerance concerns to the foreground during the system design phase, thus enabling, for example, the analysis of the trade-off between security and performance. Second, it provides several algorithms to compute performance (also under failure conditions) by computing upper performance bounds, thereby avoiding the state-space explosion problem. Finally, it provides algorithms to compute how to compensate for the performance degradation that arises when an unexpected situation occurs in a fault-tolerant system.
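
    The thesis's own bounding algorithms are not reproduced here; as a hedged illustration of how performance bounds can avoid state-space exploration, the sketch below applies a classical bottleneck-circuit bound for timed marked graphs (a restricted Petri net class): throughput cannot exceed the tokens-to-total-delay ratio of any circuit. The circuits and delays are invented example data.

```python
# Illustrative sketch only, not the thesis's algorithms. For a timed marked
# graph, each circuit bounds throughput by (tokens in the circuit) divided by
# (sum of mean transition delays in the circuit); the bottleneck circuit
# therefore gives an upper bound without exploring the state space.

def throughput_upper_bound(circuits):
    """circuits: list of (total_tokens, [mean_delay, ...]) per circuit."""
    bounds = []
    for tokens, delays in circuits:
        total_delay = sum(delays)
        if total_delay > 0:
            bounds.append(tokens / total_delay)
    return min(bounds) if bounds else float("inf")

# Two circuits sharing a resource: one with 1 token and delays 2+3,
# one with 2 tokens and delays 1+4+1.
example_circuits = [
    (1, [2.0, 3.0]),        # bound: 1/5 = 0.2
    (2, [1.0, 4.0, 1.0]),   # bound: 2/6 ~ 0.33
]
print(throughput_upper_bound(example_circuits))  # 0.2 firings per time unit
```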

    Correlative Framework of Techniques for the Inspection, Evaluation, and Design of Micro-electronic Devices

    Trillions of micro- and nano-electronic devices are manufactured every year. They serve countless electronic systems across a diverse range of applications spanning the civilian, military, and medical sectors. Examples of these devices include packaged and board-mounted semiconductor devices such as ceramic capacitors, CPUs, GPUs, and DSPs; biomedical implantable electrochemical devices such as pacemakers, defibrillators, and neural stimulators; electromechanical sensors such as MEMS/NEMS accelerometers and positioning systems; and many others. Though a diverse collection of devices, they are unified by their length scale, particularly with respect to the ever-present objectives of device miniaturization and performance improvement. Pressures to meet these objectives have left significant room for the development of widely applicable inspection and evaluation techniques to accurately and reliably probe new and failed devices on an ever-shrinking length scale. Presented in this study is a framework of correlative, cross-modality microscopy workflows, coupled with novel in-situ experimentation and testing and with computational reverse-engineering and modeling methods, aimed at addressing the current and future challenges of evaluating micro- and nano-electronic devices. The current challenges are presented through a unique series of micro- and nano-electronic devices from a wide range of applications with ties to industrial relevance. Solutions were reached for these challenges, and through the development of these workflows they were successfully expanded to areas outside the immediate area of the original project. Limitations on techniques and capabilities were noted to contextualize the applicability of these workflows to other current and future challenges.

    A Syntactic-Semantic Approach to Incremental Verification

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework itself. Verification procedures are driven by the syntactic structure (defined by a grammar) of the system and encoded as semantic attributes associated with the grammar. Incrementality is achieved by coupling the evaluation of semantic attributes with an incremental parsing technique. We show the application of SiDECAR to the definition of two verification procedures: probabilistic verification of reliability requirements and verification of safety properties.
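
    As a rough analogue of SiDECAR's idea (not its actual implementation), the sketch below computes reliability as a synthesized attribute over a syntax tree of sequence/choice constructs and caches per-node results, so a local change only forces re-evaluation along the affected path. All names and the example tree are invented.

```python
# Illustrative sketch only (not SiDECAR). Reliability is a synthesized
# attribute over a syntax tree of sequence/choice constructs; per-node
# caching mimics the incrementality obtained by coupling attribute
# evaluation with incremental parsing.

class Node:
    def __init__(self, kind, children=(), reliability=None, prob=None):
        self.kind = kind                 # "atom", "seq", or "choice"
        self.children = list(children)
        self.reliability = reliability   # for atoms: probability of success
        self.prob = prob                 # for choice children: branch probability
        self._cached = None              # memoized attribute value

    def invalidate(self):
        self._cached = None

def reliability(node):
    """Synthesized attribute: probability the subtree completes successfully."""
    if node._cached is not None:
        return node._cached
    if node.kind == "atom":
        value = node.reliability
    elif node.kind == "seq":
        value = 1.0
        for child in node.children:
            value *= reliability(child)
    elif node.kind == "choice":
        value = sum(child.prob * reliability(child) for child in node.children)
    else:
        raise ValueError(node.kind)
    node._cached = value
    return value

a = Node("atom", reliability=0.99)
b = Node("atom", reliability=0.95, prob=0.7)
c = Node("atom", reliability=0.90, prob=0.3)
tree = Node("seq", [a, Node("choice", [b, c])])
print(reliability(tree))  # 0.99 * (0.7*0.95 + 0.3*0.90) = 0.92565

# After a local change to one leaf, only the affected path is re-evaluated;
# the unchanged choice subtree keeps its cached value.
a.reliability = 0.999
a.invalidate(); tree.invalidate()
print(reliability(tree))  # 0.999 * 0.935 = 0.934665
```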

    A design-build-test-learn tool for synthetic biology

    Modern synthetic gene regulatory networks emerge from iterative design-build-test cycles that encompass the decisions and actions necessary to design, build, and test target genetic systems. Historically, such cycles have been performed manually, with limited formal problem definition and progress tracking. In recent years, researchers have devoted substantial effort to defining and automating many sub-problems of these cycles and to creating systems for data management and documentation, resulting in useful tools for solving portions of certain workflows. However, biologists generally must still manually transfer information between tools, a process that frequently results in information loss. Furthermore, since each tool applies to a different workflow, tools often do not fit together in a closed loop, and additional outstanding sub-problems typically still require manual solutions. This thesis describes an attempt to create a tool that harnesses many smaller tools to automate a fully closed-loop decision-making process to design, build, and test synthetic biology networks and use the outcomes to inform redesigns. This tool, called Phoenix, takes as input a performance-constrained signal temporal logic (STL) equation and an abstract genetic-element structural description to specify a design, and then returns iterative sets of building and testing instructions. The user executes the instructions and returns the data to Phoenix, which processes it and uses it to parameterize models for simulation of the behavior of compositional designs. A model-checking algorithm then evaluates these simulations and returns to the user a new set of instructions for building and testing the next set of constructs. In cases where experimental results disagree with simulations, Phoenix uses grammars to determine where likely points of design failure might have occurred and instructs the building and testing of an intermediate composition to test where failures occurred. A design tree represents the design hierarchy in the user interface, where progress can be tracked and electronic datasheets generated to review results. Users can validate the computations performed by Phoenix by using them to create sets of classic and novel temporal synthetic genetic regulatory functions in E. coli.
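
    Phoenix's specification language and model checker are not described in enough detail here to reproduce; the sketch below only illustrates, with invented thresholds and trace data, how bounded temporal properties of the "eventually"/"always" kind can be checked against a simulated trace.

```python
# Illustrative sketch only (not Phoenix's specification language or checker).
# Two bounded temporal operators are evaluated over a trace of (time, value)
# samples, the kind of comparison a model checker makes between a temporal
# specification and simulation output.

import math

def eventually(trace, predicate, t_start, t_end):
    """True if some sample in [t_start, t_end] satisfies the predicate."""
    return any(predicate(v) for t, v in trace if t_start <= t <= t_end)

def always(trace, predicate, t_start, t_end):
    """True if every sample in [t_start, t_end] satisfies the predicate."""
    return all(predicate(v) for t, v in trace if t_start <= t <= t_end)

# Simulated reporter expression rising toward a plateau (invented data).
trace = [(t, 100.0 * (1.0 - math.exp(-t / 3.0))) for t in range(0, 21)]

above_80 = lambda v: v >= 80.0
spec_holds = (eventually(trace, above_80, 0, 10)    # rises past 80 within 10 h
              and always(trace, above_80, 10, 20))  # and stays there afterwards
print(spec_holds)  # True
```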

    Developing a distributed electronic health-record store for India

    The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the more than one billion citizens of India.