
    Modeling and Simulation of Task Allocation with Colored Petri Nets

    The task allocation problem is a key element in the solution of several applications across different engineering fields. With the explosion in the amount of information produced by today's Internet-connected solutions, scheduling techniques for allocating tasks to grids, clusters of computers, or the cloud are at the core of efficient solutions. Task allocation is an important problem in computer science and operations research, where it is usually modeled as a combinatorial optimization problem that suffers from state explosion. This chapter proposes modeling the task allocation problem with Colored Petri nets. The proposed methodology allows the construction of compact models for task scheduling problems. Moreover, the constructed model supports simulation, which allows some performance aspects of the task allocation problem to be studied before any implementation stage.
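
    As a rough illustration of the kind of model the chapter advocates, the sketch below encodes a tiny coloured-Petri-net style task-allocation net in Python: places hold multisets of coloured tokens, and two transitions bind a task to an idle machine and release it again. The place names, token colours, and two-transition structure are invented for illustration; they are not taken from the chapter.

```python
# Minimal CPN-style task-allocation sketch; all names are illustrative.
from collections import Counter

# Places hold multisets of "coloured" tokens (tuples and strings here).
places = {
    "tasks":   Counter({("t1", 3): 1, ("t2", 5): 1, ("t3", 2): 1}),  # (id, cost)
    "idle":    Counter({"m1": 1, "m2": 1}),                          # machines
    "running": Counter(),
    "done":    Counter(),
}

def fire_allocate():
    """Transition: bind one task token and one idle machine, mark it running."""
    if not places["tasks"] or not places["idle"]:
        return False                       # transition not enabled
    task = next(iter(places["tasks"]))
    machine = next(iter(places["idle"]))
    places["tasks"][task] -= 1
    places["idle"][machine] -= 1
    places["tasks"] += Counter()           # drop tokens with zero count
    places["idle"] += Counter()
    places["running"][(task, machine)] += 1
    return True

def fire_complete():
    """Transition: finish a running task; the machine returns to 'idle'."""
    if not places["running"]:
        return False
    task, machine = next(iter(places["running"]))
    places["running"][(task, machine)] -= 1
    places["running"] += Counter()
    places["idle"][machine] += 1
    places["done"][task] += 1
    return True

# Naive interleaving simulation: fire any enabled transition until deadlock.
while fire_allocate() or fire_complete():
    pass
print(places["done"])   # every task ends up in the 'done' place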

    SAFE-FLOW: a systematic approach for safety analysis of clinical workflows

    The increasing use of technology in delivering clinical services brings substantial benefits to the healthcare industry. At the same time, it introduces potential new complications to clinical workflows that generate new risks and hazards with the potential to affect patients’ safety. These workflows are safety critical and can have a damaging impact on all the involved parties if they fail. Due to the large number of processes included in the delivery of a clinical service, it can be difficult to determine the individuals or the processes that are responsible for adverse events. Using methodological approaches and automated tools to analyse the workflow can help in determining the origins of potential adverse events and consequently help in avoiding preventable errors. There is a scarcity of studies addressing this problem; this was a partial motivation for this thesis. The main aim of the research is to demonstrate the potential value of computer-science-based dependability approaches to healthcare and, in particular, the appropriateness and benefits of these dependability approaches to overall clinical workflows. A particular focus is to show that model-based safety analysis techniques can be usefully applied to such areas, and then to evaluate this application. This thesis develops the SAFE-FLOW approach for safety analysis of clinical workflows in order to establish the relevance of such an application. SAFE-FLOW's detailed steps and guidelines for its application are explained. Then, SAFE-FLOW is applied to a case study and systematically evaluated. The proposed evaluation design provides a generic evaluation strategy that can be used to evaluate the adoption of safety analysis methods in healthcare. It is concluded that the safety of clinical workflows can be significantly improved by performing safety analysis on workflow models. The evaluation results show that SAFE-FLOW is feasible and has the potential to provide various benefits: it provides a mechanism for the systematic identification of both adverse events and safeguards, which helps identify the causes of possible adverse events before they happen and can assist in the design of workflows to avoid such occurrences. The clear definition of the workflow, including its processes and tasks, provides a valuable opportunity for the formulation of safety improvement strategies.

    Dynamic system safety analysis in HiP-HOPS with Petri Nets and Bayesian Networks

    Dynamic systems exhibit time-dependent behaviours and complex functional dependencies amongst their components. Therefore, to capture the full system failure behaviour, it is not enough to simply determine the consequences of different combinations of failure events: it is also necessary to understand the order in which they fail. Pandora temporal fault trees (TFTs) increase the expressive power of fault trees and allow modelling of sequence-dependent failure behaviour of systems. However, like classical fault tree analysis, TFT analysis requires a lot of manual effort, which makes it time consuming and expensive. This in turn makes it less viable for use in modern, iterated system design processes, which require a quicker turnaround and consistency across evolutions. In this paper, we propose a model-based analysis of temporal fault trees via HiP-HOPS, a state-of-the-art model-based dependability analysis method supported by tools that largely automate the analysis and optimisation of systems. The proposal extends HiP-HOPS with Pandora, Petri Nets, and Bayesian Networks, resulting in dynamic dependability analysis that is more readily integrated into modern design processes. The effectiveness is demonstrated via application to an aircraft fuel distribution system. Partly funded by the DEIS H2020 project (Grant Agreement 732242).
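
    To make the sequence-dependent failure logic concrete, here is a small hedged sketch of a Priority-AND (PAND) style gate, which produces a failure only if its inputs fail in left-to-right order, embedded in a Monte Carlo estimate of a top event. The components, failure rates, and tree structure are all invented; the paper's actual analysis is performed analytically via HiP-HOPS, Petri nets, and Bayesian networks rather than by simulation.

```python
# Sketch of PAND-style sequence logic; rates and structure are invented.
import math
import random

def sample_failure_time(rate):
    """Exponential failure time for a component with constant failure rate."""
    return random.expovariate(rate)

def pand(*times):
    """PAND gate: fails only if all inputs fail, strictly in the given order.
    Returns the gate's failure time, or infinity if it never fails."""
    for earlier, later in zip(times, times[1:]):
        if not earlier < later:
            return math.inf
    return times[-1]

def and_gate(*times):
    return max(times)   # ordinary AND: fails when the last input fails

# Hypothetical top event: primary pump P fails before standby S (PAND),
# AND valve V fails, all within the mission time.
def mission_fails(t_mission=1000.0):
    t_p = sample_failure_time(1e-3)
    t_s = sample_failure_time(5e-4)
    t_v = sample_failure_time(2e-4)
    return and_gate(pand(t_p, t_s), t_v) <= t_mission

trials = 100_000
estimate = sum(mission_fails() for _ in range(trials)) / trials
print(f"estimated top-event probability: {estimate:.4f}")
```

    Note how swapping the two PAND inputs changes the result even though an ordinary AND gate would treat both orderings identically; this is exactly the extra expressiveness the abstract attributes to Pandora.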

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection purposes. However, it is also one of the most expensive, tedious, and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practised for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of an AD-based test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. AD is a behavioral type of UML model, and since the major revision in UML 2.x it has a new Petri-net-like semantics. It has a wide application scope, including embedded, workflow, and web-service systems. For this reason, this thesis concentrates on AD models. The informal semantics of UML in general, and of AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. In the thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in testing processes. The flow-oriented semantics of AD suits the modeling of both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. In order to determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and mutation, are proposed. In terms of structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both perspectives, sequential and concurrent. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults. Four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is to improve test suite efficiency without compromising effectiveness. One way of achieving this is identifying and removing redundant test cases. It has been shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary computation based test suite minimization technique is developed to address the problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is extended to include multi-objective minimization. As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
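
    As an illustration of the minimization step, the sketch below casts test-suite minimization as a set-cover problem and searches it with a toy genetic algorithm: bit-vector individuals, one-point crossover, and bit-flip mutation. The coverage data, fitness function, and GA parameters are invented for illustration and are not the thesis's actual technique or benchmarks.

```python
# Toy evolutionary test-suite minimization; all data here is invented.
import random

# Each test case covers a set of model elements (e.g. AD edges/branches).
coverage = {
    "tc1": {1, 2, 3}, "tc2": {3, 4}, "tc3": {1, 5},
    "tc4": {2, 4, 5}, "tc5": {5},    "tc6": {1, 2, 3, 4, 5},
}
tests = sorted(coverage)
required = set().union(*coverage.values())

def covered(bits):
    """Union of coverage over the selected (bit = 1) test cases."""
    return set().union(*(coverage[t] for t, b in zip(tests, bits) if b), set())

def fitness(bits):
    """Prefer full coverage first, then fewer test cases."""
    if covered(bits) != required:
        return -1                           # infeasible: loses coverage
    return len(tests) - sum(bits)

def evolve(pop_size=30, generations=200, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in tests] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(tests))          # one-point crossover
            children.append([1 - g if random.random() < p_mut else g
                             for g in a[:cut] + b[cut:]])  # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([t for t, b in zip(tests, best) if b])  # e.g. ['tc6'] covers everything
```

    A multi-objective variant, as investigated in the thesis, would replace the single scalar fitness with a Pareto ranking over several criteria (coverage kinds, suite size, execution cost).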

    Seventh Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools, Aarhus, Denmark, October 24-26, 2006

    This booklet contains the proceedings of the Seventh Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 24-26, 2006. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop0

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, `programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other, to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
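
    A minimal sketch of the rule-based correlation style discussed above: a rule names an ordered pattern of event kinds that must all occur within a time window, possibly interleaved with events from other streams. The event format, the example rule, and the matching loop are invented for illustration; they are not taken from the report.

```python
# Toy rule-based event correlator; event format and the rule are invented.
from dataclasses import dataclass

@dataclass
class Event:
    time: float    # seconds since start of observation
    stream: str    # originating data stream
    kind: str      # event type

# A rule: these event kinds must all occur, in order, within the window.
RULE = {"name": "toll-fraud probe",
        "pattern": ["login_fail", "login_fail", "login_ok", "intl_call"],
        "window": 300.0}

def correlate(events, rule):
    """Return True if the rule's pattern occurs, in order, within its window."""
    pattern, window = rule["pattern"], rule["window"]
    i, start = 0, None
    for ev in sorted(events, key=lambda e: e.time):
        if start is not None and ev.time - start > window:
            i, start = 0, None              # window expired: restart matching
        if ev.kind == pattern[i]:
            if i == 0:
                start = ev.time             # first event anchors the window
            i += 1
            if i == len(pattern):
                return True                 # full pattern correlated
    return False

log = [Event(0.0, "auth", "login_fail"), Event(20.0, "auth", "login_fail"),
       Event(45.0, "auth", "login_ok"), Event(60.0, "billing", "intl_call")]
if correlate(log, RULE):
    print(f"alarm: {RULE['name']}")
```

    The brittleness the report identifies is visible even in this toy: the rule fires only on the exact pattern it encodes, which is why the survey points towards hybrids that pair such rules with learned models of normal behaviour.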

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The total of 42 full papers and 8 short tool demo papers presented in these volumes was carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.

    Aggregation and Adaptation of Web Services

    Service-oriented computing strongly supports the development of future business applications through the use of (Web) services. Two main challenges for Web services are the aggregation of services into new (complex) business applications, and the adaptation of services presenting various types of interaction mismatches. The ultimate objective of this thesis is to define a methodology for the semi-automated aggregation and adaptation of Web services, capable of suitably overcoming semantic and behaviour mismatches in view of business process integration within and across organisational boundaries. We tackle the aggregation and adaptation of services described by service contracts, which consist of a signature (WSDL), ontology information (OWL), and a behaviour specification (YAWL). We first describe an aggregation technique that automatically generates contracts of composite services satisfying (behavioural) client requests from a registry of service contracts. Further on, we present a behaviour-aware adaptation technique that supports the customisation of services to fulfil client requests. The adaptation technique can be used to adapt the behaviour of services to satisfy both functional and behavioural requests. In order to support the generation of service contracts from real-world service descriptions, we also introduce a pattern-based compositional translator for the automated generation of YAWL workflows from BPEL business processes. In this way, we pave the way for the formal analysis, aggregation, and adaptation of BPEL processes.
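
    To illustrate the pattern-based, compositional flavour of such a translation, the sketch below maps a toy BPEL-like process tree (sequence, flow, invoke) onto a flat edge list in the style of a workflow net. The mini-syntax, node names, and gateway labels are all invented; the thesis targets full BPEL and YAWL, with far richer constructs than shown here.

```python
# Toy compositional translator from a BPEL-like tree to a flow graph.
from itertools import count

_ids = count()   # fresh ids for the gateway nodes introduced by 'flow'

def translate(node, edges, entry):
    """Translate one node of a BPEL-like process tree; return its exit node."""
    kind = node[0]
    if kind == "invoke":                      # basic activity -> one task node
        name = node[1]
        edges.append((entry, name))
        return name
    if kind == "sequence":                    # chain the children in order
        exit_ = entry
        for child in node[1:]:
            exit_ = translate(child, edges, exit_)
        return exit_
    if kind == "flow":                        # parallel AND-split ... AND-join
        n = next(_ids)
        split, join = f"and_split_{n}", f"and_join_{n}"
        edges.append((entry, split))
        for child in node[1:]:
            edges.append((translate(child, edges, split), join))
        return join
    raise ValueError(f"unsupported construct: {kind}")

process = ("sequence",
           ("invoke", "receive_order"),
           ("flow", ("invoke", "check_stock"), ("invoke", "check_credit")),
           ("invoke", "confirm"))
edges = []
translate(process, edges, "start")
print(*edges, sep="\n")
```

    Each construct is translated independently and only exposes its entry and exit points, which is what makes this style of translation compositional: adding a new BPEL pattern means adding one more branch, not reworking the whole translator.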

    Systems reliability modelling for phased missions with maintenance-free operating periods

    In 1996, a concept was proposed by the UK Ministry of Defence with the intention of making the field of reliability more useful to the end user, particularly within the field of military aerospace. This idea was the Maintenance Free Operating Period (MFOP), a duration of time in which the overall system can complete all of its required missions, with a defined probability of success, without the need to undergo emergency repairs or maintenance. The system can encounter component or subsystem failures, but these must be carried with no effect on the overall mission until such time as repair takes place. It is thought that advanced technologies such as redundant systems, prognostics, and diagnostics will play a major role in the successful use of MFOP in practical applications. Many types of system operate missions that are made up of several sequential phases. For a mission to be successful, the system must satisfactorily complete each of the objectives in each of the phases. If the system fails or cannot complete its goals in any one phase, the mission has failed. Each phase will require the system to use different items, and so the failure logic changes from phase to phase. Mission unreliability is defined as the probability that the system fails to function successfully during at least one phase of the mission. An important problem is the efficient calculation of the value of mission unreliability. This thesis investigates the creation of a modelling method to consider as many features as possible of systems undergoing both MFOPs and phased missions. It uses Petri nets, a type of digraph allowing the storage and transit of tokens which represent system states. A simple model is presented, following which a more complex model is developed and explained, encompassing those ideas which are believed to be important in delivering a long MFOP with a high degree of confidence. A demonstration of the process by which the modelling method could be used to improve the reliability performance of a large system is then shown. The complex model is employed in the form of a Monte Carlo simulation program, which is applied to a life-size system such as may be encountered in the real world. Improvements are suggested and results from their implementation analysed.
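
    A minimal Monte Carlo sketch in the spirit of the phased-mission part of the thesis: each component gets an exponential failure time, and the mission fails if any component critical to a phase has failed by the end of that phase. The phases, criticality sets, and failure rates are invented for illustration and stand in for the much richer Petri-net model the thesis develops.

```python
# Toy Monte Carlo estimate of phased-mission unreliability; data is invented.
import random

PHASES = [   # (duration in hours, components whose failure fails the phase)
    (1.0, {"engine", "avionics"}),           # take-off
    (8.0, {"engine"}),                       # cruise
    (1.0, {"engine", "avionics", "gear"}),   # landing
]
RATES = {"engine": 1e-4, "avionics": 5e-4, "gear": 1e-3}   # failures per hour

def mission_fails():
    # One random failure time per component (non-repairable in-mission).
    t_fail = {c: random.expovariate(r) for c, r in RATES.items()}
    elapsed = 0.0
    for duration, critical in PHASES:
        elapsed += duration
        if any(t_fail[c] <= elapsed for c in critical):
            return True    # a phase-critical component failed by phase end
    return False

trials = 200_000
unreliability = sum(mission_fails() for _ in range(trials)) / trials
print(f"estimated mission unreliability: {unreliability:.5f}")
```

    Note the MFOP-style carrying of failures: the landing gear can fail during cruise without consequence, and only matters once the landing phase makes it critical.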