
    Efficient Verification of Counter Systems Using Relaxations

    Abstract: Counter systems are popular models used to reason about systems in various fields such as the analysis of concurrent or distributed programs and the discovery and verification of business processes. We study well-established problems on various classes of counter systems. This thesis focuses on three particular systems: Petri nets, a model of discrete systems with concurrent and sequential events; workflow nets, a subclass of Petri nets suited for modelling and reasoning about business processes; and continuous one-counter automata, a novel model that combines continuous semantics with one-counter automata. For Petri nets, we focus on reachability and coverability properties. We use directed search algorithms, with relaxations of Petri nets as heuristics, to obtain novel semi-decision algorithms for reachability and coverability, and we positively evaluate a prototype implementation. For workflow nets, we focus on the problem of soundness, a well-established correctness notion for such nets. We precisely characterize the previously widely open complexity of three variants of soundness. Based on these insights, we develop techniques to verify soundness in practice, based on reachability relaxations of Petri nets. Lastly, we introduce the novel model of continuous one-counter automata. This model is a natural variant of one-counter automata that allows reasoning in a hybrid manner, combining continuous and discrete elements. We characterize the exact complexity of the reachability problem in several variants of the model.
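As a rough illustration of the directed-search approach the abstract describes, the sketch below runs a best-first forward search for coverability of a Petri net marking in Python. The net encoding, the covering check, and in particular the token-deficit heuristic are simplifications of my own; the thesis itself uses relaxations of Petri nets (for instance state-equation or continuous reachability) as heuristics, which this placeholder does not implement.

import heapq

# A net is a list of transitions, each a (pre, post) pair of token vectors over places.
def heuristic(marking, target):
    # Crude stand-in estimate: how many tokens are still missing to cover the target.
    return sum(max(0, t - m) for m, t in zip(marking, target))

def coverability_search(initial, target, transitions):
    """Return True if the search finds a reachable marking covering `target`."""
    frontier = [(heuristic(initial, target), tuple(initial))]
    seen = {tuple(initial)}
    while frontier:
        _, marking = heapq.heappop(frontier)
        if all(m >= t for m, t in zip(marking, target)):
            return True  # this marking covers the target
        for pre, post in transitions:
            if all(m >= p for m, p in zip(marking, pre)):  # transition enabled
                succ = tuple(m - p + q for m, p, q in zip(marking, pre, post))
                if succ not in seen:
                    seen.add(succ)
                    heapq.heappush(frontier, (heuristic(succ, target), succ))
    return False

# Tiny example: one place, one transition producing a token; ask whether 3 tokens are coverable.
print(coverability_search([0], [3], [((0,), (1,))]))  # True

Like the algorithms in the abstract, this is only a semi-decision procedure: it reports success when it finds a witness firing sequence but may run forever on nets whose reachable markings are unbounded.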

    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis considers an application of a temporal theory to describe and model the patient journey in the hospital accident and emergency (A&E) department. The aim is to introduce a generic but dynamic method that can be applied to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare issues. Current process modelling techniques used in healthcare, such as flowcharts, the unified modelling language activity diagram (UML AD), and the business process modelling notation (BPMN), are intuitive but imprecise: they cannot fully capture the complexity of the activities involved or the full extent of the temporal constraints to a degree where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to process modelling in the healthcare domain. Additionally, current modelling standards offer no formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation and review technique (PERT), which also have limitations, e.g., the finish-start barrier. It is imperative to specify temporal constraints between the start and/or end of processes, e.g., that the beginning of a process A precedes the start (or end) of a process B; however, these approaches fail to provide a mechanism for handling such temporal situations. A formal representation, if provided, can assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and assist in modelling it in a consistent way, which is not possible with the existing modelling techniques. This thesis addresses the above issues by proposing a framework that provides a knowledge base for modelling patient flows accurately, based on point interval temporal logic (PITL), which treats points and intervals as primitives. These objects constitute the knowledge base for the formal description of a system. With the aid of the inference mechanism of the temporal theory presented here, the exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as a knowledge base. The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and considered as a model, while a corresponding instance is considered as its application. Using this approach assists in identifying the core components of the system and their precise operation, representing a real-life domain suited to the process modelling issues specified in this thesis. Thus, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components. This also assists in the generalisation of the key terms of the process modelling standards based on their ontology. The proposed set of generalised terms serves as an enumeration of the theory and subsumes the core modelling elements of the process modelling standards. The resulting catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, a resolution theorem proof is used to show the structural features of the theory (model) and to establish that it is sound and complete. After establishing that the theory is sound and complete, the next step is to provide an instantiation of the theory. This is achieved by mapping the core components of the theory to their corresponding instances.
Additionally, a formal graphical tool termed the point graph (PG) is used to visualise the cases of the proposed axiomatic system. The PG facilitates modelling and scheduling patient flows and enables analysing existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL. Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*), based on the semantics presented by the axiomatic system. A real-life case from the King's College Hospital accident and emergency (A&E) department's trauma patient pathway is considered to validate the framework. It is divided into three patient flows to depict the journey of a patient with significant trauma arriving at A&E, undergoing a procedure, and subsequently being discharged. The department's staff relied upon UML AD and BPMN to model the patient flows. An evaluation of their representation is presented to show the shortfalls of the modelling standards in modelling patient flows. The last step is to model these patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
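To make the point-based constraints concrete, here is a small hypothetical Python sketch (the encoding and names are mine, not the thesis's PITL axiomatisation): it records constraints such as "the start of A precedes the start of B" between process endpoints, derives implied orderings by transitive closure, and flags inconsistency when some point would have to precede itself.

from itertools import product

def closure(before):
    """before: set of pairs (x, y) meaning point x strictly precedes point y."""
    before = set(before)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(before), repeat=2):
            if b == c and (a, d) not in before:   # compose a < b and b < d into a < d
                before.add((a, d))
                changed = True
    return before

points = ["A.start", "A.end", "B.start", "B.end"]
constraints = {
    ("A.start", "A.end"),    # a process ends after it starts
    ("B.start", "B.end"),
    ("A.start", "B.start"),  # the beginning of A precedes the beginning of B
}
derived = closure(constraints)
consistent = not any((p, p) in derived for p in points)  # no point precedes itself
print(("A.start", "B.end") in derived, consistent)       # True True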

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. A total of 42 full and 8 short tool demo papers presented in these volumes were carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.

    Parameter dependencies for reusable performance specifications of software components

    To avoid design-related performance problems, model-driven performance prediction methods analyse the response times, throughputs, and resource utilizations of software architectures before and during implementation. This thesis proposes new modeling languages and corresponding model transformations, which allow a reusable description of how usage profiles affect the performance of software components. Predictions based on these new methods can support performance-related design decisions.
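As a hedged illustration of such a usage-profile dependency, the following Python sketch gives a component a resource demand that grows with a caller-supplied parameter and predicts utilization and response time with a textbook M/M/1 approximation; the parameter names and formulas are generic stand-ins, not the modeling languages or transformations proposed in the thesis.

def demand_seconds(items_per_request):
    # Parameterized resource demand: CPU time grows with the number of items
    # a caller passes per request (the usage-profile dependency).
    return 0.002 + 0.0001 * items_per_request

def predict(arrival_rate, items_per_request):
    d = demand_seconds(items_per_request)
    utilization = arrival_rate * d
    if utilization >= 1.0:
        return utilization, float("inf")     # the resource saturates
    response_time = d / (1.0 - utilization)  # M/M/1 response-time formula
    return utilization, response_time

for items in (10, 100, 500):
    u, r = predict(arrival_rate=50.0, items_per_request=items)
    print(f"{items:>4} items/request -> utilization {u:.2f}, response time {r * 1000:.1f} ms")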

    Simulation Modelling Practice and Theory

    The influx of data in the world today requires analysis that no single method can handle. Some reports estimated that the volume of data would reach 163 zettabytes by 2025, hence the need for simulation and modeling theory and practice. Simulation and modeling tools and techniques are more important than ever in this day and age. While simulation carries out the needed work, tools for visualizing the results help in the decision-making process. Simulation ranges from a simple queue to molecular dynamics, and includes seismic reliability analysis, structural integrity assessment, games, reliability engineering, and system safety. This book introduces practitioners, researchers, and novice users to simulation and modeling, and to the world of imagination.

    An Evaluation Framework for Comparative Analysis of Generalized Stochastic Petri Net Simulation Techniques

    The availability of a common, shared benchmark that provides repeatable, quantifiable, and comparable results is an added value for any scientific community. International consortia provide benchmarks in a wide range of domains, which are normally used by industry, vendors, and researchers to evaluate their software products. In this regard, a benchmark of untimed Petri net models was developed to be used in a yearly software competition driven by the Petri net community. However, to the best of our knowledge there is no similar benchmark for evaluating solution techniques for Petri nets with timing extensions. In this paper, we propose an evaluation framework for the comparative analysis of generalized stochastic Petri net (GSPN) simulation techniques. Although we focus on simulation techniques, our framework provides a baseline for a comparative analysis of different GSPN solvers (e.g., simulators, numerical solvers, or other techniques). The evaluation framework encompasses a set of 50 GSPN models, including test cases and case studies from the literature, and a set of evaluation guidelines for the comparative analysis. To show the applicability of the proposed framework, we carry out a comparative analysis of the steady-state simulators implemented in three academic software tools, namely GreatSPN, PeabraiN, and TimeNET. The results allow us to validate the trustworthiness of these academic software tools, as well as to point out potential problems and algorithmic optimization opportunities.
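For a flavour of what such steady-state simulators compute, the sketch below (a toy of my own, not one of the framework's 50 models) simulates a two-transition GSPN that behaves like an M/M/1 queue and estimates the mean token count in the queue place, a typical measure a comparative study would report.

import random

def simulate(arrival_rate, service_rate, horizon, seed=0):
    rng = random.Random(seed)
    clock, tokens = 0.0, 0
    time_weighted_tokens = 0.0
    while clock < horizon:
        # Race between the enabled exponential (timed) transitions.
        t_arrive = rng.expovariate(arrival_rate)
        t_serve = rng.expovariate(service_rate) if tokens > 0 else float("inf")
        dt = min(t_arrive, t_serve)
        time_weighted_tokens += tokens * dt
        clock += dt
        tokens += 1 if t_arrive <= t_serve else -1
    return time_weighted_tokens / clock

# Sanity check against the analytical M/M/1 result: with utilization 0.5 the
# exact mean number of tokens is 1.0, so the estimate should land close to it.
print(simulate(arrival_rate=1.0, service_rate=2.0, horizon=200_000.0))

Resampling fresh exponential delays after every firing is justified by the memoryless property, which keeps the sketch short; real GSPN simulators also handle immediate transitions, vanishing markings, and confidence intervals.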

    Quality of process modeling using BPMN: a model-driven approach

    Dissertation for obtaining the degree of Doctor in Informatics Engineering. Context: The BPMN 2.0 specification contains the rules regarding the correct usage of the language's constructs. Practitioners have also proposed best practices for producing better BPMN models. However, those rules are expressed in natural language, sometimes yielding ambiguous interpretations and, therefore, flaws in the produced BPMN models. Objective: Ensuring the correctness of BPMN models is critical for the automation of processes. Hence, errors in BPMN model specifications should be detected and corrected at design time, since faults detected at later stages of process development can be more costly and harder to correct. We therefore need to assess the quality of BPMN models in a rigorous and systematic way. Method: We follow a model-driven approach for the formalization and empirical validation of BPMN well-formedness rules and BPMN measures for enhancing the quality of BPMN models. Results: Rule mining of the BPMN specification, as well as of recently published BPMN works, allowed the gathering of more than a hundred BPMN well-formedness and best-practice rules. Furthermore, we derived a set of BPMN measures aiming to provide information to process modelers regarding the correctness of BPMN models. Both the BPMN rules and the BPMN measures were empirically validated on samples of BPMN models. Limitations: This work does not cover control-flow formal properties of BPMN models, since they have been extensively discussed in other process modeling research works. Conclusion: We intend to contribute to improving BPMN modeling tools by formalizing well-formedness rules and BPMN measures to be incorporated into those tools, in order to enhance the quality of process modeling outcomes.
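As an example of the kind of well-formedness rule being formalized, the Python sketch below checks the BPMN 2.0 constraint that start events have no incoming sequence flows and end events have no outgoing ones; the dictionary-based model encoding is a simplification of my own, not the dissertation's metamodel or tooling.

def check_events(nodes, flows):
    """nodes: {node_id: kind}; flows: iterable of (source_id, target_id) sequence flows."""
    violations = []
    incoming = {target for _, target in flows}
    outgoing = {source for source, _ in flows}
    for node_id, kind in nodes.items():
        if kind == "startEvent" and node_id in incoming:
            violations.append(f"{node_id}: start event must not have an incoming sequence flow")
        if kind == "endEvent" and node_id in outgoing:
            violations.append(f"{node_id}: end event must not have an outgoing sequence flow")
    return violations

nodes = {"s": "startEvent", "t": "task", "e": "endEvent"}
flows = [("s", "t"), ("t", "e"), ("e", "t")]  # the last flow violates the rule
print(check_events(nodes, flows))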

    Extra Functional Properties Evaluation of Self-managed Software Systems with Formal Methods

    Many of today's software applications are bound to operate in dynamic contexts. These may manifest themselves as changes in the application's execution environment, changes in the application's requirements, changes in the workload received by the application, or changes in any of the elements the application can perceive and be affected by. Moreover, such dynamic contexts are not restricted to one particular application domain; they can be found in multiple domains, such as embedded systems, service-oriented architectures, clusters for high-performance computing, mobile devices, or software for network operation. These characteristics discourage engineers from developing software that cannot change its execution in any way to accommodate the context in which it is running at any given moment. Therefore, for the software to satisfy its requirements at all times, it must include mechanisms for changing its execution configuration. Furthermore, because context changes are frequent and affect multiple devices of the application, human intervention that manually changes the software configuration is not a feasible solution. To face these challenges, the Software Engineering community has proposed new paradigms that enable the development of software that copes with changing contexts automatically, for example the Autonomic Computing and Self-* Software proposals. In such proposals, the software itself manages the mechanisms for changing its execution configuration, thus requiring no human intervention. An essential aspect of self-adaptive software (one of the most general terms for Self-* Software) is planning its changes or adaptations. Adaptation plans determine both the way in which the software will adapt and the appropriate moments for executing those adaptations. There is a large set of situations for which self-adaptation is a solution. One of them is keeping the system satisfying its extra-functional requirements, such as quality of service (QoS) and energy consumption. This thesis has investigated that situation through the use of formal methods. One contribution of this thesis is a proposal for grounding, in a software architecture, systems that are self-adaptive with respect to their QoS and energy consumption. To this end, this part of the research is guided by a three-layer reference architecture for self-adaptive systems. The benefit of using a reference architecture is that it readily exposes the new challenges in the design of this kind of system. Naturally, adaptation planning is one of the activities considered in the architecture. Another contribution of the thesis is a proposal of methods for creating adaptation plans. Formal methods play an essential role in this activity, since they make it possible to study the extra-functional properties of systems under different configurations. The formal method used for these analyses is Markovian Petri nets.
Once the adaptation plan has been created, we investigated the use of formal methods for evaluating the QoS and energy consumption of self-adaptive systems. We have thus contributed to the QoS analysis community with the analysis of a new and particularly complex kind of software system. Carrying out this analysis requires modelling the dynamic changes of the execution context, for which a variety of formal methods have been used, such as Markov modulated Poisson processes to estimate the parameters of the variations in the workload received by the application, or hidden Markov models to predict the state of the execution environment. These models have been used together with Petri nets to evaluate self-adaptive systems and obtain results on their QoS and energy consumption. The preceding research work brought to light the fact that the adaptability of a system is not as easily quantifiable a property as QoS properties (for example, response time) or energy consumption. Consequently, we have investigated in that direction and, as a result, another contribution of this thesis is a proposal of a set of metrics for quantifying the adaptability of service-based systems. Achieving the above contributions requires intensive use of models and model transformations, a task for which the best practices of the Model-driven Engineering (MDE) research field have been followed. The research work of this thesis in the MDE field has contributed an increase in the modelling power of a previously proposed software modelling language, and transformation methods from two software modelling languages to stochastic Petri nets.
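As a hedged illustration of one ingredient mentioned above, the following Python sketch performs hidden-Markov-model filtering to track a latent execution-environment state ("normal load" versus "high load") from noisy observations of response-time behaviour; all matrices, states, and observations are invented for the example and are not taken from the thesis.

import numpy as np

T = np.array([[0.9, 0.1],   # transition matrix, rows = current state
              [0.3, 0.7]])  # states: 0 = "normal load", 1 = "high load"
E = np.array([[0.8, 0.2],   # emission matrix, P(observation | state)
              [0.4, 0.6]])  # observations: 0 = "fast responses", 1 = "slow responses"

def filter_step(belief, observation):
    """One forward-algorithm step: predict with T, correct with E, renormalize."""
    predicted = belief @ T
    corrected = predicted * E[:, observation]
    return corrected / corrected.sum()

belief = np.array([0.5, 0.5])
for obs in [0, 1, 1, 1]:   # one fast observation followed by a run of slow ones
    belief = filter_step(belief, obs)
print(belief)              # probability mass has shifted toward "high load"

An adaptation planner would read such a belief and, for instance, switch the running configuration once the estimated probability of "high load" crosses a threshold.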