
    Modelling and Simulation of Asynchronous Real-Time Systems using Timed Rebeca

    In this paper we propose an extension of the Rebeca language that can be used to model distributed and asynchronous systems with timing constraints. We provide the formal semantics of the language using Structural Operational Semantics, and show its expressiveness by means of examples. We developed a tool for automated translation from Timed Rebeca to the Erlang language, which provides a first implementation of Timed Rebeca. We can use the tool to set the parameters of Timed Rebeca models, which represent the environment and component variables, and use McErlang to run multiple simulations for different settings. Timed Rebeca restricts the modeller to a pure asynchronous actor-based paradigm, where the structure of the model represents the service-oriented architecture, while the computational model matches the network infrastructure. Simulation is shown to be an effective analysis support, especially where model checking faces almost immediate state explosion in an asynchronous setting. Comment: In Proceedings FOCLASA 2011, arXiv:1107.584
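    The asynchronous, timed message passing described above can be sketched with a minimal discrete-event actor loop. This is a hand-rolled illustration of the general idea, not the Timed Rebeca semantics or its Erlang translation; the class and function names are invented:

```python
import heapq

class Actor:
    """A minimal actor: reacts to timed messages, one at a time, no shared state."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def receive(self, msg, now):
        self.log.append((now, msg))

def run(events):
    """Discrete-event loop over (time, seq, actor, msg) entries,
    delivering messages in timestamp order (seq breaks ties)."""
    heapq.heapify(events)
    trace = []
    while events:
        now, _, actor, msg = heapq.heappop(events)
        actor.receive(msg, now)
        trace.append((now, actor.name, msg))
    return trace

# messages are delivered by time, not by the order they were queued
a, b = Actor("client"), Actor("server")
trace = run([(3, 1, a, "response"), (1, 0, b, "request")])
```

    A simulation run like this explores one timed interleaving per parameter setting, which is the style of analysis the abstract contrasts with exhaustive model checking.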

    Evaluation of software architecture using fuzzy colored Petri nets

    Software Architecture (SA) is one of the most important artifacts in the life cycle of a software system because it incorporates important decisions and principles for the system's development. At the same time, the development of systems based on uncertain and ambiguous requirements has increased significantly, so SA requirements have attracted considerable attention. In this paper, we present a new method for evaluating performance characteristics of an SA, such as response time and queue length, based on use cases. Since the systems under consideration involve ambiguity, we use Fuzzy UML (F-UML) diagrams. These diagrams are enriched with performance annotations using the proposed Fuzzy-SPT sub-profile, an extended version of the SPT profile proposed by the OMG. The diagrams are then mapped into an executable model based on Fuzzy Colored Petri Nets (FCPN), and finally the performance metrics are calculated using the proposed algorithms. We have used CPN Tools to create and evaluate the FCPN model.
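    The core idea of carrying fuzzy timing through a colored Petri net can be sketched with triangular fuzzy numbers attached to tokens. This is an illustrative toy, not the paper's FCPN mapping or CPN Tools model; place names and the service-time annotation are invented:

```python
def tfn_add(a, b):
    """Add two triangular fuzzy numbers (low, mode, high) component-wise."""
    return tuple(x + y for x, y in zip(a, b))

# places hold colored tokens: (request_id, fuzzy accumulated response time)
places = {"queue": [(1, (0.0, 0.0, 0.0))], "done": []}
SERVICE = (2.0, 3.0, 5.0)  # hypothetical fuzzy service-time annotation

def fire_serve():
    """Transition 'serve': move one token from 'queue' to 'done',
    accumulating the fuzzy service time onto the token's color."""
    rid, t = places["queue"].pop(0)
    places["done"].append((rid, tfn_add(t, SERVICE)))

fire_serve()
```

    After firing, the token in `done` carries a fuzzy response time, the kind of quantity the paper's algorithms compute over the full FCPN.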

    The Impact of Petri Nets on System-of-Systems Engineering

    The successful engineering of a large-scale system-of-systems project towards deterministic behaviour depends on integrating autonomous components using international communications standards in accordance with dynamic requirements. To date, their engineering has been unsuccessful: no combination of top-down and bottom-up engineering perspectives is adopted, and the information exchange protocols and interfaces between components are not precisely specified. Various approaches, such as modelling and architecture frameworks, make positive contributions to system-of-systems specification, but their successful implementation remains a problem. UML, one of the most popular modelling notations for specifying systems, is intuitive and graphical but also ambiguous and imprecise. While it supplies a range of diagrams to represent a system under development, UML lacks simulation and exhaustive verification capability. This shortfall has received little attention in the context of system-of-systems, and there are two major research issues: (1) where the dynamic, behavioural diagrams of UML can and cannot be used to model and analyse system-of-systems, and (2) how Petri nets can be used to improve the specification and analysis of the dynamic model of a system-of-systems specified using UML. This thesis presents the strengths and weaknesses of Petri nets in relation to the specification of system-of-systems and shows how Petri net models can be used instead of conventional UML Activity Diagrams. The model of the system-of-systems can then be analysed and verified using Petri net theory. The Petri net formalism of behaviour is demonstrated using two case studies from the military domain. The first case study uses Petri nets to specify and analyse a close air support mission, and concludes by indicating the strengths, weaknesses, and shortfalls of the proposed formalism in system-of-systems specification. The second case study considers the specification of a military exchange network parameters problem, and its results are compared with the strengths and weaknesses identified in the first case study. Finally, the results of the research are formulated as a Petri net enhancement to UML (mapping existing Activity Diagram elements to Petri net elements) to meet the needs of system-of-systems specification, verification and validation.
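    The mapping of Activity Diagram elements to Petri net elements, and the analysis it enables, can be sketched as follows. The element table and place names are a hypothetical illustration in the spirit of the thesis, not its actual mapping; the firing rule is standard place/transition net semantics:

```python
# A hypothetical Activity-Diagram-to-Petri-net element mapping:
UML_TO_PN = {"action": "transition", "flow": "place",
             "fork": "transition", "join": "transition",
             "decision": "place", "initial": "place", "final": "place"}

def fire(marking, pre, post):
    """Fire a transition with input places `pre` and output places `post`
    if it is enabled; return the new marking, or None if not enabled."""
    if any(marking.get(p, 0) < 1 for p in pre):
        return None
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return m

# initial node marked; firing the lone action moves the token to the final place
m1 = fire({"initial": 1, "final": 0}, ["initial"], ["final"])
```

    Once the activity flow is a net, standard Petri net theory (reachability, deadlock detection) applies, which is what the thesis means by analysing and verifying the model.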

    Application of artificial neural networks and colored petri nets on earthquake resilient water distribution systems

    Water distribution systems are important lifelines and a critical and complex part of a country's infrastructure. The performance of these systems during unexpected rare events matters because people depend on them directly, and their failure indirectly impacts the economy of a nation. This thesis presents methods that can be used to predict damage to, and simulate the restoration of, a water distribution system. Contributing to the effort of applying computational tools to infrastructure systems, an Artificial Neural Network (ANN) is used to predict the rate of damage to the pipe network during seismic events. The predictions are based on earthquake intensity, peak ground velocity, and pipe size and material type. Further, the restoration process of a water distribution network after a seismic event is modeled, and restoration curves are simulated, using colored Petri nets. This dynamic simulation will aid decision makers in adopting the best strategies during disaster management. Prediction of damage, modeling, and simulation, in conjunction with other disaster reduction methodologies and strategies, are expected to help communities become more resilient and better prepared for disasters --Abstract, page iv
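    The prediction step can be illustrated with the simplest possible learned model: a single linear neuron fitted by gradient descent, standing in for the thesis's ANN. The (peak ground velocity, repair rate) pairs below are synthetic numbers for illustration only, not the thesis data:

```python
# Hypothetical (PGV in cm/s, pipe repair rate per km) training pairs.
data = [(10, 0.1), (20, 0.2), (40, 0.4), (80, 0.8)]

w, b, lr = 0.0, 0.0, 0.0002
for _ in range(5000):
    gw = gb = 0.0
    for pgv, rate in data:          # full-batch gradient of squared error
        err = w * pgv + b - rate
        gw += err * pgv
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

pred_60 = w * 60 + b                # predicted repair rate at PGV = 60 cm/s
```

    A real ANN adds hidden layers and nonlinear activations so it can capture the joint effect of intensity, pipe size, and material; the training loop, however, follows the same gradient-descent pattern.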

    Compositional schedulability analysis of real-time actor-based systems

    We present an extension of the actor model with real-time features, including deadlines associated with messages and explicit application-level scheduling policies (e.g., "earliest deadline first") that can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment, and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Because model checking of compatibility is computationally hard, we propose a special testing process instead. We show that the analyses are decidable and automate the process using the Uppaal model checker.
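    The "earliest deadline first" policy named above can be sketched as a small simulation for a single actor. Since an actor processes each message to completion, the sketch is non-preemptive; this is a toy schedulability check under invented task tuples, not the paper's automata-theoretic analysis:

```python
import heapq

def edf_schedulable(tasks):
    """Simulate one actor serving messages with earliest-deadline-first.
    tasks: list of (arrival, exec_time, deadline) tuples. Returns True iff
    every message finishes by its absolute deadline (non-preemptive)."""
    tasks = sorted(tasks)                  # order by arrival time
    ready, now, i = [], 0, 0
    while i < len(tasks) or ready:
        if not ready:                      # idle: jump to next arrival
            now = max(now, tasks[i][0])
        while i < len(tasks) and tasks[i][0] <= now:
            arr, c, d = tasks[i]
            heapq.heappush(ready, (d, c))  # heap pops smallest deadline first
            i += 1
        d, c = heapq.heappop(ready)
        now += c                           # run the chosen message to completion
        if now > d:
            return False
    return True
```

    The check is policy-sensitive: the same message set can be schedulable under EDF but not under, say, FIFO, which is why the paper attaches policies to individual actors.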

    Performance requirements verification during software systems development

    Requirements verification refers to the assurance that the implemented system reflects the specified requirements, and it is a process that continues throughout the life cycle of a software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance, and performance requirement verification is known as software performance evaluation. This thesis looks at performance evaluation of software systems, a hugely valuable task, especially in the early stages of a software project's development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies take the architectural models familiar from software engineering and transform them into performance models, which can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies, designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state-marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology. With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study on a software system design. This was demonstrated in a qualitative study in which the methodology, and the tool deploying it, were tested by software engineers with varying levels of background and experience, drawn from different sectors of the software development industry. The study results showed acceptance of the methodology and the UML-JMT tool. As performance verification is part of any software engineering methodology, we have to define frameworks that deploy performance requirements validation in the context of software engineering. The agile development paradigm arose from changes in the overall environment of the IT and business worlds. Agile techniques are based on iterative development, in which requirements, designs and developed programmes evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes indicates that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
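    The target of a UML-to-queueing-network transformation is a model whose stations can be solved in closed form. As a flavour of what such a solver reports, here are the textbook M/M/1 formulas for a single station; this is standard queueing theory, not the thesis's UML-EQN algorithm or the JMT solver:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form M/M/1 station metrics: utilisation, mean number in
    system, and mean response time (rates in requests per second)."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable station: utilisation >= 1")
    L = rho / (1 - rho)                    # mean number in system
    W = 1 / (service_rate - arrival_rate)  # mean response time
    return rho, L, W

rho, L, W = mm1_metrics(2.0, 4.0)  # e.g. 2 req/s arriving, served at 4 req/s
```

    Feeding an architectural model into formulas like these is what lets the methodology return response-time guidance to the designer before any code exists.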

    Embedded System Design

    A unique feature of this open access textbook is that it provides a comprehensive introduction to the fundamental knowledge in embedded systems, with applications in cyber-physical systems and the Internet of Things. It starts with an introduction to the field and a survey of specification models and languages for embedded and cyber-physical systems. It provides a brief overview of hardware devices used for such systems and presents the essentials of system software for embedded systems, including real-time operating systems. The author also discusses evaluation and validation techniques for embedded systems and provides an overview of techniques for mapping applications to execution platforms, including multi-core platforms. Embedded systems have to operate under tight constraints and, hence, the book also contains a selected set of optimization techniques, including software optimization techniques. The book closes with a brief survey on testing. This fourth edition has been updated and revised to reflect new trends and technologies, such as the importance of cyber-physical systems (CPS) and the Internet of Things (IoT), the evolution from single-core to multi-core processors, and the increased importance of energy efficiency and thermal issues.

    Modeling and formal verification of probabilistic reconfigurable systems

    In this thesis, we propose a new approach for formal modeling and verification of adaptive probabilistic systems. Dynamic reconfigurable systems are the trend in future technological systems, such as flight control systems, vehicle electronic systems, and manufacturing systems. In order to meet user and environmental requirements, such a dynamic reconfigurable system has to actively adjust its configuration at run-time by modifying its components and connections whenever changes are detected in the internal or external execution environment. These changes may, however, violate memory usage, energy, or real-time constraints, since the behavior of the system is unpredictable. They might also make the system's functions unavailable for some time and pose potential harm to human life or large financial investments. Thus, updating a system with any new configuration requires that the reconfigured system fully satisfies the related constraints. We introduce the GR-TNCES formalism for the optimal functional and temporal specification of probabilistic reconfigurable systems under resource constraints. It enables the optimal specification of the probabilistic, energy, and memory constraints of such a system. To formally verify the correctness and safety of such a probabilistic system specification, and the non-violation of its properties, an automatic transformation from GR-TNCES models into PRISM models is introduced. Moreover, a new logic, XCTL, is proposed to formally verify reconfigurable systems; it enables the formal certification of incomplete and reconfigurable systems. A new version of the software ZIZO is also proposed to model, simulate, and verify such GR-TNCES models. To prove its relevance, ZIZO was applied to case studies: it was used to model and simulate the behavior of the IPv4 protocol to prevent violations of energy and memory resource constraints.
It was also used to optimize the energy consumption of an automotive skid conveyor. This thesis presents a new approach to the formal modeling and verification of dynamically reconfigurable systems. Dynamic reconfigurable systems appear in many current and future applications, such as flight control systems, vehicle electronics, and manufacturing systems, and they exhibit probabilistic, adaptive behavior. To satisfy user and environmental requirements continuously, such a system must actively adapt its configuration at run-time by modifying its components, the connections between components, and its data (adaptive) as soon as changes are detected in the internal or external execution environment (probabilistic). These adaptations must not violate constraints on memory usage, required energy, or existing real-time conditions. An unchecked reconfiguration could leave the system's functions unavailable for some time and could potentially endanger human life or cause major financial damage. Updating a system with a new configuration therefore requires that the reconfigured system fully respects the associated constraints. To check this, this thesis proposes the GR-TNCES formalism, an extension of Petri nets, for the optimal functional and temporal specification of probabilistic reconfigurable systems under resource constraints. The resulting models are verified by probabilistic model checking, for which the established software PRISM is well suited. To enable this verification, the thesis describes a procedure for transforming GR-TNCES models into PRISM models, and a newly introduced logic (XCTL) allows a simple description of the properties to be checked. These steps have been implemented in a software environment for automated design, simulation, and formal verification (via the automatic transformation to PRISM). A case study demonstrates the application of the method.
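    The admission step behind safe reconfiguration, checking that a candidate configuration respects resource bounds and a reliability threshold before it is committed, can be sketched as a filter. The configuration names, resource numbers, and success probabilities below are invented for illustration; this is not GR-TNCES or PRISM semantics:

```python
# Hypothetical configurations with resource demands and a success
# probability for the reconfiguration step (illustrative numbers only).
CONFIGS = {
    "low_power":  {"memory": 40,  "energy": 10, "p_ok": 0.99},
    "balanced":   {"memory": 70,  "energy": 25, "p_ok": 0.95},
    "full_speed": {"memory": 120, "energy": 60, "p_ok": 0.90},
}

def admissible(limits, min_p):
    """Keep only configurations that respect the memory/energy limits and
    whose reconfiguration success probability meets the threshold."""
    return sorted(
        name for name, c in CONFIGS.items()
        if c["memory"] <= limits["memory"]
        and c["energy"] <= limits["energy"]
        and c["p_ok"] >= min_p
    )

ok = admissible({"memory": 100, "energy": 30}, 0.95)
```

    A probabilistic model checker generalises this point check: instead of a single `p_ok` per step, it computes the probability of constraint violation over all reachable reconfiguration paths.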

    Optimizing performance of workflow executions under authorization control

    Business processes and workflows are often used to model enterprise or scientific applications, and automating workflow executions on computing resources has received considerable attention. However, many workflow scenarios still involve human activities and consist of a mixture of human tasks and computing tasks. Human involvement introduces security and authorization concerns, requiring restrictions on who is allowed to perform which tasks at what time. Role-Based Access Control (RBAC) is a popular authorization mechanism. In RBAC, authorization concepts such as roles and permissions are defined, and various authorization constraints are supported, including separation of duty, temporal constraints, and so on. Under RBAC, users are assigned to certain roles, while the roles are associated with prescribed permissions. When we assess resource capacities, or evaluate the performance of workflow executions on supporting platforms, it is often assumed that when a task is allocated to a resource, the resource will accept the task and start the execution once a processor becomes available. When authorization policies are taken into account, however, this assumption may not hold and the situation becomes more complex. For example, when a task arrives, a valid and activated role has to be assigned to it before it can start execution. The deployed authorization constraints may delay the workflow execution because of role availability or other restrictions on role assignments, which in turn has a negative impact on application performance. When authorization constraints restrict workflow executions, new research issues arise that have not yet been studied in conventional workflow management. This thesis investigates these new research issues.
First, it is important to know whether a feasible authorization solution can be found that enables the execution of all tasks in a workflow, i.e., to check the feasibility of the deployed authorization constraints. This thesis studies this feasibility checking and models it as a constraint satisfaction problem. Second, it is useful to know when the performance of workflow executions will not be affected by the given authorization constraints. This thesis proposes methods to determine the time durations during which the given authorization constraints have no impact. Third, when the authorization constraints do have a performance impact, how can we quantitatively analyse and determine that impact? When there are multiple choices for assigning roles to tasks, do different choices lead to different performance impacts? If so, can we find an optimal way to conduct the task-role assignments so that the performance impact is minimized? This thesis proposes a method to analyse the delay caused by the authorization constraints when the workflow arrives outside the non-impact time durations calculated above. Through the analysis of the delay, we find that the authorization method, i.e., the method used to select the roles assigned to the tasks, affects the length of the delay caused by the authorization constraints. Based on this finding, we propose an optimal authorization method, called the Global Authorization Aware (GAA) method. Fourth, a key reason why authorization constraints may affect performance is that the authorization control directs the tasks to particular roles. How, then, can we determine the level of workload directed to each role given a set of authorization constraints?
This thesis conducts a theoretical analysis of how the authorization constraints direct the workload to the roles, and proposes methods to calculate the arrival rate of the requests directed to each role under role, temporal, and cardinality constraints. Finally, the amount of resources allocated to support each individual role may affect the execution performance of the workflows. It is therefore desirable to develop strategies for determining an adequate amount of resources when authorization control is present in the system. This thesis presents methods to allocate appropriate quantities of resources, including both human resources and computing resources, taking their different features into account. For human resources, the objective is to maximize performance subject to the budget available to hire them, while for computing resources, the strategy aims to allocate an adequate amount of computing resources to meet the QoS requirements.
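    The feasibility check phrased above as a constraint satisfaction problem can be illustrated by brute force over a tiny instance: find a task-to-role assignment in which each task gets an authorized role and separation-of-duty pairs get distinct roles. The task and role names are invented, and real instances need a proper CSP solver rather than enumeration:

```python
from itertools import product

def feasible_assignment(tasks, authorized, sod):
    """Return a task->role assignment satisfying the authorization and
    separation-of-duty constraints, or None if none exists."""
    names = list(tasks)
    for combo in product(*(authorized[t] for t in names)):
        assign = dict(zip(names, combo))
        # separation of duty: listed task pairs must use different roles
        if all(assign[a] != assign[b] for a, b in sod):
            return assign
    return None

plan = feasible_assignment(
    ["prepare", "approve"],
    {"prepare": ["clerk", "manager"], "approve": ["manager"]},
    [("prepare", "approve")],   # the same role may not do both tasks
)
```

    When several assignments are feasible, the choice among them is exactly the degree of freedom the GAA method exploits to minimize authorization-induced delay.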