
    Profiling the publish/subscribe paradigm for automated analysis using colored Petri nets

    UML sequence diagrams are used to graphically describe the message interactions between the objects participating in a certain scenario. Combined fragments extend the basic functionality of UML sequence diagrams with control structures, such as sequences, alternatives, iterations, or parallels. In this paper, we present a UML profile to annotate sequence diagrams with combined fragments to model timed Web services with distributed resources under the publish/subscribe paradigm. This profile is exploited to automatically obtain a representation of the system based on Colored Petri nets using a novel model-to-model (M2M) transformation. This M2M transformation has been specified using QVT and has been integrated into a new add-on extending a state-of-the-art UML modeling tool. The generated Petri nets can be used immediately in well-known Petri net software, such as CPN Tools, to analyze the system behavior. Hence, our model-to-model transformation tool allows the system to be simulated and design errors to be found in the early stages of system development, so that they can be fixed at these early phases, potentially saving development costs.
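
    For intuition, the following is a minimal sketch, in Python rather than CPN Tools, of how a colored Petri net with a guarded transition can capture one publish/subscribe step; the places, the transition, and the token colors are illustrative assumptions, not the output of the paper's QVT transformation.

```python
# A minimal sketch of a colored Petri net interpreter, assuming a toy
# publish/subscribe net (the names "published", "subscriptions", "deliver"
# are illustrative, not generated by the paper's QVT transformation).

import random

class ColoredPetriNet:
    def __init__(self):
        self.marking = {}            # place -> list of colored tokens
        self.transitions = {}        # name -> (input places, output fn, guard)

    def add_place(self, place, tokens=()):
        self.marking[place] = list(tokens)

    def add_transition(self, name, inputs, produce, guard=lambda *ts: True):
        self.transitions[name] = (inputs, produce, guard)

    def enabled(self):
        for name, (inputs, _, guard) in self.transitions.items():
            if all(self.marking[p] for p in inputs):
                tokens = [self.marking[p][0] for p in inputs]
                if guard(*tokens):
                    yield name

    def fire(self, name):
        inputs, produce, _ = self.transitions[name]
        tokens = [self.marking[p].pop(0) for p in inputs]
        for place, token in produce(*tokens):
            self.marking[place].append(token)

# Toy scenario: a publisher has emitted an event, the broker matches the
# subscriber's topic, and the event is delivered.
net = ColoredPetriNet()
net.add_place("published", [("sensorA", "temperature", 21.5)])
net.add_place("subscriptions", [("client1", "temperature")])
net.add_place("delivered")

net.add_transition(
    "deliver",
    inputs=["published", "subscriptions"],
    guard=lambda ev, sub: ev[1] == sub[1],                    # topic match
    produce=lambda ev, sub: [("delivered", (sub[0], ev)),     # notify client
                             ("subscriptions", sub)],         # keep subscription
)

while True:
    enabled = list(net.enabled())
    if not enabled:
        break
    net.fire(random.choice(enabled))

print(net.marking["delivered"])
```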

    Performance requirements verification during software systems development

    Requirements verification refers to the assurance that the implemented system reflects the specified requirements. Requirements verification is a process that continues throughout the life cycle of the software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance. Performance requirement verification is known as software performance evaluation. This thesis looks at the performance evaluation of software systems. The performance evaluation of software systems is a hugely valuable task, especially in the early stages of a software project's development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies work by transforming the architectural models familiar in the software engineering field into performance models, which can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies. These are designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology. With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study on a software system design. This was demonstrated in a qualitative study in which the methodology and the tool deploying it were tested by software engineers with varying backgrounds and levels of experience, drawn from different sectors of the software development industry. The study results showed acceptance of this methodology and the UML-JMT tool. As performance verification is part of any software engineering methodology, we have to define frameworks that deploy performance requirements validation in the context of software engineering. The agile development paradigm was the result of changes in the overall environment of the IT and business worlds. Agile techniques are based on iterative development, where requirements, designs and developed programmes evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes seems to indicate that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
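
    To illustrate the kind of early design feedback a performance model provides, the following sketch computes closed-form M/M/1 estimates in Python; the rates and the response-time question are hypothetical, and the code is not part of the UML-JMT tool or the JMT suite.

```python
# A minimal sketch (not the UML-JMT tool itself) of the kind of estimate a
# queueing model gives early in development: mean utilisation, queue length
# and response time of a single M/M/1 service centre under assumed rates.

def mm1_metrics(arrival_rate, service_rate):
    """Classic closed-form M/M/1 results; both rates in requests/second."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = arrival_rate / service_rate          # utilisation
    n = rho / (1 - rho)                        # mean number in system
    r = 1 / (service_rate - arrival_rate)      # mean response time (s)
    return {"utilisation": rho, "mean_jobs": n, "mean_response_time_s": r}

# Hypothetical design question: can one server meet a 0.5 s response-time
# requirement at 8 requests/s if its mean service time is 100 ms?
print(mm1_metrics(arrival_rate=8.0, service_rate=10.0))
```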

    Animation-based validation of reactive software systems using behavioural models

    Doctoral thesis in Informatics. During the development of software systems, validation is a crucial activity to guarantee that the software system fulfills the users' needs and expectations. A key issue for successful validation is adopting a process in which users and clients can actively discuss the requirements of the system under development. A reactive system is expected to interact continuously with its environment. Usually, the interaction of a reactive system with its environment is supported by a set of non-terminating processes that operate in parallel. During the interaction, the reactive system must answer high-priority events, even when the system is executing something else. Due to these characteristics, the behaviour of reactive systems can be very complex. The approach suggested in this thesis assumes that the requirements of reactive software systems are partially described by use case diagrams, and that each use case is detailed by a collection of scenario descriptions. Within this approach, one can obtain, from a set of behavioural scenarios of a given system, an executable behavioural model that can support, when complemented with animation- and domain-specific elements, a graphical animation reproducing that set of scenarios for validation purposes. Animating the scenarios using graphical elements from the application domain ensures an effective involvement of the users in the system's validation. The Coloured Petri nets (CPNs) modelling language is used as the notation for the behavioural models, due to its natural support for mechanisms like concurrency, synchronisation, and resource sharing, and to its tool support. The obtained CPN model is guaranteed to be (1) parametric, allowing an easy modification of the initial conditions of the scenarios, (2) environment-descriptive, meaning that it includes the state of the relevant elements of the environment, and (3) animation-separated, implying that the elements related to the animation are separated from the other ones. We validate our approach based on its application to three case studies of reactive systems. Fundação para a Ciência e a Tecnologia (FCT) SFRH/BD/19718/200
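
    As a rough illustration of the scenario-replay idea, the sketch below drives a hand-written behavioural model with a list of scenario events while keeping the animation callback separate from the model logic; the lift example and its events are invented for illustration, and the model is plain Python rather than a generated CPN.

```python
# A minimal sketch of the idea behind animation-based validation, assuming a
# hand-written behavioural model rather than a generated CPN: scenario events
# drive the model, and animation callbacks are kept separate from model logic.

class LiftModel:
    """Tiny reactive model (hypothetical example, not from the thesis)."""
    def __init__(self, floors=3):
        self.floor, self.doors_open, self.floors = 0, False, floors

    def handle(self, event):
        kind, *args = event
        if kind == "call" and not self.doors_open:
            target = args[0]
            self.floor = max(0, min(self.floors - 1, target))
        elif kind == "open":
            self.doors_open = True
        elif kind == "close":
            self.doors_open = False

def replay(model, scenario, animate=None):
    """Feed scenario events to the model; animation is an optional observer."""
    for event in scenario:
        model.handle(event)
        if animate:                      # animation layer, separated from model
            animate(event, model)

scenario = [("call", 2), ("open",), ("close",), ("call", 0)]
replay(LiftModel(),
       scenario,
       animate=lambda ev, m: print(f"{ev}: floor={m.floor} open={m.doors_open}"))
```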

    Quality assurance with dynamic meta modeling

    Dynamic Meta Modeling (DMM) is a semantics specification technique targeted at MOF-based modeling languages, where a language's behavior is defined by means of graphical operational rules that change runtime models. The DMM approach was first suggested by Engels et al. in 2000; Hausmann then defined the DMM language on a conceptual level in his PhD thesis in 2006. Consequently, the next step was to bring the existing DMM concepts to life and then to apply them to different modeling languages, using the lessons learned to improve the DMM concepts as well as the DMM tooling. The result of this process is the DMM++ method, which is presented in this thesis. Our contributions are three-fold: First, based on our experience with the DMM language, we have introduced new concepts such as refinement by means of rule overriding, and we have strengthened existing concepts such as the handling of universally quantified structures or attributes. Second, we have developed a test-driven process for semantics specification: a set of test models is created and their expected behavior is fixed; the DMM rules are then created incrementally, finally resulting in a DMM ruleset realizing at least the expected behavior of the test models. Additionally, we have defined a set of coverage criteria for DMM rulesets which make it possible to measure the quality of a set of test models. Third, we have shown how functional as well as non-functional requirements can be formulated against models and their DMM specifications. The former is achieved by providing a visual language for formulating temporal logic properties, which are then verified with model checking techniques, and by allowing for visual debugging of models that fail a requirement. For the latter, the modeler can add performance information to models and analyze their performance properties, e.g. average throughput. Date of defense: 04.07.2013. Paderborn, Univ., Diss., 201
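
    The following sketch conveys, under simplifying assumptions, the test-driven flavour of specifying operational rules: a rule rewrites a runtime model and a test fixes the expected behaviour beforehand. The runtime model is a plain Python dict standing in for a typed graph, and the rule is invented for illustration rather than taken from a DMM++ ruleset.

```python
# A minimal sketch of the test-driven flavour of DMM-style semantics: an
# operational rule rewrites a runtime model (here a plain dict standing in for
# a typed graph), and a test fixes the expected behaviour up front.
# Rule and model names are illustrative, not taken from the DMM++ rulesets.

def rule_consume_token(runtime):
    """Operational rule: a waiting action node with an offered token starts running."""
    for node in runtime["nodes"]:
        if node["kind"] == "action" and node["state"] == "waiting" and node["tokens"] > 0:
            node["tokens"] -= 1
            node["state"] = "running"
            return True                 # rule matched and was applied
    return False                        # no match: rule not applicable

def step(runtime, rules):
    """Apply the first applicable rule, as a very small-step semantics."""
    return any(rule(runtime) for rule in rules)

# Test model and its expected behaviour, written before the rules (test-driven).
runtime = {"nodes": [{"kind": "action", "state": "waiting", "tokens": 1}]}
assert step(runtime, [rule_consume_token])
assert runtime["nodes"][0]["state"] == "running"
assert not step(runtime, [rule_consume_token])   # no second token to consume
print("expected behaviour reproduced")
```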

    Studying the effects of adding spatiality to a process algebra model

    We use NetLogo to create simulations of two models of disease transmission originally expressed in WSCCS. This allows us to introduce spatiality into the models and explore the consequences of having different contact structures among the agents. In previous work, mean field equations were derived from the WSCCS models, giving a description of the aggregate behaviour of the overall population of agents. These results turned out to differ from results obtained by another team using cellular automata models, which differ from process algebra by being inherently spatial. By using NetLogo we are able to explore whether spatiality, and the resulting differences in the contact structures in the two kinds of models, are the reason for these differing results. Our tentative conclusions, based at this point on informal observations of simulation results, are that space does indeed make a big difference. If space is ignored and individuals are allowed to mix randomly, then the simulations yield results that closely match the mean field equations, and consequently also match the associated global transmission terms (explained below). At the opposite extreme, if individuals can only contact their immediate neighbours, the simulation results are very different from the mean field equations (and also do not match the global transmission terms). These results are not surprising, and are consistent with other cellular automata-based approaches. We found that it was easy and convenient to implement and simulate the WSCCS models within NetLogo, and we recommend this approach to anyone wishing to explore the effects of introducing spatiality into a process algebra model.
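
    The contrast between random mixing and nearest-neighbour contact can be sketched with a toy susceptible-infected model, shown below; the rates, the ring topology, and the update scheme are illustrative assumptions and do not reproduce the WSCCS or NetLogo models of the study.

```python
# A minimal sketch of the contrast the study explores, using a toy SI model
# rather than the original WSCCS/NetLogo models: random (global) mixing tracks
# the mean-field equation, while nearest-neighbour contact on a ring does not.

import random
random.seed(1)

N, BETA, STEPS = 200, 0.3, 60

def mean_field():
    i = 1 / N                                     # infected fraction
    for _ in range(STEPS):
        i += BETA * i * (1 - i)                   # discrete step of dI/dt = beta*I*(1-I)
    return i * N

def agent_based(spatial):
    state = ["S"] * N
    state[0] = "I"
    for _ in range(STEPS):
        new = list(state)
        for k, s in enumerate(state):
            if s != "I":
                continue
            # spatial: contact an immediate neighbour; global: contact anyone
            partner = random.choice([k - 1, (k + 1) % N]) if spatial else random.randrange(N)
            if state[partner] == "S" and random.random() < BETA:
                new[partner] = "I"
        state = new
    return state.count("I")

print("mean field       :", round(mean_field()))
print("random mixing    :", agent_based(spatial=False))
print("nearest neighbour:", agent_based(spatial=True))
```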

    Verification and validation of UML and SysML based systems engineering design models

    In this thesis, we address the issue of model-based verification and validation of systems engineering design models expressed using UML/SysML. The main objectives are to assess the design from its structural and behavioral perspectives and to enable a qualitative as well as a quantitative appraisal of its conformance with respect to its requirements and a set of desired properties. To this end, we elaborate a unified approach, not previously attempted, composed of three well-established techniques: model checking, static analysis, and software engineering metrics. These techniques are synergistically combined so that they yield a comprehensive and enhanced assessment. Furthermore, we propose to extend this approach with performance analysis and probabilistic assessment of SysML activity diagrams. Thus, we devise an algorithm that systematically maps these diagrams into corresponding probabilistic models encoded in the specification language of the probabilistic symbolic model checker PRISM. Moreover, we define a first-of-its-kind probabilistic calculus, namely the activity calculus, dedicated to capturing the essence of SysML activity diagrams and its underlying operational semantics in terms of Markov decision processes. Furthermore, we propose a formal syntax and operational semantics for the input language of PRISM. Finally, we mathematically prove the soundness of our translation algorithm with respect to the devised operational semantics using a simulation preorder defined on Markov decision processes.
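
    As a simplified illustration of the translation target, the sketch below encodes a small Markov decision process as a Python dict and computes the maximum probability of reaching a final state by value iteration; the states, actions, and probabilities are hypothetical, and the code stands in for neither the thesis's algorithm nor PRISM itself.

```python
# A minimal sketch of the kind of model the translation targets: a small
# Markov decision process (hand-written here, not produced by the thesis's
# algorithm or by PRISM) and value iteration for the maximum probability of
# reaching the "done" state. State and action names are illustrative.

# mdp[state][action] = list of (probability, next_state)
mdp = {
    "start":   {"invoke": [(1.0, "pending")]},
    "pending": {"retry":  [(0.9, "done"), (0.1, "pending")],
                "abort":  [(1.0, "failed")]},
    "done":    {},
    "failed":  {},
}

def max_reach_probability(mdp, target, iterations=200):
    value = {s: (1.0 if s == target else 0.0) for s in mdp}
    for _ in range(iterations):
        for s, actions in mdp.items():
            if s == target or not actions:
                continue
            value[s] = max(sum(p * value[t] for p, t in branches)
                           for branches in actions.values())
    return value

print(max_reach_probability(mdp, "done"))
```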

    Scalable analysis of stochastic process algebra models

    The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of components which constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception. This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for the performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. The computation of performance indices such as throughput, utilisation, and average response time is interpreted deterministically as a function of the ODE solution and is related to corresponding reward structures in the Markovian setting. The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system. This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and the accuracy of the PEPA differential analysis against the LQN mean-value approximation method. Finally, this thesis discusses the major elements concerning the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach presented herein, and a graphical framework for model editing and visualisation of performance evaluation results.
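
    The fluid idea can be sketched with a toy client/server model whose population counts evolve by a fixed-size system of ODEs integrated with explicit Euler steps; the components, rates, and the min-based cooperation rate below are illustrative assumptions, and the code is unrelated to the PEPA Eclipse Plug-in.

```python
# A minimal sketch of the fluid idea (not the PEPA Eclipse Plug-in): the counts
# of client and server components evolve by a fixed-size ODE, however many
# replicas there are. The client/server model and its rates are illustrative.

R_REQ, R_SRV = 1.0, 2.0                      # request and service rates
C, S = 500.0, 100.0                          # initial population levels

def derivatives(c_think, c_wait, s_idle, s_busy):
    request = R_REQ * min(c_think, s_idle)   # shared action: min of apparent rates
    serve = R_SRV * min(c_wait, s_busy)      # shared action: response to a client
    return (serve - request,                 # d(thinking clients)/dt
            request - serve,                 # d(waiting clients)/dt
            serve - request,                 # d(idle servers)/dt
            request - serve)                 # d(busy servers)/dt

def euler(state, dt=0.01, t_end=20.0):
    t = 0.0
    while t < t_end:
        d = derivatives(*state)
        state = tuple(x + dt * dx for x, dx in zip(state, d))
        t += dt
    return state

c_think, c_wait, s_idle, s_busy = euler((C, 0.0, S, 0.0))
throughput = R_SRV * min(c_wait, s_busy)     # deterministic estimate at t_end
print(round(c_wait, 1), round(s_busy, 1), "throughput ~", round(throughput, 1))
```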

    Novel Security Conscious Evaluation Criteria for Web Service Composition

    Abstract: This study presents a new mathematically based evaluation method for service composition with respect to security aspects. Web service composition, as a complex problem solver in service computing, has become one of the challenging issues in today's web environment. It creates a new value-added service by combining available basic services to address the problem requirements. Despite the importance of service composition in service computing, security issues have not been addressed in this area. Considering the rapid growth in the number of service-based transactions, building a secure composite service from candidate services with different security concerns is a demanding task. To deal with this challenge, different techniques have been employed which have direct impacts on the efficiency of secure service composition. Nonetheless, little work has been dedicated to investigating those impacts on service composition performance in depth. Therefore, the focus of this study is to evaluate the existing approaches based on their applied techniques and QoS aspects. A mathematically based, security-aware evaluation framework is proposed, in which the Analytic Hierarchy Process (AHP), a multiple-criteria decision-making technique, is adopted. The proposed framework is tested on state-of-the-art approaches, and the statistical analysis of the results demonstrates the efficiency and correctness of the proposed work.
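
    The AHP step at the heart of such a framework can be sketched as follows: priority weights are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgements; the security criteria and the comparison values below are hypothetical and are not those used in the study.

```python
# A minimal sketch of the AHP step at the core of such a framework: priority
# weights are the principal eigenvector of a pairwise comparison matrix, and a
# consistency ratio checks the judgements. The criteria and the example
# judgements below are hypothetical, not those of the evaluated approaches.

import numpy as np

criteria = ["confidentiality", "integrity", "availability"]

# pairwise[i, j] = how much more important criterion i is than criterion j
pairwise = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(pairwise)
k = np.argmax(eigenvalues.real)                       # principal eigenvalue
weights = np.abs(eigenvectors[:, k].real)
weights /= weights.sum()                              # normalised priorities

n = len(criteria)
ci = (eigenvalues[k].real - n) / (n - 1)              # consistency index
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12}[n]    # Saaty's random index
print(dict(zip(criteria, weights.round(3))), "CR =", round(ci / ri, 3))
```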