
    Generating a Performance Stochastic Model from UML Specifications

    Since its introduction by Connie Smith, Software Performance Engineering (SPE) has attracted growing attention. The idea is to bring performance evaluation into the software design process, allowing software designers to assess the performance of a system while it is being designed. Several approaches have been proposed to this end; some derive a performance model, such as a Stochastic Petri Net (SPN) or Stochastic Process Algebra (SPA) model, from a UML (Unified Modeling Language) model. Our work belongs to the same category: we propose to derive a Stochastic Automata Network (SAN) from a UML model in order to obtain performance predictions. Our approach is more flexible thanks to the modularity of SANs and their close resemblance to UML state-chart diagrams.
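
    As a minimal illustration of the SAN idea described above (not the authors' transformation), consider two independent two-state automata with invented rates: for non-synchronising automata, the generator of the global continuous-time Markov chain is the Kronecker sum of the local generators, from which steady-state probabilities, and hence performance predictions, follow.

```python
import numpy as np

# Local CTMC generators for two independent stochastic automata
# (rows sum to zero; off-diagonal entries are transition rates).
# The rates and two-state automata are illustrative placeholders.
Q1 = np.array([[-2.0,  2.0],
               [ 1.0, -1.0]])   # automaton A: e.g. idle <-> busy
Q2 = np.array([[-0.5,  0.5],
               [ 3.0, -3.0]])   # automaton B: e.g. up <-> down

# SAN descriptor for non-synchronising automata: the Kronecker sum
# Q = Q1 (+) Q2 = kron(Q1, I) + kron(I, Q2) is the global generator.
I1, I2 = np.eye(len(Q1)), np.eye(len(Q2))
Q = np.kron(Q1, I2) + np.kron(I1, Q2)

# Steady-state distribution: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # probability of each global state (A-state, B-state)
```

    The modularity the abstract mentions is visible even in this toy: each automaton is specified separately, and the composition is purely structural.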

    Performance Modeling and Analysis of Software Architectures Specified Through Graph Transformations

    Software architecture plays an important role in the success of modern, large, distributed software systems. For many software systems -- especially safety-critical ones -- it is important to specify their architectures using formal modeling notations, which make it possible to assess functional and non-functional properties of the designed models. Graph Transformation Systems (GTS) are a formal yet understandable language well suited to architectural modeling. Most existing work on architectural modeling and analysis with GTS concentrates on functional aspects, while for many systems it is crucial to consider non-functional aspects at the architectural level as well. In this paper, we present an approach to the performance analysis of software architectures specified through GTS. To do so, we first enrich an existing architectural style -- specified through GTS -- with performance information. Then, performance models are generated from the enriched GTS models in PEPA (Performance Evaluation Process Algebra), a formal language based on stochastic process algebra. Finally, we analyze measures such as throughput and the utilization of individual software components on the generated performance models. All the main concepts are illustrated through a case study.
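
    A sketch of the kind of measures such a PEPA analysis yields, using an invented two-state server (in the spirit of a PEPA model of the form Server = (request, lam).(serve, mu).Server) reduced to its underlying CTMC: utilization is the steady-state probability of the busy state, and throughput is the rate of completed serve actions.

```python
import numpy as np

# Toy CTMC for a server that accepts requests at rate lam and serves
# at rate mu; states: 0 = idle, 1 = busy. Rates are assumptions.
lam, mu = 1.5, 4.0
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Steady state: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)

utilization = pi[1]        # fraction of time the server is busy
throughput = mu * pi[1]    # completed 'serve' actions per time unit
print(f"utilization={utilization:.3f}, throughput={throughput:.3f}")
```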

    Modelling Security of Critical Infrastructures: A Survivability Assessment

    Critical infrastructures, usually designed to handle disruptions caused by human errors or random acts of nature, comprise assets whose normal operation must be guaranteed to maintain the essential services of daily life. Deliberate malicious attacks on these targets must be considered during system design, and defence plans developed in advance to face such situations. In this paper, we present a Unified Modelling Language profile, named SecAM, that enables the modelling and security specification of critical infrastructures during the early phases (requirements, design) of the system development life cycle. SecAM enables the security assessment, through survivability analysis, of different security solutions before system deployment. As a case study, we evaluate the survivability of the Saudi Arabia crude-oil network under two different attack scenarios. The stochastic analysis, carried out with Generalized Stochastic Petri nets, quantitatively estimates how attack damage to the crude-oil network can be minimised.
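
    A rough sketch of the flavour of survivability figures such a stochastic analysis produces, using an invented three-state attack/failure/repair CTMC rather than the paper's GSPN of the crude-oil network; the steady-state probability of the operational state serves as an availability measure.

```python
import numpy as np

# Toy availability model in the spirit of a GSPN analysis (all rates
# invented): 0 = operational, 1 = under attack (degraded), 2 = failed.
attack, detect, fail, repair = 0.1, 2.0, 0.5, 1.0
Q = np.array([
    [-attack,           attack,           0.0   ],  # operational -> attacked
    [  detect, -(detect + fail),          fail  ],  # attacked -> recovered or failed
    [  repair,           0.0,            -repair],  # failed -> repaired
])

# Steady state: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
print(f"steady-state availability (operational) = {pi[0]:.4f}")
```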

    06161 Abstracts Collection -- Simulation and Verification of Dynamic Systems

    From 17.04.06 to 22.04.06, the Dagstuhl Seminar 06161 "Simulation and Verification of Dynamic Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Performance requirements verification during software systems development

    Requirements verification refers to the assurance that the implemented system reflects the specified requirements; it is a process that continues throughout the life cycle of the software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance, and performance requirement verification is known as Software Performance Evaluation. This thesis looks at the performance evaluation of software systems, a hugely valuable task, especially in the early stages of software project development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies work by transforming the architectural models familiar to software engineers into performance models, which can be analysed to obtain the expected performance characteristics of the projected system.

    This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies, designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. The thesis introduces two model transformation methodologies, the improved state-marking methodology and the UML-EQN methodology, along with the UML-JMT tool, which was built to realise the UML-EQN methodology. With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study of a software system design. This was demonstrated in a qualitative study in which the methodology and the tool deploying it were tested by software engineers with varying backgrounds and levels of experience, drawn from different sectors of the software development industry. The study results showed acceptance of the methodology and the UML-JMT tool.

    As performance verification is part of any software engineering methodology, frameworks are needed that deploy performance requirements validation in the context of software engineering. The agile development paradigm arose from changes in the overall environment of the IT and business worlds; agile techniques are based on iterative development, where requirements, designs and developed programmes evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes indicates that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
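
    A minimal sketch of the analytic results a queueing-network solver such as JMT computes, here for a single M/M/1 station with invented rates; the UML-EQN methodology's contribution is deriving such models automatically from UML designs, which this fragment does not attempt.

```python
# Single M/M/1 station: Poisson arrivals at rate lam, exponential
# service at rate mu (illustrative values, requires lam < mu).
lam, mu = 8.0, 10.0

rho = lam / mu               # utilization
n_avg = rho / (1 - rho)      # mean number of requests at the station
r_avg = 1 / (mu - lam)       # mean response time; Little's law: N = lam * R

print(f"utilization={rho:.2f}, avg queue length={n_avg:.2f}, "
      f"avg response time={r_avg * 1000:.0f} ms")
```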

    WS-Pro: a Petri net based performance-driven service composition framework

    As an emerging area gaining prevalence in industry, Web Services was established to satisfy the need for better flexibility and higher reliability in web applications. However, due to the lack of reliable frameworks and the difficulty of constructing a versatile service composition platform, web developers have encountered major obstacles in the large-scale deployment of web services. Meanwhile, performance has been one of the major concerns and a largely unexplored area in Web Services research. There is high demand for feasible solutions to design, monitor, and deploy web service systems that can adapt to failures, especially performance failures. Though many techniques have been proposed, none of them offers a comprehensive solution to the difficulties that challenge practitioners.

    Central to performance engineering, performance analysis and performance adaptation are of paramount importance to the success of a software project; the industry has learned through many hard lessons the significance of well-founded and well-executed performance engineering plans. It is too expensive to tackle performance evaluation, mostly through performance testing, after the software is developed, especially in recent decades as software complexity has risen sharply. After a system is deployed, performance adaptation is essential to maintaining and improving its reliability: it provides techniques to mitigate the consequences of performance failures, and is particularly meaningful for mission-critical software systems and for systems with inevitably frequent performance failures, such as Web Services.

    This dissertation focuses on the Web Services framework and proposes a performance-driven service composition scheme, called WS-Pro, that supports both performance analysis and performance adaptation. A formal transformation from WS-BPEL to Petri nets is first defined to enable the analysis of system properties and facilitate quality prediction. A state-transition-based proof shows that the transformed Petri net model correctly simulates the behavior of the WS-BPEL process. The generated Petri net model is augmented with performance data supplied by both historical and runtime measurements; results of executing the Petri nets suggest that optimal composition plans can be achieved with the proposed method. The performance of the service composition procedure itself is an important research issue that has not been sufficiently treated, yet it is critical for dynamic service composition, where re-planning must be done in a timely manner. To improve the performance of the composition procedure and enhance performance adaptation, this dissertation presents an algorithm that removes loops from the reachability graphs so that a large portion of the computation time of service composition can be moved to a pre-processing stage, shortening the response time at runtime. We also extend WS-Pro to the ubiquitous computing area to improve fault tolerance.
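
    A toy sketch of the Petri net machinery underlying this line of work: a three-transition net standing in for a small service composition fragment (invented, not the dissertation's WS-BPEL transformation), together with a breadth-first construction of its reachability graph, the structure on which loop-removal pre-processing would operate.

```python
from collections import deque

# Minimal 1-safe place/transition net; markings are frozensets of
# marked places. A transition is (name, consumed_places, produced_places).
transitions = [
    ("invokeA", frozenset({"start"}), frozenset({"p1"})),
    ("invokeB", frozenset({"p1"}),    frozenset({"done"})),
    ("invokeC", frozenset({"p1"}),    frozenset({"done"})),  # alternative branch
]

def enabled(marking, t):
    return t[1] <= marking          # all input places marked

def fire(marking, t):
    return (marking - t[1]) | t[2]  # consume inputs, produce outputs

# Breadth-first construction of the reachability graph.
initial = frozenset({"start"})
seen, edges, queue = {initial}, [], deque([initial])
while queue:
    m = queue.popleft()
    for t in transitions:
        if enabled(m, t):
            m2 = fire(m, t)
            edges.append((sorted(m), t[0], sorted(m2)))
            if m2 not in seen:
                seen.add(m2)
                queue.append(m2)

for src, name, dst in edges:
    print(src, f"--{name}-->", dst)
```

    Executing performance-annotated versions of such graphs offline is what makes it possible to shift composition-planning cost out of the runtime path.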

    A template-based methodology for the specification and automated composition of performability models

    Dependability and performance analysis of modern systems is facing great challenges: their scale is growing, and they are becoming massively distributed, interconnected, and evolving. Such complexity makes model-based assessment a difficult and time-consuming task. For the evaluation of large systems, reusable submodels are typically adopted as an effective way to address the complexity and to improve the maintainability of models. When using state-based models, a common approach is to define libraries of generic submodels, and then compose concrete instances by state sharing, following predefined “patterns” that depend on the class of systems being modeled. However, such composition patterns are rarely formalized, or not even documented at all. In this paper, we address this problem with a model-driven approach that combines a language for specifying reusable submodels and composition patterns with an automated composition algorithm. Clearly defined libraries of reusable submodels, together with patterns for their composition, allow complex models to be assembled automatically from a high-level description of the scenario to be evaluated. The paper focuses on: formally defining the concept of model templates; defining a specification language for model templates; defining an automated instantiation and composition algorithm; and applying the approach to a case study of a large-scale distributed system.
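
    A loose sketch of the template idea, with invented names and structure rather than the paper's specification language: a generic failure/repair submodel is instantiated once per component, and composition merges instances on a shared state variable ('alarm' here), mirroring composition by state sharing.

```python
# Hypothetical template: each instance is a dict of transitions over
# instance-local states, plus a state variable shared across instances.
def failure_template(name, fail_rate, repair_rate, shared_alarm="alarm"):
    """Generic submodel: a component that fails and is repaired.
    The shared 'alarm' variable is visible to all instances."""
    return {
        f"{name}.fail":   {"from": f"{name}.up",   "to": f"{name}.down",
                           "rate": fail_rate,   "sets": shared_alarm},
        f"{name}.repair": {"from": f"{name}.down", "to": f"{name}.up",
                           "rate": repair_rate, "clears": shared_alarm},
    }

def compose(*submodels):
    """Union of submodels; shared state variables are merged by name."""
    model = {}
    for sub in submodels:
        model.update(sub)
    return model

# Instantiate the template twice and compose; both share 'alarm'.
system = compose(failure_template("disk", 0.01, 1.0),
                 failure_template("net",  0.05, 2.0))
for t, spec in system.items():
    print(t, spec)
```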

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite their often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and on its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
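
    A minimal sketch of the computation behind a query like the sensor-failure property quoted above: a step-bounded reachability probability in a discrete-time Markov chain, computed by value iteration over an invented four-state model (production probabilistic model checkers implement this far more efficiently).

```python
import numpy as np

# Invented DTMC; each row gives transition probabilities from a state.
P = np.array([
    [0.998, 0.001, 0.001, 0.0  ],   # 0: both sensors ok
    [0.9,   0.099, 0.0,   0.001],   # 1: sensor 1 failed
    [0.9,   0.0,   0.099, 0.001],   # 2: sensor 2 failed
    [0.0,   0.0,   0.0,   1.0  ],   # 3: both failed (absorbing)
])
target, k = 3, 10

# x[s] = P(reach target from s within i steps); iterate i = 1..k.
x = np.zeros(len(P))
x[target] = 1.0
for _ in range(k):
    x = P @ x
    x[target] = 1.0                 # target stays absorbing
print(f"P(both sensors fail within {k} steps) = {x[0]:.2e}")
```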