
    Modelling multi-tier enterprise applications behaviour with design of experiments technique

    Queueing network models are commonly used for performance modelling. However, during the application development stage, analytical models may fail to continuously reflect performance, for example due to performance bugs or minor changes in the application code that cannot be readily reflected in the queueing model. To cope with this problem, a measurement-based approach adopting the Design of Experiments (DoE) technique is proposed. The applicability of the proposed method is demonstrated on a complex 3-tier e-commerce application that is difficult to model with queueing networks.
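    As a rough illustration of the measurement-based idea, the sketch below runs a 2-level full factorial design over three tuning factors of a hypothetical 3-tier application and computes each factor's main effect on response time. The factor names, levels, and the synthetic measure() stub are illustrative assumptions, not details from the paper.

```python
# A minimal 2-level full factorial Design of Experiments (DoE) sketch.
# Factor names, levels, and measure() are hypothetical placeholders.
from itertools import product

factors = {
    "db_pool_size":  (10, 50),   # assumed low/high levels
    "app_threads":   (8, 32),
    "cache_enabled": (0, 1),
}

def measure(config):
    # Stand-in for a real measurement run (e.g. a load test); returns a
    # synthetic mean response time in milliseconds.
    return (200.0
            - 1.0 * config["db_pool_size"]
            - 2.0 * config["app_threads"]
            - 40.0 * config["cache_enabled"])

# Execute every factor-level combination: the full 2^3 design.
runs = []
for levels in product(*factors.values()):
    cfg = dict(zip(factors, levels))
    runs.append((cfg, measure(cfg)))

# Main effect of a factor: mean response at its high level minus mean
# response at its low level, averaged over all other factor settings.
for name, (low, high) in factors.items():
    hi = [y for cfg, y in runs if cfg[name] == high]
    lo = [y for cfg, y in runs if cfg[name] == low]
    print(f"{name}: main effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.1f} ms")
```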

    A bibliography on formal methods for system specification, design and validation

    Literature on the specification, design, verification, testing, and evaluation of avionics systems was surveyed, providing 655 citations. Journal papers, conference papers, and technical reports are included. Manual and computer-based search methods were employed. The keywords used in the online search are listed.

    A model-driven approach to broaden the detection of software performance antipatterns at runtime

    Performance antipatterns document bad design practices that have a negative influence on system performance. In our previous work we formalized such antipatterns as logical predicates over four views: (i) the static view, which captures the software elements (e.g. classes, components) and the static relationships among them; (ii) the dynamic view, which represents the interactions (e.g. messages) that occur between the software entities to provide the system functionalities; (iii) the deployment view, which describes the hardware elements (e.g. processing nodes) and the mapping of the software entities onto the hardware platform; (iv) the performance view, which collects specific performance indices. In this paper we present a lightweight infrastructure that is able to detect performance antipatterns at runtime through monitoring. The proposed approach pre-evaluates such predicates, identifies the antipatterns whose static, dynamic, and deployment sub-predicates are validated by the current system configuration, and defers the verification of the performance sub-predicates to runtime. The proposed infrastructure leverages model-driven techniques to generate probes for monitoring the performance sub-predicates and detecting antipatterns at runtime. Comment: In Proceedings FESCA 2014, arXiv:1404.043
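    The sketch below illustrates this two-phase scheme under invented predicate bodies and metric names: the structural (static, dynamic, and deployment) sub-predicates are pre-evaluated against the system model offline, and only the performance sub-predicates of the surviving candidates are checked against monitored indices at runtime.

```python
# Two-phase antipattern detection sketch. The predicate bodies, thresholds
# and metric names are hypothetical, not the paper's formalization.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Antipattern:
    name: str
    structural: Callable[[Dict], bool]   # static + dynamic + deployment views
    performance: Callable[[Dict], bool]  # performance view (runtime indices)

# Example: a "Blob"-style antipattern with made-up thresholds.
blob = Antipattern(
    name="Blob",
    structural=lambda model: model["max_class_deps"] > 10,
    performance=lambda metrics: metrics["cpu_util"] > 0.9,
)

def precompute(antipatterns: List[Antipattern], model: Dict) -> List[Antipattern]:
    # Offline step: keep only antipatterns whose structural sub-predicates
    # hold for the current system configuration.
    return [ap for ap in antipatterns if ap.structural(model)]

def detect_at_runtime(candidates: List[Antipattern], metrics: Dict) -> List[str]:
    # Online step: monitoring probes feed fresh performance indices into
    # the remaining performance sub-predicates.
    return [ap.name for ap in candidates if ap.performance(metrics)]

candidates = precompute([blob], {"max_class_deps": 14})
print(detect_at_runtime(candidates, {"cpu_util": 0.95}))  # -> ['Blob']
```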

    Discrete events: Perspectives from system theory

    Systems Theory; differential/integral equations

    Performance Evaluation for Hybrid Architectures

    In this dissertation we discuss methodologies for estimating the performance of applications on hybrid architectures, systems that include various types of computing resources (e.g. traditional general-purpose processors, chip multiprocessors, reconfigurable hardware). A common use of hybrid architectures is to deploy coarse pipeline stages of an application on suitable compute units, with communication paths for transferring data between them. The first problem we focus on is sizing the data queues between the different processing elements of a hybrid system. Much of the discussion centers on our analytical models, which can be used to derive performance metrics of interest, such as throughput and stalling probability, for networks of processing elements with finite data buffering between them. We then turn to the reliability of performance models: we first present scenarios where our analytical models are reliable, and introduce tests that can detect when they do not apply. Broadening the question of reliability, we assess the accuracy and applicability of various evaluation methods, and present results from our experiments that show the need for measuring and accounting for operating system effects in architectural modeling and estimation.
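    As a rough, assumption-laden stand-in for this kind of analysis, the sketch below computes the stalling (blocking) probability and effective throughput of a single processing element with a finite buffer, using the textbook M/M/1/K queue; the dissertation's models for networks of elements are more general.

```python
# Blocking probability and throughput of an M/M/1/K queue: one server with
# capacity for k items (including the one in service). This is a standard
# textbook model used here only to illustrate the metrics discussed above.

def mm1k(arrival_rate: float, service_rate: float, capacity: int):
    rho = arrival_rate / service_rate
    if abs(rho - 1.0) < 1e-12:
        # When rho == 1 the state distribution is uniform over 0..k.
        p_block = 1.0 / (capacity + 1)
    else:
        # Probability an arriving item finds the system full.
        p_block = (1 - rho) * rho**capacity / (1 - rho**(capacity + 1))
    throughput = arrival_rate * (1 - p_block)  # accepted (non-stalled) rate
    return p_block, throughput

# A stage fed 8 items/s, serving 10 items/s, with room for 4 items.
p_block, thr = mm1k(8.0, 10.0, 4)
print(f"stall probability = {p_block:.3f}, throughput = {thr:.2f} items/s")
```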

    Performance by Unified Model Analysis (PUMA)

    Evaluation of the non-functional properties of a design (such as performance, dependability, and security) can be enabled by design annotations specific to the property to be evaluated. Performance properties, for instance, can be annotated on UML designs by using the UML Profile for Schedulability, Performance and Time (SPT). However, the communication between the design description in UML and the tools used for evaluating non-functional properties requires support, particularly for performance, where there are many alternative performance analysis tools that might be applied. This paper describes a tool architecture called PUMA, which provides a unified interface between different kinds of design information and different kinds of performance models, for example Markov models, stochastic Petri nets, process algebras, queues, and layered queues. The paper concentrates on the creation of performance models. The unified interface of PUMA is centered on an intermediate model called the Core Scenario Model (CSM), which is extracted from the annotated design model. Experience shows that the CSM is also necessary for cleaning and auditing the design information, and for providing default interpretations where it is incomplete, before creating a performance model.
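    A minimal sketch of this pipeline idea follows, with invented class and field names that do not reproduce the actual CSM metamodel: annotated scenario steps form an intermediate model, which is then translated into a simple queueing description by summing the annotated service demands per resource.

```python
# From an annotated-design-like scenario to a toy queueing description.
# Step, Scenario and the demand annotations are illustrative stand-ins
# for the intermediate (CSM-like) model, not the real metamodel.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Step:
    name: str
    resource: str      # processing resource the step executes on
    demand_ms: float   # annotated service demand (e.g. from SPT-style tags)

@dataclass
class Scenario:
    name: str
    steps: List[Step]

def to_queueing_model(scenario: Scenario) -> Dict[str, float]:
    # Translate the intermediate model into a queueing-network skeleton:
    # one service station per resource, with total demand per station.
    demands: Dict[str, float] = {}
    for step in scenario.steps:
        demands[step.resource] = demands.get(step.resource, 0.0) + step.demand_ms
    return demands

checkout = Scenario("checkout", [
    Step("validate", "app_cpu", 2.0),
    Step("query",    "db_disk", 8.0),
    Step("render",   "app_cpu", 3.0),
])
print(to_queueing_model(checkout))  # {'app_cpu': 5.0, 'db_disk': 8.0}
```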