16 research outputs found

    Software Performance Engineering for Cloud Applications – A Survey

    Get PDF
    Cloud computing enables application service providers to lease their computing capabilities for deploying applications depending on user QoS (Quality of Service) requirements. Cloud applications have different composition, configuration and deployment requirements. Quantifying the performance of applications in Cloud computing environments is a challenging task. Software performance engineering (SPE) techniques enable us to assess the performance requirements of software applications at the early stages of development. This assessment helps developers fine-tune their designs so that the targeted performance goals can be met. In this paper, we analyse performance-related issues of cloud applications and identify the SPE techniques currently available for cloud applications.

    Performance by Unified Model Analysis (PUMA)

    Get PDF
    Evaluation of non-functional properties of a design (such as performance, dependability, security, etc.) can be enabled by design annotations specific to the property to be evaluated. Performance properties, for instance, can be annotated on UML designs by using the UML Profile for Schedulability, Performance and Time (SPT). However, the communication between the design description in UML and the tools used for evaluating non-functional properties requires support, particularly for performance, where there are many alternative performance analysis tools that might be applied. This paper describes a tool architecture called PUMA, which provides a unified interface between different kinds of design information and different kinds of performance models, for example Markov models, stochastic Petri nets and process algebras, queues and layered queues. The paper concentrates on the creation of performance models. The unified interface of PUMA is centered on an intermediate model called Core Scenario Model (CSM), which is extracted from the annotated design model. Experience shows that CSM is also necessary for cleaning and auditing the design information, and for providing default interpretations in case it is incomplete, before creating a performance model.
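    To make the pipeline concrete, here is a minimal illustrative sketch of the idea of extracting annotated scenario steps into an intermediate model and aggregating them into queueing-model inputs. It is not PUMA's implementation; every class, field and function name below is hypothetical.

        # Hypothetical sketch of a design-to-performance-model pipeline in the
        # spirit of an intermediate scenario model; none of these names come from PUMA.
        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Step:                      # one scenario step from the annotated design
            name: str
            host: str                    # resource (processor) the step runs on
            demand_ms: float             # annotated CPU demand of the step

        @dataclass
        class CoreScenarioModel:         # stand-in for an intermediate scenario model
            steps: List[Step] = field(default_factory=list)

        def to_queueing_inputs(csm: CoreScenarioModel) -> Dict[str, float]:
            """Aggregate per-host service demands, the basic input of a queueing model."""
            demands: Dict[str, float] = {}
            for s in csm.steps:
                demands[s.host] = demands.get(s.host, 0.0) + s.demand_ms
            return demands

        csm = CoreScenarioModel([Step("parseRequest", "appServer", 2.0),
                                 Step("queryDB", "dbServer", 5.0),
                                 Step("render", "appServer", 1.5)])
        print(to_queueing_inputs(csm))   # {'appServer': 3.5, 'dbServer': 5.0}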

    Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)

    Get PDF
    In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many instances. Effective analysis is, however, hindered by two orthogonal sources of complexity. The first is the infamous problem of state space explosion: the analysis of a single model becomes intractable with its size. The second is due to massive parameter spaces to be explored, but such that computations cannot be reused across model instances. In this paper, we efficiently analyze many queuing models with the distinctive feature of more accurately capturing variability and uncertainty of execution rates by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances by applying simple delta operations affecting both the topology and the model's parameters. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once, using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
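    As a rough illustration of the kind of ODE-based (fluid) approximation referred to above, the sketch below integrates the standard fluid limit of a multi-server queue, dx/dt = lam - mu * min(x, c), instead of enumerating its state space. The model and all parameter values are invented for the example and are not taken from the paper.

        # Illustrative fluid (ODE) approximation of a single multi-server queue:
        # Euler integration of dx/dt = lam - mu * min(x, c), where x is the mean
        # number of jobs in the system. Parameters below are made up.
        def fluid_queue(lam, mu, servers, x0=0.0, dt=0.01, t_end=50.0):
            x, t, trace = x0, 0.0, []
            while t < t_end:
                x = max(x + dt * (lam - mu * min(x, servers)), 0.0)
                t += dt
                trace.append((t, x))
            return trace

        trace = fluid_queue(lam=8.0, mu=1.0, servers=10)
        print(f"approximate steady-state population: {trace[-1][1]:.2f}")  # ~8.00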

    From UML to LQN by XML algebra-based model transformations

    Full text link

    Transformación de diagramas de comportamiento a modelos de concurrencia (Transformation of behaviour diagrams into concurrency models)

    Full text link
    This Bachelor's thesis explores the use of model transformations from state machines to generalized stochastic Petri nets in the context of software performance engineering, and details the implementation of the theoretical study on a popular software development platform using model-driven engineering. The theoretical study focuses on the class of concurrency models known as Petri nets. This formalism can describe the behaviour of common systems through diagrams that are amenable to metric extraction and analysis, broadening the design horizons of finite-state automata, one of the most commonly used representations of system behaviour. The transformations therefore preserve all the properties of the state-machine model while adding the advantages of the new formalism, as well as additional facilities such as timing measures and resource control. The implementation aims to maximise the accessibility and agility with which users can describe their systems, giving them access to more complete analysis platforms without requiring complex initial designs. The application is built on established software development tools: the ATL model transformation system, developed by the French software tool vendor OBEO and the French National Institute for Research in Computer Science and Automation; the Acceleo code and template generator, developed by the Eclipse Foundation; and tools for handling stochastic analysis nets such as GreatSPN, developed by the University of Turin. With these standardised tools, the application aims to be as modular and interoperable as possible, allowing the project to be extended or coupled to larger-scale projects.
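    The following toy sketch only illustrates the basic shape of such a transformation (one place per state, one net transition per state-machine transition, with the initial state marked); the actual thesis work is done with ATL and Acceleo targeting GreatSPN, and the example state machine here is invented.

        # Toy mapping from a state machine to a Petri-net-like structure:
        # one place per state, one net transition per state-machine transition.
        def state_machine_to_petri_net(states, initial, transitions):
            places = {s: (1 if s == initial else 0) for s in states}   # initial marking
            net_transitions = [
                {"name": evt, "consumes": src, "produces": dst}
                for (src, evt, dst) in transitions
            ]
            return places, net_transitions

        places, net = state_machine_to_petri_net(
            states=["Idle", "Busy"],
            initial="Idle",
            transitions=[("Idle", "start", "Busy"), ("Busy", "done", "Idle")],
        )
        print(places)   # {'Idle': 1, 'Busy': 0}
        print(net[0])   # {'name': 'start', 'consumes': 'Idle', 'produces': 'Busy'}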

    Taguchi approach for performance evaluation of service-oriented software systems.

    Get PDF
    Service-oriented software systems are becoming increasingly common as big companies such as Microsoft and IBM advocate approaches that focus on assembling systems from distributed services. Although the performance of such systems is a major problem, there is a surprising lack of attention to evaluating the performance of enterprise-scale, service-oriented software systems. This thesis investigates the application of statistical tools in the performance engineering domain for total quality management. In particular, the Taguchi approach is used as an efficient and systematic way to optimize designs for performance, quality, and cost. The aim is to improve the performance of software systems and to reduce application development cost by assembling services from known vendors or intranet services. The focus of this thesis is on the response time of service-oriented systems. Nevertheless, the developed methodology also applies to other performance issues, such as memory management and caching; the interaction of these issues is left for future work. Thesis (M.Sc.), University of Windsor (Canada), 2004. Adviser: Xiaobu Yuan.
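    As a hedged illustration of how a Taguchi-style analysis can rank design alternatives by response time, the sketch below computes the smaller-the-better signal-to-noise ratio over a tiny full-factorial design with two hypothetical two-level factors (an orthogonal array would be used for more factors). The factors, levels and measurements are invented, not taken from the thesis.

        import math

        # Smaller-the-better S/N ratio used in Taguchi designs for response time:
        # S/N = -10 * log10( mean(y_i^2) ); higher is better (smaller, steadier times).
        def sn_smaller_the_better(samples):
            return -10.0 * math.log10(sum(y * y for y in samples) / len(samples))

        # Hypothetical runs over two service-selection factors, with measured
        # response times (ms) for each configuration.
        runs = {
            ("vendorA", "cacheOff"): [120, 130, 125],
            ("vendorA", "cacheOn"):  [90, 95, 92],
            ("vendorB", "cacheOff"): [110, 115, 112],
            ("vendorB", "cacheOn"):  [100, 105, 103],
        }
        best = max(runs, key=lambda cfg: sn_smaller_the_better(runs[cfg]))
        print("highest S/N (best) configuration:", best)   # ('vendorA', 'cacheOn')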

    Scenario-based verification and validation of dynamic UML specifications

    Get PDF
    The Unified Modeling Language (UML) is the result of the unification process of earlier object-oriented models and notations. Verification and validation (V&V) tasks, as applied to UML specifications, enable early detection of analysis and design flaws prior to implementation. In this work, we address four V&V analysis methods for UML dynamic specifications, namely: timing analysis and automatic V&V of timing constraints, automated architectural-level risk assessment, performance modeling, and fault injection analysis. For each, we present approaches, methods and/or automated techniques. We use two case studies, a Cardiac Pacemaker and a simplified Automatic Teller Machine (ATM) banking subsystem, to illustrate the developed techniques.
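    To make "automatic V&V of timing constraints" concrete, here is a minimal, hypothetical check that an end-to-end deadline annotated on a scenario is met by the sum of its step durations. The scenario, durations and deadline are invented and do not come from the case studies.

        # Hypothetical end-to-end timing check for a scenario: sum the annotated
        # worst-case step durations and compare against the scenario deadline.
        def check_deadline(steps, deadline_ms):
            total = sum(duration for _, duration in steps)
            return total <= deadline_ms, total

        scenario = [("readCard", 300), ("validatePIN", 700), ("dispenseCash", 1500)]
        ok, total = check_deadline(scenario, deadline_ms=3000)
        print(f"end-to-end time {total} ms, deadline met: {ok}")   # 2500 ms, True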

    WS-Pro: a Petri net based performance-driven service composition framework

    Get PDF
    As an emerging area gaining prevalence in industry, Web Services were established to satisfy the need for better flexibility and higher reliability in web applications. However, due to the lack of reliable frameworks and the difficulty of constructing versatile service composition platforms, web developers have encountered major obstacles in the large-scale deployment of web services. Meanwhile, performance has been one of the major concerns and a largely unexplored area in Web Services research. There is high demand for researchers to conceive and develop feasible solutions to design, monitor, and deploy web service systems that can adapt to failures, especially performance failures. Though many techniques have been proposed to solve this problem, none of them offers a comprehensive solution to overcome the difficulties that challenge practitioners. Central to performance engineering studies, performance analysis and performance adaptation are of paramount importance to the success of a software project. The industry has learned through many hard lessons the significance of well-founded and well-executed performance engineering plans. It is too expensive to tackle performance evaluation, mostly through performance testing, after the software is developed; this is especially true in recent decades, as software complexity has risen sharply. After a system is deployed, performance adaptation is essential to maintaining and improving software system reliability: it provides techniques to mitigate the consequences of performance failures and is therefore an important research issue, particularly for mission-critical software systems and for systems with inevitable, frequent performance failures, such as Web Services. This dissertation focuses on the Web Services framework and proposes a performance-driven service composition scheme, called WS-Pro, to support both performance analysis and performance adaptation. A formalism for transforming WS-BPEL into Petri nets is first defined to enable the analysis of system properties and to facilitate quality prediction. A state-transition-based proof shows that the transformed Petri net model correctly simulates the behavior of the WS-BPEL process. The generated Petri net model is augmented with performance data supplied by both historical and runtime data. Results of executing the Petri nets suggest that optimal composition plans can be achieved with the proposed method. The performance of the service composition procedure itself is an important research issue that has not been sufficiently treated by researchers, yet it is critical for dynamic service composition, where re-planning must be done in a timely manner. To improve the performance of the composition procedure and to enhance performance adaptation, this dissertation presents an algorithm that removes loops from the reachability graphs, so that a large portion of the computation of service composition can be moved to a pre-processing unit and the response time at runtime is shortened. We also extend WS-Pro to the ubiquitous computing area to improve fault tolerance.
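    The loop-removal step can be pictured as collapsing the cycles of a reachability graph so that runtime re-planning searches an acyclic graph. The sketch below is a generic strongly-connected-component condensation (Tarjan-style), offered only as an assumed illustration of that pre-processing idea, not as the dissertation's actual algorithm; the graph is invented.

        # Illustrative "loop removal": collapse each strongly connected component
        # (cycle) of a reachability graph into a single node, yielding an acyclic
        # graph that is cheaper to search at runtime. Generic Tarjan-style SCC
        # condensation; the tiny graph below is made up.
        def condense(graph):
            index, low, on_stack, stack = {}, {}, set(), []
            comp_of, counter, comps = {}, [0], []

            def strongconnect(v):
                index[v] = low[v] = counter[0]; counter[0] += 1
                stack.append(v); on_stack.add(v)
                for w in graph.get(v, []):
                    if w not in index:
                        strongconnect(w)
                        low[v] = min(low[v], low[w])
                    elif w in on_stack:
                        low[v] = min(low[v], index[w])
                if low[v] == index[v]:                      # v is the root of an SCC
                    comp = []
                    while True:
                        w = stack.pop(); on_stack.discard(w)
                        comp_of[w] = len(comps); comp.append(w)
                        if w == v:
                            break
                    comps.append(comp)

            for v in graph:
                if v not in index:
                    strongconnect(v)

            dag = {i: set() for i in range(len(comps))}
            for v, succs in graph.items():
                for w in succs:
                    if comp_of[v] != comp_of[w]:
                        dag[comp_of[v]].add(comp_of[w])
            return comps, dag

        graph = {"s0": ["s1"], "s1": ["s2"], "s2": ["s1", "s3"], "s3": []}
        comps, dag = condense(graph)
        print(comps)   # [['s3'], ['s2', 's1'], ['s0']]  (the s1<->s2 loop collapsed)
        print(dag)     # {0: set(), 1: {0}, 2: {1}}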

    Deriving a queueing network based performance model from UML diagrams

    No full text

    Prédiction de performance d'algorithmes de traitement d'images sur différentes architectures hardwares (Performance prediction of image processing algorithms on different hardware architectures)

    Get PDF
    In computer vision, the choice of a computing architecture is becoming more difficult for image processing experts. The number of architectures able to run image processing algorithms is growing, and so is the number of computer vision applications constrained by computing capacity, power consumption and size. Selecting a hardware architecture, such as a CPU, GPU or FPGA, is therefore an important issue for computer vision applications. The main goal of this study is to predict system performance at the beginning of a computer vision project: for a manufacturer or a researcher, the computing architecture should be selected as early as possible to minimize its impact on development. A large variety of methods and tools has been developed to predict the performance of computing systems, but they rarely target a specific domain and cannot predict performance without analyzing the code or benchmarking the architectures. In this work, we focus on predicting the performance of computer vision algorithms without any benchmarking, which requires splitting image processing algorithms into primitive blocks. In this context, a new paradigm based on decomposing every image processing algorithm into primitive blocks has been developed, together with a method to model these blocks according to software and hardware parameters. The decomposition into primitive blocks and their modeling were shown to be feasible: experiments on different architectures, with real data, using algorithms such as convolution and wavelets validated the proposed paradigm. This approach is a first step towards a tool that helps choose a hardware architecture and optimize image processing algorithms.
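    A toy cost model in the spirit of these primitive blocks is sketched below: the runtime of an algorithm is predicted as the sum of its blocks' costs, each parameterized by image size and a per-architecture throughput constant. The blocks, operation counts and throughput figures are all invented for illustration.

        # Toy "primitive block" cost model: predicted runtime = sum of block costs,
        # each parameterized by image size and an assumed per-architecture throughput.
        THROUGHPUT_GOPS = {"CPU": 50.0, "GPU": 900.0, "FPGA": 300.0}   # hypothetical

        def block_cost_ms(ops_per_pixel, width, height, arch):
            total_ops = ops_per_pixel * width * height
            return total_ops / (THROUGHPUT_GOPS[arch] * 1e9) * 1e3

        def predict_ms(blocks, width, height, arch):
            return sum(block_cost_ms(ops, width, height, arch) for _, ops in blocks)

        # A 3x3 convolution followed by a one-level wavelet transform on a 1080p image.
        algorithm = [("conv3x3", 18), ("wavelet", 8)]   # ops/pixel (assumed)
        for arch in THROUGHPUT_GOPS:
            print(arch, f"{predict_ms(algorithm, 1920, 1080, arch):.2f} ms")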