6 research outputs found

    A Review on Software Performance Analysis for Early Detection of Latent Faults in Design Models

    Organizations and society can face major breakdowns when IT systems fail to meet performance requirements, all the more so in an era of globalization in which emerging technologies add further complexity. Software design models may contain latent issues that affect the performance of the resulting software, and performance is often a neglected concern in industry. Identifying performance issues in the design phase saves time, money and effort, so software engineers need to understand performance requirements in order to develop quality software. Software performance engineering is a quantitative approach for building software systems that meet performance requirements. Many design models are based on UML, Petri Nets and Product-Forms; these can be used to derive performance models such as LQN, MSC and QNM. Design models are mapped to performance models in order to predict system performance early and provide valuable feedback for improving the quality of the system. With emerging distributed technologies such as EJB, CORBA, DCOM and SOA, applications have become very complex through collaboration with other software. Component-based, embedded and distributed software systems in particular need more systematic performance models that can improve their quality, and many techniques have been developed to this end. This paper throws light on software performance analysis and its present state of the art, reviewing the design models and performance models that provide valuable insights for making well-informed decisions.
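    As a concrete illustration of the kind of design-phase prediction the review surveys, the sketch below (not taken from the paper; the numbers and the M/M/1 choice are assumptions for illustration) evaluates the simplest queueing-network performance model a design model might be mapped to:

```python
# Illustrative sketch (not from the paper): predicting response time from a
# minimal queueing model, the kind of performance model (QNM) that design
# models such as UML diagrams are mapped to for early performance feedback.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable system: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# A design-phase what-if: does the service meet a 0.5 s response-time goal
# at 8 requests/s if each request needs 100 ms of processing (mu = 10/s)?
r = mm1_response_time(arrival_rate=8.0, service_rate=10.0)
print(f"predicted response time: {r:.2f} s")  # 1/(10-8) = 0.50 s
```

    A model this small already shows why early mapping pays off: changing an assumed service demand in the design model immediately changes the predicted response time, before any code exists.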

    Automatic performance modelling of black boxes targeting self-sizing

    Modern distributed systems are characterized by the growing complexity of their architecture, functionality and workload. This complexity, and in particular heavy workloads, often leads to loss of quality of service, saturation and sometimes unavailability of on-line services. To avoid the trouble caused by heavy workloads and to fulfill a given level of quality of service (such as response time), systems need to self-manage, for instance by tuning or strengthening one tier through replication. This autonomic feature requires performance models of the systems involved. To this end, we developed an automatic identification process that provides a queuing model for a part of a distributed system treated as a black box. This process is part of a general approach targeting self-sizing for distributed systems and combines theoretical and experimental work. In this report, we show how to automatically derive the performance model of a black box considered as a constituent of a distributed system, starting from load-injection experiments. The model is determined progressively, using self-regulated test injections, from statistical analysis of a measured metric, namely response time. The process is illustrated through experimental results.
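    A minimal sketch of the identification idea (assumed for illustration, not the report's actual procedure): if the black box behaves like an M/M/1 queue, each load-injection run supplies an (arrival rate, mean response time) pair, and inverting R = 1/(mu - lambda) recovers the unknown service rate:

```python
# Hypothetical black-box identification: each injection experiment fixes an
# arrival rate lambda and measures the mean response time R; for an M/M/1
# box, R = 1/(mu - lambda), so each run yields an estimate mu = lambda + 1/R.

def estimate_service_rate(measurements):
    """measurements: list of (arrival_rate, mean_response_time) pairs."""
    estimates = [lam + 1.0 / r for lam, r in measurements]
    return sum(estimates) / len(estimates)  # average over all injections

# Synthetic injections against a box whose true service rate is 10 req/s.
runs = [(2.0, 0.125), (5.0, 0.2), (8.0, 0.5)]  # R = 1/(10 - lambda)
mu = estimate_service_rate(runs)
print(f"estimated service rate: {mu:.1f} req/s")  # -> 10.0
```

    The report's process is richer (self-regulated injections, statistical analysis of noisy measurements), but the inversion above is the core of turning experiments into a queuing model.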

    Dynamic Resource Provisioning for an Interactive System

    In a data centre, server clusters are typically used to provide the processing capacity required to deliver acceptable response times to interactive applications. The workload of each application may be time-varying, so static allocation sized for peak demand is not an efficient use of resources. Dynamic resource allocation, on the other hand, can achieve efficient resource utilization while meeting the performance goals of individual applications. In this thesis, we develop a new interactive system model in which the number of logged-on users changes over time. Our objective is to obtain results that can guide dynamic resource allocation decisions. We derive approximate analytic results for the steady-state response time distribution of our model. Using numerical examples, we show that these results are acceptable for estimating the steady-state probabilities of the number of logged-on users. We also show, by comparison with simulation, that our results are acceptable for estimating the response time distribution under a variety of dynamic resource allocation scenarios. More importantly, we show that our results accurately predict the minimum number of processor nodes required to meet the performance goal of an interactive application. Such information is valuable for resource provisioning, and we discuss how our results can be used to guide dynamic resource allocation decisions.
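    The provisioning question at the end of the abstract can be sketched with a standard textbook model (an assumption: the thesis develops its own, richer response-time-distribution results, while this uses the classical Erlang-C waiting formula for an M/M/c cluster):

```python
import math

# Hedged sketch: find the smallest number of processor nodes c such that an
# M/M/c model of the cluster meets a mean-response-time goal. The Erlang-C
# formula gives the probability an arriving request must wait.

def erlang_c_wait_prob(c, lam, mu):
    a = lam / mu                      # offered load in Erlangs
    rho = lam / (c * mu)              # per-node utilization
    top = a**c / math.factorial(c) / (1 - rho)
    bottom = sum(a**k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def min_nodes(lam, mu, response_goal):
    c = max(1, math.ceil(lam / mu))
    while True:
        if lam < c * mu:              # only stable configurations qualify
            w = erlang_c_wait_prob(c, lam, mu) / (c * mu - lam) + 1.0 / mu
            if w <= response_goal:
                return c
        c += 1

# 40 req/s, 5 req/s per node, goal: mean response time <= 0.25 s
print(min_nodes(lam=40.0, mu=5.0, response_goal=0.25))  # -> 10
```

    Eight nodes would saturate (40 = 8 x 5) and nine still miss the goal, so the controller would provision ten; this is exactly the kind of minimum-capacity answer the thesis validates against simulation.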

    Application of real-time simulation to improve the quality of service of the middleware

    The use of applications of different natures within the same environment, a heterogeneous environment, is becoming increasingly common thanks to the incorporation of virtualization techniques into servers. Sharing a server offers advantages above all in terms of energy efficiency, space utilization and maintenance, and virtualization adds further advantages in separating the different applications or environments. Even so, the main difficulty for resource managers in heterogeneous environments is offering quality of service (QoS) to different applications, environments or workloads. An application doing streaming and one doing intensive computation will not normally collide, since the resources they use are different; two applications that both work the CPU, however, will collide. Our proposal gives these resource managers the capacity to predict such environments, specifically transactional and Grid environments, in order to increase QoS and performance. The predictions must use simulation techniques, because in most cases the system cannot be represented with analytic techniques, either because it is saturated or because it has features that are hard to model. Simulation is a technique used to predict the behaviour of systems in many areas. Simulations of hardware components are very common, given the cost of building the simulated systems (processors, memories, caches, ...); however, the use of simulation in complex environments such as the middleware, and its application in resource managers, remains rare. We propose lightweight simulations capable of producing results usable in these environments. The contributions of the thesis are: (i) the use of simulation methods to increase the performance and quality of service of these systems; (ii) the extension of a global monitoring system for mixed applications (Java and C) that lets us obtain information about what happens in the middleware and relate it to the system; and (iii) the creation of a resource manager capable of sharing resources in a heterogeneous environment, using prediction to take different quality-of-service parameters into account. The thesis presents the mechanisms for building the different simulators and the data-gathering and monitoring tools, as well as autonomic mechanisms that can feed on the predictions to produce better results. The results obtained, with great impact on QoS in the resource manager created for Globus, demonstrate that the methods applied in this thesis can be used to create intelligent resource managers that feed on predictions of the system to make decisions. Finally, we incorporate the simulations into a prototype heterogeneous resource manager capable of sharing resources between a transactional environment and a Grid environment within the same server.
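    A lightweight simulation in the spirit the thesis argues for can be sketched as follows (an illustrative single-server model, not the thesis's actual simulators): unlike the analytic formula, a discrete-event simulation still produces a usable estimate for a finite injected workload even near saturation.

```python
import random

# Hypothetical lightweight discrete-event simulation of a single-server
# queue: jobs arrive with exponential interarrival times, are served in
# order, and we record the mean response time over a finite workload.

def simulate_queue(arrival_rate, service_rate, n_jobs, seed=1):
    rng = random.Random(seed)
    clock = 0.0           # arrival clock
    server_free = 0.0     # time at which the server next becomes idle
    total_response = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)   # next arrival instant
        start = max(clock, server_free)          # wait if the server is busy
        server_free = start + rng.expovariate(service_rate)
        total_response += server_free - clock    # response = wait + service
    return total_response / n_jobs

print(f"mean response time: {simulate_queue(8.0, 10.0, 50000):.2f} s")
```

    A resource manager can run such a model online with measured arrival and service rates as inputs, which is the sense in which the thesis uses "real-time simulation" to drive QoS decisions.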

    Architecture-Level Software Performance Models for Online Performance Prediction

    Proactive performance and resource management of modern IT infrastructures requires the ability to predict at run-time how the performance of running services would be affected if the workload or the system changes. In this thesis, modeling and prediction facilities that enable online performance prediction during system operation are presented. Analyses of the impact of reconfigurations and workload trends can be conducted at the model level, without executing expensive performance tests.

    On The Use Of Performance Models To Design Self-Managing Computer Systems

    No full text
    In this paper, we describe an approach in which analytic performance models are combined with combinatorial search techniques to design controllers that run periodically (e.g., every few minutes) to determine the best possible configuration for the system given its workload. We first illustrate and motivate the ideas using a simulated multithreaded server. Then, we provide experimental results obtained by applying the techniques described here to an actual Web server subject to a workload generated by SURGE.
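    The model-plus-search combination can be sketched as below (a deliberately crude illustration, not the paper's controller: the thread pool is approximated as a single M/M/1 server of rate m*mu, and the combinatorial search is exhaustive over the thread count):

```python
# Hedged sketch of a periodic controller: an analytic model predicts the
# cost of each candidate configuration, and a search picks the best one.

def predicted_cost(m, lam, mu, thread_cost):
    if lam >= m * mu:
        return float("inf")                  # unstable configuration
    response_time = 1.0 / (m * mu - lam)     # crude M/M/1 approximation
    return response_time + thread_cost * m   # QoS vs. resource trade-off

def best_thread_count(lam, mu, thread_cost, max_threads=64):
    # Exhaustive search over candidate configurations; a real controller
    # would use smarter combinatorial search over a larger space.
    return min(range(1, max_threads + 1),
               key=lambda m: predicted_cost(m, lam, mu, thread_cost))

# 8 req/s, 2 req/s per thread, small per-thread cost
print(best_thread_count(lam=8.0, mu=2.0, thread_cost=0.01))  # -> 11
```

    Because the model evaluates in microseconds, the controller can re-run this search every few minutes as the measured arrival rate changes, which is the scheme the paper evaluates on a real Web server.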