9 research outputs found

    Performance queueing model for the UUM portal server

    Despite the increasing number of Web servers in use, relatively little is definitively known about their performance characteristics. Until now, there has been no complete model of Web server performance for the UUM Web Portal. The main objective of this study is to develop a Generalized System-Level Model of Web server performance for the UUM Web Portal. A system-level performance model views the system being modeled as a "black box" in which only the arrival rate and the service time are considered. Such a model is important for measuring Web server performance metrics such as server utilization, average server throughput, average number of packets in the server and average response time. This study assumes an infinite population and a finite queue, a suitable model because it is easy to define and fast to interpret while still representing the real situation; it also makes a complex problem easier to understand. The developed model can increase knowledge and understanding of the importance of system-level models of Web server performance, and it offers a baseline for detailed Web server assessment. Finally, it can assist management in making decisions about system performance to enhance the server system at the UUM Computer Centre as well.
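    As an illustration of the system-level metrics listed above, the following sketch assumes, purely for the example, an M/M/1/K queue (Poisson arrivals, exponential service, a single server and a finite buffer of K packets); the specific distributions, buffer size and rates are assumptions, not taken from the thesis:

        # System-level "black box" view: only the arrival rate and the service time are known.
        def mm1k_metrics(arrival_rate, service_rate, K):
            """Utilization, throughput, mean number in system and mean response time for M/M/1/K."""
            rho = arrival_rate / service_rate
            if rho == 1.0:
                p = [1.0 / (K + 1)] * (K + 1)
            else:
                norm = (1 - rho) / (1 - rho ** (K + 1))
                p = [norm * rho ** n for n in range(K + 1)]
            blocking = p[K]                              # probability an arriving packet is dropped
            throughput = arrival_rate * (1 - blocking)   # effective (accepted) arrival rate
            utilization = 1 - p[0]
            mean_in_system = sum(n * pn for n, pn in enumerate(p))
            mean_response = mean_in_system / throughput  # Little's law on accepted traffic
            return utilization, throughput, mean_in_system, mean_response

        # Example: 80 requests/s offered to a server that serves 100 requests/s with room for 50 packets.
        print(mm1k_metrics(80.0, 100.0, 50))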

    Architecture-Based Reliability Analysis of Web Services

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. Current approaches often treat the entire WS environment as a black box, so the sensitivity of the overall reliability and performance to the behavior of the underlying WS architectures and AS components is not well understood. In other words, current research on the architecture-based analysis of WSs is limited. This dissertation presents a novel methodology for modeling the reliability and performance of web services. WSs are treated as atomic entities, but the AS is broken down into layers; more specifically, the interactions of WSs with the underlying layers of an AS are investigated. One important feature of the research is investigating the impact of dynamic parameters that exist at the layers, such as configuration parameters, which may have a negative impact on WS performance if they are not configured properly. The WSs are developed in-house and the AS considered is JBoss AS. An experimental environment is set up so that controlled service requests can be generated and important performance metrics can be recorded under various configurations of the AS. In parallel, a simulation model is developed from the source code and run-time behavior of the existing WS and AS implementations. The model mimics the logical behavior of the WSs based on their communication with the AS layers, and the simulation results are compared to the experimental results to ensure the correctness of the model. The architecture of the simulation model, which is based on Stochastic Petri Nets (SPN), is modularized in accordance with the layers and their interactions. As web services are often executed in a complex and distributed environment, the modularized approach enables a user or a designer to observe and investigate the performance of the entire system under various conditions; in contrast, most approaches to WS analysis are monolithic in that the entire system is treated as a closed box. The results show that 1) the simulation model can be a viable tool for measuring the performance and reliability of WSs under different loads and conditions, which may be of great interest to WS designers and the professionals involved; 2) configuration parameters have a significant impact on the overall performance; 3) the simulation model can be tuned to account for various speeds in terms of communication, hardware, and software; 4) because the simulation model is modularized, it may be used as a foundation for aggregating or nullifying modules (layers), or it can be extended to include other aspects of the WS architecture such as network characteristics and the hardware/operating system on which the AS and WSs execute; and 5) the simulation model is useful for predicting the performance of web services in cases that are difficult to replicate in a field study.
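    A minimal sketch of the layer-by-layer idea described above, written as a simple Monte-Carlo model rather than the dissertation's SPN formulation; the layer names, service times and failure probabilities below are invented for illustration:

        import random

        # Each AS layer a request traverses is modeled by an assumed mean service time (s)
        # and an assumed per-request failure probability; both are illustrative values only.
        LAYERS = [("web_connector", 0.002, 1e-4),
                  ("ejb_container", 0.005, 5e-4),
                  ("persistence",   0.010, 1e-3)]

        def simulate_request():
            """Return (response_time, succeeded) for one request passing through all layers."""
            total = 0.0
            for _name, mean_service, p_fail in LAYERS:
                if random.random() < p_fail:
                    return total, False                      # the request fails at this layer
                total += random.expovariate(1.0 / mean_service)
            return total, True

        def run(n=100_000):
            times, ok = [], 0
            for _ in range(n):
                t, success = simulate_request()
                ok += success
                if success:
                    times.append(t)
            return ok / n, sum(times) / len(times)           # reliability, mean response time

        print(run())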

    Provisioning multi-tier cloud applications using statistical bounds on sojourn time

    In this paper we present a simple and effective approach for resource provisioning to achieve a percentile bound on the end-to-end response time of a multi-tier application. We first model the multi-tier application as an open tandem network of M/G/1-PS queues and develop a method that produces a near-optimal application configuration, i.e., the number of servers at each tier, to meet the percentile bound in a homogeneous server environment – using a single type of server. We then extend our solution to a K-server case, and our technique demonstrates good accuracy, independent of the variability of service times. Our approach demonstrates a provisioning error of no more than 3% compared to a 140% worst-case provisioning error obtained by techniques based on an M/M/1-FCFS queue model. In addition, we extend our approach to handle a heterogeneous server environment, i.e., with multiple types of servers. We find that fewer high-capacity servers are preferable for high-percentile provisioning. Finally, we extend our approach to account for the rental cost of each server type and compute a cost-efficient application configuration with savings of over 80%. We demonstrate the applicability of our approach in a real-world system by employing it to provision the two tiers of the Java implementation of TPC-W – a multi-tier transactional web benchmark that represents an e-commerce web application, i.e., an online bookstore.
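    For context, a sketch of baseline per-tier sizing under the M/M/1-FCFS assumption that the abstract uses as its point of comparison (not the paper's M/G/1-PS method); load is assumed to split evenly across identical servers, and all rates, bounds and percentiles below are illustrative:

        import math

        def servers_needed(arrival_rate, service_rate, bound_s, percentile=0.95):
            """Smallest number of servers so the response-time percentile at each server stays under bound_s.

            For an M/M/1-FCFS queue the response time is Exp(mu - lambda), so its p-quantile is
            -ln(1 - p) / (mu - lambda); we require that quantile to be <= bound_s with lambda/k per server.
            """
            required_slack = -math.log(1.0 - percentile) / bound_s   # needed mu - lambda per server
            k = 1
            while service_rate - arrival_rate / k < required_slack:
                k += 1
                if k > 10_000:
                    raise ValueError("bound unreachable with this per-server service rate")
            return k

        # Example: 500 req/s offered to a tier whose servers each handle 120 req/s,
        # targeting a 95th-percentile response time of 0.1 s.
        print(servers_needed(500.0, 120.0, 0.1))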

    Large-scale simulator for global data infrastructure optimization

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, February 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 165-172). Companies depend on information systems to control their operations. During the last decade, Information Technology (IT) infrastructures have grown in scale and complexity. Any large company runs many enterprise applications that serve data to thousands of users who, in turn, consume this information in different locations concurrently and collaboratively. The understanding an enterprise has of its own systems is often limited: no one person in the organization has a complete picture of the way in which applications share and move data files between data centers. In this dissertation an IT infrastructure simulator is developed to evaluate the performance, availability and reliability of large-scale computer systems. The goal is to provide data center operators with a tool to understand the consequences of infrastructure updates; these alterations can include the deployment of new network topologies, hardware configurations or software applications. The simulator was constructed using a multilayered approach and was optimized for multicore scalability. The results produced by the simulator were validated against the real system of a Fortune 500 company. This work pioneers the simulation of large-scale IT infrastructures: it not only reproduces the behavior of data centers at a macroscopic scale, but also allows operators to navigate down to the detail of individual elements, such as processors or network links. The combination of queueing networks representing hardware components with message sequences modeling enterprise software enabled reaching a scale and complexity not available in previous research in this area. By Sergio Herrero-López. Ph.D.

    Scalable hosting of web applications

    Modern Web sites have evolved from simple monolithic systems into complex multi-tiered systems. In contrast to traditional Web sites, these sites do not simply deliver pre-written content but dynamically generate content using (one or more) multi-tiered Web applications. In this thesis, we address the question: how can multi-tiered Web applications be hosted in a scalable manner? Scaling up a Web application requires scaling its individual tiers. To this end, various research works have proposed techniques that employ replication or caching solutions at different tiers. However, most of these techniques aim to optimize the performance of individual tiers and not the entire application. A key observation made in our research is that there exists no single "elixir" technique that performs best for all Web applications. Effective hosting of a Web application requires careful selection and deployment of several techniques at different tiers. To this end, we present several caching and replication strategies, such as GlobeCBC, GlobeDB and GlobeTP, to improve the scalability of the different tiers of a Web application. While these techniques and systems improve the performance of the individual tiers (and eventually the application), an application's administrator is interested not only in the performance of its individual tiers but also in its end-to-end performance. To this end, we propose a resource provisioning approach that allows us to choose the best resource configuration for hosting a Web application such that its end-to-end response time can be optimized with minimum usage of resources. The proposed approach is based on an analytical model for multi-tier systems, which allows us to derive expressions for estimating the mean end-to-end response time and its variance. Steen, M.R. van [Promotor]; Pierre, G.E.O. [Copromotor]
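    A minimal sketch of the kind of end-to-end estimate mentioned at the end of the abstract, under the simplifying assumption (made here for illustration, not taken from the thesis) that each tier behaves as an independent M/M/1 queue:

        def end_to_end(arrival_rate, tier_service_rates):
            """Mean and variance of the end-to-end response time across independent M/M/1 tiers."""
            # For M/M/1-FCFS the response time has mean 1/(mu - lambda) and variance 1/(mu - lambda)**2;
            # if the tiers are independent, the end-to-end mean and variance are the sums over tiers.
            means = [1.0 / (mu - arrival_rate) for mu in tier_service_rates]
            variances = [m * m for m in means]
            return sum(means), sum(variances)

        # Example: web, application and database tiers serving 200, 150 and 120 req/s
        # under 100 req/s of offered traffic.
        print(end_to_end(100.0, [200.0, 150.0, 120.0]))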

    Functional and imperative reactive programming using a generalization of the continuation monad in the C++ programming language

    There is a large class of problems that require software systems whose components execute asynchronously with respect to one another...

    Adaptive Monitoring of Complex Software Systems using Management Metrics

    Software systems supporting networked, transaction-oriented services are large and complex; they comprise a multitude of inter-dependent layers and components, and they implement many dynamic optimization mechanisms. In addition, these systems are subject to workloads that are hard to predict. These factors make monitoring these systems, as well as performing problem determination, challenging and costly. In this thesis we tackle these challenges with the goal of lowering the cost and improving the effectiveness of monitoring and problem determination by reducing the dependence on human operators. Specifically, this thesis presents and demonstrates the effectiveness of an efficient, automated monitoring approach which enables detection of errors and failures, and which assists in localizing faults. Software systems expose various types of monitoring data; this thesis focuses on the use of management metrics to monitor a system's health. We devise a system modeling approach which entails modeling stable statistical correlations among management metrics; these correlations characterize a system's normal behaviour. This approach allows a system model to be built automatically and efficiently using the monitoring data alone. In order to control the monitoring overhead, and yet allow a system's health to be assessed reliably, we design an adaptive monitoring approach. This adaptive capability builds on the flexible nature of our system modeling approach, which allows the set of monitored metrics to be altered at runtime. We develop methods to automatically select the management metrics to collect at the minimal monitoring level, without any domain knowledge. In addition, we devise an automated fault localization approach which leverages the ability of the monitoring system to analyze individual metrics. Using a realistic multi-tier software system, including different applications based on Java Enterprise Edition and industrial-strength products, we evaluate our system modeling approach. We show that stable metric correlations exist in complex software systems and that many of these correlations can be modeled using simple, efficient techniques. We investigate the effect of the collection of management metrics on system performance and show that the monitoring overhead can be high and thus needs to be controlled. We employ fault-injection experiments to evaluate the effectiveness of our adaptive monitoring and fault localization approach, and demonstrate that our approach is cost-effective, has high fault coverage and, in the majority of the cases studied, provides pertinent diagnosis information. The main contribution of this work is to show how to monitor complex software systems and determine problems in them automatically and efficiently. Our solution approach has wide applicability and the techniques we use are simple yet effective. Our work suggests that the cost of monitoring software systems is not necessarily a function of their complexity, providing hope that the health of increasingly large and complex systems can be tracked with a limited amount of human resources and without sacrificing much system performance.
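    The correlation-based idea described above might be sketched as follows; the use of plain Pearson correlation, the thresholds and the window-based check are assumptions made for this illustration, not the thesis's exact modeling technique:

        import numpy as np

        def learn_stable_pairs(history, threshold=0.9):
            """history: (samples x metrics) array of management-metric values under normal operation."""
            corr = np.corrcoef(history, rowvar=False)
            n = corr.shape[0]
            return [(i, j) for i in range(n) for j in range(i + 1, n) if abs(corr[i, j]) >= threshold]

        def broken_correlations(window, history, pairs, tolerance=0.3):
            """Pairs whose correlation in a recent window drifted from the learned baseline."""
            base = np.corrcoef(history, rowvar=False)
            current = np.corrcoef(window, rowvar=False)
            return [(i, j) for i, j in pairs if abs(current[i, j] - base[i, j]) > tolerance]

        # Usage sketch (normal_data and recent_data are hypothetical metric matrices):
        #   pairs = learn_stable_pairs(normal_data)
        #   alarms = broken_correlations(recent_data, normal_data, pairs)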

    Software aging using quantitative accelerated life tests

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção. This research work presents a systematic approach to accelerating the lifetime of systems that fail due to software aging effects. Reliability studies of such systems require observing the times to failure caused by software aging, which normally demands long-running experiments. This requirement introduces several practical constraints, mainly when the experiment duration implies prohibitive time and cost. This work therefore proposes a way to accelerate the lifetime of systems that fail due to software aging, reducing the experimentation time needed to observe their failures and thus the time and cost of research in this area. The theoretical foundations of the proposed method draw on computing dependability, reliability engineering, design of experiments, accelerated life testing and the phenomenology of software aging. The acceleration technique adopted was the quantitative accelerated degradation test, which is widely used in several areas of industry but, until now, had not been applied to software products. Devising the means to apply this technique in experimental software engineering, especially to the software aging problem, is the main contribution of this research. Together with the theoretical groundwork, the applicability of the proposed method was evaluated in a real case study involving the accelerated aging of a web server. Among the main experimental results is the identification of the treatments that contributed most to the aging of the web server software; from these treatments it was possible to define the workload pattern that most influenced the aging of the analyzed web server, with the type and size of the requested pages being the two most significant factors. Another important result is that variation in the request rate did not influence the aging of the investigated web server. Regarding the reduction of the experimentation period, the proposed method required less time than similar experiments previously published, 3.18 times less than the shortest duration found in the literature. In terms of the MTBF estimates obtained with and without aging acceleration, the proposed method reduced the experimentation time by a factor of approximately 687.
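    The degradation-test idea might be illustrated as follows; the linear trend, the failure threshold and the acceleration factor are assumptions invented for this sketch, not models or values from the thesis:

        import numpy as np

        def estimate_time_to_failure(hours, memory_mb, threshold_mb, acceleration_factor=1.0):
            """Extrapolate when an aging indicator (here, memory use) crosses a failure threshold."""
            slope, intercept = np.polyfit(hours, memory_mb, 1)     # fitted MB of growth per hour
            if slope <= 0:
                return float("inf")                                 # no measurable aging trend
            accelerated_ttf = (threshold_mb - intercept) / slope    # hours under the accelerated workload
            return accelerated_ttf * acceleration_factor            # projected hours under normal use

        # Example: memory samples taken every 10 hours under a stressing workload,
        # with failure assumed once the process reaches 2000 MB.
        hours = np.array([0, 10, 20, 30, 40, 50], dtype=float)
        memory = np.array([300, 420, 535, 660, 780, 905], dtype=float)
        print(estimate_time_to_failure(hours, memory, threshold_mb=2000.0, acceleration_factor=5.0))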

    Web Server Software Architectures

    …with each thread of any process handling one request at a time (see Figure 1c). The Apache 2.0 Worker MPM implements an example of this type of approach (see http://httpd.apache.org/docs-2.0/mod/worker.html). One advantage of a process-based architecture is stability: the crash of any one process generally does not affect the others, so the Web server continues to operate and serve other requests even when one of its processes must be killed and restarted. The architecture's drawbacks relate to performance: creating and killing processes overloads the Web server, mainly because of address-space management operations. Moreover, high-volume Web sites require many processes, which leads to non-negligible memory requirements and increased context-switching overhead (see http://httpd.apache.org/docs/misc/perf-tuning.html#preforking). A thread-based architecture is not as stable as a process-based one: a single malfunctioning thread can bring the entire Web server down because all threads share the same address space.
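    A toy sketch of the two request-handling models being contrasted, using Python's standard library rather than Apache's MPMs; the port and handler below are illustrative:

        from http.server import BaseHTTPRequestHandler, HTTPServer, ThreadingHTTPServer
        from socketserver import ForkingMixIn

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"hello\n")

        class ForkingHTTPServer(ForkingMixIn, HTTPServer):
            """Process-per-request server (Unix only): a crash in one handler leaves the others running."""

        if __name__ == "__main__":
            # Thread-based: all handlers share one address space (lower overhead, less isolation).
            # Process-based alternative: ForkingHTTPServer(("127.0.0.1", 8080), Handler).
            server = ThreadingHTTPServer(("127.0.0.1", 8080), Handler)
            server.serve_forever()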