
    WAS control center: an autonomic performance-triggered tracing environment for WebSphere

    Studying any aspect of an application server with high availability requirements can become a tedious task when continuous monitoring of the server status is necessary. The creation of performance-driven autonomic systems can speed up the analysis of this kind of complex system. In this paper we present an autonomic performance-driven environment for the WebSphere application server that can be used as the basis for systems that must monitor the performance of the server. As an applied use of this infrastructure, we present the WAS Control Center, a deep tracing tool-set for 24×7 environments. It exploits the benefits of autonomic computing to lighten the cost of highly detailed system tracing on a J2EE application server. The WAS Control Center is helping us create performance models of the WebSphere application server. Peer Reviewed. Postprint (author's final draft).
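    The core idea of performance-triggered tracing is that cheap, coarse monitoring runs continuously, and expensive detailed tracing is switched on only while a performance threshold is violated. The paper's actual trigger logic is not given here; the following is a minimal Python sketch of the idea, with the class name, threshold and window size purely illustrative.

```python
import collections

class PerformanceTriggeredTracer:
    """Keeps cheap coarse monitoring on by default and switches to
    expensive detailed tracing only while performance is degraded."""

    def __init__(self, threshold_ms, window=10):
        self.threshold_ms = threshold_ms
        self.samples = collections.deque(maxlen=window)  # sliding window
        self.detailed = False  # coarse tracing by default

    def record(self, response_time_ms):
        self.samples.append(response_time_ms)
        avg = sum(self.samples) / len(self.samples)
        # Trigger: detailed tracing stays enabled while the moving
        # average exceeds the threshold, and is disabled on recovery.
        self.detailed = avg > self.threshold_ms
        return self.detailed

tracer = PerformanceTriggeredTracer(threshold_ms=200, window=3)
for rt in (50, 60, 55):          # healthy load: coarse tracing only
    tracer.record(rt)
assert not tracer.detailed
for rt in (400, 500, 450):       # degraded load: detailed tracing kicks in
    tracer.record(rt)
assert tracer.detailed
```

    In a 24×7 deployment the `detailed` flag would drive the server's actual trace configuration, so the heavy instrumentation cost is paid only during the episodes worth analyzing.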

    Complete instrumentation requirements for performance analysis of web based technologies

    In this paper we present the eDragon environment, a research platform created to perform complete performance analysis of new Web-based technologies. eDragon enables an understanding of how application servers work on both sequential and parallel platforms, offering new insight into the usage of system resources. The environment is composed of a set of instrumentation modules, a performance analysis and visualization tool, and a set of experimental methodologies for complete performance analysis of Web-based technologies. This paper describes the design and implementation of this research platform and highlights some of its main functionalities. We also show how a detailed analytical view can be obtained through the application of a bottom-up strategy, starting with a group of system events and advancing to more complex performance metrics through a continuous derivation process. We acknowledge the European Center for Parallelism of Barcelona (CEPBA) and the CEPBA-IBM Research Institute (CIRI) for supplying the computing resources for our experiments. This work is supported by the Ministry of Science and Technology of Spain and the European Union (FEDER funds) under contract TIC2001-0995-C02-01 and by the Direcció General de Recerca of the Generalitat de Catalunya under grant 2001FI 00694 UPC APTIND. Peer Reviewed. Postprint (author's final draft).
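    The bottom-up strategy — pairing raw system events into per-request values and then deriving aggregate metrics from them — can be sketched in a few lines. The event format and names below are hypothetical stand-ins for eDragon's instrumentation records, not its real trace format.

```python
# Raw system events: (timestamp_s, event, request_id) — a hypothetical
# stand-in for low-level instrumentation records.
events = [
    (0.00, "REQ_START", 1),
    (0.05, "REQ_END",   1),
    (0.10, "REQ_START", 2),
    (0.40, "REQ_END",   2),
]

# Step 1: pair low-level events into per-request service times.
starts, service_times = {}, {}
for ts, ev, rid in events:
    if ev == "REQ_START":
        starts[rid] = ts
    elif ev == "REQ_END":
        service_times[rid] = ts - starts.pop(rid)

# Step 2: derive a higher-level metric from the per-request values.
avg_service_time = sum(service_times.values()) / len(service_times)
print(round(avg_service_time, 3))  # 0.175
```

    Further derivation steps (throughput, utilization, percentiles) follow the same pattern: each metric is computed from the values produced by the step below it.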

    Performance impact of the grid middleware

    The Open Grid Services Architecture (OGSA) defines a new vision of the Grid based on the use of Web Services (Grid Services). The standard interfaces, behaviors and schemas consistent with the OGSA specification are defined by the Open Grid Service Infrastructure (OGSI). Grid Services, as an extension of Web Services, run on top of rich execution frameworks that make them accessible and interoperable with other applications. Two examples of these frameworks are Sun's J2EE platform and Microsoft's .NET. The Globus Project implements the OGSI specification for the J2EE framework in the Globus Toolkit. As with any J2EE application, the performance of the Globus Toolkit is constrained by the performance of the J2EE execution stack. This performance can be influenced at many points of the execution stack: the operating system, the JVM, the middleware or the grid service itself, without forgetting the processing overheads related to the parsing of the communication protocols. In the scope of this chapter, all these levels together will be referred to as the grid middleware. To prevent the grid middleware from becoming a performance bottleneck for a distributed grid-enabled application, grid nodes have to be tuned for the efficient execution of I/O-intensive applications, because they can receive a high volume of requests every second and have to deal with a large number of invocations, message-parsing operations and a continuous task of marshalling and unmarshalling service parameters. All the parameters of the system affecting these operations have to be tuned according to the expected system load intensity. A Grid node is connected to other nodes through a network connection, which is also a decisive factor in obtaining high performance for a grid application. If the inter-node data transmission time completely overlaps the processing time of a computational task, the benefits of the grid architecture are lost.
    Additionally, in many situations the content exchanged between grid nodes can be considered confidential and should be protected from prying eyes. But the cost of data encryption and decryption can be an important performance weakness that must be taken into account. In this chapter we study the process of receiving and executing a Grid job from the perspective of the underlying levels below the Grid application. We analyze the different performance parameters that can influence the performance of the Grid middleware and show the general schema of tasks involved in the service of an execution request. Postprint (author's final draft).
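    The marshalling and protocol-parsing overheads described above can be made concrete with a generic micro-benchmark. This is not the chapter's methodology — just an illustrative Python sketch in which JSON round-tripping stands in for parameter marshalling/unmarshalling and XML parsing stands in for the SOAP-style message parsing a Grid node performs on every request.

```python
import json
import time
import xml.etree.ElementTree as ET

# A toy "service parameter" payload (hypothetical).
params = {"job": "render", "args": list(range(1000))}

def time_it(fn, reps=200):
    """Average wall-clock time of fn over several repetitions."""
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

# Marshalling/unmarshalling cost: serialize the parameters, parse them back.
marshal_cost = time_it(lambda: json.loads(json.dumps(params)))

# Protocol-parsing cost: re-parse a small XML message, standing in for
# the SOAP envelopes exchanged between Grid nodes.
xml_msg = "<req>" + "".join(f"<a>{i}</a>" for i in range(1000)) + "</req>"
parse_cost = time_it(lambda: ET.fromstring(xml_msg))

print(f"marshal: {marshal_cost * 1e6:.1f} us, xml parse: {parse_cost * 1e6:.1f} us")
```

    Measuring each layer of the stack this way shows where an expected request rate turns a per-message overhead into a node-level bottleneck.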

    High Availability and Scalability of Mainframe Environments using System z and z/OS as example

    Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability. It highlights their presence at different levels within the hardware and software stack to satisfy the needs of large IT organizations.

    Optimizing the resource utilization of enterprise content management workloads through measured performance baselines and dynamic topology adaptation

    To comply with legal requirements, organizations have to retain data for a certain period of time. They create a huge amount of data on a daily basis, so managing and storing this data under the legal requirements is very difficult for them. This is where an Enterprise Content Management (ECM) system comes into the picture. ECM is a means of organizing and storing an organization's documents and other content related to the organization's processes. With ECM being offered as a service, thanks to cloud computing, it makes sense to offer this functionality as a shared service. Offering it as a shared service has various benefits, one of which is that it is a cheaper way to meet the needs of large organizations with different requirements for ECM functionality. ECM systems use resources like memory, central processing unit (CPU) and disk, which are shared among different clients (organizations). Every client has a service level agreement that describes the performance criteria a provider promises to meet while delivering the ECM service. Various techniques are used to improve the performance of the ECM system by optimizing the use of resources and matching the service level agreements. In this thesis, a heuristic technique is used. Performance baselines and the utilization of resources are measured for different client workloads, and on that basis the resources of the ECM system can be dynamically provisioned or assigned to different clients to obtain optimized resource utilization and better performance. First, a typical workload is designed that is similar to the work performed by various banks and insurance companies using IBM ECM systems and that consists of interactive and batch operations. Performance baselines are measured for these workloads by monitoring the key performance indicators (KPIs) with a variable number of users performing operations on the system at the same time.
    After obtaining the results for the KPIs and resource utilization, resources are assigned dynamically according to their utilization, in a way that optimizes the use of resources while satisfying the clients with better service.
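    A utilization-driven reallocation heuristic of this kind can be sketched in a few lines. The thesis's actual heuristic and KPIs are not reproduced here; the function below is a hypothetical illustration that shifts one unit of a shared resource from the least-loaded tenant to the most-loaded one per pass.

```python
def rebalance(allocations, utilization, total, step=1):
    """One heuristic rebalancing pass: move `step` units of a shared
    resource from the least-loaded client to the most-loaded one,
    as long as their utilizations differ noticeably."""
    # Per-client load = measured utilization relative to allocation.
    load = {c: utilization[c] / allocations[c] for c in allocations}
    hot = max(load, key=load.get)
    cold = min(load, key=load.get)
    if load[hot] - load[cold] > 0.2 and allocations[cold] > step:
        allocations[cold] -= step
        allocations[hot] += step
    assert sum(allocations.values()) == total  # capacity is conserved
    return allocations

# Two tenants sharing 8 CPU units; tenant "a" is much busier.
alloc = rebalance({"a": 4, "b": 4}, {"a": 3.9, "b": 0.4}, total=8)
assert alloc == {"a": 5, "b": 3}
```

    Running such a pass periodically against the measured baselines converges allocations toward the observed demand without exceeding the shared capacity, which is the essence of the dynamic-provisioning step described above.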

    A Scalable Cluster-based Infrastructure for Edge-computing Services

    In this paper we present a scalable and dynamic intermediary infrastructure, SEcS (acronym of "Scalable Edge-computing Services"), for developing and deploying advanced Edge computing services using a cluster of heterogeneous machines. Our goal is to address the challenges of next-generation Internet services: scalability, high availability, fault tolerance and robustness, as well as programmability and quick prototyping. The system is written in Java and is based on IBM's Web Based Intermediaries (WBI) [71], developed at the IBM Almaden Research Center.

    Measurements based performance analysis of Web services

    Web services are increasingly used to enable interoperability and flexible integration of software systems. In this thesis we focus on measurement-based performance analysis of an e-commerce application that uses Web services components to execute business operations. In our experiments we use a session-oriented workload generated by a tool developed according to the TPC-W specification. The empirical results are obtained for two different user profiles, Browsing and Ordering, under different workload intensities. In addition to the variation in workloads, we also study the application's performance when Web services are implemented using .NET and J2EE. Unlike previous work, which focused on overall server response time and throughput, we present Web-interaction, software-architecture, and hardware-resource-level analysis of the system performance. In particular, we propose a method for extracting component-level response times from the application server logs and study the impact of Web services and other components on server performance. The results show that the response times of Web services components increase significantly under higher workload intensities when compared to other components. (Abstract shortened by UMI.)
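    Extracting component-level response times from server logs amounts to matching entry/exit records per component and accumulating the deltas. The log format below is hypothetical (the thesis's real application-server log format is not reproduced here), but the pairing logic is the general technique.

```python
import re
from collections import defaultdict

# Hypothetical log format: "<ms> ENTER|EXIT <component>".
log = """\
0 ENTER WebService
2 ENTER BusinessLogic
9 EXIT BusinessLogic
12 EXIT WebService
"""

line_re = re.compile(r"(\d+) (ENTER|EXIT) (\w+)")
entry_ts, totals = {}, defaultdict(int)
for ts, kind, comp in line_re.findall(log):
    if kind == "ENTER":
        entry_ts[comp] = int(ts)
    else:
        # Accumulate inclusive time: WebService's total here includes
        # the nested BusinessLogic call.
        totals[comp] += int(ts) - entry_ts.pop(comp)

assert totals["WebService"] == 12   # inclusive of the nested component
assert totals["BusinessLogic"] == 7
```

    Subtracting nested components' times from their parents would yield exclusive per-component times, which is what makes it possible to attribute server load to Web services components versus the rest of the stack.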

    Ecotopia: An Ecological Framework for Change Management in Distributed Systems

    Dynamic change management in an autonomic, service-oriented infrastructure is likely to disrupt the critical services delivered by the infrastructure. Furthermore, change management must accommodate complex real-world systems, where dependability and performance objectives are managed across multiple distributed service components and have specific criticality/value models. In this paper, we present Ecotopia, a framework for change management in complex service-oriented architectures (SOA) that is ecological in its intent: it schedules change operations with the goal of minimizing service-delivery disruptions by accounting for their impact on the SOA environment. The change-planning functionality of Ecotopia is split between multiple objective advisors and a system-level change-orchestrator component. The objective advisors assess the change impact on service delivery by estimating the expected values of the Key Performance Indicators (KPIs) during and after the change. The orchestrator uses the KPI estimations to assess the per-objective and overall business-value changes over a long time horizon and to identify the scheduling plan that maximizes the overall business value. Ecotopia handles both external change requests, such as software upgrades, and internal change requests, such as fault-recovery actions. We evaluate the Ecotopia framework using two realistic change-management scenarios in distributed enterprise systems.
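    The advisor/orchestrator split can be illustrated with a toy scheduler: per-objective value models (the advisors' role) map KPI estimates to business value, and the orchestrator picks the candidate schedule that maximizes the total. All names and numbers below are hypothetical; Ecotopia's actual protocol and value models are richer than this sketch.

```python
# Candidate change windows with per-objective KPI estimates during the change.
windows = {
    "02:00": {"availability": 0.999, "throughput": 0.90},
    "14:00": {"availability": 0.990, "throughput": 0.60},
}

# Per-objective value models (the advisors' role): KPI estimate -> value.
value_models = {
    "availability": lambda kpi: 1000 * kpi,
    "throughput":   lambda kpi: 400 * kpi,
}

def business_value(kpis):
    """Orchestrator's aggregation: sum the per-objective values."""
    return sum(value_models[obj](v) for obj, v in kpis.items())

# The orchestrator selects the schedule maximizing overall business value.
best = max(windows, key=lambda w: business_value(windows[w]))
assert best == "02:00"  # the night window disrupts service delivery least
```

    The same structure extends naturally to value integrated over a long time horizon and to mixing externally requested upgrades with internally generated fault-recovery actions, since both reduce to candidate operations scored by their estimated KPI impact.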