16 research outputs found

    Initial Starting Point Analysis for K-Means Clustering: A Case Study

    Workload characterization is an important part of systems performance modeling. Clustering is a method used to find classes of jobs within workloads. K-Means is one of the most popular clustering algorithms. Initial starting point values are needed as input parameters when performing k-means clustering. This paper shows that the results of running the k-means algorithm on the same workload will vary depending on the values chosen as initial starting points. Fourteen methods of composing initial starting point values are compared in a case study. The results indicate that a synthetic method, scrambled midpoints, is an effective starting point method for k-means clustering.
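A minimal sketch of the sensitivity the abstract describes, using a toy 1-D workload and plain Lloyd's k-means (the data, seed, and initial centers are illustrative assumptions, not the paper's fourteen methods):

```python
import random

def kmeans(points, centers, iters=50):
    """Minimal Lloyd's algorithm for 1-D data; returns final centers and SSE."""
    k = len(centers)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        # Recompute each center; keep the old one if its cluster emptied.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    sse = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, sse

random.seed(0)
# Two well-separated groups plus a smaller middle group (illustrative data).
points = ([random.gauss(0, 1) for _ in range(50)]
          + [random.gauss(10, 1) for _ in range(50)]
          + [random.gauss(5, 0.5) for _ in range(20)])

# Different initial starting points can converge to different local optima,
# which is the effect the paper studies.
for init in ([0.0, 1.0, 2.0], [0.0, 5.0, 10.0]):
    centers, sse = kmeans(points, list(init))
    print(init, "-> SSE", round(sse, 1))
```

Running this typically prints different final SSE values for the two initializations, which is exactly why the choice of starting points (and synthetic methods such as scrambled midpoints) matters.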

    A Case Study on Grid Performance Modeling

    The purpose of this case study is to develop a performance model for an enterprise grid for performance management and capacity planning. The target environment includes grid applications such as health-care and financial services where the data is located primarily within the resources of a worldwide corporation. The approach is to build a discrete event simulation model for a representative work-flow grid. Five work-flow classes, found using a customized k-means clustering algorithm, characterize the workload of the grid. Analyzing the gap between the simulation and measurement data validates the model. The case study demonstrates that the simulation model can be used to predict the grid system performance given a workload forecast. The model is also used to evaluate alternative scheduling strategies. The simulation model is flexible and easily incorporates several system details.
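The modeling approach can be illustrated with a toy discrete-event simulation of a single grid resource serving two workload classes under FCFS (the class names, arrival rate, and service times are made-up assumptions, far simpler than the paper's five-class work-flow model):

```python
import heapq
import random

# Toy discrete-event simulation: one grid resource, two workload classes,
# first-come-first-served. All parameters below are illustrative assumptions.
random.seed(1)
CLASSES = {"short": 1.0, "long": 4.0}    # mean service time per class

events = []                              # arrival events: (time, class)
t = 0.0
for _ in range(200):                     # Poisson arrivals, rate 0.3 per unit
    t += random.expovariate(0.3)
    cls = random.choice(list(CLASSES))
    heapq.heappush(events, (t, cls))

busy_until = 0.0
waits = []
while events:
    arrival, cls = heapq.heappop(events)
    start = max(arrival, busy_until)     # queue if the resource is busy
    waits.append(start - arrival)
    busy_until = start + random.expovariate(1.0 / CLASSES[cls])

print("mean wait:", round(sum(waits) / len(waits), 2))
```

A full grid model like the paper's would add multiple resources, work-flow precedence between tasks, and alternative scheduling policies, but the event-driven skeleton is the same.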

    A Capacity Planning Process for Performance Assurance of Component-based Distributed Systems

    For service providers of multi-tiered component-based applications, such as web portals, assuring high performance and availability to their customers without impacting revenue requires effective and careful capacity planning that aims at minimizing the number of resources, and utilizing them efficiently, while simultaneously supporting a large customer base and meeting their service level agreements. This paper presents a novel, hybrid capacity planning process that results from a systematic blending of 1) analytical modeling, where traditional modeling techniques are enhanced to overcome their limitations in providing accurate performance estimates; 2) profile-based techniques, which determine performance profiles of individual software components for use in resource allocation and balancing resource usage; and 3) allocation heuristics that determine the minimum number of resources needed to allocate software components. Our results illustrate that using our technique, performance (i.e., bounded response time) can be assured while reducing operating costs by using 25% fewer resources and increasing revenues by handling 20% more clients compared to traditional approaches.
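One plausible allocation heuristic of the kind the abstract mentions is first-fit-decreasing bin packing over component resource profiles; the sketch below is an assumption-laden illustration, not the paper's actual algorithm:

```python
# Illustrative first-fit-decreasing packing: place component CPU demands
# (fractions of one node, made-up numbers) onto the fewest identical nodes.
def allocate(demands, capacity):
    """Return the number of nodes needed to host all component demands."""
    nodes = []  # remaining free capacity per opened node
    for d in sorted(demands, reverse=True):   # largest components first
        for i, free in enumerate(nodes):
            if d <= free:                     # first node it fits on
                nodes[i] = free - d
                break
        else:
            nodes.append(capacity - d)        # open a new node
    return len(nodes)

# Hypothetical component profiles -> minimum nodes under this heuristic
print(allocate([0.6, 0.5, 0.4, 0.3, 0.2], capacity=1.0))  # → 2
```

Sorting descending before placing is what distinguishes first-fit-decreasing from plain first-fit; it tends to use fewer nodes, which is the cost-reduction lever the paper's heuristics target.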

    Capacity Planning of a Commodity Cluster in an Academic Environment: A Case Study

    In this paper, the design of a simulation model for evaluating two alternative supercomputer configurations in an academic environment is presented. The workload is analyzed and modeled, and its effect on the relative performance of both systems is studied. The Integrated Capacity Planning Environment (ICPE) toolkit, developed for commodity cluster capacity planning, is successfully applied to the target environment. The ICPE is a tool for workload modeling, simulation modeling, and what-if analysis. A new characterization strategy is applied to the workload to more accurately model commodity cluster workloads. Through what-if analysis, the sensitivity of the baseline system performance to workload change, and also the relative performance of the two proposed alternative systems, are compared and evaluated. This case study demonstrates the usefulness of the methodology and the applicability of the tools in gauging system capacity and making design decisions.

    Capacity Planning of a Commodity Cluster in an Academic Environment: A Case Study

    Get PDF
    Abstract. In this paper, the design of a simulation model for evaluating two alternative supercomputer configurations in an academic environment is presented. The workload is analyzed and modeled, and its effect on the relative performance of both systems is studied. The Integrated Capacity Planning Environment (ICPE) toolkit, developed for commodity cluster capacity planning, is successfully applied to the target environment. The ICPE is a tool for workload modeling, simulation modeling, and what-if analysis. A new characterization strategy is applied to the workload to more accurately model commodity cluster workloads. Through "what-if" analysis, the sensitivity of the baseline system performance to workload change, and also the relative performance of the two proposed alternative systems are compared and evaluated. This case study demonstrates the usefulness of the methodology and the applicability of the tools in gauging system capacity and making design decisions

    Petri Net Model of a Dynamically Partitioned Multiprocessor System

    A multiprocessor system can be subdivided into partitions of processors, each of which can be dedicated to the execution of a parallel program. The partitioning of the system can be done statically at system configuration time, adaptively prior to the execution time, or dynamically during execution time. Since, in a dynamically partitioned multiprocessor system, partitioning can occur anytime during the execution of a program, designing an analytical model for such a system is a difficult task. In this paper a Petri net model of a dynamically partitioned multiprocessor system is presented. The workload consists of parallel programs which are characterized by their execution signatures. Repartitioning overhead is an important parameter and is modeled explicitly. The model is used to perform a series of sensitivity analysis experiments which give insight into the behavior of such systems. Several dynamic processor allocation policies have been implemented. Equal Slope and Shortest Job Fi..
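The Petri-net formalism itself can be sketched in a few lines: places hold tokens, and a transition fires when all of its input places are sufficiently marked. The toy net below (a "free processors" place feeding a "running partitions" place) illustrates the mechanics only, not the paper's multiprocessor model:

```python
# Minimal Petri-net firing rule. A marking maps place names to token counts;
# a transition is a (pre, post) pair of place -> token-count dicts.
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from input places, produce tokens on output places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical net: starting a partition consumes 2 free processors.
marking = {"free": 4, "running": 0}
start = ({"free": 2}, {"running": 1})    # (pre, post)
while enabled(marking, start[0]):
    marking = fire(marking, *start)
print(marking)   # → {'free': 0, 'running': 2}
```

A model like the paper's would add timed transitions for execution and explicit transitions for repartitioning overhead, but the token-game semantics above is the underlying formalism.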

    Quick Performance Bounds for Computer and Storage Systems with Parallel Resources

    This paper presents a quick single-step performance bounding technique for parallel computer and storage systems.
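A classic single-step bound in this spirit is the asymptotic bound for a closed system, X(N) <= min(N / (D + Z), 1 / Dmax); the sketch below uses made-up service demands, and the paper's parallel-resource bounds are more refined than this textbook form:

```python
# Asymptotic throughput bound for a closed system with N users.
# demands: per-resource service demands of one job; think: think time Z.
# All numeric values below are illustrative assumptions.
def throughput_bounds(N, demands, think=0.0):
    """Upper bound on system throughput X(N)."""
    D = sum(demands)          # total service demand per job
    Dmax = max(demands)       # bottleneck resource's demand
    return min(N / (D + think), 1.0 / Dmax)

for n in (1, 5, 20):
    x = throughput_bounds(n, demands=[0.2, 0.1, 0.05], think=2.0)
    print(n, "users -> X <=", round(x, 3))
```

At small N the population term N / (D + Z) dominates; at large N the bound flattens at the bottleneck limit 1 / Dmax, which is what makes such single-step bounds quick to evaluate.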