11 research outputs found

    Modeling response times in the Google ROADEF/EURO Challenge

    Get PDF
    In this paper, we extend the machine reassignment model proposed by Google for the ROADEF/EURO Challenge. The aim of the challenge is to develop algorithms for the efficient solution of data-center consolidation problems. The problem stated in the challenge focuses mainly on dependability requirements and does not take performance requirements (end-to-end response times) into account. We extend the Google problem definition by modeling and constraining end-to-end response times. We provide experimental results to show the effectiveness of this extension. Copyright is held by the author/owner(s).
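
    A minimal sketch of what an end-to-end response time constraint of this kind can look like, assuming each machine behaves as an M/M/1 queue; the symbols (D_t, U_m, lambda_t, C_m, R_p^max) are illustrative and are not the challenge's notation.

```latex
% Illustrative end-to-end response time constraint (not the challenge's notation).
% A process p spans tiers t in T_p; tier t is placed on machine m(t) with
% utilization U_{m(t)}. Assuming each machine behaves as an M/M/1 queue with
% per-request service demand D_t and arrival rate \lambda_t:
\begin{equation}
  R_p \;=\; \sum_{t \in T_p} \frac{D_t}{1 - U_{m(t)}} \;\le\; R_p^{\max},
  \qquad
  U_m \;=\; \sum_{t \,:\, m(t) = m} \frac{D_t \, \lambda_t}{C_m}.
\end{equation}
```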

    On the Cooling-aware Workload Placement Problem

    Get PDF

    Indirect estimation of service demands in the presence of structural changes

    No full text
    According to the utilization law, throughput and utilization are linearly related and their measurements can be used for the indirect estimation of service demands. In practice, however, hardware and software modifications as well as non-modeled loads due to periodic maintenance activities make the estimation process difficult and often impossible without manual intervention to analyze the data. Due to configuration changes, real-world datasets show that workload and utilization measurements tend to group themselves into multiple linear clusters. To estimate the service demands of the underlying performance models, the different configurations have to be identified. In this paper, we present an algorithm that, exploiting the timestamps associated with each throughput and utilization observation, identifies the different configurations of the system and estimates the corresponding service demands. Our proposal is based on robust estimation and inference techniques and is therefore suitable for analyzing contaminated datasets. Moreover, not only sudden and occasional changes of the system, but also recurring patterns in the system's behavior, due, for instance, to scheduled maintenance tasks, are detected. An efficient implementation of the algorithm has been made publicly available and, in this paper, its performance is assessed on synthetic as well as on experimental data. © 2013 Elsevier B.V. All rights reserved.
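
    A minimal sketch of the utilization-law regression that such an estimator builds on, using a robust (Huber) fit per configuration. The synthetic data, the demand values, and the assumption that the configuration boundaries are already known are all illustrative; the automatic detection of configuration changes is the paper's contribution and is not reproduced here.

```python
# Sketch: indirect service-demand estimation via the utilization law,
# U(t) = sum_k D_k * X_k(t), using a robust regression per configuration.
# The paper's contribution (detecting configuration changes and recurring
# patterns automatically) is NOT reproduced here; the segment boundaries
# below are assumed known for illustration.
import numpy as np
from sklearn.linear_model import HuberRegressor  # robust to contaminated samples

rng = np.random.default_rng(0)
n, classes = 400, 2
X = rng.uniform(1.0, 20.0, size=(n, classes))           # per-class throughput (req/s)

# Two configurations with different service demands (s/req), switching at t = 200.
D_true = {0: np.array([0.020, 0.010]), 1: np.array([0.012, 0.018])}
segment = (np.arange(n) >= 200).astype(int)
U = np.array([X[t] @ D_true[segment[t]] for t in range(n)])
U += rng.normal(0.0, 0.01, size=n)                       # measurement noise
U[rng.choice(n, 20, replace=False)] += 0.3               # contamination / outliers

for s in (0, 1):
    mask = segment == s
    est = HuberRegressor(fit_intercept=False).fit(X[mask], U[mask])
    print(f"configuration {s}: estimated demands = {est.coef_.round(4)}")
```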

    Indirect estimation of service demands in the presence of structural changes

    No full text
    According to the utilization law, throughput and utilization are linearly related and their measurements can be used for the indirect estimation of service demands. In practice, however, hardware and software modifications as well as non-modeled loads due to periodic maintenance activities make the estimation process difficult and often impossible without manual intervention to analyze the data. Due to configuration changes, real-world data sets show that workload and utilization measurements tend to group themselves into multiple linear clusters. To estimate the service demands of the underlying performance models, the different configurations have to be identified. In this paper, we present an algorithm that, exploiting the timestamps associated with each throughput and utilization observation, identifies the different configurations of the system and estimates the corresponding service demands. Our proposal is based on robust estimation and inference techniques and is therefore suitable for analyzing contaminated data sets. Moreover, not only sudden and occasional changes of the system, but also recurring patterns in the system's behavior, due, for instance, to scheduled maintenance tasks, are detected. An efficient implementation of the algorithm has been made publicly available and, in this paper, its performance is assessed on synthetic as well as on experimental data. © 2012 IEEE.

    Optimal virtual machine scheduling with Anvik

    No full text
    In Infrastructure-as-a-Service (IaaS) clouds, the provider has to decide on which server the virtual machines (VMs) requested by the users should be provisioned. This is an online scheduling problem, in which incoming VMs, typically with different resource requirements, have to be scheduled to one of several heterogeneous servers with limited capacity. For the provider of a public cloud, the objective is to maximize profit, the difference between the revenue due to running the VMs and the operating cost of the servers. In some cases, it might be advantageous to perform VM admission control and reject low-profit VMs if low-cost servers are unavailable. We model this problem as a continuous-time Markov decision process and present a tool, Anvik, for the computation of the optimal scheduling and admission control policy. Anvik is released as open source.
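
    A minimal sketch of the underlying technique, not of Anvik itself: a single-server, single-class admission-control problem modeled as a continuous-time MDP and solved by uniformization and value iteration under a discounted profit criterion. All rates, prices, and the capacity below are assumed values.

```python
# Sketch (NOT Anvik): admission control for one server and one VM class,
# modeled as a continuous-time MDP, solved by uniformization + value
# iteration with a discounted profit criterion. All parameters are illustrative.
import numpy as np

C     = 8       # server capacity (max concurrent VMs)
lam   = 3.0     # VM arrival rate
mu    = 0.5     # per-VM completion rate
rev   = 1.0     # revenue rate per running VM
cost  = 2.5     # operating cost rate of the (always-on) server
alpha = 0.05    # continuous-time discount rate

Lam  = lam + C * mu            # uniformization constant
beta = Lam / (alpha + Lam)     # discount factor of the uniformized chain

V = np.zeros(C + 1)            # V[n]: value with n VMs running
for _ in range(10_000):
    V_new = np.empty_like(V)
    for n in range(C + 1):
        reward = (rev * n - cost) / (alpha + Lam)
        admit  = V[n + 1] if n < C else V[n]     # admission only if capacity left
        arrive = max(admit, V[n])                # choose admit vs. reject on arrival
        depart = V[n - 1] if n > 0 else V[n]
        stay   = V[n]                            # fictitious self-loop completes uniformization
        V_new[n] = reward + beta * (
            lam / Lam * arrive
            + n * mu / Lam * depart
            + (C - n) * mu / Lam * stay
        )
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = ["admit" if (n < C and V[n + 1] >= V[n]) else "reject" for n in range(C + 1)]
print(policy)
```

    Under these assumptions the resulting policy is typically of threshold type: admit as long as the marginal value V[n+1] - V[n] is non-negative.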

    Optimizing Cooling and Server Power Consumption

    No full text
    P. Cremonesi; S. Gualandi; A. Sansottera

    Fitting Second-Order Acyclic Marked Markovian Arrival Processes

    No full text
    Markovian Arrival Processes (MAPs) are a tractable class of point processes useful to model correlated time series, such as those commonly found in network traces and system logs used in performance analysis and reliability evaluation. Marked MAPs (MMAPs) generalize MAPs by further allowing the modeling of multi-class traces, possibly with cross-correlation between multi-class arrivals. In this paper, we present analytical formulas to fit second-order acyclic MMAPs with an arbitrary number of classes. We initially define closed-form formulas to fit second-order MMAPs with two classes, where the underlying MAP is in canonical form. Our approach leverages forward and backward moments, which have recently been defined but never exploited jointly for fitting. Then, we show how to sequentially apply these formulas to fit an arbitrary number of classes. Representative examples and trace-driven simulation using storage traces show the effectiveness of our approach for fitting empirical datasets.
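
    A minimal sketch of the forward and backward moments that such moment-based fitting inverts, computed for an illustrative two-state (second-order), two-class MMAP. This is not the paper's closed-form fitting procedure, only the moment definitions it builds on; the matrices are assumed values.

```python
# Sketch: forward/backward moments of a two-class MMAP (D0, D1, D2), i.e. the
# quantities that moment-based fitting inverts. This is NOT the paper's
# closed-form fitting formulas; the matrices below are illustrative.
import numpy as np
from math import factorial

D0 = np.array([[-3.0, 1.0], [0.0, -2.0]])
D1 = np.array([[ 1.5, 0.0], [0.5,  0.5]])   # class-1 arrivals
D2 = np.array([[ 0.5, 0.0], [0.5,  0.5]])   # class-2 arrivals

M = np.linalg.inv(-D0)                      # (-D0)^{-1}
P = M @ (D1 + D2)                           # embedded chain at arrival instants
# stationary vector of P: solve pi (P - I) = 0 with normalization pi 1 = 1
A  = np.vstack([(P - np.eye(2)).T, np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]

one = np.ones(2)
for c, Dc in ((1, D1), (2, D2)):
    p_c  = pi @ M @ Dc @ one                # probability the next arrival is class c
    pi_c = (pi @ M @ Dc) / p_c              # phase distribution right after a class-c arrival
    for k in (1, 2, 3):
        fwd = factorial(k) * (pi @ np.linalg.matrix_power(M, k) @ M @ Dc @ one) / p_c
        bwd = factorial(k) * (pi_c @ np.linalg.matrix_power(M, k) @ one)
        print(f"class {c}: k={k}  forward={fwd:.4f}  backward={bwd:.4f}")
```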

    Consolidation of multi-tier workloads with performance and reliability constraints

    No full text
    Server consolidation leverages hardware virtualization to reduce the operational cost of data centers through the intelligent placement of existing workloads. This work proposes a consolidation model that considers power, performance, and reliability aspects simultaneously. There are two main innovative contributions in the model, focused on performance and reliability requirements. The first contribution is the possibility to guarantee average response time constraints for multi-tier workloads. The second contribution is the possibility to model active/active clusters of servers, with enough spare capacity on the fail-over servers to manage the load of the failed ones. At the heart of the proposal is a non-linear optimization model that has been linearized using two different exact techniques. Moreover, a heuristic method that allows for the fast computation of near-optimal solutions has been developed and validated.
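
    An illustrative sketch, in generic notation rather than the paper's, of how a power objective and an active/active fail-over spare-capacity requirement can be written in such a placement model; the aggregate-spare-capacity form of the last constraint is a simplification.

```latex
% Illustrative sketch (not the paper's exact formulation). Binary x_{wj}
% assigns workload w to server j; y_j = 1 if server j is powered on.
% Objective: idle plus load-dependent power. Last constraint: if server f
% in cluster K fails, the surviving servers of K must absorb its load.
\begin{align}
  \min \;& \sum_{j} \Big( P_j^{\mathrm{idle}}\, y_j + \sum_{w} p_{wj}\, x_{wj} \Big) \\
  \text{s.t.}\;& \sum_{j} x_{wj} = 1 && \forall w, \\
  & \sum_{w} u_{w}\, x_{wj} \;\le\; U^{\max} C_j\, y_j && \forall j, \\
  & \sum_{w} u_{w}\, x_{wf} \;\le\; \sum_{j \in K \setminus \{f\}}
      \Big( U^{\max} C_j\, y_j - \sum_{w} u_{w}\, x_{wj} \Big)
    && \forall K,\; \forall f \in K.
\end{align}
```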

    Modeling response times in the Google ROADEF/EURO Challenge

    No full text