
    Modeling Data Center Co-Tenancy Performance Interference

    A multi-core machine can execute several applications simultaneously. These jobs are scheduled on different cores and compete for shared resources such as the last-level cache and memory bandwidth. Such competition can cause performance degradation. Data centers often utilize virtualization to provide a certain level of performance isolation. However, some shared resources cannot be divided, even in a virtualized system, to ensure complete isolation. If the performance degradation caused by co-tenancy is not known to the cloud administrator, a data center often has to dedicate a whole machine to a latency-sensitive application to guarantee its quality of service. Co-run scheduling attempts to utilize resources well by scheduling compatible jobs onto one machine while maintaining their service-level agreements. An ideal co-run scheduling scheme requires accurate contention modeling. Recent studies of co-run modeling and scheduling have made steady progress in predicting performance for two co-run applications sharing a specific system. This thesis advances co-tenancy modeling in three aspects. First, given an accurate co-run model for one system, we propose a regression model to transfer that knowledge and create a model for a new system with a different hardware configuration. Second, by examining the programs that yield high prediction errors, we further leverage clustering techniques to create a model for each group of applications that show similar behavior. Clustering helps improve the prediction accuracy for those pathological cases. Third, existing research typically focuses on modeling two-application co-run cases. We extend a two-core model to three- and four-core models by introducing a lightweight micro-kernel that emulates a complicated benchmark through program instrumentation.
Our experimental evaluation shows that our cross-architecture model achieves an average prediction error of less than 2% for pairwise co-runs across the SPEC CPU2006 benchmark suite. For co-tenancy of more than two applications, we show that our model is more scalable and achieves an average prediction error of 2-3%.
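A minimal sketch of the cross-architecture transfer idea described above. The feature choice and all numbers are illustrative assumptions, not the thesis's actual model: we fit a simple linear regression mapping co-run slowdowns measured on a reference system to slowdowns on a target system, then predict target behavior for a pair measured only on the reference.

```python
import numpy as np

# Synthetic training data: slowdowns (co-run time / solo time) of the
# same application pairs measured on both a reference and a target system.
ref_slowdowns = np.array([1.05, 1.20, 1.42, 1.10, 1.33, 1.58])
tgt_slowdowns = np.array([1.08, 1.31, 1.62, 1.15, 1.48, 1.83])

# Fit y = a * x + b with ordinary least squares.
A = np.vstack([ref_slowdowns, np.ones_like(ref_slowdowns)]).T
(a, b), *_ = np.linalg.lstsq(A, tgt_slowdowns, rcond=None)

def predict_target_slowdown(ref_slowdown: float) -> float:
    """Predict the target-system slowdown from the reference-system one."""
    return a * ref_slowdown + b

# Predict for a pair that was only measured on the reference system.
print(round(predict_target_slowdown(1.25), 3))
```

In practice the regression would use richer features (e.g., hardware counters or cache footprints) rather than a single slowdown value, but the transfer structure is the same.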

    Performance-aware task scheduling in multi-core computers

    Multi-core systems have become increasingly popular because they can satisfy the growing computational demands of complex applications. The task scheduling strategy plays a key role in this vision: it ensures that task processing both satisfies Quality-of-Service requirements (QoS; in this thesis, QoS refers to the deadline) and is energy-efficient. In this thesis, we develop task scheduling strategies for multi-core computing systems. We start by looking at two objectives of a multi-core computing system. The first objective is to ensure that all tasks satisfy their time constraints (i.e., deadlines), while the second is to minimize the overall energy consumption of the platform. We develop three power-aware scheduling strategies for virtualized systems managed by Xen. Compared with Xen's original scheduling strategy, these scheduling algorithms reduce energy consumption without reducing job performance. We then find that accurately modeling the makespan of a task (before execution) is very important for making scheduling decisions. Our studies show that the discrepancy between the commonly used assumption of sequential execution and the reality of time-sharing execution may lead to inaccurate calculation of the task makespan. Thus, we investigate the impact of time-sharing execution on the task makespan, and propose a method to model and determine the makespan under time-sharing execution. Thereafter, we extend our work to a more complex scenario: scheduling DAG applications on time-sharing systems. Based on our time-sharing makespan model, we further develop scheduling strategies for DAG jobs in time-sharing execution, which achieve more effective task execution.
Finally, since resource interference also significantly affects the performance of co-running tasks in multi-core computers (and may thus influence scheduling decisions), we investigate the factors that impact performance when tasks co-run on a multi-core computer and propose machine-learning-based prediction frameworks to predict the performance of the co-running tasks. The experimental results show that the techniques proposed in this thesis are effective.
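The gap between sequential and time-sharing makespans mentioned above can be illustrated with a toy model (a sketch under an idealized round-robin assumption, not the thesis's actual model): while k CPU-bound tasks share one core, each progresses at rate 1/k, so the task with the least remaining work finishes first and the rate improves for the rest.

```python
# Finish times of CPU-bound tasks sharing one core under ideal
# round-robin time sharing (processor-sharing approximation).
def time_sharing_finish_times(work):
    """Return finish times for tasks with the given solo run times."""
    remaining = sorted(work)
    finish, now, prev = [], 0.0, 0.0
    for i, w in enumerate(remaining):
        k = len(remaining) - i          # tasks still running
        now += (w - prev) * k           # wall time until this task finishes
        finish.append(now)
        prev = w
    return finish

# Three tasks needing 2, 3, and 5 seconds of CPU when run alone:
print(time_sharing_finish_times([2, 3, 5]))  # [6.0, 8.0, 10.0]
```

Note that the shortest task finishes at 6 s, not the 2 s a sequential-execution assumption would predict, which is exactly the kind of discrepancy the makespan model must account for.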

    On Improving The Performance And Resource Utilization of Consolidated Virtual Machines: Measurement, Modeling, Analysis, and Prediction

    This dissertation addresses the performance-related issues of consolidated Virtual Machines (VMs). Virtualization is an important technology for the Cloud and data centers. Essential features of a data center, such as fault tolerance, high availability, and the pay-as-you-go model of services, are implemented with the help of VMs. The Cloud has become one of the most significant innovations of the past decade. Research is ongoing into deploying newer and more diverse sets of applications, such as High-Performance Computing (HPC) and parallel applications, on the Cloud. The primary method to increase server resource utilization is VM consolidation: running as many VMs as possible on a server is the key to improving resource utilization. On the other hand, consolidating too many VMs on a server can degrade the performance of all VMs. Therefore, it is necessary to measure, analyze, and find ways to predict the performance variation of consolidated VMs. This dissertation investigates the causes of performance variation of consolidated VMs, the relationship between resource contention and consolidation performance, and ways to predict the performance variation. Experiments have been conducted with real virtualized servers, without using any simulation; all the results presented here are real system data. In this dissertation, a methodology is introduced for conducting experiments with a large number of tasks and VMs, called the Incremental Consolidation Benchmarking Method (ICBM). The experiments have been done with different types of resource-intensive tasks, parallel workflows, and VMs. Furthermore, to experiment with a large number of VMs and collect the data, a scheduling framework is also designed and implemented. Experimental results are presented to demonstrate the efficiency of the ICBM and the framework.
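The abstract does not spell out the ICBM procedure, but the incremental idea it names can be sketched as follows. Everything here is a hypothetical stand-in: `run_workload` substitutes a synthetic cost function for actually launching the benchmark in a VM, and the degradation numbers are invented.

```python
# Hypothetical sketch of incremental consolidation benchmarking:
# measure a workload alone to get a baseline, then add co-located
# VMs one at a time and record the relative slowdown at each level.
def run_workload(n_corunners: int) -> float:
    """Pretend runtime that grows with contention (synthetic stand-in)."""
    return 10.0 * (1 + 0.15 * n_corunners)

def incremental_consolidation(max_vms: int) -> dict:
    baseline = run_workload(0)
    results = {}
    for n in range(1, max_vms + 1):
        runtime = run_workload(n)
        results[n] = runtime / baseline   # slowdown with n co-runners
    return results

print(incremental_consolidation(3))
```

Sweeping the consolidation level one VM at a time is what lets such a method attribute each increment of degradation to the VM that was just added.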

    Modeling cross-architecture co-tenancy performance interference

    Cloud computing has become a dominant computing paradigm for providing elastic, affordable computing resources to end users. Due to the increased computing power of modern machines driven by multi/many-core computing, data centers often co-locate multiple virtual machines (VMs) on one physical machine, resulting in co-tenancy, resource sharing, and competition. Applications or VMs co-located on one physical machine can interfere with each other despite the promise of performance isolation through virtualization. Modeling and predicting co-run interference therefore becomes critical for data center job scheduling and QoS (Quality of Service) assurance. Co-run interference can be characterized by two metrics, sensitivity and pressure, where the former denotes how an application's performance is affected by its co-run applications, and the latter measures how it impacts the performance of its co-run applications. This paper shows that sensitivity and pressure are both application- and architecture-dependent. Further, we propose a regression model that predicts an application's sensitivity and pressure across architectures with high accuracy. This regression model enables a data center scheduler to guarantee the QoS of a VM/application when it is scheduled to co-locate with other VMs/applications.
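The two metrics above can be computed directly from solo and paired run times. A minimal sketch, with synthetic numbers and a simple "relative slowdown" definition assumed for both metrics (the paper's exact formulation may differ):

```python
# Sensitivity: how much an application itself slows down when co-located.
# Pressure: how much it slows down its co-runner.
def sensitivity(solo_time: float, corun_time: float) -> float:
    """Relative slowdown of the application under co-location."""
    return corun_time / solo_time - 1.0

def pressure(neighbor_solo: float, neighbor_corun: float) -> float:
    """Relative slowdown the application inflicts on its co-runner."""
    return neighbor_corun / neighbor_solo - 1.0

# App A solo: 10 s; co-run with B: 12 s. B solo: 8 s; co-run with A: 10 s.
print(sensitivity(10.0, 12.0))  # A's sensitivity: ~0.2 (20% slowdown)
print(pressure(8.0, 10.0))      # A's pressure on B: 0.25 (25% slowdown)
```

Separating the two matters for scheduling: a highly sensitive but low-pressure application is a good neighbor for others but needs quiet neighbors itself, and vice versa.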