
    Limitations and Solutions for Real-Time Local Inter-Domain Communication in Xen

    As computer hardware becomes increasingly powerful, there is an ongoing trend towards integrating complex, legacy real-time systems onto fewer hosts through virtualization. Especially in embedded systems domains such as avionics and automotive engineering, this kind of system integration can greatly reduce system weight, cost, and power requirements. When systems are integrated in this manner, network communication may become local inter-domain communication (IDC) within the same host. This paper examines the limitations of inter-domain communication in Xen, a widely used open-source virtual machine monitor (VMM) that has recently been extended to support real-time domain scheduling. We find that both the VMM scheduler and the manager domain can significantly impact real-time IDC performance under different conditions, and show that improving the VMM scheduler alone cannot deliver real-time performance for local IDC. To address these limitations, we present the RTCA, a Real-Time Communication Architecture within the manager domain in Xen, along with empirical evaluations whose results demonstrate that the latency of communication tasks can be improved dramatically, from milliseconds to microseconds, by combining the RTCA with a real-time VMM scheduler.
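    The RTCA implementation itself is not reproduced in the abstract; as a rough illustration of the underlying idea, the Python sketch below shows prioritized handling of inter-domain packets inside a manager domain, where real-time traffic is forwarded ahead of bulk traffic. The PriorityIDCQueue class, the priority values, and the batch_limit parameter are all hypothetical and only stand in for the kind of priority-aware queueing the RTCA introduces in Dom0.

        import heapq
        from dataclasses import dataclass, field
        from itertools import count

        @dataclass(order=True)
        class Packet:
            priority: int                       # lower value = more urgent (hypothetical convention)
            seq: int                            # tie-breaker keeps FIFO order within one priority
            payload: bytes = field(compare=False, default=b"")
            src_domain: str = field(compare=False, default="")

        class PriorityIDCQueue:
            """Toy stand-in for priority-aware packet handling in the manager domain."""
            def __init__(self, batch_limit=4):
                self._heap = []
                self._seq = count()
                self.batch_limit = batch_limit  # bounds the work done per dispatch round

            def enqueue(self, priority, payload, src_domain):
                heapq.heappush(self._heap, Packet(priority, next(self._seq), payload, src_domain))

            def dispatch(self):
                """Forward queued packets strictly in priority order, one bounded batch at a time."""
                sent = 0
                while self._heap and sent < self.batch_limit:
                    pkt = heapq.heappop(self._heap)
                    print(f"forwarding {len(pkt.payload)} B from {pkt.src_domain} (prio {pkt.priority})")
                    sent += 1

        q = PriorityIDCQueue()
        q.enqueue(10, b"bulk-data" * 100, "DomU-batch")   # low-priority bulk transfer
        q.enqueue(1, b"sensor-frame", "DomU-rt")          # real-time packet jumps ahead of it
        q.dispatch()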

    On the Virtualization of CUDA Based GPU Remoting on ARM and X86 Machines in the GVirtuS Framework

    The astonishing development of diverse hardware platforms is twofold: on one side, the push for exascale performance in big-data processing and management; on the other, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system, developed in 2009 and first introduced in 2010, that provides a completely transparent layer between GPUs and VMs. This paper presents the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management, and scheduling. Thanks to its new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs, on local workstations, computing clusters, and distributed cloud appliances.
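    GVirtuS itself is a split-driver system whose actual API is not shown in the abstract; the minimal Python sketch below only illustrates the general remoting pattern it relies on: a guest-side front end serializes a CUDA call and forwards it to a back end that owns the physical GPU, which executes the call and returns the result. The Frontend and FakeBackend classes, the JSON wire format, and the cudaMalloc arguments are all illustrative assumptions.

        import json

        class FakeBackend:
            """Stands in for the host-side process that controls the physical GPU."""
            def __init__(self):
                self._next_ptr = 0x1000
                self._allocations = {}

            def handle(self, request: str) -> str:
                call = json.loads(request)
                if call["routine"] == "cudaMalloc":
                    ptr = self._next_ptr
                    self._next_ptr += call["size"]
                    self._allocations[ptr] = call["size"]
                    return json.dumps({"result": 0, "devPtr": ptr})
                return json.dumps({"result": -1})       # unknown routine

        class Frontend:
            """Guest-side stub: serializes the CUDA call and ships it to the back end."""
            def __init__(self, backend):
                self.backend = backend                  # in reality a socket or shared-memory communicator

            def cudaMalloc(self, size: int):
                reply = json.loads(self.backend.handle(
                    json.dumps({"routine": "cudaMalloc", "size": size})))
                return reply["result"], reply["devPtr"]

        fe = Frontend(FakeBackend())
        status, dev_ptr = fe.cudaMalloc(1 << 20)        # guest asks for 1 MiB of device memory
        print(f"cudaMalloc -> status={status}, devPtr={hex(dev_ptr)}")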

    Optimizing Virtual Machine I/O Performance in Virtualized Cloud by Differentiated-Frequency Scheduling and Functionality Offloading

    Many enterprises are increasingly moving their applications to private cloud environments or public cloud platforms. A key technology driving cloud computing is virtualization, which allows multiple VMs to be served by one physical machine, providing better management flexibility and significant savings in operational costs. However, one important consequence of virtualizing hosts in the cloud is the negative impact it has on the I/O performance of the applications running in the VMs.

    I/O Workload in Virtualized Data Center Using Hypervisor

    Cloud computing [10] is gaining popularity as a way to virtualize the datacenter and increase flexibility in the use of computational resources. The virtual machine approach can dramatically improve the efficiency, power utilization, and availability of costly hardware resources such as CPU and memory. Previously, virtualization in the datacenter was handled by the back end of the Eucalyptus software, while the front end was installed on a separate machine. Performance measurements were carried out for network I/O applications in the virtualized cloud and analyzed in terms of the impact of co-locating applications, covering throughput and resource-sharing effectiveness, including the effect of idle instances on applications running concurrently on the same physical host. This project proposes using a hypervisor to install the Eucalyptus software on a single physical machine to set up a cloud computing environment: with the hypervisor, the front end and back end of Eucalyptus are installed on the same machine. Performance is measured in terms of the interference between concurrently executing CPU-intensive and network-intensive workloads under the Xen virtual machine monitor. The main motivation of this project is to provide a scalable virtualized datacenter.
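    The measurement setup is not described in detail; the short Python sketch below only illustrates the kind of interference experiment the abstract refers to, running a CPU-intensive worker and a network-style worker concurrently on one host and reporting their throughput. The workloads, the two-second duration, and the use of a local socket pair instead of real network traffic are assumptions made purely for illustration.

        import multiprocessing as mp
        import socket
        import time

        def cpu_worker(duration, score):
            """Hypothetical CPU-intensive co-located workload."""
            end, loops = time.time() + duration, 0
            while time.time() < end:
                sum(i * i for i in range(1000))
                loops += 1
            score.value = loops

        def net_worker(duration, nbytes):
            """Hypothetical network-style workload pushing data through a local socket pair."""
            a, b = socket.socketpair()
            b.setblocking(False)
            chunk, end, sent = b"x" * 65536, time.time() + duration, 0
            while time.time() < end:
                sent += a.send(chunk)
                try:
                    while True:
                        b.recv(65536)               # drain the receiving end
                except BlockingIOError:
                    pass
            nbytes.value = sent

        if __name__ == "__main__":
            duration, score, nbytes = 2, mp.Value("q", 0), mp.Value("q", 0)
            procs = [mp.Process(target=cpu_worker, args=(duration, score)),
                     mp.Process(target=net_worker, args=(duration, nbytes))]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(f"CPU loops: {score.value}, network throughput: {nbytes.value / duration / 1e6:.1f} MB/s")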

    Developing power‐aware scheduling mechanisms for computing systems virtualized by Xen

    Cloud computing has emerged as one of the most important technologies for interconnecting people and building the so-called Internet of People (IoP). In such a cloud-based IoP, the virtualization technique provides the key supporting environment for running IoP jobs such as performing data analysis and mining personal information. Nowadays, energy consumption in such a system is a critical metric for measuring its sustainability and eco-friendliness. This paper develops three power-aware scheduling strategies for virtualized systems managed by Xen, a popular virtualization technique: the Least Performance Loss Scheduling strategy, the No Performance Loss Scheduling strategy, and the Best Frequency Match scheduling strategy. These power-aware strategies are developed by identifying the limitations of Xen in scaling the CPU frequency, and they aim to reduce energy waste without sacrificing the running performance of jobs in computing systems virtualized by Xen. Least Performance Loss Scheduling works by re-arranging the execution order of the virtual machines (VMs). No Performance Loss Scheduling works by setting a proper initial CPU frequency for running the VMs. Best Frequency Match reduces energy waste and performance loss by allowing VMs to jump the queue, so that the VM put into execution best matches the current CPU frequency. Scheduling for both single-core and multicore processors is considered in this paper. The evaluation experiments show that, compared with the original scheduling strategy in Xen, the developed power-aware scheduling algorithms reduce energy consumption without degrading the performance of the jobs running in Xen.
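    As a rough illustration of the queue-jumping idea behind Best Frequency Match, the Python sketch below picks from a run queue the VM whose preferred frequency is closest to the frequency the core is currently running at. The VM names, the preferred-frequency annotations, and the assumption that DVFS settles near the dispatched VM's preference are hypothetical; the actual strategies live inside Xen's scheduler.

        from collections import deque

        def best_frequency_match(run_queue, current_freq_mhz):
            """Pick the queued VM whose preferred frequency is closest to the current CPU frequency."""
            best = min(run_queue, key=lambda vm: abs(vm["pref_freq"] - current_freq_mhz))
            run_queue.remove(best)              # the chosen VM jumps the queue
            return best

        # Hypothetical run queue: each VM is annotated with a preferred frequency in MHz.
        queue = deque([
            {"name": "vm-analytics", "pref_freq": 2600},
            {"name": "vm-web",       "pref_freq": 1200},
            {"name": "vm-batch",     "pref_freq": 1800},
        ])

        current = 1300                          # MHz the core happens to be running at right now
        while queue:
            vm = best_frequency_match(queue, current)
            print(f"dispatch {vm['name']} (prefers {vm['pref_freq']} MHz) at {current} MHz")
            current = vm["pref_freq"]           # assume DVFS then settles near the VM's preference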

    Optimization of Composite Cloud Service Processing with Virtual Machines

    By leveraging virtual machine (VM) technology, we optimize cloud system performance through refined resource allocation when processing user requests with composite services. Our contribution is three-fold. (1) We devise a VM resource allocation scheme that minimizes the processing overhead of task execution. (2) We comprehensively investigate the best-suited task scheduling policy under different design parameters. (3) We also explore the best-suited resource sharing scheme with adjustable divisible resource fractions on running tasks in terms of the Proportional-Share Model (PSM), which can be split into an absolute mode (called AAPSM) and a relative mode (RAPSM). We implement a prototype system over a cluster environment deployed with 56 real VM instances and summarize valuable experience from our evaluation. When system resources are in short supply, Lightest Workload First (LWF) is mostly recommended because it minimizes the overall response extension ratio (RER) for both sequential-mode and parallel-mode tasks. In a competitive situation with over-commitment of resources, the best option is to combine LWF with both AAPSM and RAPSM: it outperforms other solutions in this situation by more than 16% with respect to worst-case response time and by more than 7.4% with respect to fairness.
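    The paper's exact definitions are not reproduced in the abstract; the Python sketch below only illustrates the Lightest Workload First ordering, assuming that each task's workload is known in advance and that the response extension ratio (RER) is the observed response time divided by the task's uncontended execution time. The task names, workloads, and unit capacity are illustrative values.

        def lightest_workload_first(pending_tasks):
            """Order pending tasks by estimated workload, smallest first."""
            return sorted(pending_tasks, key=lambda t: t["workload"])

        def response_extension_ratio(response_time, ideal_time):
            """Assumed RER: observed response time relative to the uncontended execution time."""
            return response_time / ideal_time

        tasks = [
            {"name": "t1", "workload": 120.0},   # workload in hypothetical CPU-seconds
            {"name": "t2", "workload": 15.0},
            {"name": "t3", "workload": 60.0},
        ]

        clock, capacity = 0.0, 1.0               # one unit of workload per second of wall clock
        for task in lightest_workload_first(tasks):
            run_time = task["workload"] / capacity
            clock += run_time
            print(f"{task['name']}: finishes at {clock:.0f} s, RER = {response_extension_ratio(clock, run_time):.2f}")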

    Scheduling Data Intensive Workloads through Virtualization on MapReduce based Clouds

    MapReduce has become a popular programming model for running data-intensive applications on the cloud. Completion time goals, or deadlines, of MapReduce jobs set by users are becoming crucial in existing cloud-based data processing environments such as Hadoop. There is a conflict between scheduling MapReduce jobs to meet deadlines and preserving "data locality" (assigning tasks to nodes that hold their input data): to meet a deadline, a task may be scheduled on a node without local input data, causing expensive data transfer from a remote node. In this paper, a novel scheduler is proposed to address this problem, based primarily on a dynamic resource reconfiguration approach. It has two components: 1) a Resource Predictor, which dynamically determines the number of Map/Reduce slots each job requires to meet its completion time guarantee; and 2) a Resource Reconfigurator, which adjusts CPU resources without violating the users' completion time goals by dynamically growing or shrinking individual VMs, so as to maximize data locality and the use of resources among the active jobs. The proposed scheduler has been evaluated against the Fair Scheduler on a virtual cluster built on a physical cluster of 20 machines. The results demonstrate an increase of about 12% in job throughput.
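    The Resource Predictor's exact model is not given in the abstract; a common back-of-the-envelope version, shown in the Python sketch below, estimates the number of slots needed as the remaining work divided by the product of the per-slot processing rate and the time left before the deadline. The required_slots function name and all numeric values are assumptions used only to make the calculation concrete.

        import math

        def required_slots(remaining_work_gb, per_slot_rate_gb_per_s, seconds_to_deadline):
            """Estimate the number of Map/Reduce slots needed to finish before the deadline,
            assuming each slot processes data at a fixed rate."""
            if seconds_to_deadline <= 0:
                return math.inf                      # the deadline has already passed
            return math.ceil(remaining_work_gb / (per_slot_rate_gb_per_s * seconds_to_deadline))

        slots = required_slots(
            remaining_work_gb=300.0,                 # input data still to process (hypothetical)
            per_slot_rate_gb_per_s=0.05,             # GB per second per slot (hypothetical)
            seconds_to_deadline=1800,                # 30 minutes until the user's deadline
        )
        print(f"job needs at least {slots} concurrent slots to meet its deadline")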

    Improving Accuracy of Virtual Machine Power Model by Relative-PMC Based Heuristic Scheduling

    The conventional utilization-based power model is effective for measuring the power consumption of physical machines. In virtualized environments, however, its accuracy cannot be guaranteed because of recursive resource accesses among multiple virtual machines. In this paper, we present a novel virtual machine scheduling algorithm that uses Performance Monitor Counters (PMCs) as heuristic information to compensate for the recursive power consumption. Theoretical analysis indicates that the error of the virtual machine power model can be quantitatively bounded when the proposed scheduling algorithm is used. Extensive experiments based on standard benchmarks show that the error of virtual machine power measurements is significantly reduced compared with the classic credit-based scheduling algorithm.
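    The exact power model and PMC compensation are not reproduced in the abstract; the Python sketch below only contrasts the conventional linear utilization-based host model with attributing host power to individual VMs in proportion to their share of performance-counter events. The idle and peak wattages, the per-VM event counts, and the choice of retired instructions as the counter are illustrative assumptions.

        def utilization_power(util, p_idle=80.0, p_max=200.0):
            """Conventional linear utilization-based power model for a physical host (watts)."""
            return p_idle + (p_max - p_idle) * util

        def vm_power_share(host_power, vm_pmc_events, total_pmc_events):
            """Attribute host power to one VM in proportion to its share of PMC events
            (e.g. retired instructions) rather than CPU utilization alone."""
            share = vm_pmc_events / total_pmc_events if total_pmc_events else 0.0
            return host_power * share

        host_power = utilization_power(util=0.75)    # 170 W at 75% utilization
        # Hypothetical per-VM performance-counter readings over one sampling period.
        pmc = {"vm-db": 6.0e9, "vm-web": 2.5e9, "dom0": 1.5e9}
        total = sum(pmc.values())
        for vm, events in pmc.items():
            print(f"{vm}: {vm_power_share(host_power, events, total):.1f} W")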