8 research outputs found

    Efficient data traverse paths for both I/O and computation intensive workloads

    Get PDF
    Virtualization has achieved mainstream status in the enterprise IT industry. Despite its widespread adoption, it is known that virtualization also introduces non-trivial overhead when executing tasks on a virtual machine (VM). In particular, a combined effect of device virtualization overhead and CPU scheduling latency can cause performance degradation when computation-intensive tasks and I/O-intensive tasks are co-located on a VM. Such interference causes extra energy consumption as well. In this paper, we present Hylics, a novel solution that enables efficient data traverse paths for both I/O- and computation-intensive workloads. This is achieved by deploying an in-memory file system and network service at the hypervisor level. Several important design issues are pinpointed and addressed during our prototype implementation, including efficient intermediate data sharing, network service offloading, and QoS-aware memory usage management. Based on our real deployment on KVM, Hylics can significantly improve computation and I/O performance for hybrid workloads

    End-to-end informed VM selection in compute clouds

    Full text link
    The selection of resources, particularly VMs, in current public IaaS clouds is usually done in a blind fashion, as cloud users do not have much information about resource consumption by co-tenant third-party tasks. In particular, communication patterns can play a significant part in cloud application performance and responsiveness, especially in the case of novel latency-sensitive applications, increasingly common in today’s clouds. Thus, herein we propose an end-to-end approach to the VM allocation problem using policies based uniquely on round-trip time measurements between VMs. Those become part of a user-level ‘Recommender Service’ that receives VM allocation requests with certain network-related demands and matches them to a suitable subset of VMs available to the user within the cloud. We propose and implement end-to-end algorithms for VM selection that cover desirable profiles of communication between VMs in distributed applications in a cloud setting, such as profiles with prevailing pair-wise, hub-and-spokes, or clustered communication patterns between constituent VMs. We quantify the expected benefits of deploying our Recommender Service by comparing our informed VM allocation approaches to conventional, random allocation methods, based on real measurements of latencies between Amazon EC2 instances. We also show that our approach is completely independent of cloud architecture details, is adaptable to different types of applications and workloads, and is lightweight and transparent to cloud providers. This work is supported in part by the National Science Foundation under grant CNS-0963974
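    As a rough illustration of RTT-informed selection (a hypothetical greedy heuristic, not the paper's actual algorithms), a clustered communication profile can be served by growing a VM set that keeps the worst pairwise round-trip time low:

    ```python
    def select_clustered_vms(rtt, k):
        """Greedy sketch: pick k VMs whose pairwise RTTs are low.

        rtt: dict mapping frozenset({a, b}) -> measured round-trip time (ms).
        Seeds the set with the lowest-latency pair, then repeatedly adds
        the VM that minimizes the worst RTT to the already-chosen set.
        """
        vms = {v for pair in rtt for v in pair}
        chosen = set(min(rtt, key=rtt.get))          # best pair as seed
        while len(chosen) < k:
            best = min(
                vms - chosen,
                key=lambda v: max(rtt[frozenset({v, c})] for c in chosen),
            )
            chosen.add(best)
        return chosen

    # Made-up RTT measurements between four candidate VMs.
    rtts = {
        frozenset({"a", "b"}): 1.2,
        frozenset({"a", "c"}): 9.0,
        frozenset({"a", "d"}): 1.5,
        frozenset({"b", "c"}): 8.5,
        frozenset({"b", "d"}): 1.4,
        frozenset({"c", "d"}): 7.9,
    }
    print(select_clustered_vms(rtts, 3))  # -> {'a', 'b', 'd'}
    ```

    A real service would refresh the RTT matrix periodically and expose separate heuristics for pair-wise and hub-and-spokes profiles.
    
    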

    Task Runtime Prediction in Scientific Workflows Using an Online Incremental Learning Approach

    Full text link
    Many algorithms in workflow scheduling and resource provisioning rely on the performance estimation of tasks to produce a scheduling plan. A profiler that is capable of modeling the execution of tasks and predicting their runtime accurately, therefore, becomes an essential part of any Workflow Management System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS) platforms that use clouds for deploying scientific workflows, task runtime prediction becomes more challenging because it requires the processing of a significant amount of data in a near real-time scenario while dealing with the performance variability of cloud resources. Hence, relying on methods such as profiling tasks' execution data using basic statistical descriptions (e.g., mean, standard deviation) or batch offline regression techniques to estimate the runtime may not be suitable for such environments. In this paper, we propose an online incremental learning approach to predict the runtime of tasks in scientific workflows in clouds. To improve the performance of the predictions, we harness fine-grained resource monitoring data in the form of time-series records of CPU utilization, memory usage, and I/O activities that reflect the unique characteristics of a task's execution. We compare our solution to a state-of-the-art approach that exploits the resource monitoring data based on a regression machine learning technique. From our experiments, the proposed strategy reduces the prediction error by up to 29.89% compared to the state-of-the-art solution. Comment: Accepted for presentation at the main conference track of the 11th IEEE/ACM International Conference on Utility and Cloud Computing
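    The core idea of online incremental learning, as opposed to batch offline regression, can be sketched as a model that takes one gradient step per completed task. This is a minimal illustrative example (a plain SGD linear model over made-up features, not the paper's actual method):

    ```python
    class IncrementalRuntimePredictor:
        """Minimal online-learning sketch: a linear model over per-task
        resource features (e.g. mean CPU utilization, I/O rate) whose
        weights are updated with one SGD step each time a task finishes,
        so estimates adapt to cloud performance variability over time."""

        def __init__(self, n_features, lr=0.05):
            self.w = [0.0] * n_features
            self.b = 0.0
            self.lr = lr

        def predict(self, x):
            return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

        def update(self, x, runtime):
            err = self.predict(x) - runtime      # signed prediction error
            self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b -= self.lr * err

    # Simulated stream of (cpu, io, observed runtime); true relation: 2*cpu + io.
    model = IncrementalRuntimePredictor(n_features=2)
    for cpu, io, runtime in [(1.0, 2.0, 4.0), (2.0, 1.0, 5.0)] * 200:
        model.update([cpu, io], runtime)
    print(round(model.predict([1.5, 1.5]), 2))   # -> 4.5
    ```

    The appeal in a WaaS setting is that each update is O(features), so predictions stay current without retraining on the full history.
    
    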

    An adaptive scaling mechanism for managing performance variations in network functions virtualization: A case study in an NFV-based EPC

    Get PDF
    Scaling is a fundamental task that allows addressing performance variations in Network Functions Virtualization (NFV). In the literature, several approaches propose scaling mechanisms that differ in the technique utilized (e.g., reactive, predictive, and machine-learning-based). Scaling in NFV must be accurate both in its timing and in the number of instances to be scaled, aiming to avoid unnecessary procedures of provisioning and releasing resources; however, achieving high accuracy is a non-trivial task. In this paper, we propose an adaptive scaling mechanism for NFV based on Q-Learning and Gaussian Processes, which are utilized by an agent to carry out an improvement strategy for a scaling policy and, therefore, to make better decisions for managing performance variations. We evaluate our mechanism by simulation, in a case study of a virtualized Evolved Packet Core, corroborating that it is more accurate than approaches based on static threshold rules and on Q-Learning without a policy improvement strategy
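    To show how a learning agent can outgrow static threshold rules, here is a toy tabular Q-Learning scaler. Everything in it (state encoding, reward shape, load model) is made up for illustration, and the Gaussian-Process component of the paper's mechanism is omitted:

    ```python
    import random

    # Toy tabular Q-learning agent for scaling decisions (illustrative only).
    ACTIONS = (-1, 0, 1)            # scale in / no-op / scale out by one instance
    MAX_INST = 4

    def reward(load, instances):
        # Hypothetical reward: penalize overload heavily, over-provisioning mildly.
        overload = max(0, load - instances)
        return -(2.0 * overload + 0.5 * instances)

    random.seed(0)
    q = {}                          # (load, instances, action) -> estimated value
    alpha, gamma, eps = 0.3, 0.9, 0.1
    instances = 1

    for _ in range(20000):
        load = random.randint(1, MAX_INST)       # observed load level this epoch
        state = (load, instances)
        if random.random() < eps:                # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((*state, a), 0.0))
        nxt = min(MAX_INST, max(1, instances + action))
        r = reward(load, nxt)
        best_next = max(q.get((load, nxt, a), 0.0) for a in ACTIONS)
        old = q.get((*state, action), 0.0)
        q[(*state, action)] = old + alpha * (r + gamma * best_next - old)
        instances = nxt

    # After training, the greedy action under high load with one running
    # instance should be to scale out.
    greedy = max(ACTIONS, key=lambda a: q.get((4, 1, a), 0.0))
    print(greedy)
    ```

    The paper's contribution is precisely in improving the policy beyond such plain Q-Learning; this sketch only conveys the baseline decision loop.
    
    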

    Profile-based Resource Allocation for Virtualized Network Functions

    Get PDF
    Accepted in the IEEE TNSM journal: https://ieeexplore.ieee.org/document/8848599
    The virtualization of compute and network resources enables unseen flexibility for deploying network services. A wide spectrum of emerging technologies allows an ever-growing range of orchestration possibilities in cloud-based environments. But in this context it remains challenging to reconcile dynamic cloud configurations with deterministic performance. The service operator must somehow map the performance specification in the Service Level Agreement (SLA) to an adequate resource allocation in the virtualized infrastructure. We propose the use of a VNF profile to alleviate this process. This is illustrated by profiling the performance of four example network functions (a virtual router, switch, firewall, and cache server) under varying workloads and resource configurations. We then compare several methods to derive a model from the profiled datasets. We select the most accurate method to further train a model which predicts the service's performance as a function of the incoming workload and allocated resources. Our method can offer the service operator a recommended resource allocation for the targeted service, given the target performance and maximum workload specified in the SLA. This helps deploy the softwarized service with an optimal amount of resources to meet the SLA requirements, thereby avoiding unnecessary scaling steps
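    The mapping from SLA targets to a recommended allocation can be pictured as a lookup over the profiled dataset: choose the cheapest configuration whose measured performance meets the SLA at the maximum expected workload. The profile below is made-up data, and the paper trains a predictive model rather than doing a direct table lookup:

    ```python
    # Hypothetical VNF profile: (vCPUs, offered load in kpps) -> throughput (kpps).
    profile = {
        (1, 100): 95,  (1, 200): 120, (1, 400): 125,
        (2, 100): 100, (2, 200): 198, (2, 400): 240,
        (4, 100): 100, (4, 200): 200, (4, 400): 390,
    }

    def recommend_vcpus(profile, max_load, sla_throughput):
        """Return the cheapest profiled vCPU count whose throughput at
        max_load meets the SLA target, or None if no configuration does."""
        feasible = [
            cpus for (cpus, load), tput in profile.items()
            if load == max_load and tput >= sla_throughput
        ]
        return min(feasible) if feasible else None

    print(recommend_vcpus(profile, 400, 350))  # -> 4
    print(recommend_vcpus(profile, 200, 190))  # -> 2
    ```

    A trained model generalizes this idea to workloads and allocations that were never profiled directly.
    
    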

    Exposing Inter-Virtual Machine Networking Traffic to External Applications

    Get PDF
    Virtualization is a powerful and fast-growing technology that is widely accepted throughout the computing industry. The Department of Defense has moved its focus to virtualization and looks to take advantage of virtualized hardware, software, and networks. Virtual environments provide many benefits but create both administrative and security challenges. The challenge of monitoring virtual networks is having visibility of inter-virtual machine (VM) traffic that is passed within a single virtual host. This thesis attempts to gain visibility into, and evaluate the performance of, inter-VM traffic in a virtual environment. Separate virtual networks are produced using the VMware ESXi and Citrix XenServer platforms. The networks are comprised of three virtual hosts containing a Domain Controller VM, a Dynamic Host Configuration Protocol server VM, two management VMs, and four testing VMs. The configuration of virtual hosts, VMs, and networking components is identical on each network for a consistent comparison. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic is generated to test each network using custom batch files, PowerShell scripts, and Python code. Results show standard virtual networks require additional resources (e.g., a local Intrusion Detection System) and more hands-on administration for real-time traffic visibility than a virtual network using a distributed switch. Traffic visibility within a standard network is limited to using a local packet capture program such as pktcap-uw, tcpdump, or windump. However, distributed networks offer advanced options, such as port mirroring and NetFlow, that deliver higher visibility but come at the cost of higher latency for both TCP and UDP inter-VM traffic
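    The kind of scripted traffic generation described above can be sketched in a few lines. This is a hypothetical stand-in, not the thesis's actual test scripts, and it runs against loopback for self-containment:

    ```python
    import socket

    def send_udp_burst(host, port, payload, count):
        """Send `count` UDP datagrams to (host, port); return total bytes sent."""
        sent = 0
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            for _ in range(count):
                sent += s.sendto(payload, (host, port))
        return sent

    # Loopback demo: bind a receiver on an OS-assigned free port, then burst to it.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))
    port = recv.getsockname()[1]
    total = send_udp_burst("127.0.0.1", port, b"x" * 64, 10)
    print(total)  # -> 640 (10 datagrams of 64 bytes)
    recv.close()
    ```

    In the test networks, such bursts would be aimed at co-resident VMs while pktcap-uw, tcpdump, or a mirrored port captures the inter-VM traffic for comparison.
    
    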

    A deep investigation into network performance in virtual machine based cloud environments

    No full text