5 research outputs found

    Analysis Model in the Cloud Optimization Consumption in Pricing the Internet Bandwidth

    Internet pricing is frequently posed as an optimization problem. In this study, the internet pricing scheme focuses on optimizing bandwidth consumption. The research uses a modified cloud model to find the optimal solution for the network; cloud computing is a computational model in which resources such as networks, servers, storage and services are delivered over an internet connection. An Internet Service Provider (ISP) requires an appropriate pricing scheme in order to maximize revenue while delivering the Quality of Service (QoS) that satisfies its users. The model is solved with the LINGO software package to obtain an optimal solution and accurate results. The ISP can then use the optimal solution obtained from the modified cloud model to maximize revenue and to provide services that match user needs and requests.
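
    The abstract does not give the concrete formulation, so the following is only a minimal sketch of the kind of revenue-maximization problem it describes: a hypothetical ISP chooses a bandwidth price to maximize revenue subject to link capacity and a QoS floor. The demand function, capacity, subscriber count and QoS threshold are all assumptions, and SciPy stands in for the LINGO solver used in the paper.

        # Illustrative sketch only: a simplified ISP revenue-maximization model,
        # NOT the paper's cloud-model formulation. All constants are hypothetical.
        from scipy.optimize import minimize

        CAPACITY_MBPS = 1000.0   # hypothetical total link capacity
        QOS_MIN_RATE = 2.0       # hypothetical minimum per-user rate (Mbps) for QoS
        N_USERS = 300            # hypothetical subscriber count

        def demand(price):
            """Hypothetical linear demand: per-user usage falls as price rises."""
            return max(0.0, 10.0 - 1.5 * price)   # Mbps requested per user

        def neg_revenue(x):
            price = x[0]
            return -(price * demand(price) * N_USERS)   # negative revenue (minimized)

        # Constraints: aggregate usage must fit the link, and each user keeps the QoS rate.
        cons = [
            {"type": "ineq", "fun": lambda x: CAPACITY_MBPS - demand(x[0]) * N_USERS},
            {"type": "ineq", "fun": lambda x: demand(x[0]) - QOS_MIN_RATE},
        ]

        res = minimize(neg_revenue, x0=[1.0], bounds=[(0.0, 10.0)], constraints=cons)
        print(f"optimal price ~ {res.x[0]:.2f}, revenue ~ {-res.fun:.0f}")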

    Developing resource consolidation frameworks for moldable virtual machines in clouds

    This paper considers the scenario where multiple clusters of Virtual Machines (termed Virtual Clusters) are hosted in a Cloud system consisting of a cluster of physical nodes. Multiple Virtual Clusters (VCs) cohabit in the physical cluster, with each VC offering a particular type of service for the incoming requests. In this context, VM consolidation, which strives to use a minimal number of nodes to accommodate all VMs in the system, plays an important role in reducing resource consumption. Most existing consolidation methods proposed in the literature regard VMs as “rigid” during consolidation, i.e., VMs’ resource capacities remain unchanged. In VC environments, QoS is usually delivered by a VC as a single entity. Therefore, there is no reason why a VM’s resource capacity cannot be adjusted as long as the whole VC is still able to maintain the desired QoS. Treating VMs as “moldable” during consolidation may make it possible to consolidate VMs onto even fewer nodes. This paper investigates this issue and develops a Genetic Algorithm (GA) to consolidate moldable VMs. The GA evolves an optimized system state, which represents the VM-to-node mapping and the resource capacity allocated to each VM. After the new system state is calculated by the GA, the Cloud transitions from the current system state to the new one. The transition time represents overhead and should be minimized. In this paper, a cost model is formalized to capture the transition overhead, and a reconfiguration algorithm is developed to move the Cloud to the optimized system state with low transition overhead. Experiments have been conducted to evaluate the performance of the GA and the reconfiguration algorithm.
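
    The abstract does not spell out the chromosome encoding, operators or cost model, so the sketch below only illustrates GA-based consolidation of "moldable" VMs under a simplified, CPU-only resource model; the node count, capacities, per-VM capacity ranges and the fitness function are all assumptions.

        # Minimal, illustrative GA for consolidating moldable VMs (assumptions only).
        import random

        N_NODES = 10
        NODE_CAPACITY = 100                  # hypothetical CPU units per node
        # Each VM has a (min, max) capacity range: "moldable" means the GA may shrink
        # a VM anywhere in this range, modelling the QoS floor as the per-VM minimum.
        VMS = [(20, 40), (10, 30), (30, 60), (15, 25), (25, 50), (10, 20)]

        def random_individual():
            # One gene per VM: (node index, allocated capacity within the moldable range).
            return [(random.randrange(N_NODES), random.randint(lo, hi)) for lo, hi in VMS]

        def fitness(ind):
            # Primary objective: few active nodes; heavy penalty for overloaded nodes.
            load = [0] * N_NODES
            for node, cap in ind:
                load[node] += cap
            active = sum(1 for l in load if l > 0)
            overload = sum(max(0, l - NODE_CAPACITY) for l in load)
            return active + 100 * overload   # lower is better

        def mutate(ind):
            child = list(ind)
            i = random.randrange(len(child))
            lo, hi = VMS[i]
            child[i] = (random.randrange(N_NODES), random.randint(lo, hi))
            return child

        pop = [random_individual() for _ in range(50)]
        for _ in range(200):                 # simple (mu + lambda)-style evolution loop
            pop.sort(key=fitness)
            pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]

        best = min(pop, key=fitness)
        print("nodes used:", len({node for node, _ in best}), "fitness:", fitness(best))

    A full treatment would add crossover and, as the paper describes, weigh the reconfiguration (state-transition) cost of moving from the current mapping to the evolved one; this sketch optimizes the target state only.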

    Research Opportunities in an Intercloud Environment Using MOSt in SLA4CLOUD Project

    Today, Internet services are becoming essential for different types of users. This evolution affects how data connections, network routes and resources are configured and used, and distributed applications and services are becoming more difficult to manage. Cloud computing allows interactions between cloud providers and cloud service providers, and cloud providers can offer deployment services in datacenters located in different regions of the world. Much development effort is needed to deploy scalable solutions. One of these challenges is how to design, develop and deploy cloud solutions that meet the policy and security requirements of multiple environments. The SLA4CLOUD project intends to build an environment where a user can request the deployment of their services anywhere in the underlying infrastructure, using the MOSt platform and its services. This work reports some opportunities and research challenges arising from the SLA4CLOUD project in the context of the MOSt platform, and promotes new projects and partnerships.

    Modeling virtualized application performance from hypervisor counters

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 61-64). Managing a virtualized datacenter has grown more challenging, as each virtual machine's service level agreement (SLA) must be satisfied even though the service levels are generally inaccessible to the hypervisor. To aid in VM consolidation and service level assurance, we develop a modeling technique that generates accurate models of service level. Using only hypervisor counters as inputs, we train models to predict application response times and SLA violations. To collect training data, we conduct a simulation phase which stresses the application across many workload levels and records each response time; hypervisor performance counters are collected simultaneously. Afterwards, the data is synchronized and used as training data in ensemble-based genetic programming for symbolic regression. This modeling technique is efficient at dealing with high-dimensional datasets, and it also generates interpretable models. After training models for web servers and virtual desktops, we test generalization across different content. In our experiments, we found that our technique could distill small subsets of important hypervisor counters from over 700 counters. This was tested for both Apache web servers and Windows-based virtual desktop infrastructures. For the web servers, we accurately modeled the breakdown points as well as the service levels. Our models could predict service levels with 90.5% accuracy on a test set. On an untrained scenario with completely different contending content, our models predict service levels with 70% accuracy, but predict SLA violations with 92.7% accuracy. For the virtual desktops, on test scenarios similar to the training scenarios, model accuracy was 97.6%. Our main contribution is demonstrating that a completely data-driven approach to application performance modeling can be successful. In contrast to many other works, our models do not use workload level or response times as inputs, yet they still predict service level accurately. Our approach also lets the models determine which inputs are important to a particular model's performance, rather than hand-choosing a few inputs to train on. by Lawrence L. Chan. M.Eng.
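
    The thesis's own ensemble-based genetic-programming system is not reproduced here; the sketch below only illustrates the general idea of fitting a symbolic-regression model that maps hypervisor counters to application response time, using synthetic data and the third-party gplearn library as a stand-in. The counter features, response-time function and SLA threshold are assumptions.

        # Illustrative symbolic-regression sketch (synthetic data, gplearn stand-in).
        import numpy as np
        from gplearn.genetic import SymbolicRegressor

        rng = np.random.default_rng(0)

        # Hypothetical training set: rows are sampled intervals, columns are
        # hypervisor counters (e.g. CPU ready time, disk queue depth, ...).
        X = rng.uniform(0, 1, size=(500, 5))
        # Hypothetical "true" response time (ms): nonlinear in two counters plus noise.
        y = 50 + 200 * X[:, 0] ** 2 + 80 * X[:, 1] + rng.normal(0, 5, size=500)

        model = SymbolicRegressor(population_size=1000, generations=20,
                                  function_set=("add", "sub", "mul", "div"),
                                  parsimony_coefficient=0.01, random_state=0)
        model.fit(X, y)

        # The evolved expression is human-readable, which is one motivation for
        # choosing symbolic regression over opaque models.
        print(model._program)

        # An SLA check then becomes a threshold on the predicted response time.
        SLA_MS = 200  # hypothetical SLA threshold
        pred = model.predict(X[:10])
        print("predicted SLA violations:", int((pred > SLA_MS).sum()), "of 10")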