238 research outputs found

    An Experimental Study of Load Balancing of OpenNebula Open-Source Cloud Computing Platform

    Cloud computing is becoming a viable solution for service-oriented computing, and several open-source cloud platforms are available to support it. Open-source software stacks offer extensive customizability without large licensing fees. As a result, open-source software is widely used for designing clouds, and private clouds are increasingly built the open-source way. The open-source community has made numerous contributions related to private IaaS clouds, and OpenNebula is one of the most popular private cloud management platforms. However, the existing literature has done little to systematically evaluate the performance of this open-source cloud solution. Such an evaluation helps new and existing research, industry and international projects decide whether to adopt OpenNebula for their work. The objective of this paper is to evaluate the load-balancing performance of the OpenNebula cloud management software. For the performance evaluation, OpenNebula is installed and configured as a prototype implementation and tested in the DIU Cloud Lab. Two sets of experiments are conducted to characterize the load-balancing performance of the OpenNebula platform: (1) deleting and adding virtual machines (VMs) in the OpenNebula cloud; and (2) mapping physical hosts to VMs in the OpenNebula cloud. Comment: 6 pages
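    The paper's own tooling is not reproduced here; the following is a minimal sketch of how the two experiments could be driven against an OpenNebula front-end using its standard onevm/onehost CLI from Python. The template file, host ID and output parsing are illustrative assumptions, not details from the paper.

        # Sketch: drive the two load-balancing experiments via the OpenNebula CLI.
        # "ubuntu-vm.tpl" and host ID "0" are placeholders.
        import subprocess
        import time

        def one(*args):
            """Run an OpenNebula CLI command on the front-end and return its stdout."""
            return subprocess.run(args, check=True, capture_output=True, text=True).stdout

        # Experiment 1: add and then delete a VM, timing each operation.
        start = time.time()
        out = one("onevm", "create", "ubuntu-vm.tpl")        # prints e.g. "ID: 42"
        vm_id = out.strip().split()[-1]
        print(f"created VM {vm_id} in {time.time() - start:.2f}s")

        start = time.time()
        one("onevm", "terminate", "--hard", vm_id)
        print(f"deleted VM {vm_id} in {time.time() - start:.2f}s")

        # Experiment 2: map a VM onto a specific physical host, then inspect the mapping.
        out = one("onevm", "create", "--hold", "ubuntu-vm.tpl")
        vm_id = out.strip().split()[-1]
        one("onevm", "deploy", vm_id, "0")                    # placeholder host ID
        print(one("onehost", "list"))                         # RVMS column shows VMs per host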

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with varying specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed with load-balancing strategies; the underlying placement problem has been proved to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load-balancing algorithms for VM placement in cloud data centers is proposed, and the surveyed algorithms are categorized accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements. Comment: 22 pages, 4 figures, 4 tables, in press
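    To make the flavor of these placement heuristics concrete, the sketch below shows a greedy "worst fit" strategy that places each VM on the physical machine with the most remaining capacity; the data structures and scoring rule are illustrative and do not come from any specific surveyed algorithm.

        # Sketch of a greedy worst-fit load-balancing heuristic for VM placement.
        from dataclasses import dataclass

        @dataclass
        class PM:
            name: str
            cpu_free: float   # free CPU cores
            mem_free: float   # free memory (GB)

        @dataclass
        class VM:
            name: str
            cpu: float
            mem: float

        def place(vms, pms):
            """Assign each VM to the feasible PM with the largest residual capacity."""
            plan = {}
            for vm in vms:
                feasible = [p for p in pms if p.cpu_free >= vm.cpu and p.mem_free >= vm.mem]
                if not feasible:
                    raise RuntimeError(f"no PM can host {vm.name}")
                best = max(feasible, key=lambda p: min(p.cpu_free - vm.cpu, p.mem_free - vm.mem))
                best.cpu_free -= vm.cpu
                best.mem_free -= vm.mem
                plan[vm.name] = best.name
            return plan

        print(place([VM("web", 2, 4), VM("db", 4, 8)],
                    [PM("pm1", 8, 16), PM("pm2", 16, 32)]))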

    Technical Report on Deploying a highly secured OpenStack Cloud Infrastructure using BradStack as a Case Study

    Cloud computing has emerged as a popular paradigm and an attractive model for providing a reliable distributed computing model. It is attracting increasing attention in both academic research and industrial initiatives. Cloud deployments are paramount for institutions and organizations of all scales. The availability of a flexible, free, open-source cloud platform built with no proprietary software, and its ability to integrate with legacy systems and third-party applications, are fundamental. OpenStack is free and open-source software released under the terms of the Apache license, with a modular and distributed architecture that makes it highly flexible. This project aimed to design a secured cloud infrastructure called BradStack, built on OpenStack in the Computing Laboratory at the University of Bradford. In this report, we present and discuss the steps required to deploy a secured BradStack multi-node cloud infrastructure and to conduct penetration testing on OpenStack services to validate the effectiveness of the security controls on the BradStack platform. This report serves as a practical guideline, focusing on security and practical infrastructure-related issues. It also serves as a reference for institutions considering the implementation of a secured cloud solution. Comment: 38 pages, 19 figures
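    The report's full set of hardening steps is not reproduced here; as a small illustration of one typical control on an OpenStack cloud, the sketch below restricts SSH ingress to a management subnet with a security group via the openstacksdk library. The cloud name "bradstack" and the CIDR are placeholder assumptions.

        # Illustrative hardening step: allow SSH only from a management subnet.
        import openstack

        conn = openstack.connect(cloud="bradstack")   # credentials resolved from clouds.yaml

        sg = conn.network.create_security_group(
            name="mgmt-ssh-only",
            description="Allow SSH only from the management network")

        conn.network.create_security_group_rule(
            security_group_id=sg.id,
            direction="ingress",
            protocol="tcp",
            port_range_min=22,
            port_range_max=22,
            remote_ip_prefix="10.0.0.0/24")           # placeholder management subnet

        print(f"created security group {sg.name} ({sg.id})")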

    Open Source Solutions for Building IaaS Clouds

    Cloud computing is not only a pool of resources and services offered through the Internet, but also a technology solution that allows optimization of resource use, cost minimization and reduced energy consumption. Enterprises moving towards cloud technologies have to choose between public cloud services, such as Amazon Web Services, Microsoft Cloud and Google Cloud services, or private self-built clouds. While the former are offered at affordable fees, the latter provide more privacy and control. In this context, many open-source software solutions support the building of private, public or hybrid clouds, depending on users’ needs and on the available capabilities. To choose among the different open-source solutions, an analysis is necessary in order to select the most suitable one according to the enterprise’s goals and requirements. In this paper, we present an in-depth study and comparison of five open-source frameworks that have been gaining attention and growing fast: CloudStack, OpenStack, Eucalyptus, OpenNebula and Nimbus. We present their architectures and discuss their properties, features, useful information and our own insights on these frameworks.

    OneCloud: A Study of Dynamic Networking in an OpenFlow Cloud

    Cloud computing is a popular paradigm for accessing computing resources. It provides elastic, on-demand and pay-per-use models that help reduce costs and maintain a flexible infrastructure. Infrastructure as a Service (IaaS) clouds are becoming increasingly popular because users do not have to purchase the hardware for a private cloud, which significantly reduces costs. However, IaaS presents networking challenges to cloud providers because cloud users want the ability to customize the cloud to match their business needs. This requires providers to offer dynamic networking capabilities, such as dynamic IP addressing. Providers must expose a method by which users can reconfigure the networking infrastructure for their private cloud without disrupting the private clouds of other users. Such capabilities have often been provided in the form of virtualized network overlay topologies. In our work, we present a virtualized networking solution for the cloud using the OpenFlow protocol. OpenFlow is a software-defined networking approach for centralized control of a network's data flows. In an OpenFlow network, packets not matching a flow entry are sent to a centralized controller that makes forwarding decisions. The controller then installs flow entries on the network switches, which in turn process further network traffic at line rate. Since the OpenFlow controller can manage traffic on all of the switches in a network, it is ideal for enabling the dynamic networking needs of cloud users. This work analyzes the potential of OpenFlow to enable dynamic networking in cloud computing and presents reference implementations of Amazon EC2's Elastic IP Addresses and Security Groups using the NOX OpenFlow controller and the OpenNebula cloud provisioning engine.
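    The OneCloud implementations themselves are built on NOX; purely to illustrate the packet-in/flow-install pattern described above, the sketch below uses POX, NOX's Python successor, to flood unknown traffic and push a matching flow entry so that subsequent packets are handled by the switch at line rate. It is not the paper's Elastic IP or Security Group logic.

        # POX sketch of reactive flow installation (not the paper's NOX controller).
        from pox.core import core
        import pox.openflow.libopenflow_01 as of

        class FlowInstaller(object):
            def __init__(self, connection):
                self.connection = connection
                connection.addListeners(self)

            def _handle_PacketIn(self, event):
                packet = event.parsed
                msg = of.ofp_flow_mod()                       # flow entry to push to the switch
                msg.match = of.ofp_match.from_packet(packet, event.port)
                msg.idle_timeout = 10
                msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
                self.connection.send(msg)                     # later packets match at line rate

        def launch():
            core.openflow.addListenerByName(
                "ConnectionUp", lambda event: FlowInstaller(event.connection))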

    Service Level Agreement Driven Adaptive Resource Management For Web Applications on Heterogeneous Compute Clouds

    Cloud computing is an emerging topic in the field of parallel and distributed computing. Many IT giants such as IBM, Sun, Amazon, Google, and Microsoft are promoting and offering various storage and compute clouds. Clouds provide services such as high performance computing, storage, and application hosting. Cloud providers are expected to ensure Quality of Service (QoS) through a Service Level Agreement (SLA) between the provider and the consumer. In this research, I develop a heterogeneous testbed compute cloud and investigate adaptive management of resources for Web applications to satisfy an SLA that enforces specific response time requirements. I develop a system on top of the EUCALYPTUS framework that actively monitors the response time of the compute resources assigned to a Web application and dynamically allocates the resources required by the application to satisfy the specific response time requirements.
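    The dissertation's controller sits on top of EUCALYPTUS and is considerably more involved; the sketch below only illustrates the basic reactive loop described above: sample a Web application's response time and request an additional VM when the SLA target is violated. The probe URL, SLA threshold, monitoring interval and provisioning hook are placeholder assumptions.

        # Sketch of an SLA-driven monitoring loop with a placeholder provisioning hook.
        import time
        import urllib.request

        SLA_TARGET_S = 0.5                      # placeholder response-time target from the SLA
        PROBE_URL = "http://app.example.org/health"

        def measure_response_time(url):
            start = time.time()
            urllib.request.urlopen(url, timeout=5).read()
            return time.time() - start

        def provision_extra_vm():
            # Placeholder: a real controller would call the cloud API
            # (e.g. a EUCALYPTUS/EC2 RunInstances request) to add a worker VM.
            print("SLA violated: requesting one additional VM")

        while True:
            rt = measure_response_time(PROBE_URL)
            print(f"measured response time: {rt:.3f}s")
            if rt > SLA_TARGET_S:
                provision_extra_vm()
            time.sleep(30)                      # placeholder monitoring interval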

    Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems

    Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting the networks. At the same time, the number of security threats on Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions requires the availability of separate test environments that best emulate the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under real and controllable traffic and attack scenarios. In this paper, we propose a methodology that combines network and security assessment techniques with cloud technologies to build an emulation environment whose degree of affinity with an actual reference network or planned system can be adjusted. As a byproduct, starting from a specific case study, we collected a feature-rich, publicly available dataset consisting of complete network traces comprising benign and malicious traffic.

    Dynamic Scaling for Service Oriented Applications: Implications of Virtual Machine Placement on IaaS Clouds

    Abstraction of physical hardware using infrastructure-as-a-service (IaaS) clouds leads to the simplistic view that resources are homogeneous and that infinite scaling is possible with linear increases in performance. Support for autonomic scaling of multi-tier service-oriented applications requires determining when, what, and where to scale. 'When' is addressed by hotspot detection schemes using techniques including performance modeling and time series analysis. 'What' relates to determining the quantity and size of new resources to provision. 'Where' involves identifying the best location(s) to provision new resources. In this paper we investigate primarily 'where' new infrastructure should be provisioned, and secondly 'what' the infrastructure should be. Dynamic scaling of infrastructure for service-oriented applications requires rapid response to changes in demand to meet application quality-of-service requirements. We investigate the performance and resource cost implications of VM placement when dynamically scaling the server infrastructure of service-oriented applications. We evaluate dynamic scaling in the context of providing modeling-as-a-service for two environmental science models.
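    As a toy illustration of the 'what' and 'where' decisions discussed above (not the paper's method), the sketch below picks the smallest VM size that covers a predicted capacity shortfall and then the feasible host with the most residual capacity, preferring hosts that do not already run the same application tier. Sizes, host capacities and the demand estimate are placeholder assumptions.

        # Sketch of combined 'what' (VM size) and 'where' (host) scaling decisions.
        VM_SIZES = {"small": 1, "medium": 2, "large": 4}      # CPU cores per size

        hosts = {                                              # free cores and tiers already hosted
            "host-a": {"free_cores": 2, "tiers": {"web"}},
            "host-b": {"free_cores": 6, "tiers": {"db"}},
            "host-c": {"free_cores": 4, "tiers": set()},
        }

        def decide(extra_cores_needed, tier):
            # 'What': the smallest size covering the predicted shortfall.
            size = min((s for s, c in VM_SIZES.items() if c >= extra_cores_needed),
                       key=lambda s: VM_SIZES[s])
            cores = VM_SIZES[size]
            # 'Where': feasible host with the most residual capacity, preferring hosts
            # that do not already run this tier (spreads each tier across hosts).
            feasible = [(h, d) for h, d in hosts.items() if d["free_cores"] >= cores]
            host, _ = max(feasible,
                          key=lambda hd: (tier not in hd[1]["tiers"], hd[1]["free_cores"]))
            return size, host

        print(decide(extra_cores_needed=2, tier="web"))        # -> ('medium', 'host-b')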
    • …