Online VNF Scaling in Datacenters
Network Function Virtualization (NFV) is a promising technology that can
significantly reduce the operational costs of network services by deploying
virtualized network functions (VNFs) on commodity servers in place of dedicated
hardware middleboxes. VNFs typically run on virtual machine instances in a
cloud infrastructure, where virtualization enables dynamic provisioning of VNF
instances to process the fluctuating traffic that must pass through the network
functions in a network service. In
this paper, we target dynamic provisioning of enterprise network services -
expressed as one or multiple service chains - in cloud datacenters, and design
efficient online algorithms without requiring any information on future traffic
rates. The key is to decide the number of instances of each VNF type to
provision at each time, taking into consideration the server resource
capacities and traffic rates between adjacent VNFs in a service chain. In the
case of a single service chain, we discover an elegant structure of the problem
and design an efficient randomized algorithm that achieves an e/(e-1)
competitive ratio. For multiple concurrent service chains, we propose an online
heuristic algorithm that is O(1)-competitive. We demonstrate the effectiveness
of our algorithms through rigorous theoretical analysis and trace-driven simulations.
Comment: 9 pages, 4 figures
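The core decision the abstract describes can be illustrated with a toy calculation: given the traffic rate entering each VNF in a chain and a fixed per-instance processing capacity, how many instances of each type must be provisioned. This is a minimal sketch under that assumption; the function name, chain composition, and numbers are hypothetical, not the paper's algorithm.

```python
import math

def provision_instances(traffic_rates, capacities):
    """Return the number of instances of each VNF type needed so that
    aggregate per-instance capacity covers the traffic entering it."""
    return {vnf: math.ceil(rate / capacities[vnf])
            for vnf, rate in traffic_rates.items()}

# Hypothetical firewall -> IDS -> proxy chain; units are traffic per slot.
rates = {"firewall": 950, "ids": 950, "proxy": 950}
caps = {"firewall": 400, "ids": 250, "proxy": 500}
print(provision_instances(rates, caps))
```

The online problem in the paper is harder than this snapshot view, because instance counts must be adjusted over time without knowledge of future traffic rates.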
InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different
geographical locations over the Internet to optimally serve the needs of
their customers around the world. However, existing systems do not support
mechanisms and policies for dynamically coordinating load distribution among
different Cloud-based data centers in order to determine optimal location for
hosting application services to achieve reasonable QoS levels. Further, the
Cloud computing providers are unable to predict the geographic distribution of
users consuming their services; hence, load coordination must happen
automatically, and distribution of services must change in response to changes
in the load. To counter this problem, we advocate the creation of a federated
Cloud computing environment (InterCloud) that facilitates just-in-time,
opportunistic, and scalable provisioning of application services, consistently
achieving QoS targets under variable workload, resource and network conditions.
The overall goal is to create a computing environment that supports dynamic
expansion or contraction of capabilities (VMs, services, storage, and database)
for handling sudden variations in service demands.
This paper presents vision, challenges, and architectural elements of
InterCloud for utility-oriented federation of Cloud computing environments. The
proposed InterCloud environment supports scaling of applications across
multiple vendor clouds. We have validated our approach by conducting a set of
rigorous performance evaluation studies using the CloudSim toolkit. The results
demonstrate that the federated Cloud computing model has immense potential, as
it offers significant performance gains in terms of response time and cost
savings under dynamic workload scenarios.
Comment: 20 pages, 4 figures, 3 tables, conference paper
Performance-oriented Cloud Provisioning: Taxonomy and Survey
Cloud computing is being viewed as the technology of today and the future.
Through this paradigm, the customers gain access to shared computing resources
located in remote data centers that are hosted by cloud providers (CP). This
technology allows for provisioning of various resources such as virtual
machines (VM), physical machines, processors, memory, network, storage and
software as per the needs of customers. Application providers (AP), who are
customers of the CP, deploy applications on the cloud infrastructure and then
these applications are used by the end-users. To meet the fluctuating
application workload demands, dynamic provisioning is essential and this
article provides a detailed literature survey of dynamic provisioning within
cloud systems with focus on application performance. The well-known types of
provisioning and the associated problems are clearly and pictorially explained
and the provisioning terminology is clarified. A very detailed and general
cloud provisioning classification is presented, which views provisioning from
different perspectives, aiding in understanding the process inside-out. Cloud
dynamic provisioning is explained by considering resources, stakeholders,
techniques, technologies, algorithms, problems, goals and more.
Comment: 14 pages, 3 figures, 3 tables
Towards a Swiss National Research Infrastructure
In this position paper we describe the current status and plans for a Swiss
National Research Infrastructure. Swiss academic and research institutions are
very autonomous. While being loosely coupled, they do not rely on any
centralized management entities. Therefore, a coordinated national research
infrastructure can only be established by federating the various resources
available locally at the individual institutions. The Swiss Multi-Science
Computing Grid and the Swiss Academic Compute Cloud projects already serve a
large number of diverse user communities. These projects also allow us to test
the operational setup of such a heterogeneous federated infrastructure.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Historically, high energy physics computing has been performed on large
purpose-built computing systems. These began as single-site compute facilities,
but have evolved into the distributed computing grids used today. Recently,
there has been an exponential increase in the capacity and capability of
commercial clouds. Cloud resources are highly virtualized and intended to be
flexibly deployed for a variety of computing tasks. There is growing interest
among cloud providers in demonstrating the capability to perform large-scale
scientific computing. In this paper, we discuss results
from the CMS experiment using the Fermilab HEPCloud facility, which utilized
both local Fermilab resources and virtual machines in the Amazon Web Services
Elastic Compute Cloud. We discuss the planning, technical challenges, and
lessons learned involved in performing physics workflows on a large-scale set
of virtualized resources. In addition, we will discuss the economics and
operational efficiencies when executing workflows both in the cloud and on
dedicated resources.
Comment: 15 pages, 9 figures
Application-centric Resource Provisioning for Amazon EC2 Spot Instances
In late 2009, Amazon introduced spot instances to offer their unused
resources at lower cost with reduced reliability. Amazon's spot instances allow
customers to bid on unused Amazon EC2 capacity and run those instances for as
long as their bid exceeds the current spot price. The spot price changes
periodically based on supply and demand, and customers whose bids exceed it
gain access to the available spot instances. Customers can expect lower costs
with spot instances compared to on-demand or reserved instances. However,
reliability is compromised, since the instances (IaaS) providing the service
(SaaS) may become unavailable at any time without notice to the customer.
Checkpointing and migration schemes are of great use in coping with such
situations. In this paper, we study various checkpointing schemes that can be
used with spot instances. We also devise algorithms for a checkpointing scheme
on top of an application-centric resource provisioning framework that increases
reliability while significantly reducing cost.
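The reliability/cost trade-off the abstract describes can be sketched with a toy hourly simulation: a job runs while its bid exceeds the spot price, loses work since the last checkpoint on each out-of-bid revocation, and pays the spot price per hour. This is an illustrative model only, not one of the paper's checkpointing schemes; the price series and parameters are hypothetical.

```python
def run_with_checkpoints(spot_prices, bid, checkpoint_every):
    """Simulate one job on a spot instance with periodic checkpointing.
    Work since the last checkpoint is lost whenever the hourly spot
    price exceeds the bid. Returns (hours_of_work_retained, total_cost)."""
    done = saved = 0          # completed hours / checkpointed hours
    cost = 0.0
    for hour, price in enumerate(spot_prices):
        if price > bid:       # out-of-bid: instance revoked
            done = saved      # roll back to the last checkpoint
            continue
        cost += price         # pay the spot price for this hour
        done += 1
        if (hour + 1) % checkpoint_every == 0:
            saved = done      # take a checkpoint
    return done, cost

prices = [0.10, 0.10, 0.50, 0.10]  # hypothetical hourly spot prices
print(run_with_checkpoints(prices, bid=0.20, checkpoint_every=2))
print(run_with_checkpoints(prices, bid=0.20, checkpoint_every=100))
```

With frequent checkpoints the job retains more completed work at the same cost; the paper's contribution is choosing such schemes adaptively within a provisioning framework.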
Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges
Cloud computing is offering utility-oriented IT services to users worldwide.
Based on a pay-as-you-go model, it enables hosting of pervasive applications
from consumer, scientific, and business domains. However, data centers hosting
Cloud applications consume huge amounts of energy, contributing to high
operational costs and large carbon footprints. Therefore, we need
Green Cloud computing solutions that can not only save energy for the
environment but also reduce operational costs. This paper presents vision,
challenges, and architectural elements for energy-efficient management of Cloud
computing environments. We focus on the development of dynamic resource
provisioning and allocation algorithms that consider the synergy between
various data center infrastructures (i.e., the hardware, power units, cooling
and software), and holistically work to boost data center energy efficiency and
performance. In particular, this paper proposes (a) architectural principles
for energy-efficient management of Clouds; (b) energy-efficient resource
allocation policies and scheduling algorithms considering quality-of-service
expectations, and devices power usage characteristics; and (c) a novel software
technology for energy-efficient management of Clouds. We have validated our
approach by conducting a set of rigorous performance evaluation studies using
the CloudSim toolkit. The results demonstrate that the Cloud computing model
has immense potential, as it offers significant performance gains in terms of
response time and cost savings under dynamic workload scenarios.
Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference
on Parallel and Distributed Processing Techniques and Applications (PDPTA
2010), Las Vegas, USA, July 12-15, 2010
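One building block of energy-efficient allocation of the kind this abstract describes is consolidation: packing VMs onto as few hosts as possible so idle hosts can enter low-power states. The sketch below uses a first-fit-decreasing heuristic as an illustration; it is a generic bin-packing approach, not the paper's specific policies, and all numbers are hypothetical.

```python
def consolidate(vm_demands, host_capacity):
    """First-fit-decreasing placement: pack VMs onto as few hosts as
    possible so that idle hosts can be switched off or put to sleep.
    Demands and capacity share one unit (e.g., % of one host's CPU)."""
    free = []  # remaining capacity of each powered-on host
    for demand in sorted(vm_demands, reverse=True):
        for i, slack in enumerate(free):
            if slack >= demand:
                free[i] -= demand  # place VM on an existing host
                break
        else:
            free.append(host_capacity - demand)  # power on a new host
    return len(free)

# Seven hypothetical VMs on hosts with 100 units of capacity each.
print(consolidate([50, 40, 30, 30, 20, 20, 10], 100))
```

Real policies must also respect quality-of-service expectations and power-usage characteristics, as the abstract notes, so consolidation is bounded by performance constraints rather than applied greedily.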
On the Economics of Cloud Markets
Cloud computing is a paradigm that has the potential to transform and
revolutionize the next-generation IT industry by making software available to
end-users as a service. A cloud, also commonly known as a cloud network,
typically comprises hardware (a network of servers) and a collection of
software that is made available to end-users in a pay-as-you-go manner.
Multiple public cloud providers (e.g., Amazon) coexisting in a cloud computing
market provide similar services (software as a service) to their clients, both
in terms of the nature of an application and in quality of service (QoS)
provision. The decision of whether a cloud hosts (or finds it profitable to
host) a service in the long-term would depend jointly on the price it sets, the
QoS guarantees it provides to its customers, and the satisfaction of the
advertised guarantees. In this paper, we devise and analyze three
inter-organizational economic models relevant to cloud networks. We formulate
our problems as non-cooperative price and QoS games between multiple cloud
providers existing in a cloud market. We prove that a unique pure strategy Nash
equilibrium (NE) exists in two of the three models. Our analysis paves the path
for each cloud provider to 1) know what prices and QoS level to set for
end-users of a given service type, such that the provider could exist in the
cloud market, and 2) practically and dynamically provision appropriate capacity
for satisfying advertised QoS guarantees.Comment: 7 pages, 2 figure