Cost Minimization of Virtual Machine Allocation in Public Clouds Considering Multiple Applications
International Conference, GECON 2017 (14. 2017. Biarritz)
This paper presents a virtual machine (VM) allocation strategy to optimize the cost of VM deployments in public clouds. It can simultaneously deal with multiple applications and is formulated as an optimization problem that takes the level of performance to be reached by a set of applications as input. It considers real characteristics of infrastructure providers such as VM types, limits on the number of VMs that can be deployed, and pricing schemes. As output, it generates a VM allocation that supports the performance requirements of all the applications. The strategy combines short-term and long-term allocation phases in order to take advantage of VMs belonging to two different pricing categories: on-demand and reserved. A quantization technique is introduced to reduce the size of the allocation problem and, thus, significantly decrease the computational complexity. The experiments show that the strategy can optimize costs for problems that could not be solved with previous approaches.
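The two-phase idea described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the VM catalogue, capacities, and prices are hypothetical, and the paper's quantization step is omitted. The sketch covers a stable baseline load with cheaper reserved VMs and the residual peak with on-demand VMs.

```python
from math import ceil

# Hypothetical VM catalogue: (name, capacity in requests/s,
# on-demand $/h, reserved $/h); real provider catalogues are larger.
VM_TYPES = [
    ("small", 100, 0.10, 0.06),
    ("large", 450, 0.40, 0.25),
]

def cheapest_allocation(load, pricing):
    """Cheapest single-type cover of `load`; returns (name, count, $/h)."""
    best = None
    for name, cap, on_demand, reserved in VM_TYPES:
        price = reserved if pricing == "reserved" else on_demand
        count = ceil(load / cap)
        cost = count * price
        if best is None or cost < best[2]:
            best = (name, count, cost)
    return best

def allocate(baseline_load, peak_load):
    """Long-term phase: reserved VMs for the stable baseline.
    Short-term phase: on-demand VMs for the residual peak."""
    long_term = cheapest_allocation(baseline_load, "reserved")
    residual = max(peak_load - baseline_load, 0)
    short_term = cheapest_allocation(residual, "on-demand") if residual else None
    return long_term, short_term
```

Under this toy catalogue, a baseline of 900 req/s with peaks of 1,200 req/s yields two reserved large VMs plus three on-demand small VMs.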
InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have setup several data centers at different
geographical locations over the Internet in order to optimally serve needs of
their customers around the world. However, existing systems do not support
mechanisms and policies for dynamically coordinating load distribution among
different Cloud-based data centers in order to determine optimal location for
hosting application services to achieve reasonable QoS levels. Further,
Cloud computing providers are unable to predict the geographic distribution
of users consuming their services, hence load coordination must happen
automatically, and distribution of services must change in response to changes
in the load. To counter this problem, we advocate the creation of a federated
Cloud computing environment (InterCloud) that facilitates just-in-time,
opportunistic, and scalable provisioning of application services, consistently
achieving QoS targets under variable workload, resource and network conditions.
The overall goal is to create a computing environment that supports dynamic
expansion or contraction of capabilities (VMs, services, storage, and database)
for handling sudden variations in service demands.
This paper presents vision, challenges, and architectural elements of
InterCloud for utility-oriented federation of Cloud computing environments. The
proposed InterCloud environment supports scaling of applications across
multiple vendor clouds. We have validated our approach by conducting a set of
rigorous performance evaluation studies using the CloudSim toolkit. The
results demonstrate that the federated Cloud computing model has immense
potential, as it offers significant performance gains in response time and
cost savings under dynamic workload scenarios.
Comment: 20 pages, 4 figures, 3 tables, conference paper
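The just-in-time, QoS-driven provisioning this abstract envisions can be illustrated with a toy placement routine. The datacenter names, free slot counts, and RTT figures below are hypothetical and are not part of the InterCloud architecture itself; the sketch simply prefers the lowest-latency site that meets the QoS target and spills over when capacity runs out.

```python
# Hypothetical federated datacenters:
# (name, free VM slots, network RTT in ms to the user region)
DATACENTERS = [
    ("us-east", 4, 120),
    ("eu-west", 10, 35),
    ("ap-south", 2, 210),
]

def place(vms_needed, rtt_slo_ms):
    """Greedy just-in-time placement: prefer the lowest-RTT datacenter
    that meets the QoS target, spilling over to the next when a site
    runs out of capacity."""
    plan, remaining = [], vms_needed
    for name, slots, rtt in sorted(DATACENTERS, key=lambda d: d[2]):
        if rtt > rtt_slo_ms or remaining == 0:
            continue
        take = min(slots, remaining)
        plan.append((name, take))
        remaining -= take
    if remaining:
        raise RuntimeError("federation cannot meet the QoS target")
    return plan
```

With a 150 ms RTT target, a request for 12 VMs fills eu-west first and overflows the remainder to us-east; ap-south is excluded by the QoS constraint.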
DYVERSE: DYnamic VERtical Scaling in Multi-tenant Edge Environments
Multi-tenancy in resource-constrained environments is a key challenge in Edge
computing. In this paper, we develop DYVERSE (DYnamic VERtical Scaling in
Edge environments), the first light-weight and dynamic vertical scaling
mechanism for managing resources allocated to applications in order to
facilitate multi-tenancy in Edge environments. To enable dynamic vertical
scaling, one static and three dynamic priority management approaches, which
are workload-aware, community-aware, and system-aware, respectively, are
proposed.
This research advocates that dynamic vertical scaling and priority management
approaches reduce Service Level Objective (SLO) violation rates. An online-game
and a face detection workload in a Cloud-Edge test-bed are used to validate the
research. A merit of DYVERSE is that it incurs only a sub-second overhead per
Edge server when 32 Edge servers are deployed on a single Edge node. When
compared to executing applications on the Edge servers without dynamic vertical
scaling, static priorities and dynamic priorities reduce SLO violation rates of
requests by up to 4% and 12% for the online game, respectively, and in both
cases 6% for the face detection workload. Moreover, for both workloads, the
system-aware dynamic vertical scaling method effectively reduces the latency of
non-violated requests when compared to other methods.
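The priority-driven reallocation underlying such a mechanism can be sketched as a proportional-share rule. The tenant names and priority weights below are hypothetical, and DYVERSE's actual workload-, community-, and system-aware priority computations are not reproduced here; the sketch only shows how priorities could translate into vertical resource allocations.

```python
def vertical_scale(total_cpu, tenants):
    """Reallocate a node's CPU among co-located tenant workloads in
    proportion to their current priority weight. The weights could be
    static or periodically recomputed from workload, community, or
    system metrics; `tenants` maps name -> priority weight."""
    total_weight = sum(tenants.values())
    return {name: round(total_cpu * w / total_weight, 2)
            for name, w in tenants.items()}
```

For example, with 8 CPU cores and priorities 3:1 between an online-game and a face-detection tenant, the game workload receives 6 cores and face detection 2; re-running the rule after a priority update realizes the dynamic scaling.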
Resource management in a containerized cloud : status and challenges
Cloud computing heavily relies on virtualization, as virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to scalable compute
and storage resources necessary for today's cloud computing needs. A typical
datacenter is made up of thousands of servers connected with a large network
and usually managed by one operator. To provide quality access to the variety
of applications and services hosted on datacenters and to maximize
performance, it is necessary to use datacenter networks effectively and
efficiently.
Datacenter traffic is often a mix of several classes with different priorities
and requirements. This includes user-generated interactive traffic, traffic
with deadlines, and long-running traffic. To this end, custom transport
protocols and traffic management techniques have been developed to improve
datacenter network performance.
In this tutorial paper, we review the general architecture of datacenter
networks, various topologies proposed for them, their traffic properties,
general traffic control challenges in datacenters and general traffic control
objectives. The purpose of this paper is to bring out the important
characteristics of traffic control in datacenters and not to survey all
existing solutions (as it is virtually impossible due to the massive body of
existing research). We hope to provide readers with a wide range of options and
factors while considering a variety of traffic control mechanisms. We discuss
various characteristics of datacenter traffic control including management
schemes, transmission control, traffic shaping, prioritization, load balancing,
multipathing, and traffic scheduling. Next, we point to several open challenges
as well as new and interesting networking paradigms. At the end of this paper,
we briefly review inter-datacenter networks, which connect geographically
dispersed datacenters, have been receiving increasing attention recently,
and pose interesting and novel research problems.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
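As one concrete instance of the traffic shaping mechanisms this tutorial surveys, the token bucket is a common building block for rate-limiting a traffic class. The sketch below uses a logical clock and hypothetical rate/burst parameters; it is illustrative rather than any specific datacenter protocol.

```python
class TokenBucket:
    """Minimal token-bucket shaper: a packet is admitted only while
    tokens are available; tokens refill at `rate` per time unit, capped
    at `burst`, so short bursts are absorbed but the long-run rate is
    bounded."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size=1):
        # Refill proportionally to elapsed (logical) time, up to the burst.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

With rate 1 and burst 2, two back-to-back packets at time 0 are admitted, a third is shaped (dropped or queued), and one more token becomes available after one time unit.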
Performance optimization of big data computing workflows for batch and stream data processing in multi-clouds
Workflow techniques have been widely used as a major computing solution in many science domains. With the rapid deployment of cloud infrastructures around the globe and the economic benefits of cloud-based computing and storage services, an increasing number of scientific workflows have migrated or are in active transition to clouds. As the scale of scientific applications continues to grow, it is now common to deploy various data- and network-intensive computing workflows such as serial computing workflows, MapReduce/Spark-based workflows, and Storm-based stream data processing workflows in multi-cloud environments, where inter-cloud data transfer oftentimes plays a significant role in both workflow performance and financial cost. Rigorous mathematical models are constructed to analyze the intra- and inter-cloud execution process of scientific workflows, and a class of budget-constrained workflow mapping problems is formulated to optimize the network performance of big data workflows in multi-cloud environments. Research shows that these problems are all NP-complete, and a heuristic solution is designed for each that takes into consideration module execution, data transfer, and I/O operations. The performance superiority of the proposed solutions over existing methods is illustrated through extensive simulations and further verified by real-life workflow experiments deployed in public clouds.
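A budget-constrained mapping heuristic in this spirit can be sketched as a greedy upgrade loop; this is not the paper's algorithm. The module names, runtimes, and costs below are hypothetical, and a serial workflow is assumed so that total runtime is the sum of module runtimes (the paper also models data transfer and I/O, which are omitted here).

```python
# Hypothetical per-module mapping options, ordered cheapest first:
# module -> list of (runtime_s, cost_$)
OPTIONS = {
    "ingest":  [(60, 1.0), (30, 2.5)],
    "analyze": [(120, 2.0), (45, 6.0)],
}

def map_workflow(budget):
    """Start from the cheapest option per module, then repeatedly apply
    the affordable upgrade with the best time saved per extra dollar."""
    choice = {m: 0 for m in OPTIONS}               # index of chosen option
    spent = sum(opts[0][1] for opts in OPTIONS.values())
    while True:
        best = None
        for m, i in choice.items():
            if i + 1 < len(OPTIONS[m]):
                t0, c0 = OPTIONS[m][i]
                t1, c1 = OPTIONS[m][i + 1]
                if spent - c0 + c1 <= budget:      # upgrade stays in budget
                    gain = (t0 - t1) / (c1 - c0)   # seconds saved per dollar
                    if best is None or gain > best[0]:
                        best = (gain, m)
        if best is None:
            break
        m = best[1]
        spent += OPTIONS[m][choice[m] + 1][1] - OPTIONS[m][choice[m]][1]
        choice[m] += 1
    runtime = sum(OPTIONS[m][i][0] for m, i in choice.items())
    return choice, runtime, spent
```

With a $5 budget, only the "ingest" upgrade fits, cutting total runtime from 180 s to 150 s at a cost of $4.50.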