InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different
geographical locations over the Internet in order to optimally serve the needs of
their customers around the world. However, existing systems do not support
mechanisms and policies for dynamically coordinating load distribution among
different Cloud-based data centers in order to determine optimal location for
hosting application services to achieve reasonable QoS levels. Further, the
Cloud computing providers are unable to predict geographic distribution of
users consuming their services, hence the load coordination must happen
automatically, and distribution of services must change in response to changes
in the load. To counter this problem, we advocate the creation of a federated
Cloud computing environment (InterCloud) that facilitates just-in-time,
opportunistic, and scalable provisioning of application services, consistently
achieving QoS targets under variable workload, resource and network conditions.
The overall goal is to create a computing environment that supports dynamic
expansion or contraction of capabilities (VMs, services, storage, and database)
for handling sudden variations in service demands.
This paper presents vision, challenges, and architectural elements of
InterCloud for utility-oriented federation of Cloud computing environments. The
proposed InterCloud environment supports scaling of applications across
multiple vendor clouds. We have validated our approach by conducting a set of
rigorous performance evaluation studies using the CloudSim toolkit. The results
demonstrate that the federated Cloud computing model has immense potential, as it
offers significant performance gains in terms of response time and cost
savings under dynamic workload scenarios.
Comment: 20 pages, 4 figures, 3 tables, conference paper
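As a rough illustration of the load-coordination idea in this abstract, the following Python snippet picks a host data center in a federation by trading off expected response time against cost. The names, the M/M/1-style latency model, and the numbers are assumptions for illustration, not the paper's CloudSim experiments.

# Hypothetical sketch: choosing a host data center in a federation.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    rtt_ms: float         # network round-trip time to the user region
    load: float           # current utilization in [0, 1]
    cost_per_hour: float  # price of one VM hour

def best_site(sites, max_cost, service_ms=50.0):
    # Expected response time modeled naively as RTT plus service time
    # inflated by load (an assumed M/M/1-style approximation).
    def response(s):
        return s.rtt_ms + service_ms / max(1e-6, 1.0 - s.load)
    feasible = [s for s in sites if s.cost_per_hour <= max_cost]
    return min(feasible, key=response) if feasible else None

sites = [DataCenter("eu-1", rtt_ms=20, load=0.9, cost_per_hour=0.12),
         DataCenter("us-1", rtt_ms=90, load=0.3, cost_per_hour=0.10)]
print(best_site(sites, max_cost=0.15).name)  # us-1: farther but far less loaded

Re-running such a selection as loads and prices change is one way to picture the dynamic expansion and contraction of capabilities that the abstract describes.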
Software-Defined Cloud Computing: Architectural Elements and Open Challenges
The variety of existing cloud services creates a challenge for service
providers to enforce reasonable Service Level Agreements (SLAs) stating the
Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid
such penalties while the infrastructure operates with minimum energy and
resource wastage, constant monitoring and adaptation of the infrastructure
are needed. We refer to Software-Defined Cloud Computing, or
simply Software-Defined Clouds (SDC), as an approach for automating the process
of optimal cloud configuration by extending the virtualization concept to all
resources in a data center. An SDC enables easy reconfiguration and adaptation
of physical resources in a cloud infrastructure, to better accommodate the
demand for QoS through software that can describe and manage the various aspects
comprising the cloud environment. In this paper, we present an architecture for
SDCs on data centers with emphasis on mobile cloud applications. We present an
evaluation showcasing the potential of SDC in two use cases, QoS-aware
bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and
discuss the research challenges and opportunities in this emerging area.
Comment: Keynote Paper, 3rd International Conference on Advances in Computing,
Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi,
India
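The second use case, bandwidth-aware, energy-efficient VM placement, can be pictured as a two-dimensional bin-packing problem: consolidate VMs onto as few hosts as possible, so idle hosts can be powered down, without exceeding CPU or bandwidth capacity. The Python sketch below uses a first-fit-decreasing heuristic under assumed host capacities; it illustrates the general technique, not the paper's algorithm.

# Hypothetical sketch: bandwidth-aware VM consolidation via first-fit-decreasing.
def place_vms(vms, host_cpu=16.0, host_bw=10.0):
    # vms: list of (cpu, bandwidth) demands; returns one dict per powered-on host.
    hosts = []
    for cpu, bw in sorted(vms, reverse=True):  # largest demands first
        for h in hosts:
            if h["cpu"] + cpu <= host_cpu and h["bw"] + bw <= host_bw:
                h["cpu"] += cpu
                h["bw"] += bw
                h["vms"].append((cpu, bw))
                break
        else:  # no existing host fits, so power on a new one
            hosts.append({"cpu": cpu, "bw": bw, "vms": [(cpu, bw)]})
    return hosts

packing = place_vms([(8, 2), (4, 6), (4, 4), (6, 3), (2, 1)])
print(len(packing), "hosts powered on")  # 2 hosts; the others could sleep

Packing along both dimensions matters here: a CPU-only packer could saturate a host's network links, which is exactly the interplay the two SDC use cases highlight.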
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rise in importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these back to the development (Dev). However, so far, the research
community has treated software engineering, performance engineering, and cloud
computing mostly as individual research areas. We aimed to identify
opportunities for cross-community collaboration and to set the path for long-lasting
collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
Managing Service-Heterogeneity using Osmotic Computing
Computational resource provisioning that is closer to a user is becoming
increasingly important, with a rise in the number of devices making continuous
service requests and with the significant recent uptake of latency-sensitive
applications, such as streaming and real-time data processing. Fog computing
provides a solution to such types of applications by bridging the gap between
the user and public/private cloud infrastructure via the inclusion of a "fog"
layer. Such an approach can reduce the overall processing latency, but
the issues of redundancy, cost-effectiveness in utilizing such computing
infrastructure, and handling services according to differences in their
characteristics remain. This difference in the characteristics of services,
arising from variations in their computational resource and processing
requirements, is termed service heterogeneity. A potential solution to these issues is the
use of Osmotic Computing, a recently introduced paradigm that allows services
to be divided on the basis of their resource usage, using parameters such as
energy, load, and processing time on a data center versus a network edge resource.
Service provisioning can then be divided across the different layers of a
computational infrastructure, from edge devices through in-transit nodes to a
data center, supported through an Osmotic software layer. In this paper, a
fitness-based Osmosis algorithm is proposed to provide support for osmotic
computing by making more effective use of existing Fog server resources. The
proposed approach is capable of efficiently distributing and allocating
services by following the principle of osmosis. The results are presented using
numerical simulations demonstrating gains in terms of lower allocation time and
a higher probability of services being handled with high resource utilization.
Comment: 7 pages, 4 figures, International Conference on Communication,
Management and Information Technology (ICCMIT 2017), Warsaw, Poland, 3-5
April 2017, http://www.iccmit.net/ (Best Paper Award)
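To make the osmosis analogy concrete, the sketch below scores a service on an edge resource and on a data center using the parameters the abstract names (energy, load, processing time) and places it on the higher-fitness side, the way solvent crosses a membrane. The fitness function, weights, and numbers are illustrative assumptions, not the paper's fitness-based Osmosis algorithm.

# Hypothetical sketch: fitness-driven edge-vs-cloud service placement.
def fitness(energy, load, proc_time, w=(0.3, 0.3, 0.4)):
    # Lower energy, load, and processing time yield higher fitness.
    return 1.0 / (w[0] * energy + w[1] * load + w[2] * proc_time)

def osmose(service, edge, cloud):
    # Place the service on whichever side scores higher.
    f_edge, f_cloud = fitness(*edge), fitness(*cloud)
    return (service, "edge" if f_edge >= f_cloud else "cloud")

# A latency-sensitive microservice: cheap to run at the edge.
print(osmose("stream-analytics",
             edge=(0.2, 0.5, 0.1),    # (energy, load, proc_time) at the edge
             cloud=(0.6, 0.4, 0.4)))  # the same metrics at the data center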