Autonomic Cloud Computing: Open Challenges and Architectural Elements
As Clouds are complex, large-scale, and heterogeneous distributed systems,
management of their resources is a challenging task. They need automated and
integrated intelligent strategies for provisioning of resources to offer
services that are secure, reliable, and cost-efficient. Hence, effective
management of services becomes fundamental in software platforms that
constitute the fabric of computing Clouds. In this direction, this paper
identifies open issues in autonomic resource provisioning and presents
innovative management techniques for supporting SaaS applications hosted on
Clouds. We present a conceptual architecture and early results evidencing the
benefits of autonomic management of Clouds.
Comment: 8 pages, 6 figures, conference keynote paper
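Note: the paper presents a conceptual architecture rather than code. Purely as a minimal sketch, assuming a MAPE-style monitor/analyse/plan/execute loop and an invented cloud_api object (its average_cpu_utilisation, vm_count, provision_vm, and release_vm calls are not from the paper), a threshold-based autonomic provisioner could look like this:

    import time


    class AutonomicProvisioner:
        """Threshold-based scaling loop; all names and thresholds are illustrative."""

        def __init__(self, cloud_api, min_vms=1, max_vms=20,
                     scale_up_util=0.75, scale_down_util=0.30):
            self.cloud = cloud_api          # hypothetical monitoring/actuation API
            self.min_vms, self.max_vms = min_vms, max_vms
            self.scale_up_util, self.scale_down_util = scale_up_util, scale_down_util

        def analyse_and_plan(self, utilisation, vms):
            # Decide whether to add or remove one VM based on utilisation thresholds.
            if utilisation > self.scale_up_util and vms < self.max_vms:
                return +1
            if utilisation < self.scale_down_util and vms > self.min_vms:
                return -1
            return 0

        def run(self, interval_s=60):
            while True:
                # Monitor: query the assumed cloud API for the current state.
                utilisation, vms = self.cloud.average_cpu_utilisation(), self.cloud.vm_count()
                # Plan and execute: scale out or in by at most one VM per cycle.
                delta = self.analyse_and_plan(utilisation, vms)
                if delta > 0:
                    self.cloud.provision_vm()
                elif delta < 0:
                    self.cloud.release_vm()
                time.sleep(interval_s)

A real autonomic manager would also weigh cost, SLA targets, and workload prediction rather than a single utilisation signal; the loop above only shows the control-loop shape.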
A Reliable and Cost-Efficient Auto-Scaling System for Web Applications Using Heterogeneous Spot Instances
Cloud providers sell their idle capacity on markets through an auction-like
mechanism to increase their return on investment. The instances sold in this
way are called spot instances. Although spot instances are usually 90%
cheaper than on-demand instances, they can be terminated by the provider when
their bidding prices fall below the market price. Thus, they are largely used
only to provision fault-tolerant applications. In this paper, we explore how to
utilize spot instances to provision web applications, which are usually
considered availability-critical. The idea is to take advantage of differences
in price among various types of spot instances to achieve both high availability
and significant cost savings. We first propose a fault-tolerant model for web
applications provisioned by spot instances. Based on that, we devise novel
auto-scaling policies for hourly billed cloud markets. We implemented the
proposed model and policies both on a simulation testbed, for repeatable
validation, and on Amazon EC2. The experiments on the simulation testbed and the
real platform against the benchmarks show that the proposed approach can
greatly reduce resource cost and still achieve satisfactory Quality of Service
(QoS) in terms of response time and availability.
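Note: the abstract does not spell out the policies themselves. The sketch below only illustrates the underlying idea of exploiting price differences across heterogeneous spot pools: spread a capacity target over the k cheapest pools and over-provision by k/(k-1) so the termination of any single pool still leaves the full target capacity. All instance types, prices, and capacities are invented.

    import math
    from dataclasses import dataclass


    @dataclass
    class SpotPool:
        name: str
        capacity_per_vm: float   # e.g. requests/s a VM of this type can serve
        spot_price: float        # current hourly market price (made up)


    def plan_allocation(pools, target_capacity, k=3):
        # Cheapest capacity first (price per unit of capacity).
        chosen = sorted(pools, key=lambda p: p.spot_price / p.capacity_per_vm)[:k]
        k = len(chosen)
        # Over-provision so that losing any one pool still covers target_capacity.
        per_pool = target_capacity / (k - 1) if k > 1 else target_capacity
        return {p.name: math.ceil(per_pool / p.capacity_per_vm) for p in chosen}


    pools = [SpotPool("m4.large", 100, 0.025),
             SpotPool("c4.large", 120, 0.031),
             SpotPool("r4.large", 90, 0.028)]
    print(plan_allocation(pools, target_capacity=600))
    # {'m4.large': 3, 'c4.large': 3, 'r4.large': 4}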
SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions
Cloud computing systems promise to offer subscription-oriented,
enterprise-quality computing services to users worldwide. With the increased
demand for delivering services to a large number of users, they need to offer
differentiated services to users and meet their quality expectations. Existing
resource management systems in data centers are yet to support Service Level
Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to
realize cloud computing and utility computing. In addition, no work has been
done to collectively incorporate customer-driven service management,
computational risk management, and autonomic resource management into a
market-based resource management system to target the rapidly changing
enterprise requirements of Cloud computing. This paper presents vision,
challenges, and architectural elements of SLA-oriented resource management. The
proposed architecture supports integration of market-based provisioning policies
and virtualisation technologies for flexible allocation of resources to
applications. The performance results obtained from our working prototype
system show the feasibility and effectiveness of SLA-based resource
provisioning in Clouds.
Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE
International Conference on Cloud and Service Computing (CSC 2011, IEEE
Press, USA), Hong Kong, China, December 12-14, 2011
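Note: purely as a hedged illustration of the kind of decision an SLA-oriented broker makes (the request fields, cost figure, and profit-positive rule are assumptions for this sketch, not the paper's mechanism), admission can weigh revenue against provisioning cost and the expected SLA penalty:

    from dataclasses import dataclass


    @dataclass
    class ServiceRequest:
        revenue: float                 # what the customer pays for the service
        vm_hours: float                # estimated resource demand
        penalty: float                 # payout if the SLA (e.g. a deadline) is violated
        violation_probability: float   # risk estimate from monitoring or history


    def expected_profit(req, vm_hour_cost):
        provisioning_cost = req.vm_hours * vm_hour_cost
        expected_penalty = req.penalty * req.violation_probability
        return req.revenue - provisioning_cost - expected_penalty


    def admit(req, vm_hour_cost=0.10):
        # Profit-positive admission rule; a real broker would also check current
        # capacity and differentiated service classes.
        return expected_profit(req, vm_hour_cost) > 0


    req = ServiceRequest(revenue=50.0, vm_hours=120, penalty=200.0,
                         violation_probability=0.05)
    print(admit(req))   # True: 50 - 12 - 10 = 28 > 0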
An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers
Today's Cloud applications are dominated by composite applications comprising
multiple computing and data components with strong communication correlations
among them. Although Cloud providers are deploying large numbers of computing
and storage devices to address the ever-increasing demand for computing and
storage resources, network resource demands are emerging as a key
performance bottleneck. This paper addresses network-aware placement
of virtual components (computing and data) of multi-tier applications in data
centers and formally defines the placement as an optimization problem. The
simultaneous placement of Virtual Machines and data blocks aims at reducing the
network overhead of the data center network infrastructure. A greedy heuristic
is proposed for the on-demand application components placement that localizes
network traffic in the data center interconnect. Such optimization helps
reduce communication overhead in upper-layer network switches, which will
eventually reduce the overall traffic volume across the data center. This, in
turn, will help reduce packet transmission delay, increase network
performance, and minimize the energy consumption of network components.
Experimental results demonstrate the performance superiority of the proposed
algorithm over other approaches: it outperforms the state-of-the-art
network-aware application placement algorithm across all performance metrics,
reducing the average network cost by up to 67% and network usage at core
switches by up to 84%, as well as increasing the average number of application
deployments by up to 18%.
Comment: Submitted for publication consideration to the Journal of Network
and Computer Applications (JNCA). 28 pages, 15 figures
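Note: the abstract does not give the heuristic itself; the sketch below only illustrates the general greedy pattern of placing the heaviest-communicating components first, each on the host that minimises the traffic-weighted network distance to its already-placed peers. The topology, traffic matrix, and capacity model are invented.

    def comm_cost(comp, host, placement, traffic, distance):
        # Traffic-weighted distance from `host` to the already-placed peers of `comp`.
        cost = 0.0
        for (a, b), rate in traffic.items():
            peer = b if a == comp else (a if b == comp else None)
            if peer is not None and peer in placement:
                cost += rate * distance[(host, placement[peer])]
        return cost


    def greedy_place(components, traffic, hosts, capacity, distance):
        # traffic: {(a, b): rate}; capacity: {host: free slots};
        # distance: {(h1, h2): network cost, e.g. hop count}.
        placement = {}
        # Heaviest communicators first, so their traffic is localised early.
        order = sorted(components,
                       key=lambda c: -sum(r for (a, b), r in traffic.items()
                                          if c in (a, b)))
        for comp in order:
            candidates = [h for h in hosts if capacity[h] > 0]
            best = min(candidates,
                       key=lambda h: comm_cost(comp, h, placement, traffic, distance))
            placement[comp] = best
            capacity[best] -= 1
        return placement


    hosts = ["h1", "h2"]
    distance = {("h1", "h1"): 0, ("h2", "h2"): 0, ("h1", "h2"): 2, ("h2", "h1"): 2}
    traffic = {("web", "db"): 10.0, ("web", "cache"): 1.0}
    print(greedy_place(["web", "db", "cache"], traffic, hosts,
                       {"h1": 2, "h2": 2}, distance))
    # {'web': 'h1', 'db': 'h1', 'cache': 'h2'}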
Software-Defined Cloud Computing: Architectural Elements and Open Challenges
The variety of existing cloud services creates a challenge for service
providers to enforce reasonable Service Level Agreements (SLAs) stating the
Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid
such penalties while the infrastructure operates with minimum
energy and resource wastage, constant monitoring and adaptation of the
infrastructure are needed. We refer to Software-Defined Cloud Computing, or
simply Software-Defined Clouds (SDC), as an approach for automating the process
of optimal cloud configuration by extending the virtualization concept to all
resources in a data center. An SDC enables easy reconfiguration and adaptation
of physical resources in a cloud infrastructure, to better accommodate the
demand for QoS through software that can describe and manage various aspects
comprising the cloud environment. In this paper, we present an architecture for
SDCs on data centers with emphasis on mobile cloud applications. We present an
evaluation showcasing the potential of SDC in two use cases, QoS-aware
bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and
discuss the research challenges and opportunities in this emerging area.
Comment: Keynote Paper, 3rd International Conference on Advances in Computing,
Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi,
India
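Note: the abstract names QoS-aware bandwidth allocation as a use case without detailing the mechanism. Purely as an assumed illustration (flow names, guarantees, and weights are invented), a controller could give each flow its guaranteed minimum and then split the remaining link capacity by priority weight:

    def allocate_bandwidth(link_capacity, flows):
        # flows: {flow_id: (guaranteed_min, weight)} -> {flow_id: allocated rate}
        guaranteed = {f: g for f, (g, _) in flows.items()}
        remaining = link_capacity - sum(guaranteed.values())
        if remaining < 0:
            # Guarantees oversubscribed: scale them down proportionally.
            scale = link_capacity / sum(guaranteed.values())
            return {f: g * scale for f, g in guaranteed.items()}
        total_weight = sum(w for _, w in flows.values())
        return {f: guaranteed[f] + remaining * w / total_weight
                for f, (_, w) in flows.items()}


    flows = {"video": (300, 3), "web": (100, 2), "backup": (0, 1)}
    print(allocate_bandwidth(1000, flows))
    # {'video': 600.0, 'web': 300.0, 'backup': 100.0}

In an SDC setting the resulting per-flow rates would then be pushed to the switches through the controller's southbound interface; that step is omitted here.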
Structural, optical, and magnetic properties of Zn1-xMnxS (x = 0.00, 0.01, and 0.03) nanoparticles
Zn1-xMnxS (x = 0.00, 0.01, and 0.03) nanoparticles of size ~2 nm were synthesized using a co-precipitation method. Structural characterization by X-ray diffraction (XRD) measurements revealed that all the synthesized samples crystallized in the monophasic cubic zinc blende structure and showed a monotonic decrease in the lattice constants with increasing Mn content, due to effective Mn doping. The quantum confinement nature of the nanoparticles was examined using UV-Vis absorbance measurements, and the particle sizes were calculated and compared with those obtained from XRD. Two emission bands observed in the doped nanoparticles were attributed to defect-related emission (blue) of ZnS and to Mn2+ (orange), respectively, where the intensity of the emission peaks increases without any significant shift as the Mn2+ concentration increases. Magnetic measurements showed that all the samples exhibit antiferromagnetic together with diamagnetic behavior at room temperature.
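Note: the abstract does not state how the XRD-based particle sizes were extracted. One common route, given here only as a general reference and not as the authors' method, is the Scherrer relation

    D = \frac{K \lambda}{\beta \cos\theta}

where D is the crystallite size, K ≈ 0.9 is the shape factor, λ is the X-ray wavelength, β is the peak full width at half maximum (in radians), and θ is the Bragg angle.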