An SLA Support System for Cloud Computing
Abstract. Nowadays, even with the existence of many Cloud Providers (CP) in the market, it is still impossible to find CPs who guarantee, or at least offer, an SLA specification tailored to Cloud Users' (CU) interests: not just offering a percentage of availability, but also guaranteeing specific performance parameters for a certain Cloud application. Due to (1) the huge size of CPs' IT infrastructures and (2) the high complexity, with multiple inter-dependencies of resources (physical or virtual), the estimation of specific SLA parameters to compose Service Level Objectives (SLOs) with trustworthy Key Performance Indicators (KPIs) tends to be inaccurate. This paper proposes the initial design and preliminary approach for an SLA Support System for CC (SLACC) in order to estimate, via a formalized methodology based on available CC infrastructure parameters, what CPs will be able to offer/accept as SLOs or KPIs and, as a consequence, which increasing levels of SLA specificity can be reached for their customers.
Investigations of an SLA Support System for Cloud Computing (SLACC)
Cloud Providers (CP) and Cloud Users (CU) need to agree on a set of parameters expressed through Service Level Agreements (SLA) for a given Cloud service. However, even with the existence of many CPs in the market, it is still impossible today to find CPs who guarantee, or at least offer, an SLA specification tailored to CUs' interests: not just offering a percentage of availability, but also guaranteeing, for example, specific performance parameters for a certain Cloud application. Due to (1) the huge size of CPs' IT infrastructures and (2) the high complexity, with multiple inter-dependencies of resources (physical or virtual), the estimation of specific SLA parameters to compose Service Level Objectives (SLOs) with trustworthy Key Performance Indicators (KPIs) tends to be inaccurate. This paper investigates an SLA Support System for CC (SLACC), which aims to estimate, via a formalized methodology based on available Cloud Computing infrastructure parameters, what CPs will be able to offer/accept as SLOs or KPIs and, as a consequence, which increasing levels of SLA specificity can be reached for their customers.
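Estimating what a CP can safely commit to as an SLO from its infrastructure parameters can be illustrated with a classic reliability calculation. The sketch below is hypothetical, not the SLACC methodology itself: it derives an end-to-end availability figure from per-component availabilities, assuming independent failures, with serial components that must all be up and redundant replicas in parallel.

```python
# Hypothetical sketch: deriving an end-to-end availability SLO from
# per-component availabilities, assuming independent failures.

def serial(avail):
    """Availability of a chain of components that must all be up."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(avail):
    """Availability of redundant replicas; service is up if any one is up."""
    down = 1.0
    for a in avail:
        down *= 1.0 - a
    return 1.0 - down

# Illustrative numbers: a VM depends on its host, a switch, and one of
# two storage replicas.
host, switch = 0.999, 0.9995
storage = parallel([0.99, 0.99])      # two replicas in parallel
slo = serial([host, switch, storage])
```

Under these assumed figures, the composite availability a CP could offer as an SLO is lower than any single component's availability in the serial chain, which is exactly why offering only a raw "percentage of availability" understates the estimation problem the paper describes.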
A hierarchical self-X SLA for cloud computing
In cloud computing, a service level agreement (SLA) is a mutual contract between service provider
and consumer on quality of service (QoS). Currently, there is no effective relation among
different SLAs, even though some of them affect each other. Moreover, different systems are
employed to operate SLA-based functions, for instance SLA monitoring systems and SLA-based
resource-adaptation systems. These heterogeneous SLA-based systems, and the lack of hierarchical
relations among SLAs, have reduced the efficiency of SLA-based systems. In this paper, the SLA
structure is extended to support hierarchical relations between SLAs in cloud computing.
Additionally, a self-X ability is proposed and added to the SLA structure to support SLA-based
operations. A self-healing SLA is developed from the extended SLA, and the experimental results
demonstrate its effectiveness.
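A hierarchical SLA with a self-healing hook might look like the following minimal sketch. All names (`Sla`, `check`, the `healer` callback) are illustrative inventions, not the paper's actual structure: the point is only that a parent SLA's check propagates down to dependent child SLAs, and a violation triggers a self-X action.

```python
# Hypothetical sketch of a hierarchical SLA with a self-healing hook.
# Names (Sla, check, healer) are illustrative, not the paper's API.

class Sla:
    def __init__(self, name, slos, children=()):
        self.name = name
        self.slos = dict(slos)          # metric -> required threshold
        self.children = list(children)  # dependent (child) SLAs

    def check(self, measurements, healer=None):
        """Return violated (sla, metric) pairs; recurse into children."""
        violations = []
        for metric, threshold in self.slos.items():
            if measurements.get(metric, 0.0) < threshold:
                violations.append((self.name, metric))
                if healer:
                    healer(self, metric)      # self-healing action
        for child in self.children:
            violations += child.check(measurements, healer)
        return violations

vm_sla = Sla("vm", {"cpu_share": 0.5})
app_sla = Sla("app", {"availability": 0.999}, children=[vm_sla])
log = []
app_sla.check({"availability": 0.995, "cpu_share": 0.8},
              healer=lambda sla, m: log.append((sla.name, m)))
```

One call at the root checks the whole hierarchy, which is the efficiency the abstract attributes to hierarchical relations compared to operating separate, unrelated SLA systems.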
An approach to failure prediction in a cloud based environment
Failure in a cloud system is defined as an event that occurs when the delivered service deviates from the correct intended behavior. As cloud computing systems continue to grow in scale and complexity, there is an urgent need for cloud service providers (CSP) to guarantee reliable on-demand resources to their customers in the presence of faults, thereby fulfilling their service level agreements (SLA). Component failures in cloud systems are very familiar phenomena. However, large cloud service providers' data centers should be designed to provide a certain level of availability to the business system. The Infrastructure-as-a-Service (IaaS) cloud delivery model presents computational resources (CPU and memory), storage resources, and networking capacity that ensure high availability in the presence of such failures. In-production fault data recorded over a two-year period at the National Energy Research Scientific Computing Center (NERSC) has been studied and analyzed. Using the real-time data collected from the Computer Failure Data Repository (CFDR), this paper presents the performance of two machine learning (ML) algorithms, a Linear Regression (LR) model and a Support Vector Machine (SVM) with a linear Gaussian kernel, for predicting hardware failures in a real-time cloud environment to improve system availability. The performance of the two algorithms has been rigorously evaluated using the K-fold cross-validation technique. Furthermore, steps and procedures for future studies have been presented. This research will aid computer hardware companies and cloud service providers (CSP) in designing reliable fault-tolerant systems by providing better device selection, thereby improving system availability and minimizing unscheduled system downtime.
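The K-fold evaluation procedure the abstract mentions can be sketched in a few lines. This is not the paper's pipeline: the data below is synthetic, and a one-feature ordinary-least-squares fit stands in for the LR model (the SVM variant is omitted for brevity). Each fold is held out once while the model is trained on the rest, and the held-out errors are averaged.

```python
# Minimal sketch of K-fold cross-validation for a failure predictor.
# Synthetic data; a one-feature least-squares fit stands in for the
# paper's LR model (the SVM variant is omitted).

def fit_lr(xs, ys):
    """Ordinary least squares for y = a*x + b on one feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def kfold_mse(xs, ys, k=5):
    """Average held-out mean squared error over k contiguous folds."""
    n = len(xs)
    fold = n // k
    errors = []
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        a, b = fit_lr(xs[:lo] + xs[hi:], ys[:lo] + ys[hi:])  # train
        errors += [(a * x + b - y) ** 2                       # test
                   for x, y in zip(xs[lo:hi], ys[lo:hi])]
    return sum(errors) / len(errors)

# Synthetic "device age vs. failure rate" data, noise-free for the sketch.
ages = [float(i) for i in range(20)]
fails = [0.3 * x + 1.0 for x in ages]
mse = kfold_mse(ages, fails)
```

Because every sample is tested exactly once on a model that never saw it, the resulting MSE is a less optimistic (and more trustworthy) estimate than training error, which is why the abstract calls the evaluation "rigorous".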
SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions
Cloud computing systems promise to offer subscription-oriented,
enterprise-quality computing services to users worldwide. With the increased
demand for delivering services to a large number of users, they need to offer
differentiated services to users and meet their quality expectations. Existing
resource management systems in data centers are yet to support Service Level
Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to
realize cloud computing and utility computing. In addition, no work has been
done to collectively incorporate customer-driven service management,
computational risk management, and autonomic resource management into a
market-based resource management system to target the rapidly changing
enterprise requirements of Cloud computing. This paper presents vision,
challenges, and architectural elements of SLA-oriented resource management. The
proposed architecture supports integration of market-based provisioning policies
and virtualisation technologies for flexible allocation of resources to
applications. The performance results obtained from our working prototype
system show the feasibility and effectiveness of SLA-based resource
provisioning in Clouds.
Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE
International Conference on Cloud and Service Computing (CSC 2011, IEEE
Press, USA), Hong Kong, China, December 12-14, 2011
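One way SLA-oriented provisioning can fold customer-driven service management and risk management into a market-based allocator is to weigh expected revenue against the expected SLA-violation penalty at admission time. The sketch below is a hypothetical illustration in that spirit, not the paper's architecture; all names and numbers are invented.

```python
# Hypothetical sketch of SLA-oriented admission control: accept a request
# only when expected revenue exceeds the expected SLA penalty, then place
# it on the cheapest host with spare capacity. Names are illustrative.

def place(request, hosts):
    """request: (cpu_needed, revenue, violation_prob, penalty).
    hosts: dicts with 'free_cpu' and 'cost'. Returns chosen host or None."""
    cpu, revenue, p_viol, penalty = request
    if revenue - p_viol * penalty <= 0:
        return None                        # SLA risk outweighs revenue
    candidates = [h for h in hosts if h["free_cpu"] >= cpu]
    if not candidates:
        return None                        # no capacity: reject
    best = min(candidates, key=lambda h: h["cost"])
    best["free_cpu"] -= cpu                # commit the allocation
    return best

hosts = [{"name": "h1", "free_cpu": 4, "cost": 2.0},
         {"name": "h2", "free_cpu": 8, "cost": 1.5}]
chosen = place((6, 10.0, 0.05, 40.0), hosts)  # expected penalty 2.0 < 10.0
```

Rejecting a request outright when the risk-adjusted value is negative is the "computational risk management" ingredient; choosing among hosts by cost is the market-based one.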
Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges
Cloud computing is offering utility-oriented IT services to users worldwide.
Based on a pay-as-you-go model, it enables hosting of pervasive applications
from consumer, scientific, and business domains. However, data centers hosting
Cloud applications consume huge amounts of energy, contributing to high
operational costs and carbon footprints to the environment. Therefore, we need
Green Cloud computing solutions that can not only save energy for the
environment but also reduce operational costs. This paper presents vision,
challenges, and architectural elements for energy-efficient management of Cloud
computing environments. We focus on the development of dynamic resource
provisioning and allocation algorithms that consider the synergy between
various data center infrastructures (i.e., the hardware, power units, cooling
and software), and holistically work to boost data center energy efficiency and
performance. In particular, this paper proposes (a) architectural principles
for energy-efficient management of Clouds; (b) energy-efficient resource
allocation policies and scheduling algorithms considering quality-of-service
expectations and device power-usage characteristics; and (c) a novel software
technology for energy-efficient management of Clouds. We have validated our
approach by conducting a set of rigorous performance evaluation studies using
the CloudSim toolkit. The results demonstrate that the Cloud computing model
has immense potential, as it offers significant performance gains in response
time and cost savings under dynamic workload scenarios.
Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference
on Parallel and Distributed Processing Techniques and Applications (PDPTA
2010), Las Vegas, USA, July 12-15, 2010
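A common building block of energy-efficient allocation policies of the kind this abstract surveys is placing each VM on the host whose estimated power draw increases least, under a linear power model. The sketch below is a hedged illustration of that idea with invented numbers, not the paper's algorithm.

```python
# Hedged sketch of power-aware VM placement: put each VM on the host whose
# estimated power draw increases least. Linear power model assumed; all
# names and wattages are illustrative.

def power(host):
    """Linear model: idle power plus utilisation-proportional dynamic power."""
    util = host["used"] / host["capacity"]
    return host["idle_w"] + (host["max_w"] - host["idle_w"]) * util

def allocate(vm_cpu, hosts):
    """Place the VM on the feasible host with the smallest power increase."""
    best, best_delta = None, None
    for h in hosts:
        if h["capacity"] - h["used"] < vm_cpu:
            continue                       # not enough spare capacity
        before = power(h)
        h["used"] += vm_cpu                # tentatively place the VM
        delta = power(h) - before
        h["used"] -= vm_cpu                # roll back the trial
        if best is None or delta < best_delta:
            best, best_delta = h, delta
    if best is not None:
        best["used"] += vm_cpu             # commit the best placement
    return best

hosts = [{"name": "a", "capacity": 10, "used": 0, "idle_w": 100, "max_w": 250},
         {"name": "b", "capacity": 10, "used": 0, "idle_w": 70, "max_w": 300}]
chosen = allocate(2, hosts)
```

With these assumed wattages, host "a" wins because its dynamic power range (250 minus 100 W) is narrower than host "b"'s, so the same 20% utilisation step costs fewer additional watts, illustrating the synergy between hardware power characteristics and the allocation decision that the abstract emphasises.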