7 research outputs found

    Autonomic Cloud Computing: Open Challenges and Architectural Elements

    As Clouds are complex, large-scale, and heterogeneous distributed systems, management of their resources is a challenging task. They need automated and integrated intelligent strategies for provisioning of resources to offer services that are secure, reliable, and cost-efficient. Hence, effective management of services becomes fundamental in the software platforms that constitute the fabric of computing Clouds. In this direction, this paper identifies open issues in autonomic resource provisioning and presents innovative management techniques for supporting SaaS applications hosted on Clouds. We present a conceptual architecture and early results evidencing the benefits of autonomic management of Clouds. Comment: 8 pages, 6 figures, conference keynote paper
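
    Autonomic management of the kind described here is commonly organised as a monitor-analyse-plan-execute control loop. The sketch below is a minimal illustration of such a loop, assuming a hypothetical IaaS client object and illustrative utilisation thresholds; it is not the architecture proposed in the paper.

        # Minimal sketch of an autonomic provisioning loop in the
        # monitor-analyse-plan-execute style. The `cloud` client interface and
        # the thresholds are illustrative assumptions, not the paper's design.
        import time

        class AutonomicProvisioner:
            def __init__(self, cloud, scale_up_cpu=0.80, scale_down_cpu=0.30):
                self.cloud = cloud                    # hypothetical IaaS client
                self.scale_up_cpu = scale_up_cpu      # add a VM above this load
                self.scale_down_cpu = scale_down_cpu  # release a VM below this load

            def monitor(self) -> float:
                # Monitor: average CPU utilisation (0..1) across provisioned VMs.
                return self.cloud.average_cpu_utilisation()

            def plan(self, avg_cpu: float) -> int:
                # Analyse + plan: number of VMs to add (+1), remove (-1), or keep (0).
                if avg_cpu > self.scale_up_cpu:
                    return 1
                if avg_cpu < self.scale_down_cpu and self.cloud.vm_count() > 1:
                    return -1
                return 0

            def execute(self, delta: int) -> None:
                if delta > 0:
                    self.cloud.provision_vms(delta)
                elif delta < 0:
                    self.cloud.release_vms(-delta)

            def run(self, interval_seconds: float = 60.0) -> None:
                while True:                           # the autonomic control loop
                    self.execute(self.plan(self.monitor()))
                    time.sleep(interval_seconds)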

    SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services to users and meet their quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system to target the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. The performance results obtained from our working prototype system show the feasibility and effectiveness of SLA-based resource provisioning in Clouds. Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE International Conference on Cloud and Service Computing (CSC 2011, IEEE Press, USA), Hong Kong, China, December 12-14, 2011
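
    As a rough illustration of SLA-oriented provisioning, the sketch below admits a request only when some VM class can finish it within the customer's deadline and budget. The VM classes, prices, and admission rule are assumptions made up for the example; they are not the policies of the prototype described above.

        # Sketch of SLA-aware admission control: accept a request only if some
        # VM class can finish it within its deadline and budget. VM classes,
        # prices, and the runtime model are illustrative assumptions.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class VMClass:
            name: str
            mips: float             # processing rate (million instructions/second)
            price_per_hour: float

        @dataclass
        class Request:
            length_mi: float        # workload size in million instructions
            deadline_h: float       # SLA deadline in hours
            budget: float           # maximum spend the customer accepts

        VM_CLASSES = [
            VMClass("small", 1000, 0.05),
            VMClass("medium", 2000, 0.12),
            VMClass("large", 4000, 0.25),
        ]

        def admit(req: Request) -> Optional[VMClass]:
            """Return the cheapest VM class that satisfies the SLA, else None."""
            feasible = []
            for vm in VM_CLASSES:
                runtime_h = req.length_mi / vm.mips / 3600.0
                cost = runtime_h * vm.price_per_hour
                if runtime_h <= req.deadline_h and cost <= req.budget:
                    feasible.append((cost, vm))
            return min(feasible, key=lambda cv: cv[0])[1] if feasible else None

        # Example: a 7.2e6 MI job with a 3 hour deadline and a $0.50 budget is
        # admitted on the "small" class (2 h runtime, $0.10 cost).
        print(admit(Request(length_mi=7.2e6, deadline_h=3.0, budget=0.50)))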

    Dynamic Scaling for Service Oriented Applications: Implications of Virtual Machine Placement on IaaS Clouds

    Abstraction of physical hardware using infrastructure-as-a-service (IaaS) clouds leads to the simplistic view that resources are homogeneous and that infinite scaling is possible with linear increases in performance. Support for autonomic scaling of multi-tier service oriented applications requires determination of when, what, and where to scale. 'When' is addressed by hotspot detection schemes using techniques including performance modeling and time series analysis. 'What' relates to determining the quantity and size of new resources to provision. 'Where' involves identification of the best location(s) to provision new resources. In this paper we investigate primarily 'where' new infrastructure should be provisioned, and secondarily 'what' the infrastructure should be. Dynamic scaling of infrastructure for service oriented applications requires rapid response to changes in demand to meet application quality-of-service requirements. We investigate the performance and resource cost implications of VM placement when dynamically scaling the server infrastructure of service oriented applications. We evaluate dynamic scaling in the context of providing modeling-as-a-service for two environmental science models.
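
    A toy illustration of the 'when' and 'where' decisions: detect a hotspot from a moving average of response times, then place the new VM on the least-loaded host that has a free slot. The metrics, thresholds, and host model below are assumptions for illustration, not the schemes evaluated in the paper.

        # Toy 'when' (hotspot detection) and 'where' (placement) policy.
        # Threshold values and the Host model are illustrative assumptions.
        from collections import deque
        from dataclasses import dataclass
        from typing import Deque, List, Optional

        @dataclass
        class Host:
            name: str
            cpu_load: float       # current utilisation of the physical host (0..1)
            vm_slots_free: int

        class Autoscaler:
            def __init__(self, window: int = 12, latency_slo_ms: float = 250.0):
                self.window: Deque[float] = deque(maxlen=window)
                self.latency_slo_ms = latency_slo_ms

            def observe(self, response_ms: float) -> None:
                self.window.append(response_ms)

            def hotspot(self) -> bool:
                # 'When': scale once the moving-average latency breaches the SLO.
                return (len(self.window) == self.window.maxlen and
                        sum(self.window) / len(self.window) > self.latency_slo_ms)

            @staticmethod
            def place(hosts: List[Host]) -> Optional[Host]:
                # 'Where': pick the least-loaded host with a free VM slot.
                candidates = [h for h in hosts if h.vm_slots_free > 0]
                return min(candidates, key=lambda h: h.cpu_load) if candidates else None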

    Capacity Management for Cloud Computing: A System Dynamics Approach

    As the demand for cloud computing as a preferred computing architecture grows, the need for effective capacity planning by cloud providers becomes crucial to their long-term viability. Situations involving under-capacity and over-capacity represent lost opportunities and increased overhead. Economic conditions play a critical role in determining the capacity, cost, and revenue of cloud-based services. Using a system dynamics approach, this study evaluates different conditions in the cloud ecosystem from a capacity planning and management perspective, with a view to providing cloud service providers with guidance on capacity-building strategies.
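
    System dynamics models of this kind are typically expressed as stocks and flows integrated over time. The sketch below simulates a single capacity stock that chases growing demand subject to a provisioning delay; the parameters and equations are illustrative assumptions, not the model developed in this study.

        # Toy stock-and-flow sketch: installed capacity chases demand with a
        # provisioning delay. Parameters and equations are illustrative
        # assumptions, not the model developed in the study.
        def simulate(months: int = 36,
                     initial_capacity: float = 100.0,
                     demand_growth: float = 0.05,       # 5% demand growth per month
                     provisioning_delay: float = 3.0):  # months to bring capacity online
            capacity, demand = initial_capacity, initial_capacity
            history = []
            for month in range(1, months + 1):
                demand *= 1.0 + demand_growth
                gap = demand - capacity
                # First-order stock adjustment: close the gap over the delay.
                capacity += gap / provisioning_delay
                # pressure > 1 means under-capacity (lost opportunity);
                # pressure < 1 means over-capacity (idle overhead).
                pressure = demand / capacity
                history.append((month, capacity, demand, pressure))
            return history

        for month, capacity, demand, pressure in simulate(months=12):
            print(f"month {month:2d}  capacity={capacity:7.1f}  "
                  f"demand={demand:7.1f}  pressure={pressure:.2f}")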

    Service Level Agreements in Cloud Computing and Big Data

    Nowadays most industries hold large volumes of data, ranging from terabytes to petabytes, and organizations are looking for ways to handle this growth. Enterprises are using cloud deployments to address big data and analytics, particularly the interaction between the cloud and big data. This paper presents big data issues and research directions for ongoing work on processing big data in distributed environments.

    Elastic Scalable Cloud Computing Using Application-Level Migration

    This paper presents COS, a middleware framework to support autonomous workload elasticity and scalability based on application-level migration as a reconfiguration strategy. While other scalable frameworks (e.g., MapReduce or Google App Engine) force application developers to write programs following specific APIs, COS provides scalability in a general-purpose programming framework based on an actor-oriented programming language. When all executing VMs are highly utilized, COS scales a workload up by migrating mobile actors over to newly dynamically created VMs. When VM utilization drops, COS scales the workload down by consolidating actors and terminating idle VMs. Application-level migration is advantageous compared to VM migration, especially in hybrid clouds in which migration costs over the Internet are critical to scaling out workloads. We demonstrate the general-purpose programming approach using a tightly-coupled computation. We compare the performance of autonomous (i.e., COS-driven) versus ideal reconfiguration, as well as the impact of the granularity of reconfiguration, i.e., VM migration versus application-level migration. Our results show promise for future fully automated cloud computing resource management systems that efficiently enable truly elastic and scalable general-purpose workloads.
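
    The scale-up and scale-down behaviour described above can be illustrated with a toy controller: start a new VM and migrate actors onto it when every VM is highly utilised, and consolidate actors and terminate idle VMs when utilisation drops. The thresholds and the VM/actor representation below are assumptions for illustration, not the COS implementation.

        # Toy elasticity controller in the spirit of the description above:
        # scale out when every VM is hot, consolidate when VMs go idle.
        # Thresholds and the VM/actor model are illustrative assumptions.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class VM:
            name: str
            actors: List[str] = field(default_factory=list)
            utilisation: float = 0.0   # 0..1

        HIGH, LOW = 0.85, 0.20

        def rebalance(vms: List[VM]) -> List[VM]:
            if vms and all(vm.utilisation > HIGH for vm in vms):
                # Scale up: create a VM and migrate half the actors of the
                # busiest VM onto it (application-level migration, not VM migration).
                busiest = max(vms, key=lambda vm: vm.utilisation)
                new_vm = VM(name=f"vm-{len(vms)}")
                half = len(busiest.actors) // 2
                new_vm.actors, busiest.actors = busiest.actors[:half], busiest.actors[half:]
                vms.append(new_vm)
            else:
                # Scale down: consolidate actors off idle VMs, then terminate them.
                for vm in [v for v in vms if v.utilisation < LOW]:
                    if len(vms) == 1:
                        break
                    target = min((v for v in vms if v is not vm),
                                 key=lambda v: v.utilisation)
                    target.actors.extend(vm.actors)
                    vms.remove(vm)
            return vms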

    GreeDi: Energy Efficient Routing Algorithm for Big Data on Cloud

    The ever-increasing density of cloud computing parties, i.e. users, services, providers and data centres, has led to significant exponential growth in: the data produced and transferred among the cloud computing parties; network traffic; and the energy consumed by the massive cloud computing infrastructure, which is required to respond quickly and effectively to users' requests. Transferring big data volumes among the aforementioned parties requires a high-bandwidth connection, which consumes larger amounts of energy than just processing and storing big data on cloud data centres, and hence produces high carbon dioxide emissions. This power consumption is especially significant when transferring big data into a data centre located relatively far from the user's geographical location. Thus, it becomes highly necessary to locate the lowest-energy-consumption route between the user and the designated data centre, while making sure the user's requirements, e.g. response time, are met. The main contribution of this paper is GreeDi, a network-based routing algorithm to find the most energy-efficient path to the cloud data centre for processing and storing big data. The algorithm is first formalised using the situation calculus. Linear, goal, and dynamic programming approaches are used to model the algorithm. The algorithm is then evaluated against the baseline shortest-path algorithm with the minimum number of nodes traversed, using a real Italian ISP physical network topology.
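
    The contrast with the hop-count baseline can be illustrated by running Dijkstra's algorithm twice over the same topology, once with unit link weights and once with per-link energy costs. The topology and energy figures below are invented for the example and are unrelated to the Italian ISP topology used in the evaluation.

        # Illustration only: compare a minimum-hop route with a minimum-energy
        # route using Dijkstra's algorithm. The topology and per-link energy
        # costs are invented, not the ISP topology used in the paper.
        import heapq
        from typing import Dict, List, Tuple

        Graph = Dict[str, List[Tuple[str, float]]]  # node -> [(neighbour, weight)]

        def dijkstra(graph: Graph, src: str, dst: str) -> Tuple[float, List[str]]:
            pq = [(0.0, src, [src])]
            seen = set()
            while pq:
                cost, node, path = heapq.heappop(pq)
                if node == dst:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt, w in graph.get(node, []):
                    if nxt not in seen:
                        heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
            return float("inf"), []

        # Per-link energy cost (made-up units per transferred GB).
        energy: Graph = {
            "user": [("r1", 6.0), ("r2", 2.0)],
            "r1":   [("dc", 1.0)],
            "r2":   [("r3", 2.0)],
            "r3":   [("dc", 2.0)],
        }
        # Same topology with unit weights gives the minimum-hop baseline.
        hops: Graph = {u: [(v, 1.0) for v, _ in adj] for u, adj in energy.items()}

        print(dijkstra(hops, "user", "dc"))    # fewest hops: user -> r1 -> dc
        print(dijkstra(energy, "user", "dc"))  # least energy: user -> r2 -> r3 -> dc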