53 research outputs found

    Energy Efficient Service Delivery in Clouds in Compliance with the Kyoto Protocol

    Full text link
    Cloud computing is revolutionizing the ICT landscape by providing scalable and efficient computing resources on demand. The ICT industry, and data centers in particular, is responsible for considerable CO2 emissions and will soon face legislative restrictions, such as the Kyoto Protocol, that define caps at different organizational levels (country, industry branch, etc.). Much has been done on energy-efficient data centers, yet very little work defines flexible models that account for CO2. In this paper we present a first attempt at modeling data centers in compliance with the Kyoto Protocol. We discuss a novel approach for trading credits for emission reductions across data centers so that each complies with its constraints. CO2 caps can be integrated with Service Level Agreements and juxtaposed with other computing commodities (e.g., computational power, storage), setting a foundation for next-generation schedulers and pricing models that support Kyoto-compliant CO2 trading schemes.
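
    A minimal sketch of how such cap-aware admission and credit trading could look in code is given below; it is purely illustrative, and all names (DataCenter, co2_cap_kg, trade_credits, admit_job) are assumptions rather than the paper's actual model or API.

```python
# Illustrative sketch only: a toy model of CO2-capped data centers that trade
# emission credits, loosely inspired by the approach described above.
# All names (DataCenter, co2_cap_kg, trade_credits) are assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    co2_cap_kg: float       # emission cap, e.g. derived from a Kyoto-style allocation
    co2_emitted_kg: float   # emissions accumulated so far

    def headroom(self) -> float:
        """Remaining emission budget (negative if the cap is already exceeded)."""
        return self.co2_cap_kg - self.co2_emitted_kg

def trade_credits(buyer: DataCenter, seller: DataCenter, amount_kg: float) -> bool:
    """Transfer emission headroom from seller to buyer if the seller can spare it."""
    if seller.headroom() >= amount_kg:
        seller.co2_cap_kg -= amount_kg
        buyer.co2_cap_kg += amount_kg
        return True
    return False

def admit_job(dc: DataCenter, job_co2_kg: float, partners: list[DataCenter]) -> bool:
    """Admit a job only if the cap allows it, buying credits from partners if needed."""
    deficit = job_co2_kg - dc.headroom()
    if deficit > 0:
        for partner in partners:
            if trade_credits(dc, partner, deficit):
                break
        else:
            return False  # no partner could cover the deficit: reject the job
    dc.co2_emitted_kg += job_co2_kg
    return True
```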

    Towards Model-Driven Provisioning, Deployment, Monitoring, and Adaptation of Multi-cloud Systems

    Full text link

    A Survey and Taxonomy of Self-Aware and Self-Adaptive Cloud Autoscaling Systems

    Get PDF
    An autoscaling system can reconfigure cloud-based services and applications, through various configurations of cloud software and provisioning of hardware resources, to adapt to a changing environment at runtime. Such behavior provides the foundation for achieving elasticity in the modern cloud computing paradigm. Given the dynamic and uncertain nature of the shared cloud infrastructure, the cloud autoscaling system has been engineered as one of the most complex, sophisticated, and intelligent artifacts created by humans, aiming to achieve self-aware, self-adaptive, and dependable runtime scaling. Yet existing Self-aware and Self-adaptive Cloud Autoscaling Systems (SSCAS) are not at a state where they can be reliably exploited in the cloud. In this article, we survey the state-of-the-art research on SSCAS and provide a comprehensive taxonomy for the field. We present a detailed analysis of the results and offer insights on open challenges, as well as promising directions worth investigating in future work in this area of research. Our survey and taxonomy contribute to the fundamentals of engineering more intelligent autoscaling systems in the cloud.
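
    For context, the sketch below shows the simplest form of autoscaling logic, a threshold-based control loop; the self-aware, self-adaptive systems surveyed in the article are considerably more sophisticated, and the function names and thresholds here are assumptions made for illustration.

```python
# Minimal sketch of a threshold-based autoscaling loop; the SSCAS designs surveyed
# above are far more sophisticated (self-aware, model-driven, uncertainty-handling).
# get_average_cpu(), scale_to(), and the thresholds are assumptions for illustration.
import time

MIN_REPLICAS, MAX_REPLICAS = 1, 20
SCALE_UP_CPU, SCALE_DOWN_CPU = 0.75, 0.30   # utilisation thresholds

def autoscale(get_average_cpu, scale_to, current_replicas: int, interval_s: int = 60):
    """Poll utilisation periodically and adjust the replica count within fixed bounds."""
    while True:
        cpu = get_average_cpu()                 # e.g. averaged over the last interval
        if cpu > SCALE_UP_CPU and current_replicas < MAX_REPLICAS:
            current_replicas += 1
            scale_to(current_replicas)
        elif cpu < SCALE_DOWN_CPU and current_replicas > MIN_REPLICAS:
            current_replicas -= 1
            scale_to(current_replicas)
        time.sleep(interval_s)
```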

    SLA-Driven Cloud Computing Domain Representation and Management

    Full text link
    The assurance of Quality of Service (QoS) to applications, although identified as a key feature long ago [1], remains one of the fundamental unsolved challenges. In the Cloud Computing context, Quality of Service is defined as the measure of compliance with certain user requirements in the delivery of a cloud resource, such as CPU or memory load for a virtual machine, or more abstract, higher-level concepts such as response time or availability. Several research groups, both from academia and industry, have started working on describing the QoS levels that define the conditions under which the service needs to be delivered, as well as on developing the means to effectively manage and evaluate the state of these conditions. [2] propose Service Level Agreements (SLAs) as the vehicle for the definition of QoS guarantees and the provision and management of resources. A Service Level Agreement (SLA) is a formal contract between providers and consumers that defines the quality of service, the obligations, and the guarantees in the delivery of a specific good. In the context of Cloud computing, SLAs are considered to be machine-readable documents that are automatically managed by the provider's platform. SLAs need to be dynamically adapted to the variable conditions of resources and applications. In a multilayer architecture, different parts of an SLA may refer to different resources. SLAs may therefore express complex relationships between entities in a changing environment and be applied to resource selection to implement intelligent scheduling algorithms. SLAs are thus widely regarded as a key feature for the future development of Cloud platforms. However, the application of SLAs to Grid and Cloud systems leaves many research lines open. One of these challenges, the modeling of the landscape, lies at the core of the objectives of this Ph.D. thesis.
    García García, A. (2014). SLA-Driven Cloud Computing Domain Representation and Management [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/36579
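
    As an illustration of what a machine-readable SLA with QoS terms might look like, the sketch below defines a toy SLA structure and a compliance check; the field names and structure are assumptions, not any particular SLA specification.

```python
# Illustrative sketch of a machine-readable SLA with QoS terms and a compliance check.
# The term names and structure are assumptions, not any specific SLA schema.
from dataclasses import dataclass

@dataclass
class SLATerm:
    metric: str        # e.g. "response_time_ms", "availability"
    operator: str      # "<=" or ">="
    threshold: float

@dataclass
class SLA:
    provider: str
    consumer: str
    terms: list[SLATerm]

def violated_terms(sla: SLA, measurements: dict[str, float]) -> list[SLATerm]:
    """Return the SLA terms whose measured value breaks the agreed threshold."""
    violations = []
    for term in sla.terms:
        value = measurements.get(term.metric)
        if value is None:
            continue  # no measurement for this metric yet
        ok = value <= term.threshold if term.operator == "<=" else value >= term.threshold
        if not ok:
            violations.append(term)
    return violations

# Example: an SLA guaranteeing response time and availability for a hosted service.
sla = SLA("provider-A", "tenant-B", [
    SLATerm("response_time_ms", "<=", 200.0),
    SLATerm("availability", ">=", 0.999),
])
print(violated_terms(sla, {"response_time_ms": 340.0, "availability": 0.9995}))
```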

    Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds

    Get PDF
    Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, adopting cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance tradeoffs to successfully migrate applications. These challenges can be broken down into three primary concerns: determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention under multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that deliver demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it designs resource utilization models, based on step-wise multiple linear regression and artificial neural networks, that support prediction of better-performing component compositions; the total number of possible compositions is governed by Bell's number, which yields a combinatorially explosive search space. Second, it includes algorithms that improve VM placements to mitigate resource heterogeneity and contention, using a load-aware VM placement scheduler and autonomous detection of under-performing VMs to spur replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support the determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with an average mean absolute error of only 0.3125 VMs across multiple workloads.
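
    To give a feel for the combinatorial explosion mentioned above, the short sketch below computes Bell numbers (the number of ways to partition n components across VMs) using the standard Bell triangle; it is textbook combinatorics, not code from the dissertation.

```python
# The number of ways to partition n application components across VMs is the Bell
# number B(n); this standard Bell-triangle computation (not the dissertation's code)
# shows how quickly the composition search space explodes.
def bell_numbers(n_max: int) -> list[int]:
    """Return [B(0), B(1), ..., B(n_max)] using the Bell triangle."""
    bells = [1]                 # B(0) = 1
    row = [1]
    for _ in range(n_max):
        new_row = [row[-1]]     # next row starts with the last entry of the previous row
        for value in row:
            new_row.append(new_row[-1] + value)
        bells.append(new_row[0])
        row = new_row
    return bells

print(bell_numbers(10))  # [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
```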

    Simulation of the performance of complex data-intensive workflows

    Get PDF
    Recently, cloud computing has been used for analytical and data-intensive processes, as it offers many attractive features, including resource pooling, on-demand capability, and rapid elasticity. Scientific workflows use these features to tackle the problems of complex data-intensive applications. Data-intensive workflows are composed of many tasks that may involve large input data sets and produce large amounts of data as output, and they typically run in highly dynamic environments. Resources should therefore be allocated dynamically as the workflow's demand changes, since over-provisioning increases cost and under-provisioning causes Service Level Agreement (SLA) violations and poor Quality of Service (QoS). Performance prediction of complex workflows is a necessary step prior to deploying the workflow. Performance analysis of complex data-intensive workflows is challenging due to the complexity of their structure, the diversity of big data, and data dependencies, in addition to the examination required of the performance and challenges associated with running these workflows in a real cloud. In this thesis, a solution to these challenges is explored, using a Next Generation Sequencing (NGS) workflow pipeline as a case study, which may require hundreds to thousands of CPU hours to process a terabyte of data. We propose a methodology to model, simulate, and predict the runtime and the number of resources used by complex data-intensive workflows. One contribution of our simulation methodology is that it can extract the simulation parameters (e.g., MIPS and bandwidth values) required for constructing a training set, and it gives a fairly accurate prediction of the runtime for cluster sizes much larger than those used in training the prediction model. The proposed methodology permits the derivation of runtime predictions based on historical data from the provenance files. We present runtime prediction of the complex workflow under different cases of running it in the cloud, such as execution failure and library deployment time. In the case of failure, the framework can apply the prediction partially, considering only the successful parts of the pipeline; in the other case, the framework can predict with or without considering the time to deploy libraries. To further improve prediction accuracy, we propose a simulation model that handles I/O contention.
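
    A minimal sketch of runtime prediction from historical, provenance-style records is shown below, fitting a simple least-squares model of runtime against input size and cluster size; the thesis methodology additionally couples such prediction with simulation (extracting MIPS and bandwidth parameters), which is not reproduced here, and the data and model form are assumptions.

```python
# Minimal sketch of runtime prediction from historical provenance-style records using
# ordinary least squares on simple derived features; the thesis couples this kind of
# model with simulation (MIPS/bandwidth extraction), which is not shown here.
import numpy as np

# (input size in GB, number of nodes, observed runtime in hours) -- hypothetical records
history = np.array([
    [100.0,  8, 20.5],
    [250.0,  8, 52.0],
    [250.0, 16, 27.5],
    [500.0, 16, 56.0],
    [500.0, 32, 30.0],
])

# Model: runtime ~ a * size + b * size / nodes + c  (captures serial and parallel parts)
size, nodes, runtime = history[:, 0], history[:, 1], history[:, 2]
X = np.column_stack([size, size / nodes, np.ones_like(size)])
coeffs, *_ = np.linalg.lstsq(X, runtime, rcond=None)

def predict_runtime(size_gb: float, n_nodes: int) -> float:
    """Predict runtime (hours) for a workload size and cluster size."""
    a, b, c = coeffs
    return a * size_gb + b * size_gb / n_nodes + c

print(f"Predicted runtime for 1 TB on 64 nodes: {predict_runtime(1000.0, 64):.1f} h")
```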

    CLOUD COMPUTING MADE EASY

    Get PDF
    Cloud computing is the delivery of computing as a service rather than as a product, whereby shared resources, software, and information are provided to computers and other devices as a utility over a network. In a cloud computing system, there is a significant workload shift: local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that makes up the cloud handles them instead, and hardware and software demands on the user's side decrease. The only thing the user's computer needs to be able to run is the cloud computing system's interface software, which can be as simple as a web browser; the cloud's network takes care of the rest. This article is prepared based on the author's experience teaching the subject at the M.Tech level in recent years, keeping in view the VTU syllabus in particular.

    Architectural stability of self-adaptive software systems

    Get PDF
    This thesis studies the notion of stability in software engineering with the aim of understanding its dimensions, facets, and aspects, as well as characterising it. The thesis further investigates behavioural stability at the architectural level, as a property concerned with the architecture's capability to maintain the achievement of the expected quality of service and to accommodate runtime changes, in order to delay the drifting and phasing-out of the architecture that result from continued failure to provide the required qualities. The research aims to provide systematic and methodological support for analysing, modelling, designing, and evaluating architectural stability. The novelty of this research is the consideration of stability during runtime operation, focusing on the stable provision of quality of service without violations. As the runtime dimension is associated with adaptations, the research investigates stability in the context of self-adaptive software architectures, where runtime stability is challenged by the quality of adaptation, which in turn affects the quality of service. The research evaluation focuses on effectiveness, scale, and accuracy in handling runtime dynamics, using self-adaptive cloud architectures.
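
    As a toy illustration of the runtime notion of stability discussed above, the sketch below flags an architecture as stable over a window only if its QoS objective was met without violations; the class and parameter names are assumptions, and the thesis's analysis framework is far broader.

```python
# Toy sketch of runtime stability monitoring: an architecture is treated as "stable"
# over a window if its QoS objective (here, response time) was met without violations.
# This only illustrates the notion discussed above, not the thesis's framework.
from collections import deque

class StabilityMonitor:
    def __init__(self, objective_ms: float, window: int = 100):
        self.objective_ms = objective_ms
        self.samples = deque(maxlen=window)   # most recent QoS observations

    def record(self, response_time_ms: float) -> None:
        self.samples.append(response_time_ms)

    def is_stable(self) -> bool:
        """Stable if every observation in the window met the QoS objective."""
        return bool(self.samples) and all(t <= self.objective_ms for t in self.samples)

monitor = StabilityMonitor(objective_ms=200.0, window=50)
for t in (120.0, 150.0, 180.0, 210.0):   # hypothetical observations
    monitor.record(t)
print(monitor.is_stable())  # False: the 210 ms sample violated the objective
```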