31,675 research outputs found

    AUTONOMIC MANAGEMENT OF SOFTWARE AS A SERVICE SYSTEMS WITH MULTIPLE QUALITY OF SERVICE CLASSES

    In recent years the emergence of Software as a Service (SaaS) provision and cloud computing in general has had a tremendous impact on corporate information technology. While the implementation and successful operation of powerful information systems continues to be a cornerstone of success in modern enterprises, the ability to acquire IT infrastructure, software, or platforms on a pay-as-you-go basis has opened a new avenue for optimizing operational costs and processes. In this context we target elastic SaaS systems with on-demand cloud resource provisioning and implement an autonomic management artifact. Our framework forecasts future user behavior based on historic data, analyzes the impact of different workload levels on system performance based on a non-linear performance model, analyzes the economic impact of different provisioning strategies, derives an optimal operation strategy, and automatically assigns requests from users belonging to different Quality of Service (QoS) classes to the appropriate server instances. More generally, our artifact optimizes IT system operation based on a holistic evaluation of key aspects of service operation (e.g., system usage patterns, system performance, Service Level Agreements). The evaluation of our prototype, based on a real production system workload trace, indicates a cost-of-operation reduction of up to 60 percent without compromising QoS requirements.
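    To make the shape of such a framework concrete, here is a minimal, hypothetical sketch of a forecast, performance-model, and provisioning-plan loop for a single QoS class. All names, the exponential-smoothing forecast, and the M/M/1-style latency model are illustrative assumptions for this listing, not the authors' implementation.

        # Illustrative sketch only: forecast -> performance model -> provisioning plan,
        # in the spirit of the framework described above. Every name and model here is
        # a simplifying assumption, not the paper's artifact.
        from dataclasses import dataclass

        @dataclass
        class QoSClass:
            name: str
            max_response_ms: float        # SLA latency target
            penalty_per_interval: float   # cost charged when the SLA is violated

        def forecast_demand(history):
            """Naive exponential smoothing over recent arrival rates (req/s)."""
            alpha, estimate = 0.5, history[0]
            for observed in history[1:]:
                estimate = alpha * observed + (1 - alpha) * estimate
            return estimate

        def predicted_response_ms(arrival_rate, servers, service_rate=50.0):
            """Toy non-linear performance model: M/M/1-like queue per server."""
            per_server = arrival_rate / servers
            if per_server >= service_rate:
                return float("inf")       # saturated: latency grows without bound
            return 1000.0 / (service_rate - per_server)

        def plan_capacity(arrival_rate, qos, cost_per_server, max_servers=64):
            """Pick the cheapest server count whose predicted latency meets the SLA."""
            best_n, best_cost = max_servers, float("inf")
            for n in range(1, max_servers + 1):
                latency = predicted_response_ms(arrival_rate, n)
                penalty = qos.penalty_per_interval if latency > qos.max_response_ms else 0.0
                cost = n * cost_per_server + penalty
                if cost < best_cost:
                    best_n, best_cost = n, cost
            return best_n

        if __name__ == "__main__":
            gold = QoSClass("gold", max_response_ms=100.0, penalty_per_interval=500.0)
            rate = forecast_demand([120.0, 150.0, 180.0, 210.0])   # req/s per interval
            print(f"forecast {rate:.1f} req/s -> provision {plan_capacity(rate, gold, 2.0)} servers")

    A real artifact would re-run such a loop per QoS class and per planning interval, trading server cost against SLA penalties as the abstract describes.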

    Review of the environmental and organisational implications of cloud computing: final report.

    Cloud computing – where elastic computing resources are delivered over the Internet by external service providers – is generating significant interest within HE and FE. In the cloud computing business model, organisations or individuals contract with a cloud computing service provider on a pay-per-use basis to access data centres, application software or web services from any location. This provides an elasticity of provision which the customer can scale up or down to meet demand. This form of utility computing potentially opens up a new paradigm in the provision of IT to support administrative and educational functions within HE and FE. Further, the economies of scale and increasingly energy efficient data centre technologies which underpin cloud services mean that cloud solutions may also have a positive impact on carbon footprints. In response to the growing interest in cloud computing within UK HE and FE, JISC commissioned the University of Strathclyde to undertake a Review of the Environmental and Organisational Implications of Cloud Computing in Higher and Further Education [19].

    On the Economics of Cloud Markets

    Cloud computing is a paradigm that has the potential to transform and revolutionize the next-generation IT industry by making software available to end-users as a service. A cloud, also commonly known as a cloud network, typically comprises hardware (a network of servers) and a collection of software that is made available to end-users in a pay-as-you-go manner. Multiple public cloud providers (e.g., Amazon) co-existing in a cloud computing market provide similar services (software as a service) to their clients, both in terms of the nature of an application and in the quality of service (QoS) provision. The decision of whether a cloud hosts (or finds it profitable to host) a service in the long term depends jointly on the price it sets, the QoS guarantees it provides to its customers, and the satisfaction of the advertised guarantees. In this paper, we devise and analyze three inter-organizational economic models relevant to cloud networks. We formulate our problems as non-cooperative price and QoS games between multiple cloud providers existing in a cloud market. We prove that a unique pure strategy Nash equilibrium (NE) exists in two of the three models. Our analysis paves the path for each cloud provider to 1) know what prices and QoS level to set for end-users of a given service type, such that the provider could exist in the cloud market, and 2) practically and dynamically provision appropriate capacity for satisfying advertised QoS guarantees. (Comment: 7 pages, 2 figures)
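    As a rough illustration of the kind of model this abstract describes (the notation below is generic, not taken from the paper), each provider i chooses a price p_i and a QoS level q_i to maximize its profit, where D_i is the demand it attracts given all providers' choices and c_i(q_i) is its per-request cost of delivering QoS q_i; a pure-strategy Nash equilibrium is a profile from which no provider gains by deviating unilaterally:

        % Generic price--QoS competition sketch; notation is illustrative, not the paper's.
        \begin{align*}
          \pi_i(p_i, q_i;\, p_{-i}, q_{-i}) &= \bigl(p_i - c_i(q_i)\bigr)\, D_i(p_i, q_i;\, p_{-i}, q_{-i}), \\
          (p^*, q^*) \text{ is a pure-strategy NE} \;\Longleftrightarrow\;
          \pi_i(p_i^*, q_i^*;\, p_{-i}^*, q_{-i}^*) &\ge \pi_i(p_i, q_i;\, p_{-i}^*, q_{-i}^*)
          \quad \forall\, (p_i, q_i),\ \forall\, i.
        \end{align*}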

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources. (Comment: 15 pages, 9 figures)

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources as services delivered via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, optimally scheduling them has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy at two levels of scheduling cloud resources. It then paints a landscape of the scheduling problem and its solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
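    As a concrete (and deliberately tiny) illustration of how EC algorithms are applied to this problem, the sketch below uses a plain genetic algorithm to assign tasks to virtual machines so as to minimize makespan. The problem instance, operators, and parameters are hypothetical choices made for brevity, not drawn from the survey.

        # Illustrative sketch only: a minimal genetic algorithm for task-to-VM
        # scheduling that minimizes makespan. Instance data and GA parameters
        # are made up for the example.
        import random

        TASKS = [4, 7, 2, 9, 5, 3, 8, 6]     # task lengths (arbitrary work units)
        VM_SPEEDS = [1.0, 2.0, 4.0]          # relative processing speeds of three VMs

        def makespan(assignment):
            """Finish time of the busiest VM under a task -> VM assignment."""
            load = [0.0] * len(VM_SPEEDS)
            for task, vm in zip(TASKS, assignment):
                load[vm] += task / VM_SPEEDS[vm]
            return max(load)

        def evolve(pop_size=30, generations=100, mutation_rate=0.1, seed=42):
            rng = random.Random(seed)
            pop = [[rng.randrange(len(VM_SPEEDS)) for _ in TASKS] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=makespan)                  # lower makespan = fitter
                survivors = pop[: pop_size // 2]        # truncation selection
                children = []
                while len(children) < pop_size - len(survivors):
                    parent_a, parent_b = rng.sample(survivors, 2)
                    cut = rng.randrange(1, len(TASKS))  # one-point crossover
                    child = parent_a[:cut] + parent_b[cut:]
                    if rng.random() < mutation_rate:    # random reassignment mutation
                        child[rng.randrange(len(TASKS))] = rng.randrange(len(VM_SPEEDS))
                    children.append(child)
                pop = survivors + children
            return min(pop, key=makespan)

        if __name__ == "__main__":
            best = evolve()
            print("best assignment:", best, "makespan:", round(makespan(best), 2))

    Real cloud schedulers of the kind surveyed replace this single makespan objective with multiobjective fitness functions (cost, energy, SLA violations) and far larger, dynamic task sets.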

    Notes on Cloud computing principles

    This letter provides a review of fundamental distributed systems and economic Cloud computing principles. These principles are frequently deployed in their respective fields, but their inter-dependencies are often neglected. Given that Cloud computing is first and foremost a new business model, a new model for selling computational resources, the understanding of these concepts is facilitated by treating them in unison. Here, we review some of the most important concepts and how they relate to each other.