
    Resource provisioning in Science Clouds: Requirements and challenges

    Cloud computing has permeated the information technology industry in the last few years and is now emerging in scientific environments. Science user communities demand a broad range of computing power to satisfy the needs of their high-performance applications, drawing on facilities such as local clusters, high-performance computing systems, and computing grids. Different computational models give rise to different workloads, and the cloud is already considered a promising paradigm for supporting them. The scheduling and allocation of resources is always a challenging matter in any form of computation, and clouds are no exception. Science applications have unique features that differentiate their workloads; hence, their requirements have to be taken into consideration when building a Science Cloud. This paper discusses the main scheduling and resource allocation challenges for any Infrastructure as a Service provider supporting scientific applications.

    Service Provisioning through Opportunistic Computing in Mobile Clouds

    Mobile clouds are a new paradigm enabling mobile users to access both the heterogeneous services present in a pervasive mobile environment and the rich service offers of cloud infrastructures. In mobile computing environments, mobile devices can also act as service providers, using approaches conceptually similar to service-oriented models. Many approaches implement service provisioning between mobile devices through cloud-based handlers, with mobility playing a disruptive role for the functionality offered by the system. In our approach, we exploit the opportunistic computing model, whereby mobile devices use direct contacts to provide services to each other, without necessarily going through conventional cloud services residing in the Internet. Conventional cloud services are therefore complemented by a mobile cloud formed directly by the mobile devices. This paper presents an algorithm for service selection and composition in this type of mobile cloud environment that estimates the execution time of a service composition. The model enables the system to estimate the execution time of the alternative compositions that can satisfy a user's request and then choose the best one among them. We compare the performance of our algorithm with alternative strategies, showing its superior performance from a number of standpoints. In particular, we show that our algorithm can manage a higher load of requests without destabilizing the system, unlike the other strategies. When the load of requests is manageable for all strategies, our algorithm spends up to 75% less time on average to resolve requests.
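    The selection step described here can be pictured as choosing, among the candidate compositions, the one with the lowest estimated completion time. The sketch below illustrates that idea under simplifying assumptions (sequential compositions, fixed per-invocation execution and transfer estimates); the class and field names are illustrative only and are not the paper's actual model, which derives its estimates from opportunistic contact statistics.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceInvocation:
    provider: str         # device expected to execute this service (illustrative)
    exec_time: float      # estimated execution time on that device, in seconds
    transfer_time: float  # estimated time to reach the provider and retrieve results

@dataclass
class Composition:
    invocations: List[ServiceInvocation]

    def estimated_time(self) -> float:
        # Sequential composition: the estimate is the sum of per-invocation
        # execution and transfer estimates.
        return sum(i.exec_time + i.transfer_time for i in self.invocations)

def choose_composition(candidates: List[Composition]) -> Composition:
    # Select the alternative composition with the lowest estimated completion time.
    return min(candidates, key=lambda c: c.estimated_time())

if __name__ == "__main__":
    best = choose_composition([
        Composition([ServiceInvocation("phone-A", 2.0, 0.5),
                     ServiceInvocation("tablet-B", 1.0, 1.2)]),
        Composition([ServiceInvocation("phone-C", 3.5, 0.1)]),
    ])
    print(f"best estimate: {best.estimated_time():.1f} s")
```

    A fuller estimator would also weight the probability of actually meeting each provider during its contact window, which is what makes the opportunistic setting harder than plain shortest-path selection.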

    A Self-adaptive Agent-based System for Cloud Platforms

    Cloud computing is a model for enabling on-demand network access to a shared pool of computing resources that can be dynamically allocated and released with minimal effort. However, this task can be complex in highly dynamic environments, with varied resources to allocate to a growing number of users with different requirements. In this work, we propose a Cloud architecture based on a multi-agent system exhibiting self-adaptive behavior to address dynamic resource allocation. This self-adaptive system follows a MAPE-K approach to reason and act, according to QoS, Cloud service information, and propagated run-time information, in order to detect QoS degradation and make better resource allocation decisions. We validate the proposed Cloud architecture by simulation. Results show that it properly allocates resources to reduce energy consumption while satisfying the QoS demanded by users.
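    As a rough illustration of the control loop such a system runs, here is a minimal MAPE-K sketch: Monitor collects QoS metrics, Analyze checks them against a target, Plan picks an adaptation, and Execute applies it, all sharing a Knowledge store. The metric names, thresholds, and stubbed actions are assumptions for illustration only; the paper's agents additionally exchange propagated run-time information among themselves.

```python
import time

class Knowledge:
    """Shared store of QoS targets and the latest run-time observations."""
    def __init__(self, qos_target_ms=200):
        self.qos_target_ms = qos_target_ms
        self.latest_metrics = {}

def monitor(knowledge):
    # Collect current QoS metrics (stubbed here with a fixed observation).
    knowledge.latest_metrics = {"response_time_ms": 350, "vm_utilization": 0.92}

def analyze(knowledge):
    # Detect QoS degradation against the stored target.
    return knowledge.latest_metrics.get("response_time_ms", 0) > knowledge.qos_target_ms

def plan(knowledge):
    # Decide an adaptation: add capacity if overloaded, consolidate otherwise.
    if knowledge.latest_metrics.get("vm_utilization", 0) > 0.8:
        return "scale_out"
    return "consolidate"

def execute(action):
    # Apply the chosen adaptation through the provider's API (stubbed as a print).
    print(f"executing adaptation: {action}")

def mape_k_loop(iterations=3, period_s=0.1):
    knowledge = Knowledge()
    for _ in range(iterations):
        monitor(knowledge)
        if analyze(knowledge):
            execute(plan(knowledge))
        time.sleep(period_s)

if __name__ == "__main__":
    mape_k_loop()
```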

    Service Level Agreement as an Instrument to Enhance Trust in Cloud Computing – An Analysis of Infrastructure-as-a-Service Providers

    We analyze service level agreements (SLAs) for cloud computing services, in particular SLAs published by infrastructure-as-a-service (IaaS) providers on their websites. The rationale is to investigate the potential and actual roles of SLAs as trust-enhancing instruments. Cloud computing is still not as widespread as it could be, because many decision makers do not sufficiently trust the technology or the providers and are therefore skeptical about adopting it. Enhancing trust could significantly advance cloud computing. We discuss the main aspects of trust as well as the typical characteristics described in SLAs. Following this, we present a study of actual service level agreements offered by IaaS providers and published on their websites. One of the findings is that at present only a few providers exploit the full potential of SLAs as trust-enhancing instruments.

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned in running physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources.
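    To make the elastic-provisioning idea concrete, the following sketch shows one way a facility might burst extra worker VMs into EC2 when its local batch capacity is exceeded, using boto3's run_instances call. The AMI ID, instance type, region, and threshold logic are placeholder assumptions for illustration and are not HEPCloud's actual decision engine; AWS credentials are assumed to be configured in the environment.

```python
import boto3

def burst_to_ec2(pending_jobs, local_capacity,
                 ami_id="ami-0123456789abcdef0",   # placeholder worker image
                 instance_type="m4.xlarge",        # assumed instance type
                 region="us-east-1",
                 max_vms=10):
    """Launch enough EC2 worker VMs to absorb jobs that exceed local capacity."""
    overflow = pending_jobs - local_capacity
    if overflow <= 0:
        return []  # local resources suffice; no cloud burst needed
    count = min(max_vms, overflow)
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.run_instances(ImageId=ami_id, InstanceType=instance_type,
                             MinCount=count, MaxCount=count)
    # Return the launched instance IDs so they can be joined to the batch pool.
    return [inst["InstanceId"] for inst in resp["Instances"]]
```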