
    Predicting Scheduling Failures in the Cloud

    Cloud computing has emerged as a key technology for delivering and managing computing, platform, and software services over the Internet. Task scheduling algorithms play an important role in the efficiency of cloud computing services, as they aim to reduce the turnaround time of tasks and improve resource utilization. Several task scheduling algorithms have been proposed in the literature for cloud computing systems, the majority relying on the computational complexity of tasks and the distribution of resources. However, many tasks scheduled by these algorithms still fail because of unforeseen changes in the cloud environment. In this paper, using task execution and resource utilization data extracted from the execution traces of real-world applications at Google, we explore the possibility of predicting the scheduling outcome of a task using statistical models. If task failures can be predicted successfully, the execution time of jobs could be reduced by rescheduling failing tasks earlier (i.e., before their actual failing time). Our results show that statistical models can predict task failures with a precision of up to 97.4% and a recall of up to 96.2%. We simulate the potential benefits of such predictions using the GloudSim toolkit and find that they can increase the number of finished tasks by up to 40%. We also perform a case study using the Hadoop framework of Amazon Elastic MapReduce (EMR) and the jobs of a gene expression correlation analysis study from breast cancer research. We find that when the Hadoop scheduler is extended with our predictive models, the percentage of failed jobs can be reduced by up to 45%, with an overhead of less than 5 minutes.
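
    As an illustration of the kind of failure predictor described above, the sketch below trains a random-forest classifier on per-task features derived from a cluster trace; the feature names, the "task_features.csv" layout, and the choice of classifier are assumptions made here for illustration, not the paper's actual pipeline.

        # Minimal sketch of a task-failure predictor trained on trace-derived features.
        # The feature names, the "task_features.csv" layout, and the random-forest
        # choice are illustrative assumptions, not the paper's actual pipeline.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import precision_score, recall_score
        from sklearn.model_selection import train_test_split

        # Hypothetical per-task records extracted from a cluster trace.
        tasks = pd.read_csv("task_features.csv")
        features = ["cpu_request", "mem_request", "mean_cpu_usage",
                    "mean_mem_usage", "priority", "resubmissions"]
        X, y = tasks[features], tasks["failed"]  # failed = 1 if the task eventually failed

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)

        pred = model.predict(X_test)
        print("precision:", precision_score(y_test, pred))
        print("recall:   ", recall_score(y_test, pred))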

    Extending Demand Response to Tenants in Cloud Data Centers via Non-intrusive Workload Flexibility Pricing

    Participating in demand response programs is a promising tool for reducing energy costs in data centers by modulating energy consumption. Towards this end, data centers can employ a rich set of resource management knobs, such as workload shifting and dynamic server provisioning. Nonetheless, these knobs may not be readily available in a cloud data center (CDC) that serves cloud tenants/users, because workloads in CDCs are managed by the tenants themselves, who are typically charged under usage-based or flat-rate pricing and often have no incentive to cooperate with the CDC operator on demand response and cost saving. To break this "split incentive" hurdle, a few recent studies have tried market-based mechanisms, such as dynamic pricing, inside CDCs. However, such mechanisms often rely on complex designs that are hard to implement and difficult for tenants to cope with. To address this limitation, we propose a novel incentive mechanism that is not dynamic, i.e., it keeps the pricing of cloud resources unchanged over long periods. While it charges tenants based on Usage-based Pricing (UP), as used by today's major cloud operators, it rewards tenants in proportion to the deadlines they set for completing their workloads. We call this new mechanism Usage-based Pricing with Monetary Reward (UPMR). We demonstrate the effectiveness of UPMR both analytically and empirically, showing that it can reduce the CDC operator's energy cost by 12.9% while increasing its profit by 4.9%, compared to the state-of-the-art approaches used by today's CDC operators to charge their tenants.
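
    A toy sketch of the UPMR billing idea described above: tenants are charged under usage-based pricing and receive a reward that grows with the deadline slack they accept. The unit price, reward rate, and the linear reward form are assumptions made here for illustration, not values from the paper.

        # Toy illustration of Usage-based Pricing with Monetary Reward (UPMR).
        # The unit price, reward rate, and the linear reward-vs-deadline form are
        # assumptions made for illustration, not values from the paper.

        UNIT_PRICE = 0.05    # $ per server-hour of usage (assumed)
        REWARD_RATE = 0.002  # $ per server-hour per hour of deadline slack (assumed)

        def upmr_bill(usage_hours: float, deadline_hours: float,
                      min_completion_hours: float) -> float:
            """Charge under usage-based pricing, then subtract a reward that is
            proportional to the slack the tenant allows beyond the fastest completion."""
            charge = UNIT_PRICE * usage_hours
            slack = max(0.0, deadline_hours - min_completion_hours)
            reward = REWARD_RATE * usage_hours * slack
            return max(0.0, charge - reward)

        # A tenant with 1000 server-hours of work pays less when it accepts a 12-hour
        # deadline than when it insists on the earliest possible (4-hour) completion.
        print(upmr_bill(1000, deadline_hours=12, min_completion_hours=4))  # 34.0
        print(upmr_bill(1000, deadline_hours=4, min_completion_hours=4))   # 50.0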

    Reducing the operational cost of cloud data centers through renewable energy

    The success of cloud computing services has led to large computing infrastructures that are complex to manage and very costly to operate. In particular, power supply dominates the operational costs of these infrastructures, and several solutions have to be put in place to alleviate these costs and make the whole infrastructure more sustainable. In this paper, we investigate the case of a complex infrastructure composed of data centers (DCs) located in different geographical areas, in which renewable energy generators are installed and co-located with the data centers to reduce the amount of energy that must be purchased from the power grid. Since renewable energy generators are intermittent, the load management strategies of the infrastructure have to be adapted to the intermittent nature of the sources. In particular, we consider EcoMultiCloud, a multi-objective load management strategy already proposed in the literature, and we adapt it to the presence of renewable energy sources. Cost reduction is thus achieved in the load allocation process, when virtual machines (VMs) are assigned to a data center of the considered infrastructure, by considering both energy cost variations and the local production of renewable energy. Performance is analyzed for a specific infrastructure composed of four data centers. The results show that, despite being intermittent and highly variable, renewable energy can be effectively exploited in geographically distributed data centers when a smart load allocation strategy is implemented. In addition, the results confirm that EcoMultiCloud is very flexible and well suited to the considered scenario.
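
    The allocation step could look roughly like the sketch below, where each incoming VM goes to the data center with the lowest marginal cost of grid energy after forecast local renewable production is subtracted; the cost model and the data-center fields are simplified assumptions, not EcoMultiCloud's actual formulation.

        # Sketch of renewable-aware VM placement across geo-distributed data centers.
        # The linear grid-energy cost model and the data-center fields are simplified
        # assumptions, not the actual EcoMultiCloud formulation.
        from dataclasses import dataclass

        @dataclass
        class DataCenter:
            name: str
            grid_price: float          # $/kWh of grid energy at this site right now
            renewable_forecast: float  # kWh expected from local renewables this slot
            committed_kwh: float       # energy already committed in this slot
            capacity_kwh: float        # energy budget for the slot

        def marginal_grid_cost(dc: DataCenter, vm_kwh: float) -> float:
            """Cost of the extra grid energy this VM would cause, after local renewables."""
            grid_before = max(0.0, dc.committed_kwh - dc.renewable_forecast)
            grid_after = max(0.0, dc.committed_kwh + vm_kwh - dc.renewable_forecast)
            return dc.grid_price * (grid_after - grid_before)

        def place_vm(dcs: list[DataCenter], vm_kwh: float) -> DataCenter:
            """Assign the VM to the feasible data center with the lowest marginal cost."""
            feasible = [d for d in dcs if d.committed_kwh + vm_kwh <= d.capacity_kwh]
            if not feasible:
                raise RuntimeError("no data center can host this VM in the current slot")
            best = min(feasible, key=lambda d: marginal_grid_cost(d, vm_kwh))
            best.committed_kwh += vm_kwh
            return best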

    Optimized Contract-based Model for Resource Allocation in Federated Geo-distributed Clouds

    In the era of Big Data, with data growing massively in scale and velocity, cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual Cloud Service Providers (CSPs) raises new challenges in terms of effective global resource sharing and management of autonomously controlled individual datacenter resources towards a globally efficient resource allocation model. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing; although this maximizes global resource allocation, it results in significant inefficiencies in local resource allocation for individual datacenters and individual cloud providers, leading to unfairness in the revenue and profit they earn. In this paper, we propose a new contract-based resource sharing model for federated geo-distributed clouds that allows CSPs to establish resource sharing contracts with individual datacenters a priori, for defined time intervals during a 24-hour period. Based on the established contracts, individual CSPs employ a contract cost- and duration-aware job scheduling and provisioning algorithm that enables jobs to complete within their response time requirements while achieving both global resource allocation efficiency and local fairness in the profit earned. The proposed techniques are evaluated through extensive experiments using realistic workloads generated from the SHARCNET cluster trace. The experiments demonstrate the effectiveness, scalability, and resource sharing fairness of the proposed model.
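
    A simplified sketch of a contract cost- and duration-aware placement decision is shown below: a job is sent to the cheapest currently active contract that has capacity and whose remaining duration covers the job within its response time requirement. The contract fields and the greedy rule are illustrative assumptions, not the paper's full scheduling and provisioning algorithm.

        # Simplified sketch of contract cost- and duration-aware job placement.
        # The contract fields and the greedy cheapest-feasible rule are illustrative
        # assumptions, not the paper's full scheduling and provisioning algorithm.
        from dataclasses import dataclass

        @dataclass
        class Contract:
            datacenter: str
            cost_per_core_hour: float
            start_hour: int      # contract active from this hour of the day ...
            end_hour: int        # ... until this hour
            free_cores: int

        @dataclass
        class Job:
            cores: int
            runtime_hours: float
            deadline_hours: float  # response time requirement

        def schedule(job: Job, contracts: list[Contract], now_hour: int) -> Contract | None:
            """Pick the cheapest active contract with enough cores whose remaining
            duration can fit the job within its deadline."""
            feasible = [
                c for c in contracts
                if c.start_hour <= now_hour < c.end_hour
                and c.free_cores >= job.cores
                and job.runtime_hours <= min(job.deadline_hours, c.end_hour - now_hour)
            ]
            if not feasible:
                return None  # e.g., fall back to the CSP's own local datacenter
            best = min(feasible, key=lambda c: c.cost_per_core_hour)
            best.free_cores -= job.cores
            return best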

    DEFINITE OUTLAY OPTIMALITY BY SERVING VOLATILE REQUESTS

    Several projects have emerged over the past few years that explore the migration of services to cloud platforms. New applications are being built directly on cloud platforms, while many traditional applications, including content distribution applications, are also considering a move to the cloud. Any such move involves two important tasks: transferring content to cloud storage, and allocating web service load to cloud-based web services. In our work, we design a dynamic control algorithm that places content and dispatches requests in a hybrid cloud spanning geo-distributed data centers, minimizing the overall operational expenditure over time, subject to constraints on service response time.
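
    A per-time-slot dispatcher in this setting might look like the sketch below, filling demand from the cheapest sites (local or cloud) that satisfy the response time limit; the site attributes and the greedy rule are assumptions made for illustration, not the dynamic control algorithm proposed in the paper.

        # Toy per-time-slot request dispatcher for a hybrid (local + cloud) deployment.
        # Site attributes and the greedy cheapest-feasible rule are illustrative
        # assumptions, not the dynamic control algorithm proposed in the paper.
        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            cost_per_request: float  # operational cost of serving one request here
            latency_ms: float        # expected response time from this site
            capacity: int            # requests it can absorb in this slot

        def dispatch(demand: int, sites: list[Site], latency_limit_ms: float) -> dict[str, int]:
            """Fill the demand from the cheapest sites that meet the response time limit."""
            plan: dict[str, int] = {}
            feasible = sorted(
                (s for s in sites if s.latency_ms <= latency_limit_ms),
                key=lambda s: s.cost_per_request,
            )
            for site in feasible:
                if demand == 0:
                    break
                served = min(demand, site.capacity)
                if served > 0:
                    plan[site.name] = served
                    demand -= served
            if demand > 0:
                raise RuntimeError("demand exceeds the feasible capacity in this slot")
            return plan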

    Efficient Resource Management for Cloud Computing Environments

    Cloud computing has recently gained popularity as a cost-effective model for hosting and delivering services over the Internet. In a cloud computing environment, a cloud provider packages the physical resources in its data centers into virtual resources and offers them to service providers using a pay-as-you-go pricing model. Meanwhile, a service provider uses the rented virtual resources to host its services. This large-scale multi-tenant architecture of cloud computing systems raises key challenges regarding how data center resources should be controlled and managed by both service and cloud providers. This thesis addresses several key challenges pertaining to resource management in cloud environments. From the perspective of service providers, we address the problem of selecting appropriate data centers for service hosting, with consideration of resource price, service quality, and dynamic reconfiguration costs. From the perspective of cloud providers, since workload in real data centers can typically be divided into server-based applications and MapReduce applications with different performance and scheduling criteria, we provide separate resource management solutions for each type of workload. For server-based applications, we provide a dynamic capacity provisioning scheme that dynamically adjusts the number of active servers to achieve the best trade-off between energy savings and scheduling delay, while considering the heterogeneous resource characteristics of both workload and physical machines. For MapReduce applications, we first analyze the run-time resource consumption of a large variety of MapReduce jobs and find that it can vary significantly over time, depending on the phase the task is currently executing. We then present a novel scheduling algorithm that controls task execution at the level of phases, with the aim of improving both job running time and resource utilization. Through detailed simulations and experiments using real cloud clusters, we find that our proposed solutions achieve substantial gains compared to current state-of-the-art resource management solutions, and therefore have strong implications for the design of real cloud resource management systems in practice.
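
    As an illustration of the dynamic capacity provisioning trade-off mentioned above, the sketch below picks the number of active servers that minimizes energy cost plus a scheduling-delay penalty; the queueing-style delay proxy and the cost weights are assumptions made for illustration, not the controller developed in the thesis.

        # Sketch of a dynamic capacity-provisioning decision: choose the number of
        # active servers that minimizes energy cost plus a scheduling-delay penalty.
        # The queueing-style delay proxy and the cost weights are illustrative
        # assumptions, not the controller developed in the thesis.

        def expected_delay(arrival_rate: float, service_rate: float, servers: int) -> float:
            """Crude delay proxy that grows sharply as utilization approaches 1."""
            if arrival_rate >= servers * service_rate:
                return float("inf")
            return 1.0 / (servers * service_rate - arrival_rate)

        def choose_servers(arrival_rate: float, service_rate: float, max_servers: int,
                           energy_cost_per_server: float, delay_weight: float) -> int:
            """Scan server counts and keep the one with the lowest combined cost."""
            best_n, best_cost = max_servers, float("inf")
            for n in range(1, max_servers + 1):
                cost = (energy_cost_per_server * n
                        + delay_weight * expected_delay(arrival_rate, service_rate, n))
                if cost < best_cost:
                    best_n, best_cost = n, cost
            return best_n

        # Example: 80 jobs/s arriving, each active server completes 10 jobs/s.
        print(choose_servers(80.0, 10.0, max_servers=50,
                             energy_cost_per_server=1.0, delay_weight=100.0))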