    A Review of Workflow Scheduling in Cloud Computing Environment

    Abstract Over the years, distributed environments have evolved from shared community platforms to utility-based models, the most recent of these being cloud computing. This technology enables the delivery of IT resources over the Internet and follows a pay-as-you-go model in which users are charged based on their usage. There are various types of cloud providers, each with different product offerings, classified into a hierarchy of as-a-service terms: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). There is a large body of research on scheduling in cloud computing, most of it concerned with workflow and job scheduling. A cloud workflow system is a type of platform service that facilitates the automation of distributed applications on cloud infrastructure. Many scheduling policies have been proposed that aim to maximize the amount of work completed while meeting QoS constraints such as deadline and budget. However, many of them fail to incorporate basic principles of cloud computing such as the elasticity and heterogeneity of computing resources. Our work therefore focuses on studying various problems and issues related to workflow scheduling.
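
    As a minimal illustration of the deadline-and-budget check that such QoS-constrained policies rely on, the following Python sketch tests whether a candidate schedule is feasible; the Assignment structure and its field names are assumptions for illustration, not taken from any of the reviewed papers.

        # Hypothetical schedule representation; the surveyed papers do not prescribe this API.
        from dataclasses import dataclass

        @dataclass
        class Assignment:
            finish_time: float  # completion time of the task on its chosen resource
            cost: float         # monetary cost of running the task on that resource

        def meets_qos(schedule, deadline, budget):
            """Return True if a candidate schedule satisfies both QoS constraints."""
            makespan = max(a.finish_time for a in schedule)
            total_cost = sum(a.cost for a in schedule)
            return makespan <= deadline and total_cost <= budget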

    An efficient resource sharing technique for multi-tenant databases

    Multi-tenancy is one of the key components of the cloud computing environment. Multi-tenant database systems in SaaS (Software as a Service) have gained a lot of attention in academia, research and business. These database systems provide scalability and economic benefits for both cloud service providers and customers (organizations/companies, referred to as tenants) by sharing the same resources and infrastructure, in isolation of shared databases, network and computing resources, with Service Level Agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database. If one tenant blocks up the resources, the performance of all the other tenants may be restricted and fair sharing of the resources may be compromised. The performance of tenants must not be affected by the resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to achieve a low cost of operation while satisfying the specific schemas/SLAs of each tenant. Consequently, there is a need to design and develop effective, dynamic resource sharing algorithms that can handle the above issues. This work presents a model embracing a query classification and worker sorting technique to efficiently share I/O, CPU and memory, thus enhancing dynamic resource sharing and improving the utilization of idle instances. The model is referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM). The MTDRSM supports workload execution of different benchmarks such as TPC-C (Transaction Processing Performance Council) and YCSB (the Yahoo! Cloud Serving Benchmark), on different databases such as MySQL, Oracle and the H2 database. Experiments are conducted for different benchmarks, with and without SLA compliance, to evaluate the performance of the MTDRSM in terms of latency and throughput. The experiments show significant performance improvement over the existing Mute Bench model in terms of latency and throughput.
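
    A minimal sketch of the query-classification-plus-worker-sorting idea the abstract describes. The classification rule, the two resource classes and the heap-based worker ordering are all assumptions for illustration, since the paper's concrete rules are not given in the abstract.

        import heapq

        def classify(query):
            """Crude illustrative classifier: writes are treated as I/O-bound,
            everything else as CPU-bound."""
            writes = ("INSERT", "UPDATE", "DELETE")
            return "io" if query.strip().upper().startswith(writes) else "cpu"

        class WorkerPool:
            """Workers kept in a min-heap by accumulated load, so the least-loaded
            (or idle) instance is always reused first."""
            def __init__(self, names):
                self.heap = [(0.0, n) for n in names]  # (load, worker)
                heapq.heapify(self.heap)

            def dispatch(self, query, cost_estimate):
                load, worker = heapq.heappop(self.heap)  # least-loaded worker
                heapq.heappush(self.heap, (load + cost_estimate, worker))
                return worker, classify(query)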

    A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems

    A cloud workflow system is a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As this scheduling is an NP-hard problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence during optimization and therefore cannot effectively reduce the cost. To address this, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, while its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated cost; it lets the scheduler avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is consistently lower than that of the two representative counterparts.
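
    The two ingredients the abstract names, a chaotic sequence and a cost-dependent adaptive inertia weight, can be sketched as below. The logistic map, the inertia rule and all constants are illustrative assumptions; the paper's exact update formulas are not reproduced in the abstract.

        def logistic_map(x):
            """Chaotic sequence in (0, 1): deterministic but highly irregular,
            used here in place of uniform random numbers."""
            return 4.0 * x * (1.0 - x)

        def adaptive_inertia(cost, best_cost, worst_cost, w_min=0.4, w_max=0.9):
            """Illustrative rule: particles with good (low) cost search locally
            (small w), poor particles explore globally (large w)."""
            if worst_cost == best_cost:
                return w_min
            return w_min + (w_max - w_min) * (cost - best_cost) / (worst_cost - best_cost)

        def update_velocity(v, x, pbest, gbest, chaos, w, c1=2.0, c2=2.0):
            """Standard PSO velocity update with chaotic numbers as r1, r2."""
            r1, r2 = chaos, logistic_map(chaos)
            return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)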

    Cost-Efficient Scheduling for Deadline Constrained Grid Workflows

    Cost optimization for workflow scheduling while meeting a deadline is one of the fundamental problems in utility computing. In this paper, a two-phase cost-efficient scheduling algorithm called critical chain is presented. The proposed algorithm uses the concept of slack time in both phases. The first phase distributes the deadline over all tasks in the workflow, based on the critical path properties of workflow graphs: critical chain uses slack time to iteratively select the most critical sequence of tasks and then assigns sub-deadlines to those tasks. The second phase, the mapping step, tries to allocate a server to each task subject to the task's sub-deadline; slack-time priority in selecting the ready task is used to reduce deadline violations. Furthermore, the algorithm locally optimizes the computation and communication costs of sequential tasks using dynamic programming. After presenting the scheduling algorithm, three measures of the superiority of a scheduling algorithm are introduced, and the proposed algorithm is compared with existing algorithms on these measures. Results obtained from simulating various systems show that the proposed algorithm outperforms four well-known existing workflow scheduling algorithms.
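
    A condensed sketch of the two phases just described, with hypothetical task and server structures; the slack-time bookkeeping of the actual algorithm is richer than this proportional-runtime simplification.

        def distribute_deadline(critical_path, deadline):
            """Phase 1 (illustrative): spread the overall deadline over the most
            critical chain of tasks, proportionally to each task's runtime."""
            total = sum(t.runtime for t in critical_path)
            sub_deadlines, elapsed = {}, 0.0
            for t in critical_path:
                elapsed += (t.runtime / total) * deadline
                sub_deadlines[t] = elapsed
            return sub_deadlines

        def map_task(ready_tasks, servers, sub_deadlines, now):
            """Phase 2 (illustrative): pick the ready task with the least slack,
            then the cheapest server that can still meet its sub-deadline."""
            task = min(ready_tasks, key=lambda t: sub_deadlines[t] - now - t.runtime)
            feasible = [s for s in servers
                        if now + task.runtime / s.speed <= sub_deadlines[task]]
            return task, (min(feasible, key=lambda s: s.price) if feasible else None)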

    Dynamic Multiobjectives Optimization with a Changing Number of Objectives

    This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record. Existing studies on dynamic multiobjective optimization (DMO) focus on problems with time-dependent objective functions, while those with a changing number of objectives have rarely been considered in the literature. Rather than changing the shape or position of the Pareto-optimal front/set (PF/PS), as time-dependent objective functions do, increasing or decreasing the number of objectives usually leads to the expansion or contraction of the dimension of the PF/PS manifold. Unfortunately, most existing dynamic handling techniques can hardly be adapted to this type of dynamics. In this paper, we report our attempt at tackling DMO problems with a changing number of objectives. We implement a dynamic two-archive evolutionary algorithm that maintains two co-evolving populations simultaneously. These two populations are complementary to each other: one is concerned more with convergence, the other more with diversity. The compositions of the two populations are adaptively reconstructed once the environment changes, and the populations interact with each other via a mating selection mechanism. Comprehensive experiments are conducted on various benchmark problems with a time-dependent number of objectives. Empirical results fully demonstrate the effectiveness of the proposed algorithm. Funding: Engineering and Physical Sciences Research Council (EPSRC); NSF.
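
    The abstract fixes only the overall structure (two complementary archives, rebuilt on change, interacting through mating selection), so the following loop is a much-simplified sketch with stand-in operators: aggregate objective value for convergence, distance from the convergence archive for diversity.

        import random

        def evaluate(x, objectives):
            """Evaluate a solution on the current (possibly changed) list of objectives."""
            return [f(x) for f in objectives]

        def two_archive_step(conv, div, objectives, size=20):
            """One generation of a simplified two-archive loop. Solutions are lists
            of floats; when the objectives list changes length, both archives are
            automatically re-ranked against the new objective set on the next call."""
            pool = conv + div
            parents = random.sample(pool, 2)  # mating selection draws from both archives
            child = [(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(*parents)]
            pool.append(child)
            conv = sorted(pool, key=lambda x: sum(evaluate(x, objectives)))[:size]
            dist = lambda x: min(sum(abs(a - b) for a, b in zip(x, y)) for y in conv)
            div = sorted(pool, key=dist)[-size:]  # keep solutions far from conv
            return conv, div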

    A Truthful Dynamic Workflow Scheduling Mechanism for Commercial Multicloud Environments

    Abstract The ultimate goal of cloud providers in offering resources is to increase their revenues. This goal leads to selfish behavior that negatively affects the users of a commercial multicloud environment. In this paper, we introduce a pricing model and a truthful mechanism for scheduling single tasks considering two objectives: monetary cost and completion time. With respect to the social cost of the mechanism, i.e., minimizing the completion time and monetary cost, we extend the mechanism to the dynamic scheduling of scientific workflows. We theoretically analyze the truthfulness and efficiency of the mechanism and present extensive experimental results showing the significant impact of the selfish behavior of cloud providers on the efficiency of the whole system. Experiments conducted using real-world and synthetic workflow applications demonstrate that our solutions dominate, in most cases, the Pareto-optimal solutions estimated by two classical multiobjective evolutionary algorithms. Index Terms: workflow scheduling, multicloud environment, game theory, reverse auction, truthful mechanism.
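
    The abstract does not spell out the paper's payment rule, but a classical truthful single-task reverse auction (a second-price/Vickrey rule, here scored on a weighted sum of reported cost and completion time) illustrates the mechanism-design idea; the scoring weight alpha and bid format are assumptions.

        def reverse_auction(bids, alpha=0.5):
            """bids: {provider: (reported_cost, reported_time)}, len(bids) >= 2.
            The provider with the lowest weighted score wins; it is paid the
            highest cost it could have reported and still won (a second-price
            rule), which makes truthful cost reporting a dominant strategy in
            the classical Vickrey setting. Illustrative, not the paper's rule."""
            score = lambda b: alpha * b[0] + (1 - alpha) * b[1]
            ranked = sorted(bids.items(), key=lambda kv: score(kv[1]))
            winner, winning_bid = ranked[0]
            runner_up_score = score(ranked[1][1])
            payment = (runner_up_score - (1 - alpha) * winning_bid[1]) / alpha
            return winner, payment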

    Partitioning workflow applications over federated clouds to meet non-functional requirements

    PhD Thesis. With cloud computing, users can acquire computer resources when they need them, on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs. With all these varieties and uncertainties, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. The decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements. Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach: combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each of them. The key challenge for this approach is how to find the distribution (or deployment) of application components which yields the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised. To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost. A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution.
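
    A brute-force sketch of the deployment-planning problem stated above: choose, for every workflow component, a cloud whose security level is sufficient, minimising total monetary cost. The thesis develops a more sophisticated planning algorithm; the tuple representations here are assumptions for illustration.

        from itertools import product

        def plan_deployment(components, clouds):
            """components: [(name, required_security_level)]
            clouds:     [(name, security_level, cost_per_component)]
            Exhaustive search over all assignments; only feasible for small workflows."""
            best_plan, best_cost = None, float("inf")
            for choice in product(clouds, repeat=len(components)):
                # A cloud is eligible only if its security level covers the component's need.
                if all(cloud[1] >= comp[1] for comp, cloud in zip(components, choice)):
                    cost = sum(cloud[2] for cloud in choice)
                    if cost < best_cost:
                        best_plan = {comp[0]: cloud[0]
                                     for comp, cloud in zip(components, choice)}
                        best_cost = cost
            return best_plan, best_cost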