2,574 research outputs found

    Idle block based methods for cloud workflow scheduling with preemptive and non-preemptive tasks

    [EN] Complex workflow applications are widely used in scientific computing and economic analysis, and they commonly include both preemptive and non-preemptive tasks. Cloud computing provides a convenient way for users to access different resources under the "pay-as-you-go" model. However, service providers usually offer several resource renting alternatives (reserved, on-demand or spot). Spot instances are a dynamic and cheaper alternative compared to on-demand ones, but failures often occur due to fluctuations in the instance price. It is a big challenge to determine the appropriate amount of spot and on-demand resources for workflow applications with both preemptive and non-preemptive tasks. In this paper, the workflow scheduling problem with both spot and on-demand instances is considered. The objective is to minimize the total renting cost under deadline constraints. An idle time block-based method is proposed for the considered problem. Different idle time block-based searching and improving strategies are developed to construct schedules for workflow applications. Schedules are improved by a forward and backward moving mechanism. Experimental and statistical results demonstrate the effectiveness of the proposed algorithm over a large set of tests of different sizes. This work is supported by the National Natural Science Foundation of China (No. 61572127, 61272377) and the National Key Research and Development Program of China (No. 2017YFB1400800). Ruben Ruiz is partially supported by the Spanish Ministry of Economy and Competitiveness under the project "SCHEYARD - Optimization of Scheduling Problems in Container Yards" (No. DPI2015-65895-R), financed by FEDER funds. Chen, L.; Li, X.; Ruiz García, R. (2018). Idle block based methods for cloud workflow scheduling with preemptive and non-preemptive tasks. Future Generation Computer Systems, 89:659-669. https://doi.org/10.1016/j.future.2018.07.037
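
    As an illustration of the idle-time-block idea described in this abstract (not the authors' actual algorithm), the following minimal Python sketch reuses idle gaps on already-rented instances before renting new capacity, and restricts non-preemptive tasks to on-demand instances. All names (`Task`, `Instance`, `find_idle_block`) and the example prices are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Task:
    task_id: str
    runtime: float          # estimated execution time (hours)
    deadline: float         # latest allowed finish time (hours)
    preemptive: bool        # preemptive tasks may run on spot instances


@dataclass
class Instance:
    kind: str               # "spot" or "on_demand"
    price: float            # assumed hourly renting price
    busy: List[tuple] = field(default_factory=list)   # (start, end) intervals

    def find_idle_block(self, length: float, deadline: float) -> Optional[float]:
        """Return the start of the earliest idle gap that fits `length`
        and still finishes before `deadline`, or None if there is none."""
        cursor = 0.0
        for start, end in sorted(self.busy):
            if start - cursor >= length and cursor + length <= deadline:
                return cursor
            cursor = max(cursor, end)
        return cursor if cursor + length <= deadline else None


def schedule(tasks: List[Task], rented: List[Instance]) -> float:
    """Greedy idle-block placement: reuse idle gaps on rented instances first,
    rent a new spot (preemptive tasks) or on-demand instance otherwise."""
    extra_cost = 0.0
    for task in sorted(tasks, key=lambda t: t.deadline):
        placed = False
        for inst in rented:
            if inst.kind == "spot" and not task.preemptive:
                continue      # keep non-preemptive tasks off failure-prone spots
            start = inst.find_idle_block(task.runtime, task.deadline)
            if start is not None:
                inst.busy.append((start, start + task.runtime))
                placed = True
                break
        if not placed:
            kind = "spot" if task.preemptive else "on_demand"
            inst = Instance(kind=kind, price=0.03 if kind == "spot" else 0.10)
            inst.busy.append((0.0, task.runtime))
            rented.append(inst)
            extra_cost += inst.price * task.runtime
    # the paper additionally refines schedules with forward/backward moves
    return extra_cost
```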

    Constructing Reliable Computing Environments on Top of Amazon EC2 Spot Instances

    Cloud provider Amazon Elastic Compute Cloud (EC2) gives access to resources in the form of virtual servers, also known as instances. EC2 spot instances (SIs) offer spare computational capacity at steep discounts compared to reliable, fixed-price on-demand instances. The drawback, however, is that the delay in acquiring spots can be incredibly high. Moreover, SIs may not always be available, as they can be reclaimed by EC2 at any given time with a two-minute interruption notice. In this paper, we propose a multi-workflow scheduling algorithm, allied with a container migration-based mechanism, to dynamically construct and readjust virtual clusters on top of non-reserved EC2 pricing model instances. Our solution leverages recent findings on the performance and behavior characteristics of EC2 spots. We conducted simulations by submitting real-life workflow applications constrained by user-defined deadline and budget quality of service (QoS) parameters. The results indicate that our solution improves the rate of completed tasks by almost 20%, and the rate of completed workflows by at least 30%, compared with other state-of-the-art algorithms, for a worst-case scenario.
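
    As a rough illustration of reacting to the two-minute spot interruption notice mentioned above (not the paper's implementation), the sketch below drains containers from a spot-backed node onto an on-demand fallback node. The `interruption_notified` check is a stub; on real EC2 it would poll the instance metadata service, which is outside the scope of this sketch.

```python
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    name: str
    kind: str                                   # "spot" or "on_demand"
    containers: List[str] = field(default_factory=list)


def interruption_notified(node: Node) -> bool:
    """Stub for the spot interruption warning; a real deployment would
    poll the cloud provider's metadata endpoint here."""
    return False


def migrate_container(container: str, src: Node, dst: Node) -> None:
    """Checkpoint/restart of a container on another node, stubbed out."""
    src.containers.remove(container)
    dst.containers.append(container)
    print(f"migrated {container}: {src.name} -> {dst.name}")


def watch_cluster(spot_nodes: List[Node], fallback: Node, poll_s: float = 5.0) -> None:
    """Drain any spot node that received an interruption notice before
    the provider reclaims it (two-minute window on EC2)."""
    while True:
        for node in spot_nodes:
            if node.kind == "spot" and interruption_notified(node):
                for container in list(node.containers):
                    migrate_container(container, node, fallback)
        time.sleep(poll_s)
```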

    Employing the Powered Hybridized Darts Game with BWO Optimization for Effective Job Scheduling and Distributing Load in the Cloud-Based Environment

    One of the most frequent issues in cloud computing systems is job scheduling, which aims to reduce implementation time and cost while concurrently enhancing resource utilisation. Requirements such as low implementation cost, high resource utilisation, a short make-span and a fast scheduling response make this a Nondeterministic Polynomial (NP)-hard optimisation problem; as the number of combinations and the available processing power grow, job allocation becomes NP-hard. This study employs a hybrid heuristic optimisation technique that incorporates load balancing to achieve optimal job scheduling and boost service provider performance within the cloud architecture, which greatly reduces the problems that arise in the scheduling process. The suggested scheduling approach also resolves the load balancing issue. The suggested Hybridised Darts Game-Based Beluga Whale Optimisation Algorithm (HDG-BWOA) assigns jobs to the machines according to workload. When assigning jobs to virtual machines, factors such as reduced energy usage, minimised mean reaction time, enhanced job assurance ratio, and higher Cloud Data Centre (CDC) resource consumption are taken into account. By ensuring flexibility among virtual machines, this job scheduling strategy keeps them from overloading or underloading, and more jobs are completed before the deadline. The effectiveness of the proposed scheme is evaluated against traditional heuristic-based job scheduling techniques using multiple assessment metrics.
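
    The abstract lists several objectives (reaction time, energy, utilisation, load balance) that a metaheuristic like HDG-BWOA would fold into a fitness score. The sketch below is a generic weighted fitness function under assumed weights and helper names; it is not the HDG-BWOA formulation itself.

```python
from typing import Dict, List


def fitness(assignment: List[int],
            task_lengths: List[float],     # task sizes in million instructions
            vm_mips: List[float],          # VM capacities in MIPS
            vm_power: List[float],         # assumed per-VM power draw (watts)
            weights: Dict[str, float]) -> float:
    """Score a task-to-VM assignment; lower is better.
    Combines makespan, energy and load imbalance with assumed weights."""
    n_vms = len(vm_mips)
    vm_time = [0.0] * n_vms
    for task, vm in enumerate(assignment):
        vm_time[vm] += task_lengths[task] / vm_mips[vm]

    makespan = max(vm_time)
    energy = sum(p * t for p, t in zip(vm_power, vm_time))
    mean_load = sum(vm_time) / n_vms
    imbalance = sum(abs(t - mean_load) for t in vm_time) / n_vms

    return (weights["makespan"] * makespan
            + weights["energy"] * energy
            + weights["balance"] * imbalance)


# Example: five tasks on two VMs with illustrative weights
score = fitness([0, 1, 0, 1, 1],
                task_lengths=[400, 200, 300, 100, 250],
                vm_mips=[500, 250],
                vm_power=[120, 80],
                weights={"makespan": 1.0, "energy": 0.01, "balance": 1.0})
print(round(score, 3))    # a single scalar score (about 6.04 for these numbers)
```

    A metaheuristic would evaluate many candidate assignments with such a function and keep the lowest-scoring one.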

    Hybridized Darts Game with Beluga Whale Optimization Strategy for Efficient Task Scheduling with Optimal Load Balancing in Cloud Computing

    Cloud computing technology permits clients to use hardware and software virtually on a subscription basis. The task scheduling process is planned to effectively minimize implementation time and cost while simultaneously increasing resource utilization, and it is one of the most common problems in cloud computing systems. Requirements such as a short make-span, high resource utilization, low implementation cost and an immediate scheduling response make this a Nondeterministic Polynomial (NP)-hard optimization problem; task allocation is NP-hard because of the growth in the number of combinations and computing resources. In this work, a hybrid heuristic optimization technique with load balancing is implemented for optimal task scheduling to increase the performance of service providers in the cloud infrastructure. Thus, the issues that occur in the scheduling process are greatly reduced. The load balancing problem is effectively solved with the help of the proposed task scheduling scheme. The allocation of tasks to the machines based on the workload is done with the help of the proposed Hybridized Darts Game-Based Beluga Whale Optimization Algorithm (HDG-BWOA). Objective functions such as higher Cloud Data Center (CDC) resource consumption, an increased task assurance ratio, minimized mean reaction time, and reduced energy utilization are considered while allocating the tasks to the virtual machines. This task scheduling approach ensures flexibility among virtual machines, preventing them from overloading or underloading. Also, using this technique, more tasks are efficiently completed within the deadline. The efficacy of the proposed scheme is evaluated against conventional heuristic-based task scheduling approaches in accordance with various evaluation measures.
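
    To illustrate the over/under-load prevention this abstract refers to (again, a generic sketch rather than the paper's mechanism), the code below repeatedly moves the smallest task from the busiest VM to the idlest VM while one exceeds an assumed upper utilisation threshold and another sits below a lower one.

```python
from typing import Dict, List


def rebalance(vm_load: Dict[str, List[float]],
              upper: float = 0.8,
              lower: float = 0.2,
              max_moves: int = 100) -> Dict[str, List[float]]:
    """Naive balancing pass: while one VM is above `upper` utilisation and
    another is below `lower`, move the busiest VM's smallest task to the
    idlest VM. Loads are normalised utilisations in [0, 1]; thresholds are
    assumptions for this sketch."""
    def total(name: str) -> float:
        return sum(vm_load[name])

    for _ in range(max_moves):
        busiest = max(vm_load, key=total)
        idlest = min(vm_load, key=total)
        if (busiest == idlest or not vm_load[busiest]
                or total(busiest) <= upper or total(idlest) >= lower):
            break
        task = min(vm_load[busiest])          # cheapest task to migrate
        vm_load[busiest].remove(task)
        vm_load[idlest].append(task)
    return vm_load


# Example: one overloaded VM and one nearly idle VM
print(rebalance({"vm-a": [0.5, 0.4, 0.3], "vm-b": [0.05]}))
# -> {'vm-a': [0.5, 0.4], 'vm-b': [0.05, 0.3]}
```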

    Data Placement And Task Mapping Optimization For Big Data Workflows In The Cloud

    Data-centric workflows naturally process and analyze a huge volume of datasets. In this new era of Big Data there is a growing need to enable data-centric workflows to perform computations at a scale far exceeding a single workstation's capabilities. Therefore, this type of application can benefit from distributed high performance computing (HPC) infrastructures like cluster, grid or cloud computing. Although data-centric workflows have been applied extensively to structure complex scientific data analysis processes, they fail to address the big data challenges as well as to leverage the capability of dynamic resource provisioning in the Cloud. The concept of “big data workflows” is proposed by our research group as the next generation of data-centric workflow technologies to address the limitations of existing workflow technologies in addressing big data challenges. Executing big data workflows in the Cloud is a challenging problem, as workflow tasks and data are required to be partitioned, distributed and assigned to the cloud execution sites (multiple virtual machines). When running such big data workflows in a cloud distributed across several physical locations, the workflow execution time and the cloud resource utilization efficiency depend heavily on the initial placement and distribution of the workflow tasks and datasets across the multiple virtual machines in the Cloud. Several workflow management systems have been developed to facilitate the use of workflows by scientists; however, the data and workflow task placement issue has not been sufficiently addressed yet. In this dissertation, I propose the BDAP strategy (Big Data Placement strategy) for data placement and TPS (Task Placement Strategy) for task placement, which improve workflow performance by minimizing data movement across multiple virtual machines in the Cloud during workflow execution. In addition, I propose CATS (Cultural Algorithm Task Scheduling) for workflow scheduling, which improves workflow performance by minimizing workflow execution cost. In this dissertation, I 1) formalize data and task placement problems in workflows, 2) propose a data placement algorithm that considers both the initial input datasets and the intermediate datasets obtained during the workflow run, 3) propose a task placement algorithm that considers placement of workflow tasks before the workflow run, 4) propose a workflow scheduling strategy to minimize the workflow execution cost once the deadline is provided by the user, and 5) perform extensive experiments in the distributed environment to validate that our proposed strategies provide an effective data and task placement solution to distribute and place big datasets and tasks onto the appropriate virtual machines in the Cloud within a reasonable time.
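
    The data-placement intuition behind this abstract, co-locating each dataset with the tasks that consume it most so that cross-VM movement is minimized, can be sketched as below. This is a greedy illustration under assumed names and inputs, not the BDAP, TPS or CATS algorithms themselves.

```python
from typing import Dict, List


def place_datasets(dataset_sizes: Dict[str, float],
                   task_inputs: Dict[str, List[str]],
                   task_vm: Dict[str, int],
                   n_vms: int) -> Dict[str, int]:
    """Greedy placement: put each dataset on the VM whose tasks would
    otherwise have to fetch the most bytes of it from another VM."""
    placement: Dict[str, int] = {}
    for ds, size in sorted(dataset_sizes.items(), key=lambda kv: -kv[1]):
        demand = [0.0] * n_vms                  # bytes of `ds` needed per VM
        for task, inputs in task_inputs.items():
            if ds in inputs:
                demand[task_vm[task]] += size
        placement[ds] = max(range(n_vms), key=lambda vm: demand[vm])
    return placement


# Example: two VMs; d1 is read by two tasks on VM 0, d2 by one task on VM 1
print(place_datasets(
    dataset_sizes={"d1": 10.0, "d2": 4.0},
    task_inputs={"t1": ["d1"], "t2": ["d1"], "t3": ["d2"]},
    task_vm={"t1": 0, "t2": 0, "t3": 1},
    n_vms=2))
# -> {'d1': 0, 'd2': 1}
```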