
    Optimization and Management of Large-scale Scientific Workflows in Heterogeneous Network Environments: From Theory to Practice

    Next-generation computation-intensive scientific applications feature large-scale computing workflows of various structures, which can be modeled as simply as linear pipelines or as complex as Directed Acyclic Graphs (DAGs). Supporting such computing workflows and optimizing their end-to-end network performance are crucial to the success of scientific collaborations that require fast system response, smooth data flow, and reliable distributed operation. We construct analytical cost models and formulate a class of workflow mapping problems with different mapping objectives and network constraints. The difficulty of these mapping problems arises essentially from the topological matching nature of the problem in the spatial domain, further compounded by the complexity of resource sharing in the temporal dimension. We provide detailed computational complexity analysis and design optimal or heuristic algorithms with rigorous correctness proofs or performance analysis. We decentralize the proposed mapping algorithms and also investigate these optimization problems in unreliable network environments for fault tolerance. To examine and evaluate the performance of the workflow mapping algorithms before actual deployment and implementation, we implement a simulation program that simulates the execution dynamics of distributed computing workflows. We also develop a scientific workflow automation and management platform based on an existing workflow engine for experimentation in real environments. The performance superiority of the proposed mapping solutions is illustrated by extensive simulation-based comparisons with existing algorithms and further verified by large-scale experiments on real-life scientific workflow applications through effective system implementation and deployment in real networks.
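    The abstract does not spell out its analytical cost model, but the flavor of such a model can be sketched. The illustration below assumes a task DAG with per-task work, per-edge data sizes, node speeds, and link bandwidths (all names are hypothetical, not the thesis's notation): the end-to-end latency of a given mapping is the critical-path finish time over the DAG.

```python
# Minimal sketch (not the thesis's exact model) of estimating the end-to-end
# latency of a DAG workflow mapped onto network nodes. All inputs are
# hypothetical: tasks -> work, edges -> data size, node_speed, bandwidth.

def topo_order(tasks, edges):
    """Kahn's algorithm over the task DAG."""
    indeg = {t: 0 for t in tasks}
    for (_, v) in edges:
        indeg[v] += 1
    order, stack = [], [t for t, d in indeg.items() if d == 0]
    while stack:
        t = stack.pop()
        order.append(t)
        for (u, v) in edges:
            if u == t:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
    return order

def end_to_end_latency(tasks, edges, mapping, node_speed, bandwidth):
    """tasks: {task: work}; edges: {(u, v): data size}; mapping: {task: node}.
    Returns the critical-path finish time of the mapped workflow."""
    finish = {}
    for t in topo_order(tasks, edges):
        ready = 0.0
        for (u, v), size in edges.items():
            if v != t:
                continue
            # transfer time is zero when producer and consumer share a node
            xfer = 0.0 if mapping[u] == mapping[t] else size / bandwidth[(mapping[u], mapping[t])]
            ready = max(ready, finish[u] + xfer)
        finish[t] = ready + tasks[t] / node_speed[mapping[t]]
    return max(finish.values())
```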

    Executing Large Scale Scientific Workflows in Public Clouds

    Scientists in fields such as high-energy physics, earth science, and astronomy are developing large-scale workflow applications. In many use cases, scientists need to run a set of interrelated but independent workflows (i.e., workflow ensembles) for an entire scientific analysis. Because a workflow ensemble usually contains many sub-workflows, each with hundreds or thousands of jobs under precedence constraints, executing such an ensemble raises serious cost concerns even with elastic, pay-as-you-go cloud resources. In this thesis, we develop a set of methods to optimize the execution of large-scale scientific workflows in public clouds under both cost and deadline constraints, using a two-step approach. First, we present a set of methods to optimize the execution of a scientific workflow in public clouds, using the Montage astronomical mosaic engine running on Amazon EC2 as an example. Second, we address three main challenges in realizing the benefits of public clouds when executing large-scale workflow ensembles: (1) execution coordination, (2) resource provisioning, and (3) data staging. To this end, we develop a new pulling-based workflow execution system with a profiling-based resource provisioning strategy. Our results show that, by removing scheduling overhead, our system achieves an 80% speed-up over the well-known Pegasus workflow management system when running scientific workflow ensembles. In addition, our evaluation using Montage workflow ensembles on Amazon EC2 clusters of around 1000 cores demonstrates the cost effectiveness of our resource provisioning strategy while meeting deadlines.
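    The abstract does not detail the pulling-based execution system itself; the sketch below, under assumed names (`job_queue`, `mark_done`, a `run()` method on jobs), only illustrates the general idea that workers fetch ready jobs themselves instead of waiting for a central scheduler to dispatch them, which is where push-based scheduling overhead is avoided.

```python
# Minimal pull-based worker loop sketch; the queue and callbacks are
# hypothetical stand-ins for the system's actual coordination service.

import queue
import threading

def worker(job_queue, mark_done):
    while True:
        job = job_queue.get()        # block until a ready job is available
        if job is None:              # poison pill: ensemble finished
            break
        job.run()                    # stage in data, execute, stage out
        mark_done(job)               # may release successor jobs into the queue
        job_queue.task_done()

def run_ensemble(ready_jobs, num_workers=8):
    q = queue.Queue()
    for job in ready_jobs:
        q.put(job)
    threads = [threading.Thread(target=worker, args=(q, lambda j: None))
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    q.join()                         # wait until every queued job is processed
    for _ in threads:
        q.put(None)                  # shut the workers down
    for t in threads:
        t.join()
```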

    Reliable and efficient webserver management for task scheduling in edge-cloud platform

    Managing cloud webservers to execute workflows while meeting quality-of-service (QoS) requirements in a distributed cloud environment has been a challenging task. Although a considerable body of work has been presented on scheduling workflows in heterogeneous cloud environments, rapid developments in cloud computing such as edge-cloud computing call for new methods to schedule workflows in heterogeneous cloud environments that process diverse tasks, including IoT, event-driven, and other network applications. Current workflow scheduling methods have failed to provide a good trade-off between reliable performance and minimal delay. In this paper, a novel webserver resource management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments conducted on complex bioinformatics workflows show that the proposed REWM significantly reduces cost and energy compared with a standard webserver management methodology.

    The Contemporary Affirmation of Taxonomy and Recent Literature on Workflow Scheduling and Management in Cloud Computing

    Cloud computing systems are preferred over traditional forms of computing such as grid computing, utility computing, and autonomic computing for their ease of access to computing, QoS preferences, SLA conformity, security, and performance offered with minimal supervision. A cloud workflow schedule, when designed efficiently, achieves optimal resource usage, workload balance, deadline-specific execution, cost control according to budget specifications, efficient energy consumption, and so on, to meet the performance requirements of today's vast scientific and business workloads. Business requirements under recent technologies such as pervasive computing are motivating further advancements in cloud computing. In this paper we discuss some of the important literature published on cloud workflow scheduling.

    Realtime reservoir characterization and beyond: cyber-infrastructure tools and technologies

    The advent of the digital oil field and the rapidly decreasing cost of computing create opportunities as well as challenges in simulation-based reservoir studies, in particular real-time reservoir characterization and optimization. One challenge our efforts are directed toward is the use of real-time production data to perform live reservoir characterization using high-throughput, high-performance computing environments. To that end we developed the required tools: a parallel reservoir simulator, a parallel ensemble Kalman filter, and a scalable workflow manager. Using this collection of tools, a reservoir modeler is able to perform large-scale reservoir management studies in short periods of time. This includes studies with thousands of models that are individually complex and large, involving millions of degrees of freedom. Using parallel processing, we are able to solve these models much faster than we otherwise would on a single, serial machine. This motivated the development of a fast parallel reservoir simulator. Furthermore, distributing those simulations across resources leads to a smaller total time to completion by making use of distributed processing. This allows the development of a scalable high-throughput workflow manager. Finally, with thousands of models, each with millions of degrees of freedom, we end up with a superfluity of model parameters. This translates directly to billions of degrees of freedom in the reservoir study. To be able to use the ensemble Kalman filter on these models, we needed to develop a parallel implementation of the ensemble Kalman filter. This thesis discusses the enabling tools and technologies developed to address a specific problem: how to accurately characterize reservoirs using large numbers of complex, detailed models. For these characterization studies to be helpful in making production decisions, the time to solution must be feasible. To that end, our work is focused on developing and extending these tools, and optimizing their performance.
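    The parallel implementation itself is not described in the abstract, but the analysis step of the stochastic ensemble Kalman filter it builds on is standard. A serial numpy sketch of that textbook update, assuming a linear observation operator H, illustrates what gets parallelized across ensemble members:

```python
# Textbook stochastic EnKF analysis step (serial sketch, not the thesis's
# parallel implementation). X holds Ne model-parameter vectors as columns,
# H maps a state to predicted observations, d is the observed data vector.

import numpy as np

def enkf_update(X, H, d, obs_cov, rng=np.random.default_rng(0)):
    n, Ne = X.shape
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), obs_cov, Ne).T  # perturbed observations
    Y = H @ X                                        # predicted observations per member
    Xp = X - X.mean(axis=1, keepdims=True)           # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)           # observation anomalies
    Pxy = Xp @ Yp.T / (Ne - 1)                       # state-observation cross-covariance
    Pyy = Yp @ Yp.T / (Ne - 1) + obs_cov             # innovation covariance
    K = Pxy @ np.linalg.solve(Pyy, np.eye(len(d)))   # Kalman gain K = Pxy Pyy^{-1}
    return X + K @ (D - Y)                           # updated (analysis) ensemble
```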

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly due to its attractive benefits and features, such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand and pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth, many users may require services or need to execute their tasks simultaneously on the resources provided by service providers. To deliver these services with the best performance, minimum cost, response time, and makespan, and with effective use of resources, an intelligent and efficient task scheduling technique is required; it is considered one of the main and essential issues in the cloud computing environment, as it allocates tasks to the proper cloud resources and optimizes overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms suited to various computing environments and to the needs of various types of individuals and organizations. This research article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment along with an evaluation of their performance, a comparison of these algorithms with existing ones, and, where available, the future research work noted in the reviewed articles. This work reviews 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms. Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective. DOI: 10.7176/IKM/12-5-03. Publication date: September 30th, 2022.
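    As a concrete illustration of one classic entry in such a classification, the batch-mode min-min heuristic is sketched below; this is a textbook baseline, not an algorithm proposed in the article, and it assumes an expected-time-to-compute (ETC) matrix is given.

```python
# Classic min-min task scheduling heuristic (illustrative baseline only).
# etc[i][j] is the expected execution time of task i on VM j.

def min_min(etc):
    """Returns (assignment: task -> VM, makespan)."""
    num_tasks, num_vms = len(etc), len(etc[0])
    ready = [0.0] * num_vms                  # earliest time each VM becomes free
    unscheduled = set(range(num_tasks))
    assignment = {}
    while unscheduled:
        # pick the (task, VM) pair with the smallest completion time overall
        ct, task, vm = min((ready[j] + etc[i][j], i, j)
                           for i in unscheduled for j in range(num_vms))
        assignment[task] = vm
        ready[vm] = ct
        unscheduled.remove(task)
    return assignment, max(ready)

# Example: three tasks on two VMs.
print(min_min([[4.0, 6.0], [3.0, 2.0], [5.0, 7.0]]))
```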

    A Survey of Pipelined Workflow Scheduling: Models and Algorithms

    A large class of applications needs to execute the same workflow on different data sets of identical size. Efficient execution of such applications necessitates intelligent distribution of the application components and tasks on a parallel machine, and the execution can be orchestrated by exploiting task-, data-, pipelined-, and/or replicated-parallelism. The scheduling problem that encompasses all of these techniques is called pipelined workflow scheduling, and it has been widely studied in the last decade. Multiple models and algorithms have flourished to tackle various programming paradigms, constraints, machine behaviors, and optimization goals. This paper surveys the field by summing up and structuring known results and approaches.
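    For a linear chain of stages mapped onto processors by intervals (consecutive stages assigned to the same processor), the two metrics such surveys trade off can be sketched as follows; the stage times and mapping used here are illustrative only, and communication costs are ignored.

```python
# Period (inverse throughput) and latency of an interval mapping of a
# pipelined linear workflow; all inputs are hypothetical examples.

def period_and_latency(stage_times, mapping):
    """stage_times[i]: time of stage i on its processor; mapping[i]: processor
    of stage i. Period = load of the most loaded processor; latency = time one
    data set needs to traverse all stages."""
    load = {}
    for t, p in zip(stage_times, mapping):
        load[p] = load.get(p, 0.0) + t
    period = max(load.values())          # slowest processor limits throughput
    latency = sum(stage_times)           # a single data set visits every stage
    return period, latency

# Example: 4 stages mapped onto 2 processors -> period 6.0, latency 10.0.
print(period_and_latency([3.0, 1.0, 2.0, 4.0], [0, 0, 1, 1]))
```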

    Using simple PID-inspired controllers for online resilient resource management of distributed scientific workflows

    Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute their applications seamlessly on a range of distributed platforms. Although the scientific community has addressed this challenge through both theoretical and practical approaches, failure prediction, detection, and recovery still raise many research questions. In this paper, we propose an approach, inspired by the control theory developed as part of autonomic computing, to predict failures before they happen and mitigate them when possible. The proposed approach is based on the proportional-integral-derivative (PID) controller loop mechanism, which is widely used in industrial control systems, where the controller adjusts its output to mitigate faults. PID controllers aim to detect the possibility of a non-steady state far enough in advance that an action can be taken to prevent it from happening. To demonstrate the feasibility of the approach, we tackle two common execution faults of large-scale data-intensive workflows: data storage overload and memory overflow. We developed a simulator that implements and evaluates simple standalone PID-inspired controllers to autonomously manage the data and memory usage of a data-intensive bioinformatics workflow that consumes/produces over 4.4 TB of data and requires over 24 TB of memory to run all tasks concurrently. Experimental results obtained via simulation indicate that workflow executions may significantly benefit from the controller-inspired approach, in particular under online and unknown conditions. Simulation results show that nearly optimal executions (slowdown of 1.01) can be attained with our proposed method, and that faults are detected and mitigated well in advance of their occurrence.
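    The paper's controllers and gains are not reproduced here, but the discrete PID loop they are inspired by is standard. The sketch below shows how such a controller could throttle task releases against a disk-usage set point; the gains, set point, and actuation rule are illustrative assumptions, not the paper's tuning.

```python
# Discrete PID-style controller sketch for the storage-overload scenario:
# the output decides whether the engine releases more data-producing tasks.

class PIDController:
    def __init__(self, kp, ki, kd, set_point):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, measured, dt=1.0):
        error = self.set_point - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: a negative output means usage is trending above the set point, so
# the workflow engine would delay releasing new data-producing tasks.
ctrl = PIDController(kp=0.5, ki=0.1, kd=0.05, set_point=0.8)   # target 80% disk usage
signal = ctrl.step(measured=0.93)                              # e.g. 93% currently used
release_more_tasks = signal > 0
```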