
    Using simple PID-inspired controllers for online resilient resource management of distributed scientific workflows

    Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute their applications seamlessly on a range of distributed platforms. Although the scientific community has addressed this challenge with both theoretical and practical approaches, failure prediction, detection, and recovery still raise many research questions. In this paper, we propose an approach inspired by the control theory developed as part of autonomic computing to predict failures before they happen and mitigate them when possible. The proposed approach draws on the proportional-integral-derivative (PID) controller control loop mechanism, widely used in industrial control systems, in which the controller adjusts its output to mitigate faults. PID controllers aim to detect the possibility of a non-steady state far enough in advance that an action can be performed to prevent it from happening. To demonstrate the feasibility of the approach, we tackle two common execution faults of large-scale data-intensive workflows: data storage overload and memory overflow. We developed a simulator that implements and evaluates simple standalone PID-inspired controllers to autonomously manage the data and memory usage of a data-intensive bioinformatics workflow that consumes/produces over 4.4 TB of data and requires over 24 TB of memory to run all tasks concurrently. Experimental results obtained via simulation indicate that workflow executions may significantly benefit from the controller-inspired approach, in particular under online and unknown conditions. Simulation results show that nearly optimal executions (slowdown of 1.01) can be attained with our proposed method, and that faults are detected and mitigated far in advance of their occurrence.
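
    The abstract does not reproduce the controller itself; as a rough illustration, the sketch below shows a generic discrete PID loop that could throttle task submissions when projected disk usage approaches a setpoint. The class name (DiskPID), the gains, and the interpretation of the control signal are illustrative assumptions, not the authors' implementation.

```python
# Minimal, generic discrete PID loop (illustrative; not the paper's implementation).
# The error is "how far projected disk usage sits above the allowed setpoint";
# the control signal is read as "number of pending tasks to hold back".

class DiskPID:
    def __init__(self, kp, ki, kd, setpoint_gb):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint_gb = setpoint_gb      # allowed storage footprint (hypothetical)
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, projected_usage_gb, dt=1.0):
        error = projected_usage_gb - self.setpoint_gb
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical usage: hold back tasks while the controller output is positive.
pid = DiskPID(kp=0.5, ki=0.1, kd=0.05, setpoint_gb=4000)
signal = pid.control(projected_usage_gb=4400)
tasks_to_delay = max(0, round(signal))
```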

    A Framework for Approximate Optimization of BoT Application Deployment in Hybrid Cloud Environment

    We adopt a systematic approach to investigating the efficient, near-optimal deployment of large-scale CPU-intensive Bag-of-Tasks (BoT) applications on cloud resources with non-proportional cost-to-performance ratios. Our analytical solutions handle both known and unknown running times of the given application and aim to optimize users' utility by choosing the most desirable trade-off between makespan and total incurred expense. We propose a scheme that provides a near-optimal deployment of a BoT application according to users' preferences: the user is presented with a set of Pareto-optimal solutions and selects a scheduling point based on her own utility function. Our framework also copes with uncertainty in task execution times in two ways. First, we present an estimation method based on Monte Carlo sampling, called the AA algorithm, which uses the minimum possible number of samples to predict the average task running time. Second, assuming access to code analyzers, code profilers, or other estimation tools, we present a hybrid method that evaluates the accuracy of each estimation tool over given time intervals to improve resource allocation decisions. We propose approximate deployment strategies that run on a hybrid cloud. In essence, the proposed strategies first determine either an estimated or an exact optimal schema, based on the information provided by the user and on environmental parameters. Then, we use dynamic methods to assign tasks to resources so as to approach the optimal schema as closely as possible, again in two ways: a fast yet simple method based on the First Fit Decreasing algorithm, and a more complex approach based on an approximate solution of the problem transformed into a subset sum problem. Extensive experimental results obtained on a hybrid cloud platform confirm that our framework can deliver a near-optimal solution that respects the user's utility function.
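
    The "fast yet simple method based on the First Fit Decreasing algorithm" is not detailed in the abstract; the sketch below shows only the textbook FFD packing of task sizes onto identical-capacity resources, as a reminder of the baseline idea. The task sizes and the capacity are made-up values, and the paper's actual deployment strategy is more elaborate.

```python
# Textbook First Fit Decreasing (FFD): sort items by decreasing size, then place
# each into the first bin (resource) that still has room. Illustrative only.

def first_fit_decreasing(task_sizes, capacity):
    bins = []  # each bin is a list of task sizes assigned to one resource
    for size in sorted(task_sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing resource fits: open a new one
    return bins

# Hypothetical usage with made-up task runtimes (hours) and a capacity of 8.
print(first_fit_decreasing([5, 4, 3, 3, 2, 2, 1], capacity=8))
```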

    Online Self-Healing Control Loop to Prevent and Mitigate Faults in Scientific Workflows

    Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute their applications seamlessly on a range of distributed platforms. In spite of many success stories, a key challenge for running workflows in distributed systems is failure prediction, detection, and recovery. In this paper, we present a novel online self-healing framework in which failures are predicted before they happen and are mitigated when possible. The proposed approach uses control theory developed as part of autonomic computing, in particular the proportional-integral-derivative (PID) controller control loop mechanism widely used in industrial control systems, to mitigate faults by adjusting the inputs of the mechanism. The PID controller aims to detect the possibility of a fault far enough in advance that an action can be performed to prevent it from happening. To demonstrate the feasibility of the approach, we tackle two common execution faults of the Big Data era: data footprint and memory usage. We define, implement, and evaluate PID controllers to autonomously manage the data and memory usage of a bioinformatics workflow that consumes/produces over 4.4 TB of data and requires over 24 TB of memory to run all tasks concurrently. Experimental results indicate that workflow executions may significantly benefit from PID controllers, in particular under online and unknown conditions. Simulation results show that nearly optimal executions (slowdown of 1.01) can be attained when using our proposed control loop, and that faults are detected and mitigated far in advance.
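
    For reference, the control law behind the PID mechanism mentioned above has the standard textbook form below; the error signal e(t) would here be the deviation of the monitored data footprint or memory usage from its setpoint, and the gains K_p, K_i, K_d are generic tuning parameters rather than values from the paper.

```latex
u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d\, \frac{d e(t)}{d t}
```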

    On-the-fly scheduling vs. reservation-based scheduling for unpredictable workflows

    Scientific insights in the coming decade will clearly depend on the effective processing of large datasets generated by dynamic heterogeneous applications typical of workflows in large data centers or of emerging fields like neuroscience. In this paper, we show how these big data workflows have a unique set of characteristics that pose challenges for leveraging HPC methodologies, particularly in scheduling. Our findings indicate that execution times for these workflows are highly unpredictable and are not correlated with the size of the dataset involved or the precise functions used in the analysis. We characterize this inherent variability and sketch the need for new scheduling approaches by quantifying significant gaps in achievable performance. Through simulations, we show how on-the-fly scheduling approaches can deliver benefits in both system-level and user-level performance measures. On average, we find improvements of up to 35% in system utilization and up to 45% in average stretch of the applications, illustrating the potential of increasing performance through new scheduling approaches.
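
    The abstract reports improvements in "average stretch" without defining it; stretch (also called slowdown) is commonly computed as a job's flow time divided by its processing time on a dedicated system, with lower values being better. The snippet below is a minimal sketch under that generic definition; the paper may use a variant, and the sample numbers are made up.

```python
# Average stretch over a set of jobs, using the common definition
# stretch = (completion - arrival) / processing_time. Illustrative only.

def average_stretch(jobs):
    # jobs: list of (arrival, completion, processing_time) tuples
    return sum((c - a) / p for a, c, p in jobs) / len(jobs)

# Made-up example: two jobs with stretches 2.0 and 3.0, so the average is 2.5.
print(average_stretch([(0, 10, 5), (2, 20, 6)]))
```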

    Enhanced Job Ranking Backfilling Based on Linear and Logarithmic Ranking Equations

    Grid systems are used by researchers and scholars all over the world to solve complicated and complex problems in different sciences. Job ranking backfilling is a widely used model for improving the performance of job scheduling algorithms in grid systems; the model aims to serve the smallest job in the queue first. As a further improvement of job backfilling, researchers have proposed job ranking backfilling, which serves jobs based on a ranking equation. This paper proposes an enhanced job ranking algorithm based on linear and logarithmic ranking equations. Both proposed ranking equations use a curve estimation model to predict the variables' coefficients. In simulation, across different tests, job ranking backfilling with the linear ranking equation outperforms conventional job ranking backfilling, with average improvements of 3.2% in total execution time and 56.53% in average waiting time. In addition, job ranking backfilling with the logarithmic ranking equation shows average improvements of 1.78% in total execution time and 46.62% in average waiting time. The results indicate that the proposed ranking equations improve conventional job ranking backfilling in both high- and low-demand grid systems under different conditions.
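
    The abstract does not reproduce the ranking equations themselves; the placeholder functions below merely illustrate what a linear versus a logarithmic rank over job attributes could look like. The chosen attributes (estimated runtime, CPU count) and the coefficients are hypothetical assumptions, not the fitted equations from the paper.

```python
import math

# Placeholder ranking functions: a linear and a logarithmic score over job
# attributes. Coefficients a, b, c are hypothetical, not the paper's fitted values.

def linear_rank(runtime_est, cpus, a=1.0, b=1.0, c=0.0):
    return a * runtime_est + b * cpus + c

def log_rank(runtime_est, cpus, a=1.0, b=1.0, c=0.0):
    return a * math.log(runtime_est + 1) + b * math.log(cpus + 1) + c

# Backfilling would then pick the queued job with the smallest rank that fits
# into the current scheduling hole. Made-up (runtime, CPUs) pairs:
jobs = [(120, 4), (30, 1), (600, 16)]
best = min(jobs, key=lambda j: linear_rank(*j))
print(best)
```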

    Using Simple PID Controllers to Prevent and Mitigate Faults in Scientific Workflows

    Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute their applications seamlessly on a range of distributed platforms. In spite of many success stories, a key challenge for running workflows in distributed systems is failure prediction, detection, and recovery. In this paper, we propose an approach that uses control theory developed as part of autonomic computing to predict failures before they happen and mitigate them when possible. The proposed approach applies the proportional-integral-derivative (PID) controller control loop mechanism, which is widely used in industrial control systems, to mitigate faults by adjusting the inputs of the controller. The PID controller aims to detect the possibility of a fault far enough in advance that an action can be performed to prevent it from happening. To demonstrate the feasibility of the approach, we tackle two common execution faults of the Big Data era: data storage overload and memory overflow. We define, implement, and evaluate simple PID controllers to autonomously manage the data and memory usage of a bioinformatics workflow that consumes/produces over 4.4 TB of data and requires over 24 TB of memory to run all tasks concurrently. Experimental results indicate that workflow executions may significantly benefit from PID controllers, in particular under online and unknown conditions. Simulation results show that nearly optimal executions (slowdown of 1.01) can be attained when using our proposed method, and that faults are detected and mitigated far in advance of their occurrence.

    Fair, responsive scheduling of engineering workflows on computing grids

    This thesis considers scheduling in the context of a grid computing system used in engineering design. Users desire responsiveness and fairness in the treatment of the workflows they submit. Submissions outstrip the available computing capacity during the work day, and the queue is only caught up overnight and at weekends. The execution times observed span a wide range, from 10^0 to 10^7 core-minutes. The Projected Schedule Length Ratio (P-SLR) list scheduling policy is designed to use execution time estimates and the structure of the dependency graph to improve on the existing industrial FairShare policy. P-SLR aims to minimise the worst-case SLR of jobs and to keep SLR fair across the space of job execution times. P-SLR is shown to equal or surpass all other evaluated policies in responsiveness and fairness across the spectra of load and networking delays. P-SLR is also dominant where execution time estimates are within an order of magnitude of the real value. Such estimates are considered achievable using user knowledge or automated profiling. Outside this range, the Shortest Remaining Time First (SRTF) policy achieves better responsiveness and fairness. The Projected Value Remaining (PVR) policy considers the case where a curve specifying the value of a job over time is given. PVR aims to maximise total workload value, even under overload, by maximising the worst-case job value in a workload. PVR is shown to be dominant across the load and networking spectra. Where execution time estimates are coarser than the nearest power of 2, SRTF delivers higher value than PVR. SRTF also has responsiveness, fairness, and value close behind P-SLR and PVR throughout the range of load and network delays considered. However, the kinds of starvation under overload incurred by SRTF would almost certainly be undesirable if implemented in a production system.
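
    The abstract relies on the Schedule Length Ratio (SLR) metric without restating it; SLR is usually defined as a job's overall schedule length divided by its critical-path execution time (the lower bound achievable with unlimited resources), so values near 1.0 indicate near-ideal responsiveness. The helper below is a minimal sketch of that base metric under this assumed definition, not the thesis's P-SLR implementation.

```python
# Schedule Length Ratio (SLR): time a workflow spent in the system divided by
# the length of its critical path (a lower bound on any schedule). Lower is
# better; 1.0 means the job ran as fast as its dependencies allow. Sketch only.

def slr(finish_time, submit_time, critical_path_time):
    return (finish_time - submit_time) / critical_path_time

print(slr(finish_time=150, submit_time=0, critical_path_time=100))  # 1.5, made-up numbers
```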

    Market-Based Scheduling in Distributed Computing Systems

    In distributed computing systems (e.g., cluster and grid computing), the available resources can become scarce. Market mechanisms have the potential to coordinate resource demand and supply through suitable incentive mechanisms and thereby increase the economic efficiency of the overall system. Using four specific application scenarios, this thesis investigates how market mechanisms for distributed computing systems should be designed.

    Scheduling for Large Scale Distributed Computing Systems: Approaches and Performance Evaluation Issues

    Although our everyday life and society now depend heavily on communication and computation infrastructures, scientists and engineers have always been among the main consumers of computing power. This document provides a coherent overview of the research I have conducted in the last 15 years, which targets the management and performance evaluation of large-scale distributed computing infrastructures such as clusters, grids, desktop grids, volunteer computing platforms, ... when used for scientific computing. In the first part of this document, I present how I have addressed scheduling problems arising on distributed platforms (like computing grids), with a particular emphasis on heterogeneity and multi-user issues, hence in connection with game theory. Most of these problems are relaxed from a classical combinatorial optimization formulation into a continuous form, which makes it easy to account for key platform characteristics such as heterogeneity or complex topology while providing efficient, practical, and distributed solutions. The second part presents my main contributions to the SimGrid project, a simulation toolkit for building simulators of distributed applications (originally designed for scheduling algorithm evaluation purposes). It comprises a unified presentation of how the questions of validation and scalability have been addressed in SimGrid, as well as thoughts on specific challenges related to methodological aspects and to the application of SimGrid to the HPC context.