Decentralized List Scheduling
Classical list scheduling is a very popular and efficient technique for
scheduling jobs in parallel and distributed platforms. It is inherently
centralized. However, as the number of processors grows, the cost of
managing a single centralized list becomes prohibitive. A suitable approach
to reduce the contention is to distribute the list among the computational
units: each processor has only a local view of the work to execute. Thus, the
scheduler is no longer greedy and standard performance guarantees are lost.
The objective of this work is to study the extra cost that must be paid when
the list is distributed among the computational units. We first present a
general methodology for computing the expected makespan based on the analysis
of an adequate potential function which represents the load imbalance between
the local lists. We obtain an equation for the evolution of the potential by
computing its expected decrease in one step of the schedule. Our main theorem
shows how to solve such equations to bound the makespan. Then, we apply this
method to several scheduling problems, namely, for unit independent tasks, for
weighted independent tasks and for tasks with precedence constraints. More
precisely, we prove that the time for scheduling a global workload W composed
of independent unit tasks on m processors is equal to W/m plus an additional
term proportional to log_2 W. We provide a lower bound which shows that this is
optimal up to a constant. This result is extended to the case of weighted
independent tasks. In the last setting, precedence task graphs, our analysis
leads to an improvement on the bound of Arora et al. We finally provide some
experiments using a simulator. The distribution of the makespan is shown to fit
existing probability laws. The additive term is shown by simulation to be
around 3 log_2 W, confirming the tightness of our analysis.
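The model studied in this abstract (each processor working from its own local list, with idle processors pulling work from peers) can be illustrated with a small simulation. This is a sketch, not the authors' algorithm: the synchronous steps, random victim choice, and steal-half policy are assumptions made here for illustration only.

```python
import random

def decentralized_schedule(W, m, seed=0):
    """Simulate decentralized list scheduling of W unit tasks on m processors.

    Each processor keeps a local task count; in every synchronous step, each
    busy processor executes one unit task, and each idle processor picks a
    random victim and steals half of its remaining tasks (an assumed policy,
    chosen for illustration). Returns the makespan in steps.
    """
    rng = random.Random(seed)
    # Start with all W tasks on processor 0 (worst-case initial imbalance).
    load = [W] + [0] * (m - 1)
    steps = 0
    while sum(load) > 0:
        steps += 1
        idle = []
        for p in range(m):
            if load[p] > 0:
                load[p] -= 1          # execute one unit task
            else:
                idle.append(p)
        # Idle processors steal half of a random victim's remaining work.
        for p in idle:
            victim = rng.randrange(m)
            if load[victim] > 1:
                taken = load[victim] // 2
                load[victim] -= taken
                load[p] += taken
    return steps

if __name__ == "__main__":
    W, m = 1 << 14, 16
    T = decentralized_schedule(W, m)
    # The makespan exceeds the ideal W/m by a small additive term;
    # the abstract proves this overhead is proportional to log_2 W.
    print(f"makespan {T} vs ideal {W // m}")
```

Running this for several seeds shows the makespan clustering a small additive distance above W/m, which is the behavior the abstract's potential-function analysis quantifies.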
Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud
With the advent of cloud computing, organizations are nowadays able to react
rapidly to changing demands for computational resources. Not only can
individual applications be hosted on virtual cloud infrastructures, but so can
complete business processes. This allows the realization of so-called elastic
processes,
i.e., processes which are carried out using elastic cloud resources. Despite
the manifold benefits of elastic processes, there is still a lack of solutions
supporting them.
In this paper, we identify the state of the art of elastic Business Process
Management with a focus on infrastructural challenges. We conceptualize an
architecture for an elastic Business Process Management System and discuss
existing work on scheduling, resource allocation, monitoring, decentralized
coordination, and state management for elastic processes. Furthermore, we
present two representative elastic Business Process Management Systems which
are intended to counter these challenges. Based on our findings, we identify
open issues and outline possible research directions for the realization of
elastic processes and elastic Business Process Management.Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and
P. Hoenisch (2015). Elastic Business Process Management: State of the Art and
Open Challenges for BPM in the Cloud. Future Generation Computer Systems,
Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00