
    Estimation of quality of service in stochastic workflow schedules

    This paper investigates the problem of estimating the quality of a given solution to a workflow scheduling problem. The underlying workflow model is one where tasks and inter-task communication links have stochastic QoS attributes. It has been proved that even the exact determination of the schedule length distribution alone is #P-complete in the general case. This is true even if the problems of processor-to-task allocation and inter-task communication are abstracted away, as in program evaluation and review technique (PERT) approaches. Yet aside from the makespan, there are many more parameters that are important to service providers and customers alike, e.g., reliability, overall quality, and cost. The assumption is, as in the distributed makespan problem, that all of these parameters are defined in terms of random variables whose distributions are known a priori for each possible task-to-processor assignment. This research answers the open question of the complexity of the problem so formulated. We also propose methods other than naive Monte Carlo to estimate the schedule quality for the purpose of, e.g., benchmarking different scheduling algorithms in a multi-attribute stochastic setting. The key idea is to apply to a schedule a novel procedure that transforms it into a Bayesian network (BN). Once such a transformation is done, it is possible to prove that, for a known schedule, the problem of determining the overall QoS is still #P-complete, i.e., no more complex than the distributed PERT makespan problem. Moreover, it is possible to use the familiar Bayesian posterior probability estimation methods, given appropriately chosen evidence, instead of a blind Monte Carlo approach. As schedules are usually required to satisfy well-defined QoS constraints, these constraints can be mapped to appropriately chosen conditioning variables in the generated BN.
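    The contrast the abstract draws can be illustrated on a toy instance. The sketch below is a hypothetical example, not the paper's construction: three tasks with discrete stochastic durations (tasks A then B in sequence, C in parallel), where exact evaluation of P(makespan ≤ deadline) requires enumerating all joint outcomes (exponential in the number of tasks, mirroring the #P-completeness result), where naive Monte Carlo sampling gives the baseline estimator, and where conditioning on a QoS constraint (makespan within a deadline) plays the role of setting evidence in the generated BN. All task names, distributions, and deadlines here are invented for illustration.

    ```python
    import itertools
    import random

    # Hypothetical toy schedule: tasks A -> B run in sequence, C in parallel.
    # Each duration is a discrete random variable given as {value: probability}.
    durations = {
        "A": {2: 0.5, 4: 0.5},
        "B": {1: 0.7, 3: 0.3},
        "C": {5: 0.6, 8: 0.4},
    }
    TASKS = list(durations)

    def makespan(sample):
        """Schedule length: the later of the two parallel branches."""
        return max(sample["A"] + sample["B"], sample["C"])

    def enumerate_outcomes():
        """Yield (sample, probability) for every joint duration outcome."""
        for combo in itertools.product(*(durations[t].items() for t in TASKS)):
            sample = {t: value for t, (value, _) in zip(TASKS, combo)}
            prob = 1.0
            for _, (_, p) in zip(TASKS, combo):
                prob *= p
            yield sample, prob

    def exact_prob_within(deadline):
        """Exact P(makespan <= deadline) via full enumeration -- exponential
        in the number of tasks, which is why the general problem is hard."""
        return sum(p for s, p in enumerate_outcomes() if makespan(s) <= deadline)

    def posterior(task, deadline):
        """P(duration of `task` | makespan <= deadline): conditioning on a
        QoS constraint, analogous to setting evidence in a BN."""
        joint, marginal = {}, 0.0
        for s, p in enumerate_outcomes():
            if makespan(s) <= deadline:
                marginal += p
                joint[s[task]] = joint.get(s[task], 0.0) + p
        return {v: p / marginal for v, p in joint.items()}

    def monte_carlo_prob_within(deadline, n=100_000, seed=0):
        """Naive Monte Carlo baseline that BN-based estimation improves on."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n):
            sample = {t: rng.choices(list(d), weights=list(d.values()))[0]
                      for t, d in durations.items()}
            hits += makespan(sample) <= deadline
        return hits / n
    ```

    On this instance, exact enumeration touches every joint outcome, so its cost doubles with each added binary-valued task, while the Monte Carlo estimate converges only at the usual O(1/√n) rate; the posterior function shows why evidence-based conditioning is attractive, since a QoS constraint reshapes the distribution of individual task durations.
    
    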