Abstract—We develop a quantitative framework for understanding how OR parallelism can reduce execution times. In this model, tasks may either succeed or fail, and any one success completes the problem. Following Hoare, we call such tasks “colluding”. We model the situation where tasks’ execution times are not known in advance but instead follow some probability distribution. We show how expected serial and parallel execution times can be computed, and demonstrate how parallel execution can give a lower expected execution time than any serial order, even on a single processor. This model can be applied in domains, such as computer algebra, that use algorithms whose execution times cannot be readily predicted from their inputs.
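The single-processor claim can be illustrated with a toy instance of the model. The distribution below is purely hypothetical (not taken from the paper): each of two independent colluding tasks either takes 1 time unit and succeeds (probability 0.5) or takes 10 units and fails, forcing the remaining work to continue. A serial schedule runs one task to completion before starting the other; the interleaved schedule alternates unit time slices on one processor.

```python
# Illustrative example (assumed distribution, not from the paper):
# each task takes SHORT units and succeeds with probability P_SHORT,
# or takes LONG units and fails. Any one success ends the computation.

SHORT, LONG = 1, 10
P_SHORT = 0.5

def _prob(o1, o2):
    """Joint probability that task 1 / task 2 are the short, succeeding case."""
    return (P_SHORT if o1 else 1 - P_SHORT) * (P_SHORT if o2 else 1 - P_SHORT)

def serial_expected():
    """Expected time running task 1 to completion, then task 2 if needed."""
    e = 0.0
    for o1 in (True, False):
        for o2 in (True, False):
            if o1:
                t = SHORT            # task 1 succeeds; task 2 never runs
            elif o2:
                t = LONG + SHORT     # task 1 fails, then task 2 succeeds
            else:
                t = LONG + LONG      # both fail: all work is done serially
            e += _prob(o1, o2) * t
    return e

def interleaved_expected():
    """Expected time under round-robin unit time slices on one processor."""
    e = 0.0
    for o1 in (True, False):
        for o2 in (True, False):
            if o1:
                t = 1                # task 1 finishes in its first slice
            elif o2:
                t = 2                # task 2 finishes in its first slice
            else:
                t = 2 * LONG         # both fail after running to completion
            e += _prob(o1, o2) * t
    return e

print(serial_expected())       # 8.25 (by symmetry, either serial order)
print(interleaved_expected())  # 6.0
```

Here interleaving hedges against committing first to a task that turns out to be long and doomed to fail: its expected time (6.0) beats both serial orders (8.25 each), matching the abstract's claim that parallel execution can win even without extra processors.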