    Scheduling computations with provably low synchronization overheads

    Work stealing has been a very successful algorithm for scheduling parallel computations, and is known to achieve high performance even for computations exhibiting fine-grained parallelism. We present a variant of work stealing that provably avoids most synchronization overheads by keeping processors' deques entirely private by default, and only exposing work when requested by thieves. This is the first paper to obtain bounds on the synchronization overheads that are (essentially) independent of the total amount of work, which represents a substantial improvement, in both algorithm design and theory, over state-of-the-art work-stealing algorithms. Consider any computation with work $T_{1}$ and critical-path length $T_{\infty}$ executed by $P$ processors using our scheduler. Our analysis shows that the expected execution time is $O\left(\frac{T_{1}}{P} + T_{\infty}\right)$, and that the expected synchronization overheads incurred during the execution are at most $O\left(\left(C_{CAS} + C_{MFence}\right) P T_{\infty}\right)$, where $C_{CAS}$ and $C_{MFence}$ respectively denote the maximum cost of executing a Compare-And-Swap instruction and a Memory Fence instruction.
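
    The sketch below is only an illustration of the private-deque idea described in the abstract, not the paper's actual algorithm or its analysis: the owner keeps its tasks in an unsynchronized deque and synchronizes (one CAS on the thief's side, plus cheap polling on the owner's side) only when a thief explicitly requests work. All names (Worker, request_, transfer_, steal) are illustrative assumptions, and the example is simplified to a single thief.

// Minimal single-owner, single-thief sketch of request-based ("private deque") stealing.
#include <atomic>
#include <deque>
#include <functional>
#include <iostream>
#include <optional>
#include <thread>

using Task = std::function<void()>;
constexpr int NO_REQUEST = -1;

struct Worker {
    std::deque<Task> deque_;                // private: only the owner touches it
    std::atomic<int> request_{NO_REQUEST};  // a thief CASes its id in here
    std::atomic<Task*> transfer_{nullptr};  // owner publishes the handed-over task here

    // Owner side: if a steal request is pending, hand over one task (or nothing).
    // This polling point is the only place the owner synchronizes with thieves.
    void answer_request() {
        if (request_.load(std::memory_order_acquire) == NO_REQUEST) return;
        Task* reply = nullptr;
        if (!deque_.empty()) {
            reply = new Task(std::move(deque_.back()));
            deque_.pop_back();
        }
        transfer_.store(reply, std::memory_order_release);
        request_.store(NO_REQUEST, std::memory_order_release);  // answer complete
    }

    // Thief side: post a request with CAS, then wait for the owner's answer.
    std::optional<Task> steal(int thief_id) {
        int expected = NO_REQUEST;
        if (!request_.compare_exchange_strong(expected, thief_id,
                                              std::memory_order_acq_rel))
            return std::nullopt;            // another request is already pending
        while (request_.load(std::memory_order_acquire) != NO_REQUEST)
            std::this_thread::yield();      // owner clears request_ last
        Task* reply = transfer_.exchange(nullptr, std::memory_order_acq_rel);
        if (reply == nullptr) return std::nullopt;  // victim had no spare work
        Task stolen = std::move(*reply);
        delete reply;
        return stolen;
    }
};

int main() {
    Worker owner;
    for (int i = 0; i < 8; ++i)
        owner.deque_.push_back([i] { std::cout << "task " << i << " ran\n"; });

    std::atomic<bool> owner_done{false}, thief_done{false};

    std::thread thief([&] {
        int stolen = 0;
        while (!owner_done.load()) {
            if (auto t = owner.steal(/*thief_id=*/1)) { (*t)(); ++stolen; }
        }
        std::cout << "thief stole " << stolen << " task(s)\n";
        thief_done = true;
    });

    // Owner executes local tasks, polling for steal requests at task boundaries.
    while (!owner.deque_.empty()) {
        owner.answer_request();
        Task t = std::move(owner.deque_.front());
        owner.deque_.pop_front();
        t();
    }
    owner_done = true;
    while (!thief_done.load()) owner.answer_request();  // serve any late request
    thief.join();
}

    Note that, as in the abstract's cost model, the synchronizing instructions (the thief's CAS and the release/acquire traffic around an answered request) are paid only when work is actually requested, not on every push or pop of the owner's private deque.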