
    Data Flow Analysis Across Tuplespace Process Boundaries

    The increasing attention toward distributed shared memory systems attests to the fact that programmers find shared memory parallel programming easier than message passing programming, while physically distributed memory multiprocessors and networks of workstations offer the desirable scalability for large applications. A current limitation of compilers for shared memory parallel languages is their restricted use of traditional scalar code-improving transformations, such as constant propagation and dead code elimination. The major problem lies in the failure of data flow analysis techniques developed for sequential programs in the context of shared memory programs with user-specified parallelism. Notable efforts to develop data flow frameworks for optimizing parallel programs have focused on programs with lexically-specified parallel constructs, such as cobegin/coend, where sections of the parallel constructs are data independent except where an appropriate synchronization mechanism is …
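    To make the abstract's point concrete, here is a minimal sketch (not from the paper; the statement format and names are assumptions for illustration) of sequential constant propagation. The pass records facts like "x is 2" and folds them into later uses; in a tuplespace program, a concurrent process could rewrite a shared variable between statements, invalidating those facts, which is why the sequential analysis fails across process boundaries.

    ```python
    def eval_expr(expr, env):
        """Evaluate an expr (int literal, variable name, or ('+', a, b))
        against known constants; return an int if fully known, else a
        residual expression."""
        if isinstance(expr, int):
            return expr
        if isinstance(expr, str):
            return env.get(expr, expr)
        op, a, b = expr
        a, b = eval_expr(a, env), eval_expr(b, env)
        if op == '+' and isinstance(a, int) and isinstance(b, int):
            return a + b
        return (op, a, b)

    def propagate_constants(stmts):
        """Fold known constants into later uses -- sound only under
        sequential semantics, since 'env' assumes no concurrent writer
        (e.g. no tuplespace in()/out() on these variables)."""
        env = {}   # variable -> known constant value
        out = []
        for target, expr in stmts:
            val = eval_expr(expr, env)
            if isinstance(val, int):
                env[target] = val          # fact: target is a constant
                out.append((target, val))  # use replaced by the constant
            else:
                env.pop(target, None)      # value no longer known
                out.append((target, expr))
        return out

    # x = 2; y = x + 3; z = y + x  -->  every right-hand side folds
    prog = [('x', 2), ('y', ('+', 'x', 3)), ('z', ('+', 'y', 'x'))]
    print(propagate_constants(prog))  # [('x', 2), ('y', 5), ('z', 7)]
    ```

    If any statement could be interleaved with a write from another process, the pass above would have to clear `env` at that point, losing all facts; the paper's framework addresses exactly this loss of precision.
    
    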