The Effect of Compiler Optimizations on Available Parallelism in Scalar Programs

Abstract

In this paper we analyze the effect of compiler optimizations on fine-grain parallelism in scalar programs. We characterize three levels of optimization: classical, superscalar, and multiprocessor. We show that classical optimizations improve not only a program's efficiency but also its parallelism. Superscalar optimizations further improve the parallelism for moderately parallel machines. For highly parallel machines, however, they actually constrain available parallelism. The multiprocessor optimizations we consider are memory renaming and data migration.
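
As a hedged illustration (not drawn from the paper itself), the sketch below shows one way memory renaming can expose fine-grain parallelism: a scalar temporary reused across loop iterations creates anti- and output dependences that serialize the iterations, and giving each iteration its own copy of the temporary removes those false dependences. The function and variable names are hypothetical.

/* Minimal sketch (assumed example): renaming a reused temporary.      */
#include <stdio.h>

#define N 8

void before_renaming(const int *a, const int *b, int *c) {
    int t;                       /* one temporary shared by all iterations */
    for (int i = 0; i < N; i++) {
        t = a[i] + b[i];         /* every iteration writes the same t,     */
        c[i] = t * t;            /* so iterations cannot overlap           */
    }
}

void after_renaming(const int *a, const int *b, int *c) {
    for (int i = 0; i < N; i++) {
        int t = a[i] + b[i];     /* renamed: a private temporary per       */
        c[i] = t * t;            /* iteration; iterations are independent  */
    }
}

int main(void) {
    int a[N] = {1,2,3,4,5,6,7,8}, b[N] = {8,7,6,5,4,3,2,1}, c[N];
    before_renaming(a, b, c);
    after_renaming(a, b, c);
    for (int i = 0; i < N; i++) printf("%d ", c[i]);
    printf("\n");
    return 0;
}

Both versions compute the same results; the difference is only in the dependences a parallelizing compiler or superscalar scheduler would see.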
