Automatic parallelization in compilers is becoming more important as computing expands to include more distributed systems. This paper presents a comparative study of past and present techniques for automatic parallelization, including scalar analysis, array analysis, and commutativity analysis. The need for automatic parallelization grows as clusters and other forms of distributed computing gain popularity, and as CPU designs trend toward higher degrees and coarser granularities of parallelism. We review known techniques for identifying thread-level parallelism in programs, and argue that these same techniques may also apply to generalized coarse-grain task identification.