

By Nicholas Dipasquale, Vijay Gehlot and Thomas Way


Automatic parallelization in a compiler is becoming more important as computing expands to include more distributed systems. This paper presents a comparative study of past and present techniques for automatic parallelization, including scalar analysis, array analysis, and commutativity analysis. The need for automatic parallelization in compilers is growing as clusters and other forms of distributed computing become more popular, and as CPU technology trends toward higher degrees and coarser granularities of parallelism. We review known techniques for identifying thread-level parallelism in programs, and argue that these same techniques may also apply to generalized coarse-grain task identification.
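The abstract names commutativity analysis among the surveyed techniques. As an illustrative sketch only (not drawn from the paper, and with function names of my own choosing), the idea can be shown with a reduction loop: because the update operation commutes and associates, iterations may be reordered or distributed across threads without changing the result.

```python
from concurrent.futures import ThreadPoolExecutor

def sequential_sum(values):
    # Baseline loop: each iteration performs total += v.
    # Addition commutes, so iteration order does not affect the result.
    total = 0
    for v in values:
        total += v
    return total

def parallel_sum(values, workers=4):
    # Because the reduction operator commutes and associates, the input
    # can be split into chunks, partial sums computed in any order on
    # separate threads, and the partials combined afterwards.
    chunks = [values[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = ex.map(sum, chunks)
    return sum(partials)

data = list(range(100))
assert parallel_sum(data) == sequential_sum(data)
```

A compiler performing commutativity analysis would prove this reordering safe automatically rather than relying on the programmer; the sketch above only makes the underlying transformation concrete.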

Year: 2009
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX