479 research outputs found
Efficient Machine-Independent Programming of High-Performance Multiprocessors
Parallel computing is regarded by most computer scientists as the most
likely approach for significantly improving computing power for scientists
and engineers. Advances in programming languages and parallelizing
compilers are making parallel computers easier to use by providing
a high-level portable programming model that protects software
investment. However, experience has shown that simply finding
parallelism is not always sufficient for obtaining good performance
from today's multiprocessors. The goal of this project is to develop
advanced compiler analysis of data and computation decompositions,
thread placement, communication, synchronization, and memory system
effects needed to exploit the performance-critical features of
modern parallel architectures.
Studies on automatic parallelization for heterogeneous and homogeneous multicore processors
System: new; Report number: Kou 3537; Degree: Doctor of Engineering; Date conferred: 2012/2/25; Waseda University degree number: Shin 587
An integrated runtime and compile-time approach for parallelizing structured and block structured applications
Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented; the library is currently available on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrating the efficacy of our approach are presented, using a multiblock Navier-Stokes solver template and a multigrid code. The results show that our primitives have low runtime communication overheads, and that the compiler-parallelized codes perform within 20 percent of code parallelized by manually inserting calls to the runtime library.
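The runtime approach described above typically follows an inspector/executor pattern: a communication "schedule" between coupled mesh blocks is computed once at runtime, then reused on every time step to fill ghost cells. The following is a minimal sketch of that idea for two 1-D blocks sharing one interface; the block layout, ghost width, and function names here are illustrative assumptions, not the actual library API.

```python
import numpy as np

NG = 1  # assumed ghost-cell width at the shared interface

def build_schedule():
    # Inspector phase (run once): record (src_block, src_slice,
    # dst_block, dst_slice) copy pairs. In a real multiblock library
    # this is derived from the mesh coupling description.
    return [
        # last interior cell of block 0 -> left ghost cell of block 1
        (0, slice(-2 * NG, -NG), 1, slice(0, NG)),
        # first interior cell of block 1 -> right ghost cell of block 0
        (1, slice(NG, 2 * NG), 0, slice(-NG, None)),
    ]

def exchange(blocks, schedule):
    # Executor phase (run every step): replay the precomputed schedule.
    # On a distributed-memory machine these copies would be messages.
    for src, ss, dst, ds in schedule:
        blocks[dst][ds] = blocks[src][ss]

# Block 0's right edge abuts block 1's left edge; the outermost cell
# on each side of the interface is a ghost cell to be filled.
blocks = [np.arange(6, dtype=float), np.arange(10, 16, dtype=float)]
sched = build_schedule()
exchange(blocks, sched)
# block 0's right ghost now holds block 1's first interior value (11.0);
# block 1's left ghost now holds block 0's last interior value (4.0).
```

Separating schedule construction from execution is what keeps the per-step communication overhead low: the geometric analysis is paid once, and each exchange is a straight replay of copy operations.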
- …