5 research outputs found

    Message Passing and Shared Address Space Parallelism on an SMP Cluster

    No full text
    Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and the programming effort required for, six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.
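
    For readers unfamiliar with the two paradigms the abstract contrasts, the following is a minimal illustrative sketch in C with MPI (not taken from the paper): each process owns a private slice of the data and an explicit collective call combines the partial results. Under the shared address space model, the same reduction would instead be a loop over shared data protected by a lock or expressed with an OpenMP reduction clause.

    /* Minimal message-passing sketch: a global sum via an MPI collective.  */
    /* Illustrative only; the paper's applications and collectives differ.  */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process computes a partial sum over its own slice of 0..999. */
        double local = 0.0;
        for (int i = rank; i < 1000; i += size)
            local += (double)i;

        /* Explicit communication: every rank receives the global result. */
        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.0f\n", global);

        MPI_Finalize();
        return 0;
    }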