819 research outputs found

    Components and Interfaces of a Process Management System for Parallel Programs

    Parallel jobs are different from sequential jobs and require a different type of process management. We present here a process management system for parallel programs such as those written using MPI. A primary goal of the system, which we call MPD (for multipurpose daemon), is to be scalable. By this we mean that startup of interactive parallel jobs comprising thousands of processes is quick, that signals can be quickly delivered to processes, and that stdin, stdout, and stderr are managed intuitively. Our primary target is parallel machines made up of clusters of SMPs, but the system is also useful in more tightly integrated environments. We describe how MPD enables much faster startup and better runtime management of parallel jobs. We show how close control of stdio can support the easy implementation of a number of convenient system utilities, even a parallel debugger. We describe a simple but general interface that can be used to separate any process manager from a parallel library, which we use to keep MPD separate from MPICH. (Comment: 12 pages; Workshop on Clusters and Computational Grids for Scientific Computing, Sept. 24-27, 2000, Le Chateau de Faverges de la Tour, France.)
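
    To make concrete what a process manager like MPD has to handle, the sketch below is a minimal MPI program of the kind such a system launches: the manager creates every process, tells the MPI library each process's rank and the job size, and routes the stdout lines back to the user. It uses only standard MPI calls and is an illustration, not code from the paper.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int rank, size;
            /* The process manager (e.g. MPD) has already created all the
               processes by the time MPI_Init completes. */
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
            /* stdout from every rank must be collected and forwarded by
               the process manager; this is the stdio management the paper
               discusses. */
            printf("hello from rank %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }

    Launched with a command such as "mpirun -np 1000 ./hello", the process manager is responsible for the fast startup, signal delivery, and stdio routing whose scalability the paper addresses.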

    State of the Art in Parallel Computing with R

    R is a mature open-source programming language for statistical computing and graphics. Many areas of statistical research are experiencing rapid growth in the size of data sets. Methodological advances drive increased use of simulations. A common approach is to use parallel computing. This paper presents an overview of techniques for parallel computing with R on computer clusters, on multi-core systems, and in grid computing. It reviews sixteen different packages, comparing them on their state of development, the parallel technology used, as well as on usability, acceptance, and performance. Two packages (snow, Rmpi) stand out as particularly suited to general use on computer clusters. Packages for grid computing are still in development, with only one package currently available to the end user. For multi-core systems five different packages exist, but a number of issues pose challenges to early adopters. The paper concludes with ideas for further developments in high performance computing with R. Example code is available in the appendix.

    MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface

    Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations. (Comment: 20 pages, 8 figures.)
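
    The abstract's point about using communicators for topology adaptation can be illustrated with a short, hypothetical sketch. Given some site identifier for each process (here read from an assumed SITE_ID environment variable; MPICH-G2 itself obtains topology information through the library rather than this way), MPI_Comm_split groups co-located processes into a communicator, so collectives within it stay on the fast local network.

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical stand-in for topology discovery: a real
           grid-enabled MPI derives this from its own configuration,
           not from an environment variable. */
        static int site_id(void) {
            const char *s = getenv("SITE_ID");
            return s ? atoi(s) : 0;
        }

        int main(int argc, char **argv) {
            int rank, site, local_rank;
            MPI_Comm site_comm;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            site = site_id();
            /* Processes with the same site id land in the same
               communicator; collectives on site_comm then avoid the
               slow wide-area links. */
            MPI_Comm_split(MPI_COMM_WORLD, site, rank, &site_comm);
            MPI_Comm_rank(site_comm, &local_rank);
            printf("global rank %d is rank %d at site %d\n",
                   rank, local_rank, site);
            MPI_Comm_free(&site_comm);
            MPI_Finalize();
            return 0;
        }

    An application (or library) can stage a global operation as a local collective per site followed by one exchange between sites, which is the kind of topology-aware adaptation the paper describes.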

    Performance analysis and optimisation of grid applications

    We investigated novel approaches to performance analysis and optimization for the efficient execution of grid applications, especially workflows. We took the special requirements of grid performance analysis into account when developing Mercury, a grid monitoring infrastructure. GRM, a performance monitor for parallel applications, has been integrated with both the relational grid information system R-GMA and with Mercury. We developed versions of the Pulse and Prove visualisation tools that support grid performance analysis, and wrote a comprehensive state-of-the-art survey of grid performance tools. We designed a novel workflow abstraction layer for P-GRADE, together with the P-GRADE grid portal; using the portal, users can edit and execute workflow applications on the grid through a web browser. The portal supports multiple grid implementations and provides monitoring capabilities for performance analysis. We tested the integration of the portal with grid resource brokers and augmented it with some degree of fault tolerance, so that failed runs can be recovered. Optimization may require migrating parts of an application to different resources, which in turn requires support for checkpointing: we enhanced the checkpointing facilities of P-GRADE and coupled them to the Condor job scheduler. We also extended the system with a load-balancer module that can migrate processes according to load as part of the optimization.

    Parallel image computation in clusters with task-distributor


    Using MPI: Portable Parallel Programming with the Message-Passing Interface, by William Gropp
