    PASSION: Parallel And Scalable Software for Input-Output

    We are developing a software system called PASSION: Parallel And Scalable Software for Input-Output, which provides software support for high-performance parallel I/O. PASSION provides support at the language, compiler, runtime, and file system levels. It provides runtime procedures for parallel access to files (read/write) as well as for out-of-core computations. These routines can either be used together with a compiler to translate out-of-core data parallel programs written in a language such as HPF, or used directly by application programmers. A number of optimizations, such as Two-Phase Access, Data Sieving, Data Prefetching, and Data Reuse, have been incorporated into the PASSION Runtime Library for improved performance. PASSION also provides an initial framework for runtime support for out-of-core irregular problems. The goal of the PASSION compiler is to automatically translate out-of-core data parallel programs into node programs for distributed memory machines, with calls to the PASSION Runtime Library. At the language level, PASSION suggests extensions to HPF for out-of-core programs. At the file system level, PASSION provides support for buffering and prefetching data from disks. A portable parallel file system is also being developed as part of this project, which can be used across homogeneous or heterogeneous networks of workstations. PASSION also provides support for integrating data and task parallelism using parallel I/O techniques. We have used PASSION to implement a number of out-of-core applications such as a Laplace's equation solver, 2D FFT, Matrix Multiplication, LU Decomposition, image processing applications, and unstructured mesh kernels in molecular dynamics and computational fluid dynamics. We are currently in the process of applying PASSION to applications in CFD (3D turbulent flows), molecular structure calculations, seismic computations, and earth and space science applications such as Four-Dimensional Data Assimilation. PASSION is currently available on the Intel Paragon, Touchstone Delta, and iPSC/860. Efforts are underway to port it to the IBM SP-1 and SP-2 using the Vesta Parallel File System.
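
    As a hedged illustration of the kind of collective I/O PASSION pioneered, the C sketch below reads a block-distributed out-of-core array with MPI-IO. It does not use PASSION's own API; the collective MPI_File_read_at_all call simply lets the I/O library apply two-phase access and data sieving internally, analogous to the optimizations described above. The file name and array sizes are illustrative assumptions.

    /* Minimal MPI-IO sketch (not PASSION's actual API): each process reads its
     * contiguous block of a shared file with a collective call, which allows
     * the library to merge small requests (two-phase access / data sieving). */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const MPI_Offset n_global = 1 << 20;            /* total doubles in the file (assumed) */
        const MPI_Offset n_local  = n_global / nprocs;  /* contiguous block per process */
        double *buf = malloc((size_t)n_local * sizeof(double));

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "array.dat",      /* hypothetical file name */
                      MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

        /* Collective read: every process participates, so the I/O layer can
         * reorganize the requests into a few large contiguous accesses. */
        MPI_Offset offset = rank * n_local * (MPI_Offset)sizeof(double);
        MPI_File_read_at_all(fh, offset, buf, (int)n_local, MPI_DOUBLE,
                             MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }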

    SNAP, Crackle, WebWindows!

    We elaborate the SNAP view of computing in the year 2000, where SNAP stands for Scalable (ATM) Network and (PC) Platforms. The World Wide Web will continue its rapid evolution, and in the future, applications will not be written for Windows NT/95 or UNIX, but rather for WebWindows, with interfaces defined by the standards of Web servers and clients. This universal environment will support WebTop productivity tools, such as WebWord, WebLotus123, and WebNotes, built in a modular, dynamic fashion and undermining the business model for large software companies. We define a layered WebWindows software architecture in which applications are built on top of multi-use services. We discuss examples including business enterprise systems (IntraNets), health care, financial services, and education. HPCC is implicit throughout this discussion, for there is no larger parallel system than the World Wide metacomputer. We suggest building the MPP programming environment in terms of pervasive, sustainable WebWindows technologies. In particular, WebFlow will naturally support dataflow, integrating data- and compute-intensive applications on distributed heterogeneous systems.

    On the transition to turbulence of wall-bounded flows in general, and plane Couette flow in particular

    The main part of this contribution to the special issue of EJM-B/Fluids dedicated to Patrick Huerre outlines the problem of the subcritical transition to turbulence in wall-bounded flows in its historical perspective, with emphasis on plane Couette flow, the flow generated between counter-translating parallel planes. Subcritical here means discontinuous and direct, with strong hysteresis. This is due to the existence of nontrivial flow regimes between the global stability threshold Re_g, the upper bound for unconditional return to the base flow, and the linear instability threshold Re_c, characterized by unconditional departure from the base flow. The transitional range around Re_g is first discussed from an empirical viewpoint (§1). The recent determination of Re_g for pipe flow by Avila et al. (2011) is recalled. Plane Couette flow is next examined. In laboratory conditions, its transitional range displays an oblique pattern made of alternately laminar and turbulent bands, up to a third threshold Re_t beyond which turbulence is uniform. Our current theoretical understanding of the problem is next reviewed (§2): linear theory and non-normal amplification of perturbations; nonlinear approaches and dynamical systems, basin boundaries and chaotic transients in minimal flow units; spatiotemporal chaos in extended systems and the use of concepts from statistical physics, spatiotemporal intermittency and directed percolation, large deviations and extreme values. Two appendices present some recent personal results obtained in plane Couette flow about patterning, from numerical simulations and modeling attempts.
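
    Since the review leans on the directed-percolation picture of the transition, the following C sketch, a toy stochastic cellular automaton with an absorbing (laminar) state, may help fix ideas. It is not the paper's model; the lattice size, contamination probability, and step count are illustrative assumptions. Each active ("turbulent") site contaminates itself and its neighbours with probability p; below a critical p the turbulence eventually dies out, above it a finite turbulent fraction survives.

    /* Toy directed-percolation-style cellular automaton (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>

    #define N     1000    /* number of sites (assumed) */
    #define STEPS 2000    /* time steps (assumed) */

    int main(void)
    {
        double p = 0.65;                  /* contamination probability (assumed) */
        int *s  = calloc(N, sizeof(int));
        int *sn = calloc(N, sizeof(int));
        srand(12345);

        for (int i = 0; i < N; i++)       /* start fully turbulent */
            s[i] = 1;

        for (int t = 0; t < STEPS; t++) {
            for (int i = 0; i < N; i++) {
                int left  = s[(i - 1 + N) % N];
                int right = s[(i + 1) % N];
                int active_neighbourhood = left || s[i] || right;
                /* A site is turbulent at t+1 with probability p if it or a
                 * neighbour was turbulent at t; otherwise it relaminarizes. */
                sn[i] = (active_neighbourhood &&
                         (double)rand() / RAND_MAX < p) ? 1 : 0;
            }
            int *tmp = s; s = sn; sn = tmp;
        }

        int active = 0;
        for (int i = 0; i < N; i++)
            active += s[i];
        printf("turbulent fraction after %d steps: %.3f\n",
               STEPS, (double)active / N);

        free(s);
        free(sn);
        return 0;
    }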

    Portable lattice QCD software for massively parallel processor systems

    The seven ages of Fortran

    When IBM's John Backus first developed the Fortran programming language, back in 1957, he certainly never dreamt that it would become a world-wide success and still be going strong many years later. Given the oft-repeated predictions of its imminent demise, starting around 1968, it is a surprise, even to some of its most devoted users, that this much-maligned language is not only still with us, but is being further developed for the demanding applications of the future. What has made this programming language succeed where most slip into oblivion? One reason is certainly that the language has been regularly standardized. In this paper we will trace the evolution of the language from its first version and through six cycles of formal revision, and speculate on how this might continue. Now, modern Fortran is a procedural, imperative, compiled language with a syntax well suited to a direct representation of mathematical formulas. Individual procedures may be compiled separately or grouped into modules, either way allowing the convenient construction of very large programs and procedure libraries. Procedures communicate via global data areas or by argument association. The language now contains features for array processing, abstract data types, dynamic data structures, object-oriented programming, and parallel processing.

    Introducing Molly: Distributed Memory Parallelization with LLVM

    Programming for distributed memory machines has always been a tedious task, but a necessary one, because compilers have not been able to optimize sufficiently for such machines on their own. Molly is an extension to the LLVM compiler toolchain that is able to distribute and reorganize workload and data when the program is organized as statically determined loop control flow. These loops are represented as polyhedral integer-point sets, on which program transformations can be applied. Memory distribution and layout can be declared by the programmer as needed, and the necessary asynchronous MPI communication is generated automatically. The primary motivation is to run Lattice QCD simulations on IBM Blue Gene/Q supercomputers, but since the implementation is not yet complete, this paper demonstrates the capabilities on Conway's Game of Life.
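
    To make the generated-communication idea concrete, here is a hand-written C/MPI sketch of the kind of halo exchange a tool like Molly would have to emit for Conway's Game of Life with the grid distributed by rows. It is not Molly's actual output; the grid sizes, the 1D decomposition, and the periodic boundaries are illustrative assumptions.

    /* Game of Life on an NX x NY grid, rows distributed across MPI ranks,
     * with one ghost row exchanged per neighbour each step. */
    #include <mpi.h>
    #include <stdlib.h>

    #define NX    64      /* global rows (assumed divisible by nprocs) */
    #define NY    64      /* columns */
    #define STEPS 100

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int up   = (rank - 1 + nprocs) % nprocs;    /* periodic neighbours */
        int down = (rank + 1) % nprocs;
        int lx   = NX / nprocs;                     /* local rows without ghosts */

        /* (lx + 2) rows: row 0 and row lx + 1 are ghost rows. */
        unsigned char (*grid)[NY] = calloc((size_t)(lx + 2) * NY, 1);
        unsigned char (*next)[NY] = calloc((size_t)(lx + 2) * NY, 1);
        for (int i = 1; i <= lx; i++)
            for (int j = 0; j < NY; j++)
                grid[i][j] = (unsigned char)(rand() % 2);   /* random start */

        for (int t = 0; t < STEPS; t++) {
            /* Halo exchange: this is the communication Molly aims to derive
             * automatically from the declared data distribution. */
            MPI_Sendrecv(grid[1],      NY, MPI_UNSIGNED_CHAR, up,   0,
                         grid[lx + 1], NY, MPI_UNSIGNED_CHAR, down, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(grid[lx],     NY, MPI_UNSIGNED_CHAR, down, 1,
                         grid[0],      NY, MPI_UNSIGNED_CHAR, up,   1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            for (int i = 1; i <= lx; i++) {
                for (int j = 0; j < NY; j++) {
                    int jl = (j - 1 + NY) % NY, jr = (j + 1) % NY;
                    int n = grid[i-1][jl] + grid[i-1][j] + grid[i-1][jr]
                          + grid[i  ][jl]                + grid[i  ][jr]
                          + grid[i+1][jl] + grid[i+1][j] + grid[i+1][jr];
                    next[i][j] = (n == 3 || (grid[i][j] && n == 2)) ? 1 : 0;
                }
            }
            unsigned char (*tmp)[NY] = grid;
            grid = next;
            next = tmp;
        }

        free(grid);
        free(next);
        MPI_Finalize();
        return 0;
    }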