9 research outputs found

    Parthenon -- a performance portable block-structured adaptive mesh refinement framework

    On the path to exascale the landscape of computer device architectures and corresponding programming models has become much more diverse. While various low-level performance portable programming models are available, support at the application level lags behind. To address this issue, we present the performance portable block-structured adaptive mesh refinement (AMR) framework Parthenon, derived from the well-tested and widely used Athena++ astrophysical magnetohydrodynamics code, but generalized to serve as the foundation for a variety of downstream multi-physics codes. Parthenon adopts the Kokkos programming model and provides various levels of abstraction, from multi-dimensional variables, to packages defining and separating components, to the launching of parallel compute kernels. Parthenon allocates all data in device memory to reduce data movement, supports the logical packing of variables and mesh blocks to reduce kernel launch overhead, and employs one-sided, asynchronous MPI calls to reduce communication overhead in multi-node simulations. Using a hydrodynamics miniapp, we demonstrate weak and strong scaling on various architectures including AMD and NVIDIA GPUs, Intel and AMD x86 CPUs, IBM Power9 CPUs, as well as Fujitsu A64FX CPUs. At the largest scale on Frontier (the first TOP500 exascale machine), the miniapp reaches a total of 1.7×10^13 zone-cycles/s on 9,216 nodes (73,728 logical GPUs) at ~92% weak scaling parallel efficiency (starting from a single node). In combination with being an open, collaborative project, this makes Parthenon an ideal framework to target exascale simulations in which the downstream developers can focus on their specific application rather than on the complexity of handling massively-parallel, device-accelerated AMR.
    Comment: 17 pages, 11 figures, accepted for publication in IJHPCA. Codes available at https://github.com/parthenon-hpc-la
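    As a rough illustration of the Kokkos programming model that Parthenon builds on (a minimal sketch, not code taken from Parthenon itself; the array name and block extents are placeholders), a single device-resident mesh-block kernel might be launched like this:

```cpp
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    // Hypothetical extents of one mesh block (not Parthenon's actual sizes).
    const int nx = 16, ny = 16, nz = 16;

    // Device-resident 3D array; Kokkos picks the memory space of the default
    // execution backend (CUDA, HIP, OpenMP, ...), mirroring the
    // "allocate all data in device memory" approach described above.
    Kokkos::View<double***> u("u", nz, ny, nx);

    // Launch one parallel kernel over the whole block.
    Kokkos::parallel_for(
        "init_block",
        Kokkos::MDRangePolicy<Kokkos::Rank<3>>({0, 0, 0}, {nz, ny, nx}),
        KOKKOS_LAMBDA(const int k, const int j, const int i) {
          u(k, j, i) = 1.0;
        });
    Kokkos::fence();
  }
  Kokkos::finalize();
  return 0;
}
```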

    Doctor of Philosophy

    Almost all high performance computing applications are written in MPI, which will continue to be the case for at least the next several years. Given the huge and growing importance of MPI, and the size and sophistication of MPI codes, scalable and incisive MPI debugging tools are essential. Existing MPI debugging tools have, despite their strengths, many glaring deficiencies, especially when it comes to debugging in the presence of nondeterminism-related bugs, which are bugs that do not always show up during testing. These bugs usually become manifest when the systems are ported to different platforms for production runs. This dissertation focuses on the problem of developing scalable dynamic verification tools for MPI programs that can provide a coverage guarantee over the space of MPI nondeterminism. That is, the tools should be able to detect different outcomes of nondeterministic events in an MPI program and enforce all those different outcomes through repeated executions of the program with the same test harness. We propose to achieve the coverage guarantee by introducing efficient distributed causality tracking protocols that are based on the matches-before order. The matches-before order is introduced to address the shortcomings of the Lamport happens-before order [40], which is not sufficient to capture causality for MPI program executions due to the complexity of the MPI semantics. The two protocols we propose are the Lazy Lamport Clocks Protocol (LLCP) and the Lazy Vector Clocks Protocol (LVCP). LLCP provides good scalability with a small possibility of missing potential outcomes of nondeterministic events, while LVCP provides a full coverage guarantee with a scalability tradeoff. In practice, we show through our experiments that LLCP provides the same coverage as LVCP. This thesis makes the following contributions:
    • The MPI matches-before order, which captures the causality between MPI events in an MPI execution.
    • Two distributed causality tracking protocols for MPI programs that rely on the matches-before order.
    • A Distributed Analyzer for MPI programs (DAMPI), which implements the two aforementioned protocols to provide scalable and modular dynamic verification for MPI programs.
    • Scalability enhancements through algorithmic improvements for ISP, a dynamic verifier for MPI programs.
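    For readers unfamiliar with the clock machinery that LLCP and LVCP build on, the following is a minimal sketch of a plain vector clock in C++ (the textbook Fidge/Mattern construction, not the lazy, MPI-specific protocols developed in the dissertation); all names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal vector clock for a system of nprocs processes; rank is the local process.
struct VectorClock {
  std::vector<unsigned long> c;
  int rank;

  VectorClock(int nprocs, int my_rank) : c(nprocs, 0), rank(my_rank) {}

  // Local event (e.g., an MPI call completes its match).
  void tick() { ++c[rank]; }

  // Merge a clock received piggybacked on a message, then advance locally.
  void merge(const std::vector<unsigned long>& other) {
    for (std::size_t i = 0; i < c.size(); ++i) c[i] = std::max(c[i], other[i]);
    ++c[rank];
  }

  // True if this clock is strictly ordered before the other
  // (the causal "happened-before" test).
  bool before(const VectorClock& o) const {
    bool strictly_less = false;
    for (std::size_t i = 0; i < c.size(); ++i) {
      if (c[i] > o.c[i]) return false;
      if (c[i] < o.c[i]) strictly_less = true;
    }
    return strictly_less;
  }
};
```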

    An MPI-based 2D data redistribution library for dense arrays

    In HPC, data redistributions (reorganizations) are used in parallel applications to improve performance and/or provide data-locality compatibility with sequences of parallel operations. Data reorganization refers to changing the logical and physical arrangement of data (such as dense arrays or matrices distributed over peer processes in a parallel program). This operation can be achieved by applying transformations such as transpositions or rotations, or by changing how data is mapped across the P-by-Q process grid, all of which are accomplished either with message passing or with distributed shared memory operations. In this project we restrict ourselves to a distributed-memory model with message passing; we use neither a shared-memory model nor distributed shared memory APIs. Our primary goal is to produce a library capable of diverse data reorganizations. We aim to develop a high-level Application Programming Interface (API) that works directly with the Message Passing Interface (MPI) to accomplish data redistributions in data-parallel applications and libraries, such as the polymath library (related research), a collection of parallel dense matrix multiplication algorithms with flexible data layouts. Using reorganizations between process grid shapes with a constant total number of processes, we plan to observe the performance trends of the polymath dense parallel matrix-multiplication library for different grid shapes, problem sizes, and numbers of processes, and to decide whether redistributing data is more efficient for achieving the highest performance. We will test other redistributions for some of the process shapes used with the polymath library to identify how redistribution impacts performance. These tests will give us the information needed to determine whether redistribution is worthwhile for non-optimal process layouts (as established with the polymath system). Besides changing the shape of the data in terms of its layout on a logical process topology, we also plan to study and demonstrate data transpose algorithms in this library, another useful redistribution mechanism for dense linear algebra in distributed-memory parallel computing.
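    As a small, self-contained illustration of the MPI machinery such a redistribution library sits on top of (a sketch only, not the project's actual API; the grid construction shown is the standard MPI Cartesian-topology idiom), the following builds a P-by-Q process grid and reports each rank's position in it:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
  MPI_Init(&argc, &argv);

  int nprocs = 0, rank = 0;
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Let MPI factor the total process count into a P x Q grid.
  int dims[2] = {0, 0};
  MPI_Dims_create(nprocs, 2, dims);

  // Non-periodic 2D Cartesian communicator over which a dense matrix
  // would be block-distributed and later redistributed.
  int periods[2] = {0, 0};
  MPI_Comm grid;
  MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);

  int coords[2];
  MPI_Cart_coords(grid, rank, 2, coords);
  std::printf("rank %d -> grid position (%d, %d) of %d x %d\n",
              rank, coords[0], coords[1], dims[0], dims[1]);

  MPI_Comm_free(&grid);
  MPI_Finalize();
  return 0;
}
```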

    Staged Methodologies for Parallel Programming

    This thesis presents a parallel programming model based on the gradual introduction of implementation detail. It comprises a series of decision stages that each fix a different facet of the implementation. The initial stages of the model elide many of the parallelisation concerns, while later stages allow low-level control over the implementation details. This allows the programmer to make decisions about each concern at an appropriate level of abstraction. The model provides abstractions not present in single-view explicitly parallel languages, while at the same time allowing more control and freedom of expression than typical high-level treatments of parallelism. A prototype system, called PEDL, was produced to evaluate the effectiveness of this programming model. This system allows the derivation of distributed-memory SPMD implementations for array based numerical computations. The decision stages are structured as a series of related languages, each of which presents a more explicit model of the parallel machine. The starting point is a high-level specification of the computational portion of the algorithm, from which a low-level implementation is derived that describes all the parallelisation detail. The derivation proceeds by transforming the program from one language to the next, adding implementation detail at each stage. The system is amenable to producing correctness proofs of the transformations, although this is not required. All languages in the system are executable: programs undergoing derivation can be checked and tested to provide the programmer with feedback. The languages are implemented by embedding them within a host functional language. Their structure is represented within the type system of the host language. This allows programs to be expressed in languages from a combination of stages, which is useful during derivation, while still being able to distinguish the different languages. Once all the parallelisation details have been fixed, the final implementation is generated by a process of transformation and translation. This implementation is a conventional imperative program in which communication is provided by the MPI library. The thesis presents case studies of the use of the system: programs undergoing derivation were found to be clear and concise, and the system was found to introduce little overhead into the final implementation.
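    To give a concrete picture of the kind of final-stage program the derivation targets (a hypothetical example of the general SPMD-plus-MPI shape, not output produced by PEDL; names and sizes are placeholders), a distributed array reduction might look like this:

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char* argv[]) {
  MPI_Init(&argc, &argv);

  int rank = 0, nprocs = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

  // Each process owns one slice of the distributed array (SPMD style).
  const int local_n = 1000;
  std::vector<double> local(local_n, 1.0);

  double local_sum = 0.0;
  for (double x : local) local_sum += x;

  // Combine the per-process results with a collective reduction.
  double global_sum = 0.0;
  MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

  if (rank == 0)
    std::printf("global sum = %.1f over %d processes\n", global_sum, nprocs);

  MPI_Finalize();
  return 0;
}
```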

    Quality Assessment and Variance Reduction in Monte Carlo Rendering Algorithms

    Over the past few decades much work has been focused on the area of physically based rendering, which attempts to produce images that are indistinguishable from natural images such as photographs. Physically based rendering algorithms simulate the complex interactions of light with physically based material, light source, and camera models by structuring them as complex high-dimensional integrals [Kaj86] which do not have a closed-form solution. Stochastic processes such as Monte Carlo methods can be structured to approximate the expectation of these integrals, producing algorithms which converge to the true rendering solution in the limit as the amount of computation is increased.
    When a finite amount of computation is used to approximate the rendering solution, images will contain undesirable distortions in the form of noise from under-sampling in image regions with complex light interactions. An important aspect of developing algorithms in this domain is to have a means of accurately comparing and contrasting the relative performance gains between different approaches. Image Quality Assessment (IQA) measures provide a way of condensing the high dimensionality of image data to a single scalar value which can be used as a representative measure of image quality and fidelity. These measures are largely developed in the context of image datasets containing natural images (photographs) coupled with their synthetically distorted versions, and quality assessment scores given by human observers under controlled viewing conditions. Inference using these measures therefore relies on whether the synthetic distortions used to develop the IQA measures are representative of the natural distortions that will be seen in images from the domain being assessed.
    When we consider images generated through stochastic rendering processes, the structure of visible distortions present in un-converged images is highly complex and spatially varying based on lighting and scene composition. In this domain the simple synthetic distortions commonly used to train and evaluate IQA measures are not representative of the complex natural distortions from the rendering process. This raises the question of how robust IQA measures are when applied to physically based rendered images.
    In this thesis we summarize the classical and recent works in the area of physically based rendering using stochastic approaches such as Monte Carlo methods. We develop a modern C++ framework wrapping MPI for managing and running code on large-scale distributed computing environments. With this framework we use high performance computing to generate a dataset of Monte Carlo images. From this we provide a study on the effectiveness of modern and classical IQA measures and their robustness when evaluating images generated through stochastic rendering processes. Finally, we build on the strengths of these IQA measures and apply modern deep-learning methods to the No Reference IQA problem, where we wish to assess the quality of a rendered image without knowing its true value.
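    To make the estimator concrete, here is a toy Monte Carlo integrator in C++ (a generic sketch of approximating an integral by a sample mean, not a rendering kernel from the thesis framework; the integrand and sample count are arbitrary):

```cpp
#include <cstdio>
#include <random>

// Toy Monte Carlo estimator: approximate the integral of f over [0,1]
// by the sample mean of f at uniformly drawn points. The estimate converges
// to the true value as the sample count grows, with the variance of the
// estimate shrinking like 1/N -- the same trade-off between computation and
// residual noise that un-converged rendered images exhibit.
int main() {
  std::mt19937_64 rng(42);
  std::uniform_real_distribution<double> u01(0.0, 1.0);

  auto f = [](double x) { return x * x; };  // exact integral over [0,1] is 1/3

  const long n_samples = 1'000'000;
  double sum = 0.0;
  for (long i = 0; i < n_samples; ++i) sum += f(u01(rng));

  std::printf("estimate = %.6f (exact 1/3 = %.6f)\n",
              sum / n_samples, 1.0 / 3.0);
  return 0;
}
```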

    Programming Languages and Systems

    This open access book constitutes the proceedings of the 29th European Symposium on Programming, ESOP 2020, which was planned to take place in Dublin, Ireland, in April 2020, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The actual ETAPS 2020 meeting was postponed due to the COVID-19 pandemic. The papers deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.

    Triple Helix as a Strategic Tool to Fast-Track Climate Change Adaptation in Rural Kenya: Case Study of Marsabit County

    The lack of affordable, clean, and reliable energy in Africa's rural areas forces people to resort to poor-quality energy sources, which is detrimental to people's health and prevents the economic development of communities. Moreover, access to safe water and food security are concerns closely linked to health issues and child malnutrition. Recent climate change due to global warming has worsened the already critical situation.
    Electricity is well known to be an enabler of development, as it allows the use of modern devices, thus enabling not only the development of income-generating activities but also water pumping and food processing and conservation that can promote socioeconomic growth. However, all of this is difficult to achieve due to the lack of investors, local skills, and awareness in the community, and often also due to government regulations.
    All the above-mentioned barriers to the uptake of electricity in rural Kenya could be addressed by the coordinated effort of government, the private sector, and academia, also referred to as the Triple Helix, in which each entity may partially take on the others' roles. This chapter discusses the above and shows how a specific county (Marsabit) has benefited from this triple intervention. Existing government policies and actions, and programs led by nongovernmental organizations (NGOs) and international agencies, are reviewed, highlighting the current interconnections and gaps in promoting integrated actions toward climate change adaptation and energy access.

    Plants and Plant Products in Local Markets Within Benin City and Environs

    The vulnerability of agricultural systems in Africa to climate change is directly and indirectly affecting the availability and diversity of plants and plant products available in local markets. In this chapter, markets in Benin City and its environs were assessed to document the availability of plants and plant products. Markets were grouped into urban, suburban, and rural, with each group having four markets. The majority of the plant and plant product vendors were women, and 88 plant species belonging to 42 families were found. Their scientific and common names were documented, as well as the parts of the plants and the associated products available in the markets. Most of the plants and plant products found in local markets belong to major plant families. Urban markets had the highest diversity of plants and plant products. Three categories of plants and plant products were documented: around 67% of the plants and plant products were categorized as whole plants/plant parts, 28% as processed plant parts, and 5% as reprocessed plants/plant parts. It was revealed that 86% of these plants are used as foods, 11% for medicinal purposes, and 3% for other purposes. About 35% of plants and plant products across the markets were fruits, an indication that the city and its environs are a rich source of fruits. The local knowledge and practices associated with the plants and plant products can contribute towards formulating a strategic response to climate change impacts on agriculture, gender, poverty, food security, and plant diversity.