22 research outputs found

    SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI


    Towards Automatic and Adaptive Optimizations of MPI Collective Operations

    Message passing is one of the most commonly used paradigms of parallel programming. The Message Passing Interface (MPI) is a standard used in scientific and high-performance computing. Collective operations are the subset of the MPI standard that deals with process synchronization, data exchange, and computation among a group of processes. Collective operations are commonly used and can be an application performance bottleneck. The performance of a collective operation depends on many factors, including the input parameters (e.g., communicator and message size), system characteristics (e.g., interconnect type), the application's computation and communication pattern, and internal algorithm parameters (e.g., internal segment size). We refer to an algorithm and its internal parameters as a method. The goal of this dissertation is to improve the performance of MPI collective operations and of the applications that use them. In our framework, during a collective call, a system-specific decision function is invoked to select the most appropriate method for the particular collective instance. This dissertation focuses on automatic techniques for generating system-specific decision functions. Our approach takes the following steps: first, we collect method performance information on the system of interest; second, we analyze this information using parallel communication models, graphical encoding methods, and decision trees; third, based on the previous step, we automatically generate the system-specific decision function to be used at run time. In situations where detailed performance measurement is not feasible, method performance models can be used to supplement the measured method performance information. We build and evaluate parallel communication models of 35 different collective algorithms. These models are built on top of three commonly used point-to-point communication models: Hockney, LogGP, and PLogP. We use the method performance information on a system to build quadtrees and C4.5 decision trees of variable sizes and accuracies. The collective method selection functions are then generated automatically from these trees. Our experiments show that quadtrees of three or four levels are often enough to approximate the experimentally optimal decision with a small mean performance penalty (less than 10%). The C4.5 decision trees are even more accurate (with a mean performance penalty of less than 5%). The size and accuracy of C4.5 decision trees can be further improved with the use of appropriate composite attributes (such as "total message size" or "even communicator size"). Finally, we apply these techniques to tune the collective operations on the Grig cluster at the University of Tennessee and to improve application performance on the Cray XT4 system at Oak Ridge National Laboratory. The tuned collective achieves more than 40% mean performance improvement over the native broadcast implementation, and using the platform-specific reduce on the Cray XT4 led to a 10% improvement in overall application performance. Our results show that the methods we explored are both applicable and effective for the system-specific optimization of collective operations and are a step toward automatically tunable, adaptive MPI collectives.
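
    As an illustration of what such an automatically generated, system-specific decision function might look like at run time, the sketch below (in C) encodes a small decision tree that selects a broadcast method from the communicator size and message size. The method names, thresholds, and the select_bcast_method interface are hypothetical; a real generated function would encode the tree learned from measured performance data on the target system.

        /* Hypothetical sketch of a generated decision function for broadcast.
         * Thresholds and method names are illustrative only. */
        #include <stddef.h>

        typedef enum {
            BCAST_BINOMIAL,                /* latency-optimised tree        */
            BCAST_SPLIT_BINARY_SEGMENTED,  /* segmented binary tree         */
            BCAST_PIPELINE_SEGMENTED       /* bandwidth-optimised pipeline  */
        } bcast_method_t;

        static bcast_method_t select_bcast_method(int comm_size, size_t msg_size)
        {
            if (msg_size <= 2048)              /* small messages: latency dominates */
                return BCAST_BINOMIAL;
            if (comm_size <= 16)               /* small communicators */
                return BCAST_SPLIT_BINARY_SEGMENTED;
            return BCAST_PIPELINE_SEGMENTED;   /* large messages on large groups */
        }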

    Evaluating the performance of legacy applications on emerging parallel architectures

    The gap between a supercomputer's theoretical maximum ("peak") floating-point performance and that actually achieved by applications has grown wider over time. Today, a typical scientific application achieves only 5-20% of any given machine's peak processing capability, and this gap leaves room for significant improvements in execution times. This problem is most pronounced for modern "accelerator" architectures: collections of hundreds of simple, low-clocked cores capable of executing the same instruction on dozens of pieces of data simultaneously. This is a significant change from the low number of high-clocked cores found in traditional CPUs, and effective utilisation of accelerators typically requires extensive code and algorithmic changes. In many cases, the best way in which to map a parallel workload to these new architectures is unclear. The principal focus of the work presented in this thesis is the evaluation of emerging parallel architectures (specifically, modern CPUs, GPUs and Intel MIC) for two benchmark codes, the LU benchmark from the NAS Parallel Benchmark Suite and Sandia's miniMD benchmark, which exhibit complex parallel behaviours representative of many scientific applications. Using combinations of low-level intrinsic functions, OpenMP, CUDA and MPI, we demonstrate performance improvements of up to 7x for these workloads. We also detail a code development methodology that permits application developers to target multiple architecture types without maintaining completely separate implementations for each platform. Using OpenCL, we develop performance-portable implementations of the LU and miniMD benchmarks that are faster than the original codes, and at most 2x slower than versions highly tuned for particular hardware. Finally, we demonstrate the importance of evaluating architectures at scale (as opposed to on single nodes) through performance modelling techniques, highlighting the problems associated with strong scaling on emerging accelerator architectures.
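
    As a minimal illustration (not taken from the thesis) of the kind of workload these accelerator architectures favour, the loop below applies the same operation independently to many array elements, the data-parallel pattern that wide SIMD units and GPU cores execute efficiently.

        /* Illustrative data-parallel loop: one operation, many data elements.
         * The simd pragma merely asks the compiler to vectorise the loop. */
        void axpy(int n, double a, const double *x, double *y)
        {
            #pragma omp simd
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }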

    Towards Scalable, Accurate, and Usable Simulations of Distributed Applications and Systems

    The study of parallel and distributed applications and platforms, whether in the cluster, grid, peer-to-peer, volunteer, or cloud computing domain, often mandates empirical evaluation of proposed algorithm and system solutions via simulation. Unlike direct experimentation via an application deployment on a real-world testbed, simulation enables fully repeatable and configurable experiments that can often be conducted quickly for arbitrary hypothetical scenarios. In spite of these promises, current simulation practice is often not conducive to obtaining scientifically sound results. State-of-the-art simulators are often not validated and their accuracy is unknown. Furthermore, due to the lack of accepted simulation frameworks and of transparent simulation methodologies, published simulation results are rarely reproducible. We highlight recent advances made in the context of the SimGrid simulation framework with a view to addressing this predicament across the aforementioned domains. These advances, which pertain both to science and engineering, together lead to unprecedented combinations of simulation accuracy and scalability, allowing the user to trade off one for the other. They also enhance simulation usability and reusability so as to promote an Open Science approach for simulation-based research in the field.

    Analytical modelling for the performance prediction and optimisation of near-neighbour structured grid hydrodynamics

    The advent of modern High Performance Computing (HPC) has facilitated the use of powerful supercomputing machines that have become the backbone of data analysis and simulation. With such a variety of software and hardware available today, understanding how well such machines can perform is key both for efficient use and for future planning. With significant costs and multi-year turn-around times, the procurement of a new HPC architecture can be a major undertaking. In this work, we introduce one measure for capturing the performance of such machines: analytical performance models. These models provide a mathematical representation of the behaviour of an application, in terms of how its various components perform on an architecture. By parameterising the workload so that the time taken to compute can be described in relation to one or more benchmarkable statistics, a model becomes a reusable representation of an application that can be applied to multiple architectures. This work goes on to introduce one benchmark of interest, Hydra, a 3D Eulerian structured mesh hydrocode implemented in Fortran, with which the explosive compression of materials, shock waves, and the behaviour of materials at the interface between components can be investigated. We assess its scaling behaviour and use this knowledge to construct a performance model that accurately predicts the runtime to within 15% across three separate machines, each with its own distinct characteristics. Further, this work explores various optimisation techniques, some of which yield a marked speedup in the overall walltime of the application. Finally, another software application of interest with similar behaviour patterns, PETSc, is examined to demonstrate how different applications can exhibit similar modellable patterns.
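
    The abstract does not reproduce the model's equations; purely as a generic illustration of the parameterised form such analytic models take (and not Hydra's actual model), the runtime of a near-neighbour structured grid code is often decomposed as

        % Illustrative decomposition; W_g, L and B are benchmarked on the target machine.
        T_{\text{total}} \approx
            \underbrace{\frac{N_{\text{cells}}}{P}\, W_g}_{\text{computation}}
          + \underbrace{n_{\text{steps}} \left( L + \frac{m_{\text{halo}}}{B} \right)}_{\text{halo exchange}}

    where W_g is the measured cost per grid cell, P the number of processors, L the message latency, m_halo the halo message size, and B the achievable network bandwidth.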

    Performance modelling and optimisation of inertial confinement fusion simulation codes

    Legacy code performance has failed to keep up with that of modern hardware. Many new hardware features remain under-utilised, with the majority of code bases still unable to make use of accelerated or heterogeneous architectures. Code maintainers now accept that they can no longer rely solely on hardware improvements to drive code performance, and that changes at the software engineering level need to be made. The principal focus of the work presented in this thesis is an analysis of the changes legacy Inertial Confinement Fusion (ICF) codes need to make in order to use current and future parallel architectures efficiently. We discuss the process of developing a performance model, and demonstrate the ability of such a model to make accurate predictions about the performance of code variants on a range of architectures. We build on the knowledge gained from this process, and examine how Particle-in-Cell (PIC) codes must change in order to reach the levels of portable and future-proof performance needed to leverage the capabilities of modern hardware. As part of this investigation, we present an OpenCL port of the legacy code EPOCH, as well as a fully featured mini-app representing EPOCH. Finally, as a direct consequence of these investigations, we apply these performance optimisations to the production version of EPOCH, culminating in a speedup of over 2x for the core algorithm.

    Towards scalable adaptive mesh refinement on future parallel architectures

    In the march towards exascale, supercomputer architectures are undergoing a significant change. Limited by power consumption and heat dissipation, future supercomputers are likely to be built around a lower-power many-core model. This shift in supercomputer design will require sweeping code changes in order to take advantage of the highly parallel architectures. Evolving or rewriting legacy applications to perform well on these machines is a significant challenge. Mini-applications, small computer programs that represent the performance characteristics of some larger application, can be used to investigate new programming models and improve the performance of the legacy application by proxy. These applications, being both easy to modify and representative, are essential for establishing a path to move legacy applications into the exascale era. The focus of the work presented in this thesis is the design, development and employment of a new mini-application, CleverLeaf, for shock hydrodynamics with block-structured adaptive mesh refinement (AMR). We report on the development of CleverLeaf, and show how the fresh start provided by a mini-application can be used to develop an application that is flexible, accurate, and easy to employ in the investigation of exascale architectures. We also detail the development of the first reported resident parallel block-structured AMR library for Graphics Processing Units (GPUs). Extending the SAMRAI library using the CUDA programming model, we develop datatypes that store data only in GPU memory, as well as the necessary operators for moving and interpolating data on an adaptive mesh. We show that executing AMR simulations on a GPU is up to 4.8x faster than on a CPU, and demonstrate scalability on over 4,000 nodes using a combination of CUDA and MPI. Finally, we show how mini-applications can be employed to improve the performance of production applications on existing parallel architectures by selecting the optimal application configuration. Using CleverLeaf, we identify the most appropriate configurations on three contemporary supercomputer architectures. Selecting the best parameters for our application can reduce run-time by up to 82% and reduce memory usage by up to 32%.

    Higher-order particle representation for a portable unstructured particle-in-cell application

    As the field of High Performance Computing (HPC) moves towards the era of Exascale computation, computer hardware is becoming increasingly parallel and continues to diversify. As a result, it is now crucial for scientific codes to be able to take advantage of a wide variety of hardware types. Additionally, the growth in compute performance has outpaced the improvement in memory latency and bandwidth; this issue now poses a significant obstacle to performance. This thesis examines these matters in the context of modern plasma physics simulations, specifically those that make use of the Particle-in-Cell (PIC) method on unstructured computational grids. We begin by documenting the implementation of the particle-based kernels of such a code using a performance portability library to enable the application to run on a variety of modern hardware, including both CPUs and GPUs. The use of hardware-specific tuning is also explored, culminating in a 3x speedup of a key component of the core PIC algorithm. We also show that portability is achievable both on single-node machines and on production supercomputers of multiple hardware types. This thesis also documents an algorithmic change to particle representation within the same code that improves solution accuracy and adds compute intensity, an important property where memory bandwidth is limited and the ratio of computation to memory accesses is otherwise low. We conclude the work by comparing the performance of the modified algorithm to the base implementation, where we find that shifting the simulation workload towards computation can improve parallel efficiency by up to 2.5x. While the performance improvements that were hoped for were not achieved, we end this thesis by postulating that the proposed methods will become more viable as compilers and hardware improve.
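
    The "compute intensity" referred to above is commonly quantified as arithmetic (or operational) intensity. As a standard definition (not specific to this thesis):

        % Arithmetic intensity and the roofline bound it implies.
        I = \frac{F}{Q}\ \left[\tfrac{\text{flops}}{\text{byte}}\right], \qquad
        P_{\text{attainable}} = \min\!\left(P_{\text{peak}},\ I \times B_{\text{mem}}\right)

    where F is the number of floating-point operations, Q the bytes moved to and from memory, P_peak the peak compute rate, and B_mem the memory bandwidth. Raising I shifts a memory-bound kernel towards the compute-bound regime.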

    Predictive analysis and optimisation of pipelined wavefront applications using reusable analytic models

    Pipelined wavefront computations are a ubiquitous class of high performance parallel algorithms used for the solution of many scientific and engineering applications. In order to aid the design and optimisation of these applications, and to ensure that during procurement the platforms chosen are best suited to these codes, there has been considerable research in analysing and evaluating their operational performance. Wavefront codes exhibit complex computation, communication, and synchronisation patterns, and as a result there exist a large variety of such codes and possible optimisations. The problem is compounded by each new generation of high performance computing system, which has often introduced a previously unexplored architectural trait, requiring previous performance models to be rewritten and re-evaluated. In this thesis, we address the performance modelling and optimisation of this class of application as a whole. This differs from previous studies in which bespoke models are applied to specific applications. The analytic performance models are generalised and reusable, and we demonstrate their application to the predictive analysis and optimisation of pipelined wavefront computations running on modern high performance computing systems. The performance model is based on the LogGP parameterisation, and uses a small number of input parameters to specify the particular behaviour of most wavefront codes. The new parameters and model equations capture the key structural and behavioural differences among different wavefront application codes, providing a succinct summary of the operations of each application and insights into alternative wavefront application designs. The models are applied to three industry-strength wavefront codes and are validated on several systems including a Cray XT3/XT4 and an InfiniBand commodity cluster. Model predictions show high quantitative accuracy (less than 20% error) for all high performance configurations and excellent qualitative accuracy. The thesis presents applications, projections and insights for optimisations using the model, which show the utility of reusable analytic models for performance engineering of high performance computing codes. In particular, we demonstrate the use of the model for: (1) evaluating application configurations and the resulting performance; (2) evaluating hardware platform issues, including platform sizing and configuration; (3) exploring hardware platform design alternatives and system procurement; and (4) considering possible code and algorithmic optimisations.
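
    As a simplified illustration of the kind of LogGP-based cost expression such models build on (not the thesis's full parameterisation), the runtime of a single sweep over a P_x by P_y processor array, with the local block decomposed into n_z tiles, is often approximated as

        % Simplified pipelined wavefront cost: pipeline fill plus steady state.
        T_{\text{sweep}} \approx (P_x + P_y - 2 + n_z)\,\left(W_{\text{tile}} + T_{\text{msg}}\right),
        \qquad T_{\text{msg}} \approx L + 2o + (m-1)G

    where W_tile is the per-tile computation time, m the boundary message size, and L, o and G are the LogGP latency, overhead and per-byte gap. The generalised model in the thesis introduces additional parameters to capture the structural and behavioural differences among wavefront codes.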

    Parallel fluid dynamics for the film and animation industries

    The creation of automated fluid effects for film and media using computer simulations is popular, as artist time is reduced and greater realism can be achieved through the use of numerical simulation of physical equations. The fluid effects in today's films and animations involve large scenes with high detail requirements, and with these requirements the time taken by such automated approaches is large. To address this, cluster environments making use of hundreds or more CPUs have been used. This overcomes the processing power and memory limitations of a single computer and allows very large scenes to be created. One of the newer methods for fluid simulation is the Lattice Boltzmann Method (LBM). This is a cellular-automaton type of algorithm, which parallelizes easily. An important part of the process of parallelization is load balancing: the distribution of computation amongst the available computing resources in the cluster. To date, parallelizations of the Lattice Boltzmann method have only made use of static load balancing. Instead, it is possible to make use of dynamic load balancing, which adjusts the computation distribution as the simulation progresses. Here, we investigate the use of the LBM in conjunction with a Volume of Fluid (VOF) surface representation in a parallel environment, with the aim of producing large-scale scenes for the film and animation industries. The VOF method tracks mass exchange between cells of the LBM. In particular, we implement a new dynamic load balancing algorithm to improve the efficiency of the fluid simulation using this method. Fluid scenes from films and animations have two important requirements: the amount of detail and the spatial resolution of the fluid. These aspects of the VOF LBM are explored by considering the time for scene creation using single- and multi-CPU implementations of the method. The scalability of the method is studied by plotting the run time, speedup and efficiency of scene creation against the number of CPUs. From such plots, an estimate is obtained of the feasibility of creating scenes of a given level of detail. Such estimates enable the recommendation of architectures for the creation of specific scenes. Using a parallel implementation of the VOF LBM we successfully create large scenes with great detail. In general, considering the significant amounts of communication required by the parallel method, it is shown to scale well, favouring scenes with greater detail. The scalability studies show that the new dynamic load balancing algorithm improves the efficiency of the parallel implementation, but only when using a lower number of CPUs. In fact, for larger numbers of CPUs, the dynamic algorithm reduces the efficiency. We hypothesise that the latter effect can be removed by making use of a centralized load-balancing decision instead of the current decentralized approach. The use of a cluster comprising 200 CPUs is recommended for the production of large scenes with a grid size of 600³ in a reasonable time frame.
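
    The speedup and efficiency quantities used in the scalability study follow their usual definitions:

        % Speedup and parallel efficiency for p CPUs.
        S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}

    where T(p) is the scene-creation time on p CPUs; an efficiency close to 1 indicates that the additional CPUs are being used effectively.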