11 research outputs found

    An investigation of the performance portability of OpenCL

    This paper reports on the development of an MPI/OpenCL implementation of LU, an application-level benchmark from the NAS Parallel Benchmark Suite. An account of the design decisions addressed during the development of this code is presented, demonstrating the importance of memory arrangement and work-item/work-group distribution strategies when applications are deployed on different device types. The resulting platform-agnostic, single source application is benchmarked on a number of different architectures, and is shown to be 1.3–1.5× slower than native FORTRAN 77 or CUDA implementations on a single node and 1.3–3.1× slower on multiple nodes. We also explore the potential performance gains of OpenCL’s device fissioning capability, demonstrating up to a 3× speed-up over our original OpenCL implementation

    On the acceleration of wavefront applications using distributed many-core architectures

    In this paper we investigate the use of distributed graphics processing unit (GPU)-based architectures to accelerate pipelined wavefront applications—a ubiquitous class of parallel algorithms used for the solution of a number of scientific and engineering applications. Specifically, we employ a recently developed port of the LU solver (from the NAS Parallel Benchmark suite) to investigate the performance of these algorithms on high-performance computing solutions from NVIDIA (Tesla C1060 and C2050) as well as on traditional clusters (AMD/InfiniBand and IBM BlueGene/P). Benchmark results are presented for problem classes A to C and a recently developed performance model is used to provide projections for problem classes D and E, the latter of which represents a billion-cell problem. Our results demonstrate that while the theoretical performance of GPU solutions will far exceed those of many traditional technologies, the sustained application performance is currently comparable for scientific wavefront applications. Finally, a breakdown of the GPU solution is conducted, exposing PCIe overheads and decomposition constraints. A new k-blocking strategy is proposed to improve the future performance of this class of algorithm on GPU-based architectures
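The dependency structure that defines a pipelined wavefront can be illustrated with a minimal sketch (plain Python, not the benchmark's FORTRAN/CUDA code): each cell depends on its north and west neighbours, so all cells on the same anti-diagonal are mutually independent and could be processed in parallel.

```python
def wavefront_sweep(n, update):
    """Sweep an n x n grid anti-diagonal by anti-diagonal.
    Cells with the same i + j have no mutual dependencies, so each
    diagonal is a parallelizable batch of work."""
    grid = [[0] * n for _ in range(n)]
    grid[0][0] = 1  # boundary/seed value
    for d in range(1, 2 * n - 1):
        for i in range(max(0, d - n + 1), min(n, d + 1)):
            j = d - i
            grid[i][j] = update(
                grid[i - 1][j] if i > 0 else 0,   # north neighbour
                grid[i][j - 1] if j > 0 else 0,   # west neighbour
            )
    return grid

# With a simple sum as the stand-in stencil, the sweep reproduces
# Pascal's triangle: grid[i][j] == C(i + j, i).
result = wavefront_sweep(4, lambda north, west: north + west)
```

The k-blocking strategy mentioned above amounts to batching several such diagonals per GPU kernel launch to amortize launch and PCIe overheads.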

    Evaluating the performance of legacy applications on emerging parallel architectures

    The gap between a supercomputer's theoretical maximum ("peak") floating-point performance and that actually achieved by applications has grown wider over time. Today, a typical scientific application achieves only 5–20% of any given machine's peak processing capability, and this gap leaves room for significant improvements in execution times. This problem is most pronounced for modern "accelerator" architectures: collections of hundreds of simple, low-clocked cores capable of executing the same instruction on dozens of pieces of data simultaneously. This is a significant change from the low number of high-clocked cores found in traditional CPUs, and effective utilisation of accelerators typically requires extensive code and algorithmic changes. In many cases, the best way in which to map a parallel workload to these new architectures is unclear. The principal focus of the work presented in this thesis is the evaluation of emerging parallel architectures (specifically, modern CPUs, GPUs and Intel MIC) for two benchmark codes, the LU benchmark from the NAS Parallel Benchmark Suite and Sandia's miniMD benchmark, which exhibit complex parallel behaviours that are representative of many scientific applications. Using combinations of low-level intrinsic functions, OpenMP, CUDA and MPI, we demonstrate performance improvements of up to 7× for these workloads. We also detail a code development methodology that permits application developers to target multiple architecture types without maintaining completely separate implementations for each platform. Using OpenCL, we develop performance portable implementations of the LU and miniMD benchmarks that are faster than the original codes, and at most 2× slower than versions highly tuned for particular hardware.
Finally, we demonstrate the importance of evaluating architectures at scale (as opposed to on single nodes) through performance modelling techniques, highlighting the problems associated with strong-scaling on emerging accelerator architectures.

    Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots
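The voxel-based flag map used above to remove redundant points can be sketched as follows (a hypothetical Python illustration, not the authors' implementation): quantizing each point to an integer voxel index turns the redundancy check into a constant-time set lookup, so only the first point to land in each voxel is kept.

```python
def voxel_filter(points, voxel_size):
    """Keep one representative point per voxel of side `voxel_size`.
    The `seen` set plays the role of the flag map: it records which
    voxels are already occupied."""
    seen = set()
    kept = []
    for x, y, z in points:
        key = (int(x // voxel_size),
               int(y // voxel_size),
               int(z // voxel_size))
        if key not in seen:      # voxel not yet flagged as occupied
            seen.add(key)
            kept.append((x, y, z))
    return kept

pts = [(0.01, 0.02, 0.0), (0.02, 0.01, 0.0),  # same 0.1 m voxel
       (0.15, 0.00, 0.0)]                      # neighbouring voxel
print(len(voxel_filter(pts, 0.1)))  # 2
```

This makes registration incremental: new scans are merged against the flag map in one pass, which is what allows the terrain storage to grow in real time.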

    Interactive Rendering of Scattering and Refraction Effects in Heterogeneous Media

    In this dissertation we investigate the problem of interactive and real-time visualization of single scattering, multiple scattering and refraction effects in heterogeneous volumes. Our proposed solutions span a variety of use scenarios: from a very fast yet physically-based approximation to a physically accurate simulation of microscopic light transmission. We add to the state of the art by introducing a novel precomputation and sampling strategy, a system for efficiently parallelizing the computation of different volumetric effects, and a new and fast version of the Discrete Ordinates Method. Finally, we also present complementary work on real-time 3D acquisition devices.

    Autotuning wavefront patterns for heterogeneous architectures

    Manual tuning of applications for heterogeneous parallel systems is tedious and complex. Optimizations are often not portable, and the whole process must be repeated when moving to a new system, or sometimes even to a different problem size. Pattern based parallel programming models were originally designed to provide programmers with an abstract layer, hiding tedious parallel boilerplate code, and allowing a focus on only application specific issues. However, the constrained algorithmic model associated with each pattern also enables the creation of pattern-specific optimization strategies. These can capture more complex variations than would be accessible by analysis of equivalent unstructured source code. These variations create complex optimization spaces. Machine learning offers well established techniques for exploring such spaces. In this thesis we use machine learning to create autotuning strategies for heterogeneous parallel implementations of applications which follow the wavefront pattern. In a wavefront, computation starts from one corner of the problem grid and proceeds diagonally like a wave to the opposite corner in either two or three dimensions. Our framework partitions and optimizes the work created by these applications across systems comprising multicore CPUs and multiple GPU accelerators. The tuning opportunities for a wavefront include controlling the amount of computation to be offloaded onto GPU accelerators, choosing the number of CPU and GPU threads to process tasks, tiling for both CPU and GPU memory structures, and trading redundant halo computation against communication for multiple GPUs. Our exhaustive search of the problem space shows that these parameters are very sensitive to the combination of architecture, wavefront instance and problem size. We design and investigate a family of autotuning strategies, targeting single and multiple CPU + GPU systems, and both two and three dimensional wavefront instances. 
These yield an average of 87% of the performance found by offline exhaustive search, with up to 99% in some cases
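The exhaustive-search baseline against which the learned strategies are measured can be sketched in a few lines (hypothetical Python; the parameter names mirror those listed in the abstract but are illustrative only): enumerate the cross-product of the tuning space and keep the fastest configuration.

```python
from itertools import product

def exhaustive_autotune(benchmark, space):
    """Evaluate every configuration in the cross-product of `space`
    and return (best configuration, best time)."""
    best_cfg, best_time = None, float("inf")
    for values in product(*space.values()):
        cfg = dict(zip(space.keys(), values))
        t = benchmark(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

# Illustrative tuning space: GPU offload fraction and tile size.
space = {"gpu_fraction": [0.0, 0.5, 1.0],
         "tile": [16, 32, 64]}

# Stand-in cost model; a real tuner would time actual runs of the
# wavefront instance on the target machine.
model = lambda c: abs(c["gpu_fraction"] - 0.5) + c["tile"] / 64
cfg, t = exhaustive_autotune(model, space)
print(cfg)  # {'gpu_fraction': 0.5, 'tile': 16}
```

The cost of such exhaustive sweeps on every new machine and problem size is exactly what motivates replacing them with the machine-learned strategies described above.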

    Physically Based Preconditioning Techniques Applied to the First Order Particle Transport and to Fluid Transport in Porous Media

    Physically based preconditioning is applied to linear systems resulting from solving the first order formulation of the particle transport equation and from solving the homogenized form of the simple flow equation for porous media flows. The first order formulation of the particle transport equation is solved two ways. The first uses a least squares finite element method resulting in a symmetric positive definite linear system which is solved by a preconditioned conjugate gradient method. The second uses a discontinuous finite element method resulting in a non-symmetric linear system which is solved by a preconditioned biconjugate gradient stabilized method. The flow equation is solved using a mixed finite element method. Specifically four levels of improvement are applied: homogenization of the porous media domain, a projection method for the mixed finite element method which simplifies the linear system, physically based preconditioning, and implementation of the linear solver in parallel on graphic processing units. The conjugate gradient linear solver for the least squares finite element method is also applied in parallel on graphics processing units. The physically based preconditioner is shown to perform well in each case, in relation to speed-ups gained and as compared with several algebraic preconditioners
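The preconditioned conjugate gradient method used for the symmetric positive definite systems above can be sketched in pure Python. This is a generic PCG with a Jacobi (diagonal) preconditioner standing in for the physically based one, which the sources do not specify in detail; `A` is a small dense SPD matrix for illustration.

```python
def pcg(A, b, precond, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradients for SPD A (dense lists).
    `precond(r)` applies the inverse preconditioner M^-1 to r."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n))
                           for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                      # r = b - A x0 with x0 = 0
    z = precond(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]  # M = diag(A)
x = pcg(A, b, jacobi)
```

A physically based preconditioner replaces the generic `precond` with an operator derived from a simplified form of the underlying transport or flow physics; the matrix-vector products and vector updates are also the operations that parallelize naturally on GPUs.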

    New contributions for modeling and simulating high performance computing applications on parallel and distributed architectures

    In this thesis we propose a new simulation platform specifically designed for modeling parallel and distributed architectures, which consists of integrating models of the four basic systems into a single simulation platform: the storage system, the memory system, the processing system and the network system. The main characteristics of this platform are flexibility, to embrace the widest possible range of designs; scalability, to probe the limits of scaling up architecture designs; and the necessary trade-off between execution time and the accuracy obtained. The platform is aimed at modeling both existing and new designs of HPC architectures and applications. Depending on the user's requirements, the model can focus on a subset of the basic systems or, conversely, on the complete system. A complete distributed system can therefore be modeled by integrating these basic systems into one model, each with the corresponding level of detail, which provides a high degree of flexibility. Moreover, the platform offers a good compromise between accuracy and performance, together with the flexibility to build a wide range of architectures with different configurations. The proposed simulation platform has been validated by comparing results obtained on real architectures with those obtained in the analogous simulated environments. Furthermore, in order to evaluate and analyze how scalability and bottlenecks evolve on a typical multi-core HPC architecture under different configurations, a set of experiments has been conducted; these consist of executing two application models (HPC and checkpointing applications) on several HPC architectures. Finally, performance results for the simulation itself when executing these experiments have been obtained.
The main purpose of this last step is to determine the amount of time and memory needed to execute a specific simulation, depending on the size of the environment to be modeled and the hardware resources available for running each simulation.