
    The readying of applications for heterogeneous computing

    High performance computing is approaching a potentially significant change in architectural design. Under pressure on both cost and power consumption, additional architectural features are emerging which require a rethink of the programming models deployed over the last two decades. Today's emerging high performance computing (HPC) systems maximise performance per unit of power consumed, with the result that the constituent parts of a system are made up of a range of different specialised building blocks, each with its own purpose. This heterogeneity is not limited to the hardware components but extends to the mechanisms that exploit them. These multiple levels of parallelism, instruction sets and memory hierarchies result in truly heterogeneous computing in all aspects of the global system. These emerging architectural solutions will require software to exploit tremendous amounts of on-node parallelism, and programming models to address this are indeed emerging. In theory, the application developer can design new software using these models to exploit emerging low-power architectures. In practice, however, real industrial-scale applications last the lifetimes of many architectural generations and therefore require a migration path to these next-generation supercomputing platforms. Identifying that migration path is non-trivial: with applications spanning many decades, consisting of many millions of lines of code and multiple scientific algorithms, any change to the programming model will be extensive and invasive, and may turn out to be the incorrect model for the application in question. This makes exploration of these emerging architectures and programming models using the applications themselves problematic. Additionally, the source code of many industrial applications is not available, due to commercial or security sensitivity constraints. This thesis highlights this problem by assessing current and emerging hardware with an industrial-strength code, demonstrating the issues described. It then examines the methodology of using proxy applications in place of real industry applications to assess their suitability on the next generation of low-power HPC offerings. It shows there are significant benefits to be realised in using proxy applications, in that fundamental issues inhibiting exploration of a particular architecture are easier to identify and hence address. The maturity and performance portability of a number of alternative programming methodologies are evaluated on a number of architectures, and the broader adoption of these proxy applications is highlighted, both within the author's own organisation and across the industry as a whole.

    Parallel fluid dynamics for the film and animation industries

    The creation of automated fluid effects for film and media using computer simulations is popular, as artist time is reduced and greater realism can be achieved through the numerical simulation of physical equations. The fluid effects in today's films and animations involve large scenes with high detail requirements, so the time taken by such automated approaches is large. To address this, cluster environments making use of hundreds or more CPUs have been used; this overcomes the processing power and memory limitations of a single computer and allows very large scenes to be created. One of the newer methods for fluid simulation is the Lattice Boltzmann Method (LBM), a cellular-automaton type of algorithm which parallelizes easily. An important part of parallelization is load balancing: the distribution of computation amongst the available computing resources in the cluster. To date, parallelizations of the Lattice Boltzmann Method have only made use of static load balancing. Instead, it is possible to use dynamic load balancing, which adjusts the distribution of computation as the simulation progresses. Here, we investigate the use of the LBM in conjunction with a Volume of Fluid (VOF) surface representation in a parallel environment, with the aim of producing large-scale scenes for the film and animation industries. The VOF method tracks mass exchange between cells of the LBM. In particular, we implement a new dynamic load-balancing algorithm to improve the efficiency of the fluid simulation using this method. Fluid scenes from films and animations have two important requirements: the amount of detail and the spatial resolution of the fluid. These aspects of the VOF LBM are explored by considering the time for scene creation using single- and multi-CPU implementations of the method. The scalability of the method is studied by plotting the run time, speedup and efficiency of scene creation against the number of CPUs. From such plots, an estimate is obtained of the feasibility of creating scenes of a given level of detail; such estimates enable the recommendation of architectures for the creation of specific scenes. Using a parallel implementation of the VOF LBM we successfully create large scenes with great detail. Considering the significant amount of communication required, the parallel method is shown to scale well, favouring scenes with greater detail. The scalability studies show that the new dynamic load-balancing algorithm improves the efficiency of the parallel implementation, but only for lower numbers of CPUs; for larger numbers of CPUs, the dynamic algorithm reduces the efficiency. We hypothesise that the latter effect can be removed by making use of centralised load-balancing decisions instead of the current decentralised approach. A cluster comprising 200 CPUs is recommended for the production of large scenes of grid size 600³ in a reasonable time frame.
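    The dynamic load-balancing idea at the centre of this work — adjusting the distribution of lattice cells across processors as the simulation progresses — can be made concrete with a small sketch. The following Python snippet is a hedged illustration only, not the thesis's algorithm: it greedily repartitions contiguous slabs of lattice rows among workers using per-slab costs measured during the previous time step (the names and the cost model are hypothetical).

```python
# Minimal sketch of dynamic load balancing for a slab-decomposed lattice.
# Each slab has a measured cost (e.g. the number of active fluid/interface
# cells in the previous LBM time step); we recompute contiguous partitions
# so every worker receives roughly equal total cost. Simple greedy heuristic.

def rebalance(slab_costs, n_workers):
    """Split slabs (kept contiguous) into n_workers partitions of similar cost."""
    total = sum(slab_costs)
    partitions, current, acc, done = [], [], 0.0, 0.0
    for i, cost in enumerate(slab_costs):
        current.append(i)
        acc += cost
        remaining_workers = n_workers - len(partitions) - 1
        remaining_slabs = len(slab_costs) - i - 1
        # Ideal cumulative cost for the partition being built:
        target = (total - done) / (remaining_workers + 1)
        # Forced close: exactly one slab left per remaining worker.
        must_close = remaining_slabs == remaining_workers
        if remaining_workers > 0 and (must_close or acc >= target):
            partitions.append(current)
            done += acc
            current, acc = [], 0.0
    partitions.append(current)
    return partitions

# Example: costs skewed towards one region of the domain, as happens when
# the fluid (and hence the work) concentrates in part of the scene.
costs = [10, 12, 11, 40, 55, 60, 42, 9, 8, 7]
for rank, slabs in enumerate(rebalance(costs, 4)):
    print(f"worker {rank}: slabs {slabs}, cost {sum(costs[s] for s in slabs)}")
```

    In a real VOF LBM run the slab costs would come from counting active fluid and interface cells, and repartitioning would be triggered only when the measured imbalance exceeds some threshold, since migrating slabs itself costs communication.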

    Massively parallel declarative computational models

    Current computer architectures are parallel, with an increasing number of processors. Parallel programming is an error-prone task, and declarative models such as those based on constraints relieve the programmer of some of its difficult aspects because they abstract control away. In this work we study and develop techniques for declarative computational models based on constraints using GPI, a recent programming model and tool, aiming at large-scale parallel execution. The main contributions of this work are: a GPI implementation of a scalable dynamic load-balancing scheme based on work stealing, suitable for tree-shaped computations and effective on systems with thousands of threads; a parallel constraint solver, MaCS, implemented to take advantage of the GPI programming model, whose experimental evaluation shows very good scalability on systems with hundreds of cores; and a GPI parallel version of the Adaptive Search algorithm, including different variants, whose study on different problems advances the understanding of the scalability issues known to exist with large numbers of cores.
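    To make the work-stealing scheme concrete, below is a minimal Python sketch — purely illustrative and single-node, not the GPI implementation contributed by this work: each worker owns a double-ended queue of tasks, works on its own newest task, and steals the oldest task of a random victim when its own queue runs dry. The toy task tree and all names are hypothetical.

```python
# Minimal illustrative sketch of dynamic load balancing via work stealing
# for a tree-shaped computation (NOT the GPI-based implementation; the toy
# task and all names are hypothetical).
import random
import threading
from collections import deque

N_WORKERS = 4
deques = [deque() for _ in range(N_WORKERS)]   # one task deque per worker
pending = 0                                     # tasks created but not yet finished
pending_lock = threading.Lock()
nodes_processed = [0] * N_WORKERS

def add_task(worker_id, depth):
    global pending
    with pending_lock:
        pending += 1
    deques[worker_id].append(depth)

def worker(worker_id):
    global pending
    while True:
        try:
            depth = deques[worker_id].pop()      # take own newest task (locality)
        except IndexError:
            victim = random.randrange(N_WORKERS)
            try:
                depth = deques[victim].popleft() # steal the victim's oldest task
            except IndexError:
                with pending_lock:
                    if pending == 0:             # no work left anywhere: done
                        return
                continue
        nodes_processed[worker_id] += 1
        if depth > 0:                            # interior node: spawn two children
            add_task(worker_id, depth - 1)
            add_task(worker_id, depth - 1)
        with pending_lock:
            pending -= 1                         # this task is now finished

add_task(0, 12)                                  # root of the task tree
threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
# A full binary task tree of depth 12 has 2**13 - 1 = 8191 nodes.
print("nodes per worker:", nodes_processed, "total:", sum(nodes_processed))
```

    Taking one's own tasks from one end and stealing from the other preserves locality for the owner while handing thieves the largest remaining subtrees, which is why the scheme suits tree-shaped computations.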

    Profiling and debugging of hybrid multi-threaded HPC applications through efficient trace collection

    The evolution of supercomputers is the source of both hardware and software challenges. In the quest for the highest computing power, the interdependence between simulation components is becoming more and more impactful, requiring new approaches. This thesis focuses on the software-development aspect, and particularly on the observation of parallel software running on several thousand cores. This observation aims to provide developers with the necessary feedback when running a program on an execution substrate which has not yet been modelled because of its complexity. To this end, we first introduce the development process from a global point of view, before describing developer tools and related work. We then present our contribution, which consists of a trace-based profiling and debugging tool and its evolution towards an online coupling method which, as we show, is more scalable because it overcomes I/O limitations. Our contribution also covers a time-stamp synchronisation algorithm for tracing purposes, which relies on a probabilistic approach with quantified error. We also present a tool for characterising a machine from the MPI perspective, and demonstrate the presence of machine noise for both point-to-point and collective communications, justifying the use of an empirical approach. In summary, this work proposes and motivates an alternative approach to trace-based event collection that preserves event granularity with reduced overhead, in both CPU usage and I/O.
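    The probabilistic time-stamp synchronisation can be illustrated with the classic ping-pong offset estimation used by many tracing tools; the sketch below is a simplified stand-in, not the thesis's algorithm. Among many probe round trips, the exchange with the smallest round-trip time yields the offset estimate with the tightest bound, since the residual error cannot exceed half of that round trip.

```python
# Simplified sketch of probabilistic clock-offset estimation for trace
# time-stamp synchronisation (illustrative, not the thesis's algorithm).
# The exchange with the smallest round-trip time (RTT) gives the offset
# estimate with the tightest error bound: |error| <= RTT_min / 2.
import random

TRUE_OFFSET = 0.004217      # remote clock ahead by ~4.2 ms (unknown in practice)

def ping_pong():
    """Simulate one round trip; returns (t_send, t_remote, t_recv) stamps."""
    t_send = random.uniform(0.0, 1.0)            # local clock at send
    d_out = random.uniform(50e-6, 500e-6)        # one-way network delays
    d_back = random.uniform(50e-6, 500e-6)
    t_remote = t_send + d_out + TRUE_OFFSET      # remote clock at reception
    t_recv = t_send + d_out + d_back             # local clock at reply arrival
    return t_send, t_remote, t_recv

# Keep the exchange with minimal RTT out of many probes.
best = min((ping_pong() for _ in range(1000)), key=lambda e: e[2] - e[0])
t_send, t_remote, t_recv = best
rtt = t_recv - t_send
offset_est = t_remote - (t_send + t_recv) / 2    # midpoint assumption
print(f"estimated offset: {offset_est * 1e6:9.1f} us")
print(f"error bound     : +/- {rtt / 2 * 1e6:7.1f} us")
print(f"actual error    : {(offset_est - TRUE_OFFSET) * 1e6:9.1f} us")
```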

    HI Lightcones for LADUMA using Gadget-3: performance profiling and application of an HPC code

    This project concerns the investigation, performance profiling and optimisation of the high-performance cosmological code GADGET-3. This code was used to develop a synthetic field of view, or lightcone, for the MeerKAT telescope, replicating what it will observe when it conducts the LADUMA ultra-deep HI survey. This lightcone will assist in the planning process of the survey. The deliverables for this project are summarised as follows:
    * Provide an up-to-date performance evaluation and optimisation report for the cosmological simulation code GADGET-3.
    * Use GADGET-3 to produce a sufficiently high-resolution simulation of a region of the Universe.
    * Develop a Python code to produce a lightcone which represents the MeerKAT telescope's field of view by post-processing simulation output snapshots (a sketch of this step follows below).
    * Extract relevant metadata from the simulation snapshots to provide additional insight into the simulated observation.
    * Produce an efficiently written and well-documented software package to enable other researchers to produce synthetic lightcones.
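    Following up on the lightcone deliverable above, here is a hedged sketch of the post-processing step: each snapshot contributes the particles lying in the shell of comoving distances over which it best represents the Universe, restricted to the telescope's field of view. Everything here (the snapshot access, shell boundaries and opening angle) is hypothetical mock data, not the actual GADGET-3/LADUMA pipeline.

```python
# Minimal sketch of lightcone construction by post-processing snapshots
# (illustrative only; file layout, names and numbers are hypothetical).
import numpy as np

rng = np.random.default_rng(42)
BOX = 100.0                      # box side, comoving Mpc/h (mock value)
FOV = np.deg2rad(1.0)            # half-opening angle of the mock field of view

def load_snapshot(index):
    """Stand-in for reading particle positions from a simulation snapshot."""
    return rng.uniform(0.0, BOX, size=(100_000, 3))

# Each snapshot is taken to be valid over a shell [d_min, d_max) of comoving
# distance from the observer; consecutive shells tile the cone.
shells = [(0.0, 40.0), (40.0, 75.0), (75.0, 100.0)]

observer = np.zeros(3)
lightcone = []
for snap_index, (d_min, d_max) in enumerate(shells):
    pos = load_snapshot(snap_index) - observer
    dist = np.linalg.norm(pos, axis=1)
    # Angle from the line of sight (taken as the z-axis here).
    with np.errstate(invalid="ignore", divide="ignore"):
        angle = np.arccos(pos[:, 2] / dist)
    keep = (dist >= d_min) & (dist < d_max) & (angle <= FOV)
    lightcone.append(pos[keep])
    print(f"snapshot {snap_index}: kept {keep.sum()} of {len(pos)} particles")

cone = np.concatenate(lightcone)
print("lightcone particles:", len(cone))
```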

    Evaluating technologies and techniques for transitioning hydrodynamics applications to future generations of supercomputers

    Current supercomputer development trends present severe challenges for scientific codebases. Moore's law continues to hold; however, power constraints have brought an end to Dennard scaling, forcing significant increases in overall concurrency. The performance imbalance between the processor and memory sub-systems is also increasing, and architectures are becoming significantly more complex. Scientific computing centres need to harness more computational resources in order to facilitate new scientific insights, and maintaining their codebases requires significant investment. Centres therefore have to decide how best to develop their applications to take advantage of future architectures. To prevent vendor "lock-in" and maximise investments, achieving portable performance across multiple architectures is also a significant concern. Efficiently scaling applications will be essential for achieving improvements in science, and the MPI-only (Message Passing Interface) model is reaching its scalability limits. Hybrid approaches which utilise shared-memory programming models are a promising route to improving scalability. Additionally, PGAS (Partitioned Global Address Space) models have the potential to address productivity and scalability concerns. Furthermore, OpenCL has been developed with the aim of enabling applications to achieve portable performance across a range of heterogeneous architectures. This research examines approaches for achieving greater levels of performance for hydrodynamics applications on future supercomputer architectures. The development of a Lagrangian-Eulerian hydrodynamics application is presented, together with its utility for conducting such research. Strategies for improving application performance, including PGAS- and hybrid-based approaches, are evaluated at large node counts on several state-of-the-art architectures. Techniques to maximise the performance and scalability of OpenMP-based hybrid implementations are presented, together with an assessment of how these constructs should be combined with existing approaches. OpenCL is evaluated as an additional technology for implementing a hybrid programming model and improving performance portability. To enhance productivity, several tools for automatically hybridising applications and improving process-to-topology mappings are evaluated. Power constraints are starting to limit supercomputer deployments, potentially necessitating the use of more energy-efficient technologies. Advanced processor architectures are therefore evaluated as future candidate technologies, together with several application optimisations which will likely be necessary. An FPGA-based solution is examined, including an analysis of how effectively it can be utilised via a high-level programming model, as an alternative to the specialist approaches which currently limit the applicability of this technology.
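    As a hedged illustration of the process-to-topology mapping problem mentioned above (not the thesis's tooling), the sketch below picks a 2D process grid for a structured mesh by minimising the total halo surface exchanged between ranks — the same trade-off that MPI's MPI_Dims_create and mapping tools navigate.

```python
# Illustrative sketch of a process-to-topology mapping decision: factor the
# rank count into a 2D process grid that minimises halo-exchange surface
# for a structured domain decomposition.
import math

def choose_grid(n_ranks, nx, ny):
    """Pick (px, py) with px * py == n_ranks minimising total halo perimeter."""
    best, best_cost = None, math.inf
    for px in range(1, n_ranks + 1):
        if n_ranks % px:
            continue
        py = n_ranks // px
        # Each rank owns an (nx/px) x (ny/py) subdomain; total interface
        # length between ranks is proportional to:
        cost = px * ny + py * nx
        if cost < best_cost:
            best, best_cost = (px, py), cost
    return best

# A 4096 x 1024 mesh: the best grid follows the 4:1 aspect ratio of the
# domain (square subdomains) rather than being a square grid of ranks.
for n in (16, 64, 256):
    px, py = choose_grid(n, nx=4096, ny=1024)
    print(f"{n:4d} ranks -> {px} x {py} grid, "
          f"subdomain {4096 // px} x {1024 // py}")
```

    For a non-square mesh the best process grid matches the domain's aspect ratio rather than being square, which is exactly the kind of decision worth automating.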

    Communication-Efficient Probabilistic Algorithms: Selection, Sampling, and Checking

    This dissertation addresses three fundamental classes of problems in big-data systems, for which we develop communication-efficient probabilistic algorithms. In the first part we consider various selection problems, in the second part weighted sampling, and in the third part the probabilistic checking of basic operations in big-data frameworks. This work is motivated by a growing need for communication efficiency, driven by the fact that the share of supercomputers' acquisition cost and energy consumption attributable to the network and its use, as well as its share of the running time of distributed applications, keeps growing. Surprisingly few communication-efficient algorithms are known for fundamental big-data problems; in this work we close some of these gaps.

    First, we consider various selection problems, beginning with the distributed version of the classical selection problem, i.e., finding the element of rank k in a large distributed input. We show how this problem can be solved communication-efficiently without assuming that the input elements are randomly distributed: we replace the pivot-selection method in a long-known algorithm and show that this suffices. We then show that selection from locally sorted sequences (multisequence selection) can be solved considerably faster when the exact rank of the output element may vary within a certain range. We use this to construct a distributed priority queue with bulk operations, which we later employ to draw weighted samples from data streams (reservoir sampling). Finally, we consider the problem of identifying the globally most frequent objects, as well as those whose associated values sum to the largest totals, using a sampling-based approach.

    The chapter on weighted sampling first presents new construction algorithms for a classical data structure for this problem, so-called alias tables. We begin with the first linear-time construction algorithm for this data structure that requires only constant auxiliary memory. We then parallelise this algorithm for shared memory, obtaining the first parallel construction algorithm for alias tables, and show how the problem can be tackled on distributed systems with a two-stage algorithm. Next, we present an output-sensitive algorithm for weighted sampling with replacement: output-sensitive means that the algorithm's running time depends on the number of unique elements in the output rather than on the size of the sample. This algorithm can be used sequentially, on shared-memory machines, and on distributed systems, and is the first such algorithm in all three categories. We then adapt it to weighted sampling without replacement by combining it with an estimator for the number of unique elements in a sample drawn with replacement. Poisson sampling, a generalisation of Bernoulli sampling to weighted elements, can be reduced to integer sorting, and we show how an existing approach can be parallelised. For sampling from data streams, we adapt a sequential algorithm and show how it can be parallelised in a mini-batch model using the bulk priority queue introduced in the selection chapter. The chapter concludes with an extensive evaluation of our alias-table construction algorithms, our output-sensitive algorithm for weighted sampling with replacement, and our algorithm for weighted reservoir sampling.

    To probabilistically verify the correctness of distributed algorithms, we propose checkers for basic operations of big-data frameworks. We show that checking numerous operations can be reduced to two "core" checkers, namely checking aggregations and checking whether one sequence is a permutation of another. While several approaches to the latter problem have been known for some time and are easy to parallelise, our sum-aggregation checker is a novel application of the same data structure that underlies counting Bloom filters and the count-min sketch. We implemented both checkers in Thrill, a big-data framework. Experiments with deliberately introduced errors confirm the detection accuracy predicted by our theoretical analysis, even when we use commonly deployed fast hash functions with theoretically suboptimal properties. Scaling experiments on a supercomputer show that our checkers incur very low running-time overhead, in the range of 2%, while the correctness of the result is almost guaranteed.
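    The alias tables central to the weighted-sampling chapter admit a compact linear-time sequential construction (the classic Walker/Vose scheme), sketched below for illustration; the thesis's contributions — a constant-auxiliary-memory construction and its shared-memory and distributed parallelisations — go beyond this simple version. Each table entry stores a threshold and an alias, so drawing a weighted sample costs one uniform index plus one comparison.

```python
# Sequential alias-table construction (Walker/Vose), shown for illustration;
# the thesis contributes constant-auxiliary-memory and parallel variants,
# which this sketch does not attempt to reproduce.
import random

def build_alias_table(weights):
    """O(n) preprocessing: returns (prob, alias) tables for O(1) sampling."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]      # mean is now exactly 1
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [1.0] * n, list(range(n))
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]                        # keep s with prob scaled[s]...
        alias[s] = l                               # ...otherwise redirect to l
        scaled[l] -= 1.0 - scaled[s]               # l donated the leftover mass
        (small if scaled[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias):
    """Draw one index according to the original weights in O(1)."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

weights = [10, 1, 5, 4]
prob, alias = build_alias_table(weights)
counts = [0] * len(weights)
for _ in range(200_000):
    counts[sample(prob, alias)] += 1
print("empirical:", [round(c / 200_000, 3) for c in counts])
print("expected :", [round(w / sum(weights), 3) for w in weights])
```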

    Performance, memory efficiency and programmability: the ambitious triptych of combining vertex-centricity with HPC

    The field of graph processing has grown significantly due to the flexibility and wide applicability of the graph data structure; in the meantime, so has the community's interest in developing new approaches to graph processing applications. In 2010, Google introduced the vertex-centric programming model through their framework Pregel. It consists of expressing computation from the perspective of a vertex, while inter-vertex communication is achieved via data exchanges along incoming and outgoing edges, using the message-passing abstraction provided. Pregel's high-level programming interface, designed around a set of simple functions, provides ease of programmability to the user. The aim is to enable the development of graph processing applications without requiring expertise in optimisation or parallel programming; such challenges are instead abstracted from the user and offloaded to the underlying framework. However, fine-grained synchronisation, unpredictable memory access patterns and multiple sources of load imbalance make it difficult to implement the vertex-centric model efficiently on high performance computing platforms without sacrificing programmability. This research focuses on combining vertex-centricity and High-Performance Computing (HPC), resulting in the development of a shared-memory framework, iPregel, which demonstrates that performance and memory efficiency similar to those of non-vertex-centric approaches can be achieved while preserving the programmability benefits of the vertex-centric model. Non-volatile memory is then explored to extend single-node capabilities, and multiple versions of iPregel are implemented to experiment with various data movement strategies. Distributed-memory parallelism is then investigated to overcome the resource limitations of single-node processing; a second framework, DiP, ports iPregel's applicable optimisations to distributed memory and prioritises performance at scale. This research has resulted in a set of techniques and optimisations illustrated through the shared-memory framework iPregel and the distributed-memory framework DiP. The former closes a gap of several orders of magnitude in both performance and memory efficiency, and is even able to process a graph of 750 billion edges using non-volatile memory. The latter proves that this competitiveness can be scaled beyond a single node, enabling the processing of the largest graph generated in this research, comprising 1.6 trillion edges. Most importantly, both frameworks achieve these performance and capability gains whilst preserving programmability, which is the cornerstone of the vertex-centric programming model. This research therefore demonstrates that, by combining vertex-centricity and High-Performance Computing (HPC), it is possible to maintain performance, memory efficiency and programmability.
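    To ground the programming model this research builds on, the sketch below expresses PageRank in the vertex-centric style: a user-written compute kernel sees only one vertex, its current value, its incoming messages and its outgoing edges, while a framework loop drives synchronous supersteps and routes messages. This is a minimal single-threaded illustration of the model itself (dangling-vertex mass is ignored for brevity), not iPregel or DiP.

```python
# Minimal single-threaded illustration of the vertex-centric (Pregel-style)
# programming model: computation is expressed per vertex and vertices
# communicate only by messages along edges, in synchronous supersteps.

DAMPING, SUPERSTEPS = 0.85, 20

def pagerank_compute(vertex_id, value, messages, out_edges, superstep, n_vertices):
    """The user-supplied kernel: new value and outgoing messages for one vertex."""
    if superstep > 0:
        value = (1.0 - DAMPING) / n_vertices + DAMPING * sum(messages)
    share = value / len(out_edges) if out_edges else 0.0
    return value, [(dst, share) for dst in out_edges]

def run(graph):
    """The 'framework': drives supersteps and routes messages between vertices."""
    n = len(graph)
    values = {v: 1.0 / n for v in graph}
    inbox = {v: [] for v in graph}
    for superstep in range(SUPERSTEPS):
        outbox = {v: [] for v in graph}
        for v in graph:
            values[v], out_msgs = pagerank_compute(
                v, values[v], inbox[v], graph[v], superstep, n)
            for dst, msg in out_msgs:
                outbox[dst].append(msg)
        inbox = outbox            # barrier: messages become visible next superstep
    return values

# Tiny example graph given as adjacency lists.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for v, rank in sorted(run(graph).items()):
    print(f"{v}: {rank:.4f}")
```

    Everything in run() is what frameworks in the Pregel family replace with optimised parallel machinery; the user only ever writes the compute kernel, which is the programmability benefit this research seeks to preserve.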