    Efficient Parallel Particle Advection via Targeting Devices

    Particle advection is a fundamental operation for a wide range of flow visualization algorithms. Particle advection execution times can vary based on many factors, including the number of particles, the duration of advection, and the underlying architecture. In this study, we introduce a new algorithm for parallel particle advection that improves execution time by targeting devices, i.e., adapting to use the CPU or GPU based on the current work. This algorithm is motivated by the observation that CPUs can sometimes perform part of the overall computation faster, since a CPU operates at a higher per-thread rate when the threads of a GPU cannot be fully utilized. To evaluate our algorithm, we ran 162 experiments and compared our algorithm against traditional GPU-only and CPU-only approaches. Our results show that our algorithm adapts to match the performance of the faster of the CPU-only and GPU-only approaches.
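
    To make the targeting idea concrete, a minimal sketch is given below (the threshold value and all function names are illustrative assumptions, not the paper's implementation): each advection round is sent to the CPU when the number of still-active particles is too small to keep the GPU's threads occupied, and to the GPU otherwise.

```python
# Minimal sketch of device targeting for particle advection.
# GPU_OCCUPANCY_THRESHOLD, advance(), and advance_batch_on_gpu() are
# hypothetical names used only to illustrate the decision logic.

GPU_OCCUPANCY_THRESHOLD = 50_000  # assumed tuning knob

def advect_step_cpu(particles, field, dt):
    """Advance every particle one step on the CPU (e.g. via threads/OpenMP)."""
    return [field.advance(p, dt) for p in particles]

def advect_step_gpu(particles, field, dt):
    """Advance every particle one step on the GPU (e.g. a CUDA/Kokkos kernel)."""
    return field.advance_batch_on_gpu(particles, dt)

def advect_step_targeted(particles, field, dt):
    """Pick the device per round: few active particles under-occupy the GPU,
    so the CPU's higher per-thread rate wins; otherwise use the GPU."""
    active = [p for p in particles if not p.terminated]
    if len(active) < GPU_OCCUPANCY_THRESHOLD:
        return advect_step_cpu(active, field, dt)
    return advect_step_gpu(active, field, dt)
```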

    General Purpose Flow Visualization at the Exascale

    Exascale computing, i.e., supercomputers that can perform 10^18 math operations per second, provides a significant opportunity for improving the computational sciences. That said, these machines can be difficult to use efficiently, due to their massive parallelism, their use of accelerators, and the diversity of accelerators used. All areas of the computational science stack need to be reconsidered to address these problems. With this dissertation, we consider flow visualization, which is critical for analyzing vector field data from simulations. We specifically consider flow visualization techniques that use particle advection, i.e., tracing particle trajectories, which presents performance and implementation challenges. The dissertation makes four primary contributions. First, it synthesizes previous work on particle advection performance and introduces a high-level analytical cost model. Second, it proposes an approach for performance portability across accelerators. Third, it studies expected speedups from using accelerators, including the importance of factors such as duration, particle count, data set, and others. Finally, it proposes an exascale-capable particle advection system that addresses diversity in many dimensions, including accelerator type, parallelism approach, analysis use case, underlying vector field, and more.
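
    As background, particle advection itself amounts to repeatedly integrating particle positions through the vector field; a minimal fourth-order Runge-Kutta step is sketched below, assuming a `sample(pos)` velocity lookup (an illustrative stand-in, not taken from the dissertation).

```python
import numpy as np

def rk4_step(pos, sample, dt):
    """One fourth-order Runge-Kutta advection step.

    `sample(pos)` is assumed to return the velocity vector at `pos`.
    """
    k1 = sample(pos)
    k2 = sample(pos + 0.5 * dt * k1)
    k3 = sample(pos + 0.5 * dt * k2)
    k4 = sample(pos + dt * k3)
    return pos + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def advect(seed, sample, dt, steps):
    """Trace one particle trajectory from a seed point."""
    traj = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        traj.append(rk4_step(traj[-1], sample, dt))
    return np.array(traj)

# Example with a constant velocity field:
# path = advect([0.0, 0.0, 0.0], lambda p: np.array([1.0, 0.0, 0.0]), 0.01, 100)
```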

    DisPar Methods and Their Implementation on a Heterogeneous PC Cluster

    This thesis assesses two main areas of advection-diffusion simulation. The first part is dedicated to numerical studies. It is proved that there is a direct relation between pollutant particle displacement moments and truncation errors. This relation provides the theoretical foundation for a new family of numerical methods, DisPar. Three methods are introduced and appraised. The first is a 2D semi-Lagrangian method based on particle displacement moments for regular grids, DisPar-k, with which one can explicitly control the desired truncation error. The second method is also based on particle displacement moments but is targeted at uniform, non-regular grids, DisParV; it has also shown strong numerical robustness. Unlike DisPar-k and DisParV, the third method follows an Eulerian approach with three particle destination regions. It was developed so that an initially homogeneous concentration profile is preserved independently of the parameters used. Comparison with DisPar-k in non-linear situations emphasized the strong shortcomings associated with advection-diffusion methods in real scenarios. The second part of the dissertation is dedicated to the implementation of these methods on a heterogeneous PC cluster. To do so, a new partitioning scheme, AORDA, was developed. The application, Scalable DisPar, was implemented on the Microsoft .Net framework and written entirely in C#. It was tested on the Tagus Estuary, near Lisbon, Portugal. To overcome the load imbalances caused by tides, several partitioning schemes were implemented: scatter partitioning, dynamic load balancing, and a mix of both. The tests showed that the number of neighbouring machines was the main factor limiting scalability, even with asynchronous communications, and that the communication tools were the principal cause: Microsoft .Net remoting 1.0 does not appear to work properly under the concurrency created by asynchronous communications. This prevented firm conclusions about the relative scalability of the partitioning strategies used. However, the results strongly suggest that the best approach will be scatter partitioning combined with dynamic load balancing and asynchronous communications: scatter partitioning mitigates the load imbalances caused by tides, while dynamic load balancing is mainly triggered at the beginning of the simulation to correct possible errors in the predicted processing power of each machine.
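
    The scatter-partitioning idea can be sketched as follows (grid size, tile size, and names are illustrative assumptions, not the Scalable DisPar code): dealing spatial tiles of the domain to processors round-robin, rather than in contiguous chunks, spreads the wet/dry imbalance caused by tides roughly evenly across machines.

```python
# Illustrative sketch of scatter partitioning of a 2D domain.
# Grid size, tile size, and processor count are assumed values.

def scatter_partition(nx, ny, tile, n_procs):
    """Deal (tile x tile) blocks of an nx-by-ny grid to processors round-robin."""
    assignment = {}
    tile_id = 0
    for i0 in range(0, nx, tile):
        for j0 in range(0, ny, tile):
            assignment[(i0, j0)] = tile_id % n_procs  # tile origin -> owner
            tile_id += 1
    return assignment

# Example: a 1000 x 800 estuary grid split into 100 x 100 tiles over 8 machines;
# neighbouring tiles land on different machines, so a tidal wet/dry front never
# concentrates its load on a single processor.
owners = scatter_partition(1000, 800, 100, 8)
```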

    Efficient Evaluation of Data-intensive Batch-queries in Open Simulation Laboratories

    Better instruments, faster and bigger supercomputers, and easier collaboration and sharing of data in the sciences have introduced the need to manage increasingly large datasets. Advances in high-performance computing (HPC) have empowered the computational branches of many science disciplines. However, many scientists lack access to HPC facilities or the necessary sophistication to develop and run HPC codes. The benefits of testing new theories and experimenting with large numerical simulations have thus been restricted to a few top users. In this dissertation, I describe the "remote immersive analysis" approach to computational science and present new techniques and methods for the efficient evaluation of scientific analysis tasks in analysis cluster environments. I discuss several techniques developed for the efficient evaluation of data-intensive batch-queries in large numerical simulation databases. An I/O streaming method for the evaluation of decomposable kernel computations utilizes partial sums to evaluate a batch query in a single sequential pass over the data. Spatial filtering computations that use a box filter share not only data but also computation, and can be evaluated over an intermediate summed-volumes dataset derived from the original data; this is more efficient for certain workloads even when the intermediate dataset is computed dynamically. Threshold queries have immense data requirements and potentially operate over entire time-steps of the simulation. An efficient and scalable data-parallel approach evaluates threshold queries of fields derived from the raw simulation data and stores their results in an application-aware semantic cache for fast subsequent retrieval. Finally, synchronization at a mediator, task-parallel, and data-parallel approaches for the evaluation of particle tracking queries are compared and examined. These techniques are developed, deployed, and evaluated in the Johns Hopkins Turbulence Databases (JHTDB), an open simulation laboratory for turbulence research. The JHTDB stores the output of world-class numerical simulations of turbulence and provides public access to, and the means to explore, their complete space-time history. The techniques discussed implement core scientific analysis routines and significantly increase the utility of the service. Additionally, they improve the performance of these routines by up to an order of magnitude or more when compared with direct implementations or implementations adapted from the simulation code.
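
    The summed-volumes technique is the three-dimensional analogue of a summed-area table: one inclusive prefix sum over all three axes is precomputed, after which any axis-aligned box sum is a constant-time combination of eight corner values. A minimal NumPy sketch (illustrative only, not the JHTDB service code) is shown below.

```python
import numpy as np

def summed_volume(field):
    """Inclusive 3D prefix sum: the 'summed volumes' intermediate dataset."""
    return field.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)

def box_sum(sv, lo, hi):
    """Sum of the original field over the box lo..hi (inclusive, voxel indices),
    computed from the summed volume by 3D inclusion-exclusion on eight corners."""
    def s(i, j, k):
        if i < 0 or j < 0 or k < 0:
            return 0.0
        return sv[i, j, k]

    (x0, y0, z0), (x1, y1, z1) = lo, hi
    return (s(x1, y1, z1)
            - s(x0 - 1, y1, z1) - s(x1, y0 - 1, z1) - s(x1, y1, z0 - 1)
            + s(x0 - 1, y0 - 1, z1) + s(x0 - 1, y1, z0 - 1) + s(x1, y0 - 1, z0 - 1)
            - s(x0 - 1, y0 - 1, z0 - 1))

# A box-filtered value is box_sum(...) divided by the number of voxels in the box.
```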

    A Multi-Core Numerical Framework for Characterizing Flow in Oil Reservoirs

    Presented at the SCS Spring Simulation Multi-Conference (SpringSim 2011), April 4-7, 2011, Boston, USA. Awarded Best Paper in the 19th High Performance Computing Symposium and Best Overall Paper at SpringSim 2011. This paper presents a numerical framework that enables scalable, parallel execution of engineering simulations on multi-core, shared memory architectures. Distribution of the simulations is done by selective hash-tabling of the model domain, which spatially decomposes it into a number of orthogonal computational tasks. These tasks, the size of which is critical to optimal cache blocking and consequently performance, are then distributed for execution to multiple threads using the previously presented task management algorithm, H-Dispatch. Two numerical methods, smoothed particle hydrodynamics (SPH) and the lattice Boltzmann method (LBM), are discussed in the present work, although the framework is general enough to be used with any explicit time integration scheme. The implementation of both SPH and the LBM within the parallel framework is outlined, and the performance of each is presented in terms of speed-up and efficiency. On the 24-core server used in this research, near-linear scalability was achieved for both numerical methods with utilization efficiencies up to 95%. To close, the framework is employed to simulate fluid flow in a porous rock specimen, which is of broad geophysical significance, particularly in enhanced oil recovery.
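
    The hash-tabling decomposition can be sketched roughly as follows (the cell size and names are illustrative assumptions, not the paper's code): particles or lattice sites are binned into orthogonal spatial cells keyed by their integer cell coordinates, and each occupied cell becomes one task handed to a worker thread by the scheduler (H-Dispatch in the paper).

```python
from collections import defaultdict

def hash_decompose(positions, cell_size):
    """Bin particle indices into orthogonal spatial cells (one cell = one task).

    The cell size governs the working-set size of a task, which the paper
    identifies as critical to cache blocking and hence performance.
    """
    tasks = defaultdict(list)
    for idx, (x, y, z) in enumerate(positions):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        tasks[key].append(idx)
    return tasks  # dict: cell key -> list of particle indices

# Each entry of `tasks` would then be handed to a worker thread by the
# scheduler; plain iteration over tasks.items() stands in for that step here.
```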

    A texture-based framework for improving CFD data visualization in a virtual environment

    In the field of computational fluid dynamics (CFD), accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Inefficient handling and access of the data at initialization and runtime can limit the engineer's ability to quickly visualize and investigate the entire flow simulation, hampering the ability to make a quality engineering decision in a timely manner. This problem is amplified n-fold if the solution set is time dependent, or transient. To visualize the data efficiently, dataset access should be decreased, if not eliminated, at runtime to provide an interactive environment to the end user. The size of the initial datasets should also be reduced as much as possible, while maintaining the validity of the solution, so that larger (i.e. transient) solution datasets can be visualized. To accomplish this, the format in which the dataset is stored should be changed from conventional formats. With the recent advancements of graphics processing unit (GPU) technology, current research in the computer graphics community has led to a novel approach for efficiently storing and accessing flow field data as texture data during a visualization. A so-called texture-based solution for visualization of flow fields allows the end user to visualize complex three-dimensional flow fields in an intuitive fashion while remaining interactive. This work presents a framework for incorporating texture-based analysis techniques into a current CFD visualization application to improve the capabilities for investigating flow fields. The framework presented is easily extensible to allow for research into and incorporation of progressive visualization methods, in keeping with current technology. Comparisons of the current framework with the texture-based framework show that the texture-based framework effectively visualizes a dataset that could not be visualized in its entirety with the current framework. Comparisons of common visualization techniques, such as contour planes and streamlines, are made to show how the texture-based framework outperforms the current framework.
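
    At its core, a texture-based pipeline uploads the vector field once as a 3D texture and relies on the GPU's hardware-filtered fetches for velocity lookups at runtime; the effect of such a fetch can be mimicked on the CPU with a trilinear sample, as in the sketch below (an illustration, not the thesis framework).

```python
import numpy as np

def trilinear_sample(tex, p):
    """Sample a 3D vector texture `tex` (shape [X, Y, Z, 3]) at a continuous
    point p, mimicking the hardware-filtered fetch a GPU performs per fragment."""
    x, y, z = p
    i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - i, y - j, z - k
    i1 = min(i + 1, tex.shape[0] - 1)
    j1 = min(j + 1, tex.shape[1] - 1)
    k1 = min(k + 1, tex.shape[2] - 1)

    # Interpolate along x, then y, then z.
    c00 = tex[i, j, k] * (1 - fx) + tex[i1, j, k] * fx
    c10 = tex[i, j1, k] * (1 - fx) + tex[i1, j1, k] * fx
    c01 = tex[i, j, k1] * (1 - fx) + tex[i1, j, k1] * fx
    c11 = tex[i, j1, k1] * (1 - fx) + tex[i1, j1, k1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```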

    Evaluation and intercomparison of three-dimensional global marine carbon cycle models

    Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in Flash

    In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms, including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
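
    For passive tracers, the key coupling between the Lagrangian entities and the block-structured Eulerian mesh is locating the block that owns each particle so that grid fields can be interpolated to it; a naive sketch of that binning step is shown below (names and the flat block list are assumptions; FLASH traverses its AMR hierarchy instead).

```python
# Sketch of binning Lagrangian particles onto Eulerian blocks.
# `particles` maps particle id -> (x, y, z); `blocks` maps block id -> (lo, hi)
# bounding boxes. Both are illustrative stand-ins for FLASH's data structures.

def locate_particles(particles, blocks):
    """Assign each particle to the block whose bounding box contains it."""
    owners = {}
    for pid, pos in particles.items():
        for bid, (lo, hi) in blocks.items():
            if all(l <= c < h for c, l, h in zip(pos, lo, hi)):
                owners[pid] = bid
                break
    return owners  # particle id -> block id; particles outside all blocks are absent

# After every mesh refinement or particle move, this binning is what lets each
# block interpolate its grid fields to the particles it owns.
```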

    Data-driven deep-learning methods for the accelerated simulation of Eulerian fluid dynamics

    Deep-learning (DL) methods for the fast inference of the temporal evolution of fluid-dynamics systems, based on the previous recognition of features underlying large sets of fluid-dynamics data, have been studied. Specifically, models based on convolutional neural networks (CNNs) and graph neural networks (GNNs) were proposed and discussed. A U-Net, a popular fully-convolutional architecture, was trained to infer wave dynamics on liquid surfaces surrounded by walls, given as input the system state at previous time-points. A term for penalising the error of the spatial derivatives was added to the loss function, which resulted in a suppression of spurious oscillations and a more accurate location and length of the predicted wavefronts. This model proved to generalise accurately to complex wall geometries not seen during training. As opposed to the image data-structures processed by CNNs, graphs offer greater freedom in how data is organised and processed. This motivated the use of graphs to represent the state of fluid-dynamics systems discretised by unstructured sets of nodes, and GNNs to process such graphs. Graphs enabled more accurate representations of curvilinear geometries and higher-resolution placement exclusively in areas where the physics is more challenging to resolve. Two novel GNN architectures were designed for fluid-dynamics inference: the MuS-GNN, a multi-scale GNN, and the REMuS-GNN, a rotation-equivariant multi-scale GNN. Both architectures work by repeatedly passing messages from each node to its nearest nodes in the graph. Additionally, lower-resolution graphs, with a reduced number of nodes, are defined from the original graph, and messages are also passed from finer to coarser graphs and vice versa. The low-resolution graphs allow physics encompassing a range of lengthscales to be captured efficiently. Advection and fluid flow, modelled by the incompressible Navier-Stokes equations, were the two types of problems used to assess the proposed GNNs. Whereas a single-scale GNN was sufficient to achieve high generalisation accuracy in advection simulations, flow simulation benefited greatly from an increasing number of low-resolution graphs. The generalisation and long-term accuracy of these simulations were further improved by the REMuS-GNN architecture, which processes the system state independently of the orientation of the coordinate system thanks to a rotation-invariant representation and carefully designed components. To the best of the author's knowledge, the REMuS-GNN architecture was the first rotation-equivariant and multi-scale GNN. The simulations were accelerated by between one (on a CPU) and three (on a GPU) orders of magnitude with respect to a CPU-based numerical solver. Additionally, the parallelisation of multi-scale GNNs resulted in a close-to-linear speedup with the number of CPU cores or GPUs.
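
    The derivative-penalty term added to the U-Net loss can be sketched as follows (finite differences in NumPy; the weighting factor and names are assumptions, not the thesis code): the loss combines a pointwise error with the error of the spatial gradients, which is what suppresses spurious oscillations.

```python
import numpy as np

def loss_with_gradient_penalty(pred, target, lam=0.1):
    """Mean-squared error on the field plus a penalty on the error of its
    spatial derivatives. `pred` and `target` are 2D arrays (one field
    snapshot); `lam` is an assumed weighting factor."""
    mse = np.mean((pred - target) ** 2)
    gx_err = np.gradient(pred, axis=0) - np.gradient(target, axis=0)
    gy_err = np.gradient(pred, axis=1) - np.gradient(target, axis=1)
    grad_mse = np.mean(gx_err ** 2) + np.mean(gy_err ** 2)
    return mse + lam * grad_mse
```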