84 research outputs found

    General Purpose Flow Visualization at the Exascale

    Exascale computing, i.e., supercomputers that can perform 10^18 math operations per second, provides a significant opportunity for advancing the computational sciences. That said, these machines can be difficult to use efficiently, due to their massive parallelism, their use of accelerators, and the diversity of those accelerators. All areas of the computational science stack need to be reconsidered to address these problems. With this dissertation, we consider flow visualization, which is critical for analyzing vector field data from simulations. We specifically consider flow visualization techniques that use particle advection, i.e., tracing particle trajectories, which presents performance and implementation challenges. The dissertation makes four primary contributions. First, it synthesizes previous work on particle advection performance and introduces a high-level analytical cost model. Second, it proposes an approach for performance portability across accelerators. Third, it studies expected speedups from using accelerators, including the importance of factors such as duration, particle count, data set, and others. Finally, it proposes an exascale-capable particle advection system that addresses diversity in many dimensions, including accelerator type, parallelism approach, analysis use case, underlying vector field, and more.
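The operation at the heart of this abstract, particle advection, can be illustrated with a minimal sketch: tracing one particle trajectory through a vector field with fourth-order Runge-Kutta integration. The rotational velocity field below is an illustrative stand-in, not a dataset or integrator configuration from the dissertation.

```python
# Minimal particle advection sketch: trace a trajectory through a
# steady 2-D vector field using RK4 integration. The analytic field
# (rigid rotation about the origin) is a hypothetical example.
import numpy as np

def velocity(p):
    """Stand-in vector field: rigid rotation about the origin."""
    x, y = p
    return np.array([-y, x])

def advect(p0, h, steps):
    """Trace one particle with fourth-order Runge-Kutta steps."""
    traj = [np.asarray(p0, dtype=float)]
    for _ in range(steps):
        p = traj[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        traj.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

# ~one full revolution: 628 steps of 0.01 covers ~2*pi radians
traj = advect((1.0, 0.0), h=0.01, steps=628)
```

In a real flow visualization workload, millions of such trajectories are integrated concurrently against interpolated simulation data, which is what makes the performance questions studied in the dissertation hard.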

    Performance Analysis of Traditional and Data-Parallel Primitive Implementations of Visualization and Analysis Kernels

    Measurements of absolute runtime are useful as a summary of performance when studying parallel visualization and analysis methods on computational platforms of increasing concurrency and complexity. We can obtain even more insight by measuring and examining more detailed measures from hardware performance counters, such as the number of instructions executed by an algorithm implemented in a particular way, the amount of data moved to/from memory, memory hierarchy utilization levels via cache hit/miss ratios, and so forth. This work focuses on performance analysis, on modern multi-core platforms, of three different visualization and analysis kernels, each implemented in two different ways: one "traditional", using combinations of C++ and VTK, and the other data-parallel, using VTK-m. Our performance study consists of measuring and reporting several different hardware performance counters on two different multi-core CPU platforms. The results reveal interesting performance differences between the two approaches to implementing these kernels, results that would not be apparent using runtime as the only metric.
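The derived measures mentioned above are simple ratios over raw counter readings. A hedged sketch, with entirely hypothetical counter names and values (not measurements from the paper's platforms):

```python
# Turning raw hardware-counter readings into derived metrics such as
# instructions per cycle (IPC) and cache hit ratio. All values here
# are hypothetical examples, not data from the study.
counters = {
    "instructions":     4.2e9,
    "cpu_cycles":       3.0e9,
    "cache_references": 1.8e8,
    "cache_misses":     2.7e7,
    "llc_miss_bytes":   2.7e7 * 64,  # assuming 64-byte cache lines
}

def derived_metrics(c):
    return {
        "ipc": c["instructions"] / c["cpu_cycles"],
        "cache_hit_ratio": 1.0 - c["cache_misses"] / c["cache_references"],
        "mem_traffic_gb": c["llc_miss_bytes"] / 1e9,
    }

m = derived_metrics(counters)
```

Comparing such ratios between a VTK-style and a VTK-m-style implementation is what exposes differences (e.g. in memory traffic) that identical runtimes would hide.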

    Efficient Parallel Particle Advection via Targeting Devices

    Particle advection is a fundamental operation for a wide range of flow visualization algorithms. Particle advection execution times can vary based on many factors, including the number of particles, the duration of advection, and the underlying architecture. In this study, we introduce a new algorithm for parallel particle advection that improves execution time by targeting devices, i.e., adapting to use the CPU or GPU based on the current work. This algorithm is motivated by the observation that the CPU is sometimes able to perform part of the overall computation better, since CPUs operate at a faster rate and win out when the threads of a GPU cannot be fully utilized. To evaluate our algorithm, we ran 162 experiments and compared our algorithm to traditional GPU-only and CPU-only approaches. Our results show that our algorithm adapts to match the performance of the faster of the CPU-only and GPU-only approaches.
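The core decision the abstract describes can be sketched as a per-round device choice driven by the amount of active work. The threshold and the dispatch stub below are illustrative assumptions, not the paper's tuned policy.

```python
# Hedged sketch of device targeting: pick CPU or GPU for the next
# round of advection based on whether there are enough active
# particles to keep the GPU's threads occupied. The threshold is a
# hypothetical value, not the paper's.
def choose_device(active_particles, gpu_saturation_threshold=10_000):
    """Return the device expected to finish this round faster.

    Below the threshold the GPU would run under-occupied, so the
    faster-clocked CPU is preferred; above it, GPU parallelism wins.
    """
    return "gpu" if active_particles >= gpu_saturation_threshold else "cpu"

def advect_round(active_particles):
    device = choose_device(active_particles)
    # ... dispatch the advection kernel to `device` here ...
    return device

# As particles terminate over successive rounds, work migrates to the CPU.
schedule = [advect_round(n) for n in (500_000, 60_000, 8_000, 900)]
```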

    A texture-based framework for improving CFD data visualization in a virtual environment

    In the field of computational fluid dynamics (CFD), accurate representations of fluid phenomena can be simulated, but they require large amounts of data to represent the flow domain. Inefficient handling and access of the data at initialization and runtime can limit the engineer's ability to quickly visualize and investigate the entire flow simulation, thus hampering the ability to make a quality engineering decision in a timely manner. This problem is amplified n-fold if the solution set is time dependent, or transient. To visualize the data efficiently, dataset access at runtime should be decreased, if not eliminated, to provide an interactive environment to the end user. Also, the size of the initial datasets should be reduced as much as possible while maintaining the validity of the solution, so that larger (i.e., transient) solution datasets can be visualized. To accomplish this, the format in which the dataset is stored should be changed from conventional formats. With the recent advancements of graphics processing unit (GPU) technology, current research in the computer graphics community has led to a novel approach for efficiently storing and accessing flow field data as texture data during a visualization. This so-called texture-based solution for visualization of flow fields allows the end user to visualize complex three-dimensional flow fields in an intuitive fashion while remaining interactive. This work presents a framework for incorporating texture-based analysis techniques into a current CFD visualization application to improve the capabilities for investigating flow fields. The framework presented is easily extensible to allow for research and incorporation of progressive visualization methods, in keeping with current technology. Comparisons of the current framework with the texture-based framework show that the latter effectively visualizes a dataset that could not be visualized in its entirety with the current framework. 
    Comparisons of common visualization techniques, such as contour planes and streamlines, are made to show how the texture-based framework outperforms the current framework.
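The essence of a texture-based representation is that the flow field lives on a regular grid (a texture) and is sampled with hardware-style bilinear filtering, rather than re-reading the dataset at runtime. A minimal sketch, with NumPy standing in for the GPU texture unit and a hypothetical linear vector field as the stored data:

```python
# Store a 2-D vector field as an H x W x 2 "texture" and sample it
# at continuous coordinates with bilinear interpolation, the lookup a
# GPU texture unit performs in hardware. Field and sizes are
# illustrative assumptions.
import numpy as np

H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]
# hypothetical rotational field: channel 0 = -(y - H/2), channel 1 = x - W/2
texture = np.stack([-(ys - H / 2), xs - W / 2], axis=-1).astype(float)

def sample_bilinear(tex, x, y):
    """Bilinear texture lookup at continuous coordinates (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, tex.shape[1] - 1)
    y1 = min(y0 + 1, tex.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * tex[y0, x0] + fx * tex[y0, x1]
    bot = (1 - fx) * tex[y1, x0] + fx * tex[y1, x1]
    return (1 - fy) * top + fy * bot

v = sample_bilinear(texture, 40.5, 20.25)
```

Because the lookup needs only the texture in GPU memory, trajectories and other flow queries can be evaluated interactively without touching the original solution files.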

    An automatic procedure to forecast tephra fallout

    Tephra fallout constitutes a serious threat to communities around active volcanoes. Reliable short-term forecasts represent a valuable aid for scientists and civil authorities to mitigate the effects of fallout on the surrounding areas during an episode of crisis. We present a platform-independent automatic procedure with the aim of producing daily forecasts of the transport and deposition of volcanic particles. The procedure builds on a series of programs and interfaces that automate the data flow, the execution of fallout models, and their subsequent postprocessing. Firstly, the procedure downloads regional meteorological forecasts for the area and time interval of interest, filters and converts the data from its native format, and runs the CALMET diagnostic model to obtain the wind field and other micro-meteorological variables on a finer, user-defined local-scale 3-D grid. Secondly, it assesses the distribution of mass along the eruptive column, commonly by means of the radially averaged buoyant plume equations, depending on the prognostic wind field and on the conditions at the vent (granulometry, mass flow rate, etc.). All these data serve as input for the fallout models. The initial version of the procedure includes only two Eulerian models, HAZMAP and FALL3D, the latter available in serial and parallel implementations. However, the procedure is designed to easily incorporate other models in the near future with minor modifications to the model source code. The last step is to postprocess the model outcomes to obtain maps written in standard file formats. These maps contain plots of relevant quantities such as predicted ground load, expected deposit thickness and, for the case of 3-D models, concentration in air or flight-safety concentration thresholds.
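To illustrate the "distribution of mass along the eruptive column" step, fallout models of this family often use a Suzuki-type vertical profile. The sketch below is a hedged illustration of that general idea, not the procedure's actual radially averaged plume solution; the column height and shape parameters are hypothetical.

```python
# Hedged sketch: distribute erupted mass along a column using a
# Suzuki-type profile, a common parameterization in tephra fallout
# modeling. Parameter values (H, A, lam) are illustrative only.
import numpy as np

def suzuki_profile(z, H, A=4.0, lam=1.0):
    """Unnormalized Suzuki-type mass distribution at heights z in [0, H]."""
    s = (1.0 - z / H) * np.exp(A * (z / H - 1.0))
    return s ** lam

H = 10_000.0                 # hypothetical column height [m]
z = np.linspace(0.0, H, 201)
w = suzuki_profile(z, H)
dz = z[1] - z[0]
mass_fraction = w / (w.sum() * dz)   # normalize to unit total mass
```

The normalized profile (peaking near the top of the column for A = 4) is what the fallout models then advect with the prognostic wind field.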

    Steering in computational science: mesoscale modelling and simulation

    This paper outlines the benefits of computational steering for high performance computing applications. Lattice-Boltzmann mesoscale fluid simulations of binary and ternary amphiphilic fluids in two and three dimensions are used to illustrate the substantial improvements which computational steering offers in terms of resource efficiency and time to discover new physics. We discuss details of our current steering implementations and describe their future outlook with the advent of computational grids.

    Comment: 40 pages, 11 figures. Accepted for publication in Contemporary Physics.
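In its simplest form, computational steering means a running simulation picks up externally changed parameters mid-run instead of requiring a restart. The file-based polling mechanism below is an illustrative stand-in for the paper's steering infrastructure, with a hypothetical steerable parameter:

```python
# Minimal computational-steering sketch: between timesteps, the
# simulation polls a parameter file and applies any update mid-run.
# The parameter name and file mechanism are illustrative assumptions.
import json
import os
import tempfile

def run_steered(steps, param_file):
    temperature = 1.0          # hypothetical steerable parameter
    history = []
    for step in range(steps):
        # poll for steering input between timesteps
        if os.path.exists(param_file):
            with open(param_file) as f:
                temperature = json.load(f).get("temperature", temperature)
        history.append(temperature)
        # ... one simulation timestep using `temperature` would go here ...
        if step == 2:          # emulate a scientist steering mid-run
            with open(param_file, "w") as f:
                json.dump({"temperature": 0.5}, f)
    return history

param_file = os.path.join(tempfile.mkdtemp(), "steer.json")
history = run_steered(6, param_file)
```

Real steering systems replace the file poll with a network channel to a visualization front end, but the control flow, check, apply, continue, is the same.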
