    Harnessing the Power of Many: Extensible Toolkit for Scalable Ensemble Applications

    Many scientific problems require multiple distinct computational tasks to be executed in order to achieve a desired solution. We introduce the Ensemble Toolkit (EnTK) to address the challenges of scale, diversity, and reliability that such ensemble applications pose. We describe the design and implementation of EnTK, characterize its performance, and integrate it with two distinct exemplar use cases: seismic inversion and adaptive analog ensembles. We perform nine experiments, characterizing EnTK overheads, strong and weak scalability, and the performance of two use case implementations, at scale and on production infrastructures. We show how EnTK meets the following general requirements: (i) implementing dedicated abstractions to support the description and execution of ensemble applications; (ii) support for execution on heterogeneous computing infrastructures; (iii) efficient scalability up to O(10^4) tasks; and (iv) fault tolerance. We discuss novel computational capabilities that EnTK enables and the scientific advantages arising thereof. We propose EnTK as an important addition to the suite of tools in support of production scientific computing.
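
    A minimal sketch of the pipeline-stage-task (PST) abstraction that EnTK exposes is shown below. The class names follow EnTK's published Python API, but the executable path, resource label, and resource-description keys are illustrative assumptions that may differ across EnTK releases.

        from radical.entk import Pipeline, Stage, Task, AppManager

        # Describe an ensemble of independent simulation tasks as one stage;
        # tasks within a stage are eligible to run concurrently.
        stage = Stage()
        for i in range(16):
            task = Task()
            task.executable = '/usr/bin/my_sim'     # hypothetical ensemble-member binary
            task.arguments  = ['--member', str(i)]
            stage.add_tasks(task)

        # A pipeline imposes an order on stages.
        pipeline = Pipeline()
        pipeline.add_stages(stage)

        # The AppManager binds the workflow to a target resource and executes it.
        appman = AppManager()
        appman.resource_desc = {'resource': 'local.localhost',  # assumed resource label
                                'walltime': 30,
                                'cpus': 16}
        appman.workflow = {pipeline}
        appman.run()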

    Geometry Modeling for Unstructured Mesh Adaptation

    The quantification and control of discretization error is critical to obtaining reliable simulation results. Adaptive mesh techniques have the potential to automate discretization error control, but have made limited impact on production analysis workflows. Recent progress has matured a number of independent implementations of flow solvers, error estimation methods, and anisotropic mesh adaptation mechanics. However, the poor integration of initial mesh generation and adaptive mesh mechanics with typical sources of geometry has hindered adoption of adaptive mesh techniques, where these geometries are often created in Mechanical Computer-Aided Design (MCAD) systems. The difficulty of this coupling is compounded by two factors: the inherent complexity of the model (e.g., large range of scales, bodies in proximity, details not required for analysis) and unintended geometry construction artifacts (e.g., translation, uneven parameterization, degeneracy, self-intersection, sliver faces, gaps, large tolerances between topological elements, local high curvature to enforce continuity). Manual preparation of geometry is commonly employed to enable fixed-grid and adaptive-grid workflows by reducing the severity and negative impacts of these construction artifacts, but manual process interaction inhibits workflow automation. Techniques to permit the use of complex geometry models and to reduce the impact of geometry construction artifacts on unstructured grid workflows are presented. Models from the AIAA Sonic Boom and High Lift Prediction Workshops are shown to demonstrate the utility of the current approach.

    Enhancing Energy Production with Exascale HPC Methods

    High Performance Computing (HPC) resources have become the key actor for achieving more ambitious challenges in many disciplines. In this step beyond, an explosion in available parallelism and the use of special-purpose processors are crucial. With such a goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them if necessary, and going beyond the state of the art in the required HPC exascale simulations for different energy sources. In this paper, a general overview of these methods is presented as well as some specific preliminary results. The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement n° 689772, the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster is provided by the Intel Corporation, which enabled us to obtain the presented experimental results in uncertainty quantification in seismic imaging.

    Verification of Unstructured Grid Adaptation Components

    Adaptive unstructured grid techniques have made limited impact on production analysis workflows where the control of discretization error is critical to obtaining reliable simulation results. Recent progress has matured a number of independent implementations of flow solvers, error estimation methods, and anisotropic grid adaptation mechanics. Known differences and previously unknown differences in grid adaptation components and their integrated processes are identified here for study. Unstructured grid adaptation tools are verified using analytic functions and the Code Comparison Principle. Three analytic functions with different smoothness properties are adapted to show the impact of smoothness on implementation differences. A scalar advection-diffusion problem with an analytic solution that models a boundary layer is adapted to test individual grid adaptation components. Laminar flow over a delta wing and turbulent flow over an ONERA M6 wing are verified with multiple, independent grid adaptation procedures to show consistent convergence to fine-grid forces and a moment. The scalar problems illustrate known differences in a grid adaptation component implementation and a previously unknown interaction between components. The wing adaptation cases in the current study document a clear improvement to existing grid adaptation procedures. The stage is set for the infusion of verified grid adaptation into production fluid flow simulations.
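
    To make the analytic-function verification concrete, the sketch below derives an anisotropic metric from the absolute Hessian of a smooth field with boundary-layer-like behavior; equidistributing edge lengths under such a metric is the standard mechanism that adaptation components of this kind exercise. The particular function, evaluation point, and eigenvalue floor are illustrative assumptions, not the paper's test cases.

        import numpy as np

        def hessian(f, x, y, h=1e-4):
            # Central-difference Hessian of a scalar field f(x, y).
            fxx = (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2
            fyy = (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2
            fxy = (f(x + h, y + h) - f(x + h, y - h)
                   - f(x - h, y + h) + f(x - h, y - h)) / (4*h**2)
            return np.array([[fxx, fxy], [fxy, fyy]])

        def metric_from_hessian(H, eps=1e-3):
            # |H|: rebuild H with absolute eigenvalues, floored to stay positive definite.
            lam, V = np.linalg.eigh(H)
            lam = np.maximum(np.abs(lam), eps)
            return V @ np.diag(lam) @ V.T

        # An analytic field with very different x/y length scales near y = 0.
        f = lambda x, y: np.tanh(50*y) + x**2
        M = metric_from_hessian(hessian(f, 0.5, 0.02))

        # Target edge lengths along each eigenvector scale as 1/sqrt(eigenvalue),
        # so the strong y-gradient requests much shorter edges than the x-direction.
        lam, _ = np.linalg.eigh(M)
        print('anisotropic target lengths:', 1/np.sqrt(lam))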

    Unstructured Grid Adaptation: Status, Potential Impacts, and Recommended Investments Towards CFD 2030

    Unstructured grid adaptation is a powerful tool to control Computational Fluid Dynamics (CFD) discretization error. It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The study authors forecast the potential impact of emerging High Performance Computing (HPC) environments in the year 2030 and identify that mesh generation and adaptivity will continue to be significant bottlenecks in the CFD workflow. These bottlenecks may persist because very little government investment has been targeted in these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD workflows.

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

    Advancing the Open Modeling Interface (OpenMI) for Integrated Water Resources Modeling

    The use of existing component-based modeling frameworks for integrated water resources modeling is currently hampered for some important use cases because they lack support for commonly used, topology-aware, spatiotemporal data structures. Additionally, existing frameworks are often accompanied by large software stacks with steep learning curves. Others lack specifications for deploying them on high performance, heterogeneous computing (HPC) infrastructure. This puts their use beyond the reach of many water resources modelers. In this paper, we describe new advances in component-based modeling using a framework called HydroCouple. This framework largely adopts the Open Modeling Interface (OpenMI) 2.0 interface definitions but demonstrates important advances for water resources modeling. HydroCouple explicitly defines standard and widely used geospatial data formats and provides interface definitions to support simulations on HPC infrastructure. In this paper, we illustrate how these advances can be used to develop efficient model components through a coupled urban stormwater modeling exercise.
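
    The pull-driven exchange-item pattern that underlies OpenMI-style coupling can be sketched in a few lines: each component exposes named output items, and a consumer requests values for a time of interest, which lazily advances the providing component. The class and method names below are illustrative simplifications, not the actual OpenMI 2.0 or HydroCouple interface definitions.

        class OutputItem:
            # A provider-side exchange item: values for a named quantity over time.
            def __init__(self, component, name):
                self.component, self.name = component, name

            def get_values(self, time):
                # Pull-driven coupling: advance the provider until it covers `time`.
                while self.component.current_time < time:
                    self.component.update()
                return self.component.state[self.name]

        class StormwaterModel:
            # Toy component standing in for a linkable model (assumed behavior).
            def __init__(self):
                self.current_time, self.state = 0.0, {'runoff': 0.0}

            def update(self):
                self.current_time += 1.0                      # one internal time step
                self.state['runoff'] = 0.3 * self.current_time  # placeholder physics

        provider = StormwaterModel()
        runoff = OutputItem(provider, 'runoff')
        # A downstream routing component would call this inside its own update():
        print(runoff.get_values(time=5.0))   # provider advances lazily to t = 5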

    Realtime reservoir characterization and beyond: cyber-infrastructure tools and technologies

    The advent of the digital oil field and rapidly decreasing cost of computing creates opportunities as well as challenges in simulation-based reservoir studies, in particular, real-time reservoir characterization and optimization. One challenge our efforts are directed toward is the use of real-time production data to perform live reservoir characterization using high throughput, high performance computing environments. To that end we developed the required tools of a parallel reservoir simulator, a parallel ensemble Kalman filter, and a scalable workflow manager. When using this collection of tools, a reservoir modeler is able to perform large scale reservoir management studies in short periods of time. This includes studies with thousands of models that are individually complex and large, involving millions of degrees of freedom. Using parallel processing, we are able to solve these models much faster than we otherwise would on a single, serial machine. This motivated the development of a fast parallel reservoir simulator. Furthermore, distributing those simulations across resources leads to a smaller total time to completion by making use of distributed processing. This allows the development of a scalable high throughput workflow manager. Finally, with thousands of models, each with millions of degrees of freedom, we end up with a superfluity of model parameters. This translates directly to billions of degrees of freedom in the reservoir study. To be able to use the ensemble Kalman filter on these models, we needed to develop a parallel implementation of the ensemble Kalman filter. This thesis discusses the enabling tools and technologies developed to address a specific problem: how to accurately characterize reservoirs, using large numbers of complex detailed models. For these characterization studies to be helpful in making production decisions, the time to solution must be feasible. To that end, our work is focused on developing and extending these tools, and optimizing their performance.
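
    The analysis step of the ensemble Kalman filter mentioned above can be summarized in a short sketch: the ensemble's sample covariance replaces the explicit error covariance, so the update costs scale with ensemble size rather than with the millions of model degrees of freedom. The dimensions, observation operator, and noise levels below are illustrative assumptions, not the thesis's parallel implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, N = 1000, 20, 50           # state size, observation count, ensemble size

        X = rng.normal(size=(n, N))      # prior ensemble (each column is one model)
        H = np.zeros((m, n))
        H[np.arange(m), np.arange(m)] = 1.0   # toy operator: observe first m states
        R = 0.1 * np.eye(m)              # observation-error covariance
        d = rng.normal(size=m)           # observed data

        # Ensemble anomalies give a low-rank sample covariance without forming
        # the full n x n matrix.
        Xa = X - X.mean(axis=1, keepdims=True)
        Y = H @ X
        Ya = Y - Y.mean(axis=1, keepdims=True)

        Cxy = Xa @ Ya.T / (N - 1)        # cross-covariance, n x m
        Cyy = Ya @ Ya.T / (N - 1)        # observation-space covariance, m x m
        K = Cxy @ np.linalg.inv(Cyy + R) # Kalman gain via an m x m solve only

        # Stochastic EnKF: perturb the data once per ensemble member.
        D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
        X_post = X + K @ (D - Y)         # updated (analysis) ensemble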

    Advancing the Cyberinfrastructure for Integrated Water Resources Modeling

    Like other scientists, hydrologists encode mathematical formulations that simulate various hydrologic processes as computer programs so that problems with water resource management that would otherwise be manually intractable can be solved efficiently. These computer models are typically developed to answer specific questions within a specific study domain. For example, one computer model may be developed to solve for magnitudes of water flow and water levels in an aquifer while another may be developed to solve for magnitudes of water flow through a water distribution network of pipes and reservoirs. Interactions between different processes are often ignored or are approximated using overly simplistic assumptions. The increasing complexity of the water resources challenges society faces, including stresses from variable climate and land use change, means that some of these models need to be stitched together so that these challenges are not evaluated myopically from the perspective of a single research discipline or study domain. The research in this dissertation presents an investigation of the various approaches and technologies that can be used to support model integration. The research delves into some of the computational challenges associated with model integration and suggests approaches for dealing with these challenges. Finally, it advances new software that provides data structures that water resources modelers are more accustomed to and allows them to take advantage of advanced computing resources for efficient simulations.