157 research outputs found

    Dynamic Workflows for Multiphysics Design

    Large-scale multiphysics applications, e.g., aircraft flight simulation, require several layers for their effective implementation. The first layer includes the design of efficient methods for mathematical models, numerical problem solutions and data processing; this involves the optimization of complex application codes, which may rely on intricate and distributed execution tools. The second layer covers the asynchronous execution of coordinated tasks running in parallel on remotely connected environments, e.g., using grid middleware. The third layer includes sophisticated tools, e.g., workflow systems, that allow users to interact dynamically, in explicit and coordinated ways, to design new artefacts. This presentation is devoted to the third layer, where sophisticated application codes are deployed and must run cooperatively in heterogeneous, distributed and parallel environments. It is assumed here that the first and second layers are implemented and running to support the execution of the workflows. The paper focuses on some open challenges in deploying and running distributed workflows for multiphysics design: resiliency, exception handling, dynamic user interactions and high-performance computing.
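
    The third-layer behavior sketched above, coordinated tasks dispatched asynchronously as their dependencies complete, can be pictured with a toy workflow engine. The sketch below is a minimal Python illustration, not the platform described in the paper; the task names and the thread-pool executor are illustrative assumptions.

        # Toy workflow engine: tasks form a dependency graph and are
        # dispatched asynchronously as soon as their inputs are ready.
        from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

        # workflow graph: task -> set of tasks it depends on (hypothetical names)
        deps = {
            "mesh": set(),
            "cfd_solve": {"mesh"},
            "structural_solve": {"mesh"},
            "couple": {"cfd_solve", "structural_solve"},
        }

        def run(task):
            print(f"running {task}")
            return task

        def execute(deps):
            done, futures = set(), {}
            with ThreadPoolExecutor() as pool:
                while len(done) < len(deps):
                    # dispatch every task whose dependencies are all satisfied
                    for t, d in deps.items():
                        if t not in done and t not in futures and d <= done:
                            futures[t] = pool.submit(run, t)
                    finished, _ = wait(futures.values(), return_when=FIRST_COMPLETED)
                    for f in finished:
                        done.add(f.result())
                    futures = {t: fu for t, fu in futures.items() if fu not in finished}

        execute(deps)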

    Resilient Workflows for High-Performance Simulation Platforms

    Workflow systems are considered here to support large-scale multiphysics simulations. Because the use of large distributed and parallel multi-core infrastructures is prone to software and hardware failures, the paper addresses the need for error recovery procedures. A new mechanism based on asymmetric checkpointing is presented, and a rule-based implementation for a distributed workflow platform is detailed.
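
    The asymmetric idea, checkpointing only the tasks whose loss would be expensive to recompute, selected by rules, can be sketched as follows. This is an illustrative Python sketch under assumptions: the rule thresholds and task attributes are invented, not the paper's actual rule base.

        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            runtime_hours: float  # estimated cost to recompute the task
            fan_in: int           # number of upstream results joined here

        # rule base: any matching rule triggers a checkpoint (thresholds assumed)
        RULES = [
            lambda t: t.runtime_hours > 2.0,  # expensive to recompute
            lambda t: t.fan_in >= 2,          # join point gathering many results
        ]

        def needs_checkpoint(task: Task) -> bool:
            return any(rule(task) for rule in RULES)

        for t in [Task("mesh", 0.5, 0), Task("solve", 6.0, 1), Task("merge", 0.2, 2)]:
            print(t.name, "-> checkpoint" if needs_checkpoint(t) else "-> skip")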

    Robust Workflows for Large-Scale Multiphysics Simulation

    Large-scale simulations, e.g., fluid-structure interactions and aeroacoustic noise generation, require substantial computing power, visualization systems and high-end storage capacity. Because 3D multiphysics simulations also run long processes on large datasets, an important issue is the robustness of the computing systems involved, i.e., the ability to resume inadvertently aborted computations. A new approach to handling application failures is presented here. It is based on extensions of the bracketing checkpoints usually implemented in database and transactional systems. An asymmetric scheme is devised to reduce the number of checkpoints required to safely restart aborted applications when unexpected failures occur. The tasks are controlled by a workflow graph that can be deployed on various distributed platforms and high-performance infrastructures. An automated bracketing process inserts checkpoints at critical execution points in the workflow graph, using a heuristic process based on an evolving set of rules. Preliminary tests show that the number of checkpoints, and hence the overhead incurred by the checkpointing mechanism, can be significantly reduced, enhancing application performance while supporting its resilience.
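
    The bracketing mechanism can be pictured as a graph rewrite that wraps selected critical tasks with a checkpoint before and after them, so that an aborted run restarts from the nearest bracket rather than from the beginning. The sketch below is an assumption-laden Python illustration, not the paper's algorithm or heuristic.

        def bracket(graph, critical):
            """graph: {task: [successors]}; insert ckpt_in/<t> -> t -> ckpt_out/<t>
            around every task in `critical` and rewire its predecessors."""
            out = {}
            for t, succs in graph.items():
                succs = [f"ckpt_in/{s}" if s in critical else s for s in succs]
                if t in critical:
                    out[f"ckpt_in/{t}"] = [t]
                    out[t] = [f"ckpt_out/{t}"]
                    out[f"ckpt_out/{t}"] = succs
                else:
                    out[t] = succs
            return out

        g = {"mesh": ["solve"], "solve": ["post"], "post": []}
        print(bracket(g, critical={"solve"}))
        # {'mesh': ['ckpt_in/solve'], 'ckpt_in/solve': ['solve'],
        #  'solve': ['ckpt_out/solve'], 'ckpt_out/solve': ['post'], 'post': []}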

    Predictive Scale-Bridging Simulations through Active Learning

    Throughout computational science, there is a growing need to utilize the continual improvements in raw computational horsepower to achieve greater physical fidelity through scale-bridging rather than brute-force increases in the number of mesh elements. For instance, quantitative predictions of transport in nanoporous media, critical to hydrocarbon extraction from tight shale formations, are impossible without accounting for molecular-level interactions. Similarly, inertial confinement fusion simulations rely on numerical diffusion to simulate molecular effects such as non-local transport and mixing without truly accounting for molecular interactions. With these two disparate applications in mind, we develop a novel capability which uses an active learning approach to optimize the use of local fine-scale simulations for informing coarse-scale hydrodynamics. Our approach addresses three challenges: forecasting the continuum coarse-scale trajectory in order to speculatively execute new fine-scale molecular dynamics calculations, dynamically updating the coarse scale from fine-scale calculations, and quantifying uncertainty in neural network models.
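
    The loop described, step the coarse scale, query a surrogate, and launch a fine-scale calculation only when the surrogate is too uncertain, can be sketched schematically. Everything below is a toy stand-in: the kernel surrogate replaces the paper's uncertainty-quantified neural networks, and fine_scale replaces a molecular dynamics call.

        import numpy as np

        def fine_scale(x):           # stand-in for an expensive MD calculation
            return np.sin(3 * x)

        class KernelSurrogate:
            """Toy surrogate: low local data density signals high uncertainty."""
            def __init__(self):
                self.X, self.y = [], []
            def fit(self, x, y):
                self.X.append(x); self.y.append(y)
            def predict(self, x):
                if not self.X:
                    return 0.0, np.inf
                X, y = np.array(self.X), np.array(self.y)
                w = np.exp(-10.0 * (X - x) ** 2)      # kernel weights
                mean = float(np.sum(w * y) / np.sum(w))
                uncertainty = float(1.0 / np.sum(w))  # sparse data -> large value
                return mean, uncertainty

        surrogate, threshold, x = KernelSurrogate(), 0.5, 0.0
        for step in range(20):                  # coarse-scale time stepping
            closure, unc = surrogate.predict(x)
            if unc > threshold:                 # surrogate untrusted here:
                closure = fine_scale(x)         # speculatively run fine scale
                surrogate.fit(x, closure)       # and update the model
            x += 0.1 * (1.0 + closure)          # toy coarse-scale update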

    System-Level Support for Composition of Applications

    Current HPC system software lacks support for emerging application deployment scenarios that combine one or more simulations with in situ analytics, sometimes called multi-component or multi-enclave applications. This paper presents an initial design study, implementation, and evaluation of mechanisms supporting composite multi-enclave applications in the Hobbes exascale operating system. These mechanisms include virtualization techniques that isolate custom application enclaves while retaining the vendor-supplied host operating system, together with high-performance inter-VM communication mechanisms. Our initial single-node performance evaluation of these mechanisms on multi-enclave science applications, both real and proxy, demonstrates the ability to support multi-enclave HPC job composition with minimal performance overhead.
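
    The composition pattern itself, a simulation producer and an in situ analytics consumer running in separate isolated contexts and exchanging data over a fast channel, can be sketched on one node with ordinary processes. This is only an illustration of the pattern: the processes stand in for enclaves and the queue for inter-VM communication; none of this is the Hobbes API.

        from multiprocessing import Process, Queue

        def simulation(q):
            for step in range(5):
                field = [step * 0.1] * 4      # toy field data for this timestep
                q.put((step, field))
            q.put(None)                       # sentinel: simulation finished

        def analytics(q):
            while (msg := q.get()) is not None:
                step, field = msg
                print(f"step {step}: mean = {sum(field) / len(field):.2f}")

        if __name__ == "__main__":
            q = Queue()
            procs = [Process(target=simulation, args=(q,)),
                     Process(target=analytics, args=(q,))]
            for p in procs: p.start()
            for p in procs: p.join()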

    Testing And Verification For The Open Source Release Of The Horizon Simulation Framework

    Modeling and simulation tools are exceptionally useful for designing aerospace systems because they allow engineers to test and iterate designs before committing the massive resources required for system realization. The Horizon Simulation Framework (HSF) is a time-driven modeling and simulation tool which attempts to optimize how a modeled system could perform a mission profile. After 15 years of development, the HSF team aims to reach a wider user and developer base by releasing the software open source. To ensure a successful release, the software required extensive testing, and the main scheduling algorithm required protections against new code breaking old functionality. The goal of the work presented in this thesis is to satisfy these requirements and officially release the software open source. The software was tested with over 80% coverage, and a continuous integration pipeline that runs builds and unit/integration tests on every new commit was set up. Finally, supporting documentation and user resources were created and organized to promote community adoption of the software, making Horizon ready for an open source release.
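
    The protection described, pinning the scheduling algorithm's behavior so new commits cannot silently change it, is typically achieved with regression tests run by the CI pipeline. The sketch below shows the shape of such a test in Python with an invented toy scheduler and a frozen baseline; it is not HSF's actual test suite.

        def schedule(tasks):
            """Toy greedy scheduler: order tasks by (deadline, duration)."""
            return sorted(tasks, key=lambda t: (t["deadline"], t["duration"]))

        def test_schedule_matches_known_good_baseline():
            tasks = [
                {"name": "imaging", "deadline": 2, "duration": 5},
                {"name": "downlink", "deadline": 1, "duration": 3},
                {"name": "slew", "deadline": 2, "duration": 1},
            ]
            baseline = ["downlink", "slew", "imaging"]  # frozen known-good order
            assert [t["name"] for t in schedule(tasks)] == baseline

    Run on every commit, a failing baseline comparison flags any change to the scheduler's observable behavior before it is merged.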

    Innovative Data Management in advanced characterization: Implications for materials design

    This paper describes a novel methodology of data documentation in materials characterization, whose starting point is the creation and usage of a Data Management Plan (DMP) for scientific data in the field of materials science and engineering, followed by the development and exploitation of ontologies for the harnessing of data created through experimental techniques. The case study discussed here is nanoindentation, a widely used method for the experimental assessment of mechanical properties on a small scale. The new documentation structure for characterization data (CHADA) is based on the definition of (i) sample, (ii) method, (iii) raw data and (iv) data analysis as the main components of the metadata associated with any characterization experiment. In this way, the relevant information can be stored inside the metadata associated with the experiment. The same methodology is applicable to a large number of techniques that produce large amounts of raw data, and it can be an invaluable tool for big data analysis and for the creation of an open innovation environment where data can be accessed freely and efficiently. Other fundamental aspects are reviewed in the paper, including the taxonomy and curation of data, the creation of ontologies and the classification of characterization techniques, and the harnessing of data in open innovation environments via database construction along with the retrieval of information via algorithms. The issues of harmonization and standardization of such novel approaches are also critically discussed. Finally, the possible implications for nanomaterial design and the potential industrial impact of the new approach are described and a critical outlook is given.
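
    The four-part CHADA structure named above, sample, method, raw data and data analysis, maps naturally onto a simple serializable record. This is a minimal Python sketch; the four top-level components come from the paper, but every field name inside them is an illustrative assumption.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class Chada:
            sample: dict         # material, preparation, geometry ...
            method: dict         # instrument, protocol, parameters ...
            raw_data: dict       # file references, formats, units ...
            data_analysis: dict  # models, software, derived quantities ...

        record = Chada(
            sample={"material": "fused silica", "preparation": "polished"},
            method={"technique": "nanoindentation", "max_load_mN": 10},
            raw_data={"file": "indent_001.csv", "columns": ["depth_nm", "load_mN"]},
            data_analysis={"model": "Oliver-Pharr", "hardness_GPa": 9.2},
        )
        print(json.dumps(asdict(record), indent=2))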