
    Fault Diagnosis of Hybrid Computing Systems Using Chaotic-Map Method

    Computing systems are becoming increasingly complex, with nodes consisting of a combination of multi-core central processing units (CPUs), many integrated core (MIC) and graphics processing unit (GPU) accelerators. These computing units and their interconnections are subject to different classes of hardware and software faults, which should be detected to support mitigation measures. We present the chaotic-map method, which uses the exponential divergence and wide Fourier properties of chaotic trajectories, combined with memory allocations and assignments, to diagnose component-level faults in these hybrid computing systems. We propose lightweight codes that utilize highly parallel chaotic-map computations tailored to isolate faults in arithmetic units, memory elements and interconnects. The diagnosis module on a node utilizes pthreads to place chaotic-map threads on CPU and MIC cores, and CUDA C and OpenCL kernels on GPU blocks. We present experimental diagnosis results on five multi-core CPUs, one MIC and seven GPUs, with typical diagnosis run-times under a minute.
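    The paper's actual diagnostic codes are not reproduced here, but the core idea — identical chaotic-map iterations diverge exponentially from one another if any arithmetic unit computes even slightly differently — can be sketched in a few lines. The logistic map, worker count, and tolerance below are illustrative choices, not the paper's:

    ```python
    import concurrent.futures

    def logistic_trajectory(x0, r=3.99, steps=1000):
        """Iterate the logistic map, which is chaotic for r near 4."""
        x, out = x0, []
        for _ in range(steps):
            x = r * x * (1.0 - x)
            out.append(x)
        return out

    def diagnose(workers=4, x0=0.123456789, tol=1e-12):
        """Run the same trajectory on several workers; because nearby
        trajectories separate exponentially, even a tiny arithmetic
        fault in one unit drifts far from the consensus trajectory."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
            futures = [ex.submit(logistic_trajectory, x0) for _ in range(workers)]
            results = [f.result() for f in futures]
        reference = results[0]
        return [i for i, traj in enumerate(results[1:], start=1)
                if max(abs(a - b) for a, b in zip(traj, reference)) > tol]

    print(diagnose())  # empty list: all units agree
    ```

    On a healthy machine all workers produce bit-identical trajectories, so the returned list of suspect units is empty; the paper's codes additionally target memory elements and interconnects.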

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and the results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 of Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    A Spectral Discontinuous Galerkin method for incompressible flow with Applications to turbulence

    In this thesis we develop a numerical solution method for the unsteady incompressible Navier-Stokes equations. The approach is based on projection methods for discretization in time and a higher-order discontinuous Galerkin (DG) discretization in space. We propose an upwind scheme for the convective term that chooses the direction of flux across cell interfaces by the mean value of the velocity and has favorable properties in the context of DG. We present new variants of solenoidal projection operators in the Helmholtz decomposition which are indeed discrete projection operators. The discretization is accomplished on quadrilateral or hexahedral meshes, where sum-factorization in tensor-product finite elements can be exploited. Sum-factorization significantly reduces the algorithmic complexity of assembly. Building on this, we construct efficient, scalable matrix-free solvers and preconditioners to tackle the subproblems arising in the discretization. Conservation properties of the numerical method are demonstrated both for problems with exact solutions and for turbulent flows. Finally, the presented DG solver enables long-time stable direct numerical simulations of the Navier-Stokes equations. As an application we perform computations on a model of the atmospheric boundary layer and demonstrate the existence of surface renewal.
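    The sum-factorization idea can be illustrated independently of the thesis's solver (the operator sizes and random operators below are arbitrary stand-ins): in 2D, applying a tensor-product operator A ⊗ B as two small matrix products instead of one assembled matrix reduces the cost per application from O(n⁴) to O(n³):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    A = rng.random((n, n))   # 1D operator acting in the x-direction
    B = rng.random((n, n))   # 1D operator acting in the y-direction
    U = rng.random((n, n))   # coefficients on one tensor-product element

    # Naive: assemble the full n^2 x n^2 operator, O(n^4) per application.
    v_naive = np.kron(A, B) @ U.ravel()

    # Sum-factorization: two small matrix products, O(n^3) per application,
    # using the identity (A ⊗ B) vec(U) = vec(A U B^T) for row-major vec.
    v_fact = (A @ U @ B.T).ravel()

    assert np.allclose(v_naive, v_fact)
    ```

    In 3D the same factorization turns an O(n⁶) dense application into three sweeps of O(n⁴), which is what makes matrix-free high-order DG operators affordable.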

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI) for short. We argue that the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe that coordinated efforts between motifs offer immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-the-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use cases for human-machine teaming and automated science.
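    As a toy illustration of the differentiable-programming motif (7) — not an example from the paper itself — forward-mode automatic differentiation can propagate exact derivatives through an entire simulation. Here a minimal dual-number class differentiates an explicit-Euler decay model with respect to its physical parameter:

    ```python
    class Dual:
        """Minimal forward-mode autodiff: carries a value and its derivative."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.der * o.val + self.val * o.der)  # product rule
        __rmul__ = __mul__

    def simulate(k, steps=10, dt=0.1):
        """Toy simulation of x' = -k x with explicit Euler steps."""
        x = Dual(1.0)
        for _ in range(steps):
            x = x + (-dt) * k * x
        return x

    # Seeding k's derivative with 1.0 yields d(final state)/dk alongside
    # the final state itself, in a single forward pass.
    out = simulate(Dual(0.5, 1.0))
    print(out.val, out.der)
    ```

    Frameworks used in practice for this motif (e.g. JAX or PyTorch) apply the same chain-rule bookkeeping, typically in reverse mode, to simulations with millions of parameters.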

    Lagrangian ocean analysis: fundamentals and practices

    Lagrangian analysis is a powerful way to analyse the output of ocean circulation models and other ocean velocity data, such as those derived from altimetry. In the Lagrangian approach, large sets of virtual particles are integrated within the three-dimensional, time-evolving velocity fields. Over several decades, a variety of tools and methods for this purpose have emerged. Here, we review the state of the art in the field of Lagrangian analysis of ocean velocity data, starting from a fundamental kinematic framework and with a focus on large-scale open-ocean applications. Beyond the use of explicit velocity fields, we consider the influence of unresolved physics and dynamics on particle trajectories. We comprehensively list and discuss the tools currently available for tracking virtual particles. We then showcase some of the innovative applications of trajectory data, and conclude with some open questions and an outlook. The overall goal of this review paper is to reconcile some of the different techniques and methods in Lagrangian ocean analysis, while recognising the rich diversity of codes that have emerged and continue to emerge, and the challenges of the coming age of petascale computing.
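    The core computation — integrating virtual particles through a time-evolving velocity field — can be sketched self-containedly. Below, the standard time-dependent double-gyre test flow stands in for gridded model output (parameter values are illustrative), and one particle is advected with classical fourth-order Runge-Kutta:

    ```python
    import math

    def velocity(x, y, t):
        """Time-oscillating double gyre on [0,2]x[0,1], a common
        kinematic stand-in for ocean model velocity data."""
        eps, A, om = 0.25, 0.1, 2 * math.pi / 10
        a = eps * math.sin(om * t)
        b = 1 - 2 * a
        f = a * x**2 + b * x
        dfdx = 2 * a * x + b
        u = -math.pi * A * math.sin(math.pi * f) * math.cos(math.pi * y)
        v = math.pi * A * math.cos(math.pi * f) * math.sin(math.pi * y) * dfdx
        return u, v

    def advect(x, y, t=0.0, dt=0.01, steps=1000):
        """Integrate one virtual particle with classical RK4."""
        for _ in range(steps):
            k1 = velocity(x, y, t)
            k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
            k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
            k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
            x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            t += dt
        return x, y

    print(advect(0.3, 0.4))  # particle position after 10 time units
    ```

    Production tools reviewed in the paper replace the analytic field with interpolation in discrete model output and add the unresolved-physics terms (e.g. stochastic diffusion) discussed above.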

    Model-data fusion in digital twins of large scale dynamical systems

    Digital twins (DTs) are virtual entities that serve as the real-time digital counterparts of actual physical systems across their life-cycle. In a typical application of DTs, the physical system provides sensor measurements, and the DT should incorporate the incoming data and run different simulations to assess various scenarios and situations. As a result, an informed decision can be made to alter the physical system, or at least to take necessary precautions, and the process is repeated along the system's life-cycle. Thus, the effective deployment of DTs requires fulfilling multi-queries while communicating with the physical system in real time. Nonetheless, DTs of large-scale dynamical systems, as in fluid flows, come with three grand challenges that we address in this dissertation.
    First, the high dimensionality makes full order modeling (FOM) methodologies unfeasible due to the associated computational time and memory costs. In this regard, reduced order models (ROMs) can potentially accelerate the forward simulations by orders of magnitude, especially for systems with recurrent spatial structures. However, traditional ROMs yield inaccurate and unstable results for turbulent and convective flows. Therefore, we propose a hybrid variational multi-scale framework that benefits from the locality of modal interactions to deliver accurate ROMs. Furthermore, we adopt a novel physics-guided machine learning technique to provide on-the-fly corrections and elevate the trustworthiness of the resulting ROM in the sparse-data and incomplete-governing-equations regimes.
    Second, complex natural or engineered systems are characterized by their multi-scale, multi-physics, and multi-component nature. The efficient simulation of such systems requires quick communication and information sharing between several heterogeneous computing units. In order to address this challenge, we pioneer an interface learning (IL) paradigm to ensure the seamless integration of hierarchical solvers with different scales, physics, abstractions, and geometries without compromising the integrity of the computational setup. We demonstrate the IL paradigm for non-iterative domain decomposition and for FOM-ROM coupling in multi-fidelity computations.
    Third, fluid flow systems are continuously evolving, and thus the validity of the DT should be warranted across varying operating conditions and flow regimes. To do so, we embed data assimilation (DA) techniques to enable the DT to self-adapt based on in-situ observational data and efficiently replicate the physical system. In addition, we combine DA algorithms with machine learning models to build a robust framework that collectively addresses the model closure problem, the error in prior information, and the measurement noise.
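    As a generic illustration of the reduced-order-modeling step in the first challenge — proper orthogonal decomposition (POD), not the dissertation's variational multi-scale framework — an energy-ranked spatial basis can be extracted from simulation snapshots via the SVD and used to compress a full-order state to a handful of coefficients. The synthetic low-rank data below are purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic snapshot matrix: each column is a flow state at one time,
    # built from a few spatial structures so the data have low rank.
    n_space, n_time, true_rank = 200, 50, 3
    snapshots = rng.random((n_space, true_rank)) @ rng.random((true_rank, n_time))

    # POD: left singular vectors are the energy-ranked spatial modes.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = 3                       # number of retained modes
    basis = U[:, :r]

    # Reduction: represent a full state by r coefficients and reconstruct.
    state = snapshots[:, 0]
    reduced = basis.T @ state   # r numbers instead of n_space
    reconstructed = basis @ reduced

    print(np.linalg.norm(state - reconstructed))  # ~0 for rank-3 data
    ```

    For genuinely turbulent data the singular values decay slowly and truncation discards dynamically important scales, which is precisely the closure problem the dissertation's hybrid framework and machine-learning corrections target.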

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.