
    A Reinforcement Learning Powered Digital Twin to Support Supply Chain Decisions

    The complexity of supply chain planning decisions is growing along with the volatility, uncertainty, complexity and ambiguity (VUCA) of supply chain environments. As a consequence, the complexity of designing adequate decision support systems is also increasing. New approaches have emerged to support such decisions, and digital twins are one of them. Concurrently, the artificial intelligence field is growing, including approaches such as reinforcement learning. This paper explores the potential of creating digital twins with reinforcement learning capabilities. It first proposes a framework unifying digital twins and reinforcement learning into a single approach. It then illustrates how this framework is put into practice for making supply and delivery decisions in a drug supply chain use case. Finally, the results of the experiment are compared with those of traditional approaches, showing the applicability of the proposed framework.
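The combination described above can be sketched in miniature: a digital twin of a single-echelon inventory system whose transition function simulates demand, with tabular Q-learning choosing order quantities. This is a hypothetical toy illustration, not the paper's actual framework; all state, cost, and demand parameters here are invented.

```python
import random

random.seed(0)

# Toy digital twin of an inventory system. States are stock levels
# 0..MAX_STOCK, actions are order sizes 0..MAX_ORDER. All numbers are
# illustrative assumptions, not taken from the paper.
MAX_STOCK, MAX_ORDER = 10, 5
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = [[0.0] * (MAX_ORDER + 1) for _ in range(MAX_STOCK + 1)]

def step(stock, order):
    """Digital-twin transition: receive the order, then face random demand."""
    stock = min(stock + order, MAX_STOCK)
    demand = random.randint(0, 6)
    sold = min(stock, demand)
    lost = demand - sold
    # revenue per unit sold, holding cost per leftover unit, lost-sale penalty
    reward = 5 * sold - 1 * (stock - sold) - 3 * lost
    return stock - sold, reward

def train(episodes=2000, horizon=30):
    for _ in range(episodes):
        s = random.randint(0, MAX_STOCK)
        for _ in range(horizon):
            # epsilon-greedy action selection over the Q-table
            a = (random.randint(0, MAX_ORDER) if random.random() < EPS
                 else max(range(MAX_ORDER + 1), key=lambda x: Q[s][x]))
            s2, r = step(s, a)
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2

train()
# Greedy ordering policy extracted from the learned Q-table
policy = [max(range(MAX_ORDER + 1), key=lambda a: Q[s][a])
          for s in range(MAX_STOCK + 1)]
```

The point of the sketch is the interaction pattern: the agent never touches the real supply chain, only the twin's `step` function, which is where a calibrated simulation model would plug in.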

    A weak coupling strategy between the FEM and SPH methods for FSI simulations

    A two-dimensional coupling for Fluid-Structure Interaction is proposed between the Smoothed Particle Hydrodynamics (SPH) method for the fluid and the Finite Element (FE) method for the solid. This coupling takes advantage of both methods, namely the ability of the SPH method to handle large deformations of the fluid domain and the proven ability of the FE method to predict the behaviour of solids under unsteady pressure loading. Moreover, no specific algorithm is required at the solid-fluid interface to prevent interpenetration of the materials, which leads to a relatively easy implementation of the coupling. Validation cases of this coupling are presented. In particular, the total energy conservation across the coupling is carefully monitored and analysed, demonstrating the validity and accuracy of this fully explicit coupling. The different energies are thus expressed and tracked over time, both for the SPH method for the fluid and for the FE method for the solid; the sum of these energies must remain constant in time in the absence of dissipation. Good convergence properties with respect to the conservation of this total energy have been observed and are presented. The validations of the SPH-FE coupling are presented in detail, in comparison with analytical and experimental results. Finally, the model is applied to a complex realistic case where 3D effects cannot be neglected.
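The energy bookkeeping described above can be mimicked on a toy problem. The sketch below is not the SPH-FE code: it tracks the kinetic plus potential energy of a 1D mass-spring oscillator advanced with an explicit (velocity Verlet) scheme, and measures the relative drift of the total energy over many steps, which is the kind of diagnostic the paper monitors across the coupling.

```python
# Illustrative energy-conservation monitor for an explicit time integrator.
# The system (a 1D harmonic oscillator) and all parameters are invented
# stand-ins for the coupled fluid + solid energy budget of the paper.
m, k, dt = 1.0, 4.0, 1e-3   # mass, stiffness, explicit time step
x, v = 1.0, 0.0             # initial displacement and velocity

def total_energy(x, v):
    # kinetic + potential energy; constant in the absence of dissipation
    return 0.5 * m * v * v + 0.5 * k * x * x

e0 = total_energy(x, v)
drift = 0.0
for _ in range(10000):
    a = -k * x / m
    v_half = v + 0.5 * dt * a          # half kick
    x = x + dt * v_half                # drift
    v = v_half + 0.5 * dt * (-k * x / m)  # half kick with updated force
    # track the worst relative deviation of the total energy
    drift = max(drift, abs(total_energy(x, v) - e0) / e0)
```

For a well-behaved explicit scheme the drift stays bounded and small; a growing drift in such a monitor is the usual symptom of an error at a coupling interface.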

    Parallel Computational Steering and Analysis for HPC Applications using a ParaView Interface and the HDF5 DSM Virtual File Driver

    Honourable Mention Award
    We present a framework for interfacing an arbitrary HPC simulation code with an interactive ParaView session, using the HDF5 parallel IO library as the API. The implementation allows a flexible combination of parallel simulation, concurrent parallel analysis and GUI client, all of which may be on the same or separate machines. Data transfer between the simulation and the ParaView server takes place using a virtual file driver for HDF5 that bypasses the disk entirely and instead communicates directly between the coupled applications in parallel. The simulation and ParaView tasks run as separate MPI jobs and may therefore use different core counts and/or hardware configurations/platforms, making it possible to carefully tailor the amount of resources dedicated to each part of the workload. The coupled applications write and read datasets to the shared virtual HDF5 file layer, which allows the user to read data representing any aspect of the simulation, modify it using ParaView pipelines, then write it back to be reread by the simulation (or vice versa). This allows not only simple parameter changes, but also complete remeshing of grids, or operations involving regeneration of field values over the entire domain, to be carried out. To avoid the problem of manually customizing the GUI for each application that is to be steered, we make use of XML templates that describe outputs from the simulation, inputs back to it, and what user interactions are permitted on the controlled elements. This XML is used to generate GUI and 3D controls for manipulation of the simulation without requiring explicit knowledge of the underlying model.
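The XML-template idea can be sketched as follows. The tag and attribute names below are invented for illustration, not the actual schema used by the framework; the point is that a generic steering GUI can be generated by walking the declared inputs rather than hand-coding controls per application.

```python
import xml.etree.ElementTree as ET

# Hypothetical steering template: one element per simulation output and
# per steerable input. Element and attribute names are assumptions.
template = """
<SteeringConfig>
  <Output name="velocity" type="vector" mesh="grid"/>
  <Input name="inlet_speed" type="double" min="0.0" max="10.0"/>
  <Input name="remesh" type="bool"/>
</SteeringConfig>
"""

root = ET.fromstring(template)
# A GUI generator would map each declared input to a widget
# (e.g. a bounded slider for "double", a checkbox for "bool").
controls = [(e.get("name"), e.get("type")) for e in root.findall("Input")]
```

Because the template, not the application, drives control generation, adding a new steerable parameter requires only a template edit on the simulation side.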

    SPH High-Performance Computing simulations of rigid solids impacting the free-surface of water

    Numerical simulations of water entries based on a three-dimensional parallelized Smoothed Particle Hydrodynamics (SPH) model developed by Ecole Centrale Nantes are presented. The aim of the paper is to show how such SPH simulations of complex 3D problems involving a free surface can be performed on a supercomputer such as the IBM Blue Gene/L of Ecole polytechnique fédérale de Lausanne, with 8,192 cores. The paper thus presents the different techniques which had to be included in the SPH model to make such simulations possible. Memory handling, in particular, is a quite subtle issue because of constraints due to the use of a variable-h scheme. These improvements made possible the simulation of test cases involving hundreds of millions of particles computed using thousands of cores. The speedup and efficiency of these parallel calculations are studied. The model capabilities are illustrated for two water entry problems: first, a simple test case involving a sphere impacting the free surface at high velocity; and second, a complex 3D geometry involving a ship hull impacting the free surface in forced motion. Sensitivity to spatial resolution is investigated as well in the case of the sphere water entry, and the flow analysis is performed by comparison with both experimental and theoretical reference results.
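At the core of any such SPH model is a smoothing kernel evaluated per particle pair with the local smoothing length h. As a minimal sketch, here is the standard cubic-spline kernel in 3D with support radius 2h; the actual kernel and variable-h corrections used by the code above may differ.

```python
import math

def cubic_spline_w(r, h):
    """Standard 3D cubic-spline SPH kernel, normalisation 1/(pi*h^3),
    compact support of radius 2h. A textbook form, assumed here."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0  # outside the compact support
```

The compact support is what makes neighbour search (and hence the parallel memory handling discussed above) tractable: each particle only interacts with particles within 2h, and a variable h makes that interaction radius, and the memory per particle, spatially non-uniform.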

    High performance computing 3D SPH model: Sphere impacting the free-surface of water

    In this work, an analysis based on a three-dimensional parallelized SPH model developed by ECN and applied to free-surface impact simulations is presented. The aim of this work is to show that SPH simulations can be performed on a large supercomputer such as the EPFL IBM Blue Gene/L with 8,192 cores. The paper presents improvements concerning in particular the memory consumption, which remains a subtle issue because of the constraints of the variable-h scheme. These improvements have made possible the simulation of test cases involving tens of millions of particles computed using more than a thousand cores. Furthermore, pv-meshless, developed by CSCS, is used to visualize the pressure field and the effect of the impact.

    Parallelization of a 3D SPH code for massive simulations on distributed memory

    The Smoothed Particle Hydrodynamics (SPH) method is a particle method that has seen strong growth over the last two decades. Although initially developed for astrophysical applications, this numerical method is now widely applied to fluid mechanics, structural mechanics and various applications in different branches of physics. The SPH-flow code is developed jointly by the LHEEA laboratory of Ecole Centrale de Nantes and the company HydrOcean. This tool is mainly dedicated to the modelling of complex free-surface flows, in a context of fast dynamics, and in the presence of solids with complex geometries interacting with the fluid. In this domain, the main advantage of the method lies in its ability to simulate free-surface disconnections and reconnections (breaking waves, free-surface jets, etc.) without requiring their capture. As with most other particle methods, SPH is demanding in terms of computational resources, and its parallelization is unavoidable for massive 3D applications in order to keep turnaround times reasonable. The order of magnitude of the resolutions adopted in our simulations is several hundred million particles, implying the use of several thousand networked processors, and therefore an efficient parallelization. This article presents the parallelization strategy adopted in our SPH code, based on a domain decomposition and using the MPI standard (distributed memory). The different procedures dedicated to load balancing are presented, as well as the solutions adopted to limit communication latencies through the use of non-blocking communications.
    Parallel performance is then presented in terms of speedup and efficiency, on cases with up to 3 billion particles using 32,768 networked processors.
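The load-balancing idea behind such a domain decomposition can be sketched in one dimension: move the subdomain boundaries so that each rank owns roughly the same number of particles. This is a serial toy sketch with invented function names; the real code decomposes in 3D and exchanges halos with non-blocking MPI.

```python
import bisect

def balance_1d(positions, n_ranks):
    """Place n_ranks-1 boundary positions so each rank gets an
    (almost) equal share of the particles. Illustrative only."""
    xs = sorted(positions)
    n = len(xs)
    # cut at the positions of the particles at the equal-count quantiles
    return [xs[r * n // n_ranks] for r in range(1, n_ranks)]

def counts_per_rank(positions, cuts):
    """Count how many particles fall in each subdomain."""
    counts = [0] * (len(cuts) + 1)
    for x in positions:
        counts[bisect.bisect_right(cuts, x)] += 1
    return counts
```

In a dynamic simulation the particle distribution changes every few steps, so boundaries like these must be recomputed periodically, and the cost of migrating particles between ranks traded against the imbalance tolerated.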

    Parallel Computational Steering for HPC Applications Using HDF5 Files in Distributed Shared Memory

    Interfacing a GUI-driven visualization/analysis package to an HPC application enables a supercomputer to be used as an interactive instrument. We achieve this by replacing the IO layer in the HDF5 library with a custom driver which transfers data in parallel between simulation and analysis. Our implementation, using ParaView as the interface, allows a flexible combination of parallel simulation, concurrent parallel analysis, and GUI client, either on the same or separate machines. Each MPI job may use different core counts or hardware configurations, allowing fine tuning of the amount of resources dedicated to each part of the workload. By making use of a distributed shared memory file, one may read data from the simulation, modify it using ParaView pipelines, and write it back to be reused by the simulation (or vice versa). This allows not only simple parameter changes, but also complete remeshing of grids, or operations involving regeneration of field values over the entire domain. To avoid the problem of manually customizing the GUI for each application that is to be steered, we make use of XML templates that describe outputs from the simulation (and inputs back to it) to automatically generate GUI controls for manipulation of the simulation.
