20 research outputs found

    Regularized regressions for parametric models based on separated representations

    Regressions created from experimental or simulated data enable the construction of metamodels, which are widely used in a variety of engineering applications. Many engineering problems involve multi-parametric physics whose multi-parametric solutions can be viewed as a computational vademecum that, once computed offline, can be used in a variety of real-time engineering applications, including optimization, inverse analysis, uncertainty propagation, and simulation-based control. Sometimes these multi-parametric problems can be solved using advanced model order reduction (MOR) techniques. However, solving them can be very costly. In that case, one possibility consists in solving the problem for a sample of parameter values and building a regression from the computed solutions; the solution for any choice of the parameters is then inferred from the regression model. However, addressing high dimensionality in the low-data limit while ensuring accuracy and avoiding overfitting constitutes a difficult challenge. The present paper proposes and discusses several advanced regressions based on the proper generalized decomposition (PGD) that provide these features. In particular, new PGD strategies are developed by adding different regularizations to the s-PGD method; in addition, an ANOVA-based PGD is proposed to combine their advantages.
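    The separated-representation idea behind such PGD regressions can be sketched with a toy rank-one regression fitted by alternating least squares. Everything below (polynomial basis, synthetic target, iteration count) is an illustrative assumption, not the authors' s-PGD implementation.

```python
import numpy as np

# Toy separated-representation regression (one mode), in the spirit of
# PGD-based regression: f(x1, x2) ~ (phi(x1) . a) * (psi(x2) . b).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))                    # two "parameters"
y = (x[:, 0] ** 3 - x[:, 0]) * (1.0 + x[:, 1] ** 2)      # separable target

def basis(t, degree=4):
    """Polynomial basis [1, t, t^2, ...] evaluated at samples t."""
    return np.vander(t, degree + 1, increasing=True)

B1, B2 = basis(x[:, 0]), basis(x[:, 1])
a = rng.normal(size=B1.shape[1])                         # mode factor, dim 1
b = rng.normal(size=B2.shape[1])                         # mode factor, dim 2

# Alternating least squares: fix one factor, solve for the other.
for _ in range(200):
    g2 = B2 @ b
    a, *_ = np.linalg.lstsq(B1 * g2[:, None], y, rcond=None)
    g1 = B1 @ a
    b, *_ = np.linalg.lstsq(B2 * g1[:, None], y, rcond=None)

pred = (B1 @ a) * (B2 @ b)
rel_err = np.linalg.norm(pred - y) / np.linalg.norm(y)
```

    Richer problems add modes greedily and, as the abstract discusses, regularization terms to keep the fit sparse and avoid overfitting in the low-data limit.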

    Application of PGD separation of space to create a reduced-order model of a lithium-ion cell structure

    Lithium-ion cells can be considered a laminate of thin plies comprising the anode, separator, and cathode. Such cells are vulnerable to out-of-plane loading. When simulating these structures under out-of-plane mechanical loads, dimensionally reduced approaches such as shells or plates are sub-optimal because they are blind to out-of-plane strains and stresses. On the other hand, the use of solid elements limits computational efficiency regardless of the time integration method. In this paper, the bottlenecks of both implicit and explicit methods are discussed, and an alternative approach is presented. Proper generalized decomposition (PGD) is used for this purpose: this computational method makes it possible to separate the problem into its characteristic in-plane and out-of-plane behaviors. The separation of space achieved with this method is demonstrated on a static linearized problem of a lithium-ion cell structure, and the results are compared with conventional solution approaches. Moreover, an in-plane/out-of-plane separated representation is also built using proper orthogonal decomposition (POD); this serves only as a comparison for the in-plane and out-of-plane behaviors estimated by the PGD and offers no computational advantage over conventional techniques. Finally, the time savings and the resulting deviations are discussed.
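    An in-plane/out-of-plane separated representation of the POD kind can be illustrated with the SVD: a field sampled on a tensor grid becomes a matrix, and its low-rank factorization yields in-plane modes times through-thickness modes. The synthetic field below is purely illustrative, not the cell model of the paper.

```python
import numpy as np

# Separation of space via the SVD (discrete counterpart of POD):
# u(x, z) sampled on a grid -> matrix -> in-plane x thickness modes.
nx, nz = 120, 15                        # in-plane / thickness resolutions
x = np.linspace(0.0, 1.0, nx)
z = np.linspace(0.0, 1.0, nz)
# Synthetic field made of two separated contributions (rank 2 by design).
U_field = (np.outer(np.sin(np.pi * x), 1.0 - z ** 2)
           + 0.05 * np.outer(np.sin(2 * np.pi * x), z))

Uu, s, Vt = np.linalg.svd(U_field, full_matrices=False)
r = 2                                   # retained separated modes
U_r = Uu[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(U_field - U_r) / np.linalg.norm(U_field)
energy = s[:r].sum() / s.sum()          # fraction of "energy" captured
```

    The PGD builds such separated modes on the fly while solving the equations, rather than compressing precomputed snapshots as the SVD/POD does.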

    Surrogate parametric metamodel based on Optimal Transport

    The description of a physical problem through a model necessarily involves the introduction of parameters. Hence, one wishes to have a solution of the problem as a function of all these parameters: a parametric solution. However, the construction of such parametric solutions exhibiting localization in space is only ensured by costly and time-consuming tests, which can be either numerical or experimental. Classical numerical methodologies imply enormous computational efforts for exploring the design space. Therefore, parametric solutions obtained using advanced nonlinear regressions are an essential tool to address this challenge. However, classical regression techniques, even the most advanced ones, can lead to non-physical interpolation in some fields such as fluid dynamics, where the solution localizes in different regions depending on the choice of problem parameters. In this context, Optimal Transport (OT) offers a mathematical framework to measure distances and interpolate between general objects in a way that is sometimes more physical than classical interpolation. OT has thus become fundamental in fields such as statistics and computer vision, and it is increasingly used in computational mechanics. However, the OT problem is usually computationally costly to solve and not suited to online evaluation. The aim of this paper is therefore to combine advanced nonlinear regressions with Optimal Transport in order to implement a parametric real-time model based on OT. To this purpose, a parametric model is built offline relying on Model Order Reduction and OT, leading to a real-time interpolation tool that follows Optimal Transport theory. Such a tool is of major interest in design processes, but also within the digital twin rationale.
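    The difference between classical and OT interpolation is easiest to see in one dimension, where displacement interpolation has a closed form through quantile functions. The example below (two localized bumps, grid, parameter values) is an illustrative sketch of the idea, not the paper's parametric model.

```python
import numpy as np

# 1D displacement (Wasserstein) interpolation between two densities:
# the interpolant's quantile function is the blend of the two quantile
# functions, F_t^{-1} = (1 - t) F_a^{-1} + t F_b^{-1}.
grid = np.linspace(-6.0, 8.0, 400)

def gauss(mu, sig):
    """Normalized Gaussian bump on the grid."""
    p = np.exp(-0.5 * ((grid - mu) / sig) ** 2)
    return p / p.sum()

pa, pb = gauss(-2.0, 0.5), gauss(3.0, 0.5)   # two localized "solutions"

# Classical linear interpolation of the densities: a non-physical
# two-bump mixture, with mass at both original locations.
mix = 0.5 * (pa + pb)

def ot_interp(pa, pb, t, n=2000):
    """Samples of the OT interpolant at parameter t in [0, 1]."""
    u = (np.arange(n) + 0.5) / n
    qa = np.interp(u, np.cumsum(pa), grid)   # quantile function of pa
    qb = np.interp(u, np.cumsum(pb), grid)   # quantile function of pb
    return (1.0 - t) * qa + t * qb

samples = ot_interp(pa, pb, 0.5)   # single bump transported to ~0.5
```

    The OT interpolant keeps the solution localized (one bump travelling between the two states), which is the physically meaningful behavior the paper seeks to reproduce in real time.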

    Describing and Modeling Rough Composites Surfaces by Using Topological Data Analysis and Fractional Brownian Motion

    Many composite manufacturing processes employ the consolidation of pre-impregnated preforms. To obtain adequate performance of the formed part, however, intimate contact and molecular diffusion across the different preform layers must be ensured. The latter takes place as soon as intimate contact occurs, provided the temperature remains high enough during the characteristic molecular reptation time. The former, in turn, depends on the applied compression force, the temperature, and the composite rheology, which during processing induce the flow of asperities and promote intimate contact. Thus, the initial roughness and its evolution during the process become critical factors in composite consolidation. Process optimization and control require an adequate model able to infer the consolidation degree from material and process features. The parameters associated with the process are easily identifiable and measurable (e.g., temperature, compression force, process time). Those concerning the materials are also accessible; however, describing the surface roughness remains an issue. Usual statistical descriptors are too poor and, moreover, too far from the physics involved. The present paper focuses on advanced descriptors that outperform the usual statistical ones, in particular descriptors based on persistent homology (at the heart of so-called topological data analysis, TDA) and their connection with fractional Brownian surfaces. The latter constitute a surface generator able to represent the surface evolution all along the consolidation process, as the present paper emphasizes.
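    The fractional-Brownian ingredient can be sketched in 1D: a rough profile is sampled exactly from the fBm covariance via a Cholesky factorization, with the Hurst exponent controlling roughness. The value H = 0.7, the grid, and the jitter are illustrative assumptions; the paper works with surfaces rather than profiles.

```python
import numpy as np

# Exact sampling of fractional Brownian motion (fBm) profiles from its
# covariance C(s, t) = 0.5 * (s^2H + t^2H - |s - t|^2H).
H = 0.7                                   # Hurst exponent (assumed value)
n = 200
t = np.linspace(1.0 / n, 1.0, n)

S, T = np.meshgrid(t, t, indexing="ij")
C = 0.5 * (S ** (2 * H) + T ** (2 * H) - np.abs(S - T) ** (2 * H))
L = np.linalg.cholesky(C + 1e-12 * np.eye(n))   # jitter for stability

rng = np.random.default_rng(3)
paths = L @ rng.standard_normal((n, 500))        # 500 rough profiles

# Sanity check: Var[B(1)] = 1 by construction of the covariance.
var_end = paths[-1].var()
```

    Smaller H yields rougher profiles; fitting H (together with TDA descriptors) is what lets the generator track the surface as asperities flow during consolidation.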

    Modeling systems from partial observations

    Modeling systems from collected data faces two main difficulties. The first concerns the choice of measurable variables that will define the learned model features: these should be the ones relevant to the addressed physics, ideally neither more nor fewer than the essential ones. The second is linked to data accessibility since, in general, only limited parts of the system are accessible for measurement. This work revisits some aspects of the observation, description, and modeling of systems that are only partially accessible, and shows that a model can be identified when the loading on the unresolved degrees of freedom remains unaltered across the different experiments.

    Data-Driven Modeling for Multiphysics Parametrized Problems: Application to Induction Hardening Process

    Data-driven modeling provides an efficient approach to compute approximate solutions for complex multiphysics parametrized problems such as the induction hardening (IH) process. Physical quantities of interest (QoI) related to the IH process are evaluated under real-time constraints, without any explicit knowledge of the physical behavior of the system. Hence, computationally expensive finite element models are replaced by a parametric solution, called a metamodel. Two data-driven models for the temporal evolution of temperature and austenite phase transformation during induction heating were first developed using a proper orthogonal decomposition based reduced-order model, followed by a nonlinear regression method for the temperature field and a classification combined with regression for the austenite evolution. Then, data-driven and hybrid models were created to predict hardness after quenching. The results of these artificial intelligence models are promising and provide good approximations in the low-data limit.
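    The POD-plus-regression workflow described here can be sketched end to end on synthetic data: compress precomputed snapshots with POD, regress the reduced coefficient on the process parameter, then predict the full field for an unseen parameter. The 1D stand-in fields and the linear fit are assumptions for illustration, not the paper's models.

```python
import numpy as np

# Offline stage: snapshots for several parameter values, compressed by POD.
x = np.linspace(0.0, 1.0, 80)
params = np.linspace(0.5, 2.0, 25)                    # e.g. heating power
snaps = np.array([p * np.sin(np.pi * x) for p in params]).T  # (space, runs)

U, s, Vt = np.linalg.svd(snaps, full_matrices=False)
coeffs = U[:, 0] @ snaps                # first reduced coordinate per run

# Regress the POD coefficient on the parameter (linear fit suffices here).
A = np.vstack([params, np.ones_like(params)]).T
w, *_ = np.linalg.lstsq(A, coeffs, rcond=None)

# Online stage: near-instant prediction for an unseen parameter value.
p_new = 1.3
field_pred = U[:, 0] * (np.array([p_new, 1.0]) @ w)
field_true = p_new * np.sin(np.pi * x)
rel_err = np.linalg.norm(field_pred - field_true) / np.linalg.norm(field_true)
```

    In the paper the regression is nonlinear, and a classifier is added for the austenite phase, but the offline/online split is the same: all expensive solves happen before deployment.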

    Learning the Parametric Transfer Function of Unitary Operations for Real-Time Evaluation of Manufacturing Processes Involving Operations Sequencing

    To better design manufacturing processes, surrogate models have been widely used, where the effect of different material and process parameters is captured through a parametric solution. The latter contains the solution of the model describing the system under study for any choice of the selected parameters. These surrogate models, also known as metamodels, virtual charts, or computational vademecums in the context of model order reduction, have been successfully employed in a variety of industrial applications. However, they face a major difficulty when the number of parameters grows. In particular, processes involving trajectories or sequencing entail a combinatorial explosion (curse of dimensionality), due not only to the number of possible combinations but also to the number of parameters needed to describe the process. The present paper proposes a promising route for circumventing, or at least alleviating, that difficulty. The proposed technique consists of a parametric transfer function that, once learned, allows inferring from a given state the new state after the application of a unitary operation, defined as a step in the sequenced process. Thus, any sequencing can be evaluated almost in real time by chaining that unitary transfer function, whose output becomes the input of the next operation. The benefits and potential of the technique are illustrated on a problem of industrial relevance: the deformation induced in a structural part when printing a series of stiffeners on it.
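    The chaining idea can be sketched with a toy linear one-step map learned by least squares: fit s_{k+1} = f(s_k, p_k) from single-step data, then evaluate an arbitrary operation sequence by composing f. The dynamics, data sizes, and linearity below are illustrative assumptions, not the paper's regression model.

```python
import numpy as np

# Assumed ground-truth single-step dynamics (stands in for one unitary
# operation, e.g. printing one stiffener with parameter p).
rng = np.random.default_rng(4)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.5], [1.0]])

# Training data: random (state, parameter) -> next-state pairs.
S = rng.normal(size=(500, 2))
P = rng.normal(size=(500, 1))
S_next = S @ A.T + P @ B.T

X = np.hstack([S, P])                        # features: state and parameter
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def step(s, p):
    """Learned one-step transfer function."""
    return np.concatenate([s, p]) @ W

# Chain the unitary map over a sequence of operations (parameters):
# the output of each step becomes the input of the next.
s = np.zeros(2)
s_true = np.zeros(2)
for p in [0.3, -0.5, 1.0, 0.2]:
    s = step(s, np.array([p]))
    s_true = A @ s_true + B.flatten() * p

err = np.linalg.norm(s - s_true)
```

    The key point is that only the single-step map is learned: any sequence length or ordering is then evaluated by composition, avoiding a parametric model over the whole combinatorial space of sequences.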

    Parametric Electromagnetic Analysis of Radar-Based Advanced Driver Assistant Systems

    Efficient and optimal design of radar-based Advanced Driver Assistant Systems (ADAS) requires evaluating many different electromagnetic solutions to assess the impact of the radome on electromagnetic wave propagation. Because of the very high frequency at which these devices operate, and the correspondingly small wavelength, very fine meshes are needed to accurately discretize the electromagnetic equations. Thus, the computational cost of each numerical solution for a given choice of the design or operation parameters is high (CPU-time consuming and requiring significant computational resources), compromising the efficiency of standard optimization algorithms. To alleviate these difficulties, the present paper proposes an approach based on reduced order modeling: a parametric solution is constructed by employing a non-intrusive formulation of the Proper Generalized Decomposition, combined with a phase-angle unwrapping strategy for accurately interpolating the electric and magnetic fields. This contributes to improving the design, calibration, and operational use of such systems.
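    Why phase unwrapping matters when interpolating oscillatory fields can be shown in a few lines: averaging complex fields directly causes amplitude cancellation, while blending the magnitude and the unwrapped phase separately preserves the wave structure. The plane-wave fields and wavenumbers below are illustrative assumptions, not the paper's radome solutions.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 500)
k1, k2 = 40.0, 50.0                      # two wavenumbers (assumed values)
E1 = np.exp(1j * k1 * x)                 # field for parameter value 1
E2 = np.exp(1j * k2 * x)                 # field for parameter value 2

# Naive interpolation: destructive interference shrinks the amplitude
# wherever the two fields are out of phase.
E_naive = 0.5 * (E1 + E2)

# Phase-based interpolation: unwrap the phases, then blend magnitude
# and phase separately before recombining.
phi = 0.5 * (np.unwrap(np.angle(E1)) + np.unwrap(np.angle(E2)))
mag = 0.5 * (np.abs(E1) + np.abs(E2))
E_phase = mag * np.exp(1j * phi)

amp_naive = np.abs(E_naive).min()        # collapses where fields oppose
amp_phase = np.abs(E_phase).min()        # stays at the physical amplitude
```

    The unwrapping step is essential: interpolating the wrapped angle directly would fail at every 2*pi jump, which is precisely the difficulty at the small wavelengths involved here.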

    Learning data-driven reduced elastic and inelastic models of spot-welded patches

    Solving mechanical problems in large structures with rich localized behaviors remains a challenging issue despite enormous advances in numerical procedures and computational performance. In particular, these localized behaviors require extremely fine descriptions, which on one hand increases the number of degrees of freedom and on the other decreases the time step of usual explicit time integrations, whose stability scales with the size of the smallest element in the mesh. In the present work we propose a data-driven technique for learning the rich behavior of a local patch and integrating it into a standard coarser description at the structure level. Thus, localized behaviors impact the global structural response without requiring an explicit description of those fine-scale behaviors.

    Advanced model order reduction and data-driven approaches for the construction of physics-augmented digital twins

    In the 20th century, engineering made remarkable strides in various fields, while other disciplines turned to data for diagnostic and prognostic purposes. Recognizing the potential of data and AI, the engineering sciences have embraced these technologies to make better predictions, enhance performance, and gain a deeper understanding of complex systems. This has given rise to the digital twin paradigm.
    The challenge lies in developing accurate models that can predict outputs from input data. Choosing between physics-based and data-driven approaches in engineering can be difficult. Physics-based approaches offer advantages but are computationally intensive and struggle with large-scale systems and uncertainty; they often require Model Order Reduction (MOR) techniques to meet real-time operation constraints. Meanwhile, data-driven approaches are promising when accurate models are unavailable, yet they face issues related to data cost, extrapolation risks, and the lack of explanations and certification.
    Therefore, combining both approaches seems to be the optimal choice, as it strikes a balance between their respective advantages and disadvantages. By integrating physics-based and data-driven perspectives, we can leverage the strengths of each approach to achieve better engineering outcomes.
    One significant advantage of this alliance is the reduction in the data required for model building. This reduction is achieved by leveraging known laws, or by focusing the approximation on the gap between model and reality. Moreover, using physics-based models allows fundamental aspects and the resulting predictions to be explained, enabling certification of results and mitigating extrapolation issues. Furthermore, the reduction in data requirements is not solely attributable to the use of physics, but also to how physics can guide the selection of optimal locations and times for data collection. This is particularly evident in active learning, where incorporating existing physics-based knowledge can enhance its effectiveness. By combining physics-based understanding with data-driven techniques, engineers can harness the power of both approaches, leading to more efficient and insightful engineering practices.
    Through its chapters, this thesis contributes to the development, improvement, and application of methodologies that enable the construction of hybrid twins. These methodologies bridge the gap between data science and numerical simulation, addressing current industrial challenges: they offer high-fidelity and high-dimensional parametric models, accelerate physics-based models through MOR and data-driven techniques, tackle model construction under partial observability, and create hybrid models that combine physics-based principles with data. Through these contributions, this thesis aims to advance the field of engineering by leveraging the power of data and simulation to address complex real-world problems.