
    Investigation of Process-Structure Relationship for Additive Manufacturing with Multiphysics Simulation and Physics-Constrained Machine Learning

    Metal additive manufacturing (AM) is a group of processes by which metal parts are built layer by layer from powder or wire feedstock using high-energy laser or electron beams. The best-known metal AM processes include selective laser melting, electron beam melting, and directed energy deposition. Metal AM can significantly improve the manufacturability of products with complex geometries and heterogeneous materials, and it has the potential to be widely applied in industries including automotive, aerospace, biomedical, energy, and other high-value, low-volume manufacturing environments. However, the lack of complete and reliable process-structure-property (P-S-P) relationships for metal AM remains the bottleneck to producing defect-free, structurally sound, and reliable AM parts. There are several technical challenges in establishing P-S-P relationships for process design and optimization. First, there is a lack of fundamental understanding of the rapid solidification process during which microstructures form and the properties of solid parts are determined. Second, the curse of dimensionality in the process and structure design space leads to a lack of data for constructing reliable P-S-P relationships. Given the limitations of experimental techniques for in-situ measurement, simulation becomes an important tool for understanding rapid solidification.
    In this research, a mesoscale multiphysics simulation model, the phase-field and thermal lattice Boltzmann method (PF-TLBM), is developed with simultaneous consideration of heterogeneous nucleation, solute transport, heat transfer, and phase transition. The simulation can reveal the complex dynamics of rapid solidification in the melt pool, such as the effects of latent heat and cooling rate on dendritic morphology and solute distribution. The microstructure evolution in the complex heating and cooling environment of the layer-by-layer AM process is simulated with the PF-TLBM model.
    To meet the lack-of-data challenge in constructing P-S-P relationships, a new multi-fidelity physics-constrained neural network (MF-PCNN) scheme is developed to improve training efficiency by reducing the required amount of training data and incorporating physical knowledge as constraints. Neural networks at two levels of fidelity are combined to improve prediction accuracy: low-fidelity networks predict the general trend, whereas high-fidelity networks model local details and fluctuations. The developed MF-PCNN is applied to predict phase transition and dendritic growth.
    A new physics-constrained neural network with a minimax architecture (PCNN-MM) is also developed, in which training is formulated as a minimax problem. A novel training algorithm, the Dual-Dimer method, is developed to search for high-order saddle points. The PCNN-MM is further extended to multiphysics problems, and a new sequential training scheme is developed to ensure convergence when solving them. A new Dual-Dimer with compressive sampling (DD-CS) algorithm is also developed to alleviate the curse of dimensionality in searching for high-order saddle points during training.
    A surrogate model of the process-structure relationship for AM is constructed based on the PF-TLBM and PCNN-MM. Using this surrogate, multi-objective Bayesian optimization searches for the initial temperature and cooling rate that yield the desired dendritic area and microsegregation level. The developed PF-TLBM and PCNN-MM provide a systematic and efficient approach to constructing P-S-P relationships for AM process design. (Ph.D. thesis)
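    As a rough illustration of the two-fidelity idea described in this abstract, the sketch below fits a low-fidelity network to abundant coarse data and a high-fidelity network to a few accurate points, feeding it the low-fidelity prediction so it only has to learn local details. The toy data sources, network sizes, and training settings are illustrative assumptions, and the physics-constraint and PF-TLBM components of the thesis are omitted.

```python
import torch

torch.manual_seed(0)

def lofi(x):
    # cheap, trend-only data source (assumed toy function)
    return torch.sin(2 * torch.pi * x)

def hifi(x):
    # expensive, detailed data source (assumed toy function)
    return torch.sin(2 * torch.pi * x) + 0.2 * torch.sin(16 * torch.pi * x)

net_lf = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
net_hf = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

# Stage 1: fit the low-fidelity network to many coarse samples (general trend).
x_lf = torch.rand(200, 1)
opt = torch.optim.Adam(net_lf.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    (net_lf(x_lf) - lofi(x_lf)).pow(2).mean().backward()
    opt.step()

# Stage 2: fit the high-fidelity network to a few accurate samples, taking the
# frozen low-fidelity prediction as an extra input so it only learns details.
x_hf = torch.rand(20, 1)
opt = torch.optim.Adam(net_hf.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    feats = torch.cat([x_hf, net_lf(x_hf).detach()], dim=1)
    (net_hf(feats) - hifi(x_hf)).pow(2).mean().backward()
    opt.step()

def predict(x):
    # combined two-fidelity prediction
    return net_hf(torch.cat([x, net_lf(x).detach()], dim=1))
```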

    A deep learning framework for multi-scale models based on physics-informed neural networks

    Physics-informed neural networks (PINNs) combine deep neural networks with the solution of partial differential equations (PDEs), creating a new and promising research area for numerically solving PDEs. For a class of multi-scale problems whose loss functions contain terms of widely different orders of magnitude, it is challenging for standard PINN methods to obtain a usable prediction. In this paper, we propose a new framework for solving multi-scale problems by reconstructing the loss function. The framework is based on the standard PINN method and modifies its loss function by applying different numbers of power operations to loss terms of different magnitudes, so that the individual loss terms end up with approximately the same order of magnitude. In addition, we present a grouping regularization strategy that handles problems whose solutions vary significantly across subdomains. The proposed method enables loss terms with different magnitudes to be optimized simultaneously, and it advances the application of PINNs to multi-scale problems.
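    As a rough sketch of the magnitude-balancing idea described in this abstract (not the paper's exact recipe), the snippet below applies repeated power operations, here square roots, to the larger loss terms until all terms fall within roughly one order of magnitude of the smallest; the choice of square root, the tolerance, and the placeholder loss values are assumptions.

```python
import torch

def balance_by_powers(loss_terms, max_ops=5):
    """Apply repeated square roots (power operations) to the larger positive
    loss terms until every term lies within about one order of magnitude of
    the smallest term."""
    target = min(float(t) for t in loss_terms)  # magnitude to match (detached)
    balanced = []
    for t in loss_terms:
        k = 0
        while k < max_ops and float(t) > 10.0 * target:
            t = torch.sqrt(t)  # one power operation roughly halves log10(t) for t > 1
            k += 1
        balanced.append(t)
    return sum(balanced)

# usage with placeholder values standing in for, e.g., a PDE residual term of
# order 1e6 and a boundary term of order 1e0 in a multi-scale PINN loss
pde_loss = torch.tensor(2.3e6, requires_grad=True)
bc_loss = torch.tensor(4.1e0, requires_grad=True)
total = balance_by_powers([pde_loss, bc_loss])
total.backward()
```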

    Investigating and Mitigating Failure Modes in Physics-informed Neural Networks (PINNs)

    This paper explores the difficulties in solving partial differential equations (PDEs) using physics-informed neural networks (PINNs). PINNs use physics as a regularization term in the objective function. However, a drawback of this approach is the requirement for manual hyperparameter tuning, making it impractical in the absence of validation data or prior knowledge of the solution. Our investigations of the loss landscapes and backpropagated gradients in the presence of physics reveal that existing methods produce non-convex loss landscapes that are hard to navigate. Our findings demonstrate that high-order PDEs contaminate backpropagated gradients and hinder convergence. To address these challenges, we introduce a novel method that bypasses the calculation of high-order derivative operators and mitigates the contamination of backpropagated gradients. Consequently, we reduce the dimension of the search space and make learning PDEs with non-smooth solutions feasible. Our method also provides a mechanism to focus on complex regions of the domain. In addition, we present a dual unconstrained formulation based on the Lagrange multiplier method to enforce equality constraints on the model's prediction, with adaptive and independent learning rates inspired by adaptive subgradient methods. We apply our approach to solve various linear and non-linear PDEs.
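    A minimal sketch of how equality constraints on a model's prediction can be enforced through a dual formulation with Lagrange multipliers updated by an adaptive optimizer, in the spirit of the formulation mentioned in this abstract. The Poisson-type toy problem, the use of Adam for the dual ascent, and all hyperparameters are assumptions; the paper's scheme for mitigating gradient contamination from high-order derivatives is not reproduced here.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

x_bc = torch.tensor([[0.0], [1.0]])            # constraint points
u_bc = torch.zeros(2, 1)                       # required values u(0) = u(1) = 0
lmbda = torch.zeros(2, 1, requires_grad=True)  # one Lagrange multiplier per constraint

opt_primal = torch.optim.Adam(net.parameters(), lr=1e-3)
opt_dual = torch.optim.Adam([lmbda], lr=1e-2, maximize=True)  # adaptive dual ascent

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = u_xx + (torch.pi ** 2) * torch.sin(torch.pi * x)  # toy PDE: -u'' = pi^2 sin(pi x)
    constraint = net(x_bc) - u_bc                                # equality constraints, target 0
    lagrangian = residual.pow(2).mean() + (lmbda * constraint).sum()
    opt_primal.zero_grad()
    opt_dual.zero_grad()
    lagrangian.backward()
    opt_primal.step()  # descend in the network parameters
    opt_dual.step()    # ascend in the multipliers, driving the constraints to zero
```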

    Ultrafast Electronic Coupling Estimators: Neural Networks versus Physics-Based Approaches

    Fast and accurate estimation of electronic coupling matrix elements between molecules is essential for the simulation of charge transfer phenomena in chemistry, materials science, and biology. Here we investigate neural-network-based coupling estimators combined with different protocols for sampling reference data (random, farthest point, and query by committee) and compare their performance to the physics-based analytic overlap method (AOM) introduced previously. We find that neural network approaches can give smaller errors than AOM, in particular smaller maximum errors, although they require an order of magnitude more reference data than AOM, typically one hundred to several hundred training points, down from the several thousand required in previous ML works. A Δ-ML approach taking AOM as a baseline is found to give the best overall performance at a relatively small computational overhead of about a factor of 2. Highly flexible π-conjugated organic molecules such as non-fullerene acceptors are found to be a particularly challenging case for ML because of the varying (de)localization of the frontier orbitals for the different intramolecular geometries sampled along molecular dynamics trajectories. Here the local symmetry functions used in ML are insufficient, and long-range descriptors are expected to give improved performance.
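    A hedged sketch of the Δ-ML idea reported here: a regressor is trained on the difference between reference couplings and the cheap AOM estimate, and predictions add the learned correction back onto the AOM baseline. The descriptors, placeholder data, and kernel ridge regressor are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Placeholder training set: per-dimer descriptors, cheap AOM couplings, and
# reference couplings (e.g., from higher-level electronic-structure theory).
X_train = rng.random((200, 30))
aom_train = rng.random(200)
ref_train = aom_train + 0.01 * rng.standard_normal(200)

# Learn only the correction (delta) between reference and AOM baseline.
delta_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)
delta_model.fit(X_train, ref_train - aom_train)

def predict_coupling(X_new, aom_new):
    """Delta-ML prediction: AOM baseline plus the learned correction."""
    return aom_new + delta_model.predict(X_new)

# usage on new placeholder dimers
X_new = rng.random((10, 30))
aom_new = rng.random(10)
couplings = predict_coupling(X_new, aom_new)
```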

    Deep learning for full-field ultrasonic characterization

    This study takes advantage of recent advances in machine learning to establish a physics-based data analytic platform for distributed reconstruction of mechanical properties in layered components from full waveform data. In this vein, two paradigms, namely direct inversion and physics-informed neural networks (PINNs), are explored. Direct inversion entails three steps: (i) spectral denoising and differentiation of the full-field data, (ii) building appropriate neural maps to approximate the profiles of unknown physical and regularization parameters on their respective domains, and (iii) simultaneous training of the neural networks by minimizing the Tikhonov-regularized PDE loss using the data from (i). PINNs furnish efficient surrogate models of complex systems with predictive capabilities via multitask learning, where the field variables are modeled by neural maps endowed with (scalar or distributed) auxiliary parameters such as physical unknowns and loss function weights. The PINNs are then trained by minimizing a measure of data misfit subject to the underlying physical laws as constraints. In this study, to facilitate learning from ultrasonic data, the PINN loss adopts (a) wavenumber-dependent Sobolev norms to compute the data misfit, and (b) non-adaptive weights in a specific scaling framework to naturally balance the loss objectives by leveraging the form of the PDEs germane to elastic-wave propagation. Both paradigms are examined using synthetic and laboratory test data. In the latter case, the reconstructions are performed at multiple frequencies and the results are verified by a set of complementary experiments, highlighting the importance of verification and validation in data-driven modeling.
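    The snippet below sketches one ingredient mentioned in this abstract, a wavenumber-dependent Sobolev-type data misfit: the residual between measured and predicted fields is transformed to the wavenumber domain and weighted by (1 + k^2)^s before summation. The 1-D setting, the order s, and the specific weighting are illustrative assumptions; the study's exact norm and scaling scheme may differ.

```python
import torch

def sobolev_misfit(u_pred, u_meas, dx, s=1.0):
    """Discrete H^s-type misfit between two 1-D fields on a uniform grid:
    the residual is weighted by (1 + k^2)^s in the wavenumber domain."""
    r = u_pred - u_meas
    r_hat = torch.fft.rfft(r)
    k = 2 * torch.pi * torch.fft.rfftfreq(r.shape[-1], d=dx)  # wavenumbers
    weight = (1.0 + k ** 2) ** s
    return (weight * r_hat.abs() ** 2).sum() / r.shape[-1]

# usage with placeholder 1-D fields
x = torch.linspace(0.0, 1.0, 256)
u_meas = torch.sin(4 * torch.pi * x)
u_pred = 0.9 * torch.sin(4 * torch.pi * x)          # e.g., a network's prediction
loss_data = sobolev_misfit(u_pred, u_meas, dx=float(x[1] - x[0]), s=1.0)
```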

    Loss-attentional physics-informed neural networks

    Physics-informed neural networks (PINNs) have emerged in recent years as a significant effort to apply artificial intelligence to the solution of various partial differential equations (PDEs). Nevertheless, the vanilla PINN architecture struggles to approximate solutions accurately in hard-to-fit regions, for instance at "stiff" points characterized by rapid changes in timescale. To this end, we introduce a novel PINN-based model architecture, loss-attentional physics-informed neural networks (LA-PINN), which equips each loss component with an independent loss-attentional network (LAN). Each LAN takes the squared errors (SEs) at the training points as input and builds an attentional function that assigns different weights to the individual point SEs. A point-error-based weighting approach that uses adversarial training between the multiple networks in the LA-PINN model is proposed to dynamically update the SE weights during every training epoch. Additionally, the weighting mechanism of LA-PINN is analysed and validated through several numerical experiments. The experimental results indicate that the proposed method achieves better predictive performance than the vanilla PINN and converges quickly. Moreover, it accelerates convergence at hard-to-fit points by progressively increasing the growth rates of both the weight and the update gradient for the point error.
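    A minimal sketch of the loss-attentional weighting described in this abstract: a small auxiliary network maps each training point's squared error to a positive weight and is updated adversarially, ascending the weighted loss while the main PINN descends it. The toy ODE, network sizes, softplus output, and plain simultaneous ascent/descent are assumptions rather than the LA-PINN paper's exact design.

```python
import torch

torch.manual_seed(0)
pinn = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
lan = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1), torch.nn.Softplus())  # maps SE -> weight

opt_pinn = torch.optim.Adam(pinn.parameters(), lr=1e-3)
opt_lan = torch.optim.Adam(lan.parameters(), lr=1e-3, maximize=True)  # adversarial ascent

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)
    u = pinn(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    se = (u_x - torch.cos(x)).pow(2)          # point-wise squared error, toy ODE u' = cos(x)
    weights = lan(se.detach())                # attention weights built from point errors
    loss = (weights * se).mean() + pinn(torch.zeros(1, 1)).pow(2).mean()  # + condition u(0) = 0
    opt_pinn.zero_grad()
    opt_lan.zero_grad()
    loss.backward()
    opt_pinn.step()  # the PINN minimizes the weighted loss
    opt_lan.step()   # the LAN maximizes it, emphasizing hard-to-fit points
```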

    Annual Research Report 2020


    Uncertainty Quantification in Machine Learning for Engineering Design and Health Prognostics: A Tutorial

    On top of machine learning models, uncertainty quantification (UQ) functions as an essential layer of safety assurance that can lead to more principled decision making by enabling sound risk assessment and management. The safety and reliability improvements that UQ brings to ML models have the potential to significantly facilitate the broad adoption of ML solutions in high-stakes decision settings such as healthcare, manufacturing, and aviation, to name a few. In this tutorial, we aim to provide a holistic view of emerging UQ methods for ML models, with a particular focus on neural networks and on applying these UQ methods to engineering design as well as prognostics and health management problems. Toward this goal, we start with a comprehensive classification of uncertainty types, sources, and causes pertaining to UQ of ML models. Next, we provide a tutorial-style description of several state-of-the-art UQ methods: Gaussian process regression, Bayesian neural networks, neural network ensembles, and deterministic UQ methods, with a focus on the spectral-normalized neural Gaussian process. Building on the mathematical formulations, we then examine these UQ methods quantitatively and qualitatively (through a toy regression example) to assess their strengths and shortcomings along different dimensions. We then review quantitative metrics commonly used to assess the quality of predictive uncertainty in classification and regression problems. Afterward, we discuss the increasingly important role of UQ of ML models in solving challenging problems in engineering design and health prognostics. Two case studies with source code available on GitHub are used to demonstrate these UQ methods and compare their performance on early-stage life prediction of lithium-ion batteries and remaining-useful-life prediction of turbofan engines.
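    As a small illustration of one of the UQ methods surveyed in this tutorial, the sketch below builds a neural network ensemble for regression: several independently initialized networks are trained on the same data, and the spread of their predictions serves as a proxy for predictive uncertainty. The architecture, placeholder data, and ensemble size are assumptions; the tutorial's case studies use their own setups and metrics.

```python
import torch

torch.manual_seed(0)

def make_net():
    return torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

# placeholder 1-D regression data
x_train = torch.linspace(-1.0, 1.0, 100).unsqueeze(1)
y_train = x_train ** 3 + 0.05 * torch.randn_like(x_train)

# train several independently initialized networks on the same data
ensemble = [make_net() for _ in range(5)]
for net in ensemble:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        (net(x_train) - y_train).pow(2).mean().backward()
        opt.step()

# the ensemble spread acts as a proxy for predictive (mostly epistemic) uncertainty
x_test = torch.linspace(-1.5, 1.5, 50).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in ensemble])  # shape (5, 50, 1)
mean = preds.mean(dim=0)  # point prediction
std = preds.std(dim=0)    # uncertainty estimate, typically larger off-distribution
```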