
    Solving Partial Differential Equations Using Artificial Neural Networks

    This thesis presents a method for solving partial differential equations (PDEs) using artificial neural networks. The method uses a constrained backpropagation (CPROP) approach for preserving prior knowledge during incremental training, for solving nonlinear elliptic and parabolic PDEs adaptively in non-stationary environments. Compared to previous methods that use penalty functions or Lagrange multipliers, CPROP reduces the dimensionality of the optimization problem by direct elimination, while satisfying the equality constraints associated with the boundary and initial conditions exactly at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and non-homogeneous terms. The computational complexity analysis shows that CPROP compares favorably to existing methods of solution and leads to considerable computational savings in non-stationary environments.

    The CPROP-based approach is extended to a constrained integration (CINT) method for solving initial boundary value partial differential equations (PDEs). The CINT method combines classical Galerkin methods with CPROP in order to constrain the ANN to approximately satisfy the boundary condition at each stage of integration. The advantage of the CINT method is that it is readily applicable to PDEs in irregular domains and requires no special modification for domains with complex geometries. Furthermore, the CINT method provides a semi-analytical solution that is infinitely differentiable. The CINT method is demonstrated on two hyperbolic and one parabolic initial boundary value problems (IBVPs). These IBVPs are widely used and have known analytical solutions. When compared with Matlab's finite element (FE) method, the CINT method is shown to achieve significant improvements in both computational time and accuracy.

    The CINT method is applied to a distributed optimal control (DOC) problem of computing optimal state and control trajectories for a multiscale dynamical system comprised of many interacting dynamical systems, or agents. A generalized reduced gradient (GRG) approach is presented in which the agent dynamics are described by a small system of stochastic differential equations (SDEs). A set of optimality conditions is derived using calculus of variations and used to compute the optimal macroscopic state and microscopic control laws. An indirect GRG approach is used to solve the optimality conditions numerically for large systems of agents. By assuming a parametric control law obtained from the superposition of linear basis functions, the agent control laws can be determined via set-point regulation, such that the macroscopic behavior of the agents is optimized over time, based on multiple, interactive navigation objectives.

    Lastly, the CINT method is used to identify optimal root profiles in water-limited ecosystems. Knowledge of root depths and distributions is vital in order to accurately model and predict hydrological ecosystem dynamics; therefore, there is interest in accurately predicting distributions for various vegetation types, soils, and climates. Numerical experiments were performed that identify root profiles that maximize transpiration over a 10-year period across a transect of the Kalahari. Storm types were varied to show the dependence of the optimal profile on storm frequency and intensity. It is shown that more deeply distributed roots are optimal in regions where storms are more intense and less frequent, while shallower roots are advantageous in regions where storms are less intense and more frequent.
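    The direct-elimination idea behind CPROP can be illustrated with a minimal sketch for a 1-D boundary value problem (an illustrative toy, not the thesis implementation: the network size, the finite-difference residual, and the choice of which two output weights are eliminated are all assumptions):

        # Illustrative sketch (not the thesis code): ANN trial solution of
        # u''(x) = f(x), u(0) = u0, u(1) = u1. The two boundary conditions are
        # enforced exactly at every iteration by solving for two "constrained"
        # output weights (direct elimination); only the remaining weights are
        # optimized, so no penalty terms or Lagrange multipliers are needed.
        import numpy as np
        from scipy.optimize import minimize

        f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # example source; exact u = sin(pi x)
        u0, u1 = 0.0, 0.0                             # boundary values
        H = 10                                        # hidden units
        x_col = np.linspace(0.0, 1.0, 41)             # collocation points

        def unpack(theta):
            return theta[:H], theta[H:2*H], theta[2*H:]   # w, b, free output weights

        def output_weights(w, b, v_free):
            # Solve a 2x2 linear system so that u(0) = u0 and u(1) = u1 hold exactly.
            phi0, phi1 = np.tanh(b), np.tanh(w + b)       # hidden activations at x = 0 and x = 1
            A = np.array([[phi0[0], phi0[1]], [phi1[0], phi1[1]]])
            rhs = np.array([u0 - phi0[2:] @ v_free, u1 - phi1[2:] @ v_free])
            return np.concatenate([np.linalg.solve(A, rhs), v_free])

        def u(x, w, b, v):
            return np.tanh(np.outer(x, w) + b) @ v        # single-hidden-layer network

        def residual_loss(theta):
            w, b, v_free = unpack(theta)
            v = output_weights(w, b, v_free)              # constraints satisfied here
            h = x_col[1] - x_col[0]
            uu = u(x_col, w, b, v)
            u_xx = (uu[2:] - 2.0*uu[1:-1] + uu[:-2]) / h**2   # finite-difference u''
            return np.mean((u_xx - f(x_col[1:-1]))**2)        # interior PDE residual

        theta0 = np.random.default_rng(0).normal(scale=0.5, size=3*H - 2)
        theta = minimize(residual_loss, theta0, method="BFGS").x
        w, b, v_free = unpack(theta)
        print(np.max(np.abs(u(x_col, w, b, output_weights(w, b, v_free))
                            - np.sin(np.pi * x_col))))        # error vs exact solution

    Because the constrained weights are recomputed from the free ones at every evaluation, the boundary values are exact regardless of how far training has progressed, which is the property the abstract contrasts with penalty-function and Lagrange-multiplier formulations.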

    Connectionist Learning Based Numerical Solution of Differential Equations

    It is well known that differential equations are the backbone of many physical systems. Many real-world problems in science and engineering may be modeled by ordinary or partial differential equations. When such problems cannot be solved by exact/analytical methods, they may be solved by approximate methods such as Euler, Runge-Kutta, predictor-corrector, finite difference, finite element, boundary element, and other numerical techniques. Although these methods provide good approximations to the solution, they require a discretization of the domain via meshing, which may be challenging in two- or higher-dimensional problems. These procedures provide solutions at pre-defined points, and the computational complexity increases with the number of sampling points.

    In recent decades, various machine intelligence methods, in particular connectionist learning or Artificial Neural Network (ANN) models, have been used to solve a variety of real-world problems because of their excellent learning capacity. Recently, much attention has been given to using ANNs for solving differential equations. The approximate solution of differential equations by ANN is found to be advantageous, but it depends upon the ANN model that one considers. Here our target is to solve ordinary as well as partial differential equations using ANNs. The ANN approach has various inherent benefits over other numerical methods: (i) the approximate solution is differentiable in the given domain; (ii) the computational complexity does not increase considerably with the number of sampling points or the dimension of the problem; (iii) it can be applied to linear as well as nonlinear Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). Moreover, traditional numerical methods are usually iterative in nature, with a step size fixed before the start of the computation; if, after the solution is obtained, we want to know the solution between steps, the procedure has to be repeated from the initial stage. ANNs may be one way to overcome this repetition of iterations, and after training the model may be used as a black box to obtain numerical results at any arbitrary point in the domain.

    A few authors have solved ordinary and partial differential equations by combining a feed-forward neural network with an optimization technique. As stated above, the objective of this thesis is to solve various types of ODEs and PDEs using efficient neural networks. Algorithms are developed where no desired (target) values are known and the output of the model is generated by training only. The architectures of existing neural models are usually problem dependent, and the number of nodes is typically chosen by trial and error. The training also depends upon the weights of the connecting nodes, which are generally taken as random numbers that dictate the training. In this investigation, a new method, the Regression Based Neural Network (RBNN), has first been developed to handle differential equations. In the RBNN model, the number of nodes in the hidden layer may be fixed by using a regression method: the input and output data are first fitted with polynomials of various degrees using regression analysis, and the coefficients involved are taken as the initial weights to start the neural training.
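    A rough illustration of this regression-based initialization is sketched below (an assumption-laden toy: the abstract does not spell out the exact mapping from regression coefficients to network weights, so the choice of one hidden node per polynomial term and coefficient-valued initial output weights is hypothetical):

        # Hypothetical sketch of regression-based initialization: fit sample
        # input/output data with a polynomial, size the hidden layer from the
        # degree, and seed the output weights with the fitted coefficients.
        import numpy as np

        x = np.linspace(0.0, 1.0, 20)
        y = np.exp(-x)                           # sample input/output data
        degree = 3
        coeffs = np.polyfit(x, y, degree)        # regression coefficients (highest power first)

        n_hidden = degree + 1                    # one hidden node per polynomial term
        rng = np.random.default_rng(0)
        w_hidden = rng.normal(size=n_hidden)     # hidden-layer weights (still random)
        b_hidden = rng.normal(size=n_hidden)     # hidden-layer biases
        v_output = coeffs.copy()                 # initial output weights from the regression fit
        # ...neural training would then start from (w_hidden, b_hidden, v_output).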
    The number of hidden nodes is thus fixed by the degree of the polynomial. We have used the RBNN model to solve different ODEs with initial/boundary conditions. A feed-forward neural model and an unsupervised error back-propagation algorithm have been used for minimizing the error function and modifying the parameters (weights and biases) without the use of any optimization technique. Next, a single-layer Functional Link Artificial Neural Network (FLANN) architecture has been developed for solving differential equations for the first time. In FLANN, the hidden layer is replaced by a functional expansion block that enhances the input patterns using orthogonal polynomials such as Chebyshev, Legendre, and Hermite polynomials. The computation becomes efficient because the procedure does not need a hidden layer, so the number of network parameters is smaller than in a traditional ANN model.

    A variety of differential equations are solved using the above methods to show their reliability, power, and ease of computer implementation. In particular, singular nonlinear initial value problems such as Lane-Emden and Emden-Fowler type equations have been solved using a Chebyshev Neural Network (ChNN) model. A single-layer Legendre Neural Network (LeNN) model has also been developed to handle the Lane-Emden equation, boundary value problems (BVPs), and systems of coupled ordinary differential equations. The unforced Duffing oscillator and unforced Van der Pol-Duffing oscillator equations are solved by developing a Simple Orthogonal Polynomial based Neural Network (SOPNN) model. Further, a Hermite Neural Network (HeNN) model is proposed to handle the Van der Pol-Duffing oscillator equation. Finally, a single-layer Chebyshev Neural Network (ChNN) model has also been implemented to solve partial differential equations.
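    The single-layer functional-expansion idea can be sketched as follows (an illustrative toy, not the thesis code: a Chebyshev expansion trained by plain gradient descent on the residual of u' = -u, u(0) = 1, where the trial form, learning rate, and basis size are assumptions):

        # Illustrative FLANN/ChNN-style sketch: a single-layer Chebyshev expansion
        # solves u'(x) = -u(x), u(0) = 1 on [0, 1]. The trial solution
        # u_t(x) = 1 + x * N(x) satisfies the initial condition by construction;
        # N(x) is the expansion output and is linear in its weights c.
        import numpy as np
        from numpy.polynomial import chebyshev as C

        K = 6                                        # Chebyshev terms T_0 .. T_{K-1}
        x = np.linspace(0.0, 1.0, 50)                # collocation points
        I = np.eye(K)
        T  = np.stack([C.chebval(x, I[k]) for k in range(K)])             # T_k(x)
        dT = np.stack([C.chebval(x, C.chebder(I[k])) for k in range(K)])  # T_k'(x)

        c, lr = np.zeros(K), 1e-3
        for _ in range(50_000):
            N_x, dN_x = c @ T, c @ dT                # N(x) and N'(x)
            u  = 1.0 + x * N_x                       # trial solution
            du = N_x + x * dN_x                      # its derivative
            r  = du + u                              # residual of u' + u = 0
            grad = 2.0 * ((T + x * dT + x * T) * r).mean(axis=1)   # d(mean r^2)/dc
            c -= lr * grad                           # plain gradient-descent update

        print(np.max(np.abs((1.0 + x * (c @ T)) - np.exp(-x))))    # error vs exact exp(-x)

    Because the functional expansion replaces the hidden layer, the only trainable parameters are the K expansion weights, which is the reduction in network parameters the abstract refers to.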

    Investigation of Process-Structure Relationship for Additive Manufacturing with Multiphysics Simulation and Physics-Constrained Machine Learning

    Metal additive manufacturing (AM) is a group of processes by which metal parts are built layer by layer from powder or wire feedstock with high-energy laser or electron beams. The most well-known metal AM processes include selective laser melting, electron beam melting, and direct energy deposition. Metal AM can significantly improve the manufacturability of products with complex geometries and heterogeneous materials. It has the potential to be widely applied in various industries including automotive, aerospace, biomedical, energy, and other high-value, low-volume manufacturing environments. However, the lack of complete and reliable process-structure-property (P-S-P) relationships for metal AM is still the bottleneck to producing defect-free, structurally sound, and reliable AM parts. There are several technical challenges in establishing the P-S-P relationships for process design and optimization. First, there is a lack of fundamental understanding of the rapid solidification process, during which microstructures are formed and the properties of solid parts are determined. Second, the curse of dimensionality in the process and structure design space leads to a lack of data with which to construct reliable P-S-P relationships.

    Simulation becomes an important tool for understanding rapid solidification, given the limitations of experimental techniques for in-situ measurement. In this research, a mesoscale multiphysics simulation model, called the phase-field and thermal lattice Boltzmann method (PF-TLBM), is developed with simultaneous consideration of heterogeneous nucleation, solute transport, heat transfer, and phase transition. The simulation can reveal the complex dynamics of rapid solidification in the melt pool, such as the effects of latent heat and cooling rate on dendritic morphology and solute distribution. The microstructure evolution in the complex heating and cooling environment of the layer-by-layer AM process is simulated with the PF-TLBM model.

    To meet the lack-of-data challenge in constructing P-S-P relationships, a new scheme of multi-fidelity physics-constrained neural network (MF-PCNN) is developed to improve the efficiency of neural network training by reducing the required amount of training data and incorporating physical knowledge as constraints. Neural networks with two levels of fidelity are combined to improve prediction accuracy: low-fidelity networks predict the general trend, whereas high-fidelity networks model local details and fluctuations. The developed MF-PCNN is applied to predict phase transition and dendritic growth. A new physics-constrained neural network with the minimax architecture (PCNN-MM) is also developed, where the training of the PCNN-MM is formulated as a minimax problem. A novel training algorithm called the Dual-Dimer method is developed to search for high-order saddle points. The developed PCNN-MM is also extended to solve multiphysics problems, and a new sequential training scheme is developed for PCNN-MMs to ensure convergence in solving such problems. A new Dual-Dimer with compressive sampling (DD-CS) algorithm is also developed to alleviate the curse of dimensionality in searching for high-order saddle points during training.

    A surrogate model of the process-structure relationship for AM is constructed based on the PF-TLBM and PCNN-MM. Based on the surrogate model, multi-objective Bayesian optimization is used to search for the optimal initial temperature and cooling rate that yield the desired dendritic area and microsegregation level. The developed PF-TLBM and PCNN-MM provide a systematic and efficient approach to constructing P-S-P relationships for AM process design.
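    The minimax formulation underlying PCNN-MM can be illustrated with a deliberately tiny example (an assumption-laden toy, not the thesis's PCNN-MM or Dual-Dimer algorithm): fit a line to data while a multiplier enforces the physics-style equality constraint u(0) = 0, with gradient descent on the model parameters and gradient ascent on the multiplier.

        # Toy minimax sketch (illustrative only):
        #   min over (a, b)  max over lam  of
        #   mean((a + b*x - y)^2) + lam * a + 0.5 * rho * a**2
        # i.e. data fitting subject to the equality constraint u(0) = a = 0,
        # with the multiplier lam updated by gradient ascent; the quadratic
        # (augmented-Lagrangian) term rho*a^2/2 is added for stable convergence.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 40)
        y = 2.0 * x + 0.3 + 0.05 * rng.normal(size=x.size)   # data with a spurious offset

        a, b, lam = 0.0, 0.0, 0.0
        eta, rho = 0.05, 1.0
        for _ in range(5000):
            err = a + b * x - y
            a -= eta * (2.0 * err.mean() + lam + rho * a)    # descent on model parameters
            b -= eta * (2.0 * (err * x).mean())
            lam += eta * a                                   # ascent on the multiplier
        print(a, b, lam)   # a is driven toward 0 by the constraint; b absorbs the fit

    In the PCNN-MM of the abstract, the same tension between data loss and physics constraints is resolved at a high-order saddle point of a much larger network loss, which is why the Dual-Dimer saddle-point search is developed.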