98 research outputs found

    Solving Partial Differential Equations Using Artificial Neural Networks

    Get PDF
    This thesis presents a method for solving partial differential equations (PDEs) using artificial neural networks. The method uses a constrained backpropagation (CPROP) approach for preserving prior knowledge during incremental training, in order to solve nonlinear elliptic and parabolic PDEs adaptively in non-stationary environments. Compared to previous methods that use penalty functions or Lagrange multipliers, CPROP reduces the dimensionality of the optimization problem by direct elimination, while satisfying the equality constraints associated with the boundary and initial conditions exactly at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and non-homogeneous terms. The computational complexity analysis shows that CPROP compares favorably to existing methods of solution, and that it leads to considerable computational savings in non-stationary environments.
    The CPROP-based approach is extended to a constrained integration (CINT) method for solving initial boundary value partial differential equations (PDEs). The CINT method combines classical Galerkin methods with CPROP in order to constrain the ANN to approximately satisfy the boundary conditions at each stage of integration. The advantage of the CINT method is that it is readily applicable to PDEs in irregular domains and requires no special modification for domains with complex geometries. Furthermore, the CINT method provides a semi-analytical solution that is infinitely differentiable. The CINT method is demonstrated on two hyperbolic and one parabolic initial boundary value problems (IBVPs). These IBVPs are widely used and have known analytical solutions. When compared with MATLAB's finite element (FE) method, the CINT method is shown to achieve significant improvements in both computational time and accuracy.
    The CINT method is applied to a distributed optimal control (DOC) problem of computing optimal state and control trajectories for a multiscale dynamical system comprised of many interacting dynamical systems, or agents. A generalized reduced gradient (GRG) approach is presented in which the agent dynamics are described by a small system of stochastic differential equations (SDEs). A set of optimality conditions is derived using calculus of variations and used to compute the optimal macroscopic state and microscopic control laws. An indirect GRG approach is used to solve the optimality conditions numerically for large systems of agents. By assuming a parametric control law obtained from the superposition of linear basis functions, the agent control laws can be determined via set-point regulation, such that the macroscopic behavior of the agents is optimized over time, based on multiple, interactive navigation objectives.
    Lastly, the CINT method is used to identify optimal root profiles in water-limited ecosystems. Knowledge of root depths and distributions is vital in order to accurately model and predict hydrological ecosystem dynamics. Therefore, there is interest in accurately predicting distributions for various vegetation types, soils, and climates. Numerical experiments were performed that identify root profiles maximizing transpiration over a 10-year period across a transect of the Kalahari. Storm types were varied to show the dependence of the optimal profile on storm frequency and intensity. It is shown that more deeply distributed roots are optimal in regions where storms are more intense and less frequent, while shallower roots are advantageous in regions where storms are less intense and more frequent.
    Dissertation
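    The structural idea the abstract emphasizes, satisfying boundary conditions exactly at every iteration rather than penalizing their violation, can be illustrated with a much simpler construction than CPROP's direct elimination: build the trial solution so the constraints hold by construction and train only on the PDE residual. The sketch below is a minimal illustration of that idea only, not the thesis's algorithm; the model problem, network size, and optimizer are illustrative assumptions.

```python
# Minimal sketch of exact constraint satisfaction by construction: the boundary
# conditions hold for every parameter value, so training targets only the PDE
# residual. This is NOT the thesis's CPROP direct-elimination algorithm;
# problem, network size, and optimizer are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

# Model problem: u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x), hence f(x) = -pi^2 sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)

H = 10                                    # hidden tanh units (assumption)
xc = np.linspace(0.0, 1.0, 50)            # collocation points

def network(x, theta):
    """Single-hidden-layer tanh network N(x) and its first two x-derivatives."""
    w, b, v = np.split(theta, 3)
    t = np.tanh(np.outer(x, w) + b)
    N  = t @ v
    N1 = ((1 - t**2) * w) @ v                 # dN/dx
    N2 = (-2 * t * (1 - t**2) * w**2) @ v     # d2N/dx2
    return N, N1, N2

def residual(theta):
    """PDE residual of the trial solution u(x) = x(1-x) * N(x; theta).
    The factor x(1-x) vanishes at x = 0 and x = 1, so the homogeneous
    Dirichlet conditions are satisfied exactly for every theta."""
    N, N1, N2 = network(xc, theta)
    B, B1, B2 = xc * (1 - xc), 1 - 2 * xc, -2.0
    u2 = B2 * N + 2 * B1 * N1 + B * N2        # u''(x) of the trial solution
    return u2 - f(xc)

rng = np.random.default_rng(0)
theta0 = rng.normal(scale=0.5, size=3 * H)
fit = least_squares(residual, theta0)         # unconstrained: BCs are built in

x = np.linspace(0.0, 1.0, 11)
u = x * (1 - x) * network(x, fit.x)[0]
print("max error vs. exact solution:", np.max(np.abs(u - np.sin(np.pi * x))))
```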

    Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients

    Get PDF
    We address a new numerical method, based on machine learning and in particular on the concept of so-called Extreme Learning Machines, to approximate the solution of linear elliptic partial differential equations with collocation. We show that a feedforward neural network with a single hidden layer, sigmoidal transfer functions, and fixed, random internal weights and biases can be used to compute a sufficiently accurate collocated solution for such problems. We discuss how one can set the range of values for both the weights between the input and hidden layer and the biases of the hidden layer in order to obtain a good underlying approximating subspace, and we explore the required number of collocation points. We demonstrate the efficiency of the proposed method with several one-dimensional diffusion–advection–reaction benchmark problems that exhibit steep behaviors, such as boundary layers. We point out that there is no need for iterative training of the network, as the proposed numerical approach reduces to a linear problem that can be easily solved using least-squares and regularization. Numerical results show that the proposed machine learning method achieves good numerical accuracy, outperforming central finite differences, while bypassing the time-consuming training phase of other machine learning approaches.
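    Because the internal weights and biases are frozen, the collocated PDE together with the boundary conditions is linear in the output weights, which is why no iterative training is needed. The sketch below illustrates this on a 1-D advection–diffusion problem with a boundary layer; the specific problem, weight ranges, network size, and regularization are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of ELM collocation as described above (problem, weight ranges,
# network size, and regularization are illustrative assumptions): hidden
# weights/biases are drawn once and frozen, so the collocated PDE and boundary
# conditions form a linear system in the output weights.
import numpy as np

# Model problem: -eps*u'' + u' = 1 on (0, 1), u(0) = u(1) = 0
# (advection-diffusion with a boundary layer of width ~eps at x = 1).
eps = 0.05

H = 200                                   # ELM neurons (assumption)
rng = np.random.default_rng(1)
w = rng.uniform(-40.0, 40.0, H)           # fixed internal weights (assumed range)
b = rng.uniform(-40.0, 40.0, H)           # fixed internal biases (assumed range)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))  # sigmoidal transfer function
x = np.linspace(0.0, 1.0, 400)            # collocation points
S  = sig(np.outer(x, w) + b)
S1 = S * (1 - S) * w                      # d/dx   sigma(w*x + b)
S2 = S * (1 - S) * (1 - 2 * S) * w**2     # d2/dx2 sigma(w*x + b)

# Linear system in the output weights v: one PDE row per collocation point,
# plus two rows imposing the Dirichlet boundary conditions directly.
A   = np.vstack([-eps * S2 + S1, sig(b), sig(w + b)])
rhs = np.concatenate([np.ones_like(x), [0.0, 0.0]])

# Least squares with small singular values truncated (a simple regularization).
v, *_ = np.linalg.lstsq(A, rhs, rcond=1e-12)

u = S @ v                                 # ELM solution at the collocation points
u_exact = x - (np.exp((x - 1) / eps) - np.exp(-1 / eps)) / (1 - np.exp(-1 / eps))
print("max error vs. exact solution:", np.max(np.abs(u - u_exact)))
```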

    Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients

    Full text link
    We introduce a new numerical method based on machine learning to approximate the solution of elliptic partial differential equations with collocation using a set of sigmoidal functions. We show that a feedforward neural network with a single hidden layer of sigmoidal functions and fixed, random internal weights and biases can be used to compute an accurate collocation solution. The choice to fix the internal weights and biases leads to the so-called Extreme Learning Machine network. We discuss how to determine the range for both the internal weights and the biases in order to obtain a good underlying approximating space, and we explore the required number of collocation points. We demonstrate the efficiency of the proposed method with several one-dimensional diffusion-advection-reaction problems that exhibit steep behaviors, such as boundary layers. The boundary conditions are imposed directly as collocation equations. We point out that there is no need to train the network, as the proposed numerical approach reduces to a linear problem that can be easily solved using least-squares. Numerical results show that the proposed method achieves good accuracy. Finally, we compare the proposed method with finite differences and point out the significant improvement in computational cost, since the time-consuming training phase is avoided.