8 research outputs found

    Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients

    We present a new numerical method based on machine learning, in particular on the concept of so-called Extreme Learning Machines, to approximate the solution of linear elliptic partial differential equations with collocation. We show that a feedforward neural network with a single hidden layer, sigmoidal transfer functions, and fixed, random, internal weights and biases can be used to compute a sufficiently accurate collocation solution for such problems. We discuss how to set the range of values for both the weights between the input and hidden layer and the biases of the hidden layer in order to obtain a good underlying approximating subspace, and we explore the required number of collocation points. We demonstrate the efficiency of the proposed method on several one-dimensional diffusion-advection-reaction benchmark problems that exhibit steep behavior, such as boundary layers. We point out that there is no need for iterative training of the network, as the proposed numerical approach reduces to a linear problem that can be easily solved using least squares with regularization. Numerical results show that the proposed machine learning method achieves good numerical accuracy, outperforming central finite differences, while bypassing the time-consuming training phase of other machine learning approaches.

    Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients

    We introduce a new numerical method based on machine learning to approximate the solution of elliptic partial differential equations with collocation using a set of sigmoidal functions. We show that a feedforward neural network with a single hidden layer of sigmoidal functions and fixed, random, internal weights and biases can be used to compute an accurate collocation solution. The choice to fix the internal weights and biases leads to the so-called Extreme Learning Machine network. We discuss how to determine the range for both the internal weights and biases in order to obtain a good underlying approximation space, and we explore the required number of collocation points. We demonstrate the efficiency of the proposed method on several one-dimensional diffusion-advection-reaction problems that exhibit steep behavior, such as boundary layers. The boundary conditions are imposed directly as collocation equations. We point out that there is no need to train the network, as the proposed numerical approach reduces to a linear problem that can be easily solved using least squares. Numerical results show that the proposed method achieves good accuracy. Finally, we compare the proposed method with finite differences and point out the significant improvement in computational cost, since the time-consuming training phase is avoided.
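
    The two abstracts above describe the same scheme: expand the unknown solution in a single hidden layer of sigmoidal functions with fixed random internal weights and biases, collocate the PDE and the boundary conditions, and solve the resulting linear system for the output weights by least squares. The following is a minimal sketch of that idea for a 1D diffusion-advection-reaction model problem; the specific equation, the weight/bias ranges, the collocation grid, and the rcond-based regularization are illustrative assumptions, not the authors' exact setup.

    # Minimal sketch of Extreme Learning Machine (ELM) collocation for
    # -eps*u'' + a*u' + r*u = f on (0, 1) with Dirichlet boundary conditions.
    # Problem data, weight/bias ranges, and the regularization are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Model problem: -eps*u'' + u' = 1, u(0) = u(1) = 0 (boundary layer near x = 1)
    eps, a, r = 1e-2, 1.0, 0.0
    f = lambda x: np.ones_like(x)
    u_left, u_right = 0.0, 0.0

    # Hidden layer: fixed random internal weights and biases, never trained
    n_hidden, n_colloc = 200, 400
    w = rng.uniform(-20.0, 20.0, n_hidden)   # input-to-hidden weights (assumed range)
    b = rng.uniform(-20.0, 20.0, n_hidden)   # hidden biases (assumed range)

    def sig(z):   return 1.0 / (1.0 + np.exp(-z))
    def dsig(z):  s = sig(z); return s * (1.0 - s)
    def d2sig(z): s = sig(z); return s * (1.0 - s) * (1.0 - 2.0 * s)

    # Interior collocation points
    x = np.linspace(0.0, 1.0, n_colloc)[1:-1]
    Z = np.outer(x, w) + b                   # shape (n_colloc - 2, n_hidden)

    # PDE residual rows: L[sigma(w*x + b)] = -eps*w^2*sig'' + a*w*sig' + r*sig
    A_pde = -eps * d2sig(Z) * w**2 + a * dsig(Z) * w + r * sig(Z)
    rhs_pde = f(x)

    # Boundary conditions imposed directly as two extra collocation equations
    A_bc = sig(np.outer(np.array([0.0, 1.0]), w) + b)
    rhs_bc = np.array([u_left, u_right])

    A = np.vstack([A_pde, A_bc])
    rhs = np.concatenate([rhs_pde, rhs_bc])

    # Linear problem: solve for the output weights by (regularized) least squares
    c, *_ = np.linalg.lstsq(A, rhs, rcond=1e-12)

    # Evaluate the ELM approximation on a fine grid
    x_eval = np.linspace(0.0, 1.0, 1001)
    u_elm = sig(np.outer(x_eval, w) + b) @ c
    print("u(0), u(1) ~", u_elm[0], u_elm[-1])

    Since the internal weights and biases stay fixed, the only unknowns are the output coefficients, so the whole computation is a single least-squares solve; the rcond cutoff here stands in for whatever regularization one prefers for the ill-conditioned collocation matrix.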

    An overview on deep learning-based approximation methods for partial differential equations

    It is one of the most challenging problems in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs). Recently, several deep learning-based approximation algorithms for attacking this problem have been proposed and tested numerically on a number of examples of high-dimensional PDEs. This has given rise to a lively field of research in which deep learning-based methods and related Monte Carlo methods are applied to the approximation of high-dimensional PDEs. In this article we offer an introduction to this field of research, review some of the main ideas of deep learning-based approximation methods for PDEs, revisit one of the central mathematical results on deep neural network approximations for PDEs, and provide an overview of the recent literature in this area of research. Comment: 23 pages