
    An Ensemble-Proper Orthogonal Decomposition Method for the Nonstationary Navier-Stokes Equations

    The definition of partial differential equation (PDE) models usually involves a set of parameters whose values may vary over a wide range. Computing the solution for even a single set of parameter values may be quite expensive. In many settings, e.g., optimization, control, and uncertainty quantification, solutions are needed for many sets of parameter values. We consider the case of the time-dependent Navier-Stokes equations, for which a recently developed ensemble-based method allows for the efficient determination of the multiple solutions corresponding to many parameter sets. The method uses the average of the multiple solutions at any time step to define a linear set of equations that determines the solutions at the next time step. To further reduce the cost of determining multiple solutions of the Navier-Stokes equations, we incorporate a proper orthogonal decomposition (POD) reduced-order model into the ensemble-based method. The stability and convergence results for the ensemble-based method are extended to the ensemble-POD approach. Numerical experiments are provided that illustrate the accuracy and efficiency of computations determined using the new approach.
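    The efficiency gain of such an ensemble approach can be illustrated on a much simpler linear problem: when all ensemble members share the same coefficient matrix, one implicit solve (one factorization) serves every right-hand side at once. The sketch below uses a backward-Euler step for the 1D heat equation as a stand-in; the grid, parameter values, and initial data are illustrative assumptions, not the paper's Navier-Stokes discretization:

```python
import numpy as np

# Toy "ensemble" time stepping: J diffusion solutions advanced together.
# The key idea: one shared matrix A means one factorization for J solves.
n, J, dt, nu = 50, 4, 1e-3, 0.1      # grid size, ensemble size (assumed values)
h = 1.0 / (n + 1)
L = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2          # 1D Laplacian, Dirichlet BCs
x = h * np.arange(1, n + 1)
# J ensemble members: different initial sine modes, one per column
U = np.array([np.sin((j + 1) * np.pi * x) for j in range(J)]).T
A = np.eye(n) - dt * nu * L          # one implicit operator shared by all members
for _ in range(100):
    U = np.linalg.solve(A, U)        # matrix right-hand side: all J solves at once
```

Solving with a matrix right-hand side is the ensemble analogue of reusing one LU factorization across members; for the nonlinear Navier-Stokes case the paper builds the shared operator from the ensemble mean instead.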

    On Compact Finite Difference Schemes With Applications To Moving Boundary Problems

    Compact finite differences are introduced with the purpose of developing compact methods of higher order for the numerical solution of ordinary and elliptic partial differential equations.

    The notion of poisedness of a compact finite difference is introduced. It is shown that if the incidence matrix of the underlying interpolation problem contains no odd unsupported sequences, then the Polya conditions are necessary and sufficient for poisedness.

    A Pade operator method is used to construct compact formulae valid for uniform three-point grids. A second, function-theoretic method extends compact formulae to variably spaced three-point grids with no deterioration in the order of the truncation error.

    A new fourth-order compact method (CI4), leading to matrix systems with block tridiagonal structure, is applied to boundary value problems associated with second-order ordinary differential equations. Numerical experiments with both linear and nonlinear problems, on uniform and nonuniform grids, indicate rates of convergence of four.

    An application is considered to the time-dependent one-dimensional nonlinear Burgers' equation, in which an initial sinusoidal disturbance develops a very sharp boundary layer. It is found that the CI4 method, with a small number of points placed on a highly stretched grid, is capable of accurately resolving the boundary layer.

    A new method (LCM), based on local polynomial collocation and Gauss-type quadrature and leading to matrix systems with block tridiagonal structure, is used to generate high-order compact methods for ordinary differential equations. A tenth-order method is shown to be considerably more efficient than the CI4 method.

    A new fourth-order compact method, based on the CI4 method, is developed for the solution, on variable grids, of two-dimensional, time-independent elliptic partial differential equations. The method is applied to the ill-posed problem of calculating the interface in receding Hele-Shaw flow. Comparisons with exact solutions indicate that the numerical method behaves as expected for early times.

    Finally, in an application to the simulation of contaminant transport within a porous medium under an evolving free surface, new fourth-order explicit compact expressions for mixed derivatives are developed.
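    The flavor of a fourth-order compact scheme can be conveyed by the classical Numerov-type formula for u'' = f on a uniform grid, which keeps the three-point stencil on the left but replaces f_i on the right by the weighted average (f_{i-1} + 10 f_i + f_{i+1})/12. The sketch below (test problem and grid size are assumptions for illustration, not the thesis's CI4 method) solves u'' = -pi^2 sin(pi x) with homogeneous Dirichlet data, whose exact solution is sin(pi x):

```python
import numpy as np

# Classical fourth-order compact scheme for u'' = f, u(0) = u(1) = 0:
#   (u_{i-1} - 2 u_i + u_{i+1}) / h^2 = (f_{i-1} + 10 f_i + f_{i+1}) / 12
n = 32                                   # number of subintervals (assumed)
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)        # exact solution is u(x) = sin(pi x)
xi = x[1:-1]                             # interior nodes
A = (np.diag(np.full(n - 1, -2.0)) + np.diag(np.ones(n - 2), 1)
     + np.diag(np.ones(n - 2), -1)) / h**2
rhs = (f[:-2] + 10.0 * f[1:-1] + f[2:]) / 12.0   # compact right-hand side
u = np.linalg.solve(A, rhs)
err = float(np.max(np.abs(u - np.sin(np.pi * xi))))  # O(h^4) error
```

Compared with the standard second-order scheme on the same stencil, only the right-hand side changes, which is what makes compact methods attractive: fourth-order accuracy at tridiagonal cost.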

    Efficient variants of the CMRH method for solving a sequence of multi-shifted non-Hermitian linear systems simultaneously

    Multi-shifted linear systems with non-Hermitian coefficient matrices arise in the numerical solution of time-dependent partial/fractional differential equations (PDEs/FDEs), in control theory, in PageRank problems, and in other research fields. We derive efficient variants of the restarted Changing Minimal Residual method based on the cost-effective Hessenberg procedure (CMRH) for this problem class. We then introduce a flexible variant of the algorithm that allows variable preconditioning at each iteration to further accelerate the convergence of shifted CMRH. We analyse the performance of the new class of methods in the numerical solution of PDEs and FDEs, also against other multi-shifted Krylov subspace methods. Comment: Techn. Rep., Univ. of Groningen, 28 pages, 10 tables, 2 figs.
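    The structural fact exploited by all multi-shifted Krylov solvers, CMRH included, is the shift invariance of Krylov subspaces: K_m(A + sigma*I, b) = K_m(A, b), so one basis can serve every shift in the family (A + sigma_j I) x_j = b. A minimal numerical check of this property (random data and a plain Gram-Schmidt basis, not the authors' Hessenberg implementation):

```python
import numpy as np

def krylov_basis(M, v, m):
    """Orthonormal basis of the Krylov subspace K_m(M, v), via modified Gram-Schmidt."""
    Q = np.zeros((len(v), m))
    Q[:, 0] = v / np.linalg.norm(v)
    for k in range(1, m):
        w = M @ Q[:, k - 1]
        for j in range(k):
            w -= (Q[:, j] @ w) * Q[:, j]     # orthogonalize against earlier vectors
        Q[:, k] = w / np.linalg.norm(w)
    return Q

rng = np.random.default_rng(0)               # random test data (assumed)
n, m, sigma = 8, 4, 2.5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

Q1 = krylov_basis(A, b, m)                   # basis of K_m(A, b)
Q2 = krylov_basis(A + sigma * np.eye(n), b, m)   # basis of K_m(A + sigma I, b)
# Same subspace <=> equal orthogonal projectors, up to rounding
gap = float(np.max(np.abs(Q1 @ Q1.T - Q2 @ Q2.T)))
```

Because the subspace is shared, the expensive basis-building work is done once and only the small projected problems differ per shift, which is the cost model behind shifted CMRH and its relatives.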

    Connectionist Learning Based Numerical Solution of Differential Equations

    It is well known that differential equations are the backbone of many physical systems. Many real-world problems in science and engineering may be modeled by ordinary or partial differential equations. When such problems cannot be solved by exact/analytical methods, they may be solved by approximate methods such as Euler, Runge-Kutta, predictor-corrector, finite difference, finite element, boundary element, and other numerical techniques. Although these methods provide good approximations to the solution, they require a discretization of the domain via meshing, which may be challenging in two- or higher-dimensional problems. These procedures provide solutions at pre-defined points, and their computational complexity increases with the number of sampling points.

    In recent decades, various machine intelligence methods, in particular connectionist learning or Artificial Neural Network (ANN) models, have been used to solve a variety of real-world problems because of their excellent learning capacity. Recently, much attention has been given to using ANNs for solving differential equations. The approximate solution of differential equations by ANN is found to be advantageous, but it depends upon the ANN model that one considers. Here our target is to solve ordinary as well as partial differential equations using ANN. The approximate solution of differential equations by the ANN method has various inherent benefits in comparison with other numerical methods: (i) the approximate solution is differentiable in the given domain; (ii) computational complexity does not increase considerably with the number of sampling points or the dimension of the problem; (iii) it can be applied to solve linear as well as nonlinear Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). Moreover, traditional numerical methods are usually iterative in nature, and the step size is fixed before the computation starts. After the solution is obtained, if we want to know the solution between steps, the procedure must be repeated from the initial stage. ANN may be one way to overcome this repetition of iterations. Also, after training, the model may be used as a black box to obtain numerical results at any arbitrary point in the domain.

    A few authors have solved ordinary and partial differential equations by combining a feed-forward neural network with an optimization technique. As stated above, the objective of this thesis is to solve various types of ODEs and PDEs using efficient neural networks. Algorithms are developed where no desired output values are known and the output of the model can be generated by training only. The architectures of existing neural models are usually problem dependent, and the number of nodes, etc., is chosen by trial and error. Also, the training depends upon the weights of the connecting nodes, which are in general taken as random numbers that dictate the training. In this investigation, firstly, a new method, viz. the Regression Based Neural Network (RBNN), has been developed to handle differential equations. In the RBNN model, the number of nodes in the hidden layer may be fixed by using the regression method: the input and output data are first fitted with polynomials of various degrees using regression analysis, and the coefficients involved are taken as initial weights to start the neural training. The fixing of the hidden nodes depends upon the degree of the polynomial. We have considered the RBNN model for solving different ODEs with initial/boundary conditions. A feed-forward neural model and an unsupervised error back-propagation algorithm have been used for minimizing the error function and modifying the parameters (weights and biases) without the use of any optimization technique.

    Next, a single-layer Functional Link Artificial Neural Network (FLANN) architecture has been developed for solving differential equations for the first time. In FLANN, the hidden layer is replaced by a functional expansion block that enhances the input patterns using orthogonal polynomials such as Chebyshev, Legendre, Hermite, etc. The computations become efficient because the procedure does not need a hidden layer; thus, the number of network parameters is smaller than in the traditional ANN model. A variety of differential equations are solved here using the above-mentioned methods to show the reliability, power, and easy computer implementation of the methods. In particular, singular nonlinear initial value problems such as Lane-Emden and Emden-Fowler type equations have been solved using a Chebyshev Neural Network (ChNN) model. A single-layer Legendre Neural Network (LeNN) model has also been developed to handle the Lane-Emden equation, Boundary Value Problems (BVPs), and systems of coupled ordinary differential equations. Unforced Duffing oscillator and unforced Van der Pol-Duffing oscillator equations are solved by developing a Simple Orthogonal Polynomial based Neural Network (SOPNN) model. Further, a Hermite Neural Network (HeNN) model is proposed to handle the Van der Pol-Duffing oscillator equation. Finally, a single-layer Chebyshev Neural Network (ChNN) model has also been implemented to solve partial differential equations.
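    The flavor of a single-layer functional-expansion model can be conveyed with a small sketch. For the linear test ODE y' + y = 0, y(0) = 1 (an illustrative problem, not one from the thesis), a trial solution y(x) = 1 + x * sum_k c_k T_k(x) built on a Chebyshev expansion block satisfies the initial condition exactly, and the collocation residual is linear in the coefficients c_k. Here it is solved in one shot by least squares instead of the thesis's back-propagation training; the basis size and collocation grid are assumptions:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 6                                  # number of Chebyshev basis functions (assumed)
x = np.linspace(0.0, 1.0, 40)          # collocation points (assumed)
T = C.chebvander(x, n - 1)             # T[:, k] = T_k(x), the "expansion block"
# Derivatives T_k'(x): differentiate each unit coefficient vector, then evaluate
dT = np.column_stack([C.chebval(x, C.chebder(np.eye(n)[k])) for k in range(n)])
# Trial solution y = 1 + x * (T @ c) enforces y(0) = 1 by construction, so the
# residual of y' + y = 0 is:  sum_k c_k (T_k + x T_k' + x T_k) + 1 = 0
A = T + x[:, None] * dT + x[:, None] * T
c, *_ = np.linalg.lstsq(A, -np.ones_like(x), rcond=None)
y = 1.0 + x * (T @ c)
err = float(np.max(np.abs(y - np.exp(-x))))   # compare with exact solution e^{-x}
```

For nonlinear ODEs such as Lane-Emden equations the residual is no longer linear in the coefficients, which is where the iterative neural training described above comes in; the trial-solution construction, however, is the same.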

    Splitting and composition methods in the numerical integration of differential equations

    We provide a comprehensive survey of splitting and composition methods for the numerical integration of ordinary differential equations (ODEs). Splitting methods constitute an appropriate choice when the vector field associated with the ODE can be decomposed into several pieces, each of which is integrable. This class of integrators is explicit, simple to implement, and preserves structural properties of the system; in consequence, such methods are especially useful in geometric numerical integration. In addition, the numerical solution obtained by splitting schemes can be seen as the exact solution of a perturbed system of ODEs possessing the same geometric properties as the original system. This backward error interpretation has direct implications for the qualitative behavior of the numerical solution as well as for the error propagation along time. Closely connected with splitting integrators are composition methods. We analyze the order conditions required for a method to achieve a given order and summarize the different families of schemes one can find in the literature. Finally, we illustrate the main features of splitting and composition methods on several numerical examples arising from applications. Comment: Review paper; 56 pages, 6 figures, 8 tables.
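    A canonical instance of the surveyed schemes is Strang splitting applied to the harmonic oscillator, H = p^2/2 + q^2/2, split into potential and kinetic parts: a half step with one flow, a full step with the other, then another half step. The resulting method is second order and symplectic, so its energy error stays bounded over long times instead of drifting. A minimal sketch (step size and horizon are arbitrary choices):

```python
def strang_step(q, p, dt):
    """One Strang-splitting (leapfrog) step for q'' = -q."""
    p -= 0.5 * dt * q      # half "kick": exact flow of the potential part
    q += dt * p            # full "drift": exact flow of the kinetic part
    p -= 0.5 * dt * q      # half "kick"
    return q, p

q, p, dt = 1.0, 0.0, 0.1           # initial condition and step size (assumed)
E0 = 0.5 * (p**2 + q**2)           # initial energy
for _ in range(10_000):            # 10,000 steps: a long integration
    q, p = strang_step(q, p, dt)
drift = abs(0.5 * (p**2 + q**2) - E0)   # stays O(dt^2), with no secular growth
```

An explicit Runge-Kutta method of the same order would show a systematic energy drift over the same horizon; the bounded oscillation here is exactly the backward-error behavior the survey describes.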