
    A family of iterative methods with accelerated eighth-order convergence

    We propose a family of eighth-order iterative methods without memory for solving nonlinear equations. The new methods are constructed by the weight-function technique, using an approximation for the last derivative that reduces the required number of functional evaluations per step. Their efficiency indices are all found to be 1.682. Several numerical examples allow us to compare our algorithms with known ones and confirm the theoretical results.
    Cordero Barbero, A.; Fardi, M.; Ghasemi, M.; Torregrosa Sánchez, J. R. (2012). A family of iterative methods with accelerated eighth-order convergence. Journal of Applied Mathematics, 2012. https://doi.org/10.1155/2012/282561
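    The efficiency index 1.682 quoted above is Ostrowski's measure EI = p^(1/d), where p is the convergence order and d the number of functional evaluations per iteration; an optimal eighth-order method in the Kung-Traub sense uses four evaluations. A minimal sketch (the function name is ours):

```python
def efficiency_index(order, evals):
    """Ostrowski efficiency index: convergence order per functional
    evaluation, EI = order**(1/evals)."""
    return order ** (1.0 / evals)

# An optimal eighth-order method uses 4 functional evaluations per step:
print(round(efficiency_index(8, 4), 3))  # 1.682
# For comparison, Newton's method: order 2 with 2 evaluations per step:
print(round(efficiency_index(2, 2), 3))  # 1.414
```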

    ADI splitting schemes for a fourth-order nonlinear partial differential equation from image processing

    We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the H^(-1)-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to stiffness, which affects most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.
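    As an illustration of the idea (not the authors' fourth-order total-variation flow), the sketch below applies Peaceman-Rachford directional splitting with implicit time-stepping to the 2D heat equation: each half-step is implicit in one coordinate direction only, so the linear algebra reduces to one-dimensional (tridiagonal) systems. The grid size, time step, and diffusion model are assumptions made for the sketch.

```python
import numpy as np

def adi_heat_step(u, dt, h, nu=1.0):
    """One Peaceman-Rachford ADI step for u_t = nu*(u_xx + u_yy) on the
    interior of a grid with homogeneous Dirichlet boundaries. Each
    half-step is implicit in a single direction, so only one-dimensional
    (tridiagonal) systems are solved."""
    n = u.shape[0]
    r = nu * dt / (2.0 * h * h)
    # 1D second-difference matrix (tridiagonal; dense here for brevity)
    A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    I = np.eye(n)
    # Half-step 1: implicit in x (axis 0), explicit in y (axis 1)
    ustar = np.linalg.solve(I - r * A, u + r * (u @ A))
    # Half-step 2: implicit in y, explicit in x
    return np.linalg.solve(I - r * A, (ustar + r * (A @ ustar)).T).T

# Decay of a smooth initial profile on the unit square
n = 20
h = 1.0 / (n + 1)
xi = h * np.arange(1, n + 1)
u0 = np.outer(np.sin(np.pi * xi), np.sin(np.pi * xi))
u1 = adi_heat_step(u0, dt=1e-2, h=h)
```

    The scheme is unconditionally stable for this model problem, which is what makes implicit directional splitting attractive for stiff higher-order equations.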

    An unconditionally energy stable finite difference scheme for a stochastic Cahn-Hilliard equation

    In this work, the MMC-TDGL equation, a stochastic Cahn-Hilliard equation, is solved numerically using the finite difference method in combination with a convex splitting of the energy functional. For the non-stochastic case, we develop an unconditionally energy stable difference scheme which is proved to be uniquely solvable. For the stochastic case, adopting the same splitting of the energy functional, we construct a similar, uniquely solvable difference scheme with a discretized stochastic term. The resulting schemes are nonlinear and are solved by Newton iteration. For long-time simulation, an adaptive time-stepping strategy is developed based on both the first- and second-order derivatives of the energy. Numerical experiments are carried out to verify the energy stability, the efficiency of the adaptive time stepping, and the effect of the stochastic term.
    Comment: This paper has been accepted for publication in Science China Mathematics.
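    The abstract does not give the time-step formula. One widely used energy-monitored strategy in this literature selects the step from the rate of change of the discrete energy: small steps during fast transients, large steps near equilibrium. The formula and all constants below are illustrative assumptions, not the authors' scheme:

```python
import math

def adaptive_dt(dE_dt, dt_min=1e-4, dt_max=1e-1, alpha=1e3):
    """Energy-monitored time-step selection: the step shrinks toward
    dt_min when the energy changes quickly and grows toward dt_max when
    the dynamics are slow. All constants are illustrative."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt ** 2))

print(adaptive_dt(0.0))   # slow dynamics  -> largest step, 0.1
print(adaptive_dt(1e6))   # fast transient -> smallest step, 0.0001
```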

    Fast Solvers for Cahn-Hilliard Inpainting

    We consider the efficient solution of the modified Cahn-Hilliard equation for binary image inpainting using convexity splitting, which allows an unconditionally gradient stable time-discretization scheme. We treat both a double-well and a double-obstacle potential. For the latter we obtain a nonlinear system, to which we apply a semi-smooth Newton method combined with a Moreau-Yosida regularization technique. At the heart of both methods lies the solution of large, sparse linear systems. We introduce and study block-triangular preconditioners that use an efficient and easy-to-apply Schur complement approximation. Numerical results indicate that our preconditioners work very well for both problems and show that qualitatively better results can be obtained using the double-obstacle potential.
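    A small dense sketch of the block-triangular idea on a generic saddle-point system (the matrices are random stand-ins, and the exact Schur complement is used where the paper employs an efficient approximation): with P = [[A, 0], [B, -S]] applied as a preconditioner, GMRES converges in very few iterations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m = 40, 10
G = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = G @ G.T                           # SPD (1,1) block
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

S = B @ np.linalg.solve(A, B.T)       # Schur complement (exact here;
                                      # the paper uses an approximation)

def apply_prec(r):
    """Solve P y = r with the block lower-triangular P = [[A, 0], [B, -S]]."""
    y1 = np.linalg.solve(A, r[:n])
    y2 = np.linalg.solve(-S, r[n:] - B @ y1)
    return np.concatenate([y1, y2])

P = LinearOperator((n + m, n + m), matvec=apply_prec)
b = rng.standard_normal(n + m)
x, info = gmres(K, b, M=P)
```

    With exact blocks, the preconditioned matrix has a minimal polynomial of degree two, so GMRES terminates in two iterations; a cheap Schur approximation trades a few extra iterations for a much cheaper application.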

    A multigrid continuation method for elliptic problems with folds

    We introduce a new multigrid continuation method for computing solutions of nonlinear elliptic eigenvalue problems which contain limit points (also called turning points or folds). Our method combines the frozen-tau technique of Brandt with pseudo-arclength continuation and correction of the parameter on the coarsest grid. This produces considerable storage savings over direct continuation methods, as well as better initial coarse-grid approximations, and avoids complicated algorithms for determining the parameter on finer grids. We provide numerical results for second-, fourth- and sixth-order approximations to the two-parameter, two-dimensional stationary reaction-diffusion problem Δu + λ exp(u/(1+au)) = 0. For the higher-order interpolations we use bicubic and biquintic splines. The convergence rate is observed to be independent of the occurrence of limit points.
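    Pseudo-arclength continuation is the ingredient that lets the method pass folds, where natural continuation in the parameter fails. The scalar sketch below uses our own toy problem f(u, λ) = λ e^u − u, whose solution branch λ = u e^(−u) folds at (u, λ) = (1, 1/e) — not the multigrid setting of the paper — and traces the branch through the limit point with a tangent predictor and a Newton corrector on the extended system:

```python
import numpy as np

def f(u, lam):
    return lam * np.exp(u) - u       # branch: lam = u*exp(-u), fold at u = 1

def jac(u, lam):
    return np.array([lam * np.exp(u) - 1.0, np.exp(u)])  # [df/du, df/dlam]

def pseudo_arclength(steps=120, ds=0.05):
    """Trace the branch of f(u, lam) = 0 through its fold: tangent
    prediction plus Newton correction of the extended system
    [f(z); t.(z - z_old) - ds] = 0 with z = (u, lam)."""
    z = np.array([0.0, 0.0])
    J = jac(*z)
    t = np.array([-J[1], J[0]])      # null vector of [df/du, df/dlam]
    t /= np.linalg.norm(t)
    if t[1] < 0.0:
        t = -t                       # start with lam increasing
    branch = [z.copy()]
    for _ in range(steps):
        z_new = z + ds * t           # tangent predictor
        for _ in range(20):          # Newton corrector
            J = jac(*z_new)
            F = np.array([f(*z_new), t @ (z_new - z) - ds])
            z_new = z_new - np.linalg.solve(np.array([J, t]), F)
            if abs(F[0]) < 1e-12:
                break
        J = jac(*z_new)
        t_new = np.array([-J[1], J[0]])
        t_new /= np.linalg.norm(t_new)
        if t_new @ t < 0.0:
            t_new = -t_new           # keep orientation through the fold
        z, t = z_new, t_new
        branch.append(z.copy())
    return np.array(branch)

branch = pseudo_arclength()
```

    The arclength constraint keeps the extended Jacobian nonsingular at the fold, where df/du alone vanishes.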

    Second order adjoints for solving PDE-constrained optimization problems

    Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach, inverse problems are formulated as PDE-constrained optimization problems, where the optimal estimate of the uncertain parameters is the minimizer of a certain cost functional subject to the constraints posed by the model equations. The numerical solution of such optimization problems requires the computation of derivatives of the model output with respect to the model parameters. The first-order derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through first-order adjoint sensitivity analysis. Second-order adjoint models give second-derivative information in the form of matrix-vector products between the Hessian of the cost functional and user-defined vectors. Traditionally, the construction of second-order derivatives for large-scale models has been considered too costly. Consequently, data assimilation applications employ optimization algorithms that use only first-order derivative information, such as nonlinear conjugate gradients and quasi-Newton methods. In this paper we discuss the mathematical foundations of second-order adjoint sensitivity analysis and show that it provides an efficient approach for obtaining Hessian-vector products. We study the benefits of using second-order information in the numerical optimization process for data assimilation applications. The numerical studies are performed in a twin-experiment setting with a two-dimensional shallow water model. Different scenarios are considered with different discretization approaches, observation sets, and noise levels. Optimization algorithms that employ second-order derivatives are tested against widely used methods that require only first-order derivatives. Conclusions are drawn regarding the potential benefits and the limitations of using higher-order information in large-scale data assimilation problems.
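    The key computational point — a Hessian-vector product without ever forming the Hessian — can be illustrated on a toy linear-model cost function (our stand-in; the paper works with a shallow water model). For J(x) = ½‖Hx − y‖², the first-order adjoint gives the gradient Hᵀ(Hx − y), the Hessian-vector product is Hᵀ(Hv) for any vector v, and a finite difference of gradients provides a cheap consistency check:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 8
H = rng.standard_normal((m, n))    # toy linear observation operator
y = rng.standard_normal(m)

def grad(x):
    """First-order adjoint: gradient of J(x) = 0.5*||H x - y||^2."""
    return H.T @ (H @ x - y)

def hess_vec(v):
    """Hessian-vector product H^T H v, computed matrix-free: two
    operator applications, never the n-by-n Hessian itself."""
    return H.T @ (H @ v)

x = rng.standard_normal(n)
v = rng.standard_normal(n)
eps = 1e-6
fd = (grad(x + eps * v) - grad(x)) / eps   # finite-difference check
```

    For a quadratic cost the finite difference is exact up to rounding; for nonlinear models the second-order adjoint delivers the same product at the cost of one extra forward and one extra adjoint sweep.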

    The chebop system for automatic solution of differential equations

    In MATLAB, it would be good to be able to solve a linear differential equation by typing u = L\f, where f, u, and L are representations of the right-hand side, the solution, and the differential operator with boundary conditions. Similarly, it would be good to be able to exponentiate an operator with expm(L) or determine eigenvalues and eigenfunctions with eigs(L). A system is described in which such calculations are indeed possible, based on the previously developed chebfun system in object-oriented MATLAB. The algorithms involved amount to spectral collocation methods on Chebyshev grids of automatically determined resolution.
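    A bare-bones analogue of u = L\f in Python (with a fixed grid size rather than chebop's automatically chosen resolution): build the Chebyshev differentiation matrix, replace the boundary rows of L = D² with Dirichlet conditions, and backslash-solve. The construction of D follows Trefethen's well-known cheb routine.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and grid x (n + 1 points),
    after Trefethen's cheb.m."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Solve u'' = 1 on [-1, 1], u(-1) = u(1) = 0 -- collocation analogue of u = L\f
n = 16
D, x = cheb(n)
L = D @ D
f = np.ones(n + 1)
L[0, :] = 0.0; L[0, 0] = 1.0       # boundary rows enforce the
L[-1, :] = 0.0; L[-1, -1] = 1.0    # Dirichlet conditions
f[0] = f[-1] = 0.0
u = np.linalg.solve(L, f)          # exact solution is (x**2 - 1)/2
```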