41 research outputs found

    Numerically stable improved Chebyshev-Halley type schemes for matrix sign function

    [EN] A general family of iterative methods including a free parameter is derived and proved to be convergent for computing the matrix sign function under some restrictions on the parameter. Several special cases, including their global convergence behavior, are dealt with. It is shown analytically that they are asymptotically stable. A variety of numerical experiments on matrices of different sizes is considered to show the effectiveness of the proposed members of the family.
    This research was supported by Ministerio de Economía y Competitividad MTM2014-52016-C2-2-P and by Generalitat Valenciana PROMETEO/2016/089.
    Cordero Barbero, A.; Soleymani, F.; Torregrosa Sánchez, JR.; Ullah, MZ. (2017). Numerically stable improved Chebyshev-Halley type schemes for matrix sign function. Journal of Computational and Applied Mathematics, 318:189-198. https://doi.org/10.1016/j.cam.2016.10.025
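    The family itself is not reproduced in the abstract; as a hedged illustration of this class of sign-function iterations, the sketch below implements the classical Newton iteration X_{k+1} = (X_k + X_k^{-1})/2, of which the Chebyshev-Halley schemes are higher-order relatives. The example matrix and tolerances are ours, not from the paper.

```python
import numpy as np

def matrix_sign_newton(A, tol=1e-12, max_iter=100):
    """Approximate the matrix sign function sign(A) with the classical
    Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2.

    Requires A to have no eigenvalues on the imaginary axis."""
    X = A.astype(float)
    for _ in range(max_iter):
        X_next = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_next - X, 1) <= tol * np.linalg.norm(X_next, 1):
            return X_next
        X = X_next
    return X

# For a symmetric matrix, sign(A) shares A's eigenvectors with
# eigenvalues mapped to +1 or -1, so sign(A)^2 = I.
A = np.array([[4.0, 1.0], [1.0, -3.0]])
S = matrix_sign_newton(A)
print(np.round(S @ S, 8))
```

    The iteration converges quadratically; the higher-order schemes of the paper trade more matrix operations per step for fewer steps.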

    A numerically stable high-order Chebyshev-Halley type multipoint iterative method for calculating matrix sign function

    A new eighth-order Chebyshev-Halley type iteration is proposed for solving nonlinear equations and computing the matrix sign function. Basins of attraction show that several special cases of the new method are globally convergent. It is analytically proven that the new method is asymptotically stable and that it has order of convergence eight. The effectiveness of the theoretical results is illustrated by numerical experiments, in which the new method is applied to a random matrix, the Wilson matrix and a continuous-time algebraic Riccati equation. Numerical results show that, compared with some well-known methods, the new method achieves the accuracy requirement in the minimum computing time and with the minimum number of iterations.

    Adomian decomposition method, nonlinear equations and spectral solutions of burgers equation

    Doctoral thesis. Engineering Sciences. 2006. Faculdade de Engenharia, Universidade do Porto; Instituto Superior Técnico, Universidade Técnica de Lisboa.

    Aproximación de ecuaciones diferenciales mediante una nueva técnica variacional y aplicaciones

    [ENG] This thesis is devoted to the study and approximation of systems of differential equations based on the analysis of an error functional associated, in a natural way, with the original problem. We prove that, in seeking to minimize this error with standard descent schemes, the procedure can never get stuck in local minima, but steadily decreases the error until reaching the original solution. One main step of the procedure relies on a very particular linearization of the problem; in this sense it behaves like a globally convergent Newton-type method. We concentrate on the approximation of stiff systems of ODEs, DDEs, DAEs and Hamiltonian systems. In all these problems implicit schemes are needed. We believe that this approach can be used in a systematic way to examine other situations and other types of equations.
    Universidad Politécnica de Cartagena
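    As a hedged sketch of the variational idea (a toy discretization of ours, not the thesis code): discretize u' = -u, u(0) = 1 on [0, 1], form the squared-residual error functional, and drive it down with plain gradient descent. The thesis shows such functionals have no spurious local minima, so descent steadily reduces the error.

```python
import numpy as np

# Toy illustration: minimize the squared residual of a forward-difference
# discretization of u' = -u, u(0) = 1, by gradient descent.
N, h = 21, 1.0 / 20
u = np.ones(N)          # initial guess; u[0] = 1 is the fixed boundary value

def residual(u):
    # r_i = (u_{i+1} - u_i)/h - f(u_i)  with  f(u) = -u
    return (u[1:] - u[:-1]) / h + u[:-1]

def grad(u):
    r = residual(u)
    g = np.zeros(N)
    g[1:]  += 2 * r / h             # dr_i / du_{i+1} = 1/h
    g[:-1] += 2 * r * (-1 / h + 1)  # dr_i / du_i     = -1/h + 1
    g[0] = 0.0                      # boundary value stays fixed
    return g

E0 = np.sum(residual(u) ** 2)
for _ in range(20000):
    u -= 4e-4 * grad(u)
E1 = np.sum(residual(u) ** 2)

print(E0, E1, u[-1])   # error functional drops; u(1) approaches exp(-1)
```

    The step size and iteration count are chosen for this toy problem only; the thesis uses implicit schemes and a particular linearization rather than raw gradient descent.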

    Numerical iterative methods for nonlinear problems.

    The primary focus of the research in this thesis is the construction of iterative methods for nonlinear problems coming from different disciplines. The present manuscript sheds light on the development of iterative schemes for scalar nonlinear equations, for computing the generalized inverse of a matrix, for general classes of systems of nonlinear equations, and for specific systems of nonlinear equations associated with ordinary and partial differential equations. Our treatment of the considered iterative schemes consists of two parts: in the first, the 'construction part', we define the solution method; in the second part we establish the proof of local convergence and derive the convergence order using symbolic algebra tools. The quantitative measure in terms of floating-point operations, together with the quality of the computed solution when real nonlinear problems are considered, provides the efficiency comparison between the proposed and the existing iterative schemes. In the case of systems of nonlinear equations, the multi-step extensions are formed in such a way that very economical iterative methods are obtained from a computational viewpoint. In particular, in the multi-step versions of an iterative method for systems of nonlinear equations, the inversion of the Jacobian is avoided, which makes the iterative process computationally very fast. When considering special systems of nonlinear equations associated with ordinary and partial differential equations, we can use higher-order Fréchet derivatives thanks to the special type of nonlinearity; from a computational viewpoint such an approach has to be avoided in the case of general systems of nonlinear equations due to the high computational cost. Aside from nonlinear equations, an efficient matrix iteration method is developed and implemented for the calculation of the weighted Moore-Penrose inverse.
    Finally, a variety of nonlinear problems have been numerically tested in order to show the correctness and the computational efficiency of our developed iterative algorithms.
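    The multi-step pattern described above, in which one Jacobian evaluation and solve is reused across several cheap substeps, can be sketched as follows. The example system, step counts and names are ours; the thesis methods are higher-order variants of this idea.

```python
import numpy as np

def F(v):
    # Hypothetical test system: x^2 + y^2 = 4,  x*y = 1
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):
    x, y = v
    return np.array([[2 * x, 2 * y], [y, x]])

def frozen_multistep_newton(v, outer=8, substeps=3):
    """Each outer iteration evaluates the Jacobian once, then reuses the
    frozen matrix for several Newton-like substeps.  In practice one
    would factor it once (e.g. LU) and reuse the factors per solve."""
    for _ in range(outer):
        Jv = J(v)                       # frozen for all substeps
        for _ in range(substeps):
            v = v - np.linalg.solve(Jv, F(v))
    return v

root = frozen_multistep_newton(np.array([2.0, 0.5]))
print(root, F(root))   # F(root) ~ 0
```

    The substeps cost only a residual evaluation and a back-substitution each, which is the source of the computational economy the abstract refers to.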

    Equations and systems of nonlinear equations: from high order numerical methods to fast Eigensolvers for structured matrices and applications

    A parametrized multi-step Newton method is constructed to widen the region of convergence of the classical multi-step Newton method. A second improvement is proposed in the context of multi-step Newton methods by introducing preconditioners that enhance their accuracy without disturbing their original order of convergence or, in most cases, the related computational cost. Preconditioners are also effective when applied to the Newton method for roots with unknown multiplicities. Frozen-Jacobian higher-order multi-step iterative methods for the solution of systems of nonlinear equations are developed, and the related results are better than those obtained with the classical frozen-Jacobian multi-step Newton method. To benefit from the past information produced by the iterative method, we construct iterative methods with memory for solving systems of nonlinear equations. Iterative methods with memory have a greater rate of convergence than iterative methods without memory, and in terms of computational cost they are marginally superior. Numerical methods are also introduced for approximating all the eigenvalues of banded symmetric Toeplitz and preconditioned Toeplitz matrices. Our proposed numerical methods work very efficiently when the generating symbols of the considered Toeplitz matrices are bijective.
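    The link between eigenvalues and the generating symbol can be illustrated with the simplest banded case, where it is exact (a known closed form, used here only as a sanity check; the thesis targets general banded and preconditioned matrices): the tridiagonal Toeplitz matrix generated by f(θ) = 2 - 2cos θ, which is monotone (hence bijective) on [0, π], has eigenvalues equal to samples of f on a uniform grid.

```python
import numpy as np

n = 8
# Symmetric banded Toeplitz matrix with symbol f(theta) = 2 - 2*cos(theta):
# diagonal 2, first off-diagonals -1.
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

eigs = np.sort(np.linalg.eigvalsh(T))

# In this tridiagonal case the eigenvalues are exactly the symbol
# sampled at theta_k = k*pi/(n+1), k = 1..n.
theta = np.arange(1, n + 1) * np.pi / (n + 1)
samples = np.sort(2 - 2 * np.cos(theta))

print(np.max(np.abs(eigs - samples)))  # agreement to rounding error
```

    For general banded symbols the sampling is only asymptotically exact, which is what makes fast approximation of all eigenvalues a nontrivial problem.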

    Efficient Homomorphic Comparison Methods with Optimal Complexity

    Comparison of two numbers is one of the most frequently used operations, but it has been a challenging task to efficiently compute the comparison function in homomorphic encryption (HE), which natively supports only addition and multiplication. Recently, Cheon et al. (Asiacrypt 2019) introduced a new approximate representation of the comparison function by a rational function, and showed that this rational function can be evaluated by an iterative algorithm. Owing to this iterative feature, their method achieves a logarithmic computational complexity compared to previous polynomial approximation methods; however, the computational complexity is still not optimal, and the algorithm is quite slow for large-bit inputs in HE implementations. In this work, we propose new comparison methods with optimal asymptotic complexity based on composite polynomial approximation. The main idea is to systematically design a constant-degree polynomial f by identifying the core properties that make a composite polynomial f ∘ f ∘ ⋯ ∘ f approach the sign function (equivalent to the comparison function) as the number of compositions increases. We additionally introduce an acceleration method that applies a mixed polynomial composition f ∘ ⋯ ∘ f ∘ g ∘ ⋯ ∘ g, for some other polynomial g with different properties, instead of f ∘ f ∘ ⋯ ∘ f. Utilizing the devised polynomials f and g, our new comparison algorithms require only Θ(log(1/ε)) + Θ(log α) computational complexity to obtain an approximate comparison result of a, b ∈ [0, 1] satisfying |a - b| ≥ ε within 2^(-α) error. The asymptotic optimality results in substantial performance enhancement: our comparison algorithm on encrypted 20-bit integers for α = 20 takes 1.43 milliseconds in amortized running time, which is 30 times faster than the previous work.
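    The composite-polynomial mechanism can be sketched with a simple candidate: the odd polynomial f(x) = (3x - x^3)/2 fixes ±1 and pushes every nonzero x ∈ [-1, 1] toward sign(x) under repeated composition. (The paper derives its own optimized polynomials f and g; this sketch only illustrates the composition idea, in the clear rather than under encryption.)

```python
def f(x):
    # Odd, constant-degree polynomial: fixes +1 and -1, and satisfies
    # f(x) > x for 0 < x < 1, so iteration expands values away from 0.
    return (3 * x - x ** 3) / 2

def approx_sign(x, d):
    # d-fold composition f(f(...f(x)...)) -> sign(x) as d grows
    for _ in range(d):
        x = f(x)
    return x

def compare(a, b, d=20):
    # comp(a, b) ~ (sign(a - b) + 1) / 2 for a, b in [0, 1]
    return (approx_sign(a - b, d) + 1) / 2

print(compare(0.7, 0.3), compare(0.3, 0.7))
```

    Because f has constant degree, each composition costs O(1) homomorphic multiplications, which is how the overall Θ(log(1/ε)) + Θ(log α) complexity arises.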