    SOME MODIFICATIONS OF CHEBYSHEV-HALLEY’S METHODS FREE FROM SECOND DERIVATIVE WITH EIGHTH-ORDER OF CONVERGENCE

    The Chebyshev-Halley method is an iterative method for solving a nonlinear equation with third order of convergence. In this paper, we present some new variants of a three-step Chebyshev-Halley method, free from the second derivative and depending on two parameters. The proposed methods have eighth order of convergence for suitable choices of the two parameters and require four function evaluations per iteration, giving an efficiency index of 8^(1/4) ≈ 1.682. Numerical simulations on several test functions are presented to show the performance of the proposed methods.
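    As background (not taken from the paper itself), the classical Chebyshev-Halley family, the third-order starting point that such derivative-free variants are typically built from, is x_{n+1} = x_n - [1 + L_f(x_n) / (2(1 - alpha*L_f(x_n)))] * f(x_n)/f'(x_n), with L_f = f*f''/(f')^2. A minimal Python sketch of this classical family follows; the paper's eighth-order, second-derivative-free variants are not reproduced here.

        # Minimal sketch of the classical third-order Chebyshev-Halley family
        # (alpha = 0 gives Chebyshev's method, alpha = 1/2 gives Halley's method).
        # Illustrative only; not the eighth-order derivative-free variants of the paper.
        def chebyshev_halley(f, df, d2f, x0, alpha=0.5, tol=1e-12, max_iter=50):
            x = x0
            for _ in range(max_iter):
                fx, dfx = f(x), df(x)
                if abs(fx) < tol:
                    break
                L = fx * d2f(x) / dfx**2                      # L_f(x) = f f'' / (f')^2
                x = x - (1.0 + L / (2.0 * (1.0 - alpha * L))) * fx / dfx
            return x

        # Example: solve x^3 - 2 = 0 (root 2**(1/3) ~ 1.2599)
        print(chebyshev_halley(lambda x: x**3 - 2.0,
                               lambda x: 3.0 * x**2,
                               lambda x: 6.0 * x,
                               x0=1.0))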

    Two New Predictor-Corrector Iterative Methods with Third- and Ninth-Order Convergence for Solving Nonlinear Equations

    In this paper, we suggest and analyze two new predictor-corrector iterative methods with third- and ninth-order convergence for solving nonlinear equations. The first method is a development of [M. A. Noor, K. I. Noor and K. Aftab, Some New Iterative Methods for Solving Nonlinear Equations, World Applied Science Journal, 20(6), (2012): 870-874], based on the trapezoidal integration rule and the centroid mean. The second method improves the first proposed method by using the technique of updating the solution. The order of convergence and the corresponding error equations of the new proposed methods are proved. Several numerical examples are given to illustrate the efficiency and performance of the new methods and to compare them with Newton's method and other relevant iterative methods.
    Keywords: Nonlinear equations, Predictor-corrector methods, Trapezoidal integration rule, Centroid mean, Technique of updating the solution, Order of convergence
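    The abstract does not state the new methods' formulas. As a rough, hedged illustration of a third-order predictor-corrector scheme based on the trapezoidal rule in the same spirit (a Newton predictor followed by a corrector that averages the derivative over the step), consider the following Python sketch; it is not the ninth-order method proposed in the paper.

        import math

        # Hedged sketch: third-order predictor-corrector scheme built on the
        # trapezoidal rule (Newton step as predictor, averaged derivative in the
        # corrector). Illustrative only; not the paper's exact methods.
        def trapezoidal_predictor_corrector(f, df, x0, tol=1e-12, max_iter=50):
            x = x0
            for _ in range(max_iter):
                fx = f(x)
                if abs(fx) < tol:
                    break
                y = x - fx / df(x)                   # predictor: Newton step
                x = x - 2.0 * fx / (df(x) + df(y))   # corrector: trapezoidal average of f'
            return x

        # Example: root of cos(x) - x near x0 = 1 (~0.7390851332)
        print(trapezoidal_predictor_corrector(lambda x: math.cos(x) - x,
                                              lambda x: -math.sin(x) - 1.0,
                                              x0=1.0))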

    Some new efficient multipoint iterative methods for solving nonlinear systems of equations

    A new multipoint iterative method of sixth-order convergence for approximating solutions of nonlinear systems of equations is put forward. It requires the evaluation of two vector functions and two Jacobian matrices per iteration. Furthermore, we use it as a predictor to derive a general multipoint method. Convergence and error analysis, an estimate of the computational complexity, numerical implementation and comparisons are given to verify the applicability and validity of the proposed methods. This research was supported by Islamic Azad University - Hamedan Branch, Ministerio de Ciencia y Tecnologia MTM2011-28636-C02-02 and Universitat Politecnica de Valencia SP20120474.
    Lotfi, T.; Bakhtiari, P.; Cordero Barbero, A.; Mahdiani, K.; Torregrosa Sánchez, J. R. (2015). Some new efficient multipoint iterative methods for solving nonlinear systems of equations. International Journal of Computer Mathematics, 92(9), 1921-1934. https://doi.org/10.1080/00207160.2014.946412
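    The sixth-order scheme itself is not spelled out in the abstract. As a baseline, the Newton step for systems, the building block that such multipoint methods typically compose and reuse (and which already accounts for one vector-function and one Jacobian evaluation per iteration), can be sketched in Python/NumPy as follows.

        import numpy as np

        # Hedged sketch: Newton's method for a nonlinear system F(x) = 0.
        # Multipoint methods of the kind described above compose several such
        # steps (often reusing the Jacobian) to raise the order per iteration.
        def newton_system(F, J, x0, tol=1e-12, max_iter=50):
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                Fx = F(x)
                if np.linalg.norm(Fx) < tol:
                    break
                x = x - np.linalg.solve(J(x), Fx)    # solve J(x) * dx = F(x)
            return x

        # Example system: x^2 + y^2 = 1, x - y = 0  ->  (1/sqrt(2), 1/sqrt(2))
        F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
        J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
        print(newton_system(F, J, [1.0, 0.5]))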

    Study on the convergence and dynamics of Newton, Stirling and high-order methods

    Mathematics has, since its origin, been at the service of society, trying to answer the problems that arise. This is still the case today: the development of mathematics is tied to the demands of other sciences that need to solve concrete, real-world situations. Most problems in science and engineering cannot be solved using linear equations, so nonlinear equations must be used to model them (Amat, 2008; see also Argyros and Magreñán, 2017, 2018), among others. The difficulty with nonlinear equations is that only in a few cases can the solution be found in closed form; therefore, in most cases, iterative methods must be used to solve them. Starting from an initial point, an iterative method generates a sequence that may or may not converge to the solution.
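    As a toy illustration of that last point (not taken from the thesis), the Newton iteration below converges from one starting point and falls into a 2-cycle from another, which is exactly the kind of behaviour studied in the dynamics of these methods.

        # Toy illustration (not from the thesis): Newton's method for
        # f(x) = x^3 - 2x + 2. From x0 = -2 the sequence converges to the real
        # root ~ -1.7693; from x0 = 0 it oscillates in the 2-cycle 0 <-> 1.
        def newton(f, df, x0, n_steps=10):
            x, seq = x0, [x0]
            for _ in range(n_steps):
                x = x - f(x) / df(x)
                seq.append(x)
            return seq

        f = lambda x: x**3 - 2.0 * x + 2.0
        df = lambda x: 3.0 * x**2 - 2.0
        print(newton(f, df, -2.0))   # converges to the real root
        print(newton(f, df, 0.0))    # cycles: 0, 1, 0, 1, ...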

    Approximation of differential equations by means of a new variational technique, and applications

    This thesis presents a theoretical and numerical study of systems of differential equations based on the analysis of an error functional associated, in a natural way, with the original problem. We prove that when standard descent schemes are used to minimize this functional, the procedure can never get stuck in local minima, but steadily decreases the error until converging to the original solution. One main step in the procedure relies on a very particular linearization of the problem; in this sense the algorithm can be regarded as a globally convergent Newton-type method. We study the approximation of stiff systems of ODEs, delay differential equations, differential-algebraic equations and Hamiltonian problems. In all these problems we need to use implicit schemes. We believe that this variational technique can be used in a systematic way to examine other situations and other types of equations.
    Universidad Politécnica de Cartagena
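    As a small hedged sketch of the underlying idea (discretize the problem, form an error functional that vanishes only at the solution, and drive it to zero by minimization), the following Python example treats the scalar IVP u' = f(t, u), u(0) = u0, with a midpoint-rule residual; it uses a generic optimizer rather than the particular descent scheme developed in the thesis, and all names in it are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        # Hedged sketch of the "minimize an error functional" idea for the IVP
        # u'(t) = f(t, u), u(0) = u0, on a uniform grid. The discrete functional
        # penalizes the residual of a midpoint-rule discretization. Illustrative
        # only; not the descent scheme analysed in the thesis.
        def error_functional(u_inner, f, u0, t):
            u = np.concatenate(([u0], u_inner))          # enforce the initial condition
            h = t[1] - t[0]
            tm = 0.5 * (t[:-1] + t[1:])                  # midpoints of the grid
            um = 0.5 * (u[:-1] + u[1:])
            residual = (u[1:] - u[:-1]) / h - f(tm, um)  # midpoint-rule residual
            return h * np.sum(residual**2)

        # Example: u' = -u, u(0) = 1, exact solution exp(-t)
        f = lambda t, u: -u
        t = np.linspace(0.0, 1.0, 21)
        u_init = np.ones(len(t) - 1)                     # guess for u(t_1), ..., u(t_n)
        res = minimize(error_functional, u_init, args=(f, 1.0, t), method="BFGS")
        print(res.x[-1], np.exp(-1.0))                   # approximation vs exact at t = 1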

    Algorithms based on Adomian polynomials and variational iteration for solving nonlinear equations

    This thesis deals with Adomian polynomials and the variational iteration technique, which are used to build iterative methods for solving nonlinear equations of the form f(x) = 0. The main objective is to generate new algorithms and new iterative schemes that yield new formulas and iterative methods. Adomian polynomials are studied and new variants of Newton's method are constructed. The variational iteration technique is also studied, recovering some known results as well as producing new schemes and, hence, new iterative methods. The study reviews the existing formulas and creates new ones by means of mathematical procedures based on Adomian polynomials and the variational iteration technique. The construction of the main iterative schemes is developed, together with the analysis of their convergence, with emphasis on the order of convergence of each method. In this way the main iterative schemes of each method are obtained, by deriving their constructive procedure and analysing their convergence. Roots of nonlinear functions are computed for some of the benchmark functions used in the scientific articles consulted. A comparison is also carried out between the existing algorithms and those designed in our research, using the following criteria: order of convergence, computational efficiency, operational index, maximum and minimum number of functional evaluations, and computational efficiency index. According to the results obtained from these comparisons, our algorithms perform very well with respect to those existing in the literature in this area of knowledge.
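    As one concrete, hedged example of the kind of Newton variant that an Adomian-decomposition argument can produce (it coincides with the classical third-order Chebyshev scheme x_{n+1} = x_n - f/f' - f^2 f''/(2 f'^3)), consider the following Python sketch; it is shown only for illustration and is not claimed to be one of the new algorithms of this thesis.

        # Hedged sketch: a classical third-order Newton variant obtainable via an
        # Adomian-decomposition argument (it coincides with Chebyshev's method):
        #   x_{n+1} = x_n - f/f' - f^2 f'' / (2 f'^3)
        def adomian_newton_variant(f, df, d2f, x0, tol=1e-12, max_iter=50):
            x = x0
            for _ in range(max_iter):
                fx, dfx = f(x), df(x)
                if abs(fx) < tol:
                    break
                x = x - fx / dfx - fx**2 * d2f(x) / (2.0 * dfx**3)
            return x

        # Example: root of x^2 - 5 starting from x0 = 2 (sqrt(5) ~ 2.2360679)
        print(adomian_newton_variant(lambda x: x**2 - 5.0,
                                     lambda x: 2.0 * x,
                                     lambda x: 2.0,
                                     x0=2.0))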

    Solving Large Dense Symmetric Eigenproblem on Hybrid Architectures

    The dense symmetric eigenproblem is one of the most significant problems in numerical linear algebra and arises in numerous research fields such as bioinformatics, computational chemistry, and meteorology. In recent years, the problems arising in these fields have become larger than ever, resulting in growing demands on both computational power and storage capacity. In such problems the eigenproblem becomes the main computational bottleneck, and solving it requires extremely high computational power. Modern computing architectures that can meet these growing demands combine traditional multi-core processors with general-purpose GPUs and are called hybrid systems. These systems exhibit very high performance when the data fit into the GPU memory; however, if the volume of the data exceeds the total GPU memory, i.e. the data are out-of-core from the GPU perspective, the performance decreases rapidly.
    This dissertation is focused on the development of algorithms that solve dense symmetric eigenproblems on hybrid GPU-based architectures. In particular, it aims at developing eigensolvers that exhibit very high performance even if a problem is out-of-core for the GPU. The developed out-of-core eigensolvers are evaluated and compared on real problems that arise in the simulation of molecular motions. In such problems the data, usually too large to fit into the GPU memory, are stored in the main memory and copied to the GPU memory in pieces. This approach causes a performance drop due to the slow interconnect and the high memory latency. To overcome this problem, an approach is presented that applies a blocking strategy and redesigns the existing eigensolvers in order to decrease the volume of data transferred and the number of memory transfers. This approach designs and implements a set of block-oriented, communication-avoiding BLAS routines that overlap data transfers with computation. Next, these routines are applied to speed up the following eigensolvers: the solver based on the multi-stage reduction to tridiagonal form, the Krylov subspace-based method, and the spectral divide-and-conquer method.
    Although the out-of-core BLAS routines significantly improve the performance of these three eigensolvers, a careful redesign is required in order to tackle the solution of large eigenproblems on hybrid CPU-GPU systems. In the out-of-core multi-stage reduction approach, the factor that most influences performance is the band size of the obtained band matrix. The Krylov subspace-based method, although built on memory-bound BLAS-2 operations, is the fastest method if only a small subset of the eigenpairs is required. Finally, the spectral divide-and-conquer algorithm, which has a significantly higher arithmetic cost than the other two eigensolvers, achieves extremely high performance since it can be expressed entirely in terms of compute-bound BLAS-3 operations; moreover, its high arithmetic cost is further reduced by exploiting the special structure of the matrix. The results presented in the dissertation show that, for a set of specific macromolecular problems, the three out-of-core eigensolvers significantly outperform the multi-core variants and attain a high flop rate even if the data do not fit into the GPU memory. This proves that it is possible to solve large eigenproblems on modest computing systems equipped with a single GPU.
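    The dissertation's eigensolvers are not reproduced here. As a small, hedged illustration of the blocking idea behind the out-of-core routines (a matrix larger than device memory is processed in panels so that only one panel is resident at a time, and each panel is handled with compute-bound BLAS-3 work), the following NumPy sketch simulates the pattern on the CPU; panel_cols is a stand-in for whatever block size the GPU memory would allow.

        import numpy as np

        # Hedged sketch of the out-of-core blocking pattern: a matrix too large
        # for device memory is processed in column panels, so only one panel is
        # "resident" at a time. The operation here is a plain blocked matrix
        # product (a BLAS-3 kernel); the "device" is only simulated.
        def blocked_matmul(A, B, panel_cols=256):
            m, _ = A.shape
            _, n = B.shape
            C = np.zeros((m, n))
            for j in range(0, n, panel_cols):
                panel = B[:, j:j + panel_cols]       # "copy" one panel to the device
                C[:, j:j + panel_cols] = A @ panel   # BLAS-3 work on the resident panel
            return C

        rng = np.random.default_rng(0)
        A = rng.standard_normal((500, 400))
        B = rng.standard_normal((400, 600))
        print(np.allclose(blocked_matmul(A, B, panel_cols=128), A @ B))   # True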