
    A new three-step class of iterative methods for solving nonlinear systems

    [EN] In this work, a new class of iterative methods for solving nonlinear equations is presented, together with its extension to nonlinear systems of equations. The family is developed by using a scalar and a matrix weight-function procedure, respectively, attaining sixth order of convergence in both cases. Several numerical examples are given to illustrate the efficiency and performance of the proposed methods.

    This research has been partially supported by both Generalitat Valenciana and Ministerio de Ciencia, Investigación y Universidades, under grants PROMETEO/2016/089 and PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE), respectively.

    Capdevila-Brown, RR.; Cordero Barbero, A.; Torregrosa Sánchez, JR. (2019). A new three-step class of iterative methods for solving nonlinear systems. Mathematics. 7(12):1-14. https://doi.org/10.3390/math7121221
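    The abstract does not reproduce the iteration itself, and the weight functions are the paper's contribution. As a hypothetical illustration of the bare three-step composition such families build on, here is a generic frozen-Jacobian sketch for systems (note: without the weight functions this skeleton only reaches fourth order, not the sixth order of the proposed class; all names are illustrative):

```python
import numpy as np

def three_step_frozen(F, J, x0, tol=1e-10, max_iter=100):
    """Three Newton-type substeps reusing a single Jacobian evaluation.

    Generic illustration only: the published sixth-order family inserts
    a matrix weight function that this sketch omits.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)                              # Jacobian frozen for all substeps
        y = x - np.linalg.solve(Jx, F(x))      # predictor (Newton step)
        z = y - np.linalg.solve(Jx, F(y))      # first corrector
        x_new = z - np.linalg.solve(Jx, F(z))  # second corrector
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

    Reusing one factorization of the Jacobian across the substeps is what makes such multistep compositions cheaper per order gained than repeated Newton iterations.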

    Design, Analysis, and Applications of Iterative Methods for Solving Nonlinear Systems

    In this chapter, we present an overview of some multipoint iterative methods for solving nonlinear systems, obtained by using different techniques such as composition of known methods, the weight-function procedure, and pseudo-composition. The dynamical study of these iterative schemes provides valuable information about their stability and reliability. A numerical test on a specific problem from chemistry is performed to compare the described methods with classical ones and to confirm the theoretical results.

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. Particular emphasis is placed on the tensor train (TT) and Hierarchical Tucker (HT) decompositions and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.

    Comment: 232 pages
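    As a minimal sketch of the TT idea described above, assuming the standard TT-SVD algorithm (sequential reshapes and truncated SVDs; function names are illustrative, not from the monograph), a dense tensor can be compressed into a chain of three-way cores:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a dense tensor into TT cores via sequential truncated SVDs."""
    shape = tensor.shape
    cores, r = [], 1
    C = tensor
    for k in range(len(shape) - 1):
        C = C.reshape(r * shape[k], -1)              # unfold current remainder
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))     # truncated TT rank
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = s[:rk, None] * Vt[:rk]                   # carry remainder forward
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor (for verification)."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

    For a tensor of low TT rank, the cores store far fewer entries than the full array, which is the "super-compression" the text refers to.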

    Generalizing Traub's method to a parametric iterative class for solving multidimensional nonlinear problems

    [EN] In this work, we modify the iterative structure of Traub's method to include a real parameter α. A parametric family of iterative methods is obtained as a generalization of Traub's method, which is itself a member of the family. Cubic order of convergence is proved for any value of α. Then, a dynamical analysis is performed by applying the family to a system of cubic polynomials, by means of multidimensional real dynamics. This analysis allows us to select the most stable members of the family, as a preliminary study to be generalized to any nonlinear function. Finally, some iterative schemes of the family are used to check numerically the previous developments when they are applied to approximate the solutions of academic nonlinear problems and a chemical diffusion-reaction problem.

    ERDF "A way of making Europe", Grant/Award Number: PGC2018-095896-B-C22; MICoCo of Universidad Internacional de La Rioja (UNIR), Grant/Award Number: PGC2018-095896-B-C22.

    Chicharro, FI.; Cordero Barbero, A.; Garrido-Saez, N.; Torregrosa Sánchez, JR. (2023). Generalizing Traub's method to a parametric iterative class for solving multidimensional nonlinear problems. Mathematical Methods in the Applied Sciences. 1-14. https://doi.org/10.1002/mma.9371
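    For reference, the classical two-step Traub scheme that the family generalizes can be sketched in scalar form (the scheme itself is standard; the paper's α-parametrization is not reproduced here):

```python
def traub(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Traub method: a Newton predictor followed by a
    corrector that reuses the same derivative, giving cubic convergence."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        y = x - f(x) / d      # Newton predictor
        x_new = y - f(y) / d  # corrector with frozen derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

    Two function evaluations and one derivative per iteration yield third order, versus Newton's second order at one evaluation each.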

    Stable high-order iterative methods for solving nonlinear models

    [EN] There are several problems in pure and applied science that can be studied in the unified framework of scalar and vectorial nonlinear equations. In this paper, we propose a sixth-order family of Jarratt-type methods for solving nonlinear equations. Further, we extend this family to the multidimensional case, preserving the order of convergence. Their theoretical and computational properties are fully investigated, along with two main theorems describing the order of convergence. We use complex dynamics techniques in order to select, among the elements of this class of iterative methods, those that are more stable. This is done by analyzing the conjugacy class, calculating the fixed and critical points, and drawing conclusions from parameter and dynamical planes. For the implementation of the proposed schemes for systems of nonlinear equations, we consider some applied science problems, namely the Van der Pol problem, kinematic syntheses, etc. Further, we compare them with existing sixth-order methods to check the validity of the theoretical results. From the numerical experiments, we find that our proposed schemes perform better than the existing ones. We also consider a variety of nonlinear equations to check the performance of the proposed methods for scalar equations.

    This research was partially supported by Ministerio de Economía y Competitividad MTM2014-52016-C2-2-P and by Generalitat Valenciana PROMETEO/2016/089.

    Behl, R.; Cordero Barbero, A.; Motsa, SS.; Torregrosa Sánchez, JR. (2017). Stable high-order iterative methods for solving nonlinear models. Applied Mathematics and Computation. 303:70-88. https://doi.org/10.1016/j.amc.2017.01.029
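    The sixth-order family is of Jarratt type; as a hedged illustration, here is the classical fourth-order Jarratt step on which such schemes are built (scalar form only, not the authors' sixth-order extension):

```python
def jarratt(f, df, x0, tol=1e-12, max_iter=50):
    """Classical fourth-order Jarratt method (scalar form)."""
    x = x0
    for _ in range(max_iter):
        fx, dx = f(x), df(x)
        y = x - (2.0 / 3.0) * fx / dx  # two-thirds Newton predictor
        dy = df(y)
        den = 3.0 * dy - dx
        if den == 0.0:
            break                      # degenerate configuration; stop
        x_new = x - 0.5 * (3.0 * dy + dx) / den * fx / dx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

    Two derivative evaluations and one function evaluation per iteration give fourth order; the paper's family appends a further step to reach sixth order.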