Convergence and dynamics of improved Chebyshev-Secant-type methods for non-differentiable operators
In this paper, the convergence and dynamics of improved Chebyshev-Secant-type iterative methods are studied for solving nonlinear equations in Banach space settings. Their semilocal convergence is established using recurrence relations under weaker continuity conditions on first-order divided differences. Convergence theorems are established for the existence and uniqueness of solutions. Next, a center-Lipschitz condition is defined on the first-order divided differences, and its influence on the domain of starting iterates is compared with that of the corresponding Lipschitz conditions. Several numerical examples, including automotive steering problems and nonlinear mixed Hammerstein-type integral equations, are analyzed, and the results are compared with those obtained by similar existing iterative methods. Improved results are obtained for all the numerical examples. Further, a dynamical analysis of the iterative method is carried out, confirming that the proposed method has better stability properties than its competitors.

This research was partially supported by Ministerio de Economía y Competitividad under grant PGC2018-095896-B-C22.

Kumar, A.; Gupta, D.K.; Martínez Molada, E.; Hueso, J.L. (2021). Convergence and dynamics of improved Chebyshev-Secant-type methods for non-differentiable operators. Numerical Algorithms 86(3):1051-1070. https://doi.org/10.1007/s11075-020-00922-9
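The divided-difference idea behind such secant-type schemes can be illustrated with a minimal sketch. This is the plain secant iteration, not the authors' improved Chebyshev-Secant variant, applied to an illustrative scalar equation with a non-differentiable term:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration built on first-order divided differences.

    No derivatives are needed, so the method applies to
    non-differentiable operators such as the |x| term below.
    """
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        dd = (f1 - f0) / (x1 - x0)   # first-order divided difference
        x0, x1 = x1, x1 - f1 / dd    # secant step
        if abs(x1 - x0) < tol:
            break
    return x1

# Example equation (chosen here for illustration): x^3 + |x| - 1 = 0
root = secant(lambda x: x**3 + abs(x) - 1, 0.0, 1.0)
```

The divided difference replaces the derivative required by Newton-type methods, which is exactly what makes such schemes usable on non-differentiable operators.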
Fixed-point and coordinate descent algorithms for regularized kernel methods
In this paper, we study two general classes of optimization algorithms for
kernel methods with convex loss function and quadratic norm regularization, and
analyze their convergence. The first approach, based on fixed-point iterations,
is simple to implement and analyze, and can be easily parallelized. The second,
based on coordinate descent, exploits the structure of additively separable
loss functions to compute solutions of line searches in closed form. Instances
of these general classes of algorithms are already incorporated into
state-of-the-art machine learning software for large-scale problems. We start from a
solution characterization of the regularized problem, obtained using
sub-differential calculus and resolvents of monotone operators, that holds for
general convex loss functions regardless of differentiability. The two
methodologies described in the paper can be regarded as instances of non-linear
Jacobi and Gauss-Seidel algorithms, and both are well suited to large-scale
problems.
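For the special case of the square loss (kernel ridge regression), the fixed-point approach reduces to a particularly simple sketch. The RBF kernel, the data, and the contraction condition `lam * n > ||K||` below are illustrative assumptions, not the paper's general convex-loss setting:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fixed_point_krr(K, y, lam, n_iter=500):
    """Fixed-point iteration for kernel ridge regression.

    The optimality condition (K + lam*n*I) alpha = y is rewritten as
    alpha = (y - K alpha) / (lam*n), a contraction when lam*n > ||K||.
    """
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        alpha = (y - K @ alpha) / (lam * n)
    return alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = rng.normal(size=20)
K = rbf_kernel(X)
lam = 1.0   # large enough here that lam * n exceeds ||K||
alpha = fixed_point_krr(K, y, lam)

# Direct solve of the same optimality condition, for comparison
alpha_direct = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
```

Each iteration costs one kernel-matrix product and is trivially parallelizable, which is the appeal of the fixed-point family over a direct solve.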
Preconditioned fully implicit PDE solvers for monument conservation
Mathematical models for the description, in a quantitative way, of the
damages induced on the monuments by the action of specific pollutants are often
systems of nonlinear, possibly degenerate, parabolic equations. Although some
of the asymptotic properties of the solutions are known, for a short window of
time, one needs a numerical approximation scheme in order to have a
quantitative forecast at any time of interest. In this paper a fully implicit
numerical method is proposed, analyzed and numerically tested for parabolic
equations of porous-media type and on a system of two PDEs that models the
sulfation of marble in monuments. Due to the nonlinear nature of the underlying
mathematical model, the use of a fixed point scheme is required and every step
implies the solution of large, locally structured, linear systems. A special
effort is devoted to the spectral analysis of the relevant matrices and to the
design of appropriate iterative or multi-iterative solvers, with special
attention to preconditioned Krylov methods and to multigrid procedures.
Numerical experiments for the validation of the analysis complement this
contribution.

Comment: 26 pages, 13 figures
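A minimal 1-D sketch of the kind of scheme described: implicit Euler for the porous-medium equation u_t = (u^m)_xx, with a lagged-coefficient fixed-point inner loop that solves one linear system per sweep. The dense `np.linalg.solve` here is only a stand-in for the structured, preconditioned Krylov or multigrid solvers the paper analyzes:

```python
import numpy as np

def implicit_pm_step(u, dt, dx, m=2, tol=1e-12, max_inner=200):
    """One implicit Euler step for u_t = (u^m)_xx, homogeneous Dirichlet BCs.

    The nonlinearity is handled by a fixed-point (lagged-coefficient)
    loop: each sweep freezes v^(m-1) and solves the resulting linear,
    locally structured system.
    """
    n = len(u)
    # Standard second-difference Laplacian on the interior grid
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    v = u.copy()
    for _ in range(max_inner):
        A = np.eye(n) - dt * (L * v**(m - 1))  # columns of L scaled by v^(m-1)
        v_new = np.linalg.solve(A, u)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

x = np.linspace(0.0, 1.0, 41)[1:-1]       # interior grid points
u0 = np.exp(-100.0 * (x - 0.5) ** 2)      # initial bump
u1 = implicit_pm_step(u0, dt=5e-5, dx=x[1] - x[0])
```

The frozen-coefficient matrix is an M-matrix, so each sweep preserves nonnegativity and the discrete maximum principle, mirroring the degenerate-parabolic structure discussed in the abstract.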
Solving ill-posed inverse problems using iterative deep neural networks
We propose a partially learned approach for the solution of ill-posed inverse
problems with not necessarily linear forward operators. The method builds on
ideas from classical regularization theory and recent advances in deep learning
to perform learning while making use of prior information about the inverse
problem encoded in the forward operator, noise model and a regularizing
functional. The method results in a gradient-like iterative scheme, where the
"gradient" component is learned using a convolutional network that includes the
gradients of the data discrepancy and regularizer as input in each iteration.
We present results of such a partially learned gradient scheme on a non-linear
tomographic inversion problem with simulated data from both the Shepp-Logan
phantom as well as a head CT. The outcome is compared against FBP and TV
reconstruction and the proposed method provides a 5.4 dB PSNR improvement over
the TV reconstruction while being significantly faster, giving reconstructions
of 512 x 512 volumes in about 0.4 seconds using a single GPU.
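The structure of such a gradient-like scheme can be sketched as follows. Everything here is an illustrative assumption: the forward operator is linear (the paper allows nonlinear ones), the regularizer is a simple Tikhonov term, and `learned_update` is a hand-weighted two-channel stand-in for the convolutional network, untrained:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 50)) / np.sqrt(30)   # toy linear forward operator
x_true = rng.normal(size=50)
y = A @ x_true + 0.01 * rng.normal(size=30)   # noisy data

def grad_data(x):
    """Gradient of the data discrepancy 0.5 * ||A x - y||^2."""
    return A.T @ (A @ x - y)

def grad_reg(x):
    """Gradient of the regularizer 0.5 * ||x||^2 (Tikhonov stand-in)."""
    return x

def learned_update(x, gd, gr, theta):
    # Stand-in for the convolutional network Lambda_theta: the scheme
    # feeds it the two gradient channels; here it is just a fixed
    # linear combination with hand-picked (untrained) weights.
    return theta[0] * gd + theta[1] * gr

theta = np.array([-0.1, -0.01])   # illustrative weights, not learned
x = np.zeros(50)
for _ in range(100):
    x = x + learned_update(x, grad_data(x), grad_reg(x), theta)
```

With these weights the scheme degenerates to plain regularized gradient descent; the paper's point is that replacing the fixed combination with a trained network keeps this iterative skeleton while learning a much better update.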