19 research outputs found

    A New Hybrid Approach for Solving Large-scale Monotone Nonlinear Equations

    In this paper, a new hybrid conjugate gradient method for solving monotone nonlinear equations is introduced. The scheme is a combination of the Fletcher-Reeves (FR) and Polak-Ribiére-Polyak (PRP) conjugate gradient methods with the Solodov and Svaiter projection strategy. Using suitable assumptions, the global convergence of the scheme with monotone line search is provided. Lastly, a numerical experiment was used to enumerate the suitability of the proposed scheme for large-scale problems

    A dynamical view of nonlinear conjugate gradient methods with applications to FFT-based computational micromechanics

    For fast Fourier transform (FFT)-based computational micromechanics, solvers need to be fast, memory-efficient, and independent of tedious parameter calibration. In this work, we investigate the benefits of nonlinear conjugate gradient (CG) methods in the context of FFT-based computational micromechanics. Traditionally, nonlinear CG methods require dedicated line-search procedures to be efficient, rendering them not competitive in the FFT-based context. We contribute to nonlinear CG methods devoid of line searches by exploiting similarities between nonlinear CG methods and accelerated gradient methods. More precisely, by letting the step-size go to zero, we exhibit the Fletcher-Reeves nonlinear CG as a dynamical system with state-dependent nonlinear damping. We show how to implement nonlinear CG methods for FFT-based computational micromechanics, and demonstrate by numerical experiments that the Fletcher-Reeves nonlinear CG represents a competitive, memory-efficient and parameter-choice-free solution method for linear and nonlinear homogenization problems, which, in addition, decreases the residual monotonically.
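    A line-search-free Fletcher-Reeves iteration of the kind discussed above can be sketched as follows; the fixed step size stands in for the paper's damping/step parameter and is purely illustrative, not the paper's scheme:

    ```python
    import numpy as np

    def fletcher_reeves_fixed_step(grad, x0, step=0.1, tol=1e-8, max_iter=2000):
        """Minimal Fletcher-Reeves nonlinear CG with a fixed step size
        (no line search): the beta coefficient acts as state-dependent
        momentum/damping on the search direction."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            x = x + step * d
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        return x
    ```

    Because only the current gradient and direction are stored, the memory footprint matches plain gradient descent, which is the memory-efficiency advantage the abstract refers to.
    
    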

    Extension of Modified Polak-Ribière-Polyak Conjugate Gradient Method to Linear Equality Constraints Minimization Problems

    Combining the Rosen gradient projection method with the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method, we propose a two-term PRP conjugate gradient projection method for solving linear equality constrained optimization problems. The proposed method possesses some attractive properties: (1) the search direction generated by the proposed method is a feasible descent direction, so the generated iterates remain feasible points; (2) the sequence of function values is decreasing. Under some mild conditions, we show that it is globally convergent with an Armijo-type line search. Preliminary numerical results show that the proposed method is promising.
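    The Rosen-style projection that keeps iterates feasible for A x = b can be sketched as follows; for brevity the direction here is plain projected steepest descent rather than the paper's two-term PRP direction:

    ```python
    import numpy as np

    def projected_descent_step(grad_f, A, x, step=0.1):
        """One Rosen-type gradient projection step: project the negative
        gradient onto the null space of A so the search direction keeps
        iterates feasible for the linear equality constraints A x = b."""
        # null-space projector P = I - A^T (A A^T)^{-1} A
        AAT_inv = np.linalg.inv(A @ A.T)
        P = np.eye(A.shape[1]) - A.T @ AAT_inv @ A
        d = -P @ grad_f(x)        # feasible descent direction
        return x + step * d
    ```

    Starting from any feasible point, every iterate stays on the constraint manifold, which is property (1) in the abstract.
    
    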

    Effective Modified Hybrid Conjugate Gradient Method for Large-Scale Symmetric Nonlinear Equations

    In this paper, we propose a hybrid conjugate gradient method that uses a convex combination of the FR and PRP conjugate gradient methods, following Andrei's approach, for solving large-scale symmetric nonlinear equations with a nonmonotone line search. A formula for the convex combination parameter, based on the Newton direction and our proposed direction, is also derived. Under appropriate conditions, global convergence is established. Reported numerical results show that the proposed method is very promising.

    A Family of Hybrid Stochastic Conjugate Gradient Algorithms for Local and Global Minimization Problems

    This paper contains two main parts, Part I and Part II, which discuss the local and global minimization problems, respectively. In Part I, a new conjugate gradient (CG) technique is suggested and then combined with a line-search technique to obtain a globally convergent algorithm. Finite-difference approximations are used to compute approximate values of the first derivative of the function f. The convergence analysis of the suggested method is established. Comparisons between the performance of the new CG method and the performance of four other CG methods demonstrate that the proposed CG method is promising and competitive for finding a local optimum point. In Part II, three formulas are designed by which a group of solutions is generated. This set of random formulas is hybridized with the globally convergent CG algorithm to obtain a hybrid stochastic conjugate gradient algorithm, denoted HSSZH, which finds an approximate value of the global solution of a global optimization problem. Five combined stochastic conjugate gradient algorithms are constructed. Performance profiles are used to assess and compare the performance of the family of hybrid stochastic conjugate gradient algorithms. The comparison between our proposed HSSZH algorithm and four other hybrid stochastic conjugate gradient techniques demonstrates that the suggested HSSZH method is competitive with, and in all cases superior to, the four algorithms in terms of efficiency, reliability and effectiveness at finding the approximate solution of a global optimization problem that contains a non-convex function.
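    The finite-difference derivative approximation mentioned in Part I can be sketched as follows; the forward-difference step h is an illustrative choice, not necessarily the paper's:

    ```python
    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        """Forward-difference approximation of the gradient of f at x,
        used when analytic derivatives are unavailable."""
        fx = f(x)
        g = np.empty_like(x, dtype=float)
        for i in range(len(x)):
            e = np.zeros_like(x, dtype=float)
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g
    ```

    The forward difference costs one extra function evaluation per coordinate and has O(h) truncation error; central differences halve the error order at twice the cost.
    
    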

    Robust parallel nonlinear solvers for implicit time discretizations of the Bidomain equations

    In this work, we study the convergence and performance of nonlinear solvers for the Bidomain equations after decoupling the ordinary and partial differential equations of the cardiac system. Firstly, we provide a rigorous proof of the global convergence of Quasi-Newton methods, such as BFGS, and nonlinear Conjugate-Gradient methods, such as Fletcher-Reeves, for the Bidomain system, by analyzing an auxiliary variational problem under physically reasonable hypotheses. Secondly, we compare several nonlinear Bidomain solvers in terms of execution time, robustness with respect to the data and parallel scalability. Our findings indicate that Quasi-Newton methods are the best choice for nonlinear Bidomain systems, since they exhibit faster convergence rates compared to standard Newton-Krylov methods, while maintaining robustness and scalability. Furthermore, first-order methods also demonstrate competitiveness and serve as a viable alternative, particularly for matrix-free implementations that are well-suited for GPU computing.
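    For reference, a single BFGS update of the inverse-Hessian approximation, the generic textbook formula rather than anything specific to the Bidomain solver, looks like:

    ```python
    import numpy as np

    def bfgs_update(H, s, y):
        """One BFGS update of the inverse-Hessian approximation H, given
        the step s = x_new - x_old and the gradient change
        y = grad_new - grad_old; the result satisfies the secant
        condition H_new @ y == s."""
        rho = 1.0 / (y @ s)
        I = np.eye(len(s))
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)
    ```

    In large-scale or matrix-free settings one would store only the recent (s, y) pairs (L-BFGS) instead of the dense matrix H.
    
    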

    Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models

    Recent works have shown that line search methods can speed up Stochastic Gradient Descent (SGD) and Adam in modern over-parameterized settings. However, existing line searches may take steps that are smaller than necessary since they require a monotone decrease of the (mini-)batch objective function. We explore nonmonotone line search methods to relax this condition and possibly accept larger step sizes. Despite the lack of a monotonic decrease, we prove the same fast rates of convergence as in the monotone case. Our experiments show that nonmonotone methods improve the speed of convergence and generalization properties of SGD/Adam even beyond the previous monotone line searches. We propose a POlyak NOnmonotone Stochastic (PoNoS) method, obtained by combining a nonmonotone line search with a Polyak initial step size. Furthermore, we develop a new resetting technique that in the majority of the iterations reduces the amount of backtracks to zero while still maintaining a large initial step size. To the best of our knowledge, a first runtime comparison shows that the epoch-wise advantage of line-search-based methods gets reflected in the overall computational time.
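    A nonmonotone Armijo backtracking step of the kind described above can be sketched as follows; the step is accepted against the maximum of recent objective values rather than the current one. The Polyak initial step size and the resetting technique of PoNoS are omitted here:

    ```python
    import numpy as np

    def nonmonotone_backtracking(f, grad, x, f_hist, c=1e-4, rho=0.5, t0=1.0):
        """Nonmonotone Armijo backtracking: accept a step if f decreases
        relative to the max over the last few iterates (f_hist) rather
        than the current value, allowing larger step sizes."""
        g = grad(x)
        d = -g
        ref = max(f_hist)                 # nonmonotone reference value
        t = t0
        while f(x + t * d) > ref + c * t * (g @ d) and t > 1e-12:
            t *= rho
        return x + t * d, t
    ```

    With a window of size one (f_hist containing only the current value) this reduces to the classical monotone Armijo condition.
    
    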

    Nonlinear hyperelasticity-based mesh optimisation

    In this work, various aspects of PDE-based mesh optimisation are treated. Different existing methods are presented, with the focus on a class of nonlinear mesh quality functionals that can guarantee the orientation preserving property. This class is extended from simplex to hypercube meshes in 2d and 3d. The robustness of the resulting mesh optimisation method allows the incorporation of unilateral boundary conditions of place and r-adaptivity with direct control over the resulting cell sizes. Also, alignment to (implicit) surfaces is possible, but in all cases, the resulting functional is hard to treat analytically and numerically. Using theoretical results from mathematical elasticity for hyperelastic materials, the existence and non-uniqueness of minimisers can be established. This carries over to the discrete case, for the solution of which tools from nonlinear optimisation are used. Because of the considerable numerical effort, a class of linear preconditioners is developed that helps to speed up the solution process.
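    One classical member of the orientation-preserving functional class discussed above, evaluated on a 2d simplex, is the ratio ||F||_F^2 / det(F), which blows up as the element degenerates; a sketch (the thesis's actual functionals may differ):

    ```python
    import numpy as np

    def triangle_quality(p0, p1, p2):
        """Orientation-aware quality functional for a 2d simplex:
        ||F||_F^2 / det(F), where F maps the reference triangle to the
        cell.  The value tends to +inf as det(F) -> 0+, penalising
        degenerate cells, and inverted cells are rejected outright."""
        F = np.column_stack((p1 - p0, p2 - p0))   # edge matrix
        det = np.linalg.det(F)
        if det <= 0.0:
            return np.inf                          # inverted element
        return (F * F).sum() / det
    ```

    Summing such cell values over the mesh gives a nonlinear objective whose minimisers cannot contain inverted cells, which is the orientation-preserving property mentioned above.
    
    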