    General Form of Nonmonotone Line Search Techniques For Unconstrained Optimization

    By using a forcing function, we propose a general form of nonmonotone line search technique for unconstrained optimization. The technique includes several well-known nonmonotone line searches as special cases while remaining independent of the nonmonotone parameter. We establish the global convergence of the method under weak conditions, and we report numerical tests with a modified BFGS method to show the effectiveness of the proposed method.
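
    The idea can be illustrated with a minimal Python sketch of a backtracking nonmonotone line search in the Grippo-Lampariello-Lucidi style: a trial step is accepted against the maximum of recent function values rather than the current one, minus a forcing-function term. The particular forcing function sigma(t) = 1e-4 * t**2 and the backtracking parameters are illustrative assumptions, not the paper's general form.

        import numpy as np

        def nonmonotone_line_search(f, x, d, f_hist, alpha0=1.0, rho=0.5,
                                    sigma=lambda t: 1e-4 * t**2, max_iter=50):
            """Backtracking nonmonotone line search (GLL-style sketch).

            Accepts alpha once f(x + alpha*d) <= max(f_hist) - sigma(alpha*||d||),
            where f_hist holds the last few objective values and sigma is a
            forcing function (sigma(t) > 0 for t > 0).
            """
            ref = max(f_hist)              # nonmonotone reference value
            dnorm = np.linalg.norm(d)
            alpha = alpha0
            for _ in range(max_iter):
                if f(x + alpha * d) <= ref - sigma(alpha * dnorm):
                    return alpha
                alpha *= rho               # backtrack
            return alpha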

    Global Convergence of a Nonlinear Conjugate Gradient Method

    A modified PRP nonlinear conjugate gradient method for solving unconstrained optimization problems is proposed. An important property of the proposed method is that the sufficient descent property is guaranteed independently of any line search. Using the Wolfe line search, the global convergence of the proposed method is established for nonconvex minimization. Numerical results show that the proposed method is effective and promising in comparison with the VPRP, CG-DESCENT, and DL+ methods.
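
    For context, the classical (unmodified) PRP iteration looks as follows; this sketch adds the common PRP+ safeguard max(beta, 0), delegates the Wolfe search to scipy.optimize.line_search, and does not reproduce the paper's modification that guarantees sufficient descent.

        import numpy as np
        from scipy.optimize import line_search

        def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
            """Classical PRP conjugate gradient with a PRP+ restart safeguard."""
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                alpha = line_search(f, grad, x, d)[0]   # Wolfe line search
                if alpha is None:
                    alpha = 1e-4                        # fall back to a small step
                x_new = x + alpha * d
                g_new = grad(x_new)
                beta = g_new @ (g_new - g) / (g @ g)    # PRP formula
                d = -g_new + max(beta, 0.0) * d         # PRP+ safeguard
                x, g = x_new, g_new
            return x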

    Practical Inexact Proximal Quasi-Newton Method with Global Complexity Analysis

    Recently, several methods were proposed for sparse optimization which make careful use of second-order information [10, 28, 16, 3] to improve local convergence rates. These methods construct a composite quadratic approximation using Hessian information, optimize this approximation using a first-order method such as coordinate descent, and employ a line search to ensure sufficient descent. Here we propose a general framework which includes slightly modified versions of existing algorithms as well as a new algorithm that uses limited-memory BFGS Hessian approximations, and we provide a novel global convergence rate analysis that covers methods solving subproblems via coordinate descent.
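
    The subproblem structure described above can be sketched for an l1-regularized objective: a composite quadratic model is minimized by coordinate descent, each coordinate having a closed-form soft-thresholding update. The dense Hessian approximation H and the l1 term are illustrative assumptions; the paper's L-BFGS representation, inexactness criteria, and line search are omitted.

        import numpy as np

        def soft(z, t):
            """Soft-thresholding operator: prox of t * |.|."""
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def prox_qn_step(g, H, x, lam, sweeps=10):
            """Approximately minimize the composite quadratic model
                g @ p + 0.5 * p @ H @ p + lam * ||x + p||_1
            over the step p by cyclic coordinate descent."""
            p = np.zeros_like(x)
            for _ in range(sweeps):
                for i in range(len(x)):
                    # linear coefficient of the model in p[i], others held fixed
                    c = g[i] + H[i] @ p - H[i, i] * p[i]
                    # closed-form coordinate minimizer via soft-thresholding
                    u = soft(x[i] - c / H[i, i], lam / H[i, i])
                    p[i] = u - x[i]
            return p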

    Modification of Nonlinear Conjugate Gradient Method with Weak Wolfe-Powell Line Search

    The conjugate gradient (CG) method is used to find optimal solutions of large-scale unconstrained optimization problems. Owing to its simple algorithm, low memory requirement, and speed in obtaining the solution, the method is widely used in many fields, such as engineering, computer science, and medical science. In this paper, we modify the CG method to achieve global convergence with various line searches. In addition, it satisfies the sufficient descent condition without any line search. Numerical computations under the weak Wolfe-Powell line search show that the new method is more efficient than other conventional methods.
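
    Since the result hinges on the weak Wolfe-Powell (WWP) conditions, the following check states them concretely; the paper's modified CG formula itself is not given in the abstract and is not reproduced here.

        import numpy as np

        def satisfies_wwp(f, grad, x, d, alpha, delta=1e-4, sigma=0.9):
            """Check the weak Wolfe-Powell conditions, 0 < delta < sigma < 1:
            sufficient decrease:  f(x + a*d) <= f(x) + delta * a * (g @ d)
            curvature:            grad(x + a*d) @ d >= sigma * (g @ d)
            """
            gd = grad(x) @ d                     # directional derivative at x
            x_trial = x + alpha * d
            decrease = f(x_trial) <= f(x) + delta * alpha * gd
            curvature = grad(x_trial) @ d >= sigma * gd
            return decrease and curvature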

    Two Modified Three-Term Type Conjugate Gradient Methods and Their Global Convergence for Unconstrained Optimization

    Two modified three-term conjugate gradient algorithms which satisfy both the descent condition and the Dai-Liao type conjugacy condition are presented for unconstrained optimization. The first algorithm modifies the Hager-Zhang type algorithm so that the search direction is descent and satisfies Dai-Liao's type conjugacy condition. The second, a simple three-term conjugate gradient method, generates sufficient descent directions at every iteration; moreover, this property is independent of the line search steplength. The algorithms can also be considered as modifications of the MBFGS method, but with a different zk. Under some mild conditions, the given methods are globally convergent for general functions, independently of the Wolfe line search. Numerical experiments show that the proposed methods are very robust and efficient.
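
    A standard three-term direction of this family (in the style of the Zhang-Zhou-Li HS variant, not the paper's two algorithms) shows how sufficient descent can hold regardless of the steplength: by construction, g_new @ d_new = -||g_new||**2.

        import numpy as np

        def three_term_direction(g_new, g_old, d_old):
            """Three-term CG direction with built-in sufficient descent.

            With y = g_new - g_old, beta = g_new @ y / (d_old @ y), and
            theta = g_new @ d_old / (d_old @ y), the direction
                d_new = -g_new + beta * d_old - theta * y
            satisfies g_new @ d_new = -||g_new||**2 for any steplength.
            """
            y = g_new - g_old
            dy = d_old @ y
            if abs(dy) < 1e-12:        # safeguard: restart with steepest descent
                return -g_new
            beta = (g_new @ y) / dy
            theta = (g_new @ d_old) / dy
            return -g_new + beta * d_old - theta * y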

    Parallel Variable Distribution Algorithm for Constrained Optimization with Nonmonotone Technique

    A modified parallel variable distribution (PVD) algorithm for solving large-scale constrained optimization problems is developed, which modifies the quadratic subproblem QPl at each iteration instead of the QPl0 subproblem of the SQP-type PVD algorithm proposed by C. A. Sagastizábal and M. V. Solodov in 2002. The algorithm circumvents the difficulties associated with the possible inconsistency of the QPl0 subproblem of the original SQP method. Moreover, we introduce a nonmonotone technique instead of the penalty function to carry out the line search procedure more flexibly. Under appropriate conditions, the global convergence of the method is established. Finally, parallel numerical experiments are implemented in CUDA on a GPU (Graphics Processing Unit).
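
    The variable-splitting pattern underlying PVD can be sketched serially as below: each block of variables is minimized while the others stay fixed (in a true PVD method each block runs on its own processor), after which a synchronization step combines the candidates. The blocks argument, the unconstrained scipy.optimize.minimize solver, and the best-point synchronization are illustrative assumptions; the SQP subproblems, the nonmonotone line search, and the CUDA implementation are not reproduced.

        import numpy as np
        from scipy.optimize import minimize

        def pvd_iteration(f, x, blocks):
            """One serial sketch of a parallel-variable-distribution iteration.

            blocks is a list of index arrays partitioning the variables;
            each pass of the loop is independent and thus parallelizable.
            """
            candidates = []
            for idx in blocks:
                def f_block(z, idx=idx):
                    y = x.copy()
                    y[idx] = z              # vary only this block of variables
                    return f(y)
                res = minimize(f_block, x[idx])
                y = x.copy()
                y[idx] = res.x
                candidates.append(y)
            return min(candidates, key=f)   # synchronization: keep the best point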

    Optimization via Chebyshev Polynomials

    This paper presents, for the first time, a robust exact line-search method based on a full pseudospectral (PS) numerical scheme employing orthogonal polynomials. The proposed method adopts an adaptive search procedure and combines the superior accuracy of Chebyshev PS approximations with the high-order approximations obtained through Chebyshev PS differentiation matrices (CPSDMs). In addition, the method exhibits a quadratic convergence rate by enforcing an adaptive Newton search iterative scheme. A rigorous error analysis of the proposed method is presented, along with a detailed set of pseudocodes for the established computational algorithms. Several numerical experiments are conducted on one- and multi-dimensional optimization test problems to illustrate the advantages of the proposed strategy.
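
    The CPSDM building block the abstract refers to is the standard Chebyshev-Gauss-Lobatto differentiation matrix; the sketch below follows Trefethen's classic cheb construction, while the adaptive Newton line-search machinery built on top of it is not reproduced. Applying D to samples of a function at the points x approximates its derivative spectrally, which is what lets a Newton scheme locate a line-search minimizer to high accuracy.

        import numpy as np

        def cheb(n):
            """Chebyshev differentiation matrix (Trefethen's cheb).

            Returns (D, x): x are the n+1 Chebyshev-Gauss-Lobatto points on
            [-1, 1], and D is the (n+1) x (n+1) spectral differentiation
            matrix, so that D @ f(x) approximates f'(x).
            """
            if n == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(n + 1) / n)
            c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
            D = np.outer(c, 1.0 / c) / (np.subtract.outer(x, x) + np.eye(n + 1))
            D -= np.diag(D.sum(axis=1))     # negative-sum trick for the diagonal
            return D, x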