    Nonmonotone globalization of the finite-difference Newton-GMRES method for nonlinear equations.

    In this paper, we study nonmonotone globalization strategies in connection with the finite-difference inexact Newton-GMRES method for nonlinear equations. We first define a globalization algorithm that combines nonmonotone watchdog rules and nonmonotone derivative-free linesearches related to a merit function, and prove its global convergence under the assumption that the Jacobian is nonsingular and that the iterations of the GMRES subspace method can be completed at each step. Then we introduce a hybrid stabilization scheme employing occasional line searches along positive bases, and establish global convergence towards a solution of the system under the less demanding condition that the Jacobian is nonsingular at stationary points of the merit function. Through a set of numerical examples, we show that the proposed techniques may constitute useful options to be added to solvers for nonlinear systems of equations. © 2010 Taylor & Francis
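
    The abstract names the ingredients but leaves their form to the paper. Below is a minimal Python sketch of two of them: Jacobian-vector products approximated by forward finite differences and fed to GMRES, and a nonmonotone derivative-free acceptance test on the merit function f(x) = ||F(x)||^2/2. The watchdog rules and the positive-basis stabilization scheme are omitted, and the parameter names (M, sigma, eps) and safeguards are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def fd_newton_gmres(F, x0, max_it=50, tol=1e-8, M=5, sigma=1e-4, eps=1e-7):
    """Sketch: finite-difference inexact Newton-GMRES with a nonmonotone,
    derivative-free acceptance test on f(x) = 0.5*||F(x)||^2.
    Parameters M, sigma, eps are illustrative, not the paper's."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    merit_hist = [0.5 * Fx.dot(Fx)]            # recent merit values

    for _ in range(max_it):
        if np.linalg.norm(Fx) < tol:
            break

        def jv(v):                              # forward-difference J(x) @ v
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            h = eps / nv
            return (F(x + h * v) - Fx) / h

        J = LinearOperator((x.size, x.size), matvec=jv, dtype=float)
        d, _ = gmres(J, -Fx)                    # inexact Newton step

        # nonmonotone, derivative-free backtracking: compare against the
        # max merit value over the last M iterations
        ref, t = max(merit_hist[-M:]), 1.0
        while True:
            Ft = F(x + t * d)
            if 0.5 * Ft.dot(Ft) <= ref - sigma * t**2 * d.dot(d) or t < 1e-10:
                break
            t *= 0.5

        x, Fx = x + t * d, Ft
        merit_hist.append(0.5 * Fx.dot(Fx))
    return x
```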

    Nonmonotone Barzilai-Borwein Gradient Algorithm for $\ell_1$-Regularized Nonsmooth Minimization in Compressive Sensing

    This paper is devoted to minimizing the sum of a smooth function and a nonsmooth $\ell_1$-regularized term. This problem includes as a special case the $\ell_1$-regularized convex minimization problem arising in signal processing, compressive sensing, machine learning, data mining, etc. However, the non-differentiability of the $\ell_1$-norm makes the problem more challenging, especially in the large instances encountered in many practical applications. This paper proposes, analyzes, and tests a Barzilai-Borwein gradient algorithm. At each iteration, the generated search direction enjoys the descent property and can be easily derived by minimizing a local approximate quadratic model while exploiting the favorable structure of the $\ell_1$-norm. Moreover, a nonmonotone line search technique is incorporated to find a suitable stepsize along this direction. The algorithm is easy to implement, requiring only the value of the objective function and the gradient of the smooth term at each iteration. Under some conditions, the proposed algorithm is shown to be globally convergent. Limited experiments using some nonconvex unconstrained problems from the CUTEr library with additive $\ell_1$-regularization illustrate that the proposed algorithm performs quite well. Extensive experiments on $\ell_1$-regularized least squares problems in compressive sensing verify that our algorithm compares favorably with several state-of-the-art algorithms designed specifically for this problem in recent years.
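
    As a concrete illustration of the scheme the abstract describes, here is a hedged Python sketch: the search direction minimizes a local quadratic model built with the Barzilai-Borwein steplength, which, thanks to the $\ell_1$ structure, reduces to a soft-thresholding step, and a nonmonotone (GLL-style) backtracking picks the stepsize. The function names (soft, bb_l1), constants, and safeguards are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def soft(v, r):
    """Soft-thresholding: the proximal map of r * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - r, 0.0)

def bb_l1(f, grad, x0, lam, max_it=500, M=10, gamma=1e-4, tol=1e-8):
    """Sketch of a nonmonotone BB gradient scheme for min f(x) + lam*||x||_1.
    The direction minimizes g'd + ||d||^2/(2*alpha) + lam*||x+d||_1,
    i.e. a prox step with the BB steplength alpha."""
    x = np.asarray(x0, dtype=float)
    g, alpha = grad(x), 1.0
    hist = [f(x) + lam * np.abs(x).sum()]          # recent objective values

    for _ in range(max_it):
        d = soft(x - alpha * g, alpha * lam) - x    # minimizer of local model
        if np.linalg.norm(d) < tol:
            break
        # surrogate directional derivative of the nonsmooth objective
        delta = g.dot(d) + lam * (np.abs(x + d).sum() - np.abs(x).sum())
        ref, t = max(hist[-M:]), 1.0
        while f(x + t * d) + lam * np.abs(x + t * d).sum() > ref + gamma * t * delta:
            t *= 0.5
            if t < 1e-12:
                break
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y, sy = g_new - g, s.dot(g_new - g)
        alpha = s.dot(s) / sy if sy > 1e-12 else 1.0  # safeguarded BB1 steplength
        x, g = x_new, g_new
        hist.append(f(x) + lam * np.abs(x).sum())
    return x
```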

    A Nonmonotone Memory Gradient Method for Unconstrained Optimization

    Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method which generates a descent search direction for the objective function at every iteration and converges globally to the solution if the Wolfe conditions are satisfied within the line search strategy. In this paper, we propose a nonmonotone memory gradient method based on this work. We show that our method converges globally to the solution. Our numerical results show that the proposed method is efficient for some standard test problems if the parameter included in the method is chosen suitably.
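
    To make the idea concrete, the following Python sketch uses a simplified two-term memory direction d = -g + beta*d_prev, with beta capped so that g'd <= -(1 - c)*||g||^2 and the direction stays a descent direction, combined with a nonmonotone (GLL-type) Armijo test. Narushima and Yabe's actual direction uses more memory terms and different parameter rules, so treat this purely as an assumption-laden illustration.

```python
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, max_it=1000, M=10,
                                gamma=1e-4, c=0.5, tol=1e-6):
    """Sketch: two-term memory gradient direction with a safeguarded
    memory coefficient and a nonmonotone Armijo linesearch."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d_prev = np.zeros_like(x)
    hist = [f(x)]                               # recent f-values for the test

    for _ in range(max_it):
        if np.linalg.norm(g) < tol:
            break
        gd = g.dot(d_prev)
        # cap beta so that |beta * g'd_prev| <= c * ||g||^2
        beta = 0.0 if gd == 0.0 else min(1.0, c * g.dot(g) / abs(gd))
        d = -g + beta * d_prev                  # memory gradient direction

        # nonmonotone Armijo: compare with the max of the last M f-values
        ref, t, slope = max(hist[-M:]), 1.0, g.dot(d)
        while f(x + t * d) > ref + gamma * t * slope:
            t *= 0.5
            if t < 1e-12:
                break

        x = x + t * d
        g, d_prev = grad(x), d
        hist.append(f(x))
    return x
```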

    Nonmonotone globalization techniques for the Barzilai-Borwein gradient method.

    In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules, and we prove the global convergence of these schemes. Then we perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large-dimensional unconstrained optimization problems.
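
    One plausible reading of the watchdog idea, sketched in Python under stated assumptions: plain BB steps are accepted tentatively, and only when a window of N steps fails to improve on the best value found does the method fall back to a nonmonotone linesearch from the best point. The window size N, the fallback direction (-g), and all parameter names are illustrative, not the paper's.

```python
import numpy as np

def bb_watchdog(f, grad, x0, max_it=1000, M=10, N=5, gamma=1e-4, tol=1e-6):
    """Sketch: BB gradient steps guarded by a nonmonotone watchdog.
    Tentative BB steps run unchecked; after N non-improving steps the
    method restarts from the best iterate with a backtracking linesearch."""
    x = np.asarray(x0, dtype=float)
    g, alpha = grad(x), 1.0
    hist = [f(x)]
    x_best, f_best, since_improve = x.copy(), hist[0], 0

    for _ in range(max_it):
        if np.linalg.norm(g) < tol:
            break
        ref = max(hist[-M:])                    # nonmonotone reference value
        if since_improve < N:
            x_new = x - alpha * g               # tentative, unguarded BB step
        else:
            # watchdog triggered: restart from the best point with a
            # nonmonotone backtracking linesearch along -grad
            x, g = x_best, grad(x_best)
            t = 1.0
            while f(x - t * g) > ref - gamma * t * g.dot(g):
                t *= 0.5
                if t < 1e-12:
                    break
            x_new, since_improve = x - t * g, 0

        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 1e-12 else 1.0  # safeguarded BB1 steplength
        x, g = x_new, g_new
        fx = f(x)
        hist.append(fx)
        if fx < f_best:                         # track the best iterate seen
            x_best, f_best, since_improve = x.copy(), fx, 0
        else:
            since_improve += 1
    return x_best
```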