11 research outputs found

    On memory gradient method with trust region for unconstrained optimization

    Full text link
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines a line search with a trust region strategy to generate new iterates at each iteration, and therefore enjoys the advantages of both approaches. It makes full use of the iterative information from several previous steps at each iteration and avoids storing and computing the matrices associated with the Hessian of the objective function, so it is suitable for large scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Because it uses more information from previous iterations, this idea enables us to design quickly convergent, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable, and robust in practical computation compared with other similar methods. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/45437/1/11075_2005_Article_9008.pd
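The abstract above describes the key ingredients (a direction built from previous steps plus a trust-region acceptance test with a line-search fallback) without giving formulas. The sketch below is a minimal illustration of that combination, not the paper's exact algorithm; the memory weight, acceptance threshold, and radius updates are assumptions chosen for readability.

```python
# Minimal sketch: memory gradient direction + trust-region-style acceptance
# with a backtracking line-search fallback. Parameter choices are illustrative.
import numpy as np

def memory_gradient_trust_region(f, grad, x0, max_iter=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    radius = 1.0
    prev_dirs = []                      # memory of previous search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Memory gradient direction: steepest descent plus a small
        # contribution from up to two previous directions.
        d = -g + sum(0.1 * p for p in prev_dirs[-2:])
        if d @ g >= 0:                  # safeguard: keep d a descent direction
            d = -g
        # Trust-region-style step: clip the step to the current radius.
        step = d * min(1.0, radius / np.linalg.norm(d))
        ratio = (f(x) - f(x + step)) / max(-(g @ step), 1e-16)
        if ratio > 0.1:                 # sufficient decrease: accept, grow radius
            x = x + step
            radius = min(2.0 * radius, 1e3)
            prev_dirs.append(step)
        else:                           # otherwise fall back to a line search
            t = 1.0
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
                t *= 0.5
            x = x + t * d
            radius = max(0.5 * radius, 1e-6)
            prev_dirs.append(t * d)
    return x
```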

    Scaling rank-one updating formula and its application in unconstrained optimization

    No full text
This thesis deals with algorithms used to solve unconstrained optimization problems. We analyse the properties of a scaling symmetric rank-one (SSR1) update, prove the convergence of the matrices generated by SSR1 to the true Hessian matrix, and show that the SSR1 algorithm possesses the quadratic termination property with inexact line search. A new algorithm (OCSSR1) is presented, in which the scaling parameter in SSR1 is chosen automatically by satisfying Davidon's criterion for an optimally conditioned Hessian estimate. Numerical tests show that the new method compares favourably with BFGS. Using the OCSSR1 update, we propose a hybrid quasi-Newton algorithm which does not need to store any matrix. Numerical results show that it is a very promising method for solving large scale optimization problems. In addition, some popular techniques in unconstrained optimization are also discussed, for example the trust region step, the descent direction with supermemory, and the detection of large residuals in nonlinear least squares problems. The thesis consists of two parts. The first part gives a brief survey of unconstrained optimization. It contains four chapters and introduces basic results on unconstrained optimization, some popular methods and their properties based on quadratic approximations to the objective function, some methods suitable for solving large scale optimization problems, and some methods for solving nonlinear least squares problems. The second part presents the new research results and contains five chapters. Chapter 5 analyses and studies the scaling rank-one updating formula. Chapters 6, 7 and 8 discuss applications to the trust region method, large scale optimization problems and nonlinear least squares. A final chapter summarizes the problems used in numerical testing.
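For readers unfamiliar with the update being scaled, the sketch below shows the standard symmetric rank-one correction with a simple scaling of the previous Hessian approximation. The scaling heuristic used here is an assumption for illustration; it is not the thesis's optimally conditioned (Davidon) criterion.

```python
# Minimal sketch of a scaled symmetric rank-one (SR1) Hessian update.
# gamma = (y @ s) / (s @ B @ s) is an illustrative scaling heuristic only.
import numpy as np

def scaled_sr1_update(B, s, y, skip_tol=1e-8):
    """Update the Hessian approximation B from step s and gradient change y."""
    sBs = s @ B @ s
    gamma = (y @ s) / sBs if sBs > 0 else 1.0   # scale the old matrix
    Bs = gamma * B
    r = y - Bs @ s                              # residual the rank-one term must absorb
    denom = r @ s
    if abs(denom) < skip_tol * np.linalg.norm(r) * np.linalg.norm(s):
        return Bs                               # skip the update when it would be unstable
    return Bs + np.outer(r, r) / denom
```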

    A NONMONOTONE MEMORY GRADIENT METHOD FOR UNCONSTRAINED OPTIMIZATION

    Get PDF
Memory gradient methods are used for unconstrained optimization, especially for large scale problems. They were first proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method which generates a descent search direction for the objective function at every iteration and converges globally to the solution if the Wolfe conditions are satisfied within the line search strategy. In this paper, we propose a nonmonotone memory gradient method based on this work. We show that our method converges globally to the solution. Our numerical results show that the proposed method is efficient for some standard test problems if a parameter included in the method is chosen suitably.
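The distinguishing feature of a nonmonotone method is its acceptance test, which compares the trial point against the worst of the last few function values rather than only the current one. The sketch below shows a Grippo-Lampariello-Lucidi style nonmonotone backtracking test as a generic illustration; the memory length and the parameters delta and rho are assumptions, and this is not the paper's specific rule.

```python
# Minimal sketch of a nonmonotone (GLL-style) backtracking acceptance test.
def nonmonotone_backtracking(f, x, d, g, recent_f, delta=1e-4, rho=0.5, max_halvings=50):
    """Backtrack until f(x + alpha*d) <= max(recent f-values) + delta*alpha*g'd."""
    f_ref = max(recent_f)                            # reference over the last M iterates
    slope = sum(gi * di for gi, di in zip(g, d))     # directional derivative g'd
    alpha = 1.0
    for _ in range(max_halvings):
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if f(x_new) <= f_ref + delta * alpha * slope:
            return alpha, x_new
        alpha *= rho
    return alpha, x_new
```

In use, the caller keeps `recent_f` as a sliding window of the last M objective values and appends `f(x_new)` after each accepted step.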

    EmiR: Evolutionary minimization for R

    Get PDF
Classical minimization methods, such as steepest descent or quasi-Newton techniques, have been shown to struggle with optimization problems that have a high-dimensional search space or are subject to complex nonlinear constraints. In the last decade, interest in metaheuristic nature-inspired algorithms has been growing steadily, owing to their flexibility and effectiveness. In this paper we present EmiR, a package for R that implements several metaheuristic algorithms for optimization problems. Unlike other available tools, EmiR can be used not only for unconstrained problems, but also for problems subject to inequality constraints and for integer or mixed-integer problems. The main features of EmiR, its usage, and a comparison with other available tools are presented.
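To make the kind of search concrete, the sketch below is a bare-bones evolutionary minimizer over a box, with inequality constraints handled by a quadratic penalty. It is written in Python purely for illustration and does not reproduce EmiR's R interface; the population size, mutation scale, and penalty weight are assumptions.

```python
# Minimal sketch of an evolutionary (population-based) minimizer; not EmiR's API.
import numpy as np

def evolutionary_minimize(f, lower, upper, constraints=(), pop=40, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)

    def penalized(x):
        # Inequality constraints g(x) <= 0 enforced by a quadratic penalty.
        return f(x) + 1e6 * sum(max(g(x), 0.0) ** 2 for g in constraints)

    X = rng.uniform(lower, upper, size=(pop, lower.size))     # initial population
    for _ in range(gens):
        scores = np.array([penalized(x) for x in X])
        parents = X[np.argsort(scores)[: pop // 2]]           # keep the best half
        children = parents + rng.normal(0, 0.1, parents.shape) * (upper - lower)
        children = np.clip(children, lower, upper)            # respect the box bounds
        X = np.vstack([parents, children])
    scores = np.array([penalized(x) for x in X])
    best = X[np.argmin(scores)]
    return best, f(best)
```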

    New Inexact Line Search Method for Unconstrained Optimization

    Full text link
We propose a new inexact line search rule and analyze the global convergence and convergence rate of related descent methods. The new line search rule is similar to the Armijo line search rule and contains it as a special case. We can choose a larger stepsize in each line search procedure while maintaining the global convergence of the related line search methods. This idea lets us design new line search methods in a wider sense. In some special cases, the new descent method reduces to the Barzilai and Borwein method. Numerical results show that the new line search methods are efficient for solving unconstrained optimization problems. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/45195/1/10957_2005_Article_6553.pd
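Since the paper's rule contains the Armijo rule as a special case, the classical rule is the natural reference point. The sketch below shows standard Armijo backtracking; the paper's generalization (which admits larger accepted steps) is not reproduced here, and the parameters sigma and rho are illustrative.

```python
# Minimal sketch of classical Armijo backtracking, the special case the paper generalizes.
def armijo_backtracking(f, grad, x, d, sigma=1e-4, rho=0.5, alpha0=1.0, max_iter=60):
    """Shrink alpha until f(x + alpha*d) <= f(x) + sigma*alpha*grad(x)'d."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))   # directional derivative
    alpha = alpha0
    for _ in range(max_iter):
        if f([xi + alpha * di for xi, di in zip(x, d)]) <= fx + sigma * alpha * slope:
            return alpha
        alpha *= rho
    return alpha
```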

    Efficient Variational Bayesian Approximation Method Based on Subspace optimization

    No full text
Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large-dimensional problems. To address this problem, we propose in this paper an improved VBA method. The variational Bayesian problem can in fact be seen as a convex functional optimization problem. The proposed method is based on adapting subspace optimization methods in Hilbert spaces to the function space involved, in order to solve this optimization problem iteratively. The aim is to determine an optimal direction at each iteration so as to obtain a more efficient method. We highlight the efficiency of our new VBA method and its application to image processing by considering an ill-posed linear inverse problem with a total variation prior. Comparisons with state-of-the-art variational Bayesian methods on a numerical example show a notable improvement in computation time.
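The core idea, minimizing over a small subspace of directions at each iteration, can be shown in a finite-dimensional setting. The sketch below optimizes the objective only over the span of the current gradient and the previous step; it is a generic illustration of subspace optimization under assumed choices (the spanning directions, the inner solver, the iteration count), not the paper's function-space VBA algorithm.

```python
# Minimal finite-dimensional sketch of subspace optimization.
import numpy as np
from scipy.optimize import minimize

def subspace_descent(f, grad, x0, outer_iters=50):
    x = np.asarray(x0, float)
    prev_step = None
    for _ in range(outer_iters):
        directions = [-grad(x)]
        if prev_step is not None:
            directions.append(prev_step)
        D = np.column_stack(directions)          # columns span the search subspace
        # Inner problem: best coefficients c for the step x + D @ c.
        res = minimize(lambda c: f(x + D @ c), np.zeros(D.shape[1]), method="Nelder-Mead")
        step = D @ res.x
        prev_step, x = step, x + step
    return x
```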

    An acceleration technique for a conjugate direction algorithm for nonlinear regression

    Get PDF
A linear acceleration technique, LAT, is developed and applied to three conjugate direction algorithms: (1) the Fletcher-Reeves algorithm, (2) the Davidon-Fletcher-Powell algorithm, and (3) Grey's Orthonormal Optimization Procedure (GOOP). Eight problems are solved by the three algorithms mentioned above and by the Levenberg-Marquardt algorithm. The addition of the LAT algorithm improves the rate of convergence for the GOOP algorithm in all problems attempted, and for some problems using the Fletcher-Reeves and Davidon-Fletcher-Powell algorithms. Using the number of operations required for function and derivative evaluations, the algorithms mentioned above are compared. Although the GOOP algorithm is relatively unknown outside of the optics literature, it was found to be competitive with the other successful algorithms. A proof of convergence of the accelerated GOOP algorithm for nonquadratic problems is also developed --Abstract, page ii
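As a reference for the conjugate direction family named in the abstract, the sketch below shows the Fletcher-Reeves update, one of the three algorithms the LAT is applied to. The acceleration technique itself is not reproduced, and the backtracking parameters are illustrative.

```python
# Minimal sketch of the Fletcher-Reeves conjugate direction iteration.
import numpy as np

def fletcher_reeves(f, grad, x0, iters=100, tol=1e-8):
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Simple backtracking line search along the conjugate direction d.
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```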

    An investigation of derivative-based methods for solving nonlinear problems with bounded variables

    Get PDF
M.S.
Mokhtar S. Bazara

    Mixed nonderivative algorithms for unconstrained optimization

    Get PDF
    A general technique is developed to restart nonderivative algorithms in unconstrained optimization. Application of the technique is shown to result in mixed algorithms which are considerably more robust than their component procedures. A general mixed algorithm is developed and its convergence is demonstrated. A uniform computational comparison is given for the new mixed algorithms and for a collection of procedures from the literature --Abstract, page ii
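The restart idea can be illustrated by alternating two standard derivative-free procedures and starting each from the best point found so far. The sketch below uses SciPy's Nelder-Mead and Powell solvers as stand-in component procedures; it is a sketch in the spirit of the abstract, not its specific mixed algorithm, and the cycle count and per-cycle iteration budget are assumptions.

```python
# Minimal sketch of mixing and restarting nonderivative procedures.
import numpy as np
from scipy.optimize import minimize

def mixed_nonderivative(f, x0, cycles=5, budget=200):
    x = np.asarray(x0, float)
    for _ in range(cycles):
        for method in ("Nelder-Mead", "Powell"):      # component procedures
            res = minimize(f, x, method=method, options={"maxiter": budget})
            if res.fun < f(x):
                x = res.x                              # restart the next method from the best point
    return x
```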