667 research outputs found

    On the realization of the Wolfe conditions in reduced quasi-Newton methods for equality constrained optimization

    We propose a piecewise line-search technique for reduced quasi-Newton methods, which are designed for minimizing functions when nonlinear equality constraints are present. The search aims at realizing the Wolfe conditions. These conditions are suitable for the methods considered because they allow the algorithm to maintain naturally the positive definiteness of the matrices approximating the reduced Hessian of the Lagrangian.
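    As a hedged illustration of what the Wolfe conditions require (not the paper's piecewise search itself), the following sketch checks whether a trial step size satisfies both the sufficient-decrease and curvature conditions; the names f, grad, c1, and c2 are illustrative assumptions, not from the abstract:

    ```python
    import numpy as np

    def wolfe_conditions_hold(f, grad, x, d, t, c1=1e-4, c2=0.9):
        """Check the (weak) Wolfe conditions for step size t at x along direction d.

        Illustrative sketch: f is the objective, grad its gradient, d a descent
        direction (grad(x) @ d < 0), and 0 < c1 < c2 < 1 are conventional constants.
        """
        slope0 = grad(x) @ d                    # directional derivative at x
        x_new = x + t * d
        armijo = f(x_new) <= f(x) + c1 * t * slope0   # sufficient decrease
        curvature = grad(x_new) @ d >= c2 * slope0    # curvature condition
        return bool(armijo and curvature)
    ```

    For example, with f(x) = x·x from x = 1 along d = -2, the step t = 0.5 lands exactly at the minimizer and satisfies both conditions, while a large overshooting step fails the sufficient-decrease test.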

    Global optimization: techniques and applications

    Optimization problems arise in a wide variety of scientific disciplines. In many practical problems, a global optimum is desired, yet the objective function has multiple local optima. A number of techniques aimed at solving the global optimization problem have emerged in the last 30 years of research. This thesis first reviews techniques for local optimization and then discusses many of the stochastic and deterministic methods for global optimization that are in use today. Finally, this thesis shows how to apply global optimization techniques to two practical problems: the image segmentation problem (from imaging science) and the 3-D registration problem (from computer vision)

    Linear convergence of accelerated conditional gradient algorithms in spaces of measures

    A class of generalized conditional gradient algorithms for the solution of optimization problems in spaces of Radon measures is presented. The method iteratively inserts additional Dirac-delta functions and optimizes the corresponding coefficients. Under general assumptions, a sub-linear O(1/k) rate in the objective functional is obtained, which is sharp in most cases. To improve efficiency, one can fully resolve the finite-dimensional subproblems occurring in each iteration of the method. We provide an analysis for the resulting procedure: under a structural assumption on the optimal solution, a linear O(ζ^k) convergence rate is obtained locally. Comment: 30 pages, 7 figures
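    The measure-space method generalizes the classic conditional gradient (Frank-Wolfe) scheme; as a hedged finite-dimensional illustration of the sub-linear O(1/k) regime (not the authors' measure-space algorithm), here is a minimal Frank-Wolfe sketch over the probability simplex, with the objective, step rule, and all names being illustrative assumptions:

    ```python
    import numpy as np

    def frank_wolfe_simplex(A, b, iters=200):
        """Minimize ||Ax - b||^2 over the probability simplex via Frank-Wolfe.

        Each iteration calls a linear minimization oracle (here: pick the
        simplex vertex with the smallest gradient entry) and takes the classic
        step size 2/(k+2), which yields the sub-linear O(1/k) objective rate.
        """
        n = A.shape[1]
        x = np.ones(n) / n                       # feasible starting point
        for k in range(iters):
            g = 2 * A.T @ (A @ x - b)            # gradient of the objective
            s = np.zeros(n)
            s[np.argmin(g)] = 1.0                # linear minimization oracle
            gamma = 2.0 / (k + 2)                # classic diminishing step
            x = (1 - gamma) * x + gamma * s      # convex combination: stays feasible
        return x
    ```

    The inserted vertices here play the role that newly inserted Dirac-delta functions play in the measure-space setting: each iteration adds one extreme point and reweights.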

    A hybrid global optimization method: The multi-dimensional case

    We extend the hybrid global optimization method proposed by Xu (J. Comput. Appl. Math. 147 (2002) 301–314) from the one-dimensional case to the multi-dimensional case. The method consists of two basic components: local optimizers and feasible point finders. Local optimizers guarantee efficient and fast production of a locally optimal solution in the neighbourhood of a feasible point. Feasible point finders provide the theoretical guarantee that the new method always produces the global optimal solution(s) correctly. If a nonlinear nonconvex inverse problem has multiple global optimal solutions, our algorithm is capable of finding all of them correctly. Three synthetic examples, on which simulated annealing and genetic algorithms have failed, are used to demonstrate the proposed method.
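    As a rough sketch of the two-component structure described above (not Xu's actual algorithm), one can alternate a local optimizer with a feasible-point finder that searches for a point strictly below the best value found so far; the coordinate-descent local step, the sampling-based finder, and every name below are assumptions for illustration only:

    ```python
    import numpy as np

    def hybrid_global_minimize(f, dim, bounds, rng, rounds=20, samples=2000):
        """Alternate a local optimizer and a feasible-point finder (sketch)."""
        lo, hi = bounds
        best_x = rng.uniform(lo, hi, dim)
        best_f = f(best_x)
        for _ in range(rounds):
            # Local optimizer: coordinate search with a shrinking step size,
            # started from the current best point.
            x, step = best_x.copy(), (hi - lo) / 4
            while step > 1e-6:
                improved = False
                for i in range(dim):
                    for delta in (step, -step):
                        y = x.copy()
                        y[i] = np.clip(y[i] + delta, lo, hi)
                        if f(y) < f(x):
                            x, improved = y, True
                step = step if improved else step / 2
            if f(x) < best_f:
                best_x, best_f = x, f(x)
            # Feasible-point finder: sample the box for a point whose value
            # lies strictly below the incumbent, escaping local basins.
            cands = rng.uniform(lo, hi, (samples, dim))
            vals = np.apply_along_axis(f, 1, cands)
            j = int(np.argmin(vals))
            if vals[j] < best_f:
                best_x, best_f = cands[j], float(vals[j])
        return best_x, best_f
    ```

    On a multimodal test function such as f(x) = Σ(xᵢ² − 1)², which has several global minima of value 0, the sampling-based finder repeatedly relocates the local search into better basins, mirroring the division of labour the abstract describes.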

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains the scientific program, both in survey style and in full detail, as well as information on the social program, the venue, special meetings, and more.