
    A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions

    In a general Hilbert framework, we consider continuous gradient-like dynamical systems for constrained multiobjective optimization involving nonsmooth convex objective functions. Our approach is in the line of a previous work, which considered the case of convex differentiable objective functions. Based on the Yosida regularization of the subdifferential operators involved in the system, we obtain the existence of strong global trajectories. We prove a descent property for each objective function, and the convergence of trajectories to weak Pareto minima. This approach provides a dynamical endogenous weighting of the objective functions. Applications are given to cooperative games, inverse problems, and numerical multiobjective optimization.
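The dynamical endogenous weighting can be sketched with a forward-Euler discretization of the flow for two smooth convex objectives; smooth functions stand in here for the Yosida approximations the paper actually uses, and every function and constant below is an illustrative assumption, not the paper's construction:

```python
import numpy as np

# Hedged sketch: Euler discretization of a multiobjective gradient
# flow for two smooth convex objectives.  The weighting is
# "endogenous": each step uses the minimum-norm convex combination
# of the two gradients, so both objectives decrease along the
# trajectory.  Objectives, step size, and iteration count are
# illustrative choices.

def f1(x):
    return 0.5 * np.sum((x - np.array([1.0, 0.0])) ** 2)

def f2(x):
    return 0.5 * np.sum((x - np.array([0.0, 1.0])) ** 2)

def grad_f1(x):
    return x - np.array([1.0, 0.0])

def grad_f2(x):
    return x - np.array([0.0, 1.0])

def endogenous_weight(g1, g2):
    """Weight t in [0, 1] minimizing ||t*g1 + (1-t)*g2||."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return 0.5
    return float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))

x = np.array([2.0, 2.0])
step = 0.1                       # Euler step for the continuous flow
for _ in range(500):
    g1, g2 = grad_f1(x), grad_f2(x)
    t = endogenous_weight(g1, g2)
    x = x - step * (t * g1 + (1.0 - t) * g2)

pareto_point = x                 # a weak Pareto minimum of (f1, f2)
```

The minimum-norm combination guarantees a common descent direction whenever one exists, which is what makes the weights endogenous rather than user-supplied.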

    Phase retrieval and saddle-point optimization

    Iterative algorithms with feedback are amongst the most powerful and versatile optimization methods for phase retrieval. Among these, the hybrid input-output algorithm has demonstrated practical solutions to giga-element nonlinear phase retrieval problems, escaping local minima and producing images at resolutions beyond the capabilities of lens-based optical methods. Here, the input-output iteration is improved by a lower-dimensional subspace saddle-point optimization.
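For context, the classic hybrid input-output (HIO) iteration of Fienup, which the paper refines, can be sketched in one dimension; this shows the baseline feedback iteration only, not the subspace saddle-point improvement, and the signal, support, and `beta` are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of the plain hybrid input-output (HIO) iteration for
# 1-D phase retrieval: recover a nonnegative signal with known
# support from its Fourier magnitude.  `beta` is the usual HIO
# feedback parameter; all problem data are made up for illustration.

rng = np.random.default_rng(0)

n = 64
support = np.zeros(n, dtype=bool)
support[:16] = True                        # object known to live here
true_signal = np.zeros(n)
true_signal[:16] = rng.random(16)          # unknown nonnegative object
magnitude = np.abs(np.fft.fft(true_signal))  # measured |F|

beta = 0.9
g = rng.random(n)                          # random initial estimate
for _ in range(500):
    G = np.fft.fft(g)
    # Fourier-domain constraint: keep the phase, impose the magnitude
    g_prime = np.real(np.fft.ifft(magnitude * np.exp(1j * np.angle(G))))
    # object-domain constraint with HIO feedback wherever the support
    # or nonnegativity constraint is violated
    violated = (~support) | (g_prime < 0)
    g = np.where(violated, g - beta * g_prime, g_prime)

recon = np.where(support, np.maximum(g_prime, 0.0), 0.0)  # final projection
residual = np.linalg.norm(np.abs(np.fft.fft(recon)) - magnitude)
```

The feedback term `g - beta * g_prime` outside the constraint set is what lets HIO escape local minima that trap simple alternating projections.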

    Study of hybrid strategies for multi-objective optimization using gradient based methods and evolutionary algorithms

    Most of the optimization problems encountered in engineering have conflicting objectives. In order to solve these problems, genetic algorithms (GAs) and gradient-based methods are widely used. GAs are relatively easy to implement, because these algorithms require only the values of the objectives and constraints, not their derivatives. However, GAs do not have a standard termination condition and therefore may not converge to the exact solutions. Gradient-based methods, on the other hand, rely on first- and higher-order information about the objectives and constraints. These algorithms converge faster to the exact solutions when solving single-objective optimization problems, but are inefficient for multi-objective optimization problems (MOOPs) and unable to solve those with non-convex objective spaces. The work in this dissertation focuses on developing a hybrid strategy for solving MOOPs based on feasible sequential quadratic programming (FSQP) and the nondominated sorting genetic algorithm II (NSGA-II). The hybrid algorithms developed in this dissertation are tested using benchmark problems and evaluated based on solution distribution, solution accuracy, and execution time. Based on these performance factors, the best hybrid strategy is determined and found to be generally efficient, with good solution distributions in most of the cases studied. The best hybrid algorithm is applied to the design of a crushing tube and is shown to produce relatively well-distributed solutions with good efficiency compared to solutions obtained by NSGA-II and FSQP alone.
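The general hybrid idea can be sketched on a toy bi-objective problem: a cheap population-based global stage seeds a gradient-based local refinement, one run per scalarization weight. This is only a schematic stand-in for the NSGA-II + FSQP combination studied in the dissertation; the problem, weights, and step sizes are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: hybrid global/local strategy on the scalar problem
# minimize (f1, f2) = ((x-1)^2, (x+1)^2), whose Pareto set is [-1, 1].
# A random population (a crude stand-in for a GA generation) picks a
# starting point for each scalarization weight; plain gradient descent
# stands in for the FSQP local solver.

rng = np.random.default_rng(1)

f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

def scalarized(x, w):
    return w * f1(x) + (1.0 - w) * f2(x)

def grad_scalarized(x, w):
    return 2.0 * w * (x - 1.0) + 2.0 * (1.0 - w) * (x + 1.0)

front = []
for w in np.linspace(0.0, 1.0, 5):
    # global stage: evaluate a random population, keep the best member
    population = rng.uniform(-3.0, 3.0, size=50)
    x = population[np.argmin(scalarized(population, w))]
    # local stage: gradient descent refines the population's winner
    for _ in range(200):
        x -= 0.1 * grad_scalarized(x, w)
    front.append(x)

front = np.array(front)   # approximate Pareto-optimal designs
```

For this convex toy problem the weighted minimizer is `x = 2w - 1`, so the refined points trace the Pareto set; real MOOPs with non-convex objective spaces are exactly where pure scalarization fails and the NSGA-II stage earns its keep.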

    A Switching Criterion in Hybrid Quasi-Newton BFGS - Steepest Descent Direction

    Two modified methods for unconstrained optimization are presented. The methods employ a hybrid descent direction strategy which uses a linear convex combination of the quasi-Newton BFGS and steepest descent directions as the search direction. A switching criterion is derived based on the first- and second-order Kuhn-Tucker conditions. The switching criterion can be viewed as a way to alternate between the quasi-Newton and steepest descent steps by checking the Kuhn-Tucker conditions. This ensures that the method moves from the current descent step to the other one only when doing so reduces the value of the objective function. Numerical results are also presented, which suggest that an improvement has been achieved compared with the BFGS algorithm.
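The hybrid direction can be sketched as `d = theta * d_SD + (1 - theta) * d_BFGS`, a linear convex combination of the two candidate directions. The switching rule below (weight steepest descent when the gradient is large, BFGS when it is small) is only an illustrative stand-in for the paper's Kuhn-Tucker-based criterion, and the quadratic test problem is an assumption:

```python
import numpy as np

# Hedged sketch of a hybrid BFGS / steepest-descent search direction
# on a small SPD quadratic, with exact line search (valid only for
# quadratics).  The switching weight `theta` is an illustrative rule,
# not the paper's criterion.

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD Hessian of 0.5 x'Ax - b'x
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.array([5.0, -5.0])
H = np.eye(2)                            # inverse-Hessian approximation
g = grad(x)
for _ in range(50):
    if np.linalg.norm(g) < 1e-12:        # already at a stationary point
        break
    theta = min(1.0, np.linalg.norm(g))  # hybrid switching weight
    d = -(theta * g + (1.0 - theta) * H @ g)   # convex combination
    alpha = -(g @ d) / (d @ A @ d)       # exact line search (quadratic)
    s = alpha * d
    x_new = x + s
    g_new = grad(x_new)
    y = g_new - g
    sy = s @ y
    if sy > 1e-12:                       # standard BFGS inverse update
        rho = 1.0 / sy
        I = np.eye(2)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
    x, g = x_new, g_new

solution = x                             # should satisfy A x = b
```

Because `H` is kept positive definite by the curvature check `sy > 0`, any convex combination of the two directions is itself a descent direction, so the line search always makes progress regardless of where the switch lands.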

    Extensions to the Proximal Distance Method of Constrained Optimization

    The current paper studies the problem of minimizing a loss $f(\boldsymbol{x})$ subject to constraints of the form $\boldsymbol{D}\boldsymbol{x} \in S$, where $S$ is a closed set, convex or not, and $\boldsymbol{D}$ is a fusion matrix. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives $f(\boldsymbol{x})+\frac{\rho}{2}\,\mathrm{dist}(\boldsymbol{D}\boldsymbol{x},S)^2$ involving large tuning constants $\rho$ and the squared Euclidean distance of $\boldsymbol{D}\boldsymbol{x}$ from $S$. The next iterate $\boldsymbol{x}_{n+1}$ of the corresponding proximal distance algorithm is constructed from the current iterate $\boldsymbol{x}_n$ by minimizing the majorizing surrogate function $f(\boldsymbol{x})+\frac{\rho}{2}\|\boldsymbol{D}\boldsymbol{x}-\mathcal{P}_S(\boldsymbol{D}\boldsymbol{x}_n)\|^2$. For fixed $\rho$ and convex $f(\boldsymbol{x})$ and $S$, we prove convergence, provide convergence rates, and demonstrate linear convergence under stronger assumptions. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we adapt the alternating direction method of multipliers (ADMM) and compare on extensive numerical tests, including problems in metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to one with a good condition number. Our experiments demonstrate the superior speed and acceptable accuracy of the SD variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
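The proximal distance iteration can be sketched on the simplest instance of this problem class: $f(\boldsymbol{x}) = \frac{1}{2}\|\boldsymbol{x}-\boldsymbol{z}\|^2$ with $\boldsymbol{D} = \boldsymbol{I}$ and $S$ the nonnegative orthant, so the true solution is the projection $\max(\boldsymbol{z}, 0)$. The annealing schedule for $\rho$ is an illustrative assumption (the paper's code is in Julia; Python is used here for the sketch):

```python
import numpy as np

# Hedged sketch of the proximal distance iteration.  With
# f(x) = 0.5*||x - z||^2, D = I, and S = nonnegative orthant, the
# surrogate f(x) + (rho/2)*||x - P_S(x_n)||^2 has the closed-form
# minimizer x = (z + rho * P_S(x_n)) / (1 + rho).  Slowly increasing
# rho drives the iterates toward the constraint set; the schedule is
# an illustrative choice.

z = np.array([2.0, -1.0, 0.5, -3.0])
project_S = lambda v: np.maximum(v, 0.0)    # projection onto S

x = z.copy()
rho = 1.0
for _ in range(200):
    x = (z + rho * project_S(x)) / (1.0 + rho)  # minimize the surrogate
    rho *= 1.1                                  # anneal the penalty

solution = x    # approaches the projection max(z, 0) as rho grows
```

Each step minimizes the majorizing surrogate exactly; for general $f$ and $\boldsymbol{D}$ that inner minimization needs a linear system solve or, as in the paper's SD variant, a few steepest descent steps.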