
    Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity

    We consider nonconvex constrained optimization problems and propose a new approach to the convergence analysis based on penalty functions. We use classical penalty functions in an unconventional way: they enter only the theoretical analysis of convergence, while the algorithm itself is penalty-free. Based on this idea, we establish several new results, including the first general analysis of diminishing-stepsize methods in nonconvex constrained optimization, showing convergence to generalized stationary points, and a complexity study for SQP-type algorithms. Comment: To appear in Mathematics of Operations Research.
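    The paper's own penalty-free algorithm is not reproduced here, but the diminishing-stepsize idea it analyzes can be illustrated with a generic projected-gradient sketch on a toy box-constrained problem. The objective, the constraint set, and the stepsize rule alpha_k = 1/(k+1) below are all hypothetical choices for illustration, not taken from the paper:

    ```python
    # Generic diminishing-stepsize projected gradient on a toy problem:
    # minimize f(x) = (x - 3)^2 subject to x in [0, 1].
    # The stepsizes alpha_k = 1/(k+1) satisfy the classical diminishing
    # conditions: sum alpha_k = infinity, sum alpha_k^2 < infinity.

    def project(x, lo=0.0, hi=1.0):
        """Euclidean projection onto the box [lo, hi]."""
        return min(max(x, lo), hi)

    def diminishing_projected_gradient(x0, iters=1000):
        x = x0
        for k in range(iters):
            grad = 2.0 * (x - 3.0)     # f'(x) for f(x) = (x - 3)^2
            alpha = 1.0 / (k + 1)      # diminishing stepsize
            x = project(x - alpha * grad)
        return x

    x_star = diminishing_projected_gradient(0.5)
    print(x_star)  # the constrained minimizer is at the boundary, x = 1
    ```

    On this toy problem the unconstrained minimizer x = 3 lies outside the feasible box, so the iterates settle at the boundary point x = 1, a (generalized) stationary point of the constrained problem.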

    Multilevel algorithms for the optimization of structured problems

    Although large-scale optimization problems are very difficult to solve in general, problems that arise from practical applications often exhibit particular structure. In this thesis we study and improve algorithms that can efficiently solve structured problems. Three separate settings are considered.

    The first part concerns singularly perturbed Markov decision processes (MDPs). When an MDP is singularly perturbed, one can construct an aggregate model whose solution is asymptotically optimal. We develop an algorithm that takes advantage of existing results to compute the solution of the original model, achieving a reduction in complexity without any penalty in accuracy.

    In the second part, the class of empirical risk minimization (ERM) problems is studied. When a first-order method is used, the Lipschitz constant of the empirical risk plays a crucial role in the convergence analysis and stepsize strategy. We derive probabilistic bounds for such Lipschitz constants using random matrix theory, use them to obtain the probabilistic complexity, and develop a new stepsize strategy for first-order methods. The proposed strategy, the Probabilistic Upper-bound Guided (PUG) stepsize strategy, has a strong theoretical guarantee on its performance compared to the standard stepsize strategy.

    In the third part, we extend existing results on multilevel methods for unconstrained convex optimization. We study a special case in which the hierarchy of models is created by approximating the first- and second-order information of the exact model. This is known as Galerkin approximation, and we name the corresponding algorithm the Galerkin-based Algebraic Multilevel Algorithm (GAMA). Three case studies show how the structure of a problem affects the convergence of GAMA.
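    As a rough illustration of the Galerkin construction mentioned in the abstract (a generic sketch, not GAMA itself), a coarse model of a quadratic objective f(x) = 0.5 xᵀAx − bᵀx can be formed by restricting the Hessian through a prolongation matrix P, giving the coarse Hessian A_c = PᵀAP. The particular matrix A and the aggregation-based choice of P below are hypothetical:

    ```python
    # Galerkin coarse model for a quadratic objective (sketch).
    # Fine Hessian A (4x4) and prolongation P (4x2); the Galerkin
    # coarse Hessian is A_c = P^T A P (2x2).

    def matmul(X, Y):
        """Plain-Python matrix product of X (m x k) and Y (k x n)."""
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    def transpose(X):
        return [list(row) for row in zip(*X)]

    # Fine-level Hessian: a symmetric positive definite tridiagonal matrix.
    A = [[ 2.0, -1.0,  0.0,  0.0],
         [-1.0,  2.0, -1.0,  0.0],
         [ 0.0, -1.0,  2.0, -1.0],
         [ 0.0,  0.0, -1.0,  2.0]]

    # Prolongation P: each coarse variable aggregates two fine variables.
    P = [[1.0, 0.0],
         [1.0, 0.0],
         [0.0, 1.0],
         [0.0, 1.0]]

    # Galerkin coarse Hessian A_c = P^T A P.
    A_c = matmul(transpose(P), matmul(A, P))
    print(A_c)  # [[2.0, -1.0], [-1.0, 2.0]]
    ```

    The coarse model inherits symmetry and positive definiteness from A, which is what makes Galerkin hierarchies attractive for multilevel minimization.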