
    An adaptive sampling sequential quadratic programming method for nonsmooth stochastic optimization with upper-$\mathcal{C}^2$ objective

    We propose an optimization algorithm that incorporates adaptive sampling for stochastic nonsmooth nonconvex optimization problems with upper-$\mathcal{C}^2$ objective functions. Upper-$\mathcal{C}^2$ is a weak concavity property that arises naturally in many applications, particularly in certain classes of solutions to parametric optimization problems, e.g., the recourse function in stochastic programming and the projection onto closed sets. Our algorithm is a stochastic sequential quadratic programming (SQP) method extended to nonsmooth problems with upper-$\mathcal{C}^2$ objectives, and it is globally convergent in expectation with bounded algorithmic parameters. The capabilities of our algorithm are demonstrated by solving a joint production, pricing, and shipment problem, as well as a realistic optimal power flow problem as used in current power grid industry practice.
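    To make the adaptive sampling idea concrete, here is a minimal Python sketch, NOT the paper's SQP method: a gradient step in which the sample size grows whenever the variance of the gradient estimator is large relative to its norm (a standard "norm test"). The smooth test objective f(x) = E[0.5*||x - xi||^2] with xi ~ N(0, I), the constants theta and alpha, and the sample-size cap are all hypothetical choices for demonstration.

```python
import numpy as np

def sample_gradients(x, rng, n):
    """Per-sample gradients of 0.5*||x - xi||^2 for n draws xi ~ N(0, I)."""
    xi = rng.standard_normal((n, x.size))
    return x[None, :] - xi

def adaptive_sampling_step(x, rng, n, theta=0.9, alpha=0.1):
    grads = sample_gradients(x, rng, n)
    g_bar = grads.mean(axis=0)
    # Variance of the sample-average estimator (summed over coordinates).
    var = grads.var(axis=0, ddof=1).sum() / n
    # Norm test: if the estimator is too noisy relative to its norm,
    # enlarge the sample and re-estimate before stepping.
    if var > theta**2 * (g_bar @ g_bar):
        n = min(int(np.ceil(var / (theta**2 * (g_bar @ g_bar) + 1e-16) * n)),
                100_000)  # hypothetical cap to keep the demo cheap
        g_bar = sample_gradients(x, rng, n).mean(axis=0)
    # Trivial quadratic subproblem with identity Hessian approximation:
    # min_d  g_bar^T d + 0.5*||d||^2  has solution d = -g_bar.
    d = -g_bar
    return x + alpha * d, n

rng = np.random.default_rng(0)
x, n = np.ones(5), 4
for _ in range(50):
    x, n = adaptive_sampling_step(x, rng, n)
print(x, n)  # x approaches the minimizer E[xi] = 0 while n grows
```

    The design point to notice is that the sample size is driven by the test, not a fixed schedule: cheap, small samples suffice far from stationarity, and accuracy is purchased only as the gradient shrinks.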

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a summer school and a conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in overview and in full detail, together with information on the social program, the venue, special meetings, and more.

    Using second-order information in gradient sampling methods for nonsmooth optimization

    In this article, we show how second-order derivative information can be incorporated into gradient sampling methods for nonsmooth optimization. The second-order information we consider is essentially the set of coefficients of all second-order Taylor expansions of the objective in a closed ball around a given point. Based on this concept, we define a model of the objective as the maximum of these Taylor expansions. Iteratively minimizing this model (constrained to the closed ball) results in a simple descent method, for which we prove convergence to minimizers when the objective is convex. To obtain an implementable method, we construct an approximation scheme for the second-order information based on sampling objective values, gradients, and Hessian matrices at finitely many points. Using a set of test problems, we compare the resulting method to five other available solvers. Measured by the number of function evaluations, the results suggest that the proposed method is superior to the standard gradient sampling method and competitive with the other methods.
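    The model-building step lends itself to a compact sketch. The following Python code is a hedged illustration, not the authors' implementation: the nonsmooth test function, the sampling scheme, and the use of SciPy's SLSQP solver for the ball-constrained subproblem are all our own assumptions. It builds the max-of-Taylor-expansions model from sampled values, gradients, and Hessians and minimizes it over a closed ball around the current iterate.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical nonsmooth test objective f(x) = |x_1| + x_2^2.
def f(x):    return np.abs(x[0]) + x[1]**2
def grad(x): return np.array([np.sign(x[0]), 2.0 * x[1]])
def hess(x): return np.diag([0.0, 2.0])

def sampled_model_step(x, eps=0.5, n_pts=8, rng=None):
    rng = rng or np.random.default_rng(0)
    # Sample points in the closed ball B(x, eps), including x itself.
    pts = [x] + [x + eps * u / max(np.linalg.norm(u), 1.0)
                 for u in rng.standard_normal((n_pts, x.size))]
    data = [(f(y), grad(y), hess(y), y) for y in pts]

    def model(d):
        z = x + d
        # Maximum of the second-order Taylor expansions at the samples.
        return max(fy + gy @ (z - y) + 0.5 * (z - y) @ Hy @ (z - y)
                   for fy, gy, Hy, y in data)

    # Minimize the model over the closed ball ||d|| <= eps.
    ball = {"type": "ineq", "fun": lambda d: eps**2 - d @ d}
    res = minimize(model, np.zeros_like(x), constraints=[ball], method="SLSQP")
    return x + res.x

x = np.array([1.5, -1.0])
for _ in range(10):
    x = sampled_model_step(x)
print(x)  # should approach the minimizer (0, 0)
```

    Because the model is a pointwise maximum of quadratics, it is itself nonsmooth, so a general-purpose solver like SLSQP is only a placeholder here; a dedicated implementation would exploit the max-of-quadratics structure of the subproblem directly.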