3 research outputs found

    Cooperative Stochastic Approximation with Random Constraint Sampling for Semi-Infinite Programming

    We develop a cooperative stochastic approximation (CSA) type algorithm for semi-infinite programming (SIP), where the cut generation problem is solved inexactly. First, we provide general error bounds for inexact CSA. Then, we propose two specific random constraint sampling schemes to approximately solve the cut generation problem. When the objective and constraint functions are generally convex, we show that our randomized CSA algorithms achieve an $\mathcal{O}(1/\sqrt{N})$ rate of convergence in expectation (in terms of optimality gap as well as SIP constraint violation). When the objective and constraint functions are all strongly convex, this rate can be improved to $\mathcal{O}(1/N)$.
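    The CSA step logic is simple to sketch: at each iteration the cut generation problem $\max_{t \in T} g(x, t)$ is solved only approximately by sampling a batch of constraint indices, and the iterate moves along a constraint subgradient when the sampled violation exceeds a tolerance, and along an objective subgradient otherwise. Below is a minimal Python sketch; the function names, the uniform batch sampling, and the step-size schedule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def csa_sip(grad_f, g, grad_g, sample_t, x0, N, gamma=0.1, eta=0.01,
            n_samples=50, rng=None):
    """Illustrative CSA-type method for semi-infinite programming:
        min f(x)  s.t.  g(x, t) <= 0 for all t in an infinite set T.
    The cut generation problem max_t g(x, t) is solved *inexactly*
    by keeping the worst of `n_samples` randomly sampled indices.
    (Hypothetical sketch; sampling scheme and step sizes are assumptions.)
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    objective_iters = []                     # iterates from objective steps
    for k in range(1, N + 1):
        step = gamma / np.sqrt(k)            # schedule behind the O(1/sqrt(N)) rate
        ts = [sample_t(rng) for _ in range(n_samples)]
        t_worst = max(ts, key=lambda t: g(x, t))   # approximate cut generation
        if g(x, t_worst) > eta:
            d = grad_g(x, t_worst)           # reduce constraint violation
        else:
            objective_iters.append(x.copy())
            d = grad_f(x)                    # make progress on the objective
        x = x - step * d
    # CSA-style output: average over the objective-step iterates
    return np.mean(objective_iters, axis=0) if objective_iters else x
```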

    Distributionally robust second-order stochastic dominance constrained optimization with Wasserstein ball

    We consider a distributionally robust second-order stochastic dominance constrained optimization problem, in which the dominance constraints must hold with respect to all probability distributions in a Wasserstein ball centered at the empirical distribution. We adopt the sample approximation approach to develop a linear programming formulation that provides a lower bound, and we propose a novel split-and-dual decomposition framework that provides an upper bound. We establish quantitative convergence for both the lower and upper approximations under certain constraint qualification conditions. To efficiently solve the non-convex upper bound problem, we use a sequential convex approximation algorithm. Numerical results on a portfolio selection problem validate the convergence and effectiveness of the two proposed approximation methods.
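    To make the sample approximation concrete, here is a minimal cvxpy sketch of the empirical second-order stochastic dominance LP that such lower-bound formulations build on, applied to portfolio selection. All names are illustrative, and the Wasserstein-robust layer and the split-and-dual upper bound are deliberately not reproduced; this is only the nominal sample-approximation LP.

```python
import cvxpy as cp
import numpy as np

def ssd_portfolio_lp(R, y):
    """Sample-approximation LP with second-order stochastic dominance
    (SSD) constraints over equiprobable scenarios (illustrative sketch).

    R : (M, n) matrix of asset-return scenarios.
    y : (M,) benchmark return in the same scenarios.
    SSD requires, for every threshold y_j,
        E[(y_j - R w)_+] <= E[(y_j - y)_+].
    """
    M, n = R.shape
    w = cp.Variable(n, nonneg=True)          # long-only portfolio weights
    s = cp.Variable((M, M), nonneg=True)     # shortfall variables s[i, j]
    cons = [cp.sum(w) == 1]
    # rhs[j] = E[(y_j - y)_+], the benchmark's expected shortfall at y_j
    rhs = np.maximum(y[None, :] - y[:, None], 0.0).mean(axis=0)
    # s[i, j] >= y_j - (portfolio return in scenario i)
    cons += [s[i, :] >= y - R[i, :] @ w for i in range(M)]
    cons += [cp.sum(s, axis=0) / M <= rhs]   # dominance at every threshold
    prob = cp.Problem(cp.Maximize(R.mean(axis=0) @ w), cons)
    prob.solve()
    return w.value
```

    Note the O(M^2) shortfall variables: one per (scenario, threshold) pair, which is the usual price of the linear SSD reformulation.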

    A Randomized Nonlinear Rescaling Method in Large-Scale Constrained Convex Optimization

    We propose a new randomized algorithm that, with high probability, solves convex optimization problems with a large number of constraints. Existing methods such as interior-point or Newton-type algorithms are hard to apply to such problems because of their expensive computation and storage requirements for Hessians and matrix inversions. Our algorithm is based on nonlinear rescaling (NLR), a primal-dual-type algorithm by Griva and Polyak [Math. Program., 106(2):237-259, 2006]. NLR introduces an equivalent problem through a transformation of the constraint functions, minimizes the corresponding augmented Lagrangian for given dual variables, and then uses this minimizer to update the dual variables for the next iteration. The primal update at each iteration is the solution of an unconstrained finite-sum minimization problem whose terms are weighted by the current dual variables. We use randomized first-order algorithms for these primal updates, to which they are especially well suited. In particular, we use the scaled dual variables as the sampling distribution for each primal update, and we show that this distribution is optimal among all probability distributions. We conclude by demonstrating the favorable numerical performance of our algorithm.
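    As an illustration of the scheme, the sketch below implements one plausible instance: the classical NLR transformation $\psi(t) = \log(1+t)$, SGD primal updates that sample constraint terms in proportion to the current dual variables, and the multiplicative dual update $\lambda_i \leftarrow \lambda_i \, \psi'(k\,c_i(x))$. All names and parameter choices are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def randomized_nlr(f_grad, c, c_grad, x0, m, outer_iters=20,
                   sgd_iters=2000, k=10.0, lr=0.01, seed=0):
    """Randomized nonlinear rescaling (illustrative sketch).

    Solves min f(x) s.t. c_i(x) >= 0, i = 1..m, with the
    transformation psi(t) = log(1 + t).  The primal step
        min_x  f(x) - (1/k) * sum_i lam_i * psi(k * c_i(x))
    is carried out by SGD, sampling constraint i with probability
    proportional to the current dual variable lam_i.
    c and c_grad are lists of the constraint functions and gradients.
    """
    rng = np.random.default_rng(seed)
    psi_prime = lambda t: 1.0 / (1.0 + t)    # derivative of log(1 + t)
    x, lam = x0.copy(), np.ones(m)
    for _ in range(outer_iters):
        p = lam / lam.sum()                  # dual-weighted sampling distribution
        for t in range(sgd_iters):
            i = rng.choice(m, p=p)
            # Importance weight lam_i / p_i makes the estimate of the
            # dual-weighted constraint-sum gradient unbiased.
            w = lam[i] / p[i]
            g = f_grad(x) - w * psi_prime(k * c[i](x)) * c_grad[i](x)
            x = x - lr / np.sqrt(t + 1) * g
        # Dual update: lam_i <- lam_i * psi'(k * c_i(x))
        lam = lam * psi_prime(k * np.array([ci(x) for ci in c]))
    return x, lam
```

    Sampling in proportion to the dual variables concentrates the stochastic gradient work on the constraints that are currently active or nearly so, which is the intuition behind the optimality of this distribution claimed in the abstract.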