5 research outputs found

    A Nonlinear Lagrange Algorithm for Stochastic Minimax Problems Based on Sample Average Approximation Method

    An implementable nonlinear Lagrange algorithm for stochastic minimax problems, based on the sample average approximation (SAA) method, is presented in this paper. In the second step of the algorithm, a nonlinear Lagrange function built from SAA approximations of the original functions is minimized, and an SAA estimate of the Lagrange multiplier is adopted. Under a set of mild assumptions, it is proven that the sequences of solutions and multipliers generated by the algorithm converge to a Kuhn-Tucker pair of the original problem with probability one as the sample size increases. Finally, numerical experiments on five test examples are performed, and the results indicate that the algorithm is promising.
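The SAA idea underlying this algorithm can be sketched on a toy minimax problem (purely illustrative, not the paper's nonlinear Lagrange method: the component functions, sample size, and solver below are all invented for the sketch). Each expectation is replaced by a sample average, and the pointwise maximum of the averaged functions is minimized.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x, xi, i):
    # Hypothetical component functions f_i(x, xi) of a stochastic minimax problem;
    # their expectations are (x0 - i)^2 + x1^2 since E[xi] = 0.
    return (x[0] - i) ** 2 + x[1] ** 2 + xi * x[i % 2]

def saa_minimax(n_samples, n_funcs=3):
    """Replace each expectation E[f_i(x, xi)] by a sample average over
    i.i.d. draws of xi, then minimize the max of the averaged functions."""
    xi = rng.normal(size=n_samples)
    def phi(x):
        return max(np.mean(f(x, xi, i)) for i in range(n_funcs))
    # Nelder-Mead handles the nonsmooth pointwise maximum.
    return minimize(phi, np.zeros(2), method="Nelder-Mead").x

# Larger samples give solutions closer to the true minimax point (1, 0).
x_hat = saa_minimax(500)
```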

    Stochastic mathematical programs with hybrid equilibrium constraints

    Abstract: This paper considers a stochastic mathematical program with hybrid equilibrium constraints (SMPHEC), which includes either "here-and-now" or "wait-and-see" type complementarity constraints. An example is given to illustrate the need to study SMPHEC. To solve the problem, sample average approximation techniques are employed to approximate the expectations, and smoothing and penalty techniques are used to handle the complementarity constraints. Limiting behavior of the proposed approach is discussed. Preliminary numerical experiments show that the proposed approach is applicable.
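One standard way to smooth a complementarity constraint, in the spirit of the smoothing techniques mentioned here, is a smoothed Fischer-Burmeister function (the specific form below is a common textbook choice, not necessarily the one used in the paper):

```python
import numpy as np

def fb_smooth(a, b, mu):
    """Smoothed Fischer-Burmeister function:
        phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
    As mu -> 0 it recovers the FB function, whose zeros characterize
    the complementarity condition a >= 0, b >= 0, a*b = 0, while for
    mu > 0 the function is smooth everywhere."""
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

# For a complementary pair (a, b) = (0, 3), the residual vanishes as mu -> 0;
# for a non-complementary pair such as (2, 3), it stays bounded away from 0.
residuals = [fb_smooth(0.0, 3.0, mu) for mu in (1.0, 0.1, 0.01)]
```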

    Stochastic convex optimization with multiple objectives

    Abstract: In this paper, we are interested in developing efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives and stochasticity in the first-order information. We cast the stochastic multiple-objective optimization problem as a constrained optimization problem by choosing one function as the objective and bounding the other objectives by appropriate thresholds. We first examine a two-stage exploration-exploitation algorithm, which first approximates the stochastic objectives by sampling and then solves a constrained stochastic optimization problem by the projected gradient method. This method attains only a suboptimal convergence rate, even under strong assumptions on the objectives. Our second approach is an efficient primal-dual stochastic algorithm. It leverages the theory of Lagrangian methods in constrained optimization and attains the optimal convergence rate of O(1/√T) with high probability for general Lipschitz continuous objectives.
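A minimal sketch of the primal-dual stochastic idea on an invented toy problem (the step sizes, noise model, projection set, and problem data below are assumptions for illustration, not the paper's setting): descend on the Lagrangian in the primal variable using noisy gradients, and ascend in the multiplier, keeping it nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)

def primal_dual_sgd(grad_f0, grad_f1, f1, eps, T=2000, eta=0.05, radius=5.0):
    """Stochastic primal-dual method for min f0(x) s.t. f1(x) <= eps:
    noisy gradient descent on the Lagrangian in x, projected onto a ball,
    and dual ascent on the multiplier lam, clipped at zero."""
    x, lam = np.zeros(2), 0.0
    for _ in range(T):
        noise = rng.normal(scale=0.1, size=2)          # stochastic first-order info
        gx = grad_f0(x) + lam * grad_f1(x) + noise     # Lagrangian gradient in x
        x = x - eta * gx
        nrm = np.linalg.norm(x)
        if nrm > radius:                               # projection onto the ball
            x *= radius / nrm
        lam = max(0.0, lam + eta * (f1(x) - eps))      # dual ascent, lam >= 0
    return x, lam

# Toy instance: minimize ||x - (1, 1)||^2 subject to x[0] + x[1] <= 1;
# the optimum is x = (0.5, 0.5) with multiplier lam = 1.
grad_f0 = lambda x: 2.0 * (x - np.array([1.0, 1.0]))
grad_f1 = lambda x: np.array([1.0, 1.0])
f1 = lambda x: x[0] + x[1]
x_hat, lam_hat = primal_dual_sgd(grad_f0, grad_f1, f1, eps=1.0)
```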

    Convergence analysis of sample average approximation methods for a class of stochastic mathematical programs with equality constraints

    In this paper, we discuss the sample average approximation (SAA) method for a class of stochastic programs with nonsmooth equality constraints. We derive a uniform Strong Law of Large Numbers for random compact set-valued mappings and use it to investigate the convergence of Karush-Kuhn-Tucker points of SAA programs as the sample size increases. We also study the exponential convergence of global minimizers of the SAA problems to their counterparts for the true problem. The convergence analysis is extended to a smoothed SAA program. Finally, we apply the established results to a class of stochastic mathematical programs with complementarity constraints and report some preliminary numerical test results.
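The convergence of SAA minimizers to true minimizers can be illustrated on a toy program with a closed-form solution (purely illustrative; the distribution and objective are invented and carry none of the paper's constraint structure):

```python
import numpy as np

rng = np.random.default_rng(2)

def saa_solution(n):
    """For the toy program min_x E[(x - xi)^2] with xi ~ N(1.5, 1), the SAA
    problem min_x (1/n) * sum_k (x - xi_k)^2 has the closed-form solution
    x_n = sample mean of the xi_k, while the true solution is E[xi] = 1.5."""
    xi = rng.normal(loc=1.5, scale=1.0, size=n)
    return xi.mean()

# The SAA error shrinks as the sample size grows (at rate O(1/sqrt(n)) here).
errors = [abs(saa_solution(n) - 1.5) for n in (10, 1000, 100000)]
```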