3 research outputs found

    Consistency of sample-based stationary points for infinite-dimensional stochastic optimization

    We consider stochastic optimization problems with possibly nonsmooth integrands posed in Banach spaces and approximate these stochastic programs via sample-based approaches. We establish the consistency of approximate Clarke stationary points of the sample-based approximations. Our framework is applied to risk-averse semilinear PDE-constrained optimization using the average value-at-risk and to risk-neutral bilinear PDE-constrained optimization.
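    The average value-at-risk mentioned in the first application has a standard sample-based estimator: the mean of the worst (1 − α) fraction of sampled losses. A minimal sketch in Python — the plain tail-mean form, not the paper's PDE-constrained setting; the function name, level, and test distribution are illustrative:

    ```python
    import numpy as np

    def avar(losses, alpha=0.9):
        """Empirical average value-at-risk (CVaR) at level alpha:
        the mean of the worst (1 - alpha) fraction of sampled losses."""
        losses = np.sort(np.asarray(losses, dtype=float))
        k = int(np.ceil(alpha * len(losses)))  # index where the upper tail begins
        return losses[k:].mean()

    # Example: standard-normal losses; AVaR_0.9 averages only the worst 10% of outcomes,
    # so it is strictly larger than the plain mean (which is near 0 here).
    rng = np.random.default_rng(0)
    print(avar(rng.normal(size=100_000), alpha=0.9))
    ```

    As the sample size grows, this empirical tail mean converges to the true average value-at-risk, which is the kind of consistency statement the paper establishes for the full sample-based optimization problem.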

    Generic consistency for approximate stochastic programming and statistical problems

    No full text
    In stochastic programming, statistics, and econometrics, the aim is generally to optimize a criterion function that depends on a decision variable θ and is expressed as an expectation with respect to a probability measure P. When this function cannot be computed in closed form, it is customary to approximate it by an empirical mean function based on a random sample; several other methods have also been proposed, such as quasi-Monte Carlo integration and numerical integration rules. In this paper, we propose a general approach for approximating such a function, in the sense of epigraphical convergence, using a sequence of functions of simpler type that can be expressed as expectations with respect to probability measures P_n which, in some sense, approximate P. The main difference from existing results is that our main theorem does not impose conditions directly on the approximating probabilities but only on some integrals with respect to them. In addition, the P_n's can be transition probabilities, i.e., they are allowed to depend on a further parameter ξ whose value results from deterministic or stochastic operations, depending on the underlying model. This framework allows us to deal with a large variety of approximation procedures, such as Monte Carlo, quasi-Monte Carlo, numerical integration, quantization, several variations on Monte Carlo sampling, and some density approximation algorithms. As by-products, we discuss convergence results for stochastic programming and statistical inference based on dependent data, for programming with estimated parameters, and for robust optimization; we also provide a general result on the consistency of the bootstrap for M-estimators.
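    The empirical-mean (Monte Carlo) approximation described above can be sketched in a few lines: replace the expectation E[f(θ, X)] by an average over a sample and minimize that instead. The quadratic criterion and normal distribution below are hypothetical, chosen only so the true minimizer θ* = 1 is known in closed form:

    ```python
    import numpy as np

    # Hypothetical criterion f(theta, X) = (theta - X)^2 with X ~ N(1, 1).
    # The true objective E[f(theta, X)] = (theta - 1)^2 + 1 is minimized at theta* = 1.
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=1.0, scale=1.0, size=100_000)

    def empirical_objective(theta, sample):
        # Sample-average (empirical mean) approximation of E[f(theta, X)].
        return np.mean((theta - sample) ** 2)

    # Minimize the empirical objective over a coarse grid of decisions.
    grid = np.linspace(-2.0, 4.0, 601)
    values = [empirical_objective(t, sample) for t in grid]
    theta_hat = grid[int(np.argmin(values))]
    # As the sample grows, the empirical minimizer converges to theta* = 1
    # (the consistency property the paper generalizes to other approximations of P).
    print(theta_hat)
    ```

    Quasi-Monte Carlo, numerical integration, or quantization schemes fit the same template: only the construction of `sample` (and the weights in the average) changes, which is why the paper states its conditions on integrals with respect to the approximating measures rather than on the measures themselves.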