
    Optimization under uncertainty and risk: Quadratic and copositive approaches

    Robust optimization and stochastic optimization are the two main paradigms for dealing with the uncertainty inherent in almost all real-world optimization problems. The core principle of robust optimization is the introduction of parameterized families of constraints. Sometimes these complicated semi-infinite constraints can be reduced to finitely many convex constraints, so that the resulting optimization problem can be solved using standard procedures. Hence the flexibility of robust optimization is limited by certain convexity requirements on the various objects involved. However, a recent strain of literature has sought to expand the applicability of robust optimization by lifting variables to a properly chosen matrix space. Doing so makes it possible to handle situations where the convexity requirements are not met immediately, but only after the lifting. In the domain of (possibly nonconvex) quadratic optimization, the principles of copositive optimization act as a bridge leading to the recovery of the desired convex structures. Copositive optimization has established itself as a powerful paradigm for tackling a wide range of quadratically constrained quadratic optimization problems by reformulating them into linear conic optimization problems with a linear objective and linear constraints, plus constraints forcing membership in certain matrix cones, which can be thought of as generalizations of the positive-semidefinite matrix cone. These reformulations enable the application of powerful optimization techniques, most notably convex duality, to problems which, in their original form, are highly nonconvex. In this text we offer readers an introduction and tutorial on these principles of copositive optimization, and provide a review and outlook of the literature that applies them to optimization problems involving uncertainty.
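
    To make the lifting idea concrete, here is a minimal sketch (our illustration, not code from the paper): the standard quadratic program min x'Qx over the simplex is equivalent, after lifting x to X = xx', to a completely positive program; since the completely positive cone is intractable, the sketch solves the doubly nonnegative (DNN) relaxation with cvxpy, which yields a lower bound on the nonconvex problem.

```python
# Illustrative sketch: DNN relaxation of a standard quadratic program.
# min x'Qx s.t. e'x = 1, x >= 0  equals  min <Q,X> s.t. <ee',X> = 1,
# X completely positive; replacing the CP cone by the (tractable) DNN
# cone (PSD + entrywise nonnegative) gives a lower bound.
import numpy as np
import cvxpy as cp

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Q = A + A.T                       # symmetric, possibly indefinite -> nonconvex QP

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,            # positive semidefinite
               X >= 0,            # entrywise nonnegative
               cp.sum(X) == 1]    # <ee', X> = 1 encodes e'x = 1 after lifting
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
prob.solve()
print("DNN lower bound on min over the simplex of x'Qx:", prob.value)
```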

    Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso

    We present exponential finite-sample nonasymptotic deviation inequalities for the SAA estimator's near-optimal solution set over the class of stochastic optimization problems with heavy-tailed random \emph{convex} functions in the objective and constraints. Such a setting is better suited for problems where a sub-Gaussian data-generating distribution is not to be expected, e.g., in stochastic portfolio optimization. One of our contributions is to exploit the \emph{convexity} of the perturbed objective and the perturbed constraints as a property which entails \emph{localized} deviation inequalities for joint feasibility and optimality guarantees. This means that our bounds are significantly tighter in terms of diameter and metric entropy, since they depend only on the near-optimal solution set and not on the whole feasible set. As a result, we obtain a much sharper sample complexity estimate than for a general nonconvex problem. In our analysis, we derive localized deterministic perturbation error bounds for convex optimization problems which are of independent interest. To obtain our results, we only assume a metrically regular convex feasible set, possibly not satisfying the Slater condition and not having a metrically regular solution set. In this general setting, joint near feasibility and near optimality are guaranteed. If in addition the set satisfies the Slater condition, we obtain finite-sample simultaneous \emph{exact} feasibility and near-optimality guarantees (for a sufficiently small tolerance). Another contribution of our work is to present, as a proof of concept of our localized techniques, a persistence result for a variant of the LASSO estimator under very weak assumptions on the data-generating distribution.
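
    For readers unfamiliar with the SAA estimator the abstract analyzes, here is a generic sketch (our illustration; the heavy-tailed Student-t model and the shortfall loss are assumptions made for this example, not the paper's setup): the expectation objective is replaced by an empirical average over N samples, and the resulting convex program is solved directly.

```python
# Illustrative SAA sketch: approximate min_x E[f(x, xi)] by the empirical
# average over N sampled scenarios. Returns are drawn heavy-tailed
# (Student-t, df=3), as a stand-in for a non-sub-Gaussian distribution;
# the shortfall loss max(0, target - r'x) is convex in x, so the SAA
# problem is a convex program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
d, N = 5, 2000
returns = rng.standard_t(df=3, size=(N, d)) * 0.02 + 0.001  # heavy-tailed samples

x = cp.Variable(d, nonneg=True)          # portfolio weights
target = 0.0
# SAA objective: empirical average of the convex shortfall loss
saa_objective = cp.sum(cp.pos(target - returns @ x)) / N
prob = cp.Problem(cp.Minimize(saa_objective), [cp.sum(x) == 1])
prob.solve()
print("SAA near-optimal weights:", x.value)
```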

    Polynomial approximation method for stochastic programming

    Two-stage stochastic programming is an important part of the broader area of stochastic programming and is widely applied in multiple disciplines, such as financial management, risk management, and logistics. Two-stage stochastic programming is a natural extension of linear programming obtained by incorporating uncertainty into the model. This thesis solves the two-stage stochastic program using a novel approach. For most two-stage stochastic programming model instances, both the objective function and the constraints are convex but non-differentiable, e.g., piecewise linear, and are therefore usually solved by first-order gradient-type methods. When encountering large-scale problems, the performance of known methods, such as stochastic decomposition (SD) and stochastic approximation (SA), is poor in practice. This thesis replaces the objective function and constraints with their polynomial approximations, because the polynomial counterpart has the following benefits: first, the polynomial approximation preserves convexity; second, the polynomial approximation converges uniformly to the original objective/constraints with arbitrary accuracy; and third, the polynomial approximation provides good estimates not only of the original objectives/constraints but also of their gradients/subgradients. All these features enable us to apply convex optimization techniques to large-scale problems. Hence, the thesis applies the sample average approximation (SAA), the polynomial approximation method, and the steepest descent method in combination to solve large-scale problems effectively and efficiently.
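
    A minimal sketch of this pipeline on a toy problem (our illustration, not the thesis's code): a non-differentiable convex function f(x) = |x - 1| stands in for a piecewise-linear recourse function; it is replaced by a least-squares polynomial fit on sampled points, and the smooth surrogate is then minimized by steepest descent. Note that a plain least-squares fit does not itself guarantee convexity; the thesis's construction is designed to preserve it.

```python
# Toy version of the approach: smooth polynomial surrogate + steepest descent.
import numpy as np

f = lambda x: np.abs(x - 1.0)            # non-differentiable convex objective
xs = np.linspace(-2.0, 4.0, 200)         # SAA-style sample grid
coeffs = np.polynomial.polynomial.polyfit(xs, f(xs), deg=8)
p = np.polynomial.polynomial.Polynomial(coeffs)
dp = p.deriv()                           # surrogate supplies gradients everywhere

x, step = -2.0, 0.1                      # steepest descent on the surrogate
for _ in range(200):
    x -= step * dp(x)
print("surrogate minimizer:", x, "(true minimizer: 1.0)")
```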