
    Sequential Randomized Algorithms for Convex Optimization in the Presence of Uncertainty

    In this paper, we propose new sequential randomized algorithms for convex optimization problems in the presence of uncertainty. A rigorous analysis of the theoretical properties of the solutions obtained by these algorithms is given, for full constraint satisfaction and partial constraint satisfaction, respectively. The proposed methods broaden the applicability of existing randomized methods to real-world applications involving a large number of design variables. Since the proposed approach does not provide a priori bounds on the sample complexity, extensive numerical simulations, dealing with an application to hard-disk drive servo design, are provided. These simulations attest to the effectiveness of the proposed solution. Comment: 18 pages. Submitted for publication to IEEE Transactions on Automatic Control.
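
    As a rough illustration of the general sequential idea described above (a generic scenario-sampling sketch, not the authors' specific algorithm), the Python snippet below draws random constraints, solves the resulting convex program, and validates the candidate solution on an independent sample, increasing the sample size until the empirical violation level is acceptable. The uncertainty model, objective, and sample-size schedule are hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    c = np.array([-1.0, -1.0])  # maximize x1 + x2 (hypothetical objective)

    def sample_constraints(n):
        # Draw n uncertain constraints a^T x <= 1 (hypothetical uncertainty model).
        A = 1.0 + 0.1 * rng.standard_normal((n, 2))
        b = np.ones(n)
        return A, b

    def sequential_scenario_lp(n0=50, growth=2, eps=0.05, n_valid=2000, max_iter=10):
        n = n0
        for _ in range(max_iter):
            A, b = sample_constraints(n)                        # scenario constraints
            x = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)]).x
            A_v, b_v = sample_constraints(n_valid)              # independent validation sample
            violation = np.mean(A_v @ x > b_v)                  # empirical violation probability
            if violation <= eps:                                # accept the candidate solution
                return x, n, violation
            n *= growth                                         # otherwise add scenarios and repeat
        return x, n, violation

    print(sequential_scenario_lp())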

    Lift & Project Systems Performing on the Partial-Vertex-Cover Polytope

    We study integrality gap (IG) lower bounds on strong LP and SDP relaxations derived by the Sherali-Adams (SA), Lovasz-Schrijver-SDP (LS+), and Sherali-Adams-SDP (SA+) lift-and-project (L&P) systems for the t-Partial-Vertex-Cover (t-PVC) problem, a variation of the classic Vertex-Cover problem in which only t edges need to be covered. t-PVC admits a 2-approximation using various algorithmic techniques, all relying on a natural LP relaxation. Starting from this LP relaxation, our main results assert that for every epsilon > 0, level-Theta(n) LPs or SDPs derived by all known L&P systems that have been used for positive algorithmic results (except the Lasserre hierarchy) have IGs of at least (1-epsilon)n/t, where n is the number of vertices of the input graph. Our lower bounds are nearly tight. Our results show that restricted yet powerful models of computation derived by many L&P systems fail to witness c-approximate solutions to t-PVC for any constant c and for t = O(n). This is one of the very few known examples of an intractable combinatorial optimization problem for which LP-based algorithms give a constant approximation ratio, yet lift-and-project LP and SDP tightenings of the same LP have unbounded IGs. We also show that the SDP that has given the best known algorithm for t-PVC has integrality gap n/t on instances that can be solved by the level-1 LP relaxation derived by the LS system. This constitutes another rare phenomenon where (even on specific instances) a static LP outperforms an SDP that has been used for the best approximation guarantee for the problem at hand. Finally, one of our main contributions is that we make explicit a new and simple methodology for constructing solutions to LP relaxations that almost trivially satisfy constraints derived by all SDP L&P systems known to be useful for positive algorithmic results (except the Lasserre system). Comment: 26 pages.
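
    For reference, the natural LP relaxation of t-PVC that these lift-and-project systems start from can be written as below; this is the standard formulation in our own notation, not quoted from the paper.

    \begin{align*}
    \min\ & \sum_{v \in V} x_v \\
    \text{s.t.}\ & x_u + x_v \ge y_e && \forall\, e = \{u,v\} \in E, \\
    & \sum_{e \in E} y_e \ge t, \\
    & 0 \le x_v \le 1,\ 0 \le y_e \le 1 && \forall\, v \in V,\ e \in E.
    \end{align*}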

    Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations

    We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs, in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as uncertainty quantification. Comment: 42 pages, 10 figures.
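
    The flavor of the reformulation can be sketched as follows (our paraphrase of the standard Wasserstein duality, under assumptions of the kind stated in the paper, and in our notation): for the empirical distribution \hat{P}_N on samples \hat{\xi}_1, \dots, \hat{\xi}_N and radius \varepsilon, the worst-case expected loss satisfies

    \sup_{Q:\, W(Q, \hat{P}_N) \le \varepsilon} \mathbb{E}_Q[\ell(x,\xi)]
    \;=\; \inf_{\lambda \ge 0} \left\{ \lambda \varepsilon + \frac{1}{N} \sum_{i=1}^{N} \sup_{\xi \in \Xi} \big( \ell(x,\xi) - \lambda \, \lVert \xi - \hat{\xi}_i \rVert \big) \right\},

    and for piecewise affine losses the inner suprema can be expressed through finitely many linear constraints, which is what yields a finite convex (often linear) program.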

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call \emph{Optimal Uncertainty Quantification} (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop \emph{Optimal Concentration Inequalities} (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ. Comment: 90 pages. Accepted for publication in SIAM Review (Expository Research Papers). See SIAM Review for higher-quality figures.
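
    In hedged outline (our notation, not the paper's), the OUQ bound on a probability of failure takes the form of an optimization over the admissible set \mathcal{A} of (response function, input distribution) pairs compatible with the given assumptions and information:

    \mathcal{U}(\mathcal{A}) \;:=\; \sup_{(f,\mu) \in \mathcal{A}} \mu\big[ f(X) \ge a \big],

    where a is a failure threshold; the finite-dimensional reductions mentioned above say, roughly, that under general conditions this supremum can be computed over measures supported on finitely many Dirac masses, which is what makes a numerical treatment feasible.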

    Generalized decomposition and cross entropy methods for many-objective optimization

    Decomposition-based algorithms for multi-objective optimization problems have increased in popularity in the past decade. Although their convergence to the Pareto optimal front (PF) is in several instances superior to that of Pareto-based algorithms, the problem of selecting a way to distribute or guide these solutions in a high-dimensional space has not been explored. In this work, we introduce a novel concept which we call generalized decomposition. Generalized decomposition provides a framework with which the decision maker (DM) can guide the underlying evolutionary algorithm toward specific regions of interest or the entire Pareto front with the desired distribution of Pareto optimal solutions. Additionally, it is shown that generalized decomposition simplifies many-objective problems by unifying the three performance objectives of multi-objective evolutionary algorithms – convergence to the PF, evenly distributed Pareto optimal solutions and coverage of the entire front – into only one, that of convergence. A framework built on generalized decomposition, together with an estimation of distribution algorithm (EDA) based on low-order statistics, namely the cross-entropy method (CE), is created to illustrate the benefits of the proposed concept for many-objective problems. This choice of EDA also enables a test of the hypothesis that EDAs based on low-order statistics can have performance comparable to that of more elaborate EDAs.
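
    A minimal Python sketch of the kind of machinery described here, assuming a Chebyshev scalarization for the decomposition step and a Gaussian cross-entropy update as the low-order-statistics EDA (the test problem, weight vector, and hyperparameters below are illustrative, not those of the paper):

    import numpy as np

    rng = np.random.default_rng(1)

    def objectives(x):
        # Toy bi-objective problem (hypothetical): trade-off between two quadratics.
        return np.array([np.sum(x**2), np.sum((x - 1.0)**2)])

    def chebyshev(x, w, z_star):
        # Chebyshev scalarization g(x | w, z*) = max_i w_i |f_i(x) - z*_i|.
        return np.max(w * np.abs(objectives(x) - z_star))

    def cross_entropy(w, z_star, dim=5, pop=100, elites=10, iters=50):
        # Cross-entropy method: an EDA that only tracks mean and standard deviation.
        mu, sigma = np.zeros(dim), np.ones(dim)
        for _ in range(iters):
            X = rng.normal(mu, sigma, size=(pop, dim))           # sample population
            scores = np.array([chebyshev(x, w, z_star) for x in X])
            elite = X[np.argsort(scores)[:elites]]               # keep best samples
            mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12  # update statistics
        return mu

    # One weight vector steers the search toward one region of the Pareto front;
    # a set of weight vectors chosen by the decision maker covers the regions of interest.
    print(cross_entropy(w=np.array([0.5, 0.5]), z_star=np.zeros(2)))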