
    Data-Driven Chance Constrained Optimization under Wasserstein Ambiguity Sets

    We present a data-driven approach for distributionally robust chance constrained optimization problems (DRCCPs). We consider the case where the decision maker has access to a finite number of samples or realizations of the uncertainty. The chance constraint is then required to hold for all distributions that are close to the empirical distribution constructed from the samples (where the distance between two distributions is defined via the Wasserstein metric). We first reformulate DRCCPs under data-driven Wasserstein ambiguity sets and a general class of constraint functions. When the feasibility set of the chance constraint program is replaced by its convex inner approximation, we present a convex reformulation of the program and show its tractability when the constraint function is affine in both the decision variable and the uncertainty. For constraint functions concave in the uncertainty, we show that a cutting-surface algorithm converges to an approximate solution of the convex inner approximation of DRCCPs. Finally, for constraint functions convex in the uncertainty, we compare the feasibility set with other sample-based approaches for chance constrained programs. Comment: A shorter version is submitted to the American Control Conference, 201
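    The convex inner approximation mentioned above can be illustrated with a standard worst-case CVaR bound over a 1-Wasserstein ball. The sketch below is not the paper's exact reformulation: it assumes a hypothetical affine constraint function f(x, xi) = xi^T x - 1, synthetic samples, and illustrative values for the Wasserstein radius eps and the violation level alpha.

```python
# Minimal sketch (not the paper's exact reformulation): a CVaR-based convex
# inner approximation of a Wasserstein distributionally robust chance
# constraint with an affine constraint function, using a standard
# worst-case-expectation bound over a 1-Wasserstein ball of radius eps.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, d = 50, 3                      # number of uncertainty samples, dimension
xi = rng.normal(size=(N, d))      # empirical samples xi_1, ..., xi_N
eps = 0.1                         # Wasserstein radius (illustrative)
alpha = 0.05                      # chance-constraint violation level
c = np.ones(d)                    # objective coefficients (illustrative)

x = cp.Variable(d)                # decision variable
t = cp.Variable()                 # CVaR auxiliary variable

# Constraint function f(x, xi) = xi^T x - 1, affine in both x and xi.
# Distributionally robust CVaR inner approximation:
#   t + (1/alpha) * ( mean_i (f(x, xi_i) - t)_+ + eps * ||x||_2 ) <= 0,
# where eps * ||x||_2 bounds the worst case over the Wasserstein ball.
emp = cp.sum(cp.pos(xi @ x - 1 - t)) / N
drccp = [t + (emp + eps * cp.norm(x, 2)) / alpha <= 0]

prob = cp.Problem(cp.Minimize(c @ x), drccp + [x >= -10, x <= 10])
prob.solve()
print("status:", prob.status, "objective:", prob.value)
```

    For an affine constraint function the Wasserstein term reduces to a norm penalty on the coefficient of the uncertainty, which is what keeps this approximation a convex (second-order cone) program.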

    K-Adaptability in Two-Stage Distributionally Robust Binary Programming

    We propose to approximate two-stage distributionally robust programs with binary recourse decisions by their associated K-adaptability problems, which pre-select K candidate second-stage policies here-and-now and implement the best of these policies once the uncertain parameters have been observed. We analyze the approximation quality and the computational complexity of the K-adaptability problem, and we derive explicit mixed-integer linear programming reformulations. We also provide efficient procedures for bounding the probabilities with which each of the K second-stage policies is selected.
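    As a rough illustration of the min-max-min structure of K-adaptability (not the paper's mixed-integer linear programming reformulation), the toy script below enumerates candidate binary second-stage policies for a small scenario-based selection problem; the cost data, the budget constraint, and the value of K are all made up for the example.

```python
# Brute-force illustration (toy, not the paper's MILP reformulation) of the
# K-adaptability idea: pre-select K binary second-stage policies here-and-now,
# then in each scenario implement the best of the K.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_items, n_scen, K = 4, 5, 2
costs = rng.uniform(1, 10, size=(n_scen, n_items))   # scenario-dependent item costs
budget = 2                                           # pick exactly `budget` items

# All feasible binary second-stage policies (choose `budget` of n_items items).
policies = [np.array(y) for y in itertools.product([0, 1], repeat=n_items)
            if sum(y) == budget]

def policy_cost(y, s):
    return costs[s] @ y

best_val, best_set = np.inf, None
for subset in itertools.combinations(policies, K):   # K candidate policies
    # worst case over scenarios of the best policy within the subset
    val = max(min(policy_cost(y, s) for y in subset) for s in range(n_scen))
    if val < best_val:
        best_val, best_set = val, subset

print("K-adaptability value:", round(best_val, 3))
print("selected policies:", [y.tolist() for y in best_set])
```

    The MILP reformulations derived in the paper avoid this enumeration; the brute-force version only shows what the K-adaptability objective evaluates.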

    Meshfree finite differences for vector Poisson and pressure Poisson equations with electric boundary conditions

    We demonstrate how meshfree finite difference methods can be applied to solve vector Poisson problems with electric boundary conditions. In these, the tangential velocity and the incompressibility of the vector field are prescribed at the boundary. Even on irregular domains with only convex corners, canonical nodal-based finite elements may converge to the wrong solution due to a version of the Babuska paradox. In contrast, straightforward meshfree finite differences converge to the true solution, and even high-order accuracy can be achieved in a simple fashion. The methodology is then extended to a specific pressure Poisson equation reformulation of the Navier-Stokes equations that possesses the same type of boundary conditions. The resulting numerical approach is second order accurate and allows for a simple switching between an explicit and implicit treatment of the viscosity terms. Comment: 19 pages, 7 figures
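    A minimal sketch of the meshfree finite difference idea, reduced to a single scalar Laplacian stencil on scattered points, is given below. It assumes a least-squares construction that enforces exactness on quadratic monomials; the neighbor locations, the test field, and the stencil size are arbitrary, and nothing here addresses the vector Poisson system or the electric boundary conditions of the paper.

```python
# Minimal sketch of a meshfree (generalized) finite-difference Laplacian
# stencil at one point, built by least squares on scattered neighbors.
# Illustrative scalar example only, not the paper's vector Poisson solver.
import numpy as np

rng = np.random.default_rng(2)
x0 = np.array([0.3, 0.4])                            # stencil center
h = 0.05
nbrs = x0 + h * rng.uniform(-1, 1, size=(12, 2))     # scattered neighbor points
d = nbrs - x0                                        # offsets (dx, dy)

# Monomial basis {1, dx, dy, dx^2, dx*dy, dy^2} evaluated at the offsets.
P = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1],
                     d[:, 0]**2, d[:, 0]*d[:, 1], d[:, 1]**2])
# Exactness conditions: the stencil applied to each basis monomial must equal
# its Laplacian at the center, i.e. P^T w = (0, 0, 0, 2, 0, 2).
b = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])
w, *_ = np.linalg.lstsq(P.T, b, rcond=None)          # minimum-norm stencil weights

u = lambda p: np.sin(p[:, 0]) * np.cos(p[:, 1])      # smooth test field
lap_exact = -2 * np.sin(x0[0]) * np.cos(x0[1])
lap_fd = w @ u(nbrs)
print("meshfree Laplacian:", lap_fd, " exact:", lap_exact)
```

    In a full solver this construction is repeated at every interior node to assemble a sparse linear system, with boundary nodes handled by the prescribed boundary conditions.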

    Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression

    In this paper, we study convex optimization methods for computing the trace norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al. [22], conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and finite samples. Computing the estimates, however, can be very challenging in practice because of the high dimensionality and the trace norm constraint. We explore a variant of Nesterov's smooth method [20] and interior point methods for computing the penalized least squares estimate. The performance of these methods is then compared using a set of randomly generated instances. We show that the variant of Nesterov's smooth method [20] generally outperforms the interior point method implemented in SDPT3 version 4.0 (beta) [19] substantially. Moreover, the former method is much more memory efficient. Comment: 27 pages
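    A simple first-order baseline for the trace norm penalized least squares problem is proximal gradient descent with singular-value soft-thresholding. The sketch below applies this baseline to synthetic data; it is not the paper's variant of Nesterov's smooth method nor the SDPT3 interior point solver, and the dimensions and regularization weight are chosen only for illustration.

```python
# Minimal proximal-gradient sketch for the trace-norm (nuclear-norm) penalized
# least-squares problem  min_B 0.5*||Y - X B||_F^2 + lam*||B||_* .
# A simple first-order baseline on synthetic data, not the paper's method.
import numpy as np

rng = np.random.default_rng(3)
n, p, q, r = 200, 30, 10, 3
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))   # low-rank coefficients
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))
lam = 5.0                                                    # regularization weight

L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
B = np.zeros((p, q))
for _ in range(500):
    grad = X.T @ (X @ B - Y)             # gradient of the smooth least-squares part
    Z = B - grad / L                     # gradient step
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    B = U @ np.diag(np.maximum(s - lam / L, 0.0)) @ Vt   # singular-value soft-thresholding

print("estimated rank:", np.linalg.matrix_rank(B, tol=1e-6))
print("relative error:", np.linalg.norm(B - B_true) / np.linalg.norm(B_true))
```

    An accelerated (Nesterov-type) variant would add a momentum step on B; in either case the per-iteration cost is dominated by the SVD inside the soft-thresholding step.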