
    Domain Decomposition for Stochastic Optimal Control

    This work proposes a method for solving linear stochastic optimal control (SOC) problems using sum of squares and semidefinite programming. Previous work had used polynomial optimization to approximate the value function, requiring a high polynomial degree to capture local phenomena. To improve the scalability of the method to problems of interest, a domain decomposition scheme is presented. By using local approximations, lower degree polynomials become sufficient, and both local and global properties of the value function are captured. The domain of the problem is split into a non-overlapping partition, with added constraints ensuring C^1 continuity. The Alternating Direction Method of Multipliers (ADMM) is used to optimize over each domain in parallel and ensure convergence on the boundaries of the partitions. This results in improved conditioning of the problem and allows for much larger and more complex problems to be addressed with improved performance. Comment: 8 pages. Accepted to CDC 201
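    As a rough schematic of how the pieces fit together (the operator \mathcal{L}, stage cost \ell, interface maps B_i, and penalty \rho below are assumed notation, not taken from the paper), the local polynomial approximations V_i on subdomains \Omega_i are tied together by C^1 interface constraints:

        \begin{aligned}
        &\max_{V_1,\dots,V_K} && \sum_i \int_{\Omega_i} V_i \, dx \\
        &\;\text{s.t.} && \mathcal{L} V_i + \ell \ \text{is SOS on } \Omega_i, \quad i = 1,\dots,K, \\
        & && V_i = V_j, \quad \nabla V_i = \nabla V_j \ \text{on each interface } \Gamma_{ij} \quad (C^1 \text{ continuity}).
        \end{aligned}

    ADMM then alternates parallel local SOS solves with a consensus step on the interface values,

        V_i^{k+1} \in \operatorname*{arg\,max}_{V_i\ \text{SOS-feasible on } \Omega_i} \Big\{ \int_{\Omega_i} V_i \, dx - \tfrac{\rho}{2} \big\| B_i V_i - z^k + u_i^k \big\|^2 \Big\}, \qquad
        z^{k+1} = \operatorname{avg}_i \big( B_i V_i^{k+1} \big), \qquad
        u_i^{k+1} = u_i^k + B_i V_i^{k+1} - z^{k+1},

    where B_i restricts V_i and its gradient to the interfaces, so that the dual variables u_i drive agreement across subdomain boundaries while the local solves remain independent.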

    Convex Optimal Uncertainty Quantification

    Optimal uncertainty quantification (OUQ) is a framework for numerical extreme-case analysis of stochastic systems with imperfect knowledge of the underlying probability distribution. This paper presents sufficient conditions under which an OUQ problem can be reformulated as a finite-dimensional convex optimization problem, for which efficient numerical solutions can be obtained. The sufficient conditions include that the objective function is piecewise concave and the constraints are piecewise convex. In particular, we show that piecewise concave objective functions may appear in applications where the objective is defined by the optimal value of a parameterized linear program. Comment: Accepted for publication in SIAM Journal on Optimization
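    As a hedged sketch of the starting point for such a reformulation (the feasible set \mathcal{A} and functions f, g_k are generic placeholders, not the paper's notation): by standard results on moment problems, the supremum over all admissible distributions can be taken over distributions supported on at most m + 1 points, leaving a finite-dimensional problem in the support points x_j and weights w_j,

        \sup_{\mu \in \mathcal{A}} \mathbb{E}_\mu[f(X)]
        \;=\;
        \sup_{x_j,\ w_j \ge 0} \Big\{ \sum_{j=1}^{m+1} w_j f(x_j) \ :\ \sum_j w_j = 1,\ \ \sum_j w_j g_k(x_j) \le 0,\ k = 1,\dots,m \Big\},
        \qquad
        \mathcal{A} = \big\{ \mu : \mathbb{E}_\mu[g_k(X)] \le 0,\ k = 1,\dots,m \big\}.

    This reduced problem is in general nonconvex in the support points x_j; the sufficient conditions above (piecewise concave objective, piecewise convex constraints) are what allow it to be rewritten as a convex program.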

    Approximations of Semicontinuous Functions with Applications to Stochastic Optimization and Statistical Estimation

    Upper semicontinuous (usc) functions arise in the analysis of maximization problems, distributionally robust optimization, and function identification, which includes many problems of nonparametric statistics. We establish that every usc function is the limit of a hypo-converging sequence of piecewise affine functions of the difference-of-max type and illustrate resulting algorithmic possibilities in the context of approximate solution of infinite-dimensional optimization problems. In an effort to quantify the ease with which classes of usc functions can be approximated by finite collections, we provide upper and lower bounds on covering numbers for bounded sets of usc functions under the Attouch-Wets distance. The result is applied in the context of stochastic optimization problems defined over spaces of usc functions. We establish confidence regions for optimal solutions based on sample average approximations and examine the accompanying rates of convergence. Examples from nonparametric statistics illustrate the results.
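    For concreteness, a piecewise affine function of the difference-of-max type referred to above can be written as (the vectors a_i, b_j and scalars \alpha_i, \beta_j are illustrative)

        f(x) \;=\; \max_{i = 1,\dots,p} \big( \langle a_i, x \rangle + \alpha_i \big) \;-\; \max_{j = 1,\dots,q} \big( \langle b_j, x \rangle + \beta_j \big),

    and hypo-convergence of a sequence of such functions to a usc limit means, roughly, that their hypographs converge as sets to the hypograph of the limit, which is the mode of convergence under which maximum values and near-maximizers behave stably.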

    Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry

    We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions leading to fast rates. We present several new results and collect in a unified view a variety of results scattered in the literature, often providing simplified proofs. Novel contributions include the analysis of infinite-dimensional convex minimization problems, allowing the case where minimizers might not exist. Further, we analyze the relation between different geometric conditions, and discuss novel connections with a priori conditions in linear inverse problems, including source conditions, restricted isometry properties and partial smoothness.
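    A minimal sketch of the forward-backward (proximal gradient) iteration may help make the object of study concrete; it is applied here to a lasso-type problem purely for illustration, and the data, step size 1/L, and iteration count are assumptions rather than anything taken from the paper:

        import numpy as np

        def forward_backward(grad_f, prox_g, x0, step, n_iter=500):
            # Forward-backward iteration: x <- prox_{step*g}(x - step*grad_f(x))
            x = x0.copy()
            for _ in range(n_iter):
                x = prox_g(x - step * grad_f(x), step)
            return x

        # Illustrative instance: min_x 0.5*||Ax - b||^2 + lam*||x||_1 (lasso)
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 20))
        b = rng.standard_normal(50)
        lam = 0.1

        grad_f = lambda x: A.T @ (A @ x - b)                                   # gradient of the smooth term
        prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0)  # soft-thresholding, prox of lam*||.||_1
        step = 1.0 / np.linalg.norm(A, 2) ** 2                                 # 1/L, L = Lipschitz constant of grad_f

        x_hat = forward_backward(grad_f, prox_g, np.zeros(A.shape[1]), step)

    In the worst case this scheme only guarantees an O(1/k) decrease of the objective; geometric conditions of the kind studied above (error bounds, Lojasiewicz-type inequalities) are what typically yield the faster rates.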

    A generalized moment approach to sharp bounds for conditional expectations

    In this paper, we address the problem of bounding conditional expectations when moment information of the underlying distribution and the random event conditioned upon are given. To this end, we propose an adapted version of the generalized moment problem which deals with this conditional information through a simple transformation. By exploiting conic duality, we obtain sharp bounds that can be used for distribution-free decision-making under uncertainty. Additionally, we derive computationally tractable mathematical programs for distributionally robust optimization (DRO) with side information by leveraging core ideas from ambiguity-averse uncertainty quantification and robust optimization, establishing a moment-based DRO framework for prescriptive stochastic programming. Comment: 43 pages, 5 figures
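    One generic way to pose the underlying task (this formulation, the event A, and the moment sequence m_k are illustrative; the paper's specific transformation may differ) is to bound E[h(X) | X \in A] over all distributions consistent with the given moments,

        \sup_{\mu} \ \frac{\mathbb{E}_\mu\big[ h(X)\, \mathbf{1}_{\{X \in A\}} \big]}{\mathbb{P}_\mu(X \in A)}
        \quad \text{s.t.} \quad \mathbb{E}_\mu[X^k] = m_k, \quad k = 1,\dots,d.

    The objective is linear-fractional in \mu, and one standard route is a Charnes-Cooper-type rescaling of the measure that makes both the objective and the constraints linear, after which the problem is a generalized moment problem whose conic dual yields sharp bounds of the kind described above.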