
    Approximations of Semicontinuous Functions with Applications to Stochastic Optimization and Statistical Estimation

    Upper semicontinuous (usc) functions arise in the analysis of maximization problems, distributionally robust optimization, and function identification, which includes many problems of nonparametric statistics. We establish that every usc function is the limit of a hypo-converging sequence of piecewise affine functions of the difference-of-max type and illustrate the resulting algorithmic possibilities in the context of approximate solution of infinite-dimensional optimization problems. In an effort to quantify the ease with which classes of usc functions can be approximated by finite collections, we provide upper and lower bounds on covering numbers for bounded sets of usc functions under the Attouch-Wets distance. The result is applied in the context of stochastic optimization problems defined over spaces of usc functions. We establish confidence regions for optimal solutions based on sample average approximations and examine the accompanying rates of convergence. Examples from nonparametric statistics illustrate the results.
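
    For orientation, hypo-convergence of a sequence of functions admits a standard two-part pointwise characterization; the notation below is the conventional one and is added here, not taken from the paper:

    \[
    \text{(a)}\ \limsup_{\nu\to\infty} f^\nu(x^\nu) \le f(x)\ \text{whenever}\ x^\nu \to x, \qquad
    \text{(b)}\ \text{for every } x \text{ there exist } x^\nu \to x \text{ with } \liminf_{\nu\to\infty} f^\nu(x^\nu) \ge f(x).
    \]

    Condition (a) caps the limiting values along every approaching sequence, while (b) ensures the limit is attained along at least one; together they amount to set-convergence of hypographs, which is why hypo-convergence is the natural mode of approximation for maximization problems and usc functions.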

    Stability and Error Analysis for Optimization and Generalized Equations

    Stability and error analysis remain challenging for problems that lack regularity properties near solutions, are subject to large perturbations, and might be infinite dimensional. We consider nonconvex optimization and generalized equations defined on metric spaces and develop bounds on solution errors using the truncated Hausdorff distance applied to graphs and epigraphs of the underlying set-valued mappings and functions. In the process, we extend the calculus of such distances to cover compositions and other constructions that arise in nonconvex problems. The results are applied to constrained problems with feasible sets that might have empty interiors, the solution of KKT systems, and optimality conditions for difference-of-convex and composite functions.
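
    As a reminder of the central quantity, one common definition of the truncated Hausdorff distance between nonempty closed sets C and D, with \mathbb{B} the unit ball and truncation radius \rho \ge 0, is (our notation, not necessarily the paper's):

    \[
    \hat{d}_\rho(C, D) = \max\Big\{ \sup_{x \in C \cap \rho\mathbb{B}} \operatorname{dist}(x, D),\ \sup_{x \in D \cap \rho\mathbb{B}} \operatorname{dist}(x, C) \Big\}.
    \]

    Applying such distances to epigraphs of functions and graphs of set-valued mappings converts perturbations of problems into perturbations of sets, which is what makes the error bounds above possible.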

    Good and Bad Optimization Models: Insights from Rockafellians

    A basic requirement for a mathematical model is often that its solution (output) shouldn’t change much if the model’s parameters (input) are perturbed. This is important because the exact values of parameters may not be known and one would like to avoid being misled by an output obtained using incorrect values. Thus, it’s rarely enough to address an application by formulating a model, solving the resulting optimization problem, and presenting the solution as the answer. One would need to confirm that the model is suitable, i.e., “good,” and this can, at least in part, be achieved by considering a family of optimization problems constructed by perturbing parameters as quantified by a Rockafellian function. The resulting sensitivity analysis uncovers troubling situations with unstable solutions, which we refer to as “bad” models, and indicates better model formulations. Embedding an actual problem of interest within a family of problems via Rockafellians is also a primary path to optimality conditions as well as to computationally attractive alternative problems, which, under ideal circumstances and when properly tuned, may even furnish the minimum value of the actual problem. The tuning of these alternative problems turns out to be intimately tied to finding multipliers in optimality conditions and thus emerges as a main component of several optimization algorithms. In fact, the tuning amounts to solving certain dual optimization problems. In this tutorial, we’ll discuss the opportunities and insights afforded by Rockafellians.
    Funding: Office of Naval Research; Air Force Office of Scientific Research; MIPR F4FGA00350G004; MIPR N0001421WX0149.
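
    As a minimal illustration (in our notation; the tutorial develops the general theory), a Rockafellian for the problem of minimizing f_0(x) subject to g(x) \le 0 anchors a family of perturbed problems at \bar{u} = 0:

    \[
    F(u, x) = f_0(x) + \iota_{(-\infty, 0]}\big(g(x) + u\big),
    \]

    where \iota_A(y) = 0 if y \in A and \infty otherwise. Minimizing F(u, \cdot) solves the problem with the constraint perturbed to g(x) \le -u, and the behavior of the resulting minimum value as a function of u is precisely what separates “good” models from “bad” ones.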

    Set-Convergence and Its Application: A Tutorial

    Optimization problems, generalized equations, and the multitude of other variational problems invariably lead to the analysis of sets and set-valued mappings as well as their approximations. We review the central concept of set-convergence and explain its role in defining a notion of proximity between sets, especially for epigraphs of functions and graphs of set-valued mappings. The development leads to an approximation theory for optimization problems and generalized equations with profound consequences for the construction of algorithms. We also describe the role of set-convergence in variational geometry and subdifferentiability, with applications to optimality conditions. Examples illustrate the importance of set-convergence in stability analysis, error analysis, construction of algorithms, statistical estimation, and probability theory.
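
    For reference, Painlevé-Kuratowski set-convergence of C^\nu to C can be stated via inner and outer limits (standard notation, added here for convenience):

    \[
    \operatorname{LimInn} C^\nu = \{ x : \exists\, x^\nu \in C^\nu \text{ for all sufficiently large } \nu \text{ with } x^\nu \to x \}, \qquad
    \operatorname{LimOut} C^\nu = \{ x : \exists\, x^{\nu_k} \in C^{\nu_k} \text{ along a subsequence with } x^{\nu_k} \to x \},
    \]

    with C^\nu \to C when \operatorname{LimOut} C^\nu \subset C \subset \operatorname{LimInn} C^\nu. Applied to epigraphs, this yields epi-convergence, the notion under which minimum values and minimizers of approximating problems behave well.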

    Variational Analysis of Constrained M-Estimators

    We propose a unified framework for establishing existence of nonparametric M-estimators, computing the corresponding estimates, and proving their strong consistency when the class of functions is exceptionally rich. In particular, the framework addresses situations where the class of functions is complex, involving information and assumptions about shape, pointwise bounds, location of modes, height at modes, location of level-sets, values of moments, size of subgradients, continuity, distance to a "prior" function, multivariate total positivity, and any combination of the above. The class might be engineered to perform well in a specific setting even in the presence of little data. The framework views the class of functions as a subset of a particular metric space of upper semicontinuous functions under the Attouch-Wets distance. In addition to allowing a systematic treatment of numerous M-estimators, the framework yields consistency of plug-in estimators of modes of densities, maximizers of regression functions, level-sets of classifiers, and related quantities, and also enables computation by means of approximating parametric classes. We establish consistency through a one-sided law of large numbers, here extended to sieves, that relaxes assumptions of uniform laws, while ensuring global approximations even under model misspecification.
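
    Schematically, and in our notation rather than the paper's, the estimators covered take the form

    \[
    \hat{f}^\nu \in \operatorname*{argmin}_{f \in F^\nu} \frac{1}{\nu} \sum_{i=1}^{\nu} \psi\big(f, X^i\big),
    \]

    where \psi is a criterion such as a negative log-likelihood or a squared residual, F^\nu is a (possibly sieved) subset of the constrained class of usc functions, and X^1, X^2, \ldots is the sample; strong consistency then means convergence of \hat{f}^\nu to the true function in the Attouch-Wets sense.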

    Gradients and subgradients of buffered failure probability

    The article of record as published may be found at http://dx.doi.org/10.1016/j.orl.2021.10.004
    Gradients and subgradients are central to optimization and sensitivity analysis of buffered failure probabilities. We furnish a characterization of subgradients based on subdifferential calculus in the case of finite probability distributions and, under additional assumptions, also a gradient expression for general distributions. Several examples illustrate the application of the results, especially in the context of optimality conditions.
    Funding: Office of Naval Research; Air Force Office of Scientific Research; 18RT0599; MIPR N0001421WX0149.
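
    As background, the buffered failure probability of a performance random variable Y (with failure meaning Y > 0) can be defined through superquantiles; in common notation, not necessarily the article's, and modulo boundary cases,

    \[
    \bar{p}(Y) = 1 - \bar{\alpha}, \quad \text{where } \bar{\alpha} \text{ solves } \bar{q}_{\bar{\alpha}}(Y) = 0
    \quad \text{and} \quad
    \bar{q}_\alpha(Y) = \min_{c \in \mathbb{R}} \Big\{ c + \tfrac{1}{1-\alpha}\, E\big[\max\{0,\, Y - c\}\big] \Big\}
    \]

    is the \alpha-superquantile (conditional value-at-risk). Because \bar{p} dominates the ordinary failure probability and inherits favorable properties from superquantiles, its gradients and subgradients are the natural objects for the sensitivity analysis above.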

    Optimal Control of Uncertain Systems Using Sample Average Approximations

    The article of record as published may be found at http://dx.doi.org/10.1137/140983161
    In this paper, we introduce the uncertain optimal control problem of determining a control that minimizes the expectation of an objective functional for a system with parameter uncertainty in both dynamics and objective. We present a computational framework for the numerical solution of this problem, wherein an independently drawn random sample is taken from the space of uncertain parameters, and the expectation in the objective functional is approximated by a sample average. The result is a sequence of approximating standard optimal control problems that can be solved using existing techniques. To analyze the performance of this computational framework, we develop necessary conditions for both the original and approximate problems and show that the approximation based on sample averages is consistent in the sense of Polak [Optimization: Algorithms and Consistent Approximations, Springer, New York, 1997]. This property guarantees that accumulation points of a sequence of global minimizers (stationary points) of the approximate problems are global minimizers (stationary points) of the original problem. We show that the uncertain optimal control problem can further be approximated in a consistent manner by a sequence of nonlinear programs under mild regularity assumptions. In numerical examples, we demonstrate that the framework enables the solution of optimal search and optimal ensemble control problems.
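
    The sample-average step is straightforward to prototype. The sketch below, with made-up scalar dynamics, cost, and parameter distribution (none taken from the paper), shows the pattern: draw a fixed sample of the uncertain parameter, replace the expectation by an average over that sample, and hand the resulting deterministic problem to a standard optimizer.

    # Minimal sample-average-approximation (SAA) sketch for an uncertain optimal
    # control problem; dynamics, cost, and distribution are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    T, n_steps = 1.0, 20                       # horizon and Euler discretization
    dt = T / n_steps
    thetas = rng.normal(-1.0, 0.2, size=50)    # fixed sample of the uncertain parameter

    def cost(u, theta):
        """Euler discretization of J(u; theta) = int_0^T (x^2 + u^2) dt + x(T)^2
        for the scalar dynamics x' = theta * x + u, x(0) = 1."""
        x, J = 1.0, 0.0
        for k in range(n_steps):
            J += (x**2 + u[k]**2) * dt
            x += (theta * x + u[k]) * dt
        return J + x**2

    def saa_objective(u):
        # Sample average approximating the expectation over theta.
        return np.mean([cost(u, th) for th in thetas])

    res = minimize(saa_objective, np.zeros(n_steps), method="BFGS")
    print("SAA optimal value estimate:", res.fun)

    Consistency in Polak's sense concerns what happens as both the sample size and the discretization are refined; the sketch fixes both for brevity.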

    Fusion of Hard and Soft Information in Nonparametric Density Estimation

    This article discusses univariate density estimation in situations where the sample (hard information) is supplemented by “soft” information about the random phenomenon. These situations arise broadly in operations research and management science where practical and computational reasons severely limit the sample size, but problem structure and past experiences could be brought in. In particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum likelihood estimator that incorporates any, possibly random, soft information through an arbitrary collection of constraints. We illustrate the breadth of possibilities by discussing soft information about shape, support, continuity, smoothness, slope, location of modes, symmetry, density values, neighborhood of known density, moments, and distribution functions. The maximization takes place over spaces of extended real-valued semicontinuous functions and therefore allows us to consider essentially any conceivable density as well as convenient exponential transformations. The infinite dimensionality of the optimization problem is overcome by means of approximating splines tailored to these spaces. To facilitate the treatment of small samples, the construction of these splines is decoupled from the sample. We discuss existence and uniqueness of the estimator, examine consistency under increasing hard and soft information, and give rates of convergence. Numerical examples illustrate the value of soft information, the ability to generate a family of diverse densities, and the effect of misspecification of soft information.
    Funding: U.S. Army Research Laboratory and U.S. Army Research Office grants 00101-80683, W911NF-10-1-0246, and W911NF-12-1-0273.
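
    In schematic form (our notation, not the article's), the estimator maximizes the log-likelihood over the constrained class:

    \[
    \hat{h}^n \in \operatorname*{argmax}_{h \in H} \frac{1}{n} \sum_{i=1}^{n} \ln h\big(x^i\big),
    \]

    where x^1, \ldots, x^n is the sample and H encodes the soft information, for instance all densities that are unimodal with mode in a given interval and integrate to one. An exponential transformation h = e^s keeps h positive and renders the log-likelihood linear in s, one motivation for the transformations mentioned above.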

    A variational approach to a cumulative distribution function estimation problem under stochastic ambiguity

    We propose a method for finding a cumulative distribution function (cdf) that minimizes the (regularized) distance to a given cdf, while belonging to an ambiguity set constructed relative to another cdf and, possibly, incorporating soft information. Our method embeds the family of cdfs into the space of upper semicontinuous functions endowed with the hypo-distance. In this setting, we present an approximation scheme based on epi-splines, defined as piecewise polynomial functions, and use bounds for estimating the hypo-distance. Under appropriate hypotheses, we guarantee that the cluster points of the sequence of minimizers of the resulting approximating problems are solutions to a limiting problem. In addition, we describe a large class of functions that satisfy these hypotheses. The approximating method produces a linear-programming-based approximation scheme, enabling us to develop an algorithm based on off-the-shelf solvers. The convergence of our proposed approximation is illustrated by numerical examples for the bivariate case, one of which entails a Lipschitz condition.
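
    Schematically, and with symbols that are ours rather than the paper's, the problem reads

    \[
    \min_{F \in \mathcal{C}} \ d\big(F, F^{0}\big) + r(F) \quad \text{subject to} \quad d\big(F, F^{a}\big) \le \theta,
    \]

    where d is the hypo-distance on the space of usc functions, F^0 is the given cdf, F^a anchors the ambiguity set of radius \theta, r is an optional regularizer, and \mathcal{C} collects the soft information; restricting F to epi-splines, i.e., piecewise polynomial functions, is what yields the linear-programming formulation.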