
    Stability and Error Analysis for Optimization and Generalized Equations

    Stability and error analysis remain challenging for problems that lack regularity properties near solutions, are subject to large perturbations, and might be infinite-dimensional. We consider nonconvex optimization and generalized equations defined on metric spaces and develop bounds on solution errors using the truncated Hausdorff distance applied to graphs and epigraphs of the underlying set-valued mappings and functions. In the process, we extend the calculus of such distances to cover compositions and other constructions that arise in nonconvex problems. The results are applied to constrained problems with feasible sets that might have empty interiors, solution of KKT systems, and optimality conditions for difference-of-convex functions and composite functions.

    Approximations of Semicontinuous Functions with Applications to Stochastic Optimization and Statistical Estimation

    Upper semicontinuous (usc) functions arise in the analysis of maximization problems, distributionally robust optimization, and function identification, which includes many problems of nonparametric statistics. We establish that every usc function is the limit of a hypo-converging sequence of piecewise affine functions of the difference-of-max type and illustrate resulting algorithmic possibilities in the context of approximate solution of infinite-dimensional optimization problems. In an effort to quantify the ease with which classes of usc functions can be approximated by finite collections, we provide upper and lower bounds on covering numbers for bounded sets of usc functions under the Attouch-Wets distance. The result is applied in the context of stochastic optimization problems defined over spaces of usc functions. We establish confidence regions for optimal solutions based on sample average approximations and examine the accompanying rates of convergence. Examples from nonparametric statistics illustrate the results.

    Variational Analysis of Constrained M-Estimators

    We propose a unified framework for establishing existence of nonparametric M-estimators, computing the corresponding estimates, and proving their strong consistency when the class of functions is exceptionally rich. In particular, the framework addresses situations where the class of functions is complex, involving information and assumptions about shape, pointwise bounds, location of modes, height at modes, location of level-sets, values of moments, size of subgradients, continuity, distance to a "prior" function, multivariate total positivity, and any combination of the above. The class might be engineered to perform well in a specific setting even in the presence of little data. The framework views the class of functions as a subset of a particular metric space of upper semicontinuous functions under the Attouch-Wets distance. In addition to allowing a systematic treatment of numerous M-estimators, the framework yields consistency of plug-in estimators of modes of densities, maximizers of regression functions, level-sets of classifiers, and related quantities, and also enables computation by means of approximating parametric classes. We establish consistency through a one-sided law of large numbers, here extended to sieves, that relaxes assumptions of uniform laws while ensuring global approximations even under model misspecification.

    Path Optimization for the Resource-Constrained Searcher

    Naval Research Logistics.
    We formulate and solve a discrete-time path-optimization problem where a single searcher, operating in a discretized 3-dimensional airspace, looks for a moving target in a finite set of cells. The searcher is constrained by maximum limits on the consumption of several resources such as time, fuel, and risk along any path. We develop a specialized branch-and-bound algorithm for this problem that utilizes several network reduction procedures as well as a new bounding technique based on Lagrangian relaxation and network expansion. The resulting algorithm outperforms a state-of-the-art algorithm for solving time-constrained problems and is also the first algorithm to solve multi-constrained problems.
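    The Lagrangian bounding step described in the abstract can be illustrated with a minimal sketch (a hypothetical graph encoding, not the authors' implementation): dualizing a single resource constraint with a multiplier lam >= 0 turns the constrained problem into an ordinary shortest-path problem whose optimal value, minus lam times the resource budget, lower-bounds the constrained optimum.

```python
import heapq

def shortest_path(graph, source, target, weight):
    """Dijkstra's algorithm with edge weights given by weight(edge_data)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, data in graph.get(u, {}).items():
            nd = d + weight(data)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def lagrangian_bound(graph, source, target, budget, lam):
    """Dualize the resource constraint sum r_e <= budget with multiplier
    lam >= 0; the relaxed shortest path, minus lam * budget, is a lower
    bound on the constrained optimum (weak duality)."""
    relaxed = shortest_path(graph, source, target,
                            lambda d: d["cost"] + lam * d["resource"])
    return relaxed - lam * budget
```

    Searching over lam (e.g., by bisection on the dual) tightens the bound, which is how such relaxations are typically used inside branch-and-bound.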

    On Solving Large-Scale Finite Minimax Problems using Exponential Smoothing

    Journal of Optimization Theory and Applications, Vol. 148, No. 2, pp. 390-421

    Fusion of Hard and Soft Information in Nonparametric Density Estimation

    This article discusses univariate density estimation in situations when the sample (hard information) is supplemented by “soft” information about the random phenomenon. These situations arise broadly in operations research and management science where practical and computational reasons severely limit the sample size, but problem structure and past experiences could be brought in. In particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum likelihood estimator that incorporates any, possibly random, soft information through an arbitrary collection of constraints. We illustrate the breadth of possibilities by discussing soft information about shape, support, continuity, smoothness, slope, location of modes, symmetry, density values, neighborhood of known density, moments, and distribution functions. The maximization takes place over spaces of extended real-valued semicontinuous functions and therefore allows us to consider essentially any conceivable density as well as convenient exponential transformations. The infinite dimensionality of the optimization problem is overcome by approximating splines tailored to these spaces. To facilitate the treatment of small samples, the construction of these splines is decoupled from the sample. We discuss existence and uniqueness of the estimator, examine consistency under increasing hard and soft information, and give rates of convergence. Numerical examples illustrate the value of soft information, the ability to generate a family of diverse densities, and the effect of misspecification of soft information.
    U.S. Army Research Laboratory and the U.S. Army Research Office, grants 00101-80683, W911NF-10-1-0246, and W911NF-12-1-0273.
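    The article's estimator operates over a function space, but the basic effect of soft information on a maximum likelihood estimate can be seen in a one-parameter sketch (illustrative only, not the paper's method): for a normal mean with known variance, a soft constraint that the mean lies in a known interval simply projects the sample mean onto that interval, since the log-likelihood is concave in the mean.

```python
def constrained_mle_normal_mean(sample, lower, upper):
    """Constrained MLE of a normal mean (known variance) under the
    soft-information constraint lower <= mean <= upper.  The Gaussian
    log-likelihood is concave in the mean and peaks at the sample mean,
    so the constrained maximizer is the projection of the sample mean
    onto [lower, upper]."""
    xbar = sum(sample) / len(sample)
    return min(max(xbar, lower), upper)
```

    When the soft information is binding, the estimate sits on the boundary of the constraint set; otherwise it agrees with the unconstrained MLE.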

    Routing Military Aircraft with a Constrained Shortest-Path Algorithm

    Military Operations Research, to appear.
    We formulate and solve aircraft-routing problems that arise when planning missions for military aircraft that are subject to ground-based threats such as surface-to-air missiles. We use a constrained-shortest path (CSP) model that discretizes the relevant airspace into a grid of vertices representing potential waypoints, and connects vertices with directed edges to represent potential flight segments. The model is flexible: It can route any type of manned or unmanned aircraft; it can incorporate any number of threats; and it can incorporate, in the objective function or as side constraints, numerous mission-specific metrics such as risk, fuel consumption, and flight time. We apply a new algorithm for solving the CSP problem and present computational results for the routing of a high-altitude F/A-18 strike group and the routing of a medium-altitude unmanned aerial vehicle. The objectives minimize risk from ground-based threats while constraints limit fuel consumption and/or flight time. Run times to achieve a near-optimal solution range from fractions of a second to 80 seconds on a personal computer. We also demonstrate that our methods easily extend to handle turn-radius constraints and round-trip routing.
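    A bare-bones version of CSP solving (a hypothetical sketch, not the authors' algorithm) propagates (cost, resource) labels over the graph and discards labels dominated at the same vertex, i.e., labels that another label beats on both cost and resource use:

```python
import heapq

def constrained_shortest_path(graph, source, target, budget):
    """Label-setting search for the cheapest source-target path whose
    total resource use stays within budget.  Each label is a
    (cost, resource) pair; dominated labels are pruned."""
    labels = {source: [(0.0, 0.0)]}
    pq = [(0.0, 0.0, source)]
    best = float("inf")
    while pq:
        cost, res, u = heapq.heappop(pq)
        if u == target:
            best = min(best, cost)
            continue
        for v, data in graph.get(u, {}).items():
            nc, nr = cost + data["cost"], res + data["resource"]
            if nr > budget:
                continue  # violates the resource constraint
            if any(c <= nc and r <= nr for c, r in labels.get(v, [])):
                continue  # dominated by an existing label at v
            labels.setdefault(v, []).append((nc, nr))
            heapq.heappush(pq, (nc, nr, v))
    return best
```

    On a discretized airspace grid, the same scheme applies with "cost" as risk and "resource" as fuel or flight time; practical solvers add the reduction and bounding machinery the abstract describes.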

    Optimal Control of Uncertain Systems Using Sample Average Approximations

    The article of record as published may be found at http://dx.doi.org/10.1137/140983161
    In this paper, we introduce the uncertain optimal control problem of determining a control that minimizes the expectation of an objective functional for a system with parameter uncertainty in both dynamics and objective. We present a computational framework for the numerical solution of this problem, wherein an independently drawn random sample is taken from the space of uncertain parameters, and the expectation in the objective functional is approximated by a sample average. The result is a sequence of approximating standard optimal control problems that can be solved using existing techniques. To analyze the performance of this computational framework, we develop necessary conditions for both the original and approximate problems and show that the approximation based on sample averages is consistent in the sense of Polak [Optimization: Algorithms and Consistent Approximations, Springer, New York, 1997]. This property guarantees that accumulation points of a sequence of global minimizers (stationary points) of the approximate problem are global minimizers (stationary points) of the original problem. We show that the uncertain optimal control problem can further be approximated in a consistent manner by a sequence of nonlinear programs under mild regularity assumptions. In numerical examples, we demonstrate that the framework enables the solution of optimal search and optimal ensemble control problems.
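    The sample average approximation idea is easy to sketch outside the control setting (a hypothetical example, not from the paper): replacing the expectation E[(x - xi)^2] by an average over drawn samples yields an approximate problem whose minimizer approaches the true minimizer E[xi] as the sample grows.

```python
import random

def sample_average(objective, x, samples):
    """Approximate E[objective(x, xi)] by averaging over drawn samples."""
    return sum(objective(x, xi) for xi in samples) / len(samples)

random.seed(0)
samples = [random.gauss(2.0, 1.0) for _ in range(5000)]  # xi ~ N(2, 1)

# For objective (x - xi)^2 the true minimizer is E[xi] = 2.  Minimizing
# the sample average over a fine grid lands near it for a large sample.
grid = [i / 100 for i in range(401)]
x_saa = min(grid,
            key=lambda x: sample_average(lambda y, xi: (y - xi) ** 2,
                                         x, samples))
```

    In the uncertain control setting, each fixed sample plays this role for the objective functional, turning the stochastic problem into a sequence of standard optimal control problems.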

    Good and Bad Optimization Models: Insights from Rockafellians

    A basic requirement for a mathematical model is often that its solution (output) shouldn’t change much if the model’s parameters (input) are perturbed. This is important because the exact values of parameters may not be known and one would like to avoid being misled by an output obtained using incorrect values. Thus, it’s rarely enough to address an application by formulating a model, solving the resulting optimization problem and presenting the solution as the answer. One would need to confirm that the model is suitable, i.e., “good,” and this can, at least in part, be achieved by considering a family of optimization problems constructed by perturbing parameters as quantified by a Rockafellian function. The resulting sensitivity analysis uncovers troubling situations with unstable solutions, which we refer to as “bad” models, and indicates better model formulations. Embedding an actual problem of interest within a family of problems via Rockafellians is also a primary path to optimality conditions as well as computationally attractive, alternative problems, which under ideal circumstances, and when properly tuned, may even furnish the minimum value of the actual problem. The tuning of these alternative problems turns out to be intimately tied to finding multipliers in optimality conditions and thus emerges as a main component of several optimization algorithms. In fact, the tuning amounts to solving certain dual optimization problems. In this tutorial, we’ll discuss the opportunities and insights afforded by Rockafellians.
    Office of Naval Research; Air Force Office of Scientific Research; MIPR F4FGA00350G004; MIPR N0001421WX0149.
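    A tiny numerical example (not from the tutorial) of the kind of instability such perturbation analysis uncovers: minimizing x subject to x^2 <= u has solution -sqrt(u), so near u = 0 a parameter perturbation of size 1e-6 moves the solution by about 1e-3, a thousandfold amplification.

```python
def solve(u):
    """Minimizer of: min x  subject to  x**2 <= u  (requires u >= 0).
    The feasible set is the interval [-sqrt(u), sqrt(u)], so the
    minimizer is its left endpoint, -sqrt(u)."""
    if u < 0:
        raise ValueError("problem is infeasible for u < 0")
    return -(u ** 0.5)
```

    The solution map u -> -sqrt(u) is continuous but not Lipschitz at u = 0, which is exactly the sort of "bad" behavior a Rockafellian-based sensitivity analysis flags and a reformulation might repair.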

    Reliability-based optimal design using sample average approximations
