
    Compact relaxations for polynomial programming problems

    Reduced RLT (rRLT) constraints are a special class of Reformulation-Linearization Technique (RLT) constraints. They apply to nonconvex (both continuous and mixed-integer) quadratic programming problems subject to systems of linear equality constraints. We present an extension to the general case of polynomial programming problems and discuss the derived convex relaxation. We then show how to perform rRLT constraint generation so as to reduce the number of inequality constraints in the relaxation, thereby making it more compact and faster to solve. We present computational results validating our approach.
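    As a hedged illustration of the basic RLT step this builds on (our own notation, not the paper's reduced construction): for a quadratic program whose variables satisfy linear equalities Ax = b, multiplying each equality row by each variable and linearizing the resulting products yields valid linear constraints for the relaxation, and the reduced (rRLT) idea is to generate only a subset of them without weakening the relaxation.

```latex
% Minimal RLT sketch, assuming a QP over x with linear equalities Ax = b
% (notation ours, for illustration only).
% Multiply row k of Ax = b by variable x_j:
%     \sum_i a_{ki} x_i x_j = b_k x_j .
% Linearize the products with symmetric variables X_{ij} = X_{ji}:
\sum_i a_{ki} X_{ij} = b_k x_j
  \qquad \text{for all rows } k \text{ and variables } x_j,
% which are valid linear constraints in (x, X); the rRLT variant keeps
% only a generating subset of these constraints.
```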

    Runway Exit Designs for Capacity Improvement Demonstrations. Phase 1: Algorithm Development

    A description and results are presented of a study to locate and design rapid runway exits under realistic airport conditions. The study developed a PC-based computer simulation-optimization program called REDIM (Runway Exit Design Interactive Model) to help future airport designers and planners locate optimal exits under various airport conditions. The model addresses three sets of problems that typically arise during runway exit design evaluations: the evaluation of existing runway configurations, the addition of new rapid runway turnoffs, and the design of new runway facilities. The model is highly interactive and allows a quick estimation of the expected value of runway occupancy time. Aircraft populations and airport environmental conditions are among the inputs the model uses to produce a viable runway exit location and geometric design solution. The results suggest that reductions in runway occupancy time (ROT) can be achieved with rapid runway exit designs optimally tailored to a given aircraft population. Reductions of 6 to 9 seconds are possible with the implementation of 30 m/sec variable-geometry exits.
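    The following is a minimal, hypothetical sketch (not REDIM itself) of the kind of calculation such a model performs: for each candidate exit location, estimate each aircraft type's runway occupancy time from simple touchdown-and-deceleration kinematics and pick the location with the lowest fleet-weighted expectation. All speeds, distances, and the aircraft mix are invented illustrative values.

```python
# Hypothetical sketch of expected runway-occupancy-time (ROT) estimation
# for candidate exit locations; simplified kinematics, illustrative data only.

AIRCRAFT_MIX = [
    # (share of operations, touchdown point m, touchdown speed m/s,
    #  braking deceleration m/s^2, exit speed m/s)
    (0.5, 450.0, 70.0, 1.5, 30.0),   # e.g. narrow-body
    (0.3, 500.0, 75.0, 1.4, 30.0),   # e.g. wide-body
    (0.2, 350.0, 55.0, 1.8, 30.0),   # e.g. regional
]

def rot_for_exit(exit_location, touchdown, v_td, decel, v_exit):
    """Time from threshold crossing to leaving the runway at exit_location."""
    t = touchdown / v_td                          # segment up to touchdown
    brake_dist = (v_td**2 - v_exit**2) / (2.0 * decel)
    if touchdown + brake_dist > exit_location:
        return None                               # cannot slow down before this exit
    t += (v_td - v_exit) / decel                  # braking segment
    coast = exit_location - touchdown - brake_dist
    t += coast / v_exit                           # roll to the exit at exit speed
    return t

def expected_rot(exit_location):
    total, usable_share = 0.0, 0.0
    for share, touchdown, v_td, decel, v_exit in AIRCRAFT_MIX:
        t = rot_for_exit(exit_location, touchdown, v_td, decel, v_exit)
        if t is not None:
            total += share * t
            usable_share += share
    return total / usable_share if usable_share else float("inf")

candidates = [1200.0, 1500.0, 1800.0, 2100.0]     # candidate exit locations (m)
best = min(candidates, key=expected_rot)
print(best, expected_rot(best))
```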

    Estimating Optimal Thinning and Rotation for Mixed-Species Timber Stands Using a Random Search Algorithm

    The problem of optimal density over time for even-aged, mixed-species stands is formulated as a nonlinear-integer programming problem with numbers of trees cut by species and diameter class as decision variables. The model is formulated using a stand-table projection growth model to predict mixed-species growth and stand structure. Optimal thinning and final harvest age are estimated simultaneously using heuristic random search algorithms. For sample problems with two species, random search methods provide near-optimal cutting strategies with very little computer time or memory. Optimal solutions are estimated for problems with eight initial species/diameter class groups, projected for up to three discrete growth periods. Such solution methods merit further study for evaluating complex stand- and forest-level decisions. Forest Sci. 31:303-315.
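    As a hedged illustration of the general random-search idea (not the paper's growth model or objective), the sketch below repeatedly samples integer cutting vectors within per-class limits, scores each with a placeholder stand-value function, and keeps the best. The value function and class data are invented for illustration.

```python
import random

# Hypothetical data: trees available in each species/diameter class and an
# invented per-tree value, used only to make the example run.
TREES_PER_CLASS = [120, 80, 60, 40]          # e.g. 2 species x 2 diameter classes
VALUE_PER_TREE  = [1.0, 2.5, 1.8, 4.0]

def stand_value(cut):
    """Placeholder objective: harvest value minus a crude penalty for
    over-cutting, standing in for lost future growth."""
    harvest = sum(c * v for c, v in zip(cut, VALUE_PER_TREE))
    residual = sum(TREES_PER_CLASS) - sum(cut)
    penalty = 0.0 if residual >= 100 else (100 - residual) ** 1.5
    return harvest - penalty

def random_search(iterations=20000, seed=0):
    rng = random.Random(seed)
    best_cut, best_val = None, float("-inf")
    for _ in range(iterations):
        # Sample an integer number of trees to cut in each class.
        cut = [rng.randint(0, n) for n in TREES_PER_CLASS]
        val = stand_value(cut)
        if val > best_val:
            best_cut, best_val = cut, val
    return best_cut, best_val

print(random_search())
```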

    Packing While Traveling: Mixed Integer Programming for a Class of Nonlinear Knapsack Problems

    Packing and vehicle routing problems play an important role in the area of supply chain management. In this paper, we introduce a non-linear knapsack problem that occurs when packing items along a fixed route and taking travel time into account. We investigate constrained and unconstrained versions of the problem and show that both are NP-hard. In order to solve the problems, we provide a pre-processing scheme as well as exact and approximate mixed integer programming (MIP) solutions. Our experimental results show the effectiveness of the MIP solutions and, in particular, point out that the approximate MIP approach often leads to near-optimal results within far less computation time than the exact approach.
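    As a hedged sketch of the kind of objective involved (the velocity model, parameter names, and data are assumptions in the spirit of traveling-thief-style formulations, not necessarily the paper's exact model): a vehicle visits cities in a fixed order, its speed drops linearly with carried weight, and a packing plan is scored as collected profit minus a rent rate times total travel time.

```python
# Hedged sketch: evaluate a packing plan for a fixed route where speed
# decreases linearly with load. All numbers and the exact velocity model
# are illustrative assumptions.

V_MAX, V_MIN = 1.0, 0.1      # speed when empty / fully loaded
CAPACITY     = 10.0          # knapsack capacity W
RENT_RATE    = 0.5           # cost per unit travel time

# Route as a list of cities; each city has a distance to the next city
# and a list of (weight, profit) items available there.
ROUTE = [
    {"dist_to_next": 5.0, "items": [(3.0, 10.0), (2.0, 4.0)]},
    {"dist_to_next": 7.0, "items": [(4.0, 12.0)]},
    {"dist_to_next": 0.0, "items": []},          # final city
]

def objective(plan):
    """plan[i] is the set of item indices picked up at city i."""
    weight, profit, time = 0.0, 0.0, 0.0
    for city, picked in zip(ROUTE, plan):
        for idx in picked:
            w, p = city["items"][idx]
            weight += w
            profit += p
        if weight > CAPACITY:
            return float("-inf")                 # infeasible plan
        # Speed after loading at this city, dropping linearly with weight.
        speed = V_MAX - (weight / CAPACITY) * (V_MAX - V_MIN)
        time += city["dist_to_next"] / speed
    return profit - RENT_RATE * time

print(objective([{0}, set(), set()]))            # pick only the first item
print(objective([{0, 1}, {0}, set()]))           # pick everything (heavier, slower)
```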

    A reformulation–linearization–convexification algorithm for optimal correction of an inconsistent system of linear constraints

    In this paper, an algorithm is introduced to find an optimal solution for an optimization problem that arises in total least squares with inequality constraints, and in the correction of infeasible linear systems of inequalities. The stated problem is a nonconvex program with a special structure that allows the use of a reformulation-linearization-convexification technique for its solution. A branch-and-bound method for finding a global optimum for this problem is introduced based on this technique. Some computational experiments are included to highlight the efficacy of the proposed methodology.
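    As a hedged, self-contained illustration of the kind of nonconvex program that arises (the notation is ours, not necessarily the paper's): correcting an infeasible system Ax >= b by minimally perturbing both A and b admits, for a fixed x, a closed-form per-constraint correction, which leads to a fractional objective in x.

```latex
% Hedged sketch of the optimal-correction problem for an infeasible system
% Ax >= b, perturbing both A and b (notation assumed, not the paper's):
\min_{H,\,p,\,x}\ \big\|[\,H \;\; p\,]\big\|_F^2
  \quad \text{s.t.} \quad (A+H)\,x \ge b + p .
% For fixed x, the smallest perturbation of row (a_i, b_i) that satisfies
% constraint i is its distance to the half-space it violates, giving
\min_{x}\ \sum_i \frac{\big(\,(b_i - a_i^{\top}x)_+\big)^2}{1 + \|x\|^2},
% a nonconvex fractional program to which reformulation-linearization-
% convexification and branch-and-bound can be applied.
```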

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ. (arXiv comment: 90 pages; accepted for publication in SIAM Review, Expository Research Papers; see SIAM Review for higher-quality figures.)
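    A hedged sketch of the central object, in our own notation: given an admissible set A of (response function, input measure) pairs compatible with the assumptions, the optimal bound on a failure probability is the value of an optimization over A, and concentration inequalities such as McDiarmid's give computable upper bounds on it when A encodes independence and bounded differences.

```latex
% Hedged sketch in our own notation (not necessarily the paper's).
% Optimal upper bound on the probability that the response G exceeds a:
\mathcal{U}(\mathcal{A}) \;=\; \sup_{(G,\mu)\in\mathcal{A}} \mu\big[\,G(X) \ge a\,\big].
% If A only encodes independent inputs X_1,\dots,X_n, a mean constraint, and
% componentwise oscillations of G bounded by c_1,\dots,c_n, then McDiarmid's
% inequality gives the computable bound
\mu\big[\,G(X) - \mathbb{E}_\mu[G(X)] \ge t\,\big]
  \;\le\; \exp\!\Big(\frac{-2t^2}{\sum_{i=1}^n c_i^2}\Big),
% and OUQ asks for the exact value of the sup under the same information.
```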

    Computation with Polynomial Equations and Inequalities arising in Combinatorial Optimization

    The purpose of this note is to survey a methodology to solve systems of polynomial equations and inequalities. The techniques we discuss use the algebra of multivariate polynomials with coefficients over a field to create large-scale linear algebra or semidefinite programming relaxations of many kinds of feasibility or optimization questions. We are particularly interested in problems arising in combinatorial optimization. (arXiv comment: 28 pages; survey paper.)
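    As a hedged example of the kind of encoding surveyed (a standard one from the literature, not necessarily the note's running example): graph 3-colorability can be written as a system of polynomial equations, and certificates of its infeasibility or feasibility can then be searched for with linear algebra or semidefinite programming.

```latex
% Standard polynomial encoding of 3-colorability of a graph G = (V, E),
% shown here as an illustration.
x_i^3 - 1 = 0 \qquad \text{for every vertex } i \in V,
x_i^2 + x_i x_j + x_j^2 = 0 \qquad \text{for every edge } \{i,j\} \in E.
% The system has a common complex solution iff G is 3-colorable: each x_i is a
% cube root of unity (a colour), and since
% x_i^3 - x_j^3 = (x_i - x_j)(x_i^2 + x_i x_j + x_j^2) = 0,
% the edge equation forces x_i \ne x_j on adjacent vertices.
```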

    An FPTAS for optimizing a class of low-rank functions over a polytope

    We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of non-linear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme does not require the assumption of quasi-concavity on the objective function. For the special case of quasi-concave function minimization, we give an alternative FPTAS that always returns a solution which is an extreme point of the polytope. Our technique can also be used to obtain an FPTAS for combinatorial optimization problems with non-linear objective functions, for example when the objective is a product of a fixed number of linear functions. We also show that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor, unless P = NP. We prove this by showing a similar hardness-of-approximation result for supermodular function minimization, a result that may be of independent interest.
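    A hedged sketch of the Pareto-front idea for the rank-2 special case of minimizing a product of two linear functions that are positive over the polytope (illustrative data and code, assuming numpy and scipy are available; this is not the paper's general scheme): sweep geometrically spaced upper bounds on the first linear function, minimize the second subject to each bound, and keep the best product found.

```python
# Hedged sketch: approximately minimize f(x) = (c1.x) * (c2.x), both assumed
# positive over the polytope {A x <= b, x >= 0}, by sweeping thresholds on c1.x.
# Illustrative data; requires numpy and scipy.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [-1.0, 2.0], [-1.0, -1.0]])   # last row: x1 + x2 >= 2
b = np.array([10.0, 8.0, -2.0])
c1 = np.array([1.0, 2.0])
c2 = np.array([3.0, 1.0])
eps = 0.1

# Range of achievable values of c1.x over the polytope.
lo = max(linprog(c1, A_ub=A, b_ub=b, bounds=(0, None)).fun, 1e-6)
hi = -linprog(-c1, A_ub=A, b_ub=b, bounds=(0, None)).fun

best_x, best_val = None, float("inf")
t = lo
while t <= hi * (1 + eps):
    # Minimize c2.x subject to c1.x <= t, approximating a Pareto point.
    res = linprog(c2, A_ub=np.vstack([A, c1]), b_ub=np.append(b, t),
                  bounds=(0, None))
    if res.success:
        val = float(c1 @ res.x) * float(c2 @ res.x)
        if val < best_val:
            best_x, best_val = res.x, val
    t *= (1 + eps)

print(best_x, best_val)
```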

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to think as humans do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose in mind, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity. (arXiv comment: 37 pages.)
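    A hedged sketch, in our own notation, of the Wald-style decision-theoretic object the paper has in view: an optimal statistical estimator framed as a worst-case-optimal decision rule over the set of probabilistic scenarios compatible with the available information.

```latex
% Hedged sketch in our own notation: a Wald-style minimax formulation of an
% "optimal statistical estimator" over an admissible information set A.
\theta^{\dagger} \;\in\; \arg\min_{\theta \in \Theta}\;
  \sup_{\mu \in \mathcal{A}}\;
  \mathbb{E}_{X \sim \mu}\big[\, \mathcal{L}\big(\theta(X),\, q(\mu)\big) \,\big],
% where \Theta is a set of candidate estimators (decision rules), A the set of
% scenarios compatible with the assumptions and the data-generating
% information, q(\mu) the quantity of interest, and L a loss function.
```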