
    Partially distributed outer approximation

    This paper presents a novel partially distributed outer approximation algorithm, named PaDOA, for solving a class of structured mixed-integer convex programming problems to global optimality. The proposed scheme uses an iterative outer approximation method for coupled mixed-integer optimization problems with separable convex objective functions, affine coupling constraints, and compact domain. PaDOA proceeds by alternating between solving large-scale structured mixed-integer linear programming problems and partially decoupled mixed-integer nonlinear programming subproblems that comprise far fewer integer variables. We establish conditions under which PaDOA converges to global minimizers after a finite number of iterations and verify these properties with applications to thermostatically controlled loads and to mixed-integer regression.
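    The outer-approximation idea the abstract builds on can be sketched on a toy single-variable problem. This is an illustrative simplification, not PaDOA itself: the objective, the integer domain, and the enumeration-based master solve are our own choices (in practice the master problem is a structured MILP).

```python
def f(x):
    return (x - 2.3) ** 2          # separable convex objective (toy choice)

def grad(x):
    return 2.0 * (x - 2.3)

domain = range(-5, 6)              # integer feasible set {-5, ..., 5}
cuts = []                          # gradient cuts collected so far
x_k, best_x, best_val = 0, None, float("inf")

for _ in range(20):
    val = f(x_k)
    if val < best_val:             # update incumbent (upper bound)
        best_x, best_val = x_k, val
    cuts.append((x_k, val, grad(x_k)))
    # Master problem: minimize the piecewise-linear underestimator built
    # from the cuts. Enumeration stands in for an MILP solver here.
    lb = lambda x: max(v + g * (x - xk) for xk, v, g in cuts)
    x_k = min(domain, key=lb)
    if lb(x_k) >= best_val - 1e-9: # lower bound meets incumbent: optimal
        break

print(best_x)  # prints 2, the integer minimizer of f
```

    The finite-convergence argument mirrors the general one: each iteration either tightens the piecewise-linear lower bound or certifies that it has met the incumbent upper bound.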

    Local convergence of a sequential quadratic programming method for a class of nonsmooth nonconvex objectives

    A sequential quadratic programming (SQP) algorithm is designed for nonsmooth optimization problems with upper-C^2 objective functions. Upper-C^2 functions are locally equivalent to difference-of-convex (DC) functions with smooth convex parts. They arise naturally in many applications, such as certain classes of solutions to parametric optimization problems, e.g., recourse of stochastic programming, and projection onto closed sets. The proposed algorithm conducts line search and adopts an exact penalty merit function. The potential inconsistency due to the linearization of constraints is addressed through relaxation, similar to that of Sl_1QP. We show that the algorithm is globally convergent under reasonable assumptions. Moreover, we study the local convergence behavior of the algorithm under additional assumptions of Kurdyka-{\L}ojasiewicz (KL) properties, which have been applied to many nonsmooth optimization problems. Due to the nonconvex nature of the problems, a special potential function is used to analyze local convergence. We show that under acceptable assumptions, upper bounds on local convergence can be proven. Additionally, we show that for a large number of optimization problems with upper-C^2 objectives, the corresponding potential functions are indeed KL functions. Numerical experiments are performed with a power grid optimization problem that is consistent with the assumptions and analysis in this paper.
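    One building block named in the abstract, a line search on an exact penalty merit function, can be sketched on a toy equality-constrained problem. Everything here is our own illustrative choice (the quadratic objective, the closed-form subproblem direction, and a simplified sufficient-decrease test); it is not the paper's algorithm.

```python
# Toy problem: min f(x) s.t. c(x) = 0, with f = x^2 + y^2, c = x + y - 1.
# Merit function: exact l1 penalty phi(x) = f(x) + mu * |c(x)|.

def f(x):
    return x[0] ** 2 + x[1] ** 2

def c(x):
    return x[0] + x[1] - 1.0

def merit(x, mu=10.0):
    return f(x) + mu * abs(c(x))

def sqp_direction(x):
    # For this quadratic objective and linear constraint, the SQP
    # subproblem has the closed-form solution below (multiplier = -1).
    return (0.5 - x[0], 0.5 - x[1])

def line_search(x, d, mu=10.0, beta=0.5, sigma=1e-4):
    # Armijo backtracking on the merit function; the decrease test is a
    # simplification of the usual directional-derivative condition.
    phi0 = merit(x, mu)
    alpha = 1.0
    while alpha > 1e-12:
        trial = (x[0] + alpha * d[0], x[1] + alpha * d[1])
        if merit(trial, mu) <= phi0 - sigma * alpha:
            return trial
        alpha *= beta
    return x                        # no acceptable step: stay put

x = (3.0, -2.0)
for _ in range(5):
    x = line_search(x, sqp_direction(x))
print(x)  # converges to the constrained minimizer (0.5, 0.5)
```

    For a sufficiently large penalty weight mu, descent on the merit function drives both optimality and feasibility, which is what makes the penalty "exact".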

    STRUCTURAL OPTIMIZATION USING PARAMETRIC PROGRAMMING METHOD

    The Parametric Programming method is investigated to consider its applicability to structural optimization problems. It is used to solve optimization problems that have design variables as implicit functions of some independent input parameter(s). It provides optimal solutions as a parametric function of the input parameter(s) for the entire parameter space of interest. It does not require the detailed discrete optimizations needed at a large number of parameter values as in traditional non-parametric optimization. Parametric programming is widely used in optimal control, model predictive control, scheduling, process synthesis, and material design under uncertainty due to the above-mentioned benefits. Its benefits are, however, still unexplored in the field of structural optimization. Parametric programming could, for instance, be used to aid designers in identifying and optimizing for uncertain loading conditions in complex systems. The first objective of this thesis is to identify a suitable multi-parametric programming algorithm among the many available in the literature to solve structural optimization problems. Once selected, the second goal is to implement the chosen algorithm and solve single-parametric and multi-parametric sizing optimization problems and shape optimization problems, and to use multi-parametric programming as a multi-objective optimization tool in structural optimization. In this regard, sizing optimization of truss structures and shape optimization of beams, with load magnitude and load direction as varying parameters, are solved for single- and multi-parameter static and/or dynamic load cases. Parametric programming is also used to solve the multi-objective optimization of a honeycomb panel, and the results are compared with those from non-parametric optimization conducted using commercial optimization software. Accuracy of results and computational time are considered. From these studies, inferences are drawn about the issues and benefits of using parametric programming in structural optimization.
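    The core parametric idea, an optimal solution expressed as an explicit function of the parameter rather than recomputed per parameter value, can be shown on a one-parameter toy problem of our own choosing (not from the thesis):

```python
# For  min_x (x - theta)^2  s.t.  0 <= x <= 1,  the optimizer x*(theta)
# is piecewise affine in the parameter theta, with three "critical
# regions" -- the hallmark structure of multi-parametric programming.

def x_star(theta):
    if theta < 0.0:      # region 1: lower bound active
        return 0.0
    if theta > 1.0:      # region 2: upper bound active
        return 1.0
    return theta         # region 3: interior optimum is feasible

# One parametric solve replaces a sweep of separate optimizations:
for theta in (-0.5, 0.3, 2.0):
    print(theta, x_star(theta))   # -0.5 -> 0.0, 0.3 -> 0.3, 2.0 -> 1.0
```

    In realistic multi-parametric programs the same structure appears in higher dimension: polyhedral critical regions, each carrying an affine map from parameters to optimizers.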

    Bicriteria Network Design Problems

    We study a general class of bicriteria network design problems. A generic problem in this class is as follows: given an undirected graph and two minimization objectives (under different cost functions), with a budget specified on the first, find a subgraph from a given subgraph class that minimizes the second objective subject to the budget on the first. We consider three different criteria: the total edge cost, the diameter, and the maximum degree of the network. Here, we present the first polynomial-time approximation algorithms for a large class of bicriteria network design problems for the above-mentioned criteria. The following general types of results are presented. First, we develop a framework for bicriteria problems and their approximations. Second, when the two criteria are the same (note that the cost functions continue to be different), we present a ``black box'' parametric search technique. This black box takes as input an (approximation) algorithm for the unicriterion situation and generates an approximation algorithm for the bicriteria case with only a constant-factor loss in the performance guarantee. Third, when the two criteria are the diameter and the total edge cost, we use a cluster-based approach to devise approximation algorithms; the solutions output violate both criteria by a logarithmic factor. Finally, for the class of treewidth-bounded graphs, we provide pseudopolynomial-time algorithms for a number of bicriteria problems using dynamic programming. We show how these pseudopolynomial-time algorithms can be converted to fully polynomial-time approximation schemes using a scaling technique.
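    The pseudopolynomial dynamic-programming flavor mentioned in the abstract can be sketched on the simplest bicriteria instance: a cheapest s-t path subject to a budget on a second edge cost (here "delay"). The graph and numbers are illustrative, not from the paper.

```python
import math

# Directed edges: (u, v, cost, delay) on a small 4-node graph.
edges = [(0, 1, 1, 3), (0, 2, 4, 1), (1, 3, 1, 3), (2, 3, 1, 1)]
n, budget = 4, 4

# dp[v][b] = minimum cost of reaching v with total delay at most b.
# The table size is n * (budget + 1): pseudopolynomial in the budget.
dp = [[math.inf] * (budget + 1) for _ in range(n)]
for b in range(budget + 1):
    dp[0][b] = 0                    # node 0 is the source s

changed = True
while changed:                      # Bellman-Ford-style relaxation
    changed = False
    for u, v, cost, delay in edges:
        for b in range(delay, budget + 1):
            cand = dp[u][b - delay] + cost
            if cand < dp[v][b]:
                dp[v][b] = cand
                changed = True

print(dp[3][budget])  # prints 5: path 0 -> 2 -> 3 (cost 5, delay 2)
```

    The cheaper path 0 -> 1 -> 3 (cost 2) is rejected because its delay 6 exceeds the budget of 4; rounding the delays to a coarser grid is the scaling step that turns such tables into an FPTAS.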

    Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates

    Optimization problems involving mixed variables, i.e., variables of numerical and categorical nature, can be challenging to solve, especially in the presence of complex constraints. Moreover, when the objective function is the result of a complicated simulation or experiment, it may be expensive to evaluate. This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems up to medium-large size (around 100 variables after encoding and 20 constraints) based on constructing a piecewise affine surrogate of the objective function over feasible samples. We introduce two types of exploration functions to efficiently search the feasible domain via mixed-integer linear programming solvers. We also provide a preference-based version of the algorithm, which can be used when only pairwise comparisons between samples can be acquired while the underlying objective function to minimize remains unquantified. The two algorithms are tested on mixed-variable benchmark problems with and without constraints. The results show that, within a small number of acquisitions, the proposed algorithms can often achieve results better than or comparable to those of other existing methods. Code is available at https://github.com/mjzhu-p/PWA
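    The surrogate idea can be illustrated in one dimension: partition the samples and fit an affine model per piece by least squares, yielding a cheap stand-in for the expensive objective. This is a hand-rolled sketch with a fixed split point, not the paper's algorithm.

```python
def affine_fit(pts):
    # Closed-form ordinary least squares for y ~ a*x + b.
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

f = lambda x: abs(x - 1.0)                     # "expensive" objective (toy)
samples = [(x / 4.0, f(x / 4.0)) for x in range(-8, 17)]
left = [(x, y) for x, y in samples if x <= 1.0]
right = [(x, y) for x, y in samples if x > 1.0]
models = (affine_fit(left), affine_fit(right)) # one affine model per piece

def surrogate(x):
    a, b = models[0] if x <= 1.0 else models[1]
    return a * x + b

print(surrogate(0.0))  # matches f(0.0) = 1.0 on this piecewise-linear toy
```

    Because the toy objective is itself piecewise affine with a kink at the split point, the surrogate is exact here; in general the fit is approximate and the surrogate is what gets minimized by the MILP-based acquisition step.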

    Constraints in Genetic Programming

    Genetic programming refers to a class of genetic algorithms utilizing a generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements: one reduces only the number of program instances; the other reduces both the space of program structures and the space of their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are no less efficient than the standard genetic programming operators if one preprocesses the constraints; the necessary mechanisms are identified.
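    The notion of operators that guarantee offspring validity can be sketched with a typed tree representation: generation respects function arities and types, and crossover only grafts subtrees that fit where they land. The tiny function/terminal sets and the single type are our own illustrative choices, not the paper's specification language.

```python
import random

# function name -> (return type, argument types); terminal -> type
FUNCS = {"add": ("num", ("num", "num")), "neg": ("num", ("num",))}
TERMS = {"x": "num", "1": "num"}

def grow(typ, depth, rng):
    """Generate a random tree of the requested type, respecting arities."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice([t for t, ty in TERMS.items() if ty == typ])
    name = rng.choice([f for f, (ret, _) in FUNCS.items() if ret == typ])
    return [name] + [grow(a, depth - 1, rng) for a in FUNCS[name][1]]

def subtrees(tree, path=()):
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], 1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    if not path:
        return new
    out = list(tree)
    out[path[0]] = replace(tree[path[0]], path[1:], new)
    return out

def crossover(a, b, rng):
    # With a single type every subtree matches; with more types one would
    # filter donor subtrees to the type required at the crossover point.
    pa, _ = rng.choice(list(subtrees(a)))
    _, sb = rng.choice(list(subtrees(b)))
    return replace(a, pa, sb)

rng = random.Random(0)
p1, p2 = grow("num", 3, rng), grow("num", 3, rng)
child = crossover(p1, p2, rng)     # a well-formed tree by construction
```

    Because generation and grafting can only produce type- and arity-correct trees, no post-hoc repair or rejection step is needed, which is the efficiency point the abstract makes about preprocessed constraints.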