On generalized semi-infinite optimization and bilevel optimization
The paper studies the connections and differences between bilevel problems (BL) and generalized semi-infinite problems (GSIP). Under natural assumptions, a (GSIP) can be seen as a special case of a (BL). We consider the so-called reduction approach for (BL) and (GSIP), which leads to optimality conditions and Newton-type methods for solving the problems. We show by a structural analysis that for (GSIP) problems the regularity assumptions of the reduction approach can be expected to hold generically at a solution, but not for general (BL) problems. The genericity behavior of (BL) and (GSIP) is studied in particular for linear problems.
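For orientation, here is a minimal sketch of the two problem classes in generic placeholder notation (not necessarily the paper's):

    % A generalized semi-infinite program: the index set Y(x) of the
    % infinitely many constraints depends on the decision variable x.
    \[
      \text{(GSIP)}\qquad
      \min_{x} \; f(x)
      \quad\text{s.t.}\quad
      g(x,y) \le 0 \;\; \text{for all } y \in Y(x).
    \]
    % A bilevel program: the lower-level solution y, which depends on the
    % upper-level variable x, enters the upper-level problem.
    \[
      \text{(BL)}\qquad
      \min_{x,\,y} \; F(x,y)
      \quad\text{s.t.}\quad
      G(x,y) \le 0, \qquad
      y \in \operatorname*{arg\,min}_{z}\,\{\, h(x,z) : H(x,z) \le 0 \,\}.
    \]

Checking feasibility of a point x in (GSIP) amounts to solving the parametric problem \max_{y \in Y(x)} g(x,y), which is the lower-level problem through which a (GSIP) can be embedded into the (BL) framework.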
Linearly Convergent First-Order Algorithms for Semi-definite Programming
In this paper, we consider two formulations for Linear Matrix Inequalities (LMIs) under a Slater-type constraint qualification, namely smooth and non-smooth SDP formulations. We also propose two first-order linearly convergent algorithms for solving these formulations. Moreover, we introduce a bundle-level method that converges linearly, uniformly for both smooth and non-smooth problems, and does not require any smoothness information. The convergence properties of these algorithms are also discussed. Finally, we consider a special case of LMIs, linear systems of inequalities, and show that a linearly convergent algorithm can be obtained under a weaker assumption.
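As a rough illustration (not necessarily the formulations used in the paper), an LMI feasibility problem and its standard non-smooth eigenvalue reformulation read:

    \[
      \text{find } x \in \mathbb{R}^n
      \quad\text{s.t.}\quad
      \mathcal{A}(x) \;=\; A_0 + \sum_{i=1}^{n} x_i A_i \;\succeq\; 0,
      \qquad\text{equivalently}\qquad
      \min_{x} \; \lambda_{\max}\!\bigl(-\mathcal{A}(x)\bigr) \;\le\; 0.
    \]
    % Under a Slater-type condition (some \bar{x} with \mathcal{A}(\bar{x})
    % strictly positive definite) the optimal value of the eigenvalue
    % problem is strictly negative, which first-order methods can exploit.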
Optimal designs for rational function regression
We consider optimal non-sequential designs for a large class of (linear and
nonlinear) regression models involving polynomials and rational functions with
heteroscedastic noise also given by a polynomial or rational weight function.
The proposed method treats D-, E-, A-, and -optimal designs in a
unified manner, and generates a polynomial whose zeros are the support points
of the optimal approximate design, generalizing a number of previously known
results of the same flavor. The method is based on a mathematical optimization
model that can incorporate various criteria of optimality and can be solved
efficiently by well-established numerical optimization methods. In contrast to previous optimization-based methods proposed for similar design problems, it also comes with a theoretical guarantee of algorithmic efficiency; in fact, the
running times of all numerical examples considered in the paper are negligible.
The stability of the method is demonstrated in an example involving high degree
polynomials. After discussing linear models, applications for finding locally optimal designs for nonlinear regression models involving rational functions are presented, followed by extensions to robust regression designs and trigonometric regression. As a corollary, an upper bound on the size of the support
set of the minimally-supported optimal designs is also found. The method is of
considerable practical importance, with the potential for instance to impact
design software development. Further study of the optimality conditions of the
main optimization model might also yield new theoretical insights.
Comment: 25 pages. Previous version updated with more details in the theory and an additional example.
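For reference, the classical criteria mentioned above are defined through the information matrix of an approximate design \xi with support points x_i, weights w_i, and regression vector f:

    \[
      M(\xi) \;=\; \sum_{i} w_i \, f(x_i) f(x_i)^{\mathsf T},
      \qquad w_i \ge 0,\;\; \sum_{i} w_i = 1.
    \]
    % D-optimality maximizes \log\det M(\xi), A-optimality minimizes
    % \operatorname{tr} M(\xi)^{-1}, and E-optimality maximizes the smallest
    % eigenvalue \lambda_{\min}(M(\xi)).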
Projection methods in conic optimization
There exist efficient algorithms to project a point onto the intersection of
a convex cone and an affine subspace. Those conic projections are in turn the
workhorse of a range of algorithms in conic optimization, having a variety of
applications in science, finance and engineering. This chapter reviews some of
these algorithms, emphasizing the so-called regularization algorithms for
linear conic optimization, and applications in polynomial optimization. This is
a presentation of the material of several recent research articles; we aim here
at clarifying the ideas, presenting them in a general framework, and pointing
out important techniques.
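As a concrete, if simple, instance of such a projection, the sketch below (a generic illustration, not one of the regularization algorithms reviewed in the chapter) projects a symmetric matrix onto the intersection of the positive semidefinite cone and the affine subspace of unit-diagonal matrices, i.e. computes the nearest correlation matrix, by alternating projections with a Dykstra-type correction:

    import numpy as np

    def project_psd(X):
        # Projection onto the cone of positive semidefinite matrices:
        # eigendecompose and clip negative eigenvalues at zero.
        w, V = np.linalg.eigh(X)
        return (V * np.maximum(w, 0.0)) @ V.T

    def project_unit_diagonal(X):
        # Projection onto the affine subspace {X symmetric : diag(X) = 1}.
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)
        return Y

    def nearest_correlation(A, iters=100):
        # Alternating projections with a Dykstra correction on the PSD step;
        # converges to the Frobenius-norm projection of A onto the
        # intersection of the two sets.
        Y = (A + A.T) / 2.0
        dS = np.zeros_like(Y)
        for _ in range(iters):
            R = Y - dS                    # undo the previous correction
            X = project_psd(R)            # project onto the PSD cone
            dS = X - R                    # store the new correction
            Y = project_unit_diagonal(X)  # project onto the affine subspace
        return Y

    A = np.array([[1.0, 0.9, 0.7],
                  [0.9, 1.0, 0.3],
                  [0.7, 0.3, 1.0]])
    print(nearest_correlation(A))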
Deciding Quantifier-Free Presburger Formulas Using Parameterized Solution Bounds
Given a formula in quantifier-free Presburger arithmetic, if it has a
satisfying solution, there is one whose size, measured in bits, is polynomially
bounded in the size of the formula. In this paper, we consider a special class
of quantifier-free Presburger formulas in which most linear constraints are
difference (separation) constraints, and the non-difference constraints are
sparse. This class has been observed to commonly occur in software
verification. We derive a new solution bound in terms of parameters
characterizing the sparseness of linear constraints and the number of
non-difference constraints, in addition to traditional measures of formula
size. In particular, we show that the number of bits needed per integer
variable is linear in the number of non-difference constraints and logarithmic
in the number and size of non-zero coefficients in them, but is otherwise
independent of the total number of linear constraints in the formula. The
derived bound can be used in a decision procedure based on instantiating
integer variables over a finite domain and translating the input
quantifier-free Presburger formula to an equi-satisfiable Boolean formula,
which is then checked using a Boolean satisfiability solver. In addition to our
main theoretical result, we discuss several optimizations for deriving tighter
bounds in practice. Empirical evidence indicates that our decision procedure
can greatly outperform other decision procedures.
Comment: 26 pages.
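The finite-domain instantiation can be illustrated very naively by explicit enumeration rather than a SAT encoding; the formula, variable names, and bit bound below are made up for the example:

    from itertools import product

    # Toy quantifier-free Presburger formula over integer variables x, y, z:
    # two difference constraints plus one non-difference constraint with
    # larger coefficients.
    def formula(x, y, z):
        return (x - y <= 3) and (y - z <= -1) and (2 * x + 5 * z >= 7)

    def satisfiable(bits):
        # Naive finite instantiation: enumerate every assignment with each
        # variable drawn from [-2**bits, 2**bits) and test the formula.
        domain = range(-2 ** bits, 2 ** bits)
        return any(formula(x, y, z) for x, y, z in product(domain, repeat=3))

    # If a solution-size bound of `bits` bits per variable is known to be
    # sufficient for this class of formulas, the answer here is sound and
    # complete; a real procedure would encode the bounded integer variables
    # into Boolean variables and call a SAT solver instead of enumerating.
    print(satisfiable(bits=3))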
Symmetry Breaking Constraints: Recent Results
Symmetry is an important issue in many combinatorial problems. One way of
dealing with symmetry is to add constraints that eliminate symmetric solutions.
We survey recent results in this area, focusing especially on two common and
useful cases: symmetry breaking constraints for row and column symmetry, and
symmetry breaking constraints for eliminating value symmetry.
Comment: To appear in Proceedings of the Twenty-Sixth Conference on Artificial Intelligence (AAAI-12).
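As a small illustration of one such constraint, the snippet below applies the well-known double-lex ordering (rows and columns in non-decreasing lexicographic order) as a filter on 0/1 matrices; it is an illustrative sketch, not code from the paper:

    from itertools import product

    def double_lex(matrix):
        # Row-and-column symmetry breaking: require the rows and the columns
        # to be in non-decreasing lexicographic order (Python compares
        # tuples lexicographically).
        rows = list(matrix)
        cols = list(zip(*matrix))
        return all(rows[i] <= rows[i + 1] for i in range(len(rows) - 1)) and \
               all(cols[j] <= cols[j + 1] for j in range(len(cols) - 1))

    # All 2x3 0/1 matrices with exactly three ones: the constraint removes
    # row/column-symmetric solutions, although double lex alone does not in
    # general leave exactly one representative per symmetry class.
    candidates = [(r1, r2)
                  for r1 in product((0, 1), repeat=3)
                  for r2 in product((0, 1), repeat=3)
                  if sum(r1) + sum(r2) == 3]
    kept = [m for m in candidates if double_lex(m)]
    print(len(candidates), len(kept))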
Robust optimization with incremental recourse
In this paper, we consider an adaptive approach to address optimization
problems with uncertain cost parameters. Here, the decision maker selects an
initial decision, observes the realization of the uncertain cost parameters,
and then is permitted to modify the initial decision. We treat the uncertainty
using the framework of robust optimization in which uncertain parameters lie
within a given set. The decision maker optimizes so as to develop the best cost
guarantee in terms of the worst-case analysis. The recourse decision is
"incremental"; that is, the decision maker is permitted to change the initial
solution by a small fixed amount. We refer to the resulting problem as the
robust incremental problem. We study robust incremental variants of several
optimization problems. We show that the robust incremental counterpart of a
linear program is itself a linear program if the uncertainty set is polyhedral.
Hence, it is solvable in polynomial time. We establish the NP-hardness of
robust incremental linear programming for the case of a discrete uncertainty
set. We show that the robust incremental shortest path problem is NP-complete
when costs are chosen from a polyhedral uncertainty set, even in the case that
only one new arc may be added to the initial path. We also address the
complexity of several special cases of the robust incremental shortest path
problem and the robust incremental minimum spanning tree problem.
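In schematic form (the notation here is chosen for illustration and need not match the paper's), the robust incremental problem has a min-max-min structure:

    \[
      \min_{x \in X} \;\; \max_{c \in U} \;\;
      \min_{\substack{y \in X \\ \|y - x\| \le \delta}} \; c^{\mathsf T} y,
    \]
    % x is the initial decision, U the uncertainty set of cost vectors,
    % y the recourse decision, and \delta the fixed budget by which the
    % initial solution may be modified after the costs are revealed.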