A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications
Bilevel optimization is defined as a mathematical program, where an
optimization problem contains another optimization problem as a constraint.
These problems have received significant attention from the mathematical
programming community. Only limited work exists on bilevel problems using
evolutionary computation techniques; however, interest has recently been growing
due to the proliferation of practical applications and the potential of
evolutionary algorithms for tackling these problems. This paper provides a
comprehensive review of bilevel optimization, from basic principles to solution
strategies, both classical and evolutionary. A number of
potential application problems are also discussed. To offer readers insight
into the prominent developments in the field of bilevel optimization, we
have performed an automated text-analysis of an extended list of papers
published on bilevel optimization to date. This paper should motivate
evolutionary computation researchers to pay more attention to this practical
yet challenging area.
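For illustration (the notation here is standard, not taken from the paper), a generic bilevel program with leader variables x and follower variables y can be written as

    \begin{aligned}
    \min_{x \in X,\; y} \quad & F(x, y) \\
    \text{s.t.} \quad & G(x, y) \le 0, \\
    & y \in \arg\min_{y' \in Y} \{\, f(x, y') \;:\; g(x, y') \le 0 \,\},
    \end{aligned}

so the upper-level problem is constrained by the optimal solution set of the lower-level problem, which is exactly the nested structure described above.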
Nonconvex and Nonsmooth Sparse Optimization via Adaptively Iterative Reweighted Methods
We present a general formulation of nonconvex and nonsmooth sparse
optimization problems with a convex set constraint, which takes into account
most existing types of nonconvex sparsity-inducing terms and thus applies to a
wide range of problems. We further design a general
algorithmic framework of adaptively iterative reweighted algorithms for solving
the nonconvex and nonsmooth sparse optimization problems. This is achieved by
solving a sequence of weighted convex penalty subproblems with adaptively
updated weights. The first-order optimality condition is then derived and
global convergence results are provided under mild assumptions, which makes our
theoretical results a practical tool for analyzing a broad family of
iteratively reweighted algorithms. In particular, for the iteratively reweighted
$\ell_1$-algorithm, global convergence analysis is provided for cases with a
diminishing relaxation parameter. For the iteratively reweighted
$\ell_2$-algorithm, an adaptively decreasing relaxation parameter is applicable
and the existence of cluster points of the iterates is established. The
effectiveness and efficiency of our proposed formulation and the algorithms are
demonstrated through numerical experiments on various sparse optimization problems.
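As a concrete and deliberately generic illustration of the reweighting idea, the sketch below solves a sequence of weighted $\ell_1$ subproblems by proximal gradient steps, with weights recomputed from the current iterate and a diminishing relaxation parameter; the penalty, solver, and parameter schedule are our own choices and not the paper's exact algorithm.

    import numpy as np

    def reweighted_l1(A, b, lam, eps0=1.0, n_outer=20, n_inner=200):
        """Iteratively reweighted l1 sketch for a log-penalized least squares model:
        min 0.5*||Ax - b||^2 + lam * sum_i log(|x_i| + eps)."""
        m, n = A.shape
        x = np.zeros(n)
        eps = eps0
        t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm of A
        for _ in range(n_outer):
            w = lam / (np.abs(x) + eps)            # weights = gradient of the concave penalty at x
            for _ in range(n_inner):               # weighted l1 subproblem via ISTA
                z = x - t * (A.T @ (A @ x - b))
                x = np.sign(z) * np.maximum(np.abs(z) - t * w, 0.0)
            eps *= 0.5                             # diminishing relaxation parameter
        return x

Each outer iteration is a convex weighted $\ell_1$ problem, matching the sequence of weighted convex penalty subproblems described above.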
Guaranteed Accuracy for Conic Programming Problems in Vector Lattices
This paper presents rigorous forward error bounds for linear conic
optimization problems. The error bounds are formulated in a quite general
framework; the underlying vector spaces are not required to be
finite-dimensional, and the convex cones defining the partial ordering are not
required to be polyhedral. In the cases of linear programming, second-order cone
programming, and semidefinite programming, specialized formulas are deduced
that yield guaranteed accuracy. All computed bounds are completely rigorous
because all rounding errors due to floating point arithmetic are taken into
account. Numerical results, applications and software for linear and
semidefinite programming problems are described.
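As background for the kind of certificate such bounds rely on (our notation, not the paper's general vector-lattice setting), consider the primal-dual pair of linear conic programs

    \min_{x} \{ \langle c, x \rangle : Ax = b,\ x \in K \}
    \qquad
    \max_{y} \{ \langle b, y \rangle : c - A^{*}y \in K^{*} \}.

Weak duality gives \langle b, y \rangle \le \langle c, x \rangle for every primal feasible x and dual feasible y, so any dual feasible point certifies a lower bound; rigorous versions of such bounds additionally enclose all floating-point rounding errors incurred while verifying feasibility.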
Some remarks on weak generalizations of minima and quasi efficiency
In this note, we remark, with sufficient mathematical rigor, that many weak
generalizations of the usual minimum available in the literature are not true
generalizations. Motivated by the Ekeland Variational Principle, we provide,
for the first time, criteria for weaker generalizations of the usual minimum.
Further, we show that quasi efficiency, recently used in Bhatia et al.
(Optim. Lett. 7, 127-135 (2013)) and introduced in Gupta et al. (Bull. Aust.
Math. Soc. 74, 207-218 (2006)), is not a true generalization of the usual
efficiency. Since the former paper relies heavily on the results of the latter,
we also discuss the latter paper. We show that, in the latter paper, the
necessary optimality condition is a consequence of local Lipschitzness and the
sufficiency result is trivial. Consequently, the duality results of the same
paper are also inconsistent. Comment: 9 pages, forum paper
Constrained optimization through fixed point techniques
We introduce an alternative approach for constrained mathematical programming
problems. It rests on two main aspects: an efficient way to compute optimal
solutions for unconstrained problems, and multipliers regarded as variables for
a certain map. Contrary to typical dual strategies, optimal vectors of
multipliers are sought as fixed points of that map. Two distinctive features
are worth highlighting: its simplicity and flexibility for the implementation,
and its convergence properties. Comment: 14 pages, 2 figures
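A rough sketch of the multipliers-as-fixed-points viewpoint, written as a generic projected dual (Uzawa-style) map under our own assumptions rather than the specific map constructed in the paper:

    import numpy as np
    from scipy.optimize import minimize

    def multiplier_fixed_point(f, g, x0, lam0, step=0.5, tol=1e-8, max_iter=200):
        """Seek lam with lam = T(lam) := max(0, lam + step * g(x(lam))),
        where x(lam) minimizes the Lagrangian f(x) + lam . g(x)
        for inequality constraints g(x) <= 0."""
        x, lam = np.asarray(x0, float), np.asarray(lam0, float)
        for _ in range(max_iter):
            # unconstrained subproblem: minimize the Lagrangian in x for fixed lam
            x = minimize(lambda z: f(z) + lam @ g(z), x).x
            lam_new = np.maximum(0.0, lam + step * g(x))   # the map whose fixed point is sought
            if np.linalg.norm(lam_new - lam) < tol:
                break
            lam = lam_new
        return x, lam

    # Hypothetical usage: minimize (x - 2)^2 subject to x <= 1.
    # x_opt, lam_opt = multiplier_fixed_point(lambda x: (x[0] - 2) ** 2,
    #                                         lambda x: np.array([x[0] - 1.0]),
    #                                         x0=[0.0], lam0=[0.0])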
Continuous-Time Inverse Quadratic Optimal Control Problem
In this paper, the problem of finite horizon inverse optimal control (IOC) is
investigated, where the quadratic cost function of a dynamic process is to be
recovered from observed optimal control sequences.
We present the first complete necessary and sufficient condition for the
existence of corresponding LQ cost functions. In feasible cases, the
analytic expression of the whole solution space is derived and the equivalence
of weighting matrices in LQ problems is discussed. For infeasible problems, an
infinite-dimensional convex problem is formulated to obtain a best-fit
approximate solution with minimal control residual, and its optimality
condition is solved within a static quadratic programming framework to
facilitate the computation. Finally, numerical simulations are used to
demonstrate the effectiveness and feasibility of the proposed methods. Comment: 16 pages, 2 figures
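For reference, a continuous-time LQ cost of the kind being recovered can be written (in standard notation, which need not match the paper's) as

    J(u) = \int_{0}^{T} \big( x(t)^{\top} Q\, x(t) + u(t)^{\top} R\, u(t) \big)\, dt,
    \qquad \dot{x}(t) = A x(t) + B u(t),

and the inverse problem is to infer the weighting matrices (Q, R), up to the equivalence discussed above, from observed optimal trajectories.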
A Unified Primal Dual Active Set Algorithm for Nonconvex Sparse Recovery
In this paper, we consider the problem of recovering a sparse signal based on
penalized least squares formulations. We develop a novel algorithm of
primal-dual active set type for a class of nonconvex sparsity-promoting
penalties, including $\ell_0$, bridge, smoothly clipped absolute deviation,
capped $\ell_1$, and the minimax concave penalty. First we establish the existence
of a global minimizer for the related optimization problems. Then we derive a
novel necessary optimality condition for the global minimizer using the
associated thresholding operator. The solutions to the optimality system are
coordinate-wise minimizers, and under minor conditions, they are also local
minimizers. Upon introducing the dual variable, the active set can be
determined using the primal and dual variables together. Further, this relation
lends itself to an iterative algorithm of active set type which at each step
involves first updating the primal variable only on the active set and then
updating the dual variable explicitly. When combined with a continuation
strategy on the regularization parameter, the primal dual active set method is
shown to converge globally to the underlying regression target under certain
regularity conditions. Extensive numerical experiments with both simulated and
real data demonstrate its superior performance in efficiency and accuracy
compared with existing sparse recovery methods.
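A simplified sketch of a primal-dual active set iteration for the $\ell_0$ penalty (assuming unit-norm columns of A; the thresholding rule differs for the other penalties, and this is our illustration rather than a verbatim transcription of the paper's algorithm):

    import numpy as np

    def pdas_l0(A, b, lam, max_iter=50):
        """Active set iteration for min 0.5*||Ax - b||^2 + lam*||x||_0,
        assuming the columns of A have unit Euclidean norm."""
        m, n = A.shape
        x = np.zeros(n)
        d = A.T @ b                        # dual variable d = A^T (b - A x), with x = 0 initially
        active_prev = None
        for _ in range(max_iter):
            # active set determined from the primal and dual variables jointly
            active = np.flatnonzero(np.abs(x + d) > np.sqrt(2.0 * lam))
            if active_prev is not None and np.array_equal(active, active_prev):
                break                      # active set has stabilized
            x = np.zeros(n)
            if active.size:
                # update the primal variable only on the active set (least squares fit)
                x[active] = np.linalg.lstsq(A[:, active], b, rcond=None)[0]
            d = A.T @ (b - A @ x)          # explicit dual update
            active_prev = active
        return x

In practice this would be combined with a continuation strategy that decreases the regularization parameter lam along a path, as described above.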
Accelerated Point-wise Maximum Approach to Approximate Dynamic Programming
We describe an approximate dynamic programming approach to compute lower
bounds on the optimal value function for a discrete time, continuous space,
infinite horizon setting. The approach iteratively constructs a family of lower
bounding approximate value functions by using the so-called Bellman inequality.
The novelty of our approach is that, at each iteration, we aim to compute an
approximate value function that maximizes the point-wise maximum taken with the
family of approximate value functions computed thus far. This leads to a
non-convex objective, and we propose a gradient ascent algorithm to find
stationary points by solving a sequence of convex optimization problems. We
provide convergence guarantees for our algorithm and an interpretation for how
the gradient computation relates to the state relevance weighting parameter
appearing in related approximate dynamic programming approaches. We demonstrate
through numerical examples that, when compared to existing approaches, the
algorithm we propose computes tighter sub-optimality bounds with less
computation time. Comment: 14 pages, 3 figures
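As background (standard notation, not the paper's), the Bellman inequality underpinning such lower bounds states that for a discounted problem with stage cost \ell, dynamics f, and discount factor \gamma \in (0,1),

    \hat{V}(x) \;\le\; \min_{u}\, \big\{ \ell(x,u) + \gamma\, \hat{V}(f(x,u)) \big\}
    \ \text{ for all } x
    \qquad \Longrightarrow \qquad
    \hat{V}(x) \;\le\; V^{*}(x) \ \text{ for all } x,

so a family \hat{V}_1, \dots, \hat{V}_K of such functions yields the pointwise-maximum lower bound \max_k \hat{V}_k(x) \le V^{*}(x) that the iterations above aim to tighten.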
A First-Order Algorithm for the A-Optimal Experimental Design Problem: A Mathematical Programming Approach
We develop and analyse a first-order algorithm for the A-optimal experimental
design problem. The problem is first presented as a special case of a
parametric family of optimal design problems for which duality results and
optimality conditions are given. Then, two first-order (Frank-Wolfe type)
algorithms are presented, accompanied by a detailed time-complexity analysis of
the algorithms and computational results on problems of various sizes.
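As an illustration of a basic Frank-Wolfe iteration for A-optimal design (our own minimal variant with the standard 2/(k+2) step size; the paper's algorithms and their complexity analysis are more refined):

    import numpy as np

    def a_optimal_fw(X, n_iter=500):
        """Frank-Wolfe sketch for min_w trace(M(w)^{-1}) with M(w) = sum_i w_i x_i x_i^T
        and w in the probability simplex; rows of X are candidate design points.
        Assumes X has full column rank so M(w) is invertible at the uniform start."""
        m, n = X.shape
        w = np.full(m, 1.0 / m)                     # start from the uniform design
        for k in range(n_iter):
            M = X.T @ (w[:, None] * X)              # information matrix M(w)
            Minv = np.linalg.inv(M)
            # gradient: d/dw_i trace(M^{-1}) = -x_i^T M^{-2} x_i
            scores = np.einsum('ij,jk,ik->i', X @ Minv, Minv, X)
            i = int(np.argmax(scores))              # linear minimization oracle picks vertex e_i
            gamma = 2.0 / (k + 2.0)                 # standard diminishing step size
            w = (1.0 - gamma) * w
            w[i] += gamma
        return w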
Computing Lower Rank Approximations of Matrix Polynomials
Given an input matrix polynomial whose coefficients are floating point
numbers, we consider the problem of finding the nearest matrix polynomial which
has rank at most a specified value. This generalizes the problem of finding a
nearest matrix polynomial that is algebraically singular with a prescribed
lower bound on the dimension of its kernel, considered in a previous paper by the authors. In this
paper we prove that such lower rank matrices at minimal distance always exist,
satisfy regularity conditions, and are all isolated and surrounded by a basin
of attraction of non-minimal solutions. In addition, we present an iterative
algorithm which, given input sufficiently close to a matrix polynomial of at
most the specified rank, produces that matrix polynomial. The algorithm is efficient and is proven to converge
quadratically given a sufficiently good starting point. An implementation
demonstrates the effectiveness and numerical robustness of our algorithm in
practice. Comment: 31 pages
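In our notation (an illustrative formulation, not necessarily the paper's exact one), the underlying problem is

    \min_{B}\ \| A - B \| \quad \text{subject to} \quad \operatorname{rank} B \le r,

where A, B \in \mathbb{R}[t]^{m \times n} are matrix polynomials of bounded degree, the rank is taken over the field of rational functions \mathbb{R}(t), and the norm is, for example, the Frobenius norm applied to the coefficients.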