On affine scaling inexact dogleg methods for bound-constrained nonlinear systems
Within the framework of affine scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust region and the bound constraints while seeking an approximate minimizer of the model. Focusing on bound-constrained systems of nonlinear equations, an inexact affine scaling method for large-scale problems, employing the inexact dogleg procedure, is described. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and locally fast convergence is shown under standard assumptions. The convergence analysis is performed without specifying the scaling matrix used to handle the bounds, and a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are also given.
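For background, the inexact dogleg procedure the abstract refers to generalizes the classical (exact) dogleg step for the trust-region subproblem. Below is a minimal sketch of that classical step only, assuming NumPy; it ignores the affine scaling matrix and bound constraints that the paper's method actually handles:

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Classical dogleg step for min m(p) = g.p + 0.5 p'Bp, ||p|| <= delta.
    Illustrative sketch only; not the paper's bound-constrained variant."""
    p_newton = np.linalg.solve(B, -g)            # full (quasi-)Newton step
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                          # Newton point inside region
    p_cauchy = -(g @ g) / (g @ B @ g) * g        # minimizer along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)    # truncated steepest descent
    # otherwise intersect the Cauchy->Newton segment with the boundary
    d = p_newton - p_cauchy
    a, b = d @ d, 2 * (p_cauchy @ d)
    c = p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d
```

An "inexact" variant replaces the exact Newton point with an approximate (e.g. iteratively computed) solution of the linear system.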
Immunizing Conic Quadratic Optimization Problems Against Implementation Errors
We show that the robust counterpart of a convex quadratic constraint with ellipsoidal implementation error is equivalent to a system of conic quadratic constraints. To prove this result we first derive a sharper result for the S-lemma in the case where the two matrices involved can be simultaneously diagonalized. This extension of the S-lemma may also be useful for other purposes. We extend the result to the case in which the uncertainty region is the intersection of two convex quadratic inequalities. The robust counterpart for this case is also equivalent to a system of conic quadratic constraints. Results for convex conic quadratic constraints with implementation error are also given. We conclude by showing how the theory developed can be applied in robust linear optimization with jointly uncertain parameters and implementation errors, in sequential robust quadratic programming, in Taguchi's robust approach, and in the adjustable robust counterpart.
Keywords: conic quadratic program; hidden convexity; implementation error; robust optimization; simultaneous diagonalizability; S-lemma
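The reduction the abstract describes rests on the S-lemma, which turns a "for all perturbations in an ellipsoid" requirement into a single semidefinite condition. For orientation, here is the standard (non-strict) form; the notation is our own, and the paper's contribution is a sharper version when the two matrices can be simultaneously diagonalized:

```latex
Let $f(\delta)=\delta^\top F\delta + 2f^\top\delta + f_0$ and
$g(\delta)=\delta^\top G\delta + 2g^\top\delta + g_0$, and suppose
$g(\bar\delta)<0$ for some $\bar\delta$ (Slater condition). Then
\[
  \bigl(\, g(\delta)\le 0 \;\Rightarrow\; f(\delta)\le 0 \ \ \forall\delta \,\bigr)
  \iff
  \exists\,\lambda\ge 0:\;
  \begin{pmatrix} F & f\\ f^\top & f_0 \end{pmatrix}
  \preceq \lambda
  \begin{pmatrix} G & g\\ g^\top & g_0 \end{pmatrix}.
\]
```

Taking $g(\delta)=\delta^\top\delta-\rho^2$ (the ellipsoidal implementation-error set) and $f$ the perturbed quadratic constraint yields the robust counterpart, which can then be rewritten in conic quadratic form.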
Relative Robust Portfolio Optimization
Considering mean-variance portfolio problems with uncertain model parameters, we contrast the classical absolute robust optimization approach with the relative robust approach based on a maximum regret function. Although the latter problems are NP-hard in general, we show that tractable inner and outer approximations exist in several cases that are of central interest in asset management.
A recursively feasible and convergent Sequential Convex Programming procedure to solve non-convex problems with linear equality constraints
A computationally efficient method to solve non-convex programming problems with linear equality constraints is presented. The proposed method is based on a recursively feasible and descending sequential convex programming procedure proven to converge to a locally optimal solution. Assuming that the first convex problem in the sequence is feasible, these properties are obtained by convexifying the non-convex cost and inequality constraints with inner-convex approximations. Additionally, a computationally efficient method is introduced to obtain inner-convex approximations based on Taylor series expansions. These Taylor-based inner-convex approximations provide the overall algorithm with a quadratic rate of convergence. The proposed method is capable of solving problems of practical interest in real time. This is illustrated with a numerical simulation of an aerial vehicle trajectory optimization problem on commercial off-the-shelf embedded computers.
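The inner-convex-approximation idea behind such sequential convex programming schemes can be illustrated on a toy one-dimensional problem (entirely hypothetical, not the paper's algorithm): split a non-convex objective into a convex part plus a concave part, replace the concave part by its tangent at the current iterate (a convex upper bound, hence "inner" and feasibility-preserving), and minimize the resulting convex model:

```python
import math

def f(x):
    return x**4 - 3 * x**2          # non-convex: convex x^4 plus concave -3x^2

def scp_minimize(x0, iters=50):
    """Majorization-minimization sketch: the concave term -3x^2 is replaced
    by its tangent at x_k, giving the convex model x^4 - 6*x_k*x + 3*x_k^2,
    whose minimizer solves 4x^3 = 6*x_k.  Toy illustration only."""
    x = x0
    for _ in range(iters):
        x_new = math.copysign(abs(1.5 * x) ** (1 / 3), x)  # argmin of model
        assert f(x_new) <= f(x) + 1e-12                    # guaranteed descent
        x = x_new
    return x

x_star = scp_minimize(1.0)          # converges to sqrt(1.5), a local minimizer
```

Because the convex model upper-bounds the true objective and matches it at the current iterate, each subproblem solution is feasible and non-increasing in cost, mirroring the recursive feasibility and descent properties claimed in the abstract.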
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program in survey style as well as with all details, and information on the social program, the venue, special meetings, and more
Successive Convexification of Non-Convex Optimal Control Problems and Its Convergence Properties
This paper presents an algorithm to solve non-convex optimal control problems, where non-convexity can arise from nonlinear dynamics and non-convex state and control constraints. Assuming that the state and control constraints are already convex or have been convexified, the proposed algorithm convexifies the nonlinear dynamics, via a linearization, in a successive manner. Thus, at each succession, a convex optimal control subproblem is solved. Since the dynamics are linearized and the other constraints are convex, after a discretization the subproblem can be expressed as a finite-dimensional convex programming subproblem. Since convex optimization problems can be solved very efficiently, especially with custom solvers, this subproblem can be solved in time-critical applications, such as real-time path planning for autonomous vehicles. Several safeguarding techniques are incorporated into the algorithm, namely virtual control and trust regions, which add another layer of algorithmic robustness. A convergence analysis is presented in a continuous-time setting; by doing so, the convergence results are independent of any numerical scheme used for discretization. Numerical simulations are performed for an illustrative trajectory optimization example.
Comment: Updates: corrected wordings for LICQ. This is the full version. A brief version of this paper is published in the 2016 IEEE 55th Conference on Decision and Control (CDC). http://ieeexplore.ieee.org/document/7798816
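The core operation in each succession is linearizing the nonlinear dynamics around the previous trajectory; the resulting affine model is then imposed in the convex subproblem, together with a trust region on the deviation from the reference and a penalized virtual-control slack to avoid infeasibility. A minimal sketch of that linearization step, using a hypothetical unicycle model and finite-difference Jacobians (NumPy assumed; the paper's discretization and solver are not reproduced here):

```python
import numpy as np

def dyn(x, u, dt=0.1):
    # hypothetical discrete unicycle dynamics, used only for illustration
    px, py, th = x
    return np.array([px + dt * u[0] * np.cos(th),
                     py + dt * u[0] * np.sin(th),
                     th + dt * u[1]])

def linearize(f, x_ref, u_ref, eps=1e-6):
    """Finite-difference Jacobians A, B and offset c so that, near the
    reference trajectory point, f(x, u) ~= A x + B u + c.  Each convex
    subproblem imposes this affine model in place of the true dynamics."""
    n, m = len(x_ref), len(u_ref)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    f0 = f(x_ref, u_ref)
    for i in range(n):
        d = np.zeros(n); d[i] = eps
        A[:, i] = (f(x_ref + d, u_ref) - f0) / eps
    for j in range(m):
        d = np.zeros(m); d[j] = eps
        B[:, j] = (f(x_ref, u_ref + d) - f0) / eps
    c = f0 - A @ x_ref - B @ u_ref
    return A, B, c
```

The linearization is only accurate near the reference, which is exactly why the trust region is needed; the virtual control absorbs any dynamics defect the linearized model cannot represent.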
- …