International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.
Multiplier methods for engineering optimization
Multiplier methods used to solve constrained engineering optimization problems are described. These methods solve the problem by minimizing a sequence of unconstrained problems defined using the cost and constraint functions. The methods, proposed in 1969, have been found to be quite robust, although not as efficient as other algorithms. They can be more effective for some engineering applications, such as optimum design and control of large-scale dynamic systems. Since 1969, several modifications and extensions of the methods have been developed. It is therefore important to review the theory and computational procedures of these methods so that more efficient and effective ones can be developed for engineering applications. Recent methods that are similar to the multiplier methods are also discussed: continuous multiplier update, exact penalty, and exponential penalty methods.
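As a rough illustration of the multiplier idea described above, the following Python sketch applies an augmented Lagrangian iteration to a toy equality-constrained problem; the objective, constraint, penalty parameter and tolerances are illustrative assumptions and are not taken from the paper.

import numpy as np
from scipy.optimize import minimize

def f(x):                                   # cost function (assumed toy example)
    return x[0]**2 + 2.0 * x[1]**2

def h(x):                                   # equality constraint h(x) = 0
    return x[0] + x[1] - 1.0

def aug_lagrangian(x, lam, rho):            # augmented Lagrangian L(x, lam; rho)
    return f(x) + lam * h(x) + 0.5 * rho * h(x)**2

x, lam, rho = np.zeros(2), 0.0, 10.0
for k in range(20):
    # inner step: unconstrained minimization of the augmented Lagrangian in x
    x = minimize(aug_lagrangian, x, args=(lam, rho), method="BFGS").x
    lam += rho * h(x)                       # first-order multiplier update
    if abs(h(x)) < 1e-8:
        break

print("solution:", x, "multiplier estimate:", lam)

Each outer iteration is cheap compared with solving the constrained problem directly, which is the practical appeal the abstract refers to.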
Survey of sequential convex programming and generalized Gauss-Newton methods
We provide an overview of a class of iterative convex approximation methods for nonlinear optimization problems with convex-over-nonlinear substructure. These problems are characterized by outer convexities on the one hand, and nonlinear, generally nonconvex, but differentiable functions on the other hand. All methods from this class use only first-order derivatives of the nonlinear functions and sequentially solve convex optimization problems. All of them are different generalizations of the classical Gauss-Newton (GN) method. We focus on the smooth constrained case and on three methods to address it: Sequential Convex Programming (SCP), Sequential Convex Quadratic Programming (SCQP), and Sequential Quadratically Constrained Quadratic Programming (SQCQP). While the first two methods were previously known, the last is newly proposed and investigated in this paper. We show under mild assumptions that SCP, SCQP and SQCQP have exactly the same local linear convergence – or divergence – rate. We then discuss the special case in which the solution is fully determined by the active constraints, and show that for this case the KKT conditions are sufficient for local optimality and that SCP, SCQP and SQCQP even converge quadratically. In the context of parameter estimation with symmetric convex loss functions, the possible divergence of the methods can in fact be an advantage that helps them to avoid some undesirable local minima: generalizing existing results, we show that the presented methods converge to a local minimum if and only if this local minimum is stable against a mirroring operation applied to the measurement data of the estimation problem. All results are illustrated by numerical experiments on a tutorial example.
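For orientation, the classical Gauss-Newton iteration that all of these methods generalize can be sketched in a few lines of Python; the residual model, data and stopping rule below are illustrative assumptions, and the SCP/SCQP/SQCQP variants themselves are not reproduced here.

import numpy as np

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)                  # assumed synthetic measurements

def residual(x):                            # R(x): model output minus data
    return x[0] * np.exp(x[1] * t) - y

def jacobian(x):                            # first-order derivatives only
    return np.column_stack((np.exp(x[1] * t),
                            x[0] * t * np.exp(x[1] * t)))

x = np.array([1.0, -1.0])                   # initial guess
for k in range(30):
    J, R = jacobian(x), residual(x)
    # each iteration solves a convex subproblem (here: linear least squares)
    step = np.linalg.lstsq(J, -R, rcond=None)[0]
    x = x + step
    if np.linalg.norm(step) < 1e-10:
        break

print("estimated parameters:", x)           # close to (2.0, -1.5)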
Assessing the reliability of general-purpose Inexact Restoration methods
Inexact Restoration methods have proved effective for constrained optimization problems in which some structure of the feasible set induces a natural way of recovering feasibility from arbitrary infeasible points. Sometimes natural ways of dealing with minimization over tangent approximations of the feasible set are also employed. A recent paper by Banihashemi and Kaya (2013) suggests that the Inexact Restoration approach can be competitive with well-established nonlinear programming solvers when applied to certain control problems without any problem-oriented procedure for restoring feasibility. This result motivated us to revisit the idea of designing general-purpose Inexact Restoration methods, especially for large-scale problems. In this paper we introduce affordable algorithms of Inexact Restoration type for solving arbitrary nonlinear programming problems and we perform the first experiments that aim to assess their reliability. Initially, we define a purely local Inexact Restoration algorithm with quadratic convergence. Then, we modify the local algorithm in order to increase the chances of success of both the restoration phase and the optimization phase. This hybrid algorithm is intermediate between the local algorithm and a globally convergent one for which, under suitable assumptions, convergence to KKT points can be proved.
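The two-phase structure (restore feasibility, then optimize over a tangent approximation) can be illustrated with a deliberately simplified Python sketch on an assumed toy problem; it uses fixed step sizes and omits the merit function, globalization safeguards and convergence theory that the actual algorithms rely on.

import numpy as np

def f(x):      return (x[0] - 2.0)**2 + (x[1] - 1.0)**2
def grad_f(x): return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
def h(x):      return x[0]**2 + x[1]**2 - 1.0           # feasible set: unit circle
def grad_h(x): return np.array([2.0 * x[0], 2.0 * x[1]])

x = np.array([1.5, 1.5])
for k in range(50):
    # restoration phase: one Gauss-Newton step toward h(x) = 0
    g = grad_h(x)
    y = x - h(x) * g / (g @ g)
    # optimization phase: descent step for f restricted to the tangent space at y
    g = grad_h(y)
    d = -grad_f(y)
    d = d - (d @ g) / (g @ g) * g                       # project onto tangent space
    x = y + 0.1 * d                                     # fixed step, for illustration only

print("approximate solution:", x)                       # near the projection of (2, 1) onto the circle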
On convergence of the maximum block improvement method
The MBI (maximum block improvement) method is a greedy approach to solving optimization problems where the decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates the block of variables corresponding to the maximally improving block at each iteration, which is arguably a most natural and simple way to tackle block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Lojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The conditions are interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
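As the abstract notes, the power method arises as a special case of this framework for the rank-one matrix problem max x^T A x over the unit sphere. A minimal Python sketch of that special case, on an assumed random symmetric matrix, is given below; the MBI method for general block-structured or higher-order tensor problems is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T + 10.0 * np.eye(5)          # symmetric matrix with a dominant positive eigenvalue

x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for k in range(200):
    y = A @ x                           # block step: maximize the linear form (A x)^T w over the unit sphere
    x_new = y / np.linalg.norm(y)
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

print("estimated maximum eigenvalue:", x @ A @ x)
print("reference (numpy):", np.linalg.eigvalsh(A).max())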
Optimization methods for deadbeat control design: a state space approach
This thesis addresses the synthesis problem of the state deadbeat regulator using state space techniques. Deadbeat control is a linear control strategy for discrete-time systems and consists of driving the system from an arbitrary initial state to a desired final state in a finite number of time steps.
After describing the framework used throughout the thesis, which takes the form of a lower linear-fractional transformation (LFT), the conditions for internal stability are discussed, based on the notion of coprime factorization over the set of proper and stable transfer matrices, namely RH∞. This leads to the derivation of the class of all stabilizing linear controllers, which are parameterized affinely in terms of a stable but otherwise free parameter Q, usually known as the Q-parameterization. In this work, the classical Q-parameterization is generalized to deliver a parameterization for the family of deadbeat regulators.
Time response characteristics of the deadbeat system are investigated. In particular, the deadbeat regulator design problem in which the system must satisfy time domain specifications and minimize a quadratic (LQG-type) performance criterion is examined. It is shown that the attained parameterization for deadbeat controllers leads to a formulation of the synthesis problem in a quadratic programming framework with Q regarded as the design variable. The equivalent formulation of this objective as a quadratic integral in the frequency domain provides the means for shaping the frequency response characteristics of the system. Using the LMI characterization of the standard H∞ problem, a new scheme for shaping the system frequency response characteristics by minimizing the infinity norm of an appropriate closed-loop transfer function is introduced. As shown, the derived parameterization of deadbeat compensators considerably simplifies the formulation and solution of this problem.
The last part of the work described in this thesis is devoted to addressing the synthesis problem of deadbeat regulators in a robust way, when the plant is subject to structured norm-bounded parametric uncertainties. A novel approach, expressed as an LMI feasibility condition, is proposed and analysed.
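For readers unfamiliar with deadbeat regulation, the basic idea can be illustrated with a textbook state-feedback construction (Ackermann's formula with all closed-loop poles placed at the origin). The Python sketch below uses an assumed second-order example; it is not the Q-parameterization/LMI machinery developed in the thesis.

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])             # assumed plant: sampled double integrator
B = np.array([[0.5],
              [1.0]])
n = A.shape[0]

# Ackermann's formula with desired characteristic polynomial p(s) = s^n,
# i.e. all closed-loop poles at the origin (deadbeat).
Ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
e_last = np.zeros(n); e_last[-1] = 1.0
K = e_last @ np.linalg.inv(Ctrb) @ np.linalg.matrix_power(A, n)

Acl = A - B @ K.reshape(1, -1)         # closed-loop matrix, nilpotent by construction
x = np.array([3.0, -2.0])              # arbitrary initial state
for k in range(n + 1):
    print(f"x({k}) = {x}")
    x = Acl @ x                        # reaches the origin in at most n steps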
Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities
Decision-focused learning (DFL) is an emerging paradigm in machine learning which trains a model to optimize decisions, integrating prediction and optimization in an end-to-end system. This paradigm holds the promise to revolutionize decision-making in many real-world applications which operate under uncertainty, where the estimation of unknown parameters within these decision models often becomes a substantial roadblock. This paper presents a comprehensive review of DFL. It provides an in-depth analysis of the various techniques devised to integrate machine learning and optimization models, introduces a taxonomy of DFL methods distinguished by their unique characteristics, and conducts an extensive empirical evaluation of these methods, proposing suitable benchmark datasets and tasks for DFL. Finally, the study provides valuable insights into current and potential future avenues in DFL research.
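As a deliberately small illustration of the decision-focused idea, the following Python sketch trains a linear cost predictor with the SPO+ surrogate loss of Elmachtoub and Grigas (one established DFL technique) on an assumed three-option selection problem; the data, model and step sizes are illustrative, and the benchmarks evaluated in the paper are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def decide(c):
    # decision problem: pick the one of three options with minimal cost
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

# assumed synthetic data: true costs are a linear function of the features
Theta_true = rng.standard_normal((3, 2))
X = rng.standard_normal((200, 2))
C = X @ Theta_true.T + 0.1 * rng.standard_normal((200, 3))

Theta = np.zeros((3, 2))               # linear predictor c_hat = Theta @ x
for epoch in range(100):
    step = 0.05 / (1.0 + epoch)        # diminishing step size for the subgradient method
    for x, c in zip(X, C):
        c_hat = Theta @ x
        # SPO+ subgradient with respect to the prediction: 2 * (w*(c) - w*(2*c_hat - c))
        g_c = 2.0 * (decide(c) - decide(2.0 * c_hat - c))
        Theta -= step * np.outer(g_c, x)

# evaluate decision regret: extra cost incurred by acting on the predictions
regret = np.mean([c @ decide(Theta @ x) - c @ decide(c) for x, c in zip(X, C)])
print("average decision regret:", regret)

The training signal here is driven by the quality of the induced decision rather than by prediction error, which is the distinction the abstract draws between DFL and conventional predict-then-optimize pipelines.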