Differential conditions for constrained nonlinear programming via Pareto optimization
We deal with differential conditions for local optimality. The conditions that
we derive for inequality-constrained problems do not require constraint qualifications and are the
broadest conditions based only on first-order and second-order derivatives. A similar result is proved for
equality-constrained problems, although the necessary conditions require the regularity of the equality
constraints.
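For orientation, the classical first-order necessary conditions of Fritz John type, which likewise hold without any constraint qualification, can be stated as follows (standard notation, assumed here rather than taken from the paper):

```latex
% Fritz John conditions for  min f(x)  s.t.  g_i(x) <= 0, i = 1, ..., m:
% at a local minimizer \bar{x} there exist multipliers, not all zero, with
\exists\, \lambda_0, \lambda_1, \dots, \lambda_m \ge 0,\quad
(\lambda_0, \dots, \lambda_m) \neq 0:
\qquad \lambda_0 \nabla f(\bar{x}) + \sum_{i=1}^{m} \lambda_i \nabla g_i(\bar{x}) = 0,
\qquad \lambda_i\, g_i(\bar{x}) = 0 \ \ (i = 1, \dots, m).
```

When a constraint qualification forces \(\lambda_0 > 0\), these reduce to the familiar KKT conditions; the abstract's point is that its conditions dispense with such qualifications.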
Optimal control of the sweeping process over polyhedral controlled sets
The paper addresses a new class of optimal control problems governed by the
dissipative and discontinuous differential inclusion of the sweeping/Moreau
process while using controls to determine the best shape of moving convex
polyhedra in order to optimize the given Bolza-type functional, which depends
on control and state variables as well as their velocities. Besides the highly
non-Lipschitzian nature of the unbounded differential inclusion of the
controlled sweeping process, the optimal control problems under consideration
contain intrinsic state constraints of the inequality and equality types. All
of this creates serious challenges for deriving necessary optimality
conditions. We develop here the method of discrete approximations and combine
it with advanced tools of first-order and second-order variational analysis and
generalized differentiation. This approach allows us to establish constructive
necessary optimality conditions for local minimizers of the controlled sweeping
process expressed entirely in terms of the problem data under fairly
unrestrictive assumptions. As a by-product of the developed approach, we prove
the strong convergence of optimal solutions of discrete
approximations to a given local minimizer of the continuous-time system and
derive necessary optimality conditions for the discrete counterparts. The
established necessary optimality conditions for the sweeping process are
illustrated by several examples.
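For context, the uncontrolled sweeping process of Moreau is the dissipative differential inclusion below (standard notation, assumed rather than quoted from the paper):

```latex
% Moreau's sweeping process: the state is "swept" by the moving convex set C(t);
% N(x; C) is the normal cone to C at x (empty for x outside C).
\dot{x}(t) \in -N\bigl(x(t);\, C(t)\bigr) \quad \text{a.e. } t \in [0, T],
\qquad x(0) = x_0 \in C(0).
```

In the controlled polyhedral setting described above, \(C(t)\) is a moving polyhedron \(\{x : \langle u_i(t), x\rangle \le b_i(t),\ i = 1, \dots, m\}\), and the generating data \((u_i, b_i)\) act as controls shaping the set.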
Global optimality conditions and optimization methods for constrained polynomial programming problems
The general constrained polynomial programming problem (GPP) is considered in this paper. Problem (GPP) has a broad range of applications and is proved to be NP-hard. Necessary global optimality conditions for problem (GPP) are established. Then, a new local optimization method for this problem is proposed by exploiting these necessary global optimality conditions. A global optimization method is proposed by combining this local optimization method with an auxiliary function. Some numerical examples are given to illustrate that these approaches are very efficient. (C) 2015 Elsevier Inc. All rights reserved.
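A toy sketch (illustrative only, not the paper's method; all names here are made up) of why purely local descent is insufficient for nonconvex polynomial problems, and why a globalization device such as an auxiliary function or global optimality condition is needed:

```python
# A quartic with two local minima: plain gradient descent started in the
# wrong basin stops at the shallow minimum; a crude multi-start stands in
# for the globalization step that the paper's auxiliary function provides.

def f(x):       # nonconvex polynomial objective on the box [-2, 2]
    return x**4 - 3*x**2 + x

def df(x):      # its derivative
    return 4*x**3 - 6*x + 1

def local_descent(x, lr=0.01, iters=2000):
    for _ in range(iters):
        x -= lr * df(x)
    return x

x_local = local_descent(1.0)                 # basin of the shallow minimum
starts = [-2 + 0.5 * k for k in range(9)]    # coarse grid of starting points
x_best = min((local_descent(s) for s in starts), key=f)

print(round(x_local, 2), round(f(x_local), 2))  # stuck near x ~ 1.13
print(round(x_best, 2), round(f(x_best), 2))    # global minimum near x ~ -1.30
```

The single descent run certifies only stationarity; checking a necessary global optimality condition (or improving via an auxiliary function) is what separates the two answers.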
Optimality conditions in continuous-time programming problems
The continuous-time programming problem consists in minimizing an integral functional, with
phase constraints of different types.
The subject of this doctoral dissertation is to establish extremum conditions as well as
duality theorems for a class of convex and smooth continuous-time programming problems,
with phase constraints of the inequality type. Unfortunately, some published results in this field
are not valid, which was confirmed in 2019.
In this dissertation, new optimality conditions for the aforementioned class of problems are
obtained. Theorems of weak and strong duality are proved. The main tool for deriving these
results is a new theorem of the alternative for a convex system of strict and non-strict
inequalities in infinite-dimensional spaces. In order to apply this theorem, a suitable
regularity condition must be satisfied. Some optimality conditions are obtained under an
additional constraint regularity qualification. The theoretical results are confirmed by practical examples.
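In one standard formulation (notation assumed here, not quoted from the dissertation), this problem class reads:

```latex
% Continuous-time programming: minimize an integral functional subject to
% pointwise (phase) inequality constraints holding almost everywhere.
\min_{x \in L^{\infty}([0,T],\,\mathbb{R}^n)} \; \int_{0}^{T} \phi\bigl(x(t), t\bigr)\, dt
\quad \text{s.t.} \quad g\bigl(x(t), t\bigr) \le 0 \ \ \text{a.e. } t \in [0, T].
```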
Decomposed Structured Subsets for Semidefinite and Sum-of-Squares Optimization
Semidefinite programs (SDPs) are standard convex problems that are frequently
found in control and optimization applications. Interior-point methods can
solve SDPs in polynomial time to arbitrary accuracy, but they scale poorly as the
size of the matrix variables and the number of constraints increase. To improve
scalability, SDPs can be approximated with lower and upper bounds through the
use of structured subsets (e.g., diagonally-dominant and scaled-diagonally
dominant matrices). Meanwhile, any underlying sparsity or symmetry structure
may be leveraged to form an equivalent SDP with smaller positive semidefinite
constraints. In this paper, we present a notion of decomposed structured
subsets to approximate an SDP with structured subsets after an equivalent
conversion. The lower/upper bounds found by approximation after conversion
are tighter than the bounds obtained by approximating the original SDP
directly. We apply decomposed structured subsets to semidefinite and
sum-of-squares optimization problems, with examples of H-infinity norm
estimation and constrained polynomial optimization. An existing basis pursuit
method is adapted into this framework to iteratively refine bounds.
Comment: 23 pages, 10 figures, 9 tables
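A structured subset in miniature (a sketch under assumed names, not the paper's implementation): every symmetric diagonally dominant (DD) matrix with nonnegative diagonal is positive semidefinite by Gershgorin's theorem, so restricting an SDP's matrix variable to the DD cone yields a tractable inner approximation, at the price of conservatism, since many PSD matrices are not DD:

```python
import numpy as np

def is_diagonally_dominant(A):
    """A_ii >= sum_{j != i} |A_ij| for every row i, with A_ii >= 0."""
    d = np.diag(A)
    off = np.sum(np.abs(A), axis=1) - np.abs(d)
    return bool(np.all(d >= 0) and np.all(d >= off))

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])    # DD, hence PSD without an eigensolver
B = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])     # PSD but *not* DD: the subset is strict

print(is_diagonally_dominant(A), bool(np.linalg.eigvalsh(A).min() >= 0))
print(is_diagonally_dominant(B), bool(np.linalg.eigvalsh(B).min() >= 0))
```

Replacing "X is PSD" by "X is DD" (a set of linear constraints) is exactly the kind of lower/upper-bound substitution the paper refines after sparsity/symmetry conversion.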
Group-invariant Semidefinite Programming and Applications
This essay considers semidefinite programming problems that exhibit a special form of symmetry called group-invariance. We demonstrate the effect of such symmetries on certain path-following interior-point algorithms, and highlight a reduction technique that is particularly useful on certain group-invariant semidefinite programming problems. Two applications of group-invariant semidefinite programming problems, one in truss design and the other in graph theory, are presented.
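A small instance of the reduction idea (illustrative, not the essay's construction): a matrix invariant under the cyclic group C_n is circulant, and the discrete Fourier transform diagonalizes every circulant matrix simultaneously, so an n x n PSD constraint on such a matrix collapses to n scalar nonnegativity constraints:

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix whose i-th row is the i-step cyclic shift of first_row."""
    n = len(first_row)
    return np.array([[first_row[(j - i) % n] for j in range(n)]
                     for i in range(n)])

c = [2.0, -1.0, 0.0, -1.0]        # symmetric generator -> symmetric circulant
C = circulant(c)

spectrum = np.fft.fft(c).real     # eigenvalues of C via the DFT of its first row
direct = np.linalg.eigvalsh(C)    # eigenvalues by a generic symmetric eigensolver

print(bool(np.allclose(np.sort(spectrum), direct)))   # True: same spectrum
print(bool(np.all(spectrum >= -1e-12)))               # PSD checked via n scalars
```

For general finite groups the same principle block-diagonalizes the invariant matrices rather than fully diagonalizing them, which is what makes group-invariant SDPs cheaper to solve.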
Algebraic Algorithm Design and Local Search
Formal, mathematically-based techniques promise to play an expanding role in the development and maintenance of the software on which our technological society depends. Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS). An algorithm theory formally characterizes the essential components of a family of algorithms. A design tactic is a specialized procedure for recognizing in a problem specification the structures identified in an algorithm theory and then synthesizing a program. Design tactics are hard to write, however, and much of the knowledge they use is encoded procedurally in idiosyncratic ways. Algebraic methods promise a way to represent algorithm design knowledge declaratively and uniformly. We describe a general method for performing algorithm design that is more purely algebraic than that of KIDS. This method is then applied to local search. Local search is a large and diverse class of algorithms applicable to a wide range of problems; it is both intrinsically important and representative of algorithm design as a whole. A general theory of local search is formalized to describe the basic properties common to all local search algorithms, and applied to several variants of hill climbing and simulated annealing. The general theory is then specialized to describe some more advanced local search techniques, namely tabu search and the Kernighan-Lin heuristic.
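The idea that one local-search theory covers many algorithms can be sketched as follows (all names here are illustrative, not KIDS notation): a state space, a neighborhood, a cost, and an acceptance rule fix the algorithm, and hill climbing versus simulated annealing differ only in the acceptance rule.

```python
import math
import random

def local_search(init, neighbor, cost, accept, steps, rng):
    """Generic local search: the accept rule is the only varying component."""
    x = init
    for t in range(steps):
        y = neighbor(x, rng)
        if accept(cost(x), cost(y), t, rng):
            x = y
    return x

def greedy(cx, cy, t, rng):                 # hill climbing: improvements only
    return cy < cx

def metropolis(cx, cy, t, rng, T0=5.0):     # annealing: uphill moves allowed
    T = T0 * 0.99 ** t                      # geometric cooling schedule
    return cy < cx or rng.random() < math.exp((cx - cy) / T)

# Toy landscape on 0..99: global minimum at 70, spurious local minima at the
# other multiples of 10 (every non-multiple pays a +5 penalty).
cost = lambda x: (x - 70) ** 2 // 100 + (0 if x % 10 == 0 else 5)
step = lambda x, rng: max(0, min(99, x + rng.choice([-1, 1])))

hc = local_search(40, step, cost, greedy, 500, random.Random(0))
sa = local_search(40, step, cost, metropolis, 500, random.Random(1))
print(hc, cost(hc))   # greedy search is stuck at the local minimum 40
print(sa, cost(sa))   # annealing may escape; the outcome depends on the seed
```

Tabu search and the Kernighan-Lin heuristic fit the same template by further parameterizing the neighborhood (excluding recently visited moves), which is the specialization step the dissertation formalizes.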
- β¦