Preconditioning for active set and projected gradient methods as semi-smooth Newton methods for PDE-constrained optimization with control constraints
Optimal control problems with partial differential equations play an important role in many applications. The inclusion of bound constraints for the control poses a significant additional challenge for optimization methods. In this paper we propose preconditioners for the saddle point problems that arise when a primal-dual active set method is used. We also show that the same saddle point system can be derived when this method is viewed as a semi-smooth Newton method. In addition, the projected gradient method can be employed to solve optimization problems with simple bounds, and we discuss the efficient solution of the linear systems in question. When an acceleration technique is employed for the projected gradient method, this again yields a semi-smooth Newton method that is equivalent to the primal-dual active set method. Numerical results illustrate the competitiveness of this approach.
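As a concrete illustration of the primal-dual active set idea described above, the following Python sketch applies the method to a plain bound-constrained quadratic program rather than to a discretized PDE-constrained problem. The matrix H, right-hand side f, bounds a and b, and the parameter c are illustrative assumptions, not objects from the paper.

    # A minimal sketch (not the paper's PDE-constrained setting): the
    # primal-dual active set method for the bound-constrained QP
    #     min_u  0.5 * u^T H u - f^T u   subject to  a <= u <= b,
    # with H symmetric positive definite.
    import numpy as np

    def primal_dual_active_set(H, f, a, b, c=1.0, max_iter=50):
        n = len(f)
        u = np.clip(np.linalg.solve(H, f), a, b)   # feasible starting guess
        lam = np.zeros(n)                          # multiplier for the bounds
        for _ in range(max_iter):
            # Predict the active sets from the current primal-dual pair
            act_up = lam + c * (u - b) > 0         # upper bound active
            act_lo = lam + c * (u - a) < 0         # lower bound active
            inact = ~(act_up | act_lo)
            # Fix the control on the active sets, free it on the inactive set
            u_new = np.where(act_up, b, np.where(act_lo, a, 0.0))
            if inact.any():
                rhs = f[inact] - H[np.ix_(inact, ~inact)] @ u_new[~inact]
                u_new[inact] = np.linalg.solve(H[np.ix_(inact, inact)], rhs)
            lam_new = f - H @ u_new                # residual defines the multiplier
            lam_new[inact] = 0.0
            if np.array_equal(act_up, lam_new + c * (u_new - b) > 0) and \
               np.array_equal(act_lo, lam_new + c * (u_new - a) < 0):
                return u_new, lam_new              # active sets have settled
            u, lam = u_new, lam_new
        return u, lam

    # Usage on a random SPD system with tight bounds
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 20))
    H = M @ M.T + 20 * np.eye(20)
    f = rng.standard_normal(20)
    u, lam = primal_dual_active_set(H, f, a=-0.1 * np.ones(20), b=0.1 * np.ones(20))

Each iteration predicts the active bounds from the current primal-dual pair, fixes the control there, and solves a reduced linear system on the inactive set; in the PDE-constrained setting that reduced system becomes the saddle point system for which the paper proposes preconditioners.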
Transition Between Ground State and Metastable States in Classical 2D Atoms
Structural and static properties of a classical two-dimensional (2D) system consisting of a finite number of charged particles that are laterally confined by a parabolic potential are investigated by Monte Carlo (MC) simulations and the Newton optimization technique. This system is the classical analog of the well-known quantum dot problem. The energies and configurations of the ground state and all metastable states are obtained. To investigate the barriers and the transitions between the ground state and the metastable states, we first locate the saddle points between them and then, by walking downhill from each saddle point to the different minima, find the path in configuration space from the ground state to the metastable states, from which the geometric properties of the energy landscape are obtained. The sensitivity of the ground-state configuration to the functional form of the inter-particle interaction and to the confinement potential is also investigated.
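A minimal sketch of the underlying model, in standard dimensionless units: N classical charges in a 2D parabolic trap with energy E = sum_i |r_i|^2 + sum_{i<j} 1/|r_i - r_j|, explored with a simple Metropolis Monte Carlo sweep. The particle number, temperature, step size, and sweep count below are illustrative choices, not values from the paper.

    # Energy of N classical charges in a 2D parabolic trap and one
    # Metropolis Monte Carlo sweep over the configuration (dimensionless units).
    import numpy as np

    def energy(pos):
        """Total energy of an (N, 2) array of particle positions."""
        confinement = np.sum(pos ** 2)                 # parabolic trap term
        diff = pos[:, None, :] - pos[None, :, :]       # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1)
        iu = np.triu_indices(len(pos), k=1)
        coulomb = np.sum(1.0 / dist[iu])               # pairwise repulsion
        return confinement + coulomb

    def mc_sweep(pos, temperature=0.01, step=0.05, rng=np.random.default_rng()):
        """One Metropolis sweep: propose a trial move for each particle once."""
        e = energy(pos)
        for i in range(len(pos)):
            trial = pos.copy()
            trial[i] += rng.normal(scale=step, size=2)
            e_trial = energy(trial)
            if e_trial < e or rng.random() < np.exp(-(e_trial - e) / temperature):
                pos, e = trial, e_trial                # accept the move
        return pos, e

    # Usage: relax a random 6-particle configuration toward a low-energy state
    rng = np.random.default_rng(1)
    pos = rng.standard_normal((6, 2))
    for _ in range(2000):
        pos, e = mc_sweep(pos, rng=rng)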
First-order methods of smooth convex optimization with inexact oracle
In this paper, we analyze different first-order methods of smooth convex optimization that employ inexact first-order information. We introduce the notion of an approximate first-order oracle; examples of such an oracle include the smoothing technique, Moreau-Yosida regularization, Modified Lagrangians, and many others. For different methods, we derive complexity estimates and study how the attainable accuracy in the objective function depends on the accuracy of the oracle. It appears that, in the inexact case, the superiority of the fast gradient methods over the classical ones is no longer absolute. Contrary to the simple gradient schemes, fast gradient methods necessarily suffer from an accumulation of errors. Thus, the choice of method depends both on the desired accuracy and on the accuracy of the oracle. We present applications of our results to smooth convex-concave saddle point problems, to the analysis of Modified Lagrangians, to the prox-method, and to some others.
Keywords: smooth convex optimization, first-order methods, inexact oracle, gradient methods, fast gradient methods, complexity bounds
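The contrast between the classical and the fast gradient methods under an inexact oracle can be seen in a few lines. The sketch below (an illustration, not the paper's construction) minimizes a smooth convex quadratic with an oracle that returns the exact gradient plus bounded noise, running the classical gradient method and Nesterov's fast gradient method side by side; the matrix, the noise level delta, and the iteration count are assumptions made for the demonstration.

    # Classical gradient method vs. Nesterov's fast gradient method on a
    # smooth convex quadratic, with a noisy (inexact) first-order oracle.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    Q = rng.standard_normal((n, n))
    A = Q @ Q.T / n + 0.1 * np.eye(n)      # smooth, strongly convex quadratic
    L = np.linalg.eigvalsh(A)[-1]          # Lipschitz constant of the gradient

    def f(x):
        return 0.5 * x @ A @ x             # minimum value 0 at x = 0

    def noisy_grad(x, delta=1e-3):
        """Inexact oracle: exact gradient plus noise of norm delta."""
        noise = rng.standard_normal(n)
        return A @ x + delta * noise / np.linalg.norm(noise)

    x0 = rng.standard_normal(n)

    # Classical gradient method: the oracle error does not accumulate
    x = x0.copy()
    for _ in range(500):
        x = x - (1.0 / L) * noisy_grad(x)
    print("gradient method:      f =", f(x))

    # Fast gradient method: faster initially, but the momentum term
    # keeps re-injecting old oracle errors
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(500):
        x_new = y - (1.0 / L) * noisy_grad(y)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    print("fast gradient method: f =", f(x))

In theory, the simple gradient scheme stalls in a noise-determined neighborhood of the optimum, while the accelerated scheme can accumulate errors that grow with the iteration count; that trade-off is what makes the choice of method depend on both the desired accuracy and the accuracy of the oracle.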
