Comparative study of RPSALG algorithm for convex semi-infinite programming
The Remez penalty and smoothing algorithm (RPSALG) is a unified framework for penalty and smoothing methods for solving min-max convex semi-infinite programming problems, whose convergence was analyzed in a previous paper by three of the authors. In this paper we consider a partial implementation of RPSALG for solving ordinary convex semi-infinite programming problems. Each iteration of RPSALG involves two types of auxiliary optimization problems: the first consists of obtaining an approximate solution of some discretized convex problem, while the second requires solving a non-convex optimization problem in which the parametric constraint is the objective function and the parameter is the variable. In this paper we tackle the latter problem with a variant of the cutting angle method called ECAM, a global optimization procedure for solving Lipschitz programming problems. We implement different variants of RPSALG, which are compared with the only publicly available SIP solver, NSIPS, on a battery of test problems. This research was partially supported by MINECO of Spain, Grants MTM2011-29064-C03-01/02.
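The two-level structure described above (a discretized convex master problem plus a global search over the constraint parameter) can be illustrated with a generic exchange/discretization scheme. The sketch below is not RPSALG itself: the problem instance is invented for illustration, and a dense grid search merely stands in for the role the paper assigns to the ECAM global solver.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative exchange scheme for a linear semi-infinite program:
#   minimize  x0 + x1/2 + x2/3            (= integral of p over [0,1])
#   s.t.      p(t) = x0 + x1*t + x2*t^2 >= sin(pi*t)  for all t in [0,1].
# The instance is a made-up example; the inner global maximization of the
# constraint violation is done here by a dense grid search.

def solve_discretized(ts):
    """Solve the LP restricted to the finite parameter set ts."""
    c = np.array([1.0, 0.5, 1.0 / 3.0])             # integrals of 1, t, t^2
    A = np.array([[-1.0, -t, -t * t] for t in ts])  # -p(t) <= -sin(pi*t)
    b = np.array([-np.sin(np.pi * t) for t in ts])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(-10, 10)] * 3)
    return res.x, res.fun

def worst_violation(x, n=2001):
    """Grid stand-in for the global (ECAM-style) subproblem."""
    t = np.linspace(0.0, 1.0, n)
    g = np.sin(np.pi * t) - (x[0] + x[1] * t + x[2] * t * t)
    i = int(np.argmax(g))
    return t[i], g[i]

ts = [0.0, 0.5, 1.0]                  # initial discretization
for _ in range(100):
    x, obj = solve_discretized(ts)
    t_bad, viol = worst_violation(x)
    if viol <= 1e-6:
        break
    ts.append(t_bad)                  # exchange step: add worst parameter

print(f"objective {obj:.4f}, violation {viol:.2e}")
```

The final objective lies between the integral of sin(pi*t), 2/pi ≈ 0.637, and the value 2/3 attained by the feasible quadratic 4t(1-t).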
Bibliography on Nondifferentiable Optimization
This is a research bibliography, with all the advantages and shortcomings that this implies. The author has used it as a bibliographic database when writing papers, and it is therefore largely a reflection of his own personal research interests. However, it is hoped that this bibliography will nevertheless be of use to others interested in nondifferentiable optimization.
A comparative note on the relaxation algorithms for the linear semi-infinite feasibility problem
The problem (LFP) of finding a feasible solution to a given linear semi-infinite system arises in different contexts. This paper provides an empirical comparative study of relaxation algorithms for (LFP). In this study we consider, together with the classical algorithm, implemented with different values of the fixed parameter (the step size), a new relaxation algorithm with random parameter which outperforms the classical one in most test problems, whatever fixed parameter is taken. This new algorithm converges geometrically to a feasible solution under mild conditions. The relaxation algorithms under comparison have been implemented using the Extended Cutting Angle Method (ECAM) for solving the global optimization subproblems.
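As a companion to the abstract, here is a minimal sketch of the classical relaxation (projection) scheme it refers to, run on a small finite linear system standing in for the semi-infinite one. The instance, the function name, and the choice of drawing the random parameter uniformly from (0, 2) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Relaxation method for the linear feasibility problem A x <= b.
# At each step the most violated constraint is selected and x is moved
# toward its hyperplane; lam is the fixed step-size parameter (lam = 1 is
# exact projection, 0 < lam < 2 keeps the iterates Fejer monotone with
# respect to the feasible set).

def relaxation(A, b, x0, lam=1.0, tol=1e-10, max_iter=10_000, rng=None):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = A @ x - b                     # residuals (positive = violated)
        i = int(np.argmax(r))
        if r[i] <= tol:
            return x, k                   # feasible point found
        step = lam if rng is None else rng.uniform(0.0, 2.0)
        x = x - step * r[i] / (A[i] @ A[i]) * A[i]
    return x, max_iter

# Toy system: the unit square 0 <= x0, x1 <= 1 written as A x <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
x_fix, it_fix = relaxation(A, b, x0=[5.0, -3.0], lam=1.0)
x_rnd, it_rnd = relaxation(A, b, x0=[5.0, -3.0],
                           rng=np.random.default_rng(0))
```

With exact projection (lam = 1) this toy instance is solved in two steps; the random-parameter variant reaches feasibility as well, with the contraction factor |1 - step| varying from iteration to iteration.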
Nonsmooth optimization techniques with applications in automatic control and contact mechanics
Nonsmooth optimization is an active branch of modern nonlinear programming, where the objective and constraints are continuous but not necessarily differentiable functions. Generalized subgradients are available as a substitute for the missing derivative information, and are used within the framework of descent algorithms to approximate local optimal solutions. Under practically realistic hypotheses we prove convergence certificates to local optima or critical points from an arbitrary starting point.
In this thesis we focus on developing nonsmooth optimization techniques of bundle type, where the challenge is to prove convergence certificates without convexity hypotheses. Satisfactory results are obtained for two important classes of nonsmooth functions arising in applications, lower- and upper-C1 functions. Our methods are applied to design problems in control system theory and in unilateral contact mechanics, in particular to destructive mechanical testing for delamination of composite materials. We show how these fields lead to typical nonsmooth optimization problems, and we develop bundle algorithms suited to address these problems successfully.
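The descent mechanism the abstract describes, replacing the missing gradient by a generalized subgradient, can be illustrated in its simplest convex form; bundle methods refine this basic scheme by keeping a model built from several past subgradients. The objective and step-size rule below are illustrative choices of mine, not taken from the thesis.

```python
import numpy as np

# Subgradient descent on the nonsmooth convex function
#   f(x) = |x0 - 1| + 2*|x1 + 1|,  minimized at (1, -1) with f* = 0.
# np.sign(0) = 0 is a valid subgradient at the kinks, so the update is
# defined everywhere; with the diminishing steps 1/(k+1) the best value
# recorded so far converges to the minimum.

def f(x):
    return abs(x[0] - 1.0) + 2.0 * abs(x[1] + 1.0)

def subgrad(x):
    return np.array([np.sign(x[0] - 1.0), 2.0 * np.sign(x[1] + 1.0)])

x = np.zeros(2)
f_best = f(x)
for k in range(5000):
    x = x - (1.0 / (k + 1)) * subgrad(x)   # diminishing step size
    f_best = min(f_best, f(x))             # subgradient steps are not monotone

print(f_best)   # prints 0.0
```

On this particular instance the iterate happens to land exactly on the minimizer after two steps; in general the method is slow, which is precisely the motivation for the bundle machinery the thesis develops.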