Improving the efficiency of DC global optimization methods by improving the DC representation of the objective function
There are infinitely many ways of representing a d.c. function as a difference of convex functions. In this paper we analyze how the computational efficiency of a d.c. optimization algorithm depends on the representation chosen for the objective function, and we address the problem of characterizing and obtaining a computationally optimal representation. We introduce the theoretical concepts necessary for this analysis and report some numerical experiments.
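As an illustration of the abstract's premise (a sketch, not taken from the paper): any twice-differentiable function whose second derivative is bounded admits infinitely many DC representations f = g − h, since f can be split as (f + ρ/2·x²) − (ρ/2·x²) for every sufficiently large ρ. Assuming f(x) = sin(x), for which |f''| ≤ 1, any ρ ≥ 1 yields a valid pair:

```python
import numpy as np

def f(x):
    # A nonconvex function with |f''| <= 1, so any rho >= 1 works below.
    return np.sin(x)

def dc_pair(rho):
    """Return a DC pair (g, h) with f = g - h.

    g(x) = sin(x) + rho/2 * x**2 has g'' = rho - sin(x) >= 0 for rho >= 1,
    so both g and h are convex; each rho gives a different representation.
    """
    g = lambda x: np.sin(x) + 0.5 * rho * x**2
    h = lambda x: 0.5 * rho * x**2
    return g, h
```

Every admissible ρ gives a different representation of the same objective, which is exactly the degree of freedom whose effect on algorithmic efficiency the paper studies.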
A new criterion for optimizing the solution of mathematical problems
Mathematical programming problems try to solve processes which have several possible solutions, only one of which is optimal: the one that best fits the conditions pre-established by the problem statement. Classical procedures find optimal solutions in the case of convex problems, but cannot guarantee them in any other case. However, if convexity is present in some form, for example when the objective function can be expressed as a difference of convex functions, then new procedures that make it possible to compute optimal solutions can be described. In this study, a new use of convexity is taken as the central axis in the discrimination of solutions, with the aim of improving the efficiency of obtaining optimal solutions.
DC Semidefinite Programming and Cone Constrained DC Optimization
In the first part of this paper we discuss possible extensions of the main
ideas and results of constrained DC optimization to the case of nonlinear
semidefinite programming problems (i.e. problems with matrix constraints). To
this end, we analyse two different approaches to the definition of DC
matrix-valued functions (namely, order-theoretic and componentwise), study some
properties of convex and DC matrix-valued functions and demonstrate how to
compute DC decompositions of some nonlinear semidefinite constraints appearing
in applications. We also compute a DC decomposition of the maximal eigenvalue
of a DC matrix-valued function, which can be used to reformulate DC
semidefinite constraints as DC inequality constraints.
In the second part of the paper, we develop a general theory of cone
constrained DC optimization problems. Namely, we obtain local optimality
conditions for such problems and study an extension of the DC algorithm (the
convex-concave procedure) to the case of general cone constrained DC
optimization problems. We analyse the global convergence of this method and
present a detailed study of a version of the DCA utilising exact penalty
functions. In particular, we provide two types of sufficient conditions for the
convergence of this method to a feasible and critical point of a cone
constrained DC optimization problem from an infeasible starting point.