    Primal and Dual Approximation Algorithms for Convex Vector Optimization Problems

    Two approximation algorithms for solving convex vector optimization problems (CVOPs) are provided. Both algorithms solve the CVOP and its geometric dual problem simultaneously. The first algorithm is an extension of Benson's outer approximation algorithm, and the second is a dual variant of it. Both provide an inner as well as an outer approximation of the upper and lower images, and only one scalar convex program has to be solved in each iteration. The objective and constraint functions need not be differentiable, the ordering cone can be any solid pointed polyhedral cone, and the approximations are related to an appropriate ε-solution concept. Numerical examples are provided.
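
    To make the per-iteration cost concrete, here is a minimal sketch (ours, not the authors' implementation; the bi-objective problem, reference point v, and direction d are illustrative) of the kind of scalar convex program a Benson-type outer approximation step solves, assuming scipy is available:

        # Scalar subproblem of a Benson-type outer approximation step: given a
        # vertex v of the current outer approximation and a fixed interior
        # direction d of the ordering cone, find how far v must move along d
        # to reach the upper image.  Problem data are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        # Bi-objective convex problem: minimize (f1, f2) over R^2 w.r.t. R^2_+.
        def f(x):
            return np.array([x[0]**2 + x[1]**2,
                             (x[0] - 1.0)**2 + (x[1] - 1.0)**2])

        def scalar_subproblem(v, d):
            """min z  s.t.  f(x) <= v + z*d  (componentwise)."""
            def ineq(w):                      # SLSQP convention: ineq(w) >= 0
                x, z = w[:2], w[2]
                return v + z * d - f(x)
            res = minimize(lambda w: w[2],    # minimize z
                           np.array([0.5, 0.5, 1.0]), method="SLSQP",
                           constraints=[{"type": "ineq", "fun": ineq}])
            return res.x[:2], res.x[2]

        v = np.array([0.0, 0.0])              # vertex of the outer approximation
        d = np.array([1.0, 1.0])              # fixed direction in int R^2_+
        x_star, z_star = scalar_subproblem(v, d)
        print(f"boundary point f(x*) = {f(x_star)}, distance z = {z_star:.4f}")
        # If z exceeds the tolerance, the algorithm cuts the outer
        # approximation with a supporting hyperplane at f(x*).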

    Set optimization - a rather short introduction

    Recent developments in set optimization are surveyed and extended, including various set relations, fundamental constructions of a convex analysis for set- and vector-valued functions, and duality for set optimization problems. Extensive sections with bibliographical comments summarize the state of the art. Applications to vector optimization and financial risk measures are discussed, along with algorithmic approaches to set optimization problems.
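
    For orientation, two of the standard set relations treated in this literature, for nonempty sets A, B in a linear space ordered by a convex cone C (notation ours), are the lower and upper set less relations:

        A ⪯_l B  ⟺  B ⊆ A + C        (lower set less relation)
        A ⪯_u B  ⟺  A ⊆ B − C        (upper set less relation)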

    On implicit variables in optimization theory

    Implicit variables of a mathematical program are variables which do not need to be optimized but are used to model feasibility conditions. They frequently appear in several problem classes of optimization theory, including bilevel programming, evaluated multiobjective optimization, and nonlinear optimization problems with slack variables. In order to deal with implicit variables, they are often treated as explicit ones. Here, we first point out that this approach can induce artificial locally optimal solutions. Afterwards, we derive various Mordukhovich-stationarity-type necessary optimality conditions which correspond to treating the implicit variables as explicit ones on the one hand, or using them only implicitly to model the constraints on the other. A detailed comparison of the resulting stationarity conditions and the associated constraint qualifications is provided. Throughout, we work in a fairly general setting relying on modern tools of variational analysis. Finally, we apply our findings to several well-known problem classes of mathematical optimization in order to illustrate the theory.
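
    A toy example of such an artificial solution (ours, not from the paper), assuming scipy: let an implicit variable z model feasibility via x = h(z) with h(z) = z³ − 3z + 3. Since h is surjective, minimizing x² over the feasible x-values has the unique solution x = 0; treating z as an explicit variable instead gives min_z h(z)², in which a local solver can get stuck at z = 1:

        # Treating the implicit variable z as explicit creates an artificial
        # local minimizer.  Toy data, not from the paper.
        from scipy.optimize import minimize

        def h(z):                    # x = h(z) models feasibility; h is onto R
            return z**3 - 3.0*z + 3.0

        res = minimize(lambda z: h(z[0])**2, x0=[1.2], method="BFGS")
        print(f"z* = {res.x[0]:.4f}, x* = h(z*) = {h(res.x[0]):.4f}")
        # Started near z = 1, BFGS stops at z = 1, x = 1 with value 1,
        # although x = 1 is not a local minimizer of the original problem
        # (the true minimizer is x = 0, attained at the root of h).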

    Isolated minimizers and proper efficiency for C0,1 constrained vector optimization problems

    We consider the vector optimization problem min_C f(x), g(x) ∈ −K, where f: R^n → R^m and g: R^n → R^p are C0,1 (i.e. locally Lipschitz) functions and C ⊆ R^m and K ⊆ R^p are closed convex cones. We give several notions of solution (efficiency concepts), among them the notion of properly efficient point (p-minimizer) of order k and the notion of isolated minimizer of order k. We show that each isolated minimizer of order k ≥ 1 is a p-minimizer of order k. The possible reversal of this statement in the case k = 1 is studied through first-order necessary and sufficient conditions in terms of Dini derivatives. Observing that the optimality conditions for the constrained problem coincide with those for a suitable unconstrained problem, we introduce sense I solutions (those of the initial constrained problem) and sense II solutions (those of the unconstrained problem). Further, we obtain relations between sense I and sense II isolated minimizers and p-minimizers.
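
    For reference, the scalar template behind these order-k notions (our paraphrase; the paper states the vector versions via the cone C): x₀ is an isolated minimizer of order k for f if there exist A > 0 and δ > 0 such that

        f(x) ≥ f(x₀) + A‖x − x₀‖^k    whenever ‖x − x₀‖ < δ.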

    Bundle methods in nonsmooth DC optimization

    Due to the complexity of many practical applications, we encounter optimization problems with nonsmooth functions, that is, functions which are not continuously differentiable everywhere. Classical gradient-based methods are not applicable to such problems, since they may fail in the nonsmooth setting. It is therefore imperative to develop numerical methods specifically designed for nonsmooth optimization. To date, bundle methods are considered to be the most efficient and reliable general-purpose solvers for this type of problem. The idea in bundle methods is to approximate the subdifferential of the objective function by a bundle of subgradients, which is then used to build a model of the objective. However, this model is typically convex and may therefore be inaccurate and unable to adequately reflect the behaviour of the objective function in the nonconvex case. These circumstances motivate the design of new bundle methods based on nonconvex models of the objective function. In this dissertation, the main focus is on nonsmooth DC optimization, an important and broad subclass of nonconvex optimization problems; a DC function can be represented as a difference of two convex functions. We can thus obtain a model that explicitly utilizes both the convexity and the concavity of the objective by approximating the convex and concave parts separately. This way we end up with a nonconvex DC model describing the problem more accurately than the convex one. Based on the new DC model we introduce three different bundle methods. Two of them are designed for unconstrained DC optimization, and the third is also capable of solving multiobjective and constrained DC problems. Finite convergence is proved for each method. The numerical results demonstrate the efficiency of the methods and the benefits obtained from utilizing the DC decomposition. Even though a DC decomposition can improve the performance of bundle methods, it is not always available or possible to construct. We therefore present a further bundle method for a general objective function which implicitly collects information about the DC structure. This method is developed for large-scale nonsmooth optimization, and its convergence is proved for semismooth functions. The efficiency of the method is shown with numerical results. As an application of the developed methods, we consider clusterwise linear regression (CLR) problems. By applying the support vector machine (SVM) approach, a new model for these problems is proposed. The objective in the new formulation of the CLR problem is expressed as a DC function, and a method based on one of the presented bundle methods is designed to solve it. Numerical results demonstrate the robustness of the new approach to outliers.
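
    The core modelling idea admits a compact sketch (ours; the functions and bundle points are illustrative, and the actual methods embed this model in a full bundle algorithm): each convex component of f = f1 − f2 gets its own cutting-plane model, and their difference is the nonconvex DC model:

        # Nonconvex piecewise linear DC model: cutting-plane models of the
        # convex parts f1 and f2, combined as their difference.  Toy data.
        f1 = lambda x: x**2                   # convex part
        g1 = lambda x: 2.0*x                  # subgradient of f1
        f2 = lambda x: abs(x)                 # convex part being subtracted
        g2 = lambda x: 1.0 if x > 0 else -1.0 if x < 0 else 0.0  # a subgradient of f2

        bundle = [-1.0, 0.3, 0.8]             # previously visited points

        def cp_model(f, g, pts):
            """Convex cutting-plane model: max of linearizations at pts."""
            return lambda x: max(f(y) + g(y) * (x - y) for y in pts)

        A1, A2 = cp_model(f1, g1, bundle), cp_model(f2, g2, bundle)
        dc_model = lambda x: A1(x) - A2(x)    # nonconvex DC model of f

        for x in (-0.5, 0.0, 0.5):
            print(f"x = {x:+.1f}   f(x) = {f1(x)-f2(x):+.3f}   model = {dc_model(x):+.3f}")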

    Local maximizers of generalized convex vector-valued functions

    Any local maximizer of an explicitly quasiconvex real-valued function is actually a global minimizer, if it belongs to the intrinsic core of the function's domain. In this paper we show that similar properties hold for componentwise explicitly quasiconvex vector-valued functions, with respect to the concepts of ideal, strong and weak optimality. We illustrate these results in the particular framework of linear fractional multicriteria optimization problems.
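
    A minimal scalar illustration (ours): f(x) = max{x, 0} is convex on ℝ, hence explicitly quasiconvex; every x < 0 is a local maximizer lying in the intrinsic core of the domain, and every such point is indeed a global minimizer of f.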