Multiobjective Robust Control with HIFOO 2.0
Multiobjective control design is known to be a difficult problem both in
theory and practice. Our approach is to search for locally optimal solutions of
a nonsmooth optimization problem that is built to incorporate minimization
objectives and constraints for multiple plants. We report on the success of
this approach using our public-domain Matlab toolbox HIFOO 2.0, comparing our
results with benchmarks in the literature.
Tangential Extremal Principles for Finite and Infinite Systems of Sets, II: Applications to Semi-infinite and Multiobjective Optimization
This paper contains selected applications of the new tangential extremal
principles and related results developed in Part I to calculus rules for
infinite intersections of sets and optimality conditions for problems of
semi-infinite programming and multiobjective optimization with countable
constraints.
Solving ill-posed bilevel programs
This paper deals with ill-posed bilevel programs, i.e., problems admitting multiple lower-level solutions for some upper-level parameters. Many publications have been devoted to the standard optimistic case of this problem, where the difficulty is essentially moved from the objective function to the feasible set. This new problem is simpler, but there is no guarantee of obtaining locally optimal solutions for the original optimistic problem by this process. Given the intrinsic nonconvexity of bilevel programs, computing locally optimal solutions is the best one can hope for in most cases. To achieve this goal, we start by establishing an equivalence between the original optimistic problem and a certain set-valued optimization problem. Next, we develop optimality conditions for the latter problem and show that they generalize all the results currently known in the literature on optimistic bilevel optimization. Our approach is then extended to multiobjective bilevel optimization, and completely new results are derived for problems with vector-valued upper- and lower-level objective functions. Numerical implementations of the results of this paper are provided on some examples, in order to demonstrate how the original optimistic problem can be solved in practice by means of a special set-valued optimization problem.
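In standard notation (not taken verbatim from the paper), the optimistic bilevel program the abstract refers to reads:

```latex
\min_{x \in X} \; \min_{y \in S(x)} F(x, y),
\qquad
S(x) := \operatorname*{argmin}_{y \in Y} \{\, f(x, y) : g(x, y) \le 0 \,\},
```

where F is the upper-level objective and f the lower-level one; the problem is ill-posed precisely when S(x) contains more than one point for some admissible x.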
On multiobjective optimization from the nonsmooth perspective
Practical applications usually have a multiobjective nature rather than only one objective to optimize. A multiobjective problem cannot be solved with a single-objective solver as such. On the other hand, optimizing only one objective may lead to arbitrarily bad solutions with respect to the other objectives. Therefore, special techniques for multiobjective optimization are vital. In addition to their multiobjective nature, many real-life problems have a nonsmooth (i.e. not continuously differentiable) structure. Unfortunately, many smooth (i.e. continuously differentiable) methods rely on gradient-based information, which cannot be used for nonsmooth problems. Since both of these characteristics are relevant for applications, we focus here on nonsmooth multiobjective optimization. As a research topic, nonsmooth multiobjective optimization has received only limited attention, while the fields of nonsmooth single-objective and smooth multiobjective optimization have each attracted considerably more interest. This dissertation covers parts of nonsmooth multiobjective optimization in terms of theory, methodology and application.
Bundle methods are widely considered effective and reliable solvers for single-objective nonsmooth optimization. Therefore, we investigate the use of the bundle idea in the multiobjective framework with three different methods. The first one generalizes the single-objective proximal bundle method to the nonconvex multiobjective constrained problem. The second method adopts ideas from the classical steepest descent method for the convex unconstrained multiobjective case. The third method is designed for multiobjective problems with constraints where both the objectives and the constraints can be represented as a difference of convex (DC) functions. Besides the bundle idea, all three methods are descent methods, meaning that they produce better values for each objective at each iteration. Furthermore, all of them utilize the improvement function, either directly or indirectly. A notable fact is that none of these methods uses scalarization in the traditional sense. By scalarization we refer to techniques transforming a multiobjective problem into a single-objective one.
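The improvement function mentioned above is a standard construction in nonsmooth multiobjective descent methods: at a current iterate y, it aggregates the objective decreases and the constraint values into a single max-function. A minimal sketch (names and signature are illustrative, not taken from the dissertation):

```python
def improvement_function(objectives, constraints, x, y):
    """H(x; y) = max( f_i(x) - f_i(y) over all objectives i,
                      g_j(x)          over all constraints j ).
    If H(x; y) < 0, then x is feasible and improves EVERY objective over y,
    which is exactly the descent property the three methods above enforce."""
    terms = [f(x) - f(y) for f in objectives]
    terms += [g(x) for g in constraints]
    return max(terms)


# Two toy objectives on the real line; x = 1 improves both over y = 2.
fs = [lambda x: x ** 2, lambda x: (x - 1) ** 2]
print(improvement_function(fs, [], 1.0, 2.0))  # -1.0
```

Note that H(y; y) = 0 at any feasible point y, so a direction along which H(·; y) decreases below zero improves all objectives simultaneously.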
As scalarization plays an important role in multiobjective optimization, we present one special family of achievement scalarizing functions as a representative of this category. In general, achievement scalarizing functions are well suited to the interactive framework. Thus, we propose an interactive method using our special family of achievement scalarizing functions. In addition, this method utilizes the above-mentioned descent methods as tools to illustrate the range of optimal solutions. Finally, this interactive method is used to solve practical case studies on scheduling the final disposal of spent nuclear fuel in Finland.
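An achievement scalarizing function of the kind used in such interactive methods can be sketched in its classical Wierzbicki form; this is a generic sketch of the technique, not the specific family proposed in the dissertation:

```python
def achievement(f_vals, ref_point, weights, rho=1e-6):
    """Wierzbicki-type achievement scalarizing function:
        s(f) = max_i w_i (f_i - z_i) + rho * sum_i w_i (f_i - z_i),
    where z is the decision maker's reference point and rho > 0 is a small
    augmentation coefficient that steers the minimizer toward properly
    Pareto optimal solutions."""
    terms = [w * (f - z) for f, z, w in zip(f_vals, ref_point, weights)]
    return max(terms) + rho * sum(terms)


# Objective vector (1, 2), reference point at the origin, unit weights:
print(achievement([1.0, 2.0], [0.0, 0.0], [1.0, 1.0], rho=0.0))  # 2.0
```

Minimizing s over the feasible set yields a Pareto optimal solution closest (in this weighted max sense) to the reference point, which is what makes the construction convenient for interactive use: the decision maker steers the search simply by moving z.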
Necessary Conditions in Multiobjective Optimization With Equilibrium Constraints
In this paper we study multiobjective optimization problems with equilibrium constraints (MOECs) described by generalized equations of the form 0 ∈ G(x,y) + Q(x,y), where both mappings G and Q are set-valued. Such models arise, in particular, from certain optimization-related problems governed by variational inequalities and from first-order optimality conditions in nondifferentiable programming. We establish verifiable necessary conditions for the general problems under consideration and for their important specifications using modern tools of variational analysis and generalized differentiation. The application of the obtained necessary optimality conditions is illustrated by a numerical example from bilevel programming with convex but nondifferentiable data.
First-Order Conditions for C^{0,1} Constrained Vector Optimization
For a Fritz John type vector optimization problem with C^{0,1} data we define different types of solutions, give their scalar characterizations by applying the so-called oriented distance, and give necessary and sufficient first-order optimality conditions in terms of the Dini derivative. While establishing sufficiency, we introduce a new type of efficient points, referred to as isolated minimizers of first order, and show their relation to properly efficient points. More precisely, the obtained necessary conditions are necessary for weak efficiency, and the sufficient conditions are both sufficient and necessary for a point to be an isolated minimizer of first order.
Keywords: vector optimization, nonsmooth optimization, C^{0,1} functions, Dini derivatives, first-order optimality conditions, Lagrange multipliers
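The "oriented distance" used for the scalar characterizations is the signed distance Δ_A(y) = d(y, A) − d(y, complement of A): negative inside the set A, zero on its boundary, positive outside. A minimal one-dimensional sketch, with A = [a, b] chosen purely for illustration:

```python
def oriented_distance_interval(y, a, b):
    """Oriented (signed) distance of y to the set A = [a, b]:
        Delta_A(y) = dist(y, A) - dist(y, complement of A)."""
    dist_to_A = max(a - y, y - b, 0.0)                 # zero when y lies in A
    dist_to_complement = max(min(y - a, b - y), 0.0)   # zero when y lies outside A
    return dist_to_A - dist_to_complement


print(oriented_distance_interval(0.5, 0.0, 1.0))  # -0.5 (inside)
print(oriented_distance_interval(2.0, 0.0, 1.0))  # 1.0 (outside)
```

In the vector-optimization setting the same construction applied to the ordering cone turns efficiency notions into sign conditions on a single scalar function, which is what makes the characterizations in the abstract possible.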
Optimality conditions in convex multiobjective SIP
The purpose of this paper is to characterize the weak efficient solutions, the efficient solutions, and the isolated efficient solutions of a given vector optimization problem with finitely many convex objective functions and infinitely many convex constraints. To do this, we introduce new and already known data qualifications (conditions involving the constraints and/or the objectives) in order to get optimality conditions expressed in terms of either Karush–Kuhn–Tucker multipliers or a new gap function associated with the given problem.
This research was partially cosponsored by the Ministry of Economy and Competitiveness (MINECO) of Spain and by the European Regional Development Fund (ERDF) of the European Commission, Project MTM2014-59179-C2-1-P.