13 research outputs found

    On the rate of convergence of the Difference-of-Convex Algorithm (DCA)

    In this paper, we study the convergence rate of the DCA (Difference-of-Convex Algorithm), also known as the convex-concave procedure, with two different termination criteria that are suitable for smooth and nonsmooth decompositions, respectively. The DCA is a popular algorithm for difference-of-convex (DC) problems and is known to converge to a stationary point of the objective under some assumptions. We derive a worst-case convergence rate of O(1/√N) on the objective gradient norm after N iterations for certain classes of DC problems, without assuming strong convexity in the DC decomposition, and give an example which shows that this rate is exact. We also provide a new convergence rate of O(1/N) for the DCA with the second termination criterion. Moreover, we derive a new linear convergence rate result for the DCA under the assumption of the Polyak-Łojasiewicz inequality. The novel aspect of our analysis is that it employs semidefinite programming performance estimation.
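As a minimal illustration of the algorithm being analyzed (a toy example of our own, not taken from the paper), one DCA iteration linearizes the concave part and solves the remaining convex subproblem. The sketch below uses a one-dimensional DC problem for which that subproblem has a closed-form solution:

```python
# Toy DCA sketch (illustrative only, not from the paper): minimize
#   f(x) = x**4 - 2*x**2 + 1  with the DC decomposition
#   g(x) = x**4 + 1   (convex part)
#   h(x) = 2*x**2     (convex part being subtracted)
# One DCA step linearizes h at x_k and minimizes g(x) - h'(x_k)*x.
# The subproblem min_x x**4 + 1 - 4*x_k*x is solved in closed form by
# 4*x**3 = 4*x_k, i.e. x_{k+1} = x_k**(1/3) for x_k > 0.

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        slope = 4.0 * x                    # h'(x_k)
        x = (slope / 4.0) ** (1.0 / 3.0)   # argmin_x g(x) - slope * x
    return x

x_star = dca(0.5)   # iterates approach the stationary point x = 1
```

Each iterate solves a convex subproblem obtained by replacing the subtracted convex part with its linearization, which is exactly the convex-concave procedure that the termination criteria above apply to.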

    Applications of non-convex optimization in portfolio selection

    The thesis is concerned with the application of non-convex programming to problems of portfolio optimization in a single-stage stochastic optimization framework. In particular, two different classes of portfolio selection problems are investigated. In both problems a scenario-based approach to modeling uncertainty is pursued, i.e. the randomness in the models is always described by finitely many joint realizations of the asset returns. The thesis is structured into three chapters, briefly outlined below: (1) A D.C. Formulation of Value-at-Risk Constrained Optimization: In this chapter the aim is to solve mean-risk models with the Value-at-Risk as a risk measure. In the case of finitely supported return distributions, it is shown that the Value-at-Risk can be written as a D.C. function, so that the mentioned mean-risk problem corresponds to a D.C. problem. The non-convex problem of optimizing the Value-at-Risk is treated rather extensively in the literature, and there are various approximate solution techniques as well as some approaches to solve the problem globally. The reformulation as a D.C. problem provides insight into the structure of the problem, which can be exploited to devise a Branch-and-Bound algorithm for finding global solutions for small to medium-sized instances. The possibility of refining epsilon-optimal solutions obtained from the Branch-and-Bound framework via local search heuristics is also discussed in this chapter. (2) Value-at-Risk Constrained Optimization Using the DCA: In this part of the thesis the Value-at-Risk problem is once again investigated, with the aim of solving problems of realistic size in relatively short time.
Since Value-at-Risk optimization can be shown to be an NP-hard problem, this can only be achieved by sacrificing the guarantee of global optimality. Therefore a local solution technique for unconstrained D.C. problems, called the Difference of Convex Algorithm (DCA), is employed. To solve the problem, a new variant of the DCA, the so-called 'hybrid DCA', is proposed, which preserves the favorable convergence properties of the computationally hard 'complete DCA' as well as the computational tractability of the so-called 'simple DCA'. The method is tested on small problems, and its solutions are shown to actually coincide with the global optima obtained with the Branch-and-Bound algorithm in most of the cases. For realistic problem sizes, the proposed method is shown to consistently outperform known heuristic approximations implemented in commercial software. (3) A Framework for Optimization under Ambiguity: The last part of the thesis is devoted to a different topic which has received much attention in the recent stochastic programming literature: robust optimization. More specifically, the aim is to robustify single-stage stochastic optimization models with respect to uncertainty about the distributions of the random variables involved in the formulation of the stochastic program. The goal is to explore ways of explicitly taking ambiguity about the distributions into account when finding a decision, while imposing only very weak restrictions on the probability models taken into consideration. Ambiguity is defined as possible deviation from a discrete reference measure Q (in this work the empirical measure). To this end a so-called ambiguity set B is defined, containing all the measures that can reasonably be assumed to be the real measure P given the available data. Since the idea is to devise a general approach not restricted by assuming P to be an element of any specific parametric family, we define our ambiguity sets by the use of general probability metrics. Relative to these sets, a worst-case approach is adopted to robustify the problem with respect to B. The resulting optimization problems turn out to be infinite and are reduced to non-convex semi-definite problems. In the last part of the thesis we show how to solve these problems numerically for the example of a mean-risk portfolio selection problem with Expected Shortfall under a Threshold as the risk measure. The DCA, in combination with an iterative algorithm that approximates the infinite set of constraints by finitely many, is used to obtain numerical solutions to the problem.
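As a concrete anchor for the scenario-based setting above (a minimal sketch of our own, not code from the thesis), the empirical Value-at-Risk of a portfolio under finitely many equally likely return scenarios can be computed as follows; the scenario data here are made up for illustration:

```python
from math import ceil

# Minimal sketch (illustrative, not the thesis implementation):
# empirical Value-at-Risk under finitely many equally likely return
# scenarios -- the discretely distributed setting in which VaR admits
# a D.C. representation.

def empirical_var(scenario_returns, weights, alpha=0.95):
    """Smallest loss threshold l with P(loss <= l) >= alpha
    under the empirical (equally weighted) measure."""
    losses = sorted(-sum(w * r for w, r in zip(weights, row))
                    for row in scenario_returns)
    k = ceil(alpha * len(losses)) - 1   # zero-based alpha-quantile index
    return losses[max(k, 0)]

# Two assets, four equally likely joint return scenarios (made up)
scenarios = [(0.02, 0.00), (-0.04, 0.02), (0.01, -0.05), (0.03, 0.01)]
var_75 = empirical_var(scenarios, weights=(0.5, 0.5), alpha=0.75)
```

The non-convexity in the weights comes from the quantile: as the portfolio weights vary, the scenario attaining the alpha-quantile changes discontinuously, and it is this piecewise structure that the D.C. reformulation captures.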

    Bundle methods in nonsmooth DC optimization

    Due to the complexity of many practical applications, we encounter optimization problems with nonsmooth functions, that is, functions which are not continuously differentiable everywhere. Classical gradient-based methods are not applicable to such problems, since they may fail in the nonsmooth setting. Therefore, it is imperative to develop numerical methods specifically designed for nonsmooth optimization. To date, bundle methods are considered to be the most efficient and reliable general-purpose solvers for this type of problem. The idea in bundle methods is to approximate the subdifferential of the objective function by a bundle of subgradients. This information is then used to build a model for the objective. However, this model is typically convex and may therefore be inaccurate and unable to adequately reflect the behaviour of the objective function in the nonconvex case. These circumstances motivate the design of new bundle methods based on nonconvex models of the objective function. In this dissertation, the main focus is on nonsmooth DC optimization, which constitutes an important and broad subclass of nonconvex optimization problems. A DC function can be expressed as a difference of two convex functions. Thus, we can obtain a model that explicitly utilizes both the convexity and concavity of the objective by approximating the convex and concave parts separately. This way we end up with a nonconvex DC model describing the problem more accurately than the convex one. Based on the new DC model we introduce three different bundle methods. Two of them are designed for unconstrained DC optimization, and the third is also capable of solving multiobjective and constrained DC problems. Finite convergence is proved for each method. The numerical results demonstrate the efficiency of the methods and show the benefits obtained from the utilization of the DC decomposition.
Even though the usage of the DC decomposition can improve the performance of bundle methods, it is not always available or possible to construct. Thus, we present another bundle method for a general objective function that implicitly collects information about the DC structure. This method is developed for large-scale nonsmooth optimization, and its convergence is proved for semismooth functions. The efficiency of the method is shown with numerical results. As an application of the developed methods, we consider clusterwise linear regression (CLR) problems. By applying the support vector machines (SVM) approach, a new model for these problems is proposed. The objective in the new formulation of the CLR problem is expressed as a DC function, and a method based on one of the presented bundle methods is designed to solve it. Numerical results demonstrate the robustness of the new approach to outliers.
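The modeling idea behind bundle methods can be made concrete with a small sketch (our own illustration, not the dissertation's algorithm): a convex function is underestimated by the pointwise maximum of the linearizations collected in the bundle.

```python
# Illustrative sketch of the core bundle-model idea (not the
# dissertation's code): a convex function is approximated from below
# by the maximum of its linearizations ("cutting planes"), built from
# a bundle of (point, value, subgradient) triples.

def cutting_plane_model(bundle, x):
    """Piecewise-linear lower model  m(x) = max_i f(y_i) + g_i*(x - y_i)."""
    return max(fy + g * (x - y) for y, fy, g in bundle)

# Convex part f(x) = x**2 with subgradient f'(y) = 2*y
f = lambda x: x * x
bundle = [(y, f(y), 2.0 * y) for y in (-1.0, 0.0, 1.5)]

# The model never overestimates the convex function it approximates
for x in (-0.5, 0.25, 1.0):
    assert cutting_plane_model(bundle, x) <= f(x)
```

In a DC model of the kind described above, the convex part is approximated by such a lower model while the concave part is linearized at the current iterate, so the resulting model is itself DC and can reflect nonconvexity that a purely convex model cannot.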

    Procedures for the Establishment of Standards. Final Report. Vol.2

    This final report summarizes two years of research on analyzing procedures for the establishment of standards. The research was sponsored by the Volkswagenwerk Foundation and jointly carried out at the International Institute for Applied Systems Analysis at Laxenburg and the Kernforschungszentrum Karlsruhe. The final report is meant to be both a problem-oriented review of related work in the area of environmental standard setting and an executive summary of the main research done during the contract period. The following eleven technical papers (Volume II of the Final Report) are reference reports written to accompany Volume I. They describe in more detail the studies performed and the findings obtained under the contract, and they have either been published as IIASA Research Memoranda or as outside publications, or were written especially for this report. These technical reports are structured in four parts: (1) policy analyses of standard setting procedures; (2) decision and game theoretic models for standard setting; (3) applications of decision and game theoretic models to specific standard setting problems; and (4) biological basis for standard setting.

    Proceedings of the 10th Japanese-Hungarian Symposium on Discrete Mathematics and Its Applications


    Conflicting Objectives in Decisions

    This book deals with quantitative approaches to making decisions when conflicting objectives are present. This problem is central to many applications of decision analysis, policy analysis, operational research, etc., in a wide range of fields, for example business, economics, engineering, psychology, and planning. The book surveys different approaches to the same problem area, and each approach is discussed in considerable detail, so that the coverage of the book is both broad and deep. The problem of conflicting objectives is of paramount importance in both planned and market economies, and this book represents a cross-cultural mixture of approaches from many countries to the same class of problems.

    Eighth International Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI at ECAI 2020)

    Proceedings of the 8th International Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI 2020), co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, August 29, 2020.