
    Subsampling Algorithms for Semidefinite Programming

    We derive a stochastic gradient algorithm for semidefinite optimization using randomization techniques. The algorithm uses subsampling to reduce the computational cost of each iteration, and the subsampling ratio explicitly controls granularity, i.e. the tradeoff between the cost per iteration and the total number of iterations. Furthermore, the total computational cost is directly proportional to the complexity (i.e. the rank) of the solution. We study numerical performance on some large-scale problems arising in statistical learning. (Final version, to appear in Stochastic Systems.)
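
    The abstract does not spell out the update rule, so the following is only a generic sketch of the core idea, assuming a projected stochastic-gradient scheme over the PSD cone in which the gradient of a finite-sum objective is estimated from a random subsample; all names (`grads`, `ratio`, `subsampled_sgd`) are hypothetical, and the paper's actual randomization may differ.

```python
import numpy as np

def project_psd(X):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def subsampled_sgd(grads, n, dim, ratio=0.1, steps=200, lr=0.05, seed=0):
    """Minimize (1/n) * sum_i f_i(X) over PSD matrices.

    grads(i, X) returns the gradient of the i-th term; `ratio` is the
    subsampling ratio trading cost per iteration against iteration count.
    """
    rng = np.random.default_rng(seed)
    X = np.eye(dim)
    m = max(1, int(ratio * n))                    # subsample size
    for _ in range(steps):
        batch = rng.choice(n, size=m, replace=False)
        G = sum(grads(i, X) for i in batch) / m   # unbiased gradient estimate
        X = project_psd(X - lr * G)               # projected stochastic step
    return X
```

    The subsampling ratio plays the role described in the abstract: smaller batches make each iteration cheaper but typically require more iterations.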

    An Effective Optimization Algorithm for Locally Nonconvex Lipschitz Functions Based on Mollifier Subgradients

    We present an effective algorithm for the minimization of locally nonconvex Lipschitz functions based on mollifier functions approximating the Clarke generalized gradient. To this aim, we first approximate the Clarke generalized gradient by mollifier subgradients. To construct this approximation, we use a set of gradients of averaged functions and show that the convex hull of this set serves as a good approximation of the Clarke generalized gradient. Using this approximation, we establish an algorithm for the minimization of locally Lipschitz functions. Based on the mollifier subgradient approximation, we propose a dynamic algorithm for finding a direction satisfying the Armijo condition without requiring many subgradient evaluations. We prove that the search direction procedure terminates after finitely many iterations and show how to reduce the objective function value along the obtained search direction. We also prove that the first-order optimality conditions are satisfied at any accumulation point of the sequence constructed by the algorithm. Finally, we implement our algorithm in MATLAB, approximating the gradients of averaged functions by the Monte Carlo method. The numerical results show that our algorithm is more efficient and more robust than the gradient sampling (GS) algorithm, currently perceived to be a competitive algorithm for the minimization of nonconvex Lipschitz functions.
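
    As a rough illustration of the approach described above, the sketch below approximates the Clarke generalized gradient by Monte Carlo gradients of a mollified (averaged) function and backtracks to satisfy the Armijo condition. It is a simplification under stated assumptions: the paper works with the convex hull of the sampled gradients and a dynamic direction-finding procedure, whereas this sketch uses their mean as a stand-in; all function names are hypothetical.

```python
import numpy as np

def mc_mollifier_subgradients(grad, x, radius=1e-2, n_samples=30, rng=None):
    """Monte Carlo gradients of an averaged (mollified) function: gradients
    of f at random points near x approximate the Clarke generalized
    gradient; their convex hull is the working set."""
    rng = rng or np.random.default_rng(0)
    pts = x + radius * rng.standard_normal((n_samples, x.size))
    return np.array([grad(p) for p in pts])

def armijo_step(f, x, d, g_dot_d, beta=0.5, sigma=1e-4, t0=1.0):
    """Backtracking line search enforcing the Armijo condition."""
    t = t0
    while f(x + t * d) > f(x) + sigma * t * g_dot_d:
        t *= beta
        if t < 1e-12:
            break
    return t

def mollifier_descent(f, grad, x0, iters=100):
    x = x0.copy()
    for _ in range(iters):
        G = mc_mollifier_subgradients(grad, x)
        g = G.mean(axis=0)        # stand-in for the min-norm convex-hull element
        if np.linalg.norm(g) < 1e-8:
            break                 # approximate stationarity
        d = -g
        t = armijo_step(f, x, d, g @ d)
        x = x + t * d
    return x
```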

    Uniform exponential convergence of sample average random functions under general sampling with applications in stochastic programming

    Sample average approximation (SAA) is one of the most popular methods for solving stochastic optimization and equilibrium problems. Research on SAA has mostly focused on the case where sampling is independent and identically distributed (iid), with a few exceptions (Dai et al. (2000) [9]; Homem-de-Mello (2008) [16]). In this paper we study SAA with general sampling (including iid and non-iid sampling) for solving nonsmooth stochastic optimization problems, stochastic Nash equilibrium problems, and stochastic generalized equations. To this end, we first derive the uniform exponential convergence of the sample average of a class of lower semicontinuous random functions and then apply it to a nonsmooth stochastic minimization problem. Exponential convergence of estimators of both optimal solutions and M-stationary points (characterized by Mordukhovich limiting subgradients (Mordukhovich (2006) [23], Rockafellar and Wets (1998) [32])) is established under mild conditions. We also use the uniform convergence result to establish the exponential rate of convergence of statistical estimators of a stochastic Nash equilibrium problem and of estimators of the solutions to a stochastic generalized equation problem.
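
    For concreteness, a minimal sketch of SAA itself (not of the exponential-convergence analysis): draw a sample once, replace the expectation by the empirical mean, and minimize the resulting deterministic function. A derivative-free solver is used since the random functions may be nonsmooth, and `sampler` is indexed so that non-iid (e.g. dependent) sampling fits the same interface; names and solver choice are illustrative only and suit low-dimensional toy problems.

```python
import numpy as np
from scipy.optimize import minimize

def saa_solve(F, sampler, x0, n_samples=500):
    """Sample average approximation: draw a (possibly non-iid) sample
    xi_1..xi_N once, then minimize the empirical mean of F(x, xi)."""
    xis = [sampler(k) for k in range(n_samples)]   # sampler may be a dependent process
    def f_hat(x):
        return np.mean([F(x, xi) for xi in xis])
    # Nelder-Mead: derivative-free, since F may be nonsmooth in x
    return minimize(f_hat, x0, method="Nelder-Mead").x
```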

    Bundle methods in nonsmooth DC optimization

    Due to the complexity of many practical applications, we encounter optimization problems with nonsmooth functions, that is, functions which are not continuously differentiable everywhere. Classical gradient-based methods are not applicable to such problems, since they may fail in the nonsmooth setting. It is therefore imperative to develop numerical methods specifically designed for nonsmooth optimization. To date, bundle methods are considered the most efficient and reliable general-purpose solvers for this type of problem. The idea in bundle methods is to approximate the subdifferential of the objective function by a bundle of subgradients, which is then used to build a model of the objective. However, this model is typically convex and may therefore be inaccurate and unable to adequately reflect the behaviour of the objective function in the nonconvex case. These circumstances motivate the design of new bundle methods based on nonconvex models of the objective function.

    In this dissertation, the main focus is on nonsmooth DC optimization, an important and broad subclass of nonconvex optimization problems. A DC function can be represented as a difference of two convex functions. Thus, we can obtain a model that explicitly utilizes both the convexity and the concavity of the objective by approximating the convex and concave parts separately. This way we end up with a nonconvex DC model describing the problem more accurately than a convex one. Based on the new DC model, we introduce three different bundle methods. Two of them are designed for unconstrained DC optimization, and the third is also capable of solving multiobjective and constrained DC problems. Finite convergence is proved for each method. The numerical results demonstrate the efficiency of the methods and the benefits obtained from utilizing the DC decomposition.

    Even though the DC decomposition can improve the performance of bundle methods, it is not always available or possible to construct. We therefore present another bundle method for a general objective function that implicitly collects information about the DC structure. This method is developed for large-scale nonsmooth optimization, and its convergence is proved for semismooth functions. The efficiency of the method is shown with numerical results.

    As an application of the developed methods, we consider clusterwise linear regression (CLR) problems. By applying the support vector machine (SVM) approach, a new model for these problems is proposed. The objective in the new formulation of the CLR problem is expressed as a DC function, and a method based on one of the presented bundle methods is designed to solve it. Numerical results demonstrate the robustness of the new approach to outliers.
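
    The dissertation's bundle methods build piecewise-linear models of both convex components; those are too involved to reproduce here, but a minimal DCA-style iteration conveys how the DC structure f = f1 - f2 is exploited: linearize the concave part at the current iterate and minimize the resulting convex surrogate. This is a well-known simpler relative of the methods described above, shown only for orientation, and it assumes smooth convex components; all names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def dca(f1, g1, g2, x0, iters=50, tol=1e-8):
    """DCA-style iteration for f = f1 - f2 with f1, f2 convex:
    linearize the concave part -f2 at x_k via a subgradient of f2,
    then minimize the resulting convex surrogate f1(x) - <g2(x_k), x>."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        s = g2(x)                                  # (sub)gradient of f2 at x_k
        surrogate = lambda y: f1(y) - s @ y        # convex majorant of f (up to a constant)
        x_new = minimize(surrogate, x, jac=lambda y: g1(y) - s).x
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```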

    Descent algorithm for nonsmooth stochastic multiobjective optimization

    An algorithm for solving the expectation formulation of stochastic nonsmooth multiobjective optimization problems is proposed. The method is an extension of the classical stochastic gradient algorithm to multiobjective optimization, using the properties of a common descent vector defined in the deterministic context. Both the mean-square and the almost-sure convergence of the algorithm are proven. The algorithm's efficiency is illustrated and assessed on an academic example.
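
    In the two-objective case the common descent vector has a closed form: it is the negative of the minimum-norm element of the convex hull of the two gradients. The sketch below combines this with a stochastic-gradient loop using diminishing steps, as one plausible reading of the abstract; the paper's exact step-size rule and the general m-objective subproblem are not reproduced, and all names are hypothetical.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negative min-norm element of conv{g1, g2}: minimize
    ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1] (closed form for m = 2)."""
    diff = g1 - g2
    denom = diff @ diff
    if denom < 1e-16:
        lam = 0.5                                       # gradients (nearly) coincide
    else:
        lam = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return -(lam * g1 + (1.0 - lam) * g2)

def stochastic_multi_gradient(sample_grads, x0, steps=500, lr0=0.5, seed=0):
    """SGD-like loop: at each step draw one sample of both objective
    gradients and move along their common descent direction."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g1, g2 = sample_grads(x, rng)                   # noisy gradients of the two objectives
        x = x + (lr0 / k) * common_descent_direction(g1, g2)  # diminishing steps
    return x
```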

    A multiobjective optimization based approach for RBDO

    In this paper we present a novel algorithm for solving multiobjective design optimization problems of a sandwich plate when the objective functions are not smooth and uncertainty is introduced into the material properties. The algorithm is based on the existence of a common descent vector for each sample of the random objective functions and on an extension of the stochastic gradient algorithm. We show that a chance-constrained optimization problem, such as an RBDO problem, can be written as a multiobjective optimization problem. Chance-constrained optimization yields optimal designs for a fixed, given probability level for the constraint. In real-life problems, however, it is not realistic to impose a given probability level, because it is not known in advance. It is more informative to solve the problem over a whole range of probability levels, in order to obtain an overview of how the probability level appearing in the constraint affects the solution. We show in this paper how to transform a chance-constrained optimization problem into a multiobjective optimization problem, and we illustrate the transformation on simple examples.
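
    The abstract does not state the transformation explicitly; one natural reading, consistent with solving over a whole range of probability levels, is to move the constraint's probability into the objective vector, so that the Pareto front traces optimal designs across all probability levels:

```latex
% One natural reading of the chance-constraint-to-multiobjective transformation
% (the abstract does not give the exact formulation):
\min_{x}\; f(x) \quad \text{s.t.} \quad \mathbb{P}\big[g(x,\xi) \le 0\big] \ge p
\qquad \Longrightarrow \qquad
\min_{x}\; \Big( f(x),\; \mathbb{P}\big[g(x,\xi) > 0\big] \Big)
```

    Each point on the Pareto front of the right-hand problem then corresponds to an optimal design of the chance-constrained problem for some probability level p.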

    A decomposition algorithm for two-stage stochastic programs with nonconvex recourse

    In this paper, we study a decomposition method for solving a class of nonconvex two-stage stochastic programs, where both the objective and the constraints of the second-stage problem are nonlinearly parameterized by the first-stage variable. Because the resulting nonconvex recourse function fails to be Clarke regular, classical decomposition approaches such as Benders decomposition and (augmented) Lagrangian-based algorithms cannot be directly generalized to such models. By exploiting an implicitly convex-concave structure of the recourse function, we introduce a novel decomposition framework based on the so-called partial Moreau envelope. The algorithm successively generates strongly convex quadratic approximations of the recourse function from the solutions of the second-stage convex subproblems and adds them to the first-stage master problem. Convergence is established under both fixed scenarios and interior sampling. Numerical experiments demonstrate the effectiveness of the proposed algorithm.
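
    The partial Moreau envelope construction is not detailed in the abstract, so the following is only a toy cutting-model loop in its spirit: at each iterate the expected recourse value is approximated by a strongly convex quadratic cut, and the master problem is re-solved with the cut added. The finite-difference slope stands in for the envelope's gradient, which the paper computes from second-stage solutions; every name here is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def fd_grad(f, x, h=1e-6):
    """Forward-difference gradient, standing in for the gradient of the
    partial Moreau envelope obtained from second-stage solutions."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def decomposition_loop(c, Q, x0, rho=1.0, iters=20):
    """Toy first-stage loop: at each iterate, build a strongly convex
    quadratic cut of the expected recourse Q and re-solve the master
    problem min_x c @ x + max_k cut_k(x) with the new cut added."""
    x = np.asarray(x0, dtype=float)
    cuts = []
    for _ in range(iters):
        v, g = Q(x), fd_grad(Q, x)
        cuts.append((x.copy(), v, g))          # quadratic cut centered at x
        def master(y):
            model = max(v_k + g_k @ (y - x_k) + 0.5 * rho * np.sum((y - x_k) ** 2)
                        for x_k, v_k, g_k in cuts)
            return c @ y + model
        x = minimize(master, x, method="Nelder-Mead").x
    return x
```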