17 research outputs found

    Nonsmooth and derivative-free optimization based hybrid methods and applications

    "In this thesis, we develop hybrid methods for solving global and in particular, nonsmooth optimization problems. Hybrid methods are becoming more popular in global optimization since they allow to apply powerful smooth optimization techniques to solve global optimization problems. Such methods are able to efficiently solve global optimization problems with large number of variables. To date global search algorithms have been mainly applied to improve global search properties of the local search methods (including smooth optimization algorithms). In this thesis we apply rather different strategy to design hybrid methods. We use local search algorithms to improve the efficiency of global search methods. The thesis consists of two parts. In the first part we describe hybrid algorithms and in the second part we consider their various applications." -- taken from Abstract.Operational Research and Cybernetic

    Hyperbolic smoothing in nonsmooth optimization and applications

    Nonsmooth nonconvex optimization problems arise in many applications, including economics, business and data mining. In these applications the objective functions are not necessarily differentiable or convex. Many algorithms have been proposed over the past three decades to solve such problems, yet the development of efficient algorithms for this class of problems remains a challenging task. The subgradient method is one of the simplest methods developed for solving these problems; its convergence has been proved only for convex objective functions. This method involves no subproblems, neither for finding search directions nor for computing step lengths, which are fixed in advance. Bundle methods and their various modifications are among the most efficient methods for solving nonsmooth optimization problems. These methods involve a quadratic programming subproblem to find search directions. The size of this subproblem may grow significantly with the number of variables, which makes bundle-type methods unsuitable for large-scale nonsmooth optimization problems. Moreover, the implementation of bundle-type methods, which requires quadratic programming solvers, is not as easy as that of subgradient methods. It is therefore beneficial to develop algorithms for nonsmooth nonconvex optimization that are easy to implement and more efficient than subgradient methods. In this thesis, we develop two new algorithms for solving nonsmooth nonconvex optimization problems based on the hyperbolic smoothing technique and apply them to the pumping cost minimization problem in water distribution. Both algorithms use smoothing techniques. The first algorithm is designed for solving finite minimax problems. In order to apply hyperbolic smoothing we reformulate the objective function of the minimax problem and study the relationship between the original and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results, an algorithm for solving the finite minimax problem is proposed and implemented in GAMS. We present preliminary results of numerical experiments with well-known nonsmooth optimization test problems, and we compare the proposed algorithm with an algorithm that uses the exponential smoothing function as well as with an algorithm based on a nonlinear programming reformulation of the finite minimax problem. The second algorithm demonstrates how smooth optimization methods can be applied to solve general nonsmooth (nonconvex) optimization problems. We compute subgradients from a neighborhood of the current point and define a system of linear inequalities using these subgradients. Search directions are computed by solving this system, which is done by reducing it to the minimization of a convex piecewise linear function over the unit ball. The hyperbolic smoothing function is then applied to approximate this minimization problem by a sequence of smooth problems, which are solved by smooth optimization methods. Such an approach allows one to apply powerful smooth optimization algorithms to nonsmooth optimization problems and extends smoothing techniques to general nonsmooth nonconvex optimization problems. The convergence of the algorithm based on this approach is studied. The proposed algorithm was implemented in Fortran 95. Preliminary results of numerical experiments are reported, and the proposed algorithm is compared with five other nonsmooth optimization algorithms. We also implement the algorithm in GAMS and compare it with GAMS solvers using results of numerical experiments.
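
    The abstract refers to the hyperbolic smoothing function without stating it. For orientation, a standard form of hyperbolic smoothing (the thesis may use a different variant or parametrization) replaces the nondifferentiable plus function by a smooth hyperbola:

        \phi_\tau(t) = \frac{t + \sqrt{t^2 + \tau^2}}{2}, \qquad \tau > 0,

    which is infinitely differentiable and satisfies 0 \le \phi_\tau(t) - \max\{0, t\} \le \tau/2, so the approximation error vanishes uniformly as \tau \to 0. Since \max\{a, b\} = \tfrac{1}{2}(a + b + |a - b|) and |t| can be approximated by \sqrt{t^2 + \tau^2}, the same device yields a smooth approximation of a finite minimax objective F(x) = \max_{1 \le i \le m} f_i(x), which can then be minimized by standard smooth solvers for a decreasing sequence of smoothing parameters \tau.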

    Aggregate subgradient method for nonsmooth DC optimization

    The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and also that the number of null steps between two serious steps is finite. The new method is tested using some academic test problems and compared with several other nonsmooth DC optimization solvers. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature
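
    The abstract does not spell out how the two subgradients are combined into a search direction. One standard construction, shown here only as an illustration and not as the authors' exact rule, takes the minimum-norm point on the segment between the aggregate subgradient and the newly computed one; its negative then serves as a tentative search direction.

        import numpy as np

        def two_subgradient_direction(g_agg, g_new):
            """Minimum-norm convex combination of two subgradients:
            argmin over lam in [0, 1] of ||lam * g_agg + (1 - lam) * g_new||^2.
            Returns a tentative search direction and the updated aggregate."""
            d = g_agg - g_new
            denom = float(d @ d)
            lam = 0.0 if denom == 0.0 else float(np.clip(-(g_new @ d) / denom, 0.0, 1.0))
            g = lam * g_agg + (1.0 - lam) * g_new   # new aggregate subgradient
            return -g, g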

    A descent subgradient method using Mifflin line search for nonsmooth nonconvex optimization

    We propose a descent subgradient algorithm for minimizing a real-valued function that is assumed to be locally Lipschitz, but not necessarily smooth or convex. To find an effective descent direction, the Goldstein subdifferential is approximated through an iterative process. The method employs a new two-point variant of the Mifflin line search in which the subgradients are arbitrary; thus, the line search procedure is easy to implement. Moreover, in comparison to bundle methods, the quadratic subproblems have a simple structure, and no algorithmic modification is required to handle nonconvexity. We study the global convergence of the method and prove that any accumulation point of the generated sequence is Clarke stationary, assuming that the objective f is weakly upper semismooth. We illustrate the efficiency and effectiveness of the proposed algorithm on a collection of academic and semi-academic test problems.
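
    The following sketch is offered only as a rough illustration, not as the paper's actual procedure (which builds its approximation iteratively and couples it with the two-point Mifflin line search). It shows the basic idea of approximating the Goldstein subdifferential: collect subgradients from a small ball around the current point and take the shortest vector in their convex hull, whose negative is a candidate descent direction. The subgradient oracle, sampling rule and inner solver below are assumptions made for the example.

        import numpy as np

        def project_simplex(v):
            """Euclidean projection of v onto the probability simplex."""
            u = np.sort(v)[::-1]
            css = np.cumsum(u)
            rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
            theta = (css[rho] - 1.0) / (rho + 1.0)
            return np.maximum(v - theta, 0.0)

        def min_norm_element(G, iters=500):
            """Shortest vector in the convex hull of the rows of G (m x n),
            computed by projected gradient on the simplex of weights."""
            Q = G @ G.T
            step = 1.0 / max(np.linalg.norm(Q, 2), 1e-12)    # 1 / Lipschitz constant of the gradient
            lam = np.full(G.shape[0], 1.0 / G.shape[0])
            for _ in range(iters):
                lam = project_simplex(lam - step * (Q @ lam))
            return G.T @ lam

        def approx_goldstein_direction(subgrad, x, radius=1e-3, n_samples=10, seed=0):
            """Sample subgradients in a ball of the given radius around x and
            return minus the shortest element of their convex hull."""
            rng = np.random.default_rng(seed)
            pts = x + radius * rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
            G = np.array([subgrad(p) for p in np.vstack([x[None, :], pts])])
            return -min_norm_element(G)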

    Bundle methods in nonsmooth DC optimization

    Due to the complexity of many practical applications, we encounter optimization problems with nonsmooth functions, that is, functions which are not continuously differentiable everywhere. Classical gradient-based methods are not applicable to such problems, since they may fail in the nonsmooth setting. It is therefore imperative to develop numerical methods specifically designed for nonsmooth optimization. To date, bundle methods are considered to be the most efficient and reliable general-purpose solvers for this type of problem. The idea in bundle methods is to approximate the subdifferential of the objective function by a bundle of subgradients. This information is then used to build a model of the objective. However, this model is typically convex and, due to this, it may be inaccurate and unable to adequately reflect the behaviour of the objective function in the nonconvex case. These circumstances motivate the design of new bundle methods based on nonconvex models of the objective function. In this dissertation, the main focus is on nonsmooth DC optimization, which constitutes an important and broad subclass of nonconvex optimization problems. A DC function can be represented as a difference of two convex functions. Thus, we can obtain a model that explicitly utilizes both the convexity and concavity of the objective by approximating the convex and concave parts separately. In this way we end up with a nonconvex DC model describing the problem more accurately than the convex one. Based on the new DC model we introduce three different bundle methods. Two of them are designed for unconstrained DC optimization, and the third one is also capable of solving multiobjective and constrained DC problems. Finite convergence is proved for each method. The numerical results demonstrate the efficiency of the methods and show the benefits obtained from the utilization of the DC decomposition. Even though the usage of the DC decomposition can improve the performance of bundle methods, it is not always available or possible to construct. Thus, we present another bundle method for a general objective function that implicitly collects information about the DC structure. This method is developed for large-scale nonsmooth optimization and its convergence is proved for semismooth functions. The efficiency of the method is shown with numerical results. As an application of the developed methods, we consider clusterwise linear regression (CLR) problems. By applying the support vector machines (SVM) approach, a new model for these problems is proposed. The objective in the new formulation of the CLR problem is expressed as a DC function, and a method based on one of the presented bundle methods is designed to solve it. Numerical results demonstrate the robustness of the new approach to outliers.
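
    The dissertation's DC model is not written out in the abstract. One way such a model is typically realized (stated here as background, with notation that may differ from the dissertation) is to keep separate cutting-plane models for the two convex components and subtract them: for f = f_1 - f_2 with bundle points y_j and subgradients \xi_{1,j} \in \partial f_1(y_j), \xi_{2,j} \in \partial f_2(y_j),

        \hat f_1(x) = \max_{j \in B_1} \{ f_1(y_j) + \xi_{1,j}^T (x - y_j) \},
        \hat f_2(x) = \max_{j \in B_2} \{ f_2(y_j) + \xi_{2,j}^T (x - y_j) \},
        \hat f(x)   = \hat f_1(x) - \hat f_2(x).

    Each piecewise-linear model underestimates its own convex component, so \hat f is in general neither convex nor a global underestimate of f, but it reflects the difference-of-convex structure of the objective more faithfully than a single convex cutting-plane model.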

    A New Subgradient Based Method for Nonsmooth DC Programming

    The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and also that the number of null steps between two serious steps is finite. The new method is tested using some academic test problems and compared with several other nonsmooth DC optimization solvers.