
    An inexact conic bundle variant suited to column generation

    Final version to appear in Mathematical Programming, available at www.springerlink.com, DOI 10.1007/s10107-007-0187-4.
    We give a bundle method for constrained convex optimization. Instead of using penalty functions, it shifts iterates towards feasibility by way of a Slater point, assumed to be known. In addition, the method accepts an oracle delivering function and subgradient values with unknown accuracy. Our approach is motivated by a number of applications in column generation, in which constraints are positively homogeneous -- so that 0 is a natural Slater point -- and an exact oracle may be time-consuming. Finally, our convergence analysis employs arguments that have so far seen little use in the bundle community. The method is illustrated on a number of cutting-stock problems.
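    As a loose illustration of the feasibility shift described above (a minimal sketch under our own assumptions, not the paper's algorithm), an infeasible iterate can be pulled along the segment towards a known Slater point until the constraint holds; with positively homogeneous constraints the origin serves as that point. The function names and the bisection rule below are illustrative.

        import numpy as np

        def shift_to_feasible(x, g, x_slater, tol=1e-8):
            # Pull an infeasible point x towards a strictly feasible Slater point x_slater
            # until the constraint value g(.) <= 0.  Bisection on the segment [x, x_slater];
            # a simple stand-in for the shift used in the paper, not the authors' exact rule.
            if g(x) <= 0:
                return x
            lo, hi = 0.0, 1.0          # t = 0 gives x (infeasible), t = 1 gives x_slater (strictly feasible)
            while hi - lo > tol:
                t = 0.5 * (lo + hi)
                if g((1 - t) * x + t * x_slater) <= 0:
                    hi = t             # feasible: try to stay closer to the original iterate
                else:
                    lo = t
            return (1 - hi) * x + hi * x_slater

        # Toy column-generation-like constraint: A @ x is positively homogeneous, so the
        # origin is strictly feasible for max(A @ x) - 1 <= 0.
        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        g = lambda x: np.max(A @ x) - 1.0
        x_feasible = shift_to_feasible(np.array([1.0, 1.0]), g, np.zeros(2))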

    A Stabilized Structured Dantzig-Wolfe Decomposition Method

    We discuss an algorithmic scheme, which we call the stabilized structured Dantzig-Wolfe decomposition method, for solving large-scale structured linear programs. It can be applied when the subproblem of the standard Dantzig-Wolfe approach admits an alternative master model amenable to column generation, other than the standard one in which there is a variable for each extreme point and extreme ray of the corresponding polyhedron. Stabilization is achieved with the same techniques developed for the standard Dantzig-Wolfe approach, and it is equally effective at improving performance, as shown by computational results obtained on an application to the multicommodity capacitated network design problem.
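    One common way to write the stabilization mentioned above is to add a proximal term that keeps the master duals close to a stability centre. The sketch below uses our own notation and a quadratic stabilizing term (the paper's stabilizing terms may differ), for the Lagrangian dual of min c^T x subject to Ax = b, x in X, with B the set of columns generated so far:

        \hat{\theta}_{\mathcal B}(\pi) \;=\; \pi^{\top} b \;+\; \min_{j\in\mathcal B}\,(c - A^{\top}\pi)^{\top} x_j ,
        \qquad
        \pi_{k+1} \;\in\; \arg\max_{\pi}\ \hat{\theta}_{\mathcal B}(\pi) \;-\; \tfrac{1}{2t}\,\lVert \pi-\bar{\pi} \rVert_2^2 ,

    where \bar{\pi} is the current stability centre and t > 0 controls the strength of the stabilization; in the primal master this corresponds to extra penalized slack variables.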

    On solving large-scale multistage stochastic problems with a new specialized interior-point approach

    A novel approach based on a specialized interior-point method (IPM) is presented for solving large-scale stochastic multistage continuous optimization problems, which represent the uncertainty in strategic multistage and operational two-stage scenario trees, the latter being rooted at the strategic nodes. This new solution approach considers a split-variable formulation of the strategic and operational structures, for which copies are made of the strategic nodes and the structures are rooted in the form of nested strategic-operational two-stage trees. The specialized IPM solves the normal equations of the problem’s Newton system by combining Cholesky factorizations with preconditioned conjugate gradients, doing so for, respectively, the constraints of the stochastic formulation and those that equate the split-variables. We show that, for multistage stochastic problems, the preconditioner (i) is a block-diagonal matrix composed of as many shifted tridiagonal matrices as the number of nested strategic-operational two-stage trees, thus allowing the efficient solution of systems of equations; and (ii) has a complexity in a multistage stochastic problem equivalent to that of a very large-scale two-stage problem. A broad computational experience is reported for large multistage stochastic supply network design (SND) and revenue management (RM) problems; the mathematical structures vary greatly between those two application types. Some of the most difficult instances of SND had 5 stages, 839 million variables, 13 million quadratic variables, 21 million constraints, and 3750 scenario tree nodes; those of RM had 8 stages, 278 million variables, 100 million constraints, and 100,000 scenario tree nodes. For those problems, the proposed approach obtained the solution in 2.3 days using 167 gigabytes of memory for SND, and in 1.7 days using 83 gigabytes for RM; the state-of-the-art solver CPLEX v20.1 required more than 24 days and 526 gigabytes for SND, and more than 19 days and 410 gigabytes for RM.
    Peer reviewed. Preprint.
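    The inner linear-algebra step can be pictured with a generic preconditioned conjugate gradient routine whose preconditioner solve decomposes into independent per-block solves, mirroring the block-diagonal structure described above. This is only a schematic sketch (the blocks below are dense stand-ins, not the paper's shifted tridiagonal factors), not the authors' implementation.

        import numpy as np

        def pcg(matvec, b, prec_solve, tol=1e-8, max_iter=500):
            # Preconditioned conjugate gradients for A x = b, with A symmetric positive definite.
            # matvec(v) -> A @ v ; prec_solve(r) -> M^{-1} r for the preconditioner M.
            x = np.zeros_like(b)
            r = b - matvec(x)
            z = prec_solve(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = matvec(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                    break
                z = prec_solve(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        def block_diagonal_prec(blocks):
            # Preconditioner solve for a block-diagonal M: one independent solve per block,
            # echoing "as many (shifted tridiagonal) blocks as nested two-stage trees".
            def solve(r):
                out, start = np.empty_like(r), 0
                for B in blocks:
                    n = B.shape[0]
                    out[start:start + n] = np.linalg.solve(B, r[start:start + n])
                    start += n
                return out
            return solve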

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in overview form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    On solving large-scale multistage stochastic optimization problems with a new specialized interior-point approach

    © 2023 The Authors. Published by Elsevier B.V.
    A novel approach based on a specialized interior-point method (IPM) is presented for solving large-scale stochastic multistage continuous optimization problems, which represent the uncertainty in strategic multistage and operational two-stage scenario trees. This new solution approach considers a split-variable formulation of the strategic and operational structures. The specialized IPM solves the normal equations by combining Cholesky factorizations with preconditioned conjugate gradients, doing so for, respectively, the constraints of the stochastic formulation and those that equate the split-variables. We show that, for multistage stochastic problems, the preconditioner (i) is a block-diagonal matrix composed of as many shifted tridiagonal matrices as the number of nested strategic-operational two-stage trees, thus allowing the efficient solution of systems of equations; and (ii) has a complexity in a multistage stochastic problem equivalent to that of a very large-scale two-stage problem. A broad computational experience is reported for large multistage stochastic supply network design (SND) and revenue management (RM) problems. Some of the most difficult instances of SND had 5 stages, 839 million linear variables, 13 million quadratic variables, 21 million constraints, and 3750 scenario tree nodes; those of RM had 8 stages, 278 million linear variables, 100 million constraints, and 100,000 scenario tree nodes. For those problems, the proposed approach obtained the solution in 1.1 days using 174 gigabytes of memory for SND, and in 1.7 days using 83 gigabytes for RM; CPLEX v20.1 required more than 53 days and 531 gigabytes for SND, and more than 19 days and 410 gigabytes for RM.
    J. Castro was supported by the MCIN/AEI/FEDER grant RTI2018-097580-B-I00. L.E. Escudero was supported by the MCIN/AEI/10.13039/501100011033 grant PID2021-122640OB-I00. J.F. Monge was supported by the MCIN/AEI/10.13039/501100011033/ERDF grants PID2019-105952GB-I00 and PID2021-122344NB-I00, and by grant PROMETEO/2021/063 funded by the government of the Valencian Community, Spain.
    Peer reviewed. Postprint (published version).
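    A schematic way to write the split-variable reformulation referred to above (our notation, with equal weights on the copies; the paper's formulation carries the full multistage tree structure): each strategic decision x is copied once per nested strategic-operational two-stage tree i = 1, ..., m, and the copies are forced to agree:

        \min_{x^{(1)},\dots,x^{(m)},\,y^{(1)},\dots,y^{(m)}}\ \sum_{i=1}^{m}\Big( \tfrac{1}{m}\, c^{\top} x^{(i)} + q_i^{\top} y^{(i)} \Big)
        \quad\text{s.t.}\quad (x^{(i)}, y^{(i)}) \in \mathcal F_i,\ i = 1,\dots,m,
        \qquad x^{(i)} = x^{(i+1)},\ i = 1,\dots,m-1 ,

    where y^{(i)} collects the operational variables of tree i, \mathcal F_i its constraints, and the last block comprises the linking equalities whose contribution to the normal equations is handled by the preconditioned conjugate gradient.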

    Bundle methods in nonsmooth DC optimization

    Due to the complexity of many practical applications, we encounter optimization problems with nonsmooth functions, that is, functions which are not continuously differentiable everywhere. Classical gradient-based methods are not applicable to such problems, since they may fail in the nonsmooth setting. Therefore, it is imperative to develop numerical methods specifically designed for nonsmooth optimization. To date, bundle methods are considered to be the most efficient and reliable general-purpose solvers for this type of problem. The idea in bundle methods is to approximate the subdifferential of the objective function by a bundle of subgradients. This information is then used to build a model for the objective. However, this model is typically convex and, due to this, it may be inaccurate and unable to adequately reflect the behaviour of the objective function in the nonconvex case. These circumstances motivate the design of new bundle methods based on nonconvex models of the objective function. In this dissertation, the main focus is on nonsmooth DC optimization, which constitutes an important and broad subclass of nonconvex optimization problems. A DC function can be represented as a difference of two convex functions. Thus, we can obtain a model that utilizes explicitly both the convexity and concavity of the objective by approximating separately the convex and concave parts. This way we end up with a nonconvex DC model describing the problem more accurately than the convex one. Based on the new DC model we introduce three different bundle methods. Two of them are designed for unconstrained DC optimization, and the third is also capable of solving multiobjective and constrained DC problems. Finite convergence is proved for each method. The numerical results demonstrate the efficiency of the methods and show the benefits obtained from the utilization of the DC decomposition. Even though the usage of the DC decomposition can improve the performance of the bundle methods, it is not always available or possible to construct. Thus, we present another bundle method for a general objective function that implicitly collects information about the DC structure. This method is developed for large-scale nonsmooth optimization and its convergence is proved for semismooth functions. The efficiency of the method is shown with numerical results. As an application of the developed methods, we consider the clusterwise linear regression (CLR) problems. By applying the support vector machines (SVM) approach, a new model for these problems is proposed. The objective in the new formulation of the CLR problem is expressed as a DC function, and a method based on one of the presented bundle methods is designed to solve it. Numerical results demonstrate the robustness of the new approach to outliers.

    In many practical applications the problem under consideration is complex and must therefore be modelled with nonsmooth functions, which are not necessarily continuously differentiable everywhere. Classical gradient-based optimization methods cannot be applied to nonsmooth problems, since nonsmooth functions do not have a classical gradient everywhere. Hence it is necessary to develop dedicated numerical methods for nonsmooth optimization. Among these, bundle methods are currently considered the most efficient and reliable general-purpose solvers for such problems. The idea in bundle methods is to approximate the subdifferential of the objective function by a bundle formed by collecting subgradients of the objective from previous iterations. Using this information, a model of the objective function is built which is easier to solve than the original problem. The model used is typically convex and may therefore be inaccurate and unable to capture the structure of the original problem in the nonconvex case. For this reason, the dissertation focuses on developing new bundle methods that build a nonconvex model of the objective function in the modelling phase. The main emphasis of the dissertation is on nonsmooth optimization problems in which the functions can be expressed as a difference of two convex functions. Such functions are called DC functions, and they form an important and broad subclass of nonconvex functions. This choice makes it possible to exploit explicitly both the convexity and the concavity of the objective, since the new model of the objective is formed by combining separate models built for the convex and the concave part. In this way one arrives at a nonconvex DC model that describes the problem to be solved more accurately than a convex approximation. The dissertation presents three different bundle methods developed on the basis of the new DC model, and their convergence is proved. Two of these methods are designed for unconstrained DC optimization, and the third can also solve multiobjective and constrained DC problems. Numerical results illustrate the efficiency of the methods and the benefits obtained from using the DC decomposition. Although a DC decomposition can improve the performance of bundle methods, it is not always available or possible to construct. For this reason, the dissertation also presents a fourth bundle method, with its convergence proof, for a general objective function; this method implicitly collects information about the DC structure of the objective. The method is developed especially for large-scale nonsmooth optimization problems, and its efficiency is demonstrated by numerical tests. As an application, the dissertation considers clusterwise linear regression. A new model for this application is formulated using the support vector machines (SVM) approach known from machine learning, and the resulting objective function is expressed as a DC function. Accordingly, one of the developed bundle methods is applied to solve the problem. Numerical results illustrate the robustness and efficiency of the new approach.
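    A toy sketch of the DC modelling idea described above (our own simplification, not the dissertation's algorithms): keep separate cutting-plane bundles for the convex components f1 and f2 of f = f1 - f2 and combine them into a piecewise-linear, generally nonconvex model.

        import numpy as np

        class CuttingPlaneModel:
            # Piecewise-linear model of a convex function built from (point, value, subgradient) cuts.
            def __init__(self):
                self.cuts = []

            def add_cut(self, y, fy, gy):
                self.cuts.append((np.asarray(y), fy, np.asarray(gy)))

            def __call__(self, x):
                x = np.asarray(x)
                return max(fy + g @ (x - y) for y, fy, g in self.cuts)

        def dc_model(model_f1, model_f2, x):
            # Combine the two convex cutting-plane models into a DC (difference) model of f = f1 - f2.
            return model_f1(x) - model_f2(x)

        # Example: f(x) = |x| - x**2 / 2, i.e. f1 = |x| and f2 = x**2 / 2, both convex.
        m1, m2 = CuttingPlaneModel(), CuttingPlaneModel()
        for y in (-1.0, 0.5, 2.0):
            m1.add_cut([y], abs(y), [np.sign(y)])      # subgradient of |x| at y (y != 0 here)
            m2.add_cut([y], 0.5 * y**2, [y])           # gradient of x**2 / 2 at y
        value = dc_model(m1, m2, [1.0])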

    On alternating direction methods for monotropic semidefinite programming

    Doctor of Philosophy (Ph.D.) thesis.

    Sectoral portfolio optimization by judicious selection of financial ratios via PCA

    Embedding value investment in portfolio optimization models has always been a challenge. In this paper, we attempt to incorporate it by first employing principal component analysis (PCA) sector-wise to filter out dominant financial ratios from each sector and thereafter using a portfolio optimization model incorporating second-order stochastic dominance (SSD) criteria to derive the final optimal investment. We consider a total of 11 well-known financial ratios for each sector, representing four categories of ratios, namely liquidity, solvency, profitability, and valuation. PCA is then applied sector-wise over a period of 10 years from April 2004 to March 2014 to extract dominant ratios from each sector in two ways, one from the component solution and the other from each category on the basis of their communalities. The two-step Sectoral Portfolio Optimization (SPO) model integrating the SSD criteria in its constraints is then used to build an optimal portfolio. The strategy formed using the former extracted ratios is termed PCA-SPO(A) and the latter PCA-SPO(B). The results obtained from the proposed strategies are compared with the SPO model and with two nominal SSD models, with and without financial ratios, in a computational study. Empirical performance of the proposed strategies is assessed over the period of 6 years from April 2014 to March 2020 using a rolling-window scheme with out-of-sample horizons of 3, 6, 9, 12 and 24 months for the S&P BSE 500 market. We observe that the proposed strategy PCA-SPO(B) outperforms all other models in terms of downside deviation, CVaR, VaR, and the Sortino, Rachev, and STARR ratios over almost all out-of-sample periods. This highlights the importance of value investment where ratios are carefully selected and embedded quantitatively in the portfolio selection process.
    Comment: 26 pages, 12 tables
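    The sector-wise selection step can be sketched as below (an illustrative outline only: the variance target and the scoring are our assumptions, and the SSD-constrained SPO step is not shown). Communalities are computed from loadings, i.e. components scaled by the square roots of their explained variances.

        import numpy as np
        import pandas as pd
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        def dominant_ratios_by_communality(ratios, var_target=0.80):
            # ratios: DataFrame for one sector, rows = firm-period observations,
            # columns = the financial ratios; returns the ratios ranked by communality.
            X = StandardScaler().fit_transform(ratios.values)
            pca = PCA().fit(X)
            # keep just enough components to explain var_target of the variance
            keep = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_target)) + 1
            loadings = pca.components_[:keep].T * np.sqrt(pca.explained_variance_[:keep])
            communality = pd.Series((loadings ** 2).sum(axis=1), index=ratios.columns)
            return communality.sort_values(ascending=False)

        # Applied sector by sector over the in-sample window; the top-ranked ratios
        # (overall, or per ratio category) would then feed the SPO model.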

    Modelling and solution methods for stochastic optimisation

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
    In this thesis we consider two research problems, namely, (i) language constructs for modelling stochastic programming (SP) problems and (ii) solution methods for processing instances of different classes of SP problems. We first describe a new design of an SP modelling system which provides greater extensibility and reuse. We implement this enhanced system and develop solver connections. We also investigate in detail the following important classes of SP problems: single-stage SP problems with risk constraints, and two-stage linear and stochastic integer programming problems. We report improvements to solution methods for single-stage problems with second-order stochastic dominance constraints and for two-stage SP problems. In both cases we use the level method as a regularisation mechanism. We also develop novel heuristic methods for stochastic integer programming based on variable neighbourhood search. We describe an algorithmic framework for implementing decomposition methods such as the L-shaped method within our SP solver system. Based on this framework we implement a number of established solution algorithms as well as a new regularisation method for stochastic linear programming. We compare the performance of these methods and their scale-up properties on an extensive set of benchmark problems. We also implement several solution methods for stochastic integer programming and report a computational study comparing their performance. The three solution methods, (a) processing of a single-stage problem with second-order stochastic dominance constraints, (b) regularisation by the level method for two-stage SP, and (c) a method for solving integer SP problems, are novel approaches, and each of these makes a contribution to knowledge.
    Financial support was obtained from OptiRisk Systems.
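    The level-method regularisation mentioned above can be stated schematically as follows (the textbook form of the method, in our notation; the thesis's variants for SSD-constrained and two-stage problems differ in detail). With \hat f_k the current cutting-plane model over the feasible set X, f_best^k the best objective value found so far, and \lambda \in (0,1):

        f^{k}_{\mathrm{low}} \;=\; \min_{x\in X} \hat f_k(x), \qquad
        \ell_k \;=\; f^{k}_{\mathrm{low}} + \lambda\,\big(f^{k}_{\mathrm{best}} - f^{k}_{\mathrm{low}}\big),
        \qquad
        x_{k+1} \;=\; \arg\min_{x \in X}\ \lVert x - x_k \rVert_2^2 \ \ \text{s.t.}\ \hat f_k(x) \le \ell_k ,

    so each iterate is the projection of the previous one onto a level set of the model, which is the regularisation effect exploited in the thesis.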