
    Multistart with early termination of descents

    Multistart is a celebrated global optimization technique frequently applied in practice. In its pure form, multistart has low efficiency. However, the simplicity of multistart and the multitude of possibilities for its generalization make it very attractive, especially in high-dimensional problems where, e.g., Lipschitzian and Bayesian algorithms are not applicable. We propose a version of multistart in which most of the local descents are terminated very early; we call it METOD, an abbreviation for multistart with early termination of descents. The performance of the proposed algorithm is demonstrated on randomly generated test functions with 100 variables and a modest number of local minimizers.
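    As a rough illustration of the idea (not the exact METOD termination test), the Python sketch below runs a few cheap gradient steps from each random start and abandons the descent when the partially descended point falls within a fixed radius of an already discovered minimizer. The function names, parameters, and the proximity rule are illustrative assumptions standing in for the condition used in the paper.

        import numpy as np
        from scipy.optimize import minimize

        def multistart_early_termination(f, grad, dim, n_starts=100,
                                         warm_up_steps=5, step=0.01,
                                         radius=0.1, rng=None):
            """Multistart over [0, 1]^dim; descents drifting toward an
            already discovered minimizer are terminated after a few cheap
            warm-up steps (a simplified stand-in for the METOD rule)."""
            rng = np.random.default_rng() if rng is None else rng
            minimizers = []
            for _ in range(n_starts):
                x = rng.uniform(0.0, 1.0, dim)
                # A few inexpensive gradient steps before deciding
                # whether the full local descent is worth running.
                for _ in range(warm_up_steps):
                    x = x - step * grad(x)
                # Early termination: skip the full descent if we appear
                # to be heading toward a known minimizer.
                if any(np.linalg.norm(x - m) < radius for m in minimizers):
                    continue
                res = minimize(f, x, jac=grad, method="L-BFGS-B")
                if not any(np.linalg.norm(res.x - m) < radius
                           for m in minimizers):
                    minimizers.append(res.x)
            return minimizers

    Terminating a descent after a handful of warm-up iterations is what saves work relative to pure multistart: the cost of a full local optimization is paid only for starts that appear to lead to new regions.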

    Global optimization problems in robust statistics

    Robust statistics is a branch of statistics dealing with the analysis of data containing a significant proportion of observations contaminated with errors whose magnitude and structure may be arbitrary. The robustness of an estimator is measured notably by means of the breakdown point. High-breakdown-point estimators are usually defined as global minima of a non-convex scale of the errors, so their computation is a challenging global optimization problem. The objective of this dissertation is to investigate the potential contribution of modern global optimization methods to this class of problems. The first part of the thesis is devoted to the tau-estimator for robust linear regression, which is defined as a global minimum of a non-convex differentiable function. We investigate the impact of incorporating clustering techniques and stopping conditions in existing stochastic algorithms. The consequences for these clustering global optimization algorithms of certain nearest-neighbor phenomena in high dimension are also discussed. The second part is devoted to deterministic algorithms for computing the least trimmed squares regression estimator, which is defined through a nonlinear mixed-integer program. Due to the combinatorial nature of this problem, we concentrated on obtaining lower bounds to be used in a branch-and-bound algorithm. In particular, we propose a second-order cone programming relaxation that can be strengthened with concavity cuts, for which we give explicit expressions. Global optimality conditions are also provided.
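    Since the least trimmed squares (LTS) estimator minimizes the sum of the h smallest squared residuals over both the coefficient vector and the choice of h observations, a brute-force Python sketch makes its combinatorial nature concrete. The code below only illustrates the definition; the subset enumeration is infeasible beyond tiny data sets, which is precisely why the thesis pursues branch-and-bound with conic lower bounds. Function names and parameters here are illustrative assumptions.

        import numpy as np
        from itertools import combinations

        def lts_objective(beta, X, y, h):
            """LTS criterion: sum of the h smallest squared residuals."""
            r2 = (y - X @ beta) ** 2
            return np.sort(r2)[:h].sum()

        def lts_by_enumeration(X, y, h):
            """Exact LTS by enumerating all size-h subsets and fitting
            ordinary least squares on each; O(C(n, h)) subproblems, so
            only usable for very small n."""
            n = X.shape[0]
            best_val, best_beta = np.inf, None
            for idx in combinations(range(n), h):
                idx = list(idx)
                beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
                val = ((y[idx] - X[idx] @ beta) ** 2).sum()
                if val < best_val:
                    best_val, best_beta = val, beta
            return best_beta, best_val

    The inner least-squares fit is exact for a fixed subset; the hardness comes entirely from the outer choice among C(n, h) subsets, the part a branch-and-bound scheme prunes with lower bounds.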

    Efficient global optimization: analysis, generalizations and extensions


    Techniques for high-dimensional global optimization and response surface methodology

    This thesis aims to improve the efficiency and accuracy of optimization algorithms. High-dimensional optimization problems are frequently encountered in many practical situations due to advancements in technology and the availability of big data. Moreover, the analytical form of a black-box objective function is unknown, which adds to the challenge of high-dimensional optimization. Multistart is a celebrated global optimization algorithm that involves sampling points at random from the feasible domain and applying a local optimization algorithm to find the corresponding local minimizer. The main drawback of multistart is its low efficiency, since the same local minimizers may be found repeatedly. A vital research contribution in this thesis improves the efficiency of multistart for high-dimensional optimization problems by reducing the number of local searches that lead to the same local minimizers. Ensuring local optimization methods are reliable and accurate when only objective function values containing errors are available is an important area of research. Specifically, the central focus is on the first phase of the Box-Wilson (BW) algorithm, a response surface methodology (RSM) strategy. The first phase of BW consists of performing a succession of moves toward a subregion containing the minimizer. A significant research contribution in this thesis enhances the accuracy of the first phase of BW, and of RSM in general, for high-dimensional optimization problems by employing a different choice of search direction; a sketch of the standard first phase follows below. Producing high-quality software is vital to ensure accurate research investigations and to allow other researchers to apply the software, which is an additional research contribution demonstrated in this thesis. Furthermore, increasingly complex high-dimensional optimization problems are encountered in various areas of machine learning. Therefore, the development of advanced optimization methods is essential to the progression of many machine learning algorithms. Consequently, the final research contribution in this thesis outlines potential enhancements to optimization methods applied within various areas of machine learning.
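    The standard first phase of Box-Wilson RSM mentioned above can be sketched as follows: around the current point, fit a first-order model to noisy responses observed on a small factorial design, then step along the estimated steepest-descent direction. The Python sketch below is a minimal illustration under assumed choices (a 2^k factorial design, a fixed step length, and a hypothetical noisy oracle f_noisy); it shows the classical search direction, not the modified direction developed in the thesis.

        import numpy as np

        def rsm_first_phase(f_noisy, x0, delta=0.5, step=0.2,
                            n_moves=20):
            """Box-Wilson-style first phase: fit a first-order response
            surface on a 2^k factorial design around the current point
            and move along the estimated steepest-descent direction.
            The 2^k design limits this sketch to small dimension k."""
            x = np.asarray(x0, dtype=float)
            k = x.size
            # All 2^k combinations of +/-1 design levels.
            levels = np.array(np.meshgrid(*[[-1.0, 1.0]] * k)).reshape(k, -1).T
            for _ in range(n_moves):
                pts = x + delta * levels
                ys = np.array([f_noisy(p) for p in pts])
                # Least-squares fit of y ~ b0 + b^T (p - x);
                # b estimates the local gradient from noisy responses.
                A = np.hstack([np.ones((pts.shape[0], 1)), pts - x])
                coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
                b = coef[1:]
                norm = np.linalg.norm(b)
                if norm < 1e-8:   # fitted surface is flat: end the phase
                    break
                x = x - step * b / norm
            return x

    Because the design size grows as 2^k, this classical scheme degrades in high dimension, which motivates the alternative search directions investigated in the thesis.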