
    Explainable Benchmarking for Iterative Optimization Heuristics

    Benchmarking heuristic algorithms is vital to understanding under which conditions and on what kinds of problems certain algorithms perform well. In most current research into heuristic optimization algorithms, only a very limited number of scenarios, algorithm configurations and hyper-parameter settings are explored, leading to incomplete and often biased insights and results. This paper presents a novel approach we call explainable benchmarking, introducing the IOH-Xplainer software framework for analyzing and understanding the performance of various optimization algorithms and the impact of their different components and hyper-parameters. We showcase the framework in the context of two modular optimization frameworks, examining the impact of different algorithmic components and configurations and offering insights into their performance across diverse scenarios. We provide a systematic method for evaluating and interpreting the behaviour and efficiency of iterative optimization heuristics in a more transparent and comprehensible manner, allowing for better benchmarking and algorithm design. (Comment: submitted to ACM TEL)
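    The kind of analysis described above can be approximated with a generic surrogate-model workflow: fit a regressor that predicts performance from configuration options, then rank the options by importance. A minimal sketch in Python, assuming a hypothetical `benchmark_runs.csv` of per-run records; this is not the actual IOH-Xplainer API:

```python
# Sketch of surrogate-based performance attribution in the spirit of
# explainable benchmarking; NOT the IOH-Xplainer API. Assumes a hypothetical
# CSV with one row per benchmark run.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Hypothetical columns: modular components / hyper-parameters of the
# algorithm, plus the performance reached within the evaluation budget.
FEATURES = ["mutation", "crossover", "population_size", "elitism", "dimension"]
TARGET = "final_fitness"

df = pd.read_csv("benchmark_runs.csv")   # assumed data file
X = pd.get_dummies(df[FEATURES])         # one-hot encode categorical components
y = df[TARGET]

# Fit a surrogate that predicts performance from the configuration ...
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# ... then ask which components and hyper-parameters drive that prediction.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20s}: {score:.4f}")
```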

    ACO with automatic parameter selection for a scheduling problem with a group cumulative constraint

    We consider a resource-constrained project scheduling problem (RCPSP), the goal of which is to schedule jobs on machines in order to minimise job tardiness. This problem comes from a real industrial application, and it requires an additional constraint which is a generalisation of the classical cumulative constraint: jobs are partitioned into groups, and the number of active groups must never exceed a given capacity (where a group is active when some of its jobs have started while some others are not yet completed). We first study the complexity of this new constraint. Then, we describe an Ant Colony Optimisation algorithm to solve our problem, and we compare three different pheromone structures for it. We study the influence of parameters on the solving process, and show that it varies from one instance to another. Hence, we identify a subset of parameter settings with complementary strengths and weaknesses, and we use a per-instance algorithm selector to choose the best setting for each new instance to solve. We experimentally compare our approach with a tabu search approach and an exact approach on a data set coming from our industrial application.
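    The per-instance selection step can be illustrated with a generic machine-learning sketch: train a classifier mapping instance features to the best portfolio setting, then query it for each new instance. The features, setting labels, and training data below are invented for illustration, not the paper's actual setup:

```python
# Sketch of a per-instance algorithm selector: learn a mapping from instance
# features to the best-performing parameter setting in a small portfolio.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical instance features: (number of jobs, number of groups,
# group capacity, due-date tightness), each labelled with the portfolio
# setting that achieved the lowest tardiness on that training instance.
X_train = np.array([[50, 5, 3, 0.2],
                    [200, 20, 8, 0.7],
                    [120, 10, 4, 0.5],
                    [60, 6, 3, 0.3]])
y_train = np.array(["alpha_low", "balanced", "alpha_high", "alpha_low"])

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(X_train, y_train)

def solve(instance_features):
    """Select a parameter setting for a new instance, then run ACO with it."""
    setting = selector.predict(np.array([instance_features]))[0]
    print(f"selected portfolio setting: {setting}")
    # run_aco(instance, setting)   # hypothetical solver call, omitted here

solve([80, 8, 4, 0.3])
```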

    Tuning optimization algorithms under multiple objective function evaluation budgets

    Most sensitivity analysis studies of optimization algorithm control parameters are restricted to a single objective function evaluation (OFE) budget. This restriction is problematic because the optimality of control parameter values depends not only on the problem's fitness landscape, but also on the OFE budget available to explore that landscape. The OFE budget therefore needs to be taken into consideration when tuning control parameters. This article presents a new algorithm (tMOPSO) for tuning the control parameter values of stochastic optimization algorithms under a range of OFE budget constraints. Specifically, for a given problem tMOPSO aims to determine multiple groups of control parameter values, each of which results in optimal performance at a different OFE budget. To achieve this, the control parameter tuning problem is formulated as a multi-objective optimization problem. Additionally, tMOPSO uses a noise-handling strategy and a control parameter value assessment procedure which are specialized for tuning stochastic optimization algorithms. Numerical experiments provide evidence that tMOPSO is effective at tuning under multiple OFE budget constraints.
    Funding: National Research Foundation (NRF) of South Africa.
    http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4235
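    The bi-objective formulation can be made concrete with a small sketch: treat each tuning evaluation as a point (OFE budget, solution quality), both to be minimised, and keep the non-dominated points; each point on the resulting front names the setting preferable at that budget. The settings and numbers below are invented for illustration, not tMOPSO itself:

```python
# Bi-objective view of budget-aware tuning: a point is
# (OFE budget, best fitness found), both minimised, tagged with the
# control-parameter setting that produced it.
def dominates(q, p):
    """q dominates p: no worse in both objectives, strictly better in one."""
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

def pareto_front(points):
    """Keep only the non-dominated (budget, fitness, setting) points."""
    return sorted(p for p in points if not any(dominates(q, p) for q in points))

results = [(1_000, 5.2, "w=0.9"), (1_000, 3.1, "w=0.4"),
           (10_000, 1.8, "w=0.9"), (10_000, 2.5, "w=0.4")]

# A high-exploration setting (w=0.9) only pays off at the larger budget:
for budget, fitness, setting in pareto_front(results):
    print(f"budget {budget:>6}: prefer {setting} (fitness {fitness})")
```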

    Parallel bio-inspired methods for model optimization and pattern recognition

    Nature-based computational models are usually inherently parallel. The collaborative intelligence in those models emerges from the simultaneous instruction processing by simple independent units (neurons, ants, swarm members, etc.). This dissertation investigates the benefits of such parallel models in terms of efficiency and accuracy. First, the viability of a parallel implementation of bio-inspired metaheuristics for function optimization on consumer-level graphics cards is studied in detail. Then, in an effort to expose those parallel methods to the research community, the metaheuristic implementations were abstracted and grouped in an open-source parameter/function optimization library, libCudaOptimize. The library was verified against a well-known benchmark for mathematical function minimization, and showed significant gains in both execution time and minimization accuracy. Crossing more into the application side, a parallel model of the human neocortex was developed. This model is able to detect, classify, and predict patterns in time-series data in an unsupervised way. Finally, libCudaOptimize was used to find the best parameters for this neocortex model, adapting it to gesture recognition on publicly available datasets.
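    The data-parallel structure such GPU implementations exploit can be sketched on the CPU with NumPy: the whole population is a matrix, and one vectorised expression updates every particle at once, the same pattern a CUDA kernel would distribute over threads. This is an illustrative sketch, not libCudaOptimize's actual C++/CUDA interface:

```python
# Vectorised PSO step over a whole swarm at once: the per-particle update a
# GPU would run in parallel threads, expressed as NumPy array operations.
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 64, 10
w, c1, c2 = 0.7, 1.5, 1.5                 # standard PSO coefficients

def sphere(X):                            # benchmark objective, minimised
    return np.sum(X**2, axis=1)

X = rng.uniform(-5, 5, (n_particles, dim))  # positions: one row per particle
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), sphere(X)        # personal bests
gbest = pbest[np.argmin(pbest_f)]           # global best

for _ in range(200):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w*V + c1*r1*(pbest - X) + c2*r2*(gbest - X)  # whole swarm in one line
    X = X + V
    f = sphere(X)
    improved = f < pbest_f                  # boolean mask, again data-parallel
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best value found:", pbest_f.min())
```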

    Automatic Configuration of Multi-Objective ACO Algorithms

    M. Dorigo, M. Birattari, G. A. Di Caro, R. Doursat, A. P. Engelbrecht, D. Floreano, L. M. Gambardella, R. Groß, E. Şahin, H. Sayama, and T. Stützle, editors. Swarm Intelligence, 7th International Conference, ANTS 2010, Springer, Heidelberg, Germany.

    Reacting and Adapting to the Environment: Designing Autonomous Methods for Multi-Objective Combinatorial Optimisation

    Large-scale optimisation problems are usually hard to solve optimally. Approximation algorithms such as metaheuristics, able to quickly find sub-optimal solutions, are often preferred. This thesis focuses on multi-objective local search (MOLS) algorithms, metaheuristics able to deal with the simultaneous optimisation of multiple criteria. Like many algorithms, metaheuristics expose many parameters that significantly impact their performance. These parameters can either be predicted and set before the execution of the algorithm, or dynamically modified during the execution itself. While in the last decade many advances have been made in the automatic design of algorithms, the great majority of them deal only with single-objective algorithms and the optimisation of a single performance indicator, such as the algorithm's running time or the final solution quality. In this thesis, we investigate the relations between automatic algorithm design and multi-objective optimisation, with an application to MOLS algorithms. We first review possible MOLS strategies and parameters and present a general, highly configurable MOLS framework. We also propose MO-ParamILS, an automatic configurator specifically designed to deal with multiple performance indicators. Then, we conduct several studies on the automatic offline design of MOLS algorithms on multiple combinatorial bi-objective problems. Finally, we discuss two online extensions of classical algorithm configuration: first, the integration of parameter control mechanisms, to benefit from having multiple configuration predictions; then, the use of configuration schedules, to sequentially use multiple configurations.
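    The key departure from single-indicator configurators is dominance-based acceptance: a candidate configuration is compared on a vector of performance indicators and kept in an archive of mutually non-dominated configurations. A minimal sketch of that archive update, with invented configuration names and indicator values (both minimised); this is not the actual MO-ParamILS implementation:

```python
# Dominance-based archive update at the heart of a multi-objective
# configurator: keep configurations whose indicator vectors, e.g.
# (mean runtime, solution-quality gap), are mutually non-dominated.
def dominates(a, b):
    """a dominates b: no worse on every indicator, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, config, indicators):
    """Insert a candidate unless dominated; evict anything it dominates."""
    if any(dominates(ind, indicators) for _, ind in archive):
        return archive                       # candidate rejected
    return [(c, ind) for c, ind in archive
            if not dominates(indicators, ind)] + [(config, indicators)]

archive = []
for cfg, ind in [("cfg-A", (12.0, 0.30)), ("cfg-B", (8.0, 0.45)),
                 ("cfg-C", (9.0, 0.25)), ("cfg-D", (15.0, 0.50))]:
    archive = update_archive(archive, cfg, ind)

print(archive)   # cfg-B and cfg-C survive; cfg-A and cfg-D are dominated
```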