
    Diversification-based learning in computing and optimization

    Diversification-based learning (DBL) derives from a collection of principles and methods introduced in the field of metaheuristics that have broad applications in computing and optimization. We show that the DBL framework goes significantly beyond that of the more recent opposition-based learning (OBL) framework introduced by Tizhoosh (in: Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation, and the International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA/IAWTIC-2005), pp 695–701, 2005), which has become the focus of numerous research initiatives in machine learning and metaheuristic optimization. We unify and extend earlier proposals in metaheuristic search (Glover, in: Hao J-K, Lutton E, Ronald E, Schoenauer M, Snyers D (eds) Artificial Evolution, Lecture Notes in Computer Science, vol 1363, Springer, Berlin, pp 13–54, 1997; Glover and Laguna, Tabu Search, Springer, Berlin, 1997) to give a collection of approaches that are more flexible and comprehensive than OBL for creating intensification and diversification strategies in metaheuristic search. We also describe potential applications of DBL to various subfields of machine learning and optimization.
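
    Not part of the abstract above: the minimal Python sketch below contrasts OBL's standard opposite-point rule (the opposite of x in [a, b] is a + b - x, per Tizhoosh, 2005) with a simple distance-maximizing diversification step, used here only as a stand-in for the broader DBL strategies the paper discusses. The function names and the candidate-sampling scheme are illustrative assumptions, not the paper's method.

        import random

        def opposite_point(x, lower, upper):
            """Opposition-based learning (Tizhoosh, 2005): the opposite of x in [lower, upper]."""
            return [lo + hi - xi for xi, lo, hi in zip(x, lower, upper)]

        def diversified_point(visited, lower, upper, candidates=50):
            """Illustrative diversification step (not the paper's DBL method):
            sample several random points and keep the one farthest from all
            previously visited solutions."""
            def min_dist(p):
                return min(sum((a - b) ** 2 for a, b in zip(p, v)) for v in visited)
            pool = [[random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
                    for _ in range(candidates)]
            return max(pool, key=min_dist)

        # Toy usage on a 2-D box.
        lower, upper = [0.0, 0.0], [10.0, 10.0]
        x = [2.0, 7.5]
        print(opposite_point(x, lower, upper))        # -> [8.0, 2.5]
        print(diversified_point([x], lower, upper))   # a point far from x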

    Genetic Transfer or Population Diversification? Deciphering the Secret Ingredients of Evolutionary Multitask Optimization

    Evolutionary multitasking has recently emerged as a novel paradigm that enables the similarities and/or latent complementarities (if present) between distinct optimization tasks to be exploited in an autonomous manner, simply by solving them together with a unified solution representation scheme. An important matter underpinning future algorithmic advancements is to develop a better understanding of the driving force behind successful multitask problem-solving. In this regard, two (seemingly disparate) ideas have been put forward, namely, (a) implicit genetic transfer as the key ingredient facilitating the exchange of high-quality genetic material across tasks, and (b) population diversification resulting in effective global search of the unified search space encompassing all tasks. In this paper, we present some empirical results that provide a clearer picture of the relationship between the two aforementioned propositions. For the numerical experiments we use Sudoku puzzles as case studies, mainly because outwardly unlike puzzle statements can often have nearly identical final solutions. The experiments reveal that while on many occasions genetic transfer and population diversity may be viewed as two sides of the same coin, the wider implication of genetic transfer, as shall be shown herein, captures the true essence of evolutionary multitasking to the fullest.
    Comment: 7 pages, 6 figures
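
    As a rough illustration of the unified-representation idea described above (not the authors' Sudoku experiments), the Python sketch below encodes individuals for two toy continuous tasks in a common [0, 1]^D space, assigns each individual a single task, and lets uniform crossover between parents from different tasks act as implicit genetic transfer. The toy tasks, the replacement rule, and all names are assumptions made for illustration.

        import random

        D = 10  # dimensionality of the unified search space (assumed)

        # Two toy tasks defined over the same unified [0, 1]^D encoding.
        tasks = [
            lambda x: sum((xi - 0.2) ** 2 for xi in x),   # task 0: minimum near 0.2
            lambda x: sum((xi - 0.8) ** 2 for xi in x),   # task 1: minimum near 0.8
        ]

        def random_individual():
            return {"genes": [random.random() for _ in range(D)],
                    "task": random.randrange(len(tasks))}

        def crossover(p1, p2):
            """Uniform crossover in the unified space; when parents belong to
            different tasks, material adapted to one task leaks into the other
            (implicit genetic transfer)."""
            child_genes = [g1 if random.random() < 0.5 else g2
                           for g1, g2 in zip(p1["genes"], p2["genes"])]
            return {"genes": child_genes,
                    "task": random.choice([p1["task"], p2["task"]])}

        pop = [random_individual() for _ in range(40)]
        for _ in range(200):
            p1, p2 = random.sample(pop, 2)
            child = crossover(p1, p2)
            # Evaluate the child only on its assigned task and keep it if it
            # beats the worst current member of that task's subpopulation.
            worst = max((ind for ind in pop if ind["task"] == child["task"]),
                        key=lambda ind: tasks[ind["task"]](ind["genes"]),
                        default=None)
            if worst and tasks[child["task"]](child["genes"]) < tasks[worst["task"]](worst["genes"]):
                pop[pop.index(worst)] = child

        for t in range(len(tasks)):
            members = [ind for ind in pop if ind["task"] == t]
            if members:
                best = min(members, key=lambda ind: tasks[t](ind["genes"]))
                print(f"best fitness on task {t}: {tasks[t](best['genes']):.4f}")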

    Basic Enhancement Strategies When Using Bayesian Optimization for Hyperparameter Tuning of Deep Neural Networks

    Compared to traditional machine learning models, deep neural networks (DNN) are known to be highly sensitive to the choice of hyperparameters. While the time and effort required for manual tuning have been rapidly decreasing for well-developed and commonly used DNN architectures, DNN hyperparameter optimization will undoubtedly continue to be a major burden whenever a new DNN architecture needs to be designed, a new task needs to be solved, a new dataset needs to be addressed, or an existing DNN needs to be improved further. For hyperparameter optimization of general machine learning problems, numerous automated solutions have been developed, some of the most popular of which are based on Bayesian Optimization (BO). In this work, we analyze four fundamental strategies for enhancing BO when it is used for DNN hyperparameter optimization. Specifically, diversification, early termination, parallelization, and cost function transformation are investigated. Based on the analysis, we provide a simple yet robust algorithm for DNN hyperparameter optimization, DEEP-BO (Diversified, Early-termination-Enabled, and Parallel Bayesian Optimization). When evaluated over six DNN benchmarks, DEEP-BO mostly outperformed well-known solutions including GP-Hedge, BOHB, and the speed-up variants that use the Median Stopping Rule or Learning Curve Extrapolation. In fact, DEEP-BO consistently provided the top, or at least close to the top, performance over all the benchmark types that we tested. This indicates that DEEP-BO is a robust solution compared to the existing solutions. The DEEP-BO code is publicly available at https://github.com/snu-adsl/DEEP-BO.
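
    The actual implementation lives in the linked repository; the toy Python sketch below (not DEEP-BO's code) illustrates two of the four strategies on synthetic learning curves: early termination via a Median-Stopping-Rule-style check, and a log transformation of the cost before it would be handed to a surrogate model. The helper names and the synthetic curves are assumptions for illustration; diversification and parallelization are not shown.

        import math
        import random

        def median_stopping_rule(curve, histories, min_epochs=4):
            """Early termination in the style of the Median Stopping Rule (a
            simplification, not DEEP-BO's implementation): stop a trial whose best
            loss so far is worse than the median best loss of earlier trials at the
            same epoch."""
            epoch = len(curve)
            if epoch < min_epochs or not histories:
                return False
            peers = [min(h[:epoch]) for h in histories if len(h) >= epoch]
            if not peers:
                return False
            peers.sort()
            return min(curve) > peers[len(peers) // 2]

        def transformed_cost(validation_loss):
            """Cost-function transformation: model the log-loss so the surrogate is
            less distorted by a few very bad configurations."""
            return math.log(validation_loss + 1e-12)

        # Synthetic learning curves stand in for DNN training runs.
        def fake_learning_curve(lr, epochs=20):
            base = abs(math.log10(lr) + 2.5)          # best around lr = 3e-3
            return [base + 1.0 / (e + 1) + 0.05 * random.random() for e in range(epochs)]

        histories = []
        for _ in range(10):
            lr = 10 ** random.uniform(-5, -1)          # sampled hyperparameter
            curve = []
            for epoch_loss in fake_learning_curve(lr):
                curve.append(epoch_loss)
                if median_stopping_rule(curve, histories):
                    break                              # terminate unpromising trial early
            histories.append(curve)
            print(f"lr={lr:.1e} epochs={len(curve):2d} "
                  f"best={min(curve):.3f} surrogate_target={transformed_cost(min(curve)):.3f}")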