
    An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics

    Optimization methods, and metaheuristics in particular, must be constantly improved in order to reduce execution times, improve results, and thereby address broader instances. Addressing combinatorial optimization problems is critical in the areas of operational research and engineering. In this work, a perturbation operator based on the k-nearest neighbors (KNN) technique is proposed and studied with the aim of improving the diversification and intensification properties of metaheuristic algorithms in their binary versions. Random operators are designed to study the contribution of the perturbation operator. To verify the proposal, large instances of the well-known set covering problem are studied. Box plots, convergence charts, and the Wilcoxon statistical test are used to determine the operator's contribution. Furthermore, a comparison is made with metaheuristic techniques that use general binarization mechanisms, such as transfer functions or db-scan, as binarization methods. The results obtained indicate that the KNN perturbation operator significantly improves the results. The first author was supported by Grant CONICYT/FONDECYT/INICIACION/11180056.

    García, J.; Astorga, G.; Yepes, V. (2021). An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics. Mathematics, 9(3), 1-20. https://doi.org/10.3390/math9030225
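    The abstract does not detail the operator itself, but a minimal sketch of how a KNN-style perturbation of binary solutions could work is given below. The Hamming-distance neighbourhood, the majority flip rule, and the flip_prob parameter are illustrative assumptions, not the operator proposed in the paper.

```python
# Hypothetical sketch of a KNN-style perturbation operator for binary solutions.
# The neighbourhood and flip rules are illustrative assumptions only.
import numpy as np

def knn_perturbation(solution, population, k=5, flip_prob=0.1, rng=None):
    """Perturb a binary solution using its k nearest neighbours (Hamming distance)."""
    rng = np.random.default_rng() if rng is None else rng
    # Hamming distances to every solution in the population
    distances = np.sum(population != solution, axis=1)
    neighbours = population[np.argsort(distances)[:k]]
    majority = (neighbours.mean(axis=0) >= 0.5).astype(int)

    perturbed = solution.copy()
    disagree = perturbed != majority
    # Intensification: move disagreeing bits towards the neighbourhood majority
    perturbed[disagree] = majority[disagree]
    # Diversification: flip a few random bits to avoid premature convergence
    random_flips = rng.random(len(perturbed)) < flip_prob
    perturbed[random_flips] = 1 - perturbed[random_flips]
    return perturbed

# Example usage on a toy population of 20 binary solutions of length 50
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, 50))
new_solution = knn_perturbation(pop[0], pop, k=5, rng=rng)
```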

    Transfer learning for operator selection: A reinforcement learning approach

    In the past two decades, metaheuristic optimisation algorithms (MOAs) have become increasingly popular, particularly for logistics, science, and engineering problems. A fundamental characteristic of such algorithms is that they depend on parameters or strategies, and online and offline strategies are employed to obtain optimal configurations of the algorithms. Adaptive operator selection is one such strategy: it determines whether or not to update a strategy from the strategy pool during the search process. In the field of machine learning, Reinforcement Learning (RL) refers to goal-oriented algorithms that learn from the environment how to achieve a goal. In MOAs, reinforcement learning has been used to control the operator selection process. However, existing research has not shown whether learned information can be transferred from one problem-solving procedure to another. The primary goal of the proposed research is to determine the impact of transfer learning on RL and MOAs. As a test problem, a set union knapsack problem with 30 separate benchmark instances is used, and the results are statistically compared in depth. According to the findings, the learning process improved the convergence speed while significantly reducing the CPU time.
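    A minimal sketch of the mechanism described above is given below, assuming an epsilon-greedy Q-value selector over a small operator pool whose learned values are reused as a warm start on a second instance. The operator names, reward signal, and update rule are illustrative assumptions, not the study's actual design.

```python
# Illustrative sketch (not the paper's implementation): an epsilon-greedy
# Q-learning operator selector whose learned Q-values can be transferred
# to a new problem instance as a warm start.
import random

class OperatorSelector:
    def __init__(self, operators, epsilon=0.1, alpha=0.1, q_init=None):
        self.operators = operators
        self.epsilon = epsilon          # exploration rate
        self.alpha = alpha              # learning rate
        # Transfer learning: reuse Q-values learned on a previous instance
        self.q = dict(q_init) if q_init else {op: 0.0 for op in operators}

    def select(self):
        # Epsilon-greedy choice over the operator pool
        if random.random() < self.epsilon:
            return random.choice(self.operators)
        return max(self.operators, key=lambda op: self.q[op])

    def update(self, op, reward):
        # Incremental Q-value update from the observed improvement (reward)
        self.q[op] += self.alpha * (reward - self.q[op])

# Train on instance A, then transfer the learned values to instance B
ops = ["flip_one_bit", "swap_items", "knn_perturbation"]
selector_a = OperatorSelector(ops)
for _ in range(100):
    op = selector_a.select()
    reward = random.random()            # stand-in for the fitness improvement
    selector_a.update(op, reward)

selector_b = OperatorSelector(ops, q_init=selector_a.q)  # warm start on a new instance
```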

    Efficient tuning in supervised machine learning

    The tuning of learning algorithm parameters has become more and more important in recent years. With the fast growth of computational power and available memory, databases have grown dramatically. This is very challenging for the tuning of parameters arising in machine learning, since training can become very time-consuming for large datasets. For this reason, efficient tuning methods are required that are able to improve the predictions of the learning algorithms. In this thesis we incorporate model-assisted optimization techniques for performing efficient optimization on noisy datasets with very limited budgets. Under this umbrella we also combine learning algorithms with methods for feature construction and selection. We propose to integrate a variety of elements into the learning process. For example, can tuning be helpful in learning tasks like time series regression using state-of-the-art machine learning algorithms? Are statistical methods capable of reducing noise effects? Can surrogate models like Kriging learn a reasonable mapping of the parameter landscape to the quality measures, or are they deteriorated by disturbing factors? Summarizing all these parts, we analyze whether superior learning algorithms can be created, with a special focus on efficient runtimes. Besides the advantages of systematic tuning approaches, we also highlight possible obstacles and issues of tuning. Different tuning methods are compared and the impact of their features is exposed. It is a goal of this work to give users insights into applying state-of-the-art learning algorithms profitably in practice.

    Bundesministerium für Bildung und Forschung (Germany); Cologne University of Applied Sciences (Germany); Kind-Steinmüller-Stiftung (Gummersbach, Germany). Algorithms and the Foundations of Software Technology.
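    A minimal sketch of model-assisted (Kriging-based) tuning under a small evaluation budget is shown below, assuming a scikit-learn Gaussian process surrogate, an SVR regularisation parameter, and a toy dataset; it is not the tuning framework developed in the thesis.

```python
# Sketch of surrogate-assisted tuning: fit a Gaussian process (Kriging) to the
# evaluated parameter/error pairs and pick the next candidate with a simple
# lower-confidence-bound rule. Learner, parameter, and data are assumptions.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = load_diabetes(return_X_y=True)

def objective(log_c):
    """Noisy, expensive objective: cross-validated error of an SVR for a given log10(C)."""
    model = SVR(C=10 ** log_c)
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Initial design: a few evaluated points on log10(C) in [-2, 3]
rng = np.random.default_rng(1)
params = list(rng.uniform(-2, 3, size=4))
errors = [objective(p) for p in params]

budget = 10
for _ in range(budget):
    gp = GaussianProcessRegressor(normalize_y=True).fit(
        np.array(params).reshape(-1, 1), errors
    )
    candidates = np.linspace(-2, 3, 200).reshape(-1, 1)
    mean, std = gp.predict(candidates, return_std=True)
    # Prefer low predicted error and high uncertainty (exploration vs. exploitation)
    best = candidates[np.argmin(mean - std)][0]
    params.append(best)
    errors.append(objective(best))

print("best log10(C):", params[int(np.argmin(errors))])
```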