
    Q-Learnheuristics: towards data-driven balanced metaheuristics

    One of the central issues that must be resolved for a metaheuristic optimization process to perform well is the dilemma of balancing exploration and exploitation. Metaheuristics (MH) that achieve this balance can be called balanced MH. In earlier work, a Q-Learning (QL) integration framework was proposed to select the metaheuristic operators conducive to this balance, in particular the binarization schemes used when a continuous metaheuristic solves binary combinatorial problems. In this work the framework is extended to other recent metaheuristics, demonstrating that integrating QL into operator selection improves the exploration-exploitation balance. Specifically, the Whale Optimization Algorithm and the Sine-Cosine Algorithm are tested on the Set Covering Problem, showing statistical improvements both in this balance and in the quality of the solutions.
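    The abstract does not detail the QL integration, so the following is a minimal, hypothetical sketch of the general idea: a tabular Q-learning agent whose actions are binarization schemes, queried each iteration by the continuous metaheuristic. The scheme names, the coarse two-phase state space, the reward signal, and the hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import random

# Hypothetical binarization schemes (transfer function + discretization rule);
# the names and the two-phase state space are illustrative assumptions.
SCHEMES = ["S1-standard", "S2-complement", "V1-standard", "V3-elitist"]
STATES = ["exploration", "exploitation"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # typical tabular Q-learning settings

# Q-table indexed as Q[state][scheme]
Q = {s: {a: 0.0 for a in SCHEMES} for s in STATES}

def select_scheme(state):
    """Epsilon-greedy choice of the binarization scheme for this iteration."""
    if random.random() < EPSILON:
        return random.choice(SCHEMES)
    return max(Q[state], key=Q[state].get)

def q_update(state, scheme, reward, next_state):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][scheme] += ALPHA * (reward + GAMMA * best_next - Q[state][scheme])

# One sketched iteration of the wrapping metaheuristic (placeholder signals):
state = "exploration"
scheme = select_scheme(state)
# ... binarize the continuous positions with `scheme`, evaluate SCP solutions ...
improved = True                          # placeholder: did the best fitness improve?
q_update(state, scheme, 1.0 if improved else -1.0, next_state="exploitation")
```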

    Challenging the Limits of Binarization: A New Scheme Selection Policy Using Reinforcement Learning Techniques for Binary Combinatorial Problem Solving

    In this study, we introduce an innovative policy in the field of reinforcement learning, designed as an action selection mechanism and applied here as a selector for binarization schemes. These schemes enable continuous metaheuristics to be applied to binary problems, opening new paths in combinatorial optimization. To evaluate its efficacy, we implemented the policy within our BSS framework, which integrates a variety of reinforcement learning and metaheuristic techniques. After solving 45 instances of the Set Covering Problem, our results demonstrate that reinforcement learning can play a crucial role in enhancing the binarization techniques employed. The policy significantly outperformed traditional methods in precision and efficiency, and also proved extensible and adaptable to other techniques and similar problems, which could have important implications for a wide range of real-world applications. This study underscores the philosophy behind our approach: using reinforcement learning not as an end in itself, but as a powerful tool for solving binary combinatorial problems, emphasizing its practical applicability and its potential to transform the way we address complex challenges across various fields.
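    The abstract does not name the new selection policy, so the snippet below only illustrates the kind of mechanism involved: a softmax (Boltzmann) selection over Q-values, a standard alternative to epsilon-greedy, with hypothetical scheme names. It is a generic sketch, not the policy proposed in the paper.

```python
import math
import random

def softmax_select(q_values, tau=0.5):
    """Boltzmann/softmax action selection: better-valued actions are picked
    more often, but every action keeps a nonzero probability; tau controls
    how greedy the choice is (smaller tau -> greedier)."""
    actions = list(q_values)
    weights = [math.exp(q_values[a] / tau) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

# Hypothetical Q-values for four binarization schemes:
q = {"S1": 0.8, "S2": 0.1, "V1": 0.4, "V3": 0.3}
print(softmax_select(q, tau=0.25))   # usually "S1", occasionally the others
```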

    Swarm-Inspired Computing to Solve Binary Optimization Problems: A Backward Q-Learning Binarization Scheme Selector

    In recent years, continuous metaheuristics have become a trend for solving binary-based combinatorial problems due to their good results. However, to use this type of metaheuristic, it is necessary to adapt it to work in binary environments, and in general this adaptation is not trivial. The method proposed in this work evaluates the use of reinforcement learning techniques in the binarization process. Specifically, the backward Q-learning technique is explored to choose binarization schemes intelligently, which allows any continuous metaheuristic to be adapted to binary environments. The results obtained are competitive, providing a novel option for addressing different complex problems in industry.
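    As a rough illustration of the "backward" idea only (the paper's exact procedure is not given in the abstract), one common formulation stores the transitions of an episode and replays them in reverse order with standard Q-learning updates, so reward information propagates quickly to earlier decisions. The states, actions, and rewards below are toy assumptions.

```python
ALPHA, GAMMA = 0.1, 0.9

def q_update(Q, s, a, r, s_next):
    """Standard Q-learning update applied to one stored transition."""
    best_next = max(Q[s_next].values())
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])

def backward_replay(Q, episode):
    """Replay an episode's transitions in reverse order (backward pass),
    so the terminal reward reaches earlier state-action pairs sooner."""
    for s, a, r, s_next in reversed(episode):
        q_update(Q, s, a, r, s_next)

# Toy Q-table whose actions are two hypothetical binarization schemes:
Q = {s: {a: 0.0 for a in ("S1", "V1")} for s in ("exploration", "exploitation")}
episode = [("exploration", "S1", 0.0, "exploration"),
           ("exploration", "V1", 0.0, "exploitation"),
           ("exploitation", "S1", 1.0, "exploitation")]
backward_replay(Q, episode)
print(Q["exploration"])   # earlier decisions already reflect the final reward
```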

    A Novel Learning-Based Binarization Scheme Selector for Swarm Algorithms Solving Combinatorial Problems

    Currently, industry is seeing an exponential increase in binary-based combinatorial problems. In this regard, metaheuristics have become a common trend in the field for designing approaches to solve them successfully. A well-known strategy is to employ continuous swarm-based algorithms transformed to operate in binary environments. In this work, we propose a hybrid approach that combines intelligently adapted discrete population-based strategies to tackle binary-based problems efficiently. The proposed approach employs a reinforcement learning technique known as SARSA (State–Action–Reward–State–Action) in order to exploit knowledge gathered at run time. To test the viability and competitiveness of our proposal, we compare state-of-the-art discrete algorithms intelligently assisted by SARSA. Finally, we present results in which the proposed hybrid outperforms other approaches, providing a novel option for tackling these types of problems in industry.
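    SARSA itself is standard; the sketch below shows its on-policy update in the same hypothetical scheme-selection setting as above (scheme names, states, and reward are illustrative assumptions, not the paper's setup). The distinguishing detail is that the target uses the action actually chosen next, rather than the greedy maximum used by Q-learning.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
SCHEMES = ["S1", "S2", "V1", "V3"]       # hypothetical binarization schemes
STATES = ["exploration", "exploitation"]
Q = {s: {a: 0.0 for a in SCHEMES} for s in STATES}

def epsilon_greedy(state):
    """Behaviour policy used both for acting and for the update target."""
    if random.random() < EPSILON:
        return random.choice(SCHEMES)
    return max(Q[state], key=Q[state].get)

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)),
    where a' is the action the policy actually selected in s'."""
    Q[s][a] += ALPHA * (r + GAMMA * Q[s_next][a_next] - Q[s][a])

# One sketched step: choose, act (placeholder reward), choose again, update.
s = "exploration"
a = epsilon_greedy(s)
r = 1.0                                   # placeholder run-time reward signal
s_next = "exploitation"
a_next = epsilon_greedy(s_next)
sarsa_update(s, a, r, s_next, a_next)
```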

    Analysis and Prediction of Engineering Student Behavior and Their Relation to Academic Performance Using Data Analytics Techniques

    This study focuses on identifying personality traits in computer science students and determining whether they are related to academic performance. In addition, the importance of these personality traits was measured using a motivation scale and depression, anxiety, and stress scales. A sample of 188 students from the Computer Engineering Schools of the Pontifical Catholic University of Valparaíso was used. Through econometric two-stage least squares and paired-sample correlation analysis, the results indicate a relation between academic performance and the personality traits measured by the educational motivation scale, as well as university entrance ranking and gender. These results also allowed students to be characterized by their personality traits and provided elements that may support the development of an effective personality, allowing students to face their environment successfully and playing an important role in the educational process.