73 research outputs found

    DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization

    Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been successfully applied to various Combinatorial Optimization Problems (COPs). Traditionally, customizing ACO for a specific problem requires the expert design of knowledge-driven heuristics. In this paper, we propose DeepACO, a generic framework that leverages deep reinforcement learning to automate heuristic designs. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and dispense with laborious manual design in future ACO applications. As a neural-enhanced meta-heuristic, DeepACO consistently outperforms its ACO counterparts on eight COPs using a single neural model and a single set of hyperparameters. As a Neural Combinatorial Optimization method, DeepACO performs better than or on par with problem-specific methods on canonical routing problems. Our code is publicly available at https://github.com/henry-yeh/DeepACO. (Comment: Accepted at NeurIPS 2023.)
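
    As a rough illustration of the idea in this abstract, the sketch below contrasts a classic ACO construction step, where the heuristic matrix is hand-crafted (e.g., inverse distance for the TSP), with a DeepACO-style step where the heuristic values would instead be produced by a learned model. The `learned_heuristic` stand-in is an assumption for illustration only; the paper's actual neural architecture and training loop are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def construct_tour(tau, eta, alpha=1.0, beta=2.0):
    """Build one ant's tour: the next city is chosen with probability
    proportional to pheromone**alpha * heuristic**beta."""
    n = tau.shape[0]
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        cand = np.array(sorted(unvisited))
        w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
        j = int(rng.choice(cand, p=w / w.sum()))
        tour.append(j)
        unvisited.remove(j)
    return tour

coords = rng.random((10, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

# Classic ACO: knowledge-driven heuristic, here simply 1/distance.
eta_manual = 1.0 / (dist + 1e-9)

# DeepACO-style: the heuristic matrix comes from a trained neural model.
# `learned_heuristic` is a placeholder (assumption), not the paper's network.
def learned_heuristic(dist_matrix):
    return 1.0 / (dist_matrix + 1e-9)   # stand-in for a neural model's output

tau = np.ones_like(dist)                 # uniform initial pheromone
print(construct_tour(tau, learned_heuristic(dist)))
```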

    Current applications of ant systems for subset problems

    Early applications of Ant Colony Optimization (ACO) have been mainly concerned with solving ordering problems (e.g., the Traveling Salesman Problem). In this report we describe an Ant System algorithm that is also appropriate for solving other subset problems, as was shown for the multiple knapsack problem in previous work. The experiments in progress show the potential power of the ACO approach for solving different subset problems. (Track: Intelligent Systems. Metaheuristics. Red de Universidades con Carreras en Informática, RedUNCI.)
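
    A minimal sketch of the subset-construction idea mentioned above, on a tiny multidimensional knapsack instance: pheromone is associated with items rather than with an ordering, each ant keeps adding feasible items until none fit, and the iteration-best subset reinforces the pheromone of its items. The instance data and parameter values below are made up for illustration and are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny multidimensional knapsack instance (illustrative data only).
profit   = np.array([10.0, 6.0, 8.0, 7.0, 5.0])
weight   = np.array([[3.0, 2.0, 4.0, 1.0, 3.0],    # resource 1 used by each item
                     [2.0, 3.0, 1.0, 4.0, 2.0]])   # resource 2 used by each item
capacity = np.array([7.0, 6.0])

def construct_subset(tau, alpha=1.0, beta=1.0):
    """One ant builds a subset: items are added with probability proportional to
    pheromone**alpha * heuristic**beta, where the heuristic is the item's profit."""
    chosen, used = set(), np.zeros_like(capacity)
    while True:
        cand = [j for j in range(len(profit))
                if j not in chosen and np.all(used + weight[:, j] <= capacity)]
        if not cand:
            return chosen
        w = np.array([(tau[j] ** alpha) * (profit[j] ** beta) for j in cand])
        j = int(rng.choice(cand, p=w / w.sum()))
        chosen.add(j)
        used += weight[:, j]

tau = np.ones(len(profit))                          # one pheromone value per item
for _ in range(50):
    subsets = [construct_subset(tau) for _ in range(10)]
    best = max(subsets, key=lambda s: profit[list(s)].sum())
    tau *= 0.9                                      # evaporation
    tau[list(best)] += 1.0                          # reinforce iteration-best items
print(sorted(best), profit[list(best)].sum())
```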

    Incorporating Memory and Learning Mechanisms Into Meta-RaPS

    Due to the rapid increase in the dimensions and complexity of real-life problems, it has become more difficult to find optimal solutions using only exact mathematical methods. The need to find near-optimal solutions in an acceptable amount of time is a challenge when developing more sophisticated approaches. A proper answer to this challenge can be the implementation of metaheuristic approaches. However, a more powerful answer might be reached by incorporating intelligence into metaheuristics. Meta-RaPS (Metaheuristic for Randomized Priority Search) is a metaheuristic that creates high-quality solutions for discrete optimization problems. It is proposed that incorporating memory and learning mechanisms into Meta-RaPS, which is currently classified as a memoryless metaheuristic, can help the algorithm produce higher-quality results. The proposed Meta-RaPS versions were created by taking different perspectives on learning. The first approach taken is Estimation of Distribution Algorithms (EDA), a stochastic learning technique that creates a probability distribution for each decision variable to generate new solutions. The second Meta-RaPS version was developed by utilizing a machine learning algorithm, Q-Learning, which has been successfully applied to optimization problems whose output is a sequence of actions. In the third Meta-RaPS version, Path Relinking (PR) was implemented as a post-optimization method in which the new algorithm learns good attributes by memorizing the best solutions and follows them to reach better solutions. The fourth proposed version of Meta-RaPS presented another form of learning through its ability to adaptively tune parameters. The efficiency of these approaches motivated us to redesign Meta-RaPS by removing the improvement phase and adding a more sophisticated Path Relinking method. The new Meta-RaPS could solve even the largest problems in much less time while maintaining solution quality. To evaluate their performance, all introduced versions were tested on the 0-1 Multidimensional Knapsack Problem (MKP). After comparing the proposed algorithms, Meta-RaPS PR and Meta-RaPS Q-Learning appeared to be the algorithms with the best and worst performance, respectively. Even so, all of them showed performance superior to other approaches to the 0-1 MKP in the literature.
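
    For context on the base algorithm, here is a small sketch of the Meta-RaPS construction phase as it is usually described in the literature: with probability p the element with the best priority value is added; otherwise a random element is drawn from a candidate list whose priorities lie within a restriction fraction r of the best. It is applied here to a toy 0-1 MKP instance with a simple profit-to-weight priority rule; the data, parameter values, and priority rule are illustrative assumptions, and the memory/learning extensions discussed above are not shown.

```python
import random

random.seed(0)

# Toy 0-1 multidimensional knapsack instance (illustrative data only).
profit   = [10, 6, 8, 7, 5]
weight   = [[3, 2, 4, 1, 3],   # resource 1
            [2, 3, 1, 4, 2]]   # resource 2
capacity = [7, 6]
M = len(capacity)

def fits(item, used):
    return all(used[res] + weight[res][item] <= capacity[res] for res in range(M))

def metaraps_construct(p=0.6, r=0.2):
    """Meta-RaPS construction: greedy pick with probability p, otherwise a random
    pick from the candidates whose priority is within r of the best priority."""
    priority = [profit[j] / sum(weight[res][j] for res in range(M))
                for j in range(len(profit))]        # simple profit/weight rule
    solution, used = set(), [0] * M
    while True:
        cand = [j for j in range(len(profit)) if j not in solution and fits(j, used)]
        if not cand:
            return solution
        best = max(cand, key=lambda j: priority[j])
        if random.random() < p:
            pick = best
        else:
            cutoff = priority[best] * (1 - r)
            pick = random.choice([j for j in cand if priority[j] >= cutoff])
        solution.add(pick)
        for res in range(M):
            used[res] += weight[res][pick]

sol = metaraps_construct()
print(sorted(sol), sum(profit[j] for j in sol))
```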

    Dynamic Impact for Ant Colony Optimization algorithm

    This paper proposes an extension method for the Ant Colony Optimization (ACO) algorithm called Dynamic Impact. Dynamic Impact is designed to solve challenging optimization problems that have a nonlinear relationship between resource consumption and fitness in relation to other parts of the optimized solution. The proposed method is tested against a complex real-world Microchip Manufacturing Plant Production Floor Optimization (MMPPFO) problem, as well as the theoretical benchmark Multi-Dimensional Knapsack Problem (MKP). MMPPFO is a non-trivial optimization problem, due to the dependence of the solution fitness value on a collection of wafer-lots without prioritization of any individual wafer-lot. Using Dynamic Impact, the single-objective optimization fitness value is improved by 33.2%. Furthermore, MKP benchmark instances of small complexity have been solved to a 100% success rate even where a high degree of solution sparseness is observed, and large instances showed the average gap improved by a factor of 4.26. The algorithm implementation demonstrated superior performance across small and large datasets and sparse optimization problems.

    Heuristics for Multiple Knapsack Problem

    The Multiple Knapsack Problem (MKP) is a hard combinatorial optimization problem with broad application, embracing many practical problems from different domains, such as cargo loading, cutting stock, bin packing, and financial and other management problems. It also arises as a subproblem in several more complex problems, like the vehicle routing problem, and the algorithms for solving those problems will benefit from any improvement in the field of the MKP. The aim of this paper is to compare different kinds of heuristic models, static and dynamic. The heuristics are used by an Ant Colony Optimization (ACO) algorithm to construct solutions to the MKP.
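
    To make the static/dynamic distinction concrete, a brief sketch: a static heuristic is computed once from the instance data, while a dynamic heuristic is recomputed during construction from the partial solution. The particular dynamic rule below (weighting each resource by the scarcity of its remaining capacity) is an illustrative assumption, not necessarily one of the heuristics compared in the paper, and the data is made up.

```python
import numpy as np

profit   = np.array([10.0, 6.0, 8.0, 7.0, 5.0])
weight   = np.array([[3.0, 2.0, 4.0, 1.0, 3.0],
                     [2.0, 3.0, 1.0, 4.0, 2.0]])
capacity = np.array([7.0, 6.0])

def static_heuristic():
    # Computed once per instance: profit per unit of total resource consumption.
    return profit / weight.sum(axis=0)

def dynamic_heuristic(used):
    # Recomputed at every construction step: resources with little remaining
    # capacity weigh more, so items that consume them look less attractive.
    remaining = np.maximum(capacity - used, 1e-9)
    scarcity = 1.0 / remaining
    return profit / (scarcity @ weight)

print(static_heuristic())                       # identical for the whole run
print(dynamic_heuristic(np.array([3.0, 2.0])))  # changes as items are packed
```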

    Modified and Ensemble Intelligent Water Drop Algorithms and Their Applications

    1.1 Introduction: Optimization is a process concerned with finding the best solution of a given problem from among the possible solutions within an affordable time and cost (Weise et al., 2009). The first step in the optimization process is formulating the optimization problem through an objective function and a set of constraints that encompass the problem search space (i.e., the regions of feasible solutions). Every alternative (i.e., solution) is represented by a set of decision variables. Each decision variable has a domain, which is a representation of the set of all possible values that the decision variable can take. The second step in optimization starts by utilizing an optimization method (i.e., a search method) to find the best candidate solutions. A candidate solution has a configuration of decision variables that satisfies the set of problem constraints and that maximizes or minimizes the objective function (Boussaid et al., 2013). The method converges to the optimal solution (i.e., a local or global optimal solution) by reaching the optimal values of the decision variables. Figure 1.1 depicts a 3D fitness landscape of an optimization problem. It shows the concept of local and global optima, where the local optimal solution is not necessarily the same as the global one (Weise et al., 2009). Optimization can be applied to many real-world problems in various domains. As an example, mathematicians apply optimization methods to identify the best outcome pertaining to some mathematical functions within a range of variables (Vesterstrom and Thomsen, 2004). In the presence of conflicting criteria, engineers use optimization methods t
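
    A compact illustration of the two steps just described (formulating the problem, then searching it), using a made-up 0-1 knapsack instance: the decision variables and their {0, 1} domains, an objective to maximize, a constraint defining the feasible region, and an exhaustive scan of the (tiny) search space that a metaheuristic would replace on realistic problem sizes.

```python
from itertools import product

# Formulation: decision variable x[j] in {0, 1} says whether item j is selected.
values  = [4, 2, 10, 2]    # illustrative data only
weights = [12, 2, 4, 1]
budget  = 15

def objective(x):
    # Quantity to maximize: total value of the selected items.
    return sum(v for v, take in zip(values, x) if take)

def feasible(x):
    # Constraint defining the feasible region: total weight within the budget.
    return sum(w for w, take in zip(weights, x) if take) <= budget

# Search: enumerate every candidate solution. A metaheuristic replaces this
# exhaustive scan when the search space is too large to enumerate.
best = max((x for x in product((0, 1), repeat=len(values)) if feasible(x)),
           key=objective)
print(best, objective(best))
```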