34 research outputs found

    Investigating Evaluation Measures in Ant Colony Algorithms for Learning Decision Tree Classifiers

    Get PDF
    Ant-Tree-Miner is a decision tree induction algorithm that is based on the Ant Colony Optimization (ACO) meta-heuristic. Ant-Tree-Miner-M is a recently introduced extension of Ant-Tree-Miner that learns multi-tree classification models. A multi-tree model consists of multiple decision trees, one for each class value, where each class-based decision tree is responsible for discriminating between its class value and all other values present in the class domain (one vs. all). In this paper, we investigate the use of 10 different classification quality evaluation measures in Ant-Tree-Miner-M, which are used for both candidate model evaluation and model pruning. Our experimental results, using 40 popular benchmark datasets, identify several quality functions that substantially improve on the simple Accuracy quality function that was previously used in Ant-Tree-Miner-M.
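    The following is a minimal sketch of the one-vs-all multi-tree structure described above, assuming a hypothetical tree_builder callable and a per-tree score interface; in Ant-Tree-Miner-M the per-class trees are constructed by ACO, which is not shown here.
    ```python
    # Illustrative sketch of a multi-tree (one-vs-all) classifier.
    # 'tree_builder' and the per-tree 'score' method are assumed interfaces.

    class MultiTreeClassifier:
        def __init__(self, class_values, tree_builder):
            self.class_values = class_values
            self.tree_builder = tree_builder  # tree_builder(X, binary_y) -> tree
            self.trees = {}

        def fit(self, X, y):
            for c in self.class_values:
                # One binary tree per class value: class c vs. all other values.
                binary_y = [1 if label == c else 0 for label in y]
                self.trees[c] = self.tree_builder(X, binary_y)
            return self

        def predict(self, X):
            predictions = []
            for x in X:
                # Each class-based tree scores its own class; take the strongest.
                scores = {c: tree.score(x) for c, tree in self.trees.items()}
                predictions.append(max(scores, key=scores.get))
            return predictions
    ```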

    Promoting Search Diversity in Ant Colony Optimization with Stubborn Ants

    Get PDF
    In ant colony optimization (ACO) methods, including Ant System and MAX-MIN Ant System, each ant stochastically generates its candidate solution, in a given iteration, based on the same pheromone (τ) and heuristic (η) information as every other ant. Stubborn ants is an ACO variation in which, if an ant generates a particular candidate solution in a given iteration, then the components of that solution will have a higher probability of being selected in the candidate solution generated by that ant in the next iteration. In previous work, we evaluated this variation with the MAX-MIN Ant System (MMAS) model and the Traveling Salesman Problem (TSP), and found that it can both improve solution quality and reduce execution time. In this paper, we evaluate stubborn ants with Ranked Ant System, and find that performance also improves in terms of solution quality and execution time.
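    As a rough illustration of the stubbornness mechanism for the TSP, the sketch below boosts the selection weight of edges that appeared in the ant's previous-iteration tour; the multiplicative boost and the parameter names are assumptions for illustration, not the paper's exact formulation.
    ```python
    import random

    def select_next_city(prev_tour_edges, current_city, unvisited, pheromone,
                         heuristic, alpha=1.0, beta=2.0, stubbornness=1.5):
        """Stubborn-ants style selection step (illustrative sketch)."""
        cities = list(unvisited)
        weights = []
        for city in cities:
            w = (pheromone[current_city][city] ** alpha) * \
                (heuristic[current_city][city] ** beta)
            if (current_city, city) in prev_tour_edges:
                w *= stubbornness  # bias toward this ant's own previous solution
            weights.append(w)
        # Roulette-wheel selection proportional to the (possibly boosted) weights
        return random.choices(cities, weights=weights, k=1)[0]
    ```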

    Fuzzy PSO: A Generalization of Particle Swarm Optimization

    Get PDF
    In standard particle swarm optimization (PSO), the best particle in each neighborhood exerts its influence over other particles in the neighborhood. In this paper, we propose fuzzy PSO, a generalization which differs from standard PSO in the following respect: charisma is defined to be a fuzzy variable, more than one particle in each neighborhood can have a non-zero degree of charisma, and, consequently, each such particle is allowed to influence others to a degree that depends on its charisma. We evaluate our model on the weighted maximum satisfiability (MAX-SAT) problem, comparing performance to standard PSO and to WalkSAT.
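    A minimal sketch of what a charisma-weighted velocity update could look like, assuming each neighbor contributes a best position together with a fuzzy charisma degree in [0, 1]; the normalization and parameter names are illustrative assumptions, not the paper's exact update rule.
    ```python
    import random

    def fuzzy_pso_velocity(velocity, position, personal_best, neighbors,
                           w=0.7, c1=1.5, c2=1.5):
        """Charisma-weighted velocity update (illustrative sketch).
        'neighbors' is a list of (neighbor_best_position, charisma) pairs."""
        total_charisma = sum(ch for _, ch in neighbors) or 1.0
        new_velocity = []
        for d in range(len(position)):
            cognitive = c1 * random.random() * (personal_best[d] - position[d])
            # Social term: every neighbor with non-zero charisma pulls the particle
            # in proportion to its charisma (standard PSO uses only the single best).
            pull = sum(ch * (nbest[d] - position[d]) for nbest, ch in neighbors)
            social = c2 * random.random() * pull / total_charisma
            new_velocity.append(w * velocity[d] + cognitive + social)
        return new_velocity
    ```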

    Co-Evolutionary Particle Swarm Optimization Applied to the 7x7 Seega Game

    Get PDF
    Seega is an ancient Egyptian two-stage board game that, in certain aspects, is more difficult than chess. The two-player game is most commonly played on a 7 × 7 board, but is also sometimes played on a 5 × 5 or 9 × 9 board. In the first and more difficult stage of the game, players take turns placing one disk each on the board until the board contains only one empty cell. In the second stage, players take turns moving disks of their color; a disk that becomes surrounded by disks of the opposite color is captured and removed from the board. Building on previous work on the 5 × 5 version of Seega, we focus in this paper on the 7 × 7 board. Our approach employs co-evolutionary particle swarm optimization for the generation of feature evaluation scores. Two separate swarms are used to evolve White players and Black players, respectively; each particle represents feature weights for use in the position evaluation. Experimental results are presented and the performance of the full game engine is discussed.
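    A minimal sketch of the two-swarm co-evolutionary setup described above, assuming a hypothetical play_game function that plays Seega with the given feature weights; fitness here is simply the score against sampled opponents from the other swarm, and the PSO velocity/position update is omitted.
    ```python
    import random

    def coevolve(num_particles=20, num_features=8, iterations=50, games_per_eval=3):
        """Two-swarm co-evolution sketch: White weights vs. Black weights."""

        def play_game(white_weights, black_weights):
            # Placeholder: a real engine would play Seega using position evaluation
            # driven by the given feature weights and return +1 / 0 / -1 for White.
            return random.choice([1, 0, -1])

        def random_particle():
            return [random.uniform(-1.0, 1.0) for _ in range(num_features)]

        white_swarm = [random_particle() for _ in range(num_particles)]
        black_swarm = [random_particle() for _ in range(num_particles)]

        for _ in range(iterations):
            for white in white_swarm:
                opponents = random.sample(black_swarm, games_per_eval)
                white_fitness = sum(play_game(white, b) for b in opponents)
                # ...standard PSO velocity/position update using white_fitness...
            for black in black_swarm:
                opponents = random.sample(white_swarm, games_per_eval)
                black_fitness = sum(-play_game(w, black) for w in opponents)
                # ...standard PSO velocity/position update using black_fitness...
        return white_swarm, black_swarm
    ```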

    Applying Co-Evolutionary Particle Swarm Optimization to the Egyptian Board Game Seega

    Get PDF
    Seega is an ancient Egyptian two-phase board game that, in certain aspects, is more difficult than chess. The two-player game is played on either a 5 × 5, 7 × 7, or 9 × 9 board. In the first and more difficult phase of the game, players take turns placing one disk each on the board until the board contains only one empty cell. In the second phase, players take turns moving disks of their color; a disk that becomes surrounded by disks of the opposite color is captured and removed from the board. We have developed a Seega program that employs co-evolutionary particle swarm optimization in the generation of feature evaluation scores. Two separate swarms are used to evolve White players and Black players, respectively; each particle represents feature weights for use in the position evaluation. Experimental results are presented and the performance of the full game engine is discussed.
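    To illustrate what a particle encodes, the sketch below combines numeric board features with a particle's weights into a single position score; the specific features named in the comment are hypothetical, since the paper defines its own feature set.
    ```python
    def evaluate_position(features, weights):
        """Linear position evaluation: dot product of board features and the
        feature weights carried by one particle (illustrative sketch)."""
        return sum(w * f for w, f in zip(weights, features))

    # Example with three hypothetical features (material, mobility, center control)
    score = evaluate_position(features=[2.0, 5.0, 1.0], weights=[1.0, 0.3, 0.5])
    ```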

    Negative Reinforcement and Backtrack-Points for Recurrent Neural Networks for Cost-Based Abduction

    Get PDF
    Abduction is the process of proceeding from data describing a set of observations or events to a set of hypotheses which best explains or accounts for the data. Cost-based abduction (CBA) is an AI formalism in which evidence to be explained is treated as a goal to be proven, proofs have costs based on how much needs to be assumed to complete the proof, and the set of assumptions needed to complete the least-cost proof is taken as the best explanation for the given evidence. In this paper, we introduce two techniques for improving the performance of high-order recurrent networks (HORNs) applied to cost-based abduction. In the backtrack-points technique, we use heuristics to recognize early that the network trajectory is moving in the wrong direction; we then restore the network state to a previously stored point and apply heuristic perturbations to nudge the network trajectory in a different direction. In the negative reinforcement technique, we add hyperedges to the network to reduce the attractiveness of local minima. We apply these techniques to a particularly difficult 300-hypothesis, 900-rule instance of CBA.
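    A minimal sketch of the backtrack-points idea under stated assumptions: the network exposes a mutable state vector and a step() method, and looks_unpromising is a caller-supplied heuristic; the snapshot/restore/perturb loop below is illustrative, not the paper's implementation.
    ```python
    import copy
    import random

    def run_with_backtrack_points(network, looks_unpromising, max_steps=10000,
                                  check_every=100, perturb_scale=0.1):
        """Backtrack-points sketch: snapshot the state periodically, and if the
        trajectory looks unpromising, restore the snapshot and nudge it."""
        snapshot = copy.deepcopy(network.state)
        for step in range(1, max_steps + 1):
            network.step()
            if step % check_every == 0:
                if looks_unpromising(network):
                    # Backtrack: restore the stored state, then perturb it so the
                    # trajectory heads in a different direction this time.
                    network.state = copy.deepcopy(snapshot)
                    for i in range(len(network.state)):
                        network.state[i] += random.uniform(-perturb_scale,
                                                           perturb_scale)
                else:
                    snapshot = copy.deepcopy(network.state)  # new backtrack point
        return network
    ```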

    Alpha-beta pruning and Althöfer's pathology-free negamax algorithm

    No full text
    The minimax algorithm, also called the negamax algorithm, remains today the most widely used search technique for two-player perfect-information games. However, minimaxing has been shown to be susceptible to game tree pathology, a paradoxical situation in which the accuracy of the search can decrease as the height of the tree increases. Althöfer's alternative minimax algorithm has been proven to be invulnerable to pathology. However, it has not been clear whether alpha-beta pruning, a crucial component of practical game programs, could be applied in the context of Althöfer's algorithm. In this brief paper, we show how alpha-beta pruning can be adapted to Althöfer's algorithm. © 2012 by the author
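    For reference, the sketch below is standard negamax with alpha-beta pruning, the baseline the paper starts from; the adaptation to Althöfer's pathology-free algorithm itself is not reproduced here. The evaluate and children callables are assumed interfaces.
    ```python
    def negamax(node, depth, alpha, beta, evaluate, children):
        """Standard negamax search with alpha-beta pruning (baseline sketch).
        'evaluate' scores a position from the side to move; 'children' returns
        the successor positions of a node."""
        successors = children(node)
        if depth == 0 or not successors:
            return evaluate(node)
        best = float("-inf")
        for child in successors:
            # Negate and swap the window: the child's value from the opponent's
            # point of view becomes its negation from ours.
            score = -negamax(child, depth - 1, -beta, -alpha, evaluate, children)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return best
    ```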

    Is there a computational advantage to representing evaporation rate in ant colony optimization as a gaussian random variable?

    No full text
    We propose an ACO (Ant Colony Optimization) variation in which the evaporation rate, instead of being constant as is common in standard ACO algorithms, is a Gaussian random variable with non-negligible variance. In experimental results in the context of MAX-MIN Ant System (MMAS) and the Traveling Salesman Problem (TSP), we find that our variation performs considerably better than MMAS when the number of iterations is small, and that its performance is slightly better than MMAS when the number of iterations is large. © 2012 ACM
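    A sketch of the proposed variation under stated assumptions: the evaporation rate ρ is drawn from a Gaussian each time evaporation is applied, rather than being a fixed constant, and the result is clamped to MMAS-style [τ_min, τ_max] bounds. The parameter values are illustrative.
    ```python
    import random

    def evaporate(pheromone, mean_rho=0.02, sigma=0.01, tau_min=0.01, tau_max=5.0):
        """Evaporation step with a Gaussian evaporation rate (illustrative sketch).
        'pheromone' is a square matrix (list of lists) of edge pheromone levels."""
        rho = random.gauss(mean_rho, sigma)
        rho = min(max(rho, 0.0), 1.0)  # keep the sampled rate a valid proportion
        for i in range(len(pheromone)):
            for j in range(len(pheromone[i])):
                level = (1.0 - rho) * pheromone[i][j]
                pheromone[i][j] = max(tau_min, min(tau_max, level))
        return pheromone
    ```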

    Utilizing multiple pheromones in an ant-based algorithm for continuous-attribute classification rule discovery.

    Get PDF
    The cAnt-Miner algorithm is an Ant Colony Optimization (ACO) based technique for classification rule discovery in problem domains which include continuous attributes. In this paper, we propose several extensions to cAnt-Miner. The main extension is based on the use of multiple pheromone types, one for each class value to be predicted. In the proposed multi-pheromone cAnt-Miner algorithm, an ant first selects a class value to be the consequent of a rule, and the terms in the antecedent are selected based on the pheromone levels of the selected class value; pheromone update occurs on the corresponding pheromone type of the class value. The pre-selection of a class value also allows the use of more precise measures for the heuristic function and the dynamic discretization of continuous attributes, and further allows for the use of a rule quality measure that directly takes into account the confidence of the rule. Experimental results on 20 benchmark datasets show that our proposed extension improves classification accuracy to a statistically significant extent compared to cAnt-Miner, and has classification accuracy similar to the well-known Ripper and PART rule induction algorithms.
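    The sketch below illustrates the multiple-pheromone idea: one pheromone (and heuristic) table per class value, with the consequent chosen before the antecedent terms are sampled from that class's own tables. The data structures, the uniform class choice, and the fixed term budget are illustrative assumptions, not the exact algorithm.
    ```python
    import random

    def construct_rule(class_values, terms, pheromone_by_class, heuristic_by_class,
                       max_terms=5):
        """Rule construction sketch with per-class pheromone types.
        'pheromone_by_class[c][t]' and 'heuristic_by_class[c][t]' give the
        pheromone level and heuristic value of term t for class c."""
        consequent = random.choice(class_values)   # class value selected first
        pheromone = pheromone_by_class[consequent]
        heuristic = heuristic_by_class[consequent]
        antecedent = []
        available = list(terms)
        for _ in range(min(max_terms, len(available))):
            weights = [pheromone[t] * heuristic[t] for t in available]
            term = random.choices(available, weights=weights, k=1)[0]
            antecedent.append(term)
            available.remove(term)
        return antecedent, consequent
    ```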