Bandit-Inspired Memetic Algorithms for Solving Quadratic Assignment Problems
In this paper we propose a novel algorithm, the Bandit-Inspired Memetic Algorithm (BIMA), and apply it to solve large instances of the Quadratic Assignment Problem (QAP). Like other memetic algorithms, BIMA makes use of local search and a population of solutions. The novelty lies in the use of multi-armed bandit algorithms and assignment matrices for generating novel solutions, which are then brought to a local minimum by local search. We have compared BIMA to multi-start local search (MLS) and iterated local search (ILS) on five QAP instances, and the results show that BIMA significantly outperforms these competitors.
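The abstract does not spell out BIMA's bandit component in detail. As a rough illustration, a UCB1-style bandit could be used to score and pick among candidate choices (for example, which generation operator or which assignment option to apply) before local search refines the result. The sketch below is a generic UCB1 implementation under that assumption, not the paper's exact method.

```python
import math

class UCB1:
    """Minimal UCB1 bandit: one arm per candidate choice (illustrative only)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms      # times each arm was played
        self.values = [0.0] * n_arms    # running mean reward per arm

    def select(self):
        # Play every arm once before applying the confidence-bound rule.
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm
        total = sum(self.counts)
        scores = [v + math.sqrt(2.0 * math.log(total) / c)
                  for v, c in zip(self.values, self.counts)]
        return scores.index(max(scores))

    def update(self, arm, reward):
        # Incremental update of the played arm's mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In a memetic loop, the reward for an arm could, for instance, be the improvement obtained after the subsequent local search, so that choices leading to better local optima are pulled more often.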
Sampled Policy Gradient for Learning to Play the Game Agar.io
In this paper, a new offline actor-critic learning algorithm is introduced: Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an approximated policy gradient by using the critic to evaluate the samples. This sampling allows SPG to search the action-Q-value space more globally than deterministic policy gradient (DPG), enabling it to theoretically avoid more local optima. SPG is compared to Q-learning and the actor-critic algorithms CACLA and DPG in a pellet collection task and a self-play environment in the game Agar.io. The online game Agar.io has become massively popular on the internet due to its intuitive game design and the ability to instantly compete against players around the world. From the point of view of artificial intelligence this game is also very intriguing: it has a continuous input and action space and allows diverse agents with complex strategies to compete against each other. The experimental results show that Q-learning and CACLA outperform a pre-programmed greedy bot in the pellet collection task, but all algorithms fail to outperform this bot in a fighting scenario. The SPG algorithm is shown to be highly extensible through offline exploration, and it matches DPG in performance even in its basic form without extensive sampling.
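As a concrete illustration of the sampling idea, the sketch below draws Gaussian perturbations around the actor's current action and keeps the sample the critic values most; the actor would then be regressed towards that target. The `actor` and `critic` callables, the Gaussian sampling distribution, and the [-1, 1] action bounds are placeholders chosen for the example, not details taken from the paper.

```python
import numpy as np

def spg_action_target(state, actor, critic, n_samples=10, sigma=0.3, rng=None):
    """Sample actions around the actor's output and keep the one the critic
    rates highest; the actor can then be regressed towards this target."""
    rng = rng if rng is not None else np.random.default_rng()
    a0 = actor(state)                        # current deterministic action
    best_a, best_q = a0, critic(state, a0)
    for _ in range(n_samples):
        a = np.clip(a0 + sigma * rng.standard_normal(a0.shape), -1.0, 1.0)
        q = critic(state, a)
        if q > best_q:                       # keep the best-valued sample
            best_a, best_q = a, q
    return best_a                            # e.g. used in an MSE loss for the actor
```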
Learning to Play Pac-Xon with Q-Learning and Two Double Q-Learning Variants
Pac-Xon is an arcade video game in which the player tries to fill the level space by conquering blocks while being threatened by enemies. In this paper we investigate whether a reinforcement learning (RL) agent can successfully learn to play this game. The RL agent consists of a multilayer perceptron (MLP) that uses a feature representation of the game state as input variables and gives Q-values for each possible action as output. For training the agent, the use of Q-learning is compared to two double Q-learning variants: the original algorithm and a novel variant. Furthermore, we have set up an alternative, progressive reward function which presents higher rewards towards the end of a level to try to increase the performance of the algorithms. The results show that all algorithms can be used to successfully learn to play Pac-Xon. Furthermore, both double Q-learning variants obtain significantly higher performance than Q-learning, and the progressive reward function does not yield better results than the regular reward function.
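For reference, the tabular form of the original double Q-learning update (van Hasselt, 2010) is sketched below; the paper itself uses an MLP function approximator, and its novel variant is not reproduced here. The learning rate and discount factor are illustrative values.

```python
import random
from collections import defaultdict

alpha, gamma = 0.1, 0.99        # illustrative learning rate and discount factor
QA = defaultdict(float)         # Q_A(s, a)
QB = defaultdict(float)         # Q_B(s, a)

def double_q_update(s, a, r, s_next, actions, done):
    """One tabular double Q-learning step (van Hasselt, 2010)."""
    # Randomly pick which estimator to update; the other supplies the value
    # of the greedy action, decoupling action selection from evaluation.
    if random.random() < 0.5:
        Q1, Q2 = QA, QB
    else:
        Q1, Q2 = QB, QA
    if done:
        target = r
    else:
        a_star = max(actions, key=lambda a2: Q1[(s_next, a2)])
        target = r + gamma * Q2[(s_next, a_star)]
    Q1[(s, a)] += alpha * (target - Q1[(s, a)])
```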
Hierarchical reinforcement learning for real-time strategy games
Real-Time Strategy (RTS) games can be abstracted to resource allocation problems that arise in many fields and industries. We consider a simplified custom RTS game focused on mid-level combat and train agents with reinforcement learning (RL) algorithms. This paper makes several contributions to game playing with RL. First, we combine hierarchical RL with a multi-layer perceptron (MLP) that receives higher-order inputs for increased learning speed and performance. Second, we compare Q-learning against Monte Carlo learning as reinforcement learning algorithms. Third, because the teams in the RTS game are multi-agent systems, we examine two different methods for assigning rewards to agents. Experiments are performed against two different fixed opponents. The results show that the combination of Q-learning and individual rewards yields the highest win rate against the different opponents, and is able to defeat the opponent within 26 training games.
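The two value targets being compared can be summarised in a few lines. The snippet below shows them in tabular form purely as an illustration; the paper uses an MLP with higher-order inputs, and the discount factor here is an assumed value, not one stated in the abstract.

```python
gamma = 0.95  # illustrative discount factor

def q_learning_target(reward, next_state, Q, actions):
    """Bootstrapped one-step target: reward plus discounted best next value."""
    return reward + gamma * max(Q[(next_state, a)] for a in actions)

def monte_carlo_target(rewards_to_go):
    """Full-episode target: discounted sum of all rewards until the episode ends."""
    g = 0.0
    for r in reversed(rewards_to_go):
        g = r + gamma * g
    return g
```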
Temporal difference learning for the game Tic-Tac-Toe 3D: Applying structure to neural networks
When reinforcement learning is applied to large state spaces, such as those occurring in playing board games, the use of a good function approximator for learning the value function is very important. In previous research, multilayer perceptrons have often been quite successfully used as function approximators for learning to play particular games with temporal difference learning. With the recent developments in deep learning, it is important to study whether using multiple hidden layers or particular network structures can help to improve learning the value function. In this paper, we compare five different structures of multilayer perceptrons for learning to play the game Tic-Tac-Toe 3D, both when training through self-play and when training against the same fixed opponent they are tested against. We compare three fully connected multilayer perceptrons with a different number of hidden layers and/or hidden units, as well as two structured ones. These structured multilayer perceptrons have a first hidden layer that is only sparsely connected to the input layer, with units that correspond to the rows in Tic-Tac-Toe 3D. This allows them to more easily learn the contribution of specific patterns on the corresponding rows. One of the two structured multilayer perceptrons has a second hidden layer that is fully connected to the first one, which allows the neural network to learn to non-linearly integrate the information in these detected patterns. The results on Tic-Tac-Toe 3D show that the deep structured neural network with integrated pattern detectors has the strongest performance out of the compared multilayer perceptrons against a fixed opponent, both through self-training and through training against this fixed opponent.
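A minimal sketch of such a sparsely connected first layer is given below: each hidden unit is wired only to the cells of one row (winning line) of the board, so it acts as a pattern detector for that row. The flat board encoding, the tanh activation, and the generic `rows` index list are assumptions for illustration; the paper's exact architecture details are not given in the abstract.

```python
import numpy as np

def structured_first_layer(board, rows, W, b):
    """Sparsely connected first hidden layer: hidden unit i only sees the cells
    listed in rows[i] (one winning line). board is a flat vector encoding the
    position; W has shape (len(rows), cells_per_row), b has shape (len(rows),)."""
    pre = np.array([W[i] @ board[list(r)] + b[i] for i, r in enumerate(rows)])
    return np.tanh(pre)   # one activation per row; a dense layer can combine them
```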
Local search and restart strategies for satisfiability solving in fuzzy logics
Satisfiability solving in fuzzy logics is a subject that has not been researched much, certainly compared to satisfiability in propositional logics. Yet, fuzzy logics are a powerful tool for modelling complex problems. Recently, we proposed an optimization approach to solving satisfiability in fuzzy logics and compared the standard Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm with an analytical solver on a set of benchmark problems. CMA-ES compared favourably to the analytical approach, especially on more fine-grained problems. In this paper, we evaluate two types of hillclimber in addition to CMA-ES, as well as restart strategies for these algorithms. Our results show that a population-based hillclimber outperforms CMA-ES on the harder problem class.
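A minimal sketch of a population-based hillclimber over fuzzy truth values in [0, 1] is shown below. The Gaussian neighbourhood move, the greedy acceptance rule, and the assumed `error` objective (zero exactly when the formula is satisfied) are illustrative choices, not the exact operators evaluated in the paper.

```python
import numpy as np

def population_hillclimb(error, n_vars, pop_size=20, sigma=0.05, iters=1000, rng=None):
    """Minimal population-based hillclimber over truth assignments in [0, 1]^n.
    error(x) is assumed to return 0 exactly when the fuzzy formula is satisfied."""
    rng = rng if rng is not None else np.random.default_rng()
    pop = rng.random((pop_size, n_vars))
    fit = np.array([error(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            cand = np.clip(pop[i] + sigma * rng.standard_normal(n_vars), 0.0, 1.0)
            f = error(cand)
            if f <= fit[i]:            # greedy accept: keep equal-or-better moves
                pop[i], fit[i] = cand, f
        if fit.min() == 0.0:           # a satisfying assignment was found
            break
    best = fit.argmin()
    return pop[best], fit[best]
```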
Automatic Design of Multi-Objective Local Search Algorithms: Case Study on a bi-objective Permutation Flowshop Scheduling Problem
Multi-objective local search (MOLS) algorithms are efficient metaheuristics, which improve a set of solutions by using their neighbourhood to iteratively find better and better solutions. MOLS algorithms are versatile algorithms with many available strategies: first to select the solutions to explore, then to explore them, and finally to update the archive using some of the visited neighbours. In this paper, we propose a new generalisation of MOLS algorithms incorporating recent ideas and algorithms. To be able to instantiate the many MOLS algorithms of the literature, our generalisation exposes numerous numerical and categorical parameters, which opens up the possibility of designing them automatically with an automatic algorithm configuration (AAC) mechanism. We investigate the worth of such an automatic design of MOLS algorithms using MO-ParamILS, a multi-objective AAC configurator, on the permutation flowshop scheduling problem, and demonstrate its worth against a traditional manual design.
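The kind of parameterised skeleton involved is sketched below: a generic MOLS loop whose selection and exploration strategies are exposed as categorical parameters that an AAC tool could tune. It is a simplified illustration, not the generalisation proposed in the paper; archive bounding, duplicate handling, and most strategy options are omitted.

```python
import random

def mols(initial_solutions, neighbours, dominates,
         select="random", explore="first_improving", iters=1000):
    """Skeleton of a multi-objective local search whose strategy choices are
    exposed as parameters (the kind of categorical options an AAC tool tunes)."""
    archive = list(initial_solutions)
    for _ in range(iters):
        # Selection strategy: which archived solution to explore next.
        current = random.choice(archive) if select == "random" else archive[0]
        # Exploration strategy: scan the neighbourhood of the selected solution.
        for cand in neighbours(current):
            if not any(dominates(a, cand) for a in archive):
                # Archive update: keep the candidate, drop solutions it dominates.
                archive = [a for a in archive if not dominates(cand, a)] + [cand]
                if explore == "first_improving":
                    break
    return archive
```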
Recombination operators and selection strategies for evolutionary Markov Chain Monte Carlo algorithms
Markov Chain Monte Carlo (MCMC) methods are often used to sample from intractable target distributions. Some MCMC variants aim to improve performance by running a population of MCMC chains. In this paper, we investigate the use of techniques from Evolutionary Computation (EC) to design population-based MCMC algorithms that exchange useful information between the individual chains. We investigate how one can ensure that the resulting class of algorithms, called Evolutionary MCMC (EMCMC), samples from the target distribution as expected from any MCMC algorithm. We show analytically and experimentally, using examples from discrete search spaces, that the proposed EMCMCs can outperform standard MCMCs by exploiting common partial structures between the more likely individual states. The MCMC chains in the population interact through recombination and selection. We analyze the required properties of recombination operators and acceptance (or selection) rules in EMCMCs. An important issue is how to preserve the detailed balance property, which is a sufficient condition for an irreducible and aperiodic EMCMC to converge to a given target distribution. Transferring EC techniques to population-based MCMCs should be done with care: for instance, we prove that EMCMC algorithms with an elitist acceptance rule do not sample the target distribution correctly.
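To illustrate the detailed balance issue, the sketch below applies a Metropolis-style acceptance rule to a uniform-crossover proposal on a pair of chains: because producing the children from the parents is exactly as likely as the reverse move, the joint acceptance ratio preserves detailed balance, whereas an elitist rule that always keeps the fitter pair would not. The operator and the `log_target` interface are assumptions for the example, not the paper's exact operator set.

```python
import math
import random

def emcmc_pair_step(x, y, log_target, rng=random):
    """One population step: propose children by uniform crossover and accept
    them jointly with a Metropolis rule. The crossover proposal is symmetric,
    so this acceptance preserves detailed balance; always keeping the fitter
    pair (an elitist rule) would bias the stationary distribution."""
    mask = [rng.random() < 0.5 for _ in x]
    child_x = [a if m else b for a, b, m in zip(x, y, mask)]
    child_y = [b if m else a for a, b, m in zip(x, y, mask)]
    log_ratio = (log_target(child_x) + log_target(child_y)
                 - log_target(x) - log_target(y))
    if rng.random() < math.exp(min(0.0, log_ratio)):
        return child_x, child_y   # accepted: the children replace the parents
    return x, y                   # rejected: the current pair is kept
```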
A Bayesian model for anomaly detection in SQL databases for security systems
We focus on automatic anomaly detection in SQL databases for security systems. Many logs of database systems, here the Townhall database, contain detailed information about users, such as the SQL queries issued and the responses of the database. A database log is a list of log instances, where each log instance is a Cartesian product of feature values with an attached anomaly score. All log instances with an anomaly score in the top percentile are identified as anomalous. Our contribution is manifold. We define a model for anomaly detection in SQL databases that learns the structure of Bayesian networks from data. Our method for automatic feature extraction generates the maximal spanning tree to detect the strongest similarities between features. Novel anomaly scores based on the joint probability distribution of the database features and the log-likelihood of the maximal spanning tree detect both point and contextual anomalies. Multiple anomaly scores are combined within a robust anomaly analysis algorithm. We validate our method on the Townhall database, showing the performance of our anomaly detection algorithm.
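A rough sketch of the feature-dependency step and a tree-based log-likelihood anomaly score is given below. It assumes a pandas DataFrame `df` of categorical log features, uses mutual information as the edge weight (a common Chow-Liu-style choice), and falls back to a small constant probability for unseen value pairs; these are assumptions for illustration, not the paper's exact estimators.

```python
import itertools
import networkx as nx
import numpy as np
from sklearn.metrics import mutual_info_score

def build_feature_tree(df):
    """Maximal spanning tree over pairwise mutual information between
    categorical log features (sketch of the feature-dependency step)."""
    g = nx.Graph()
    for a, b in itertools.combinations(df.columns, 2):
        g.add_edge(a, b, weight=mutual_info_score(df[a], df[b]))
    return nx.maximum_spanning_tree(g, weight="weight")

def anomaly_score(row, tree, root, cond_probs):
    """Negative log-likelihood of one log instance under the tree model.
    cond_probs[(parent, child)][(pv, cv)] is an assumed, precomputed
    (smoothed) conditional probability P(child = cv | parent = pv)."""
    ll = 0.0
    for parent, child in nx.bfs_edges(tree, source=root):
        p = cond_probs[(parent, child)].get((row[parent], row[child]), 1e-6)
        ll += np.log(p)
    return -ll   # larger values indicate more anomalous log instances
```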