Deep learning for video game playing
In this article, we review recent deep learning advances in the context of
how they have been applied to play different types of video games such as
first-person shooters, arcade games, and real-time strategy games. We analyze
the unique requirements that different game genres pose to a deep learning
system and highlight important open challenges in the context of applying these
machine learning methods to video games, such as general game playing, dealing
with extremely large decision spaces, and sparse rewards.
DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution
Recent research has demonstrated the vulnerability of fingerprint recognition
systems to dictionary attacks based on MasterPrints. MasterPrints are real or
synthetic fingerprints that can fortuitously match with a large number of
fingerprints thereby undermining the security afforded by fingerprint systems.
Previous work by Roy et al. generated synthetic MasterPrints at the
feature-level. In this work we generate complete image-level MasterPrints known
as DeepMasterPrints, whose attack accuracy is markedly superior to
that of previous methods. The proposed method, referred to as Latent Variable
Evolution, is based on training a Generative Adversarial Network on a set of
real fingerprint images. Stochastic search in the form of the Covariance Matrix
Adaptation Evolution Strategy is then used to search for latent input variables
to the generator network that can maximize the number of impostor matches as
assessed by a fingerprint recognizer. Experiments convey the efficacy of the
proposed method in generating DeepMasterPrints. The underlying method is likely
to have broad applications in fingerprint security as well as fingerprint
synthesis.

Comment: 8 pages; added new verification systems and diagrams. Accepted to conference Biometrics: Theory, Applications, and Systems 201
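The search procedure the abstract describes can be sketched in a few lines. This is a minimal, self-contained illustration, not the paper's implementation: the GAN generator and the fingerprint recognizer are stubs operating on plain vectors, and a simple elitist Gaussian evolution strategy stands in for full Covariance Matrix Adaptation.

```python
import random

LATENT_DIM = 8  # dimensionality of the generator's latent input (illustrative)

def generator(z):
    # Stub for the trained GAN generator: maps a latent vector to a
    # "fingerprint image" (here just a rounded copy of z, for illustration).
    return tuple(round(v, 3) for v in z)

def num_impostor_matches(image, enrolled):
    # Stub fingerprint recognizer: counts enrolled templates whose distance
    # to the candidate image falls under a match threshold.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(1 for t in enrolled if dist(image, t) < 1.5)

def latent_variable_evolution(enrolled, generations=50, pop=16, sigma=0.5, seed=0):
    # Simplified (mu, lambda)-style Gaussian search over the latent space,
    # maximizing impostor matches; CMA-ES would additionally adapt the
    # sampling covariance rather than use a fixed sigma.
    rng = random.Random(seed)
    mean = [0.0] * LATENT_DIM
    best, best_fit = None, -1
    for _ in range(generations):
        offspring = [[m + rng.gauss(0, sigma) for m in mean] for _ in range(pop)]
        scored = sorted(offspring,
                        key=lambda z: num_impostor_matches(generator(z), enrolled),
                        reverse=True)
        elite = scored[: pop // 4]
        mean = [sum(vals) / len(elite) for vals in zip(*elite)]  # recenter on elites
        fit = num_impostor_matches(generator(scored[0]), enrolled)
        if fit > best_fit:
            best, best_fit = scored[0], fit
    return best, best_fit
```

The key design point is that the search never touches image pixels directly: only the latent input to the generator is evolved, so every candidate remains a plausible fingerprint by construction.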
PCGRL: Procedural Content Generation via Reinforcement Learning
We investigate how reinforcement learning can be used to train
level-designing agents. This represents a new approach to procedural content
generation in games, where level design is framed as a game, and the content
generator itself is learned. By seeing the design problem as a sequential task,
we can use reinforcement learning to learn how to take the next action so that
the expected final level quality is maximized. This approach can be used when
few or no examples exist to train from, and the trained generator is very fast.
We investigate three different ways of transforming two-dimensional level
design problems into Markov decision processes and apply these to three game
environments.

Comment: 7 pages, 7 figures, 1 table, published at AIIDE202
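The framing of level design as a sequential decision problem can be sketched as a tiny gym-style environment. Everything here is illustrative, not the PCGRL framework itself: the tile set, the quality metric, and the one-step-greedy policy (which stands in for a trained RL agent) are all assumptions.

```python
import random

class LevelDesignEnv:
    """Toy level-design MDP in a narrow-representation style: at each step the
    agent is shown one cell and chooses the tile to place there.  The reward
    is the change in a simple quality score, so maximizing return maximizes
    final level quality."""
    TILES = (".", "#")  # floor, wall (illustrative tile set)

    def __init__(self, width=8, height=4, seed=0):
        self.rng = random.Random(seed)
        self.w, self.h = width, height
        self.reset()

    def reset(self):
        self.grid = [[self.rng.choice(self.TILES) for _ in range(self.w)]
                     for _ in range(self.h)]
        self.cursor = 0

    def quality(self):
        # Toy quality metric: peaks when 60% of the level is floor.
        floor = sum(row.count(".") for row in self.grid)
        return 1.0 - abs(floor / (self.w * self.h) - 0.6)

    def step(self, tile):
        before = self.quality()
        y, x = divmod(self.cursor, self.w)
        self.grid[y][x] = tile
        self.cursor += 1
        done = self.cursor == self.w * self.h
        return (y, x), self.quality() - before, done

# A greedy hand-written policy stands in for the trained RL generator:
env = LevelDesignEnv()
done = False
while not done:
    floor = sum(row.count(".") for row in env.grid)
    tile = "." if floor / (env.w * env.h) < 0.6 else "#"  # move toward target
    _, reward, done = env.step(tile)
```

Because the generator is a policy rather than a dataset-driven model, it can be trained with no example levels at all, and generating a level is just one fast rollout.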
Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation
Deep reinforcement learning (RL) has shown impressive results in a variety of
domains, learning directly from high-dimensional sensory streams. However, when
neural networks are trained in a fixed environment, such as a single level in a
video game, they will usually overfit and fail to generalize to new levels.
When RL models overfit, even slight modifications to the environment can result
in poor agent performance. This paper explores how procedurally generated
levels during training can increase generality. We show that for some games
procedural level generation enables generalization to new levels within the
same distribution. Additionally, it is possible to achieve better performance
with less data by manipulating the difficulty of the levels in response to the
performance of the agent. The generality of the learned behaviors is also
evaluated on a set of human-designed levels. The results suggest that the
ability to generalize to human-designed levels highly depends on the design of
the level generators. We apply dimensionality reduction and clustering
techniques to visualize the generators' distributions of levels and analyze to
what degree they can produce levels similar to those designed by a human.

Comment: Accepted to NeurIPS Deep RL Workshop 201
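The idea of manipulating level difficulty in response to agent performance can be sketched as a simple feedback loop. This is a hedged toy model, not the paper's training code: the "agent" is a stub whose win probability falls linearly with difficulty, and the update rule and step size are assumptions.

```python
import random

def adaptive_difficulty(episodes=2000, step=0.02, seed=0):
    """Performance-driven level generation: after each episode, raise the
    generator's difficulty knob when the agent wins and lower it when the
    agent loses, keeping training near the edge of the agent's ability."""
    rng = random.Random(seed)
    difficulty, history = 0.0, []
    for _ in range(episodes):
        # Stub agent: chance of completing the level drops as difficulty rises.
        win = rng.random() < 1.0 - difficulty
        # Harder level after a win, easier after a loss.
        difficulty = min(1.0, max(0.0, difficulty + (step if win else -step)))
        history.append(difficulty)
    return difficulty, history
```

With symmetric steps, the loop settles near the difficulty at which the agent wins about half the time, so the agent keeps seeing fresh levels it can almost, but not quite, solve.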
AtDelfi: Automatically Designing Legible, Full Instructions For Games
This paper introduces a fully automatic method for generating video game
tutorials. The AtDELFI system (AuTomatically DEsigning Legible, Full
Instructions for games) was created to investigate procedural generation of
instructions that teach players how to play video games. We present a
representation of game rules and mechanics using a graph system as well as a
tutorial generation method that uses said graph representation. We demonstrate
the concept by testing it on games within the General Video Game Artificial
Intelligence (GVG-AI) framework; the paper discusses tutorials generated for
eight different games. Our findings suggest that a graph representation scheme
works well for simple arcade style games such as Space Invaders and Pacman, but
it appears that tutorials for more complex games might require higher-level
understanding of the game than just single mechanics.

Comment: 10 pages, 11 figures, published at Foundations of Digital Games Conference 201
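The core idea of turning a graph of rules and mechanics into instruction text can be sketched as follows. The edge schema, the Space-Invaders-like mechanics, and the templating are all illustrative assumptions, not AtDELFI's actual representation.

```python
# Toy mechanic graph: each edge is (subject, interaction, object, effect).
# Loosely modeled on a Space-Invaders-like game for illustration only.
EDGES = [
    ("the missile", "hits", "an alien",
     "the alien is destroyed and the score increases"),
    ("an alien", "touches", "the avatar", "the game is lost"),
    ("the avatar", "destroys", "all aliens", "the game is won"),
]

def tutorial_from_graph(edges):
    """Walk the interaction edges and emit one instruction per mechanic,
    putting win/loss conditions first so the player's goal is stated up
    front, as a tutorial would."""
    def priority(edge):
        return 0 if ("won" in edge[3] or "lost" in edge[3]) else 1
    return [f"If {src} {verb} {dst}, {effect}."
            for src, verb, dst, effect in sorted(edges, key=priority)]
```

For a single-mechanic arcade game this template-per-edge approach yields readable instructions; the abstract's caveat is that complex games need reasoning over chains of mechanics, not one edge at a time.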
CPPN2GAN: Combining Compositional Pattern Producing Networks and GANs for Large-Scale Pattern Generation
Generative Adversarial Networks (GANs) are proving to be a powerful indirect
genotype-to-phenotype mapping for evolutionary search, but they have
limitations. In particular, GAN output does not scale to arbitrary dimensions,
and there is no obvious way of combining multiple GAN outputs into a cohesive
whole, which would be useful in many areas, such as the generation of video
game levels. Game levels often consist of several segments, sometimes repeated
directly or with variation, organized into an engaging pattern. Such patterns
can be produced with Compositional Pattern Producing Networks (CPPNs).
Specifically, a CPPN can define latent vector GAN inputs as a function of
geometry, which provides a way to organize level segments output by a GAN into
a complete level. This new CPPN2GAN approach is validated in both Super Mario
Bros. and The Legend of Zelda. Specifically, divergent search via MAP-Elites
demonstrates that CPPN2GAN can better cover the space of possible levels. The
layouts of the resulting levels are also more cohesive and aesthetically
consistent.

Comment: GECCO 2020. arXiv admin note: text overlap with arXiv:2004.0015
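The composition the abstract describes, a CPPN supplying latent vectors as a function of level geometry and a GAN turning each latent vector into a segment, can be sketched minimally. Both networks are stubs here: the fixed sinusoidal "CPPN" and the hash-like "generator" are assumptions standing in for an evolved CPPN and a trained GAN.

```python
import math

LATENT_DIM = 4
SEGMENTS = 6  # the level is a horizontal sequence of generated segments

def cppn(x):
    """Toy CPPN: a fixed composition of periodic and linear functions mapping
    a segment's position x in [0, 1] to a latent vector.  A real CPPN's
    topology and weights would be evolved (e.g. by MAP-Elites)."""
    return [math.sin(3.0 * x + i) * 0.5 + 0.2 * x * i for i in range(LATENT_DIM)]

def gan_generator(z):
    """Stub GAN generator: derives a 4x4 tile segment from a latent vector
    (a placeholder for a network trained on real level segments)."""
    return [["#" if math.sin(10 * z[j] + i) > 0 else "-" for j in range(LATENT_DIM)]
            for i in range(4)]

def cppn2gan_level():
    # Because latent inputs vary smoothly with geometry, adjacent segments
    # are related but not identical; concatenating them yields a full level.
    segments = [gan_generator(cppn(s / (SEGMENTS - 1))) for s in range(SEGMENTS)]
    return ["".join("".join(seg[row]) for seg in segments) for row in range(4)]
```

The point of the indirection is scale: the GAN only ever outputs fixed-size segments, while the CPPN decides how those segments vary across arbitrarily large level geometry.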