Learning-Based Synthesis of Safety Controllers
We propose a machine learning framework to synthesize reactive controllers
for systems whose interactions with their adversarial environment are modeled
by infinite-duration, two-player games over (potentially) infinite graphs. Our
framework targets safety games with infinitely many vertices, but it is also
applicable to safety games over finite graphs whose size is too prohibitive for
conventional synthesis techniques. The learning takes place in a feedback loop
between a teacher component, which can reason symbolically about the safety
game, and a learning algorithm, which successively learns an overapproximation
of the winning region from various kinds of examples provided by the teacher.
We develop a novel decision tree learning algorithm for this setting and show
that our algorithm is guaranteed to converge to a reactive safety controller if
a suitable overapproximation of the winning region can be expressed as a
decision tree. Finally, we empirically compare the performance of a prototype
implementation to existing approaches, which are based on constraint solving
and automata learning, respectively.
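The teacher/learner feedback loop described above can be sketched on a toy safety game. Everything below is an illustrative assumption, not the paper's construction: the game (an integer state nudged by controller and adversary), the interval hypothesis (a deliberately simplified stand-in for the decision-tree learner), and the enumerative teacher (a stand-in for symbolic reasoning).

```python
# Toy safety game (hypothetical, for illustration): the state x is an integer,
# the controller picks a in {-1, 0, 1}, the adversary then picks e in {-1, 1},
# the next state is x + a + e, and the play is safe as long as |x| <= 3.
SAFE = lambda x: abs(x) <= 3
CTRL = (-1, 0, 1)
ENV = (-1, 1)

def teacher(lo, hi):
    """Stand-in for the symbolic teacher: check the candidate region
    [lo, hi] and return a counterexample, or None if it is winning."""
    for x in range(lo, hi + 1):
        if not SAFE(x):
            return ("negative", x)       # region contains an unsafe state
        # Inductiveness: some control move must keep every adversary
        # reply inside the candidate region.
        if not any(all(lo <= x + a + e <= hi for e in ENV) for a in CTRL):
            return ("escape", x)         # adversary can force the play out
    return None

def learn(initial=0):
    # Learner: maintain an interval hypothesis and refine it from the
    # teacher's examples (a stand-in for the decision-tree learner).
    lo = hi = initial
    while True:
        cex = teacher(lo, hi)
        if cex is None:
            return lo, hi
        kind, x = cex
        if kind == "negative":           # shrink away from unsafe states
            if x < initial:
                lo = x + 1
            else:
                hi = x - 1
        else:                            # widen so x's successors fit
            lo, hi = lo - 1, hi + 1

print(learn())  # → (-1, 1), an inductively safe region for the toy game
```

On this toy game the loop terminates after two teacher calls; in general, as the abstract notes, convergence is guaranteed only when a suitable region is expressible in the learner's hypothesis class.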
Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog
A number of recent works have proposed techniques for end-to-end learning of
communication protocols among cooperative multi-agent populations, and have
simultaneously found the emergence of grounded human-interpretable language in
the protocols developed by the agents, all learned without any human
supervision!
In this paper, using a Task and Tell reference game between two agents as a
testbed, we present a sequence of 'negative' results culminating in a
'positive' one -- showing that while most agent-invented languages are
effective (i.e. achieve near-perfect task rewards), they are decidedly not
interpretable or compositional.
In essence, we find that natural language does not emerge 'naturally',
despite the semblance of ease of natural-language-emergence that one may gather
from recent literature. We discuss how it is possible to coax the invented
languages to become more and more human-like and compositional by increasing
restrictions on how two agents may communicate.
Comment: 9 pages, 7 figures, 2 tables, accepted at EMNLP 2017 as short paper
Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees
Deep Reinforcement Learning (DRL) has achieved impressive success in many
applications. A key component of many DRL models is a neural network
representing a Q function, to estimate the expected cumulative reward following
a state-action pair. The Q function neural network contains a lot of implicit
knowledge about the RL problems, but often remains unexamined and
uninterpreted. To our knowledge, this work develops the first mimic learning
framework for Q functions in DRL. We introduce Linear Model U-trees (LMUTs) to
approximate neural network predictions. An LMUT is learned using a novel
on-line algorithm that is well-suited for an active play setting, where the
mimic learner observes an ongoing interaction between the neural net and the
environment. Empirical evaluation shows that an LMUT mimics a Q function
substantially better than five baseline methods. The transparent tree structure
of an LMUT facilitates understanding the network's learned knowledge by
analyzing feature influence, extracting rules, and highlighting the
super-pixels in image inputs.
Comment: This paper is accepted by ECML-PKDD 201
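LMUTs are not available in standard libraries, but the underlying mimic-learning idea, distilling a Q function into a transparent tree, can be sketched with a plain mean-leaf regression tree. Both the hand-made "teacher" Q function and the tree learner below are illustrative stand-ins, not the paper's LMUT algorithm (which fits linear models at the leaves and learns online during active play).

```python
import random

# Hand-made "teacher" Q function standing in for a trained DRL Q network
# (hypothetical; the paper mimics a real network during active play).
def teacher_q(s, a):
    return (1.0 if s > 0.5 else 0.0) + 0.1 * a

# Collect (state, action, q) samples, as the mimic learner would while
# observing the teacher's interaction with its environment.
random.seed(0)
data = [(s, a, teacher_q(s, a))
        for s in [random.random() for _ in range(200)]
        for a in (0, 1)]

def sse(rows):
    # Sum of squared errors around the mean target of a partition.
    ys = [y for *_, y in rows]
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def fit(rows, depth):
    # Minimal CART-style regression tree: greedily pick the feature and
    # threshold that most reduce squared error; leaves predict the mean.
    ys = [y for *_, y in rows]
    if depth == 0 or len(set(ys)) == 1:
        return ("leaf", sum(ys) / len(ys))
    best = None
    for feat in (0, 1):                      # 0: state, 1: action
        for row in rows:
            thr = row[feat]
            left = [r for r in rows if r[feat] <= thr]
            right = [r for r in rows if r[feat] > thr]
            if not left or not right:
                continue
            err = sse(left) + sse(right)
            if best is None or err < best[0]:
                best = (err, feat, thr, left, right)
    if best is None:
        return ("leaf", sum(ys) / len(ys))
    _, feat, thr, left, right = best
    return ("split", feat, thr, fit(left, depth - 1), fit(right, depth - 1))

def predict(tree, s, a):
    if tree[0] == "leaf":
        return tree[1]
    _, feat, thr, left, right = tree
    return predict(left, s, a) if (s, a)[feat] <= thr else predict(right, s, a)

tree = fit(data, depth=2)
err = sum((predict(tree, s, a) - y) ** 2 for s, a, y in data) / len(data)
print(round(err, 3))  # → 0.0
```

The resulting tree is fully inspectable: each path from root to leaf is a readable rule over state and action, which is the transparency property the abstract highlights.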
A Survey of Monte Carlo Tree Search Methods
Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
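The four canonical MCTS phases, selection, expansion, simulation, and backpropagation, with UCB1 selection (the UCT variant central to the surveyed literature) can be sketched on a toy domain. The domain, constants, and iteration budget below are illustrative assumptions, not taken from the survey.

```python
import math
import random

# Toy single-player domain (hypothetical): at each of DEPTH steps choose
# 0 or 1; the terminal reward is the fraction of 1s chosen, so the
# 1-action is always optimal.
DEPTH = 6
ACTIONS = (0, 1)

class Node:
    def __init__(self, state, parent=None):
        self.state = state            # tuple of choices made so far
        self.parent = parent
        self.children = {}            # action -> Node
        self.visits = 0
        self.value = 0.0              # sum of rollout rewards

def ucb1(child, parent_visits, c=1.4):
    # UCB1: exploitation term plus exploration bonus.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def rollout(state):
    # Default (uniform random) policy to a terminal state, then score it.
    while len(state) < DEPTH:
        state = state + (random.choice(ACTIONS),)
    return sum(state) / DEPTH

def mcts(iterations=2000):
    root = Node(())
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded.
        while len(node.children) == len(ACTIONS) and len(node.state) < DEPTH:
            node = max(node.children.values(),
                       key=lambda ch: ucb1(ch, node.visits))
        # 2. Expansion: add one untried child if non-terminal.
        if len(node.state) < DEPTH:
            a = next(a for a in ACTIONS if a not in node.children)
            node.children[a] = Node(node.state + (a,), parent=node)
            node = node.children[a]
        # 3. Simulation: random rollout from the new node.
        reward = rollout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited root action.
    return max(root.children, key=lambda a: root.children[a].visits)

print(mcts())  # the 1-action dominates, so MCTS should recommend 1
```

The exploration constant `c` trades off exploiting high-value children against revisiting under-sampled ones; tuning it per domain is one of the enhancement axes the survey catalogues.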
Playing Smart - Artificial Intelligence in Computer Games
Abstract: This document presents an overview of artificial intelligence in general, and of its use in modern computer games in particular. We first introduce the terminology of artificial intelligence, then give a brief history of this field of computer science, and finally discuss the impact this science has had on the development of computer games. These points are further illustrated by a number of case studies examining how artificially intelligent behaviour has been achieved in selected games.