Generation of Policy-Level Explanations for Reinforcement Learning
Though reinforcement learning has greatly benefited from the incorporation of
neural networks, the inability to verify the correctness of such systems limits
their use. Current work in explainable deep learning focuses on explaining only
a single decision in terms of input features, making it unsuitable for
explaining a sequence of decisions. To address this need, we introduce
Abstracted Policy Graphs, which are Markov chains of abstract states. This
representation concisely summarizes a policy so that individual decisions can
be explained in the context of expected future transitions. Additionally, we
propose a method to generate these Abstracted Policy Graphs for deterministic
policies given a learned value function and a set of observed transitions,
potentially off-policy transitions used during training. Since no restrictions
are placed on how the value function is generated, our method is compatible
with many existing reinforcement learning methods. We prove that the worst-case
time complexity of our method is quadratic in the number of features and linear
in the number of provided transitions, O(|F|^2 · |T|), where F is the set of
features and T the set of provided transitions. By applying
our method to a family of domains, we show that our method scales well in
practice and produces Abstracted Policy Graphs which reliably capture
relationships within these domains.
Comment: Accepted to Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (2019).
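As a rough illustration of the construction described above, the following Python sketch builds a Markov chain over abstract states from observed transitions of a deterministic policy. The grouping function `abstract` and the toy domain are illustrative assumptions, not the paper's actual feature-based abstraction procedure.

    # Minimal sketch: build an Abstracted-Policy-Graph-style Markov chain
    # from observed transitions. `abstract` is a hypothetical stand-in for
    # the paper's state-abstraction step.
    from collections import defaultdict

    def build_apg(transitions, policy, abstract):
        """transitions: iterable of (state, next_state) pairs.
        policy: maps a state to the deterministic policy's action.
        abstract: maps a concrete state to an abstract-state label.
        Returns (edge_probs, node_actions): transition probabilities
        between abstract states and the action most associated with each."""
        counts = defaultdict(lambda: defaultdict(int))
        actions = defaultdict(lambda: defaultdict(int))
        for s, s_next in transitions:
            a, b = abstract(s), abstract(s_next)
            counts[a][b] += 1
            actions[a][policy(s)] += 1
        edge_probs = {
            a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()
        }
        node_actions = {a: max(acts, key=acts.get) for a, acts in actions.items()}
        return edge_probs, node_actions

    # Toy usage: 1-D states abstracted by sign; policy steps toward zero.
    ts = [(3, 2), (2, 1), (1, 0), (-2, -1), (-1, 0)]
    print(build_apg(ts, policy=lambda s: -1 if s > 0 else 1,
                    abstract=lambda s: "pos" if s > 0 else "nonpos"))

Counting transitions between abstract states is what makes each individual decision explainable in the context of where the policy is expected to go next.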
Deep Reinforcement Learning-Based Channel Allocation for Wireless LANs with Graph Convolutional Networks
Last year, the IEEE 802.11 Extremely High Throughput Study Group (EHT Study
Group) was established to initiate discussions on new IEEE 802.11 features.
Coordinated control methods for access points (APs) in wireless local area
networks (WLANs) are discussed in the EHT Study Group. The present study
proposes a deep reinforcement learning-based channel allocation scheme using
graph convolutional networks (GCNs). As the deep reinforcement learning method,
we use the well-known double deep Q-network (DDQN). In densely deployed WLANs,
the number of possible AP topologies is extremely large, and thus we
extract the features of the topological structures based on GCNs. We apply GCNs
to a contention graph where APs within their carrier sensing ranges are
connected to extract the features of carrier sensing relationships.
Additionally, to improve the learning speed especially in an early stage of
learning, we employ a game theory-based method to collect the training data
independently of the neural network model. The simulation results indicate that
the proposed method allocates channels more appropriately than extant methods.
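To make the pipeline concrete, the following numpy sketch runs one symmetrically normalized GCN layer over a small AP contention graph and forms the standard double-DQN target. The graph, feature sizes, weight shapes, and reward are illustrative assumptions, not the paper's actual architecture.

    # Minimal sketch: GCN layer over a contention graph feeding per-AP
    # Q-values for channel choices, plus the double-DQN target rule.
    import numpy as np

    def gcn_layer(adj, feats, weight):
        """Symmetrically normalized graph convolution:
        relu(D^-1/2 (A + I) D^-1/2 X W)."""
        a_hat = adj + np.eye(adj.shape[0])                # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
        return np.maximum(0.0, d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight)

    rng = np.random.default_rng(0)
    adj = np.array([[0, 1, 0],                            # 3 APs; an edge means
                    [1, 0, 1],                            # two APs are within
                    [0, 1, 0]], dtype=float)              # carrier sensing range
    feats = rng.normal(size=(3, 4))                       # per-AP input features
    h = gcn_layer(adj, feats, rng.normal(size=(4, 8)))
    q_online = h @ rng.normal(size=(8, 2))                # Q-values, 2 channels
    q_target = h @ rng.normal(size=(8, 2))                # separate target net

    # Double DQN: the online net selects the action, the target net evaluates it.
    best = q_online.argmax(axis=1)
    reward, gamma = 1.0, 0.95
    td_target = reward + gamma * q_target[np.arange(3), best]
    print(td_target)

The contention graph, rather than raw AP coordinates, is what the convolution aggregates over, which is how the carrier-sensing relationships enter the learned features.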
Representation Learning on Graphs: A Reinforcement Learning Application
In this work, we study value function approximation in reinforcement learning
(RL) problems with high dimensional state or action spaces via a generalized
version of representation policy iteration (RPI). We consider the limitations
of proto-value functions (PVFs) at accurately approximating the value function
in low dimensions, and we highlight the importance of feature learning for
improved low-dimensional value function approximation. Then, we adopt different
representation learning algorithms on graphs to learn the basis functions that
best represent the value function. We empirically show that node2vec, an
algorithm for scalable feature learning in networks, and the Variational Graph
Auto-Encoder consistently outperform the commonly used smooth proto-value
functions in a low-dimensional feature space.
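The core idea admits a very small sketch: treat learned node embeddings as basis functions and project value estimates onto them. Here a random matrix stands in for node2vec or VGAE output, and the toy target values replace actual returns; both are illustrative assumptions.

    # Minimal sketch: node embeddings as basis functions for value
    # approximation, in the spirit of representation policy iteration.
    import numpy as np

    rng = np.random.default_rng(1)
    n_states, k = 20, 4                       # 20 graph nodes, 4-dim basis
    phi = rng.normal(size=(n_states, k))      # placeholder for node2vec/VGAE embeddings
    v_true = np.linspace(0.0, 1.0, n_states)  # toy target values (returns)

    # Ordinary least squares: find weights w with V ≈ phi @ w.
    w, *_ = np.linalg.lstsq(phi, v_true, rcond=None)
    v_hat = phi @ w
    print("approximation error:", np.linalg.norm(v_hat - v_true))

Swapping the basis matrix phi is the only change between PVFs and learned embeddings, which is why the comparison isolates the contribution of the representation itself.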
Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
In this paper, we present a hierarchical path planning framework called SG-RL
(subgoal graphs-reinforcement learning), to plan rational paths for agents
maneuvering in continuous and uncertain environments. By "rational", we mean
(1) efficient path planning that eliminates first-move lags, and
(2) collision-free, smooth trajectories that satisfy the agents' kinematic
constraints. SG-RL works in a
two-level manner. At the first level, SG-RL uses a geometric path-planning
method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract
paths, also called subgoal sequences. At the second level, SG-RL uses an RL
method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal
motion-planning policies which can generate kinematically feasible and
collision-free trajectories between adjacent subgoals. The first advantage of
the proposed method is that SSG alleviates the sparse-reward and local-minima
problems faced by RL agents; thus, LSPI can be used to generate paths in
complex environments. The second advantage is that, when the environment
changes slightly (e.g., when unexpected obstacles appear), SG-RL does not need to
reconstruct subgoal graphs and replan subgoal sequences using SSG, since LSPI
can deal with uncertainties by exploiting its generalization ability to handle
changes in environments. Simulation experiments in representative scenarios
demonstrate that, compared with existing methods, SG-RL can work well on
large-scale maps with relatively low action-switching frequencies and shorter
path lengths, and SG-RL can deal with small changes in environments. We further
demonstrate that the design of reward functions and the types of training
environments are important factors for learning feasible policies.
Comment: 20 pages
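The two-level structure can be sketched in a few lines of Python: a high-level search over the subgoal graph, then a low-level controller between adjacent subgoals. BFS stands in for Simple Subgoal Graphs and the proportional step stands in for the LSPI-learned motion policy; both are simplifying assumptions.

    # Minimal sketch of SG-RL's hierarchy: abstract path first, local motion second.
    from collections import deque

    def subgoal_sequence(graph, start, goal):
        """Shortest subgoal path via BFS over an unweighted subgoal graph."""
        parent, frontier = {start: None}, deque([start])
        while frontier:
            node = frontier.popleft()
            if node == goal:                      # reconstruct path back to start
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in graph[node]:
                if nxt not in parent:
                    parent[nxt] = node
                    frontier.append(nxt)
        return None

    def move_between(pos, subgoal, step=0.5):
        """Placeholder low-level policy: step toward the next subgoal."""
        return tuple(p + step * (g - p) for p, g in zip(pos, subgoal))

    graph = {(0, 0): [(2, 0)], (2, 0): [(2, 2)], (2, 2): []}
    print(subgoal_sequence(graph, (0, 0), (2, 2)))   # [(0, 0), (2, 0), (2, 2)]
    print(move_between((0.0, 0.0), (2, 0)))          # (1.0, 0.0)

Because only the low-level controller touches the continuous environment, small changes such as new obstacles are absorbed by the learned policy's generalization rather than by replanning the subgoal sequence.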
Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning
Finding tight bounds on the optimal solution is a critical element of
practical solution methods for discrete optimization problems. In the last
decade, decision diagrams (DDs) have brought a new perspective on obtaining
upper and lower bounds that can be significantly better than classical bounding
mechanisms, such as linear relaxations. It is well known that the quality of
the bounds achieved through this flexible bounding method is highly reliant on
the ordering of variables chosen for building the diagram, and finding an
ordering that optimizes standard metrics is an NP-hard problem. In this paper,
we propose an innovative and generic approach based on deep reinforcement
learning for obtaining an ordering for tightening the bounds obtained with
relaxed and restricted DDs. We apply the approach to both the Maximum
Independent Set Problem and the Maximum Cut Problem. Experimental results on
synthetic instances show that the deep reinforcement learning approach, by
achieving tighter objective function bounds, generally outperforms ordering
methods commonly used in the literature when the distribution of instances is
known. To the best knowledge of the authors, this is the first paper to apply
machine learning to directly improve relaxation bounds obtained by
general-purpose bounding mechanisms for combinatorial optimization problems.
Comment: Accepted and presented at AAAI'19.
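The shape of the learning problem can be illustrated with a small sketch in which an agent builds a variable ordering one variable at a time and is rewarded by the tightness of the resulting bound. Tabular Q-learning and the stub bound function are stand-ins for the paper's deep RL agent and actual decision-diagram construction; both are assumptions for illustration only.

    # Minimal sketch: learn a DD variable ordering by rewarding tight bounds.
    import random

    def relaxed_dd_bound(ordering):
        """Placeholder bound: pretend near-sorted orderings give tighter bounds."""
        return sum(abs(v - i) for i, v in enumerate(ordering))

    variables = list(range(4))
    q = {}                                    # Q[(variables placed so far, next var)]
    alpha, epsilon = 0.1, 0.2
    for _ in range(2000):
        remaining, ordering = set(variables), []
        while remaining:                      # build one complete ordering
            state = tuple(ordering)
            choices = list(remaining)
            if random.random() < epsilon:
                var = random.choice(choices)
            else:
                var = max(choices, key=lambda v: q.get((state, v), 0.0))
            ordering.append(var)
            remaining.discard(var)
        reward = -relaxed_dd_bound(ordering)  # tighter bound => higher reward
        for i, var in enumerate(ordering):    # credit every decision on the path
            key = (tuple(ordering[:i]), var)
            q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))

    greedy, remaining = [], set(variables)    # read out the learned ordering
    while remaining:
        var = max(remaining, key=lambda v: q.get((tuple(greedy), v), 0.0))
        greedy.append(var)
        remaining.discard(var)
    print("learned ordering:", greedy)

Framing ordering construction as sequential decision-making is what lets a learned policy replace the hand-crafted ordering heuristics commonly used in the DD literature.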