Batch Policy Learning under Constraints
When learning policies for real-world domains, two important questions arise:
(i) how to efficiently use pre-collected off-policy, non-optimal behavior data;
and (ii) how to mediate among different competing objectives and constraints.
We thus study the problem of batch policy learning under multiple constraints,
and offer a systematic solution. We first propose a flexible meta-algorithm
that admits any batch reinforcement learning and online learning procedure as
subroutines. We then present a specific algorithmic instantiation and provide
performance guarantees for the main objective and all constraints. To certify
constraint satisfaction, we propose a new and simple method for off-policy
policy evaluation (OPE) and derive PAC-style bounds. Our algorithm achieves
strong empirical results in different domains, including a challenging
simulated car-driving problem subject to multiple constraints such as lane
keeping and smooth driving. We also show experimentally that our OPE method
outperforms other popular OPE techniques on a standalone basis, especially in a
high-dimensional setting.
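The abstract leaves the meta-algorithm unspecified; a common pattern for constrained batch learning of this kind is a Lagrangian game that alternates a batch-RL "best response" with an online update of the constraint multipliers. The sketch below is a minimal illustration under that assumption; `batch_rl`, `evaluate_ope`, and all parameter values are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

def constrained_batch_learning(dataset, batch_rl, evaluate_ope,
                               n_constraints, iters=100, bound=10.0, lr=0.1):
    """Lagrangian-style meta-algorithm sketch: alternate a batch-RL best
    response with an online (exponentiated-gradient) multiplier update.

    batch_rl(dataset, lam)     -> policy minimizing cost + lam . constraints
    evaluate_ope(policy, data) -> off-policy estimates of constraint values
    Both subroutines can be any batch RL / OPE method, per the abstract.
    """
    lam = np.ones(n_constraints)                # one multiplier per constraint
    policies = []
    for _ in range(iters):
        pi = batch_rl(dataset, lam)             # best response to current lam
        g = evaluate_ope(pi, dataset)           # OPE certifies constraint values
        lam = lam * np.exp(lr * np.asarray(g))  # raise lam on violated constraints
        lam = bound * lam / lam.sum()           # keep multipliers bounded
        policies.append(pi)
    # The returned solution is the uniform mixture over the collected iterates.
    return policies
```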
Deep Q-Learning versus Proximal Policy Optimization: Performance Comparison in a Material Sorting Task
This paper presents a comparison between two well-known deep Reinforcement
Learning (RL) algorithms, Deep Q-Learning (DQN) and Proximal Policy
Optimization (PPO), in a simulated production system. We utilize a Petri Net
(PN)-based simulation environment, which was previously proposed in related
work. The performance of the two algorithms is compared based on several
evaluation metrics, including average percentage of correctly assembled and
sorted products, average episode length, and percentage of successful episodes.
The results show that PPO outperforms DQN in terms of all evaluation metrics.
The study highlights the advantages of policy-based algorithms in problems with
high-dimensional state and action spaces. The study contributes to the field of
deep RL in the context of production systems by providing insights into the
effectiveness of different algorithms and their suitability for different
tasks.
Comment: Submitted and accepted version for the 32nd International Symposium
on Industrial Electronics (ISIE), Helsinki, Finland.
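The Petri-net simulation environment from the related work is not reproduced here; as a hedged illustration of how such a DQN-versus-PPO comparison can be set up, the sketch below trains both algorithms with stable-baselines3 on a stand-in Gymnasium task and compares mean episode return. The environment, timestep budget, and evaluation settings are placeholder assumptions, not the paper's setup.

```python
import gymnasium as gym
from stable_baselines3 import DQN, PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Stand-in task; the paper uses a Petri-net-based sorting simulation instead.
ENV_ID = "CartPole-v1"

results = {}
for name, algo in [("DQN", DQN), ("PPO", PPO)]:
    env = gym.make(ENV_ID)
    model = algo("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)  # placeholder training budget
    mean_ret, std_ret = evaluate_policy(model, env, n_eval_episodes=20)
    results[name] = (mean_ret, std_ret)
    env.close()

for name, (mean_ret, std_ret) in results.items():
    print(f"{name}: {mean_ret:.1f} +/- {std_ret:.1f}")
```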
Count-Based Exploration in Feature Space for Reinforcement Learning
We introduce a new count-based optimistic exploration algorithm for
Reinforcement Learning (RL) that is feasible in environments with
high-dimensional state-action spaces. The success of RL algorithms in these
domains depends crucially on generalisation from limited training experience.
Function approximation techniques enable RL agents to generalise in order to
estimate the value of unvisited states, but at present few methods enable
generalisation regarding uncertainty. This has prevented the combination of
scalable RL algorithms with efficient exploration strategies that drive the
agent to reduce its uncertainty. We present a new method for computing a
generalised state visit-count, which allows the agent to estimate the
uncertainty associated with any state. Our φ-pseudocount achieves
generalisation by exploiting the same feature representation of the state space
that is used for value function approximation. States whose features have been
observed less frequently are deemed more uncertain. The φ-Exploration-Bonus
algorithm rewards the agent for exploring in feature space rather than in the
untransformed state space. The method is simpler and less computationally
expensive than some previous proposals, and achieves near state-of-the-art
results on high-dimensional RL benchmarks.
Comment: Twenty-sixth International Joint Conference on Artificial
Intelligence (IJCAI-17), 8 pages, 1 figure.
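As a rough illustration of a feature-based visit count (a simplified stand-in, not the paper's exact φ-pseudocount estimator), the sketch below keeps one counter per binary feature and scores a state by the count of its least-visited active feature, from which an optimistic exploration bonus is derived. The feature map, bonus coefficient, and aggregation rule are all illustrative assumptions.

```python
import numpy as np

class FeatureVisitCount:
    """Simplified feature-space visit count over a binary feature map phi(s);
    a stand-in for, not a reproduction of, the paper's pseudocount."""

    def __init__(self, n_features, beta=0.05):
        self.counts = np.zeros(n_features)
        self.beta = beta  # exploration-bonus coefficient (assumed value)

    def update(self, phi_s):
        # Increment the counter of every feature active in this state.
        self.counts += phi_s

    def pseudocount(self, phi_s):
        # A state is only as familiar as its least-visited active feature.
        active = self.counts[phi_s > 0]
        return active.min() if active.size else 0.0

    def bonus(self, phi_s):
        # Optimistic bonus that shrinks as the pseudocount grows.
        return self.beta / np.sqrt(self.pseudocount(phi_s) + 1.0)

# Usage: augment the reward before the value-function update, e.g.
#   counter.update(phi_s)
#   r_augmented = r + counter.bonus(phi_next)
```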
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances
in deep learning for learning feature representations with reinforcement
learning. Some notable examples include training agents to play Atari games
based on raw pixel data and to acquire advanced manipulation skills using raw
sensory inputs. However, it has been difficult to quantify progress in the
domain of continuous control due to the lack of a commonly adopted benchmark.
In this work, we present a benchmark suite of continuous control tasks,
including classic tasks like cart-pole swing-up, tasks with very high state and
action dimensionality such as 3D humanoid locomotion, tasks with partial
observations, and tasks with hierarchical structure. We report novel findings
based on the systematic evaluation of a range of implemented reinforcement
learning algorithms. Both the benchmark and reference implementations are
released at https://github.com/rllab/rllab in order to facilitate experimental
reproducibility and to encourage adoption by other researchers.
Comment: 14 pages, ICML 2016.
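The repository linked above ships reference implementations; the following is a hedged sketch of running one benchmark task with rllab, based on the usage pattern documented in that repository. Module paths and argument values are recalled assumptions and may differ across versions.

```python
# Sketch of one rllab benchmark run; paths and settings are assumed from the
# repository's documented example and may vary by version.
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

env = normalize(CartpoleEnv())                       # classic cart-pole task
policy = GaussianMLPPolicy(env_spec=env.spec,        # Gaussian MLP policy
                           hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)  # variance-reduction baseline

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=4000,      # samples per iteration (assumed setting)
    max_path_length=100,
    n_itr=40,
    discount=0.99,
)
algo.train()
```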