Anderson acceleration for reinforcement learning
Anderson (1965) acceleration is an old and simple method for accelerating the computation of a fixed point. However, as far as we know and quite surprisingly, it has never been applied to dynamic programming or reinforcement learning. In this paper, we briefly explain what Anderson acceleration is and how it can be applied to value iteration, supported by preliminary experiments showing a significant speed-up of convergence, which we critically discuss. We also discuss how this idea could be applied more generally to (deep) reinforcement learning.
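A minimal sketch of the idea, assuming a tabular MDP with a transition tensor P of shape (A, S, S) and a reward matrix R of shape (S, A); the memory size m, regularisation, and stopping rule are illustrative choices, not the authors' exact scheme:

```python
import numpy as np

def bellman_operator(V, P, R, gamma):
    """Bellman optimality operator for a tabular MDP.
    P: transitions of shape (A, S, S), R: rewards of shape (S, A)."""
    # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum("asp,p->sa", P, V)
    return Q.max(axis=1)

def anderson_value_iteration(P, R, gamma, m=5, iters=500, tol=1e-10, reg=1e-8):
    """Value iteration accelerated with Anderson mixing over the last m residuals."""
    S = R.shape[0]
    V = np.zeros(S)
    Vs, Fs = [], []                          # recent T(V_k) and residuals T(V_k) - V_k
    for _ in range(iters):
        TV = bellman_operator(V, P, R, gamma)
        Vs.append(TV)
        Fs.append(TV - V)
        Vs, Fs = Vs[-(m + 1):], Fs[-(m + 1):]
        F = np.stack(Fs, axis=1)             # (S, k) matrix of recent residuals
        # Mixing weights: minimise ||F @ alpha|| subject to sum(alpha) = 1.
        G = F.T @ F + reg * np.eye(F.shape[1])
        w = np.linalg.solve(G, np.ones(F.shape[1]))
        alpha = w / w.sum()
        V_next = np.stack(Vs, axis=1) @ alpha
        if np.max(np.abs(V_next - V)) < tol:
            return V_next
        V = V_next
    return V
```

On the first iteration only one residual is stored, so the update reduces to a plain value-iteration step; the acceleration kicks in once several past iterates are available to mix.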
Convergence of Online and Approximate Multiple-Step Lookahead Policy Iteration
Learning to infer: RL-based search for DNN primitive selection on Heterogeneous Embedded Systems
Deep Learning is increasingly being adopted by industry for computer vision applications running on embedded devices. While Convolutional Neural Networks' accuracy has reached a mature and remarkable state, inference latency and throughput are a major concern, especially when targeting low-cost and low-power embedded platforms. CNNs' inference latency may become a bottleneck for Deep Learning adoption by industry, as it is a crucial specification for many real-time processes. Furthermore, deployment of CNNs across heterogeneous platforms presents major compatibility issues due to vendor-specific technology and acceleration libraries. In this work, we present QS-DNN, a fully automatic search based on Reinforcement Learning which, combined with an inference engine optimizer, efficiently explores the design space and empirically finds the optimal combinations of libraries and primitives to speed up the inference of CNNs on heterogeneous embedded devices. We show that an optimized combination can achieve a 45x speedup in inference latency on CPU compared to a dependency-free baseline, and 2x on average on GPGPU compared to the best vendor library. Further, we demonstrate that the quality of results and time-to-solution are much better than with Random Search, achieving up to 15x better results for a short-time search.
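As a rough illustration of how such a search could look, the sketch below runs epsilon-greedy Q-learning over per-layer backend choices, with a stubbed latency function standing in for the on-device inference engine. The layer names, backend names, and reward shaping are hypothetical placeholders, not the QS-DNN implementation:

```python
import random

LAYERS = ["conv1", "conv2", "conv3", "fc"]                            # hypothetical layers
BACKENDS = ["cpu_gemm", "cpu_winograd", "gpu_vendor", "gpu_opencl"]   # hypothetical primitives

def measure_latency(assignment):
    """Stub standing in for an on-device benchmark of one full configuration."""
    base = {"cpu_gemm": 5.0, "cpu_winograd": 3.5, "gpu_vendor": 2.0, "gpu_opencl": 2.5}
    lat = sum(base[b] for b in assignment)
    # Crude penalty for switching backends between layers (data-transfer cost).
    lat += sum(1.0 for a, b in zip(assignment, assignment[1:]) if a != b)
    return lat

def q_learning_search(episodes=500, eps=0.2, lr=0.5, seed=0):
    """Epsilon-greedy Q-learning over per-layer backend choices; the reward
    is the negative measured latency of the whole configuration."""
    rng = random.Random(seed)
    Q = {(i, b): 0.0 for i in range(len(LAYERS)) for b in BACKENDS}
    best, best_lat = None, float("inf")
    for _ in range(episodes):
        assignment = []
        for i in range(len(LAYERS)):
            if rng.random() < eps:
                choice = rng.choice(BACKENDS)
            else:
                choice = max(BACKENDS, key=lambda b: Q[(i, b)])
            assignment.append(choice)
        lat = measure_latency(assignment)
        reward = -lat
        for i, b in enumerate(assignment):   # terminal reward credited to every step
            Q[(i, b)] += lr * (reward - Q[(i, b)])
        if lat < best_lat:
            best, best_lat = assignment, lat
    return best, best_lat

if __name__ == "__main__":
    print(q_learning_search())
```

In the real system the benchmark is an actual measurement on the target device, which is exactly why a learned search pays off over random sampling: each measurement is expensive.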
Model-Based Policy Search for Automatic Tuning of Multivariate PID Controllers
PID control architectures are widely used in industrial applications. Despite their low number of open parameters, tuning multiple, coupled PID controllers can become tedious in practice. In this paper, we extend PILCO, a model-based policy search framework, to automatically tune multivariate PID controllers purely based on data observed on an otherwise unknown system. The system's state is extended appropriately to frame the PID policy as a static state feedback policy. This renders PID tuning possible as the solution of a finite-horizon optimal control problem without further a priori knowledge. The framework is applied to the task of balancing an inverted pendulum on a seven-degree-of-freedom robotic arm, thereby demonstrating its capabilities of fast and data-efficient policy learning, even on complex real-world problems.

Comment: Accepted final version to appear in 2017 IEEE International Conference on Robotics and Automation (ICRA).
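The key reformulation is that, once the state is augmented with the error, its integral, and its derivative, a PID controller becomes a static linear feedback on that augmented state. The sketch below illustrates this on a toy damped double integrator with a random-search tuner; the plant, cost weights, and tuning loop are assumed placeholders for PILCO's learned GP dynamics model and analytic policy optimization:

```python
import numpy as np

def pid_as_state_feedback(gains, z):
    """A PID law u = Kp*e + Ki*int(e) + Kd*de/dt written as static feedback
    u = gains . z on the augmented state z = [e, int(e), de/dt]."""
    return float(np.dot(gains, z))

def rollout_cost(gains, horizon=500, dt=0.01, setpoint=1.0):
    """Finite-horizon quadratic cost of the PID gains on a toy plant;
    this stands in for rollouts under PILCO's learned dynamics model."""
    x, v = 0.0, 0.0
    integ, prev_e = 0.0, setpoint
    cost = 0.0
    for _ in range(horizon):
        e = setpoint - x
        integ += e * dt
        de = (e - prev_e) / dt
        u = pid_as_state_feedback(gains, np.array([e, integ, de]))
        a = u - 0.5 * v              # toy dynamics: applied force minus damping
        v += a * dt
        x += v * dt
        prev_e = e
        cost += e ** 2 + 1e-3 * u ** 2
    return cost

def tune_gains(iters=300, seed=0):
    """Random-search tuning of [Kp, Ki, Kd]; PILCO instead optimises an
    analytic expected cost under its probabilistic model."""
    rng = np.random.default_rng(seed)
    best = np.array([1.0, 0.0, 0.0])
    best_cost = rollout_cost(best)
    for _ in range(iters):
        cand = np.abs(best + rng.normal(scale=0.3, size=3))
        c = rollout_cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

if __name__ == "__main__":
    gains, cost = tune_gains()
    print("tuned [Kp, Ki, Kd]:", gains, "cost:", cost)
```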