Learning to infer: RL-based search for DNN primitive selection on Heterogeneous Embedded Systems
Deep Learning is increasingly being adopted by industry for computer vision
applications running on embedded devices. While Convolutional Neural Networks'
accuracy has achieved a mature and remarkable state, inference latency and
throughput are a major concern especially when targeting low-cost and low-power
embedded platforms. CNNs' inference latency may become a bottleneck for Deep
Learning adoption by industry, as it is a crucial specification for many
real-time processes. Furthermore, deployment of CNNs across heterogeneous
platforms presents major compatibility issues due to vendor-specific technology
and acceleration libraries. In this work, we present QS-DNN, a fully automatic
search based on Reinforcement Learning which, combined with an inference engine
optimizer, efficiently explores the design space and empirically finds
the optimal combinations of libraries and primitives to speed up the inference
of CNNs on heterogeneous embedded devices. We show that an optimized
combination can achieve a 45x speedup in inference latency on CPU compared to a
dependency-free baseline and 2x on average on GPGPU compared to the best vendor
library. Further, we demonstrate that both the quality of results and the
time-to-solution are much better than with Random Search, achieving up to 15x
better results for a short-time search.
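As a rough illustration of the RL-driven design-space search the abstract describes, the Python sketch below treats each network layer as a bandit over candidate convolution primitives and learns from measured latency. The primitive names, the toy cost table standing in for on-device measurement, and the per-layer reward decomposition are all assumptions made for the example, not the paper's actual engine or state encoding.

```python
import random

# Hypothetical candidate kernels per layer; real choices would be the
# vendor-specific library implementations exposed by the inference engine.
PRIMITIVES = ["direct", "im2col+gemm", "winograd", "fft"]
NUM_LAYERS = 10
EPSILON, ALPHA = 0.1, 0.5

# Toy stand-in for the engine optimizer: a fixed cost per (layer, primitive)
# pair; a real search would deploy the network and measure latency on-device.
COST = [{p: random.uniform(1.0, 10.0) for p in PRIMITIVES}
        for _ in range(NUM_LAYERS)]

def measure_latency(config):
    return sum(COST[i][p] for i, p in enumerate(config))

# Q[layer][primitive] tracks a running estimate of (negated) end-to-end
# latency when that primitive is selected for that layer.
Q = [{p: 0.0 for p in PRIMITIVES} for _ in range(NUM_LAYERS)]

def search(episodes=200):
    best, best_lat = None, float("inf")
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the current best kernel per layer
        config = [random.choice(PRIMITIVES) if random.random() < EPSILON
                  else max(Q[i], key=Q[i].get)
                  for i in range(NUM_LAYERS)]
        latency = measure_latency(config)
        for i, p in enumerate(config):  # reward is -latency (faster = better)
            Q[i][p] += ALPHA * (-latency - Q[i][p])
        if latency < best_lat:
            best, best_lat = config, latency
    return best, best_lat

config, latency = search()
print(f"best latency found: {latency:.1f}")
```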
Learning to Teach Reinforcement Learning Agents
In this article we study the transfer learning model of action advice under a
budget. We focus on reinforcement learning teachers providing action advice to
heterogeneous students playing the game of Pac-Man under a limited advice
budget. First, we examine several critical factors affecting advice quality in
this setting, such as the average performance of the teacher, its variance and
the importance of reward discounting in advising. The experiments show the
non-trivial importance of the coefficient of variation (CV) as a statistic for
choosing policies that generate advice. The CV statistic relates variance to
the corresponding mean. Second, the article studies policy learning for
distributing advice under a budget. Whereas most methods in the relevant
literature rely on heuristics for advice distribution, we formulate the problem
as a learning one and propose a novel RL algorithm capable of learning when to
advise, adapting to the student and the task at hand. Furthermore, we argue
that learning to advise under a budget is an instance of a more generic
learning problem: Constrained Exploitation Reinforcement Learning.
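To make the CV statistic concrete: CV = sigma / mu over a policy's episode returns, so it penalizes teachers whose strong average hides erratic play. The selection rule in the sketch below (prefer the candidate teacher with the lowest CV) is one plausible reading of how the statistic guides policy choice, not the article's exact procedure, and the return values are invented.

```python
import statistics

def coefficient_of_variation(returns):
    """CV = standard deviation scaled by the mean of episode returns."""
    return statistics.stdev(returns) / statistics.mean(returns)

def pick_teacher(candidates):
    """candidates: policy name -> list of episode returns. Prefer the most
    consistent teacher: a high mean alone can hide erratic play, which
    makes the advice a student receives unreliable."""
    return min(candidates,
               key=lambda name: coefficient_of_variation(candidates[name]))

teachers = {
    "greedy": [310, 150, 480, 90],   # similar mean, wildly varying returns
    "steady": [260, 240, 255, 250],  # slightly lower mean, very consistent
}
print(pick_teacher(teachers))  # -> "steady"
```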
Incorporating Behavioral Constraints in Online AI Systems
AI systems that learn through reward feedback about the actions they take are
increasingly deployed in domains that have significant impact on our daily
life. However, in many cases the online rewards should not be the only guiding
criteria, as there are additional constraints and/or priorities imposed by
regulations, values, preferences, or ethical principles. We detail a novel
online agent that learns a set of behavioral constraints by observation and
uses these learned constraints as a guide when making decisions in an online
setting while still being reactive to reward feedback. To define this agent, we
propose a novel extension of the classical contextual multi-armed bandit
setting and provide a new algorithm, Behavior Constrained Thompson Sampling
(BCTS), that allows for online learning while obeying
exogenous constraints. Our agent learns a constrained policy that implements
the observed behavioral constraints demonstrated by a teacher agent, and then
uses this constrained policy to guide the reward-based online exploration and
exploitation. We characterize the upper bound on the expected regret of the
contextual bandit algorithm that underlies our agent and provide a case study
with real world data in two application domains. Our experiments show that the
designed agent is able to act within the set of behavior constraints without
significantly degrading its overall reward performance.
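A heavily simplified sketch of the idea behind BCTS follows: run ordinary Thompson Sampling, but restrict posterior sampling to the set of arms the learned behavioral constraint permits. The Beta-Bernoulli arms and the fixed `allowed` set are illustrative stand-ins; the paper's algorithm is contextual and learns the constraint from teacher demonstrations.

```python
import random

class ConstrainedTS:
    """Beta-Bernoulli Thompson Sampling restricted to permitted arms."""

    def __init__(self, n_arms, allowed):
        self.alpha = [1] * n_arms  # Beta posterior: successes + 1
        self.beta = [1] * n_arms   # Beta posterior: failures + 1
        self.allowed = allowed     # arms consistent with teacher behavior

    def select(self):
        # sample a plausible reward for each permitted arm, play the best;
        # forbidden arms are never considered, whatever their reward
        samples = {a: random.betavariate(self.alpha[a], self.beta[a])
                   for a in self.allowed}
        return max(samples, key=samples.get)

    def update(self, arm, reward):  # Bernoulli reward in {0, 1}
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward

agent = ConstrainedTS(n_arms=5, allowed={0, 2, 3})  # arms 1 and 4 ruled out
arm = agent.select()
agent.update(arm, reward=1)
```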
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
With advances in reinforcement learning (RL), agents are now being developed
in high-stakes application domains such as healthcare and transportation.
Explaining the behavior of these agents is challenging, as the environments in
which they act have large state spaces, and their decision-making can be
affected by delayed rewards, making it difficult to analyze their behavior. To
address this problem, several approaches have been developed. Some approaches
attempt to convey the global behavior of the agent, describing the
actions it takes in different states. Other approaches devise local
explanations which provide information regarding the agent's decision-making in
a particular state. In this paper, we combine global and local explanation
methods, and evaluate their joint and separate contributions, providing (to the
best of our knowledge) the first user study of combined local and global
explanations for RL agents. Specifically, we augment strategy summaries that
extract important trajectories of states from simulations of the agent with
saliency maps which show what information the agent attends to. Our results
show that the choice of what states to include in the summary (global
information) strongly affects people's understanding of agents: participants
shown summaries that included important states significantly outperformed
participants who were presented with agent behavior in a randomly chosen set of
world-states. We find mixed results with respect to augmenting demonstrations
with saliency maps (local information), as the addition of saliency maps did
not significantly improve performance in most cases. However, we do find some
evidence that saliency maps can help users better understand what information
the agent relies on in its decision making, suggesting avenues for future work
that can further improve explanations of RL agents.
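For intuition about the global half of this combination: strategy-summary methods typically rank states by an importance score and show only the top few. One criterion from that literature is the gap between the best and worst Q-values in a state; whether this matches the summary method used in the paper is an assumption, and the trajectory and Q-values below are invented.

```python
import heapq

def importance(q_values):
    """Gap between best and worst action values in a state: large gaps
    mark decision points where the agent's choice matters most."""
    return max(q_values) - min(q_values)

def summarize(trajectory, k=2):
    """Pick the k most important (state, q_values) pairs from a simulated
    trajectory to show in a global strategy summary."""
    return heapq.nlargest(k, trajectory, key=lambda sq: importance(sq[1]))

trajectory = [
    ("s0", [0.10, 0.10, 0.10]),   # action choice barely matters here
    ("s1", [0.90, -0.50, 0.00]),  # critical decision point
    ("s2", [0.30, 0.20, 0.25]),
]
print([state for state, _ in summarize(trajectory)])  # -> ['s1', 's2']
```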