4,286 research outputs found
A reinforcement learning framework for trajectory prediction under uncertainty and budget constraint
Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0)
Safe Offline Reinforcement Learning with Real-Time Budget Constraints
To promote the safe real-world deployment of Reinforcement Learning (RL),
research on safe RL has made significant progress in recent years. However,
most existing works still focus on the online setting, where risky violations
of the safety budget are likely to be incurred during training. Moreover, in
many real-world applications, the learned policy is required to respond in
real time to dynamically determined safety budgets (i.e., constraint
thresholds). In this paper, we target this real-time budget constraint problem
under the offline setting and propose Trajectory-based REal-time Budget
Inference (TREBI), a novel solution that approaches the problem from the
perspective of trajectory distributions.
Theoretically, we prove an error bound on the estimation of the episodic
reward and cost under the offline setting, thus providing a performance
guarantee for TREBI. Empirical results on a wide range of simulation tasks and
a real-world large-scale advertising application demonstrate TREBI's
capability to solve real-time budget constraint problems in offline settings.

Comment: We propose a method to handle the constraint problem with dynamically
determined safety budgets under the offline setting.
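To make the trajectory-distribution idea concrete, the following is a minimal, hypothetical sketch of budget-constrained trajectory selection: given candidate trajectories with estimated episodic reward and cost, keep only those whose cost fits the budget supplied at decision time and pick the most rewarding one. The function name, the dictionary fields, and the fallback rule are illustrative assumptions, not TREBI's actual algorithm.

```python
def select_trajectory(candidates, budget):
    """Pick the highest-reward candidate whose estimated episodic cost
    stays within the real-time budget; if none fits, fall back to the
    cheapest candidate (an illustrative choice, not TREBI's rule)."""
    feasible = [t for t in candidates if t["cost"] <= budget]
    if feasible:
        return max(feasible, key=lambda t: t["reward"])
    return min(candidates, key=lambda t: t["cost"])

candidates = [
    {"reward": 10.0, "cost": 4.0},
    {"reward": 14.0, "cost": 9.0},
    {"reward": 8.0,  "cost": 2.0},
]
# With budget 5.0, the reward-14.0 trajectory is too costly,
# so the reward-10.0 one is selected.
best = select_trajectory(candidates, budget=5.0)
```

Because the budget enters only at selection time, the same candidate set can serve different budgets in real time, which is the property the abstract emphasizes.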
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured, and changing
environments, constantly facing novel challenges. Therefore, continuous online
adaptation for lifelong learning, together with sample-efficient mechanisms
for adapting to changes in the environment, the constraints, the tasks, or the
robot itself, is crucial. In this work, we propose a novel framework for
probabilistic online motion planning with online adaptation based on a
bio-inspired stochastic recurrent neural network. Using learning signals that
mimic the intrinsic motivation signal of cognitive dissonance, combined with a
mental replay strategy to intensify experiences, the stochastic recurrent
network can learn from few physical interactions and adapt to novel
environments within seconds. We evaluate our online planning and adaptation
framework on an anthropomorphic KUKA LWR arm. Rapid online adaptation is
demonstrated by sample-efficiently learning unknown workspace constraints from
few physical interactions while following given waypoints.

Comment: accepted in Neural Networks
- …