209 research outputs found
Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration
We propose a technique for multi-task learning from demonstration that trains
the controller of a low-cost robotic arm to accomplish several complex picking
and placing tasks, as well as non-prehensile manipulation. The controller is a
recurrent neural network using raw images as input and generating robot arm
trajectories, with the parameters shared across the tasks. The controller also
combines VAE-GAN-based reconstruction with autoregressive multimodal action
prediction. Our results demonstrate that it is possible to learn complex
manipulation tasks, such as picking up a towel, wiping an object, and
depositing the towel to its previous position, entirely from raw images with
direct behavior cloning. We show that weight sharing and reconstruction-based
regularization substantially improve generalization and robustness, and
training on multiple tasks simultaneously increases the success rate on all
tasks.
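The loss structure described above can be sketched in a few lines (the linear encoder/decoder, the shapes, and the 0.1 reconstruction weight are illustrative placeholders for the paper's recurrent VAE-GAN architecture):

```python
import numpy as np

# All shapes, the linear maps, and the 0.1 reconstruction weight below are
# illustrative assumptions, not the paper's recurrent VAE-GAN architecture.
rng = np.random.default_rng(0)
D_IMG, D_FEAT, D_ACT, N_TASKS = 16, 8, 4, 3

W_enc = rng.normal(scale=0.1, size=(D_FEAT, D_IMG))  # encoder, shared across tasks
W_dec = rng.normal(scale=0.1, size=(D_IMG, D_FEAT))  # reconstruction head
W_act = rng.normal(scale=0.1, size=(D_ACT, D_FEAT))  # action head, shared across tasks

def loss(img, act):
    feat = W_enc @ img
    bc = np.mean((W_act @ feat - act) ** 2)     # behavior-cloning term
    recon = np.mean((W_dec @ feat - img) ** 2)  # reconstruction regularizer
    return bc + 0.1 * recon

# Demonstrations from several tasks are mixed into one training stream, so
# every gradient step would update the same shared parameters.
batch = [(rng.normal(size=D_IMG), rng.normal(size=D_ACT)) for _ in range(N_TASKS)]
total = sum(loss(img, act) for img, act in batch)
```

Minimizing `total` with any gradient-based optimizer would train one controller for all tasks at once, which is the weight-sharing effect the abstract credits for improved generalization and robustness.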
Latent Plans for Task-Agnostic Offline Reinforcement Learning
Everyday tasks that span long horizons and comprise a sequence of multiple
implicit subtasks still pose a major challenge in offline robot control. While a
number of prior methods aimed to address this setting with variants of
imitation and offline reinforcement learning, the learned behavior is typically
narrow and often struggles to reach configurable long-horizon goals. As both
paradigms have complementary strengths and weaknesses, we propose a novel
hierarchical approach that combines the strengths of both methods to learn
task-agnostic long-horizon policies from high-dimensional camera observations.
Concretely, we combine a low-level policy that learns latent skills via
imitation learning and a high-level policy learned from offline reinforcement
learning for skill-chaining the latent behavior priors. Experiments in various
simulated and real robot control tasks show that our formulation enables
producing previously unseen combinations of skills to reach temporally extended
goals by "stitching" together latent skills through goal chaining with an
order-of-magnitude improvement in performance upon state-of-the-art baselines.
We even learn one multi-task visuomotor policy for 25 distinct manipulation
tasks in the real world which outperforms both imitation learning and offline
reinforcement learning techniques.
Comment: CoRL 2022. Project website: http://tacorl.cs.uni-freiburg.de
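The two-level scheme described above can be sketched as follows (the linear high-level selector, the toy skill decoder, and the integrator dynamics are all illustrative stand-ins for the paper's learned networks):

```python
import numpy as np

# Illustrative sketch only: a high-level policy (trained with offline RL in
# the paper) picks a latent skill for the current state and goal, and a frozen
# low-level policy (trained with imitation learning) decodes that skill into a
# short action sequence. Chaining several decodes reaches a distant goal.
rng = np.random.default_rng(0)
D_STATE, D_LATENT, HORIZON = 4, 2, 5

def high_level(state, goal):
    # stand-in for the offline-RL skill selector: a fixed squashed linear map
    return np.tanh(0.5 * (goal - state)[:D_LATENT])

def low_level(state, z):
    # stand-in for the imitation-learned skill decoder: z -> HORIZON actions
    return [0.2 * z.sum() * np.ones(D_STATE) for _ in range(HORIZON)]

state, goal = np.zeros(D_STATE), np.ones(D_STATE)
for _ in range(3):                      # "stitch" three latent skills in a row
    z = high_level(state, goal)
    for a in low_level(state, z):
        state = state + 0.1 * a         # toy integrator dynamics
```

Each outer iteration corresponds to one latent skill in the chain; the high-level policy only ever reasons over skills, never raw actions, which is what lets it recombine behaviors it has never seen together.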
Probabilistic movement primitives for coordination of multiple human-robot collaborative tasks
This paper proposes an interaction learning method for collaborative and assistive robots based on movement primitives. The method allows for both action recognition and human-robot movement coordination. It uses imitation learning to construct a mixture model of human-robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations. The method is scalable in relation to the number of tasks and can learn nonlinear correlations between the trajectories that describe the human-robot interaction. We evaluated the method experimentally with a lightweight robot arm in a variety of assistive scenarios, including the coordinated handover of a bottle to a human and the collaborative assembly of a toolbox. Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories.
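The core inference step, predicting the robot's assistive trajectory from a human observation, can be illustrated with Gaussian conditioning on jointly modeled trajectory weights (the numbers below are toy values; the full method additionally maintains a mixture over interaction primitives to cover multiple tasks):

```python
import numpy as np

# Toy illustration: model human and robot trajectory weights jointly as one
# Gaussian, then condition the robot's weights on the observed human weights.
mu = np.array([0.0, 0.0, 1.0, 1.0])          # [human(2) ; robot(2)] mean
Sigma = np.array([[1.0, 0.2, 0.5, 0.1],
                  [0.2, 1.0, 0.1, 0.5],
                  [0.5, 0.1, 1.0, 0.2],
                  [0.1, 0.5, 0.2, 1.0]])      # joint covariance (toy values)

h_obs = np.array([0.8, -0.3])                # observed human weights

mu_h, mu_r = mu[:2], mu[2:]
S_hh, S_rh = Sigma[:2, :2], Sigma[2:, :2]

# Conditional mean of the robot weights given the human observation:
# mu_r|h = mu_r + S_rh S_hh^{-1} (h - mu_h)
mu_r_given_h = mu_r + S_rh @ np.linalg.solve(S_hh, h_obs - mu_h)
```

Because human and robot weights are correlated in `Sigma`, observing the human shifts the robot's predicted weights away from their prior mean, which is exactly how the assistive trajectory is inferred from a partial human observation.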
Behavior Retrieval: Few-Shot Imitation Learning by Querying Unlabeled Datasets
Enabling robots to learn novel visuomotor skills in a data-efficient manner
remains an unsolved problem with myriad challenges. A popular paradigm for
tackling this problem is through leveraging large unlabeled datasets that have
many behaviors in them and then adapting a policy to a specific task using a
small amount of task-specific human supervision (i.e. interventions or
demonstrations). However, how best to leverage the narrow task-specific
supervision and balance it with offline data remains an open question. Our key
insight in this work is that task-specific data not only provides new data for
an agent to train on but can also inform the type of prior data the agent
should use for learning. Concretely, we propose a simple approach that uses a
small amount of downstream expert data to selectively query relevant behaviors
from an offline, unlabeled dataset (including many sub-optimal behaviors). The
agent is then jointly trained on the expert and queried data. We observe that
our method learns to query only the transitions relevant to the task, filtering
out sub-optimal or task-irrelevant data. By doing so, it is able to learn more
effectively from the mix of task-specific and offline data compared to naively
mixing the data or only using the task-specific data. Furthermore, we find that
our simple querying approach outperforms more complex goal-conditioned methods
by 20% across simulated and real robotic manipulation tasks from images. See
https://sites.google.com/view/behaviorretrieval for videos and code.
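A minimal sketch of the retrieval step, assuming transitions have already been embedded into a feature space (the paper learns that embedding; the Gaussian features, the distance metric, and the budget k here are illustrative):

```python
import numpy as np

# Illustrative sketch: score every offline transition by similarity to the
# small expert set, keep the closest ones, and train on the union. The random
# "embeddings" and the top-k rule are assumptions for this toy example.
rng = np.random.default_rng(0)
offline = rng.normal(size=(1000, 8))           # unlabeled transitions (embedded)
expert = rng.normal(loc=2.0, size=(20, 8))     # small expert set (embedded)

# distance of each offline transition to its nearest expert embedding
d = np.linalg.norm(offline[:, None, :] - expert[None, :, :], axis=-1)
score = -d.min(axis=1)

k = 100                                         # retrieval budget
retrieved = offline[np.argsort(score)[-k:]]     # top-k most expert-like
train_set = np.concatenate([expert, retrieved]) # joint training data
```

The filtering effect the abstract describes corresponds to the top-k cut: transitions far from every expert embedding (sub-optimal or task-irrelevant behavior) never enter `train_set`.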
Bayesian Disturbance Injection: Robust Imitation Learning of Flexible Policies
Scenarios requiring humans to choose from multiple seemingly optimal actions
are commonplace; however, standard imitation learning often fails to capture
this behavior. Instead, an over-reliance on replicating expert actions induces
inflexible and unstable policies, leading to poor generalizability in
applications. To address this problem, this paper presents the first imitation
learning framework that incorporates Bayesian variational inference for
learning flexible non-parametric multi-action policies, while simultaneously
robustifying the policies against sources of error, by introducing and
optimizing disturbances to create a richer demonstration dataset. This
combinatorial approach forces the policy to adapt to challenging situations,
enabling stable multi-action policies to be learned efficiently. The
effectiveness of our proposed method is evaluated through simulations and
real-robot experiments for a table-sweep task using the UR3 6-DOF robotic arm.
Results show that, through improved flexibility and robustness, the learning
performance and control safety are better than those of comparison methods.
Comment: 7 pages. Accepted by the 2021 International Conference on Robotics
and Automation (ICRA 2021).
Bayesian Disturbance Injection: Robust Imitation Learning of Flexible Policies for Robot Manipulation
Humans demonstrate a variety of interesting behavioral characteristics when
performing tasks, such as selecting between seemingly equivalent optimal
actions, performing recovery actions when deviating from the optimal
trajectory, or moderating actions in response to sensed risks. However,
imitation learning, which attempts to teach robots to perform these same tasks
from observations of human demonstrations, often fails to capture such
behavior. Specifically, commonly used learning algorithms embody inherent
contradictions between the learning assumptions (e.g., single optimal action)
and actual human behavior (e.g., multiple optimal actions), thereby limiting
robot generalizability, applicability, and demonstration feasibility. To
address this, this paper proposes designing imitation learning algorithms with
a focus on utilizing human behavioral characteristics, thereby embodying
principles for capturing and exploiting actual demonstrator behavioral
characteristics. This paper presents the first imitation learning framework,
Bayesian Disturbance Injection (BDI), that typifies human behavioral
characteristics by incorporating model flexibility, robustification, and risk
sensitivity. Bayesian inference is used to learn flexible non-parametric
multi-action policies, while simultaneously robustifying policies by injecting
risk-sensitive disturbances to induce human recovery action and ensuring
demonstration feasibility. Our method is evaluated through risk-sensitive
simulations and real-robot experiments (e.g., table-sweep task, shaft-reach
task and shaft-insertion task) using the UR5e 6-DOF robotic arm, to demonstrate
the improved characterisation of behavior. Results show significant improvement
in task performance through improved flexibility, robustness, and
demonstration feasibility.
Comment: 69 pages, 9 figures. Accepted by the Elsevier journal Neural Networks.
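The inject-and-redemonstrate loop that both BDI papers describe can be caricatured on a one-dimensional task (the linear policy fit and the rule tying the disturbance magnitude to the residual spread are simplifications of the paper's Bayesian variational treatment):

```python
import numpy as np

# Toy sketch, not the paper's algorithm: disturbances are injected while the
# demonstrator acts, forcing recovery behavior into the dataset, and the
# disturbance scale is tied to how uncertain the current policy fit is.
rng = np.random.default_rng(0)

def demonstrator(x):
    # expert feedback policy: drive the state back toward zero
    return -1.0 * x

data_x, data_u = [], []
noise_scale = 0.5                    # initial disturbance magnitude (assumed)
for _ in range(3):                   # demonstration rounds
    x = 1.0
    for _ in range(20):
        u = demonstrator(x) + rng.normal(scale=noise_scale)  # injected disturbance
        data_x.append(x)
        data_u.append(u)
        x = x + 0.1 * u              # toy dynamics
    # refit a linear policy gain by least squares on everything collected
    X, U = np.array(data_x), np.array(data_u)
    gain = (X @ U) / (X @ X)
    # shrink the injected noise as the fit's residual spread shrinks
    resid = U - gain * X
    noise_scale = float(np.std(resid)) * 0.5
```

The richer, disturbance-covered dataset is what lets the recovered policy stay close to the expert gain even off the nominal trajectory, which is the robustification effect the abstracts claim.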
Efficient Robot Learning for Optimal Decision Making in Complex and Uncertain Environments
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, February 2021. Songhwai Oh.
The problem of sequential decision making under an uncertain and complex environment is a long-standing challenging problem in robotics. In this thesis, we focus on learning a policy function of robotic systems for sequential decision making, a setting referred to as a robot learning framework. In particular, we are interested in reducing the sample complexity of the robot learning framework. Hence, we develop three sample-efficient robot learning frameworks. The first is maximum entropy reinforcement learning. The second is perturbation-based exploration. The last is learning from demonstrations with mixed qualities.
For maximum entropy reinforcement learning, we employ a generalized Tsallis entropy regularization as an efficient exploration method. Tsallis entropy generalizes the Shannon-Gibbs entropy by introducing an entropic index. By changing the entropic index, we can control the sparsity and multi-modality of the policy. Based on this fact, we first propose a sparse Markov decision process (sparse MDP), which induces a sparse and multi-modal optimal policy distribution. In this MDP, the sparse entropy, which is a special case of Tsallis entropy, is employed as a policy regularizer. We first analyze the optimality condition of a sparse MDP. Then, we propose dynamic programming methods for the sparse MDP and prove their convergence and optimality.
We also show that the performance error of a sparse MDP has a constant bound, while the error of a soft MDP increases logarithmically with respect to the number of actions, where this performance error is caused by the introduced regularization term. Furthermore, we generalize sparse MDPs to a new class of entropy-regularized Markov decision processes (MDPs), which will be referred to as Tsallis MDPs, and analyze the different types of optimal policies, with interesting properties related to the stochasticity of the optimal policy, obtained by controlling the entropic index.
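The role of the entropic index can be made concrete with the standard Tsallis entropy formula S_q(p) = (1 - Σ_i p_i^q)/(q - 1), which recovers the Shannon-Gibbs entropy as q approaches 1 and, at q = 2, yields the quadratic term behind the sparse entropy (the thesis's sparse entropy may differ from this by a constant factor):

```python
import numpy as np

def tsallis_entropy(p, q):
    # S_q(p) = (1 - sum_i p_i^q) / (q - 1); assumes all p_i > 0
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:           # q -> 1 limit: Shannon-Gibbs entropy
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = [0.7, 0.2, 0.1]
shannon = tsallis_entropy(p, 1.0)      # -sum p log p
sparse = tsallis_entropy(p, 2.0)       # 1 - sum p^2, the q = 2 case
```

Varying q between these two cases changes how strongly low-probability actions are penalized, which is the mechanism that trades off sparsity against multi-modality in the regularized optimal policy.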
Furthermore, we also develop perturbation-based exploration methods to handle heavy-tailed noise. In many robot learning problems, the learning signal is corrupted by noise, which may be sub-Gaussian or heavy-tailed. While most exploration strategies have been analyzed under a sub-Gaussian noise assumption, few methods exist for handling heavy-tailed rewards. Hence, to overcome heavy-tailed noise, we consider stochastic multi-armed bandits with heavy-tailed rewards. First, we propose a novel robust estimator that does not require prior information about the noise distribution, whereas other existing robust estimators demand such prior knowledge. Then, we show that the error probability of the proposed estimator decays exponentially fast. Using this estimator, we propose a perturbation-based exploration strategy and develop a generalized regret analysis scheme that provides upper and lower regret bounds by revealing the relationship between the regret and the cumulative distribution function of the perturbation. From the proposed analysis scheme, we obtain gap-dependent and gap-independent upper and lower regret bounds for various perturbations. We also find the optimal hyperparameters for each perturbation, which achieve the minimax optimal regret bound with respect to the total number of rounds.
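A minimal sketch of perturbation-based exploration in a two-armed bandit (the Gaussian perturbation and the 1/sqrt(n) scale are illustrative choices; the analysis above relates the regret to the CDF of whichever perturbation family is actually used):

```python
import numpy as np

# Toy perturbed-exploration bandit: pull the arm whose perturbed empirical
# mean is largest, with the perturbation shrinking as an arm is pulled more.
# Gaussian perturbations and Gaussian rewards are assumptions for this sketch.
rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.7])
counts = np.ones(2)
sums = rng.normal(true_means)          # one initial pull per arm

for _ in range(1998):                  # 2000 total pulls including the init
    means = sums / counts
    scale = 1.0 / np.sqrt(counts)      # perturbation shrinks with more pulls
    arm = int(np.argmax(means + scale * rng.standard_normal(2)))
    reward = rng.normal(true_means[arm])
    sums[arm] += reward
    counts[arm] += 1
```

Replacing `rng.standard_normal` with a heavier-tailed perturbation changes the exploration profile, and the thesis's regret bounds are stated precisely in terms of that perturbation's distribution function.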
For learning from demonstrations with mixed qualities, we develop a novel inverse reinforcement learning framework using leveraged Gaussian processes (LGP), which can handle negative demonstrations. In LGP, the correlation between two Gaussian processes is captured by a leveraged kernel function. Using this property, the proposed inverse reinforcement learning algorithm can learn from both positive and negative demonstrations. While most existing inverse reinforcement learning (IRL) methods suffer from a lack of information near low-reward regions, the proposed method alleviates this issue by incorporating negative demonstrations. To mathematically formulate negative demonstrations, we introduce a novel generative model which can generate both positive and negative demonstrations using a parameter called proficiency. Moreover, since we represent the reward function using a leveraged Gaussian process, which can model a nonlinear function, the proposed method can effectively estimate the structure of a nonlinear reward function.
This dissertation addresses robot learning problems based on demonstrations and reward functions. Robot learning aims to find an optimal policy function that can perform uncertain and complex tasks. Among the many problems in robot learning, we concentrate on reducing the sample complexity. In particular, the goal is to learn a highly effective policy function from a small number of samples by developing efficient exploration methods and techniques for learning from mixed demonstrations.
To develop an efficient exploration method, we use the generalized Tsallis entropy. Tsallis entropy generalizes the Shannon-Gibbs entropy by introducing a new parameter called the entropic index. By adjusting the entropic index, entropies of various forms can be produced, each with a different regularization effect. Based on this property, we propose the sparse Markov decision process, which uses the sparse Tsallis entropy and is effective for expressing policy distributions that are sparse yet multi-modal. We prove mathematically that this yields better performance than using the Shannon-Gibbs entropy, and we theoretically characterize the performance degradation caused by the sparse Tsallis entropy. We further generalize the sparse Markov decision process to the generalized Tsallis entropy decision process and, likewise, mathematically prove how the optimal policy function changes and how performance degrades when Tsallis entropy is added to a Markov decision process. Furthermore, we propose entropic index scheduling, which avoids this performance degradation, and show empirically that it achieves optimal performance.
In addition, to handle learning problems with heavy-tailed noise, we develop an exploration technique based on perturbations. Many robot learning problems are affected by noise: learning signals can be corrupted in various ways, and finding optimal actions in spite of such noise requires efficient exploration. Whereas existing methods are applicable only to sub-Gaussian noise, the approach proposed in this dissertation has the advantage that it can also handle heavy-tailed noise. We first prove regret bounds for general perturbations, establishing the relationship between the cumulative distribution function (CDF) of a perturbation and the regret. Using this relationship, we make regret bounds computable for a variety of perturbation distributions and derive the most efficient exploration hyperparameters for each distribution.
To learn from mixed demonstrations, we develop a new form of Gaussian process regression that can handle negative demonstrations and extend it into a leveraged Gaussian process inverse reinforcement learning method. The developed method learns both what to do, from positive demonstrations, and what not to do, from negative demonstrations. We show mathematically and empirically that exploiting negative demonstrations, which existing methods cannot use, reduces the sample complexity and requires less data to be collected.
1 Introduction
1.1 Motivation
1.2 Organization of the Dissertation
2 Background
2.1 Learning from Rewards
2.1.1 Multi-Armed Bandits
2.1.2 Contextual Multi-Armed Bandits
2.1.3 Markov Decision Processes
2.1.4 Soft Markov Decision Processes
2.2 Learning from Demonstrations
2.2.1 Behavior Cloning
2.2.2 Inverse Reinforcement Learning
3 Sparse Policy Learning
3.1 Sparse Policy Learning for Reinforcement Learning
3.1.1 Sparse Markov Decision Processes
3.1.2 Sparse Value Iteration
3.1.3 Performance Error Bounds for Sparse Value Iteration
3.1.4 Sparse Exploration and Update Rule for Sparse Deep Q-Learning
3.1.5 Experiments
3.1.6 Summary
3.2 Sparse Policy Learning for Imitation Learning
3.2.1 Related Work
3.2.2 Principle of Maximum Causal Tsallis Entropy
3.2.3 Maximum Causal Tsallis Entropy Imitation Learning
3.2.4 Experiments
3.2.5 Summary
4 Entropy-based Exploration
4.1 Generalized Tsallis Entropy Reinforcement Learning
4.1.1 Maximum Generalized Tsallis Entropy in MDPs
4.1.2 Dynamic Programming for Tsallis MDPs
4.1.3 Tsallis Actor Critic for Model-Free RL
4.1.4 Experiments Setup
4.1.5 Experimental Results
4.1.6 Summary
4.2 Efficient Exploration for Robotic Grasping
4.2.1 Related Work
4.2.2 Shannon Entropy Regularized Neural Contextual Bandit Algorithm
4.2.3 Theoretical Analysis
4.2.4 Experimental Results
4.2.5 Summary
5 Perturbation-Based Exploration
5.1 Perturbed Exploration for sub-Gaussian Rewards
5.1.1 Related Work
5.1.2 Heavy-Tailed Perturbations
5.1.3 Adaptively Perturbed Exploration
5.1.4 General Regret Bound for Sub-Gaussian Rewards
5.1.5 Regret Bounds for Specific Perturbations with sub-Gaussian Rewards
5.1.6 Summary
5.2 Perturbed Exploration for Heavy-Tailed Rewards
5.2.1 Related Work
5.2.2 Sub-Optimality of Robust Upper Confidence Bounds
5.2.3 Adaptively Perturbed Exploration with a p-Robust Estimator
5.2.4 General Regret Bound for Heavy-Tailed Rewards
5.2.5 Regret Bounds for Specific Perturbations with Heavy-Tailed Rewards
5.2.6 Experiments
5.2.7 Summary
6 Inverse Reinforcement Learning with Negative Demonstrations
6.1 Leveraged Gaussian Processes Inverse Reinforcement Learning
6.1.1 Related Work
6.1.2 Background
6.1.3 Gaussian Process Regression
6.1.4 Leveraged Gaussian Processes
6.1.5 Gaussian Process Inverse Reinforcement Learning
6.1.6 Inverse Reinforcement Learning with Leveraged Gaussian Processes
6.1.7 Simulations and Experiment
6.1.8 Summary
7 Conclusion
Appendices
A Proofs of Chapter 3.1
A.1 Useful Properties
A.2 Sparse Bellman Optimality Equation
A.3 Sparse Tsallis Entropy
A.4 Upper and Lower Bounds for Sparsemax Operation
A.5 Comparison to Log-Sum-Exp
A.6 Convergence and Optimality of Sparse Value Iteration
A.7 Performance Error Bounds for Sparse Value Iteration
B Proofs of Chapter 3.2
B.1 Change of Variables
B.2 Concavity of Maximum Causal Tsallis Entropy
B.3 Optimality Condition of Maximum Causal Tsallis Entropy
B.4 Interpretation as Robust Bayes
B.5 Generative Adversarial Setting with Maximum Causal Tsallis Entropy
B.6 Tsallis Entropy of a Mixture of Gaussians
B.7 Causal Entropy Approximation
C Proofs of Chapter 4.1
C.1 q-Maximum: Bounded Approximation of Maximum
C.2 Tsallis Bellman Optimality Equation
C.3 Variable Change
C.4 Tsallis Bellman Optimality Equation
C.5 Tsallis Policy Iteration
C.6 Tsallis Bellman Expectation (TBE) Equation
C.7 Tsallis Bellman Expectation Operator and Tsallis Policy Evaluation
C.8 Tsallis Policy Improvement
C.9 Tsallis Value Iteration
C.10 Performance Error Bounds
C.11 q-Scheduling
D Proofs of Chapter 4.2
D.1 Infinite Exploration
D.2 Regret Bound
E Proofs of Chapter 5.1
E.1 General Regret Lower Bound of APE
E.2 General Regret Upper Bound of APE
E.3 Proofs of Corollaries
F Proofs of Chapter 5.2
F.1 Regret Lower Bound for Robust Upper Confidence Bound
F.2 Bounds on Tail Probability of a p-Robust Estimator
F.3 General Regret Upper Bound of APE2
F.4 General Regret Lower Bound of APE2
F.5 Proofs of Corollaries
Feature learning for multi-task inverse reinforcement learning
In this paper we study the question of lifelong learning of behaviors from human demonstrations by an intelligent system. One approach is to model the observed demonstrations by a stationary policy. Inverse reinforcement learning, on the other hand, searches for a reward function that makes the observed policy close to optimal in the corresponding Markov decision process. This approach provides a model of the task solved by the demonstrator and has been shown to lead to better generalization in unknown contexts. However, both approaches focus on learning a single task from the expert demonstration. In this paper we propose a feature-learning approach for inverse reinforcement learning in which several different tasks are demonstrated, but in which each task is modeled as a mixture of several simpler, primitive tasks. We present an algorithm based on an alternating gradient descent to learn simultaneously a dictionary of primitive tasks (in the form of reward functions) and their combination into an approximation of the task underlying observed behavior. We illustrate how this approach enables efficient re-use of knowledge from previous demonstrations. Namely, knowledge of tasks that were previously observed by the learner is used to improve the learning of a new composite behavior, thus achieving transfer of knowledge between tasks.
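The alternating scheme can be sketched on synthetic data (here the per-task rewards are observed directly and fit by alternating least squares; the actual algorithm fits demonstrations through an IRL objective rather than observed reward vectors):

```python
import numpy as np

# Illustrative sketch: each task reward is modeled as a combination of a
# shared dictionary of primitive rewards, and the dictionary and per-task
# weights are updated in alternation. Synthetic noiseless data is assumed.
rng = np.random.default_rng(0)
S, K, T = 50, 3, 6                       # states, primitives, tasks
D_true = rng.normal(size=(S, K))
W_true = rng.normal(size=(K, T))
R = D_true @ W_true                      # observed per-task reward vectors

D = rng.normal(size=(S, K))              # random initial dictionary
for _ in range(50):                      # alternate the two subproblems
    W = np.linalg.lstsq(D, R, rcond=None)[0]        # fit task weights
    D = np.linalg.lstsq(W.T, R.T, rcond=None)[0].T  # fit dictionary

err = np.linalg.norm(R - D @ W) / np.linalg.norm(R)
```

The transfer effect described above corresponds to reusing a fitted `D` when a new task arrives: only its weight column needs to be learned, not the primitives themselves.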
- …