Interactive Teaching Algorithms for Inverse Reinforcement Learning
We study the problem of inverse reinforcement learning (IRL) with the added
twist that the learner is assisted by a helpful teacher. More formally, we
tackle the following algorithmic question: How could a teacher provide an
informative sequence of demonstrations to an IRL learner to speed up the
learning process? We present an interactive teaching framework where a teacher
adaptively chooses the next demonstration based on the learner's current policy. In
particular, we design teaching algorithms for two concrete settings: an
omniscient setting where the teacher has full knowledge of the learner's
dynamics and a blackbox setting where the teacher has minimal knowledge. Then,
we study a sequential variant of the popular MCE-IRL learner and prove
convergence guarantees of our teaching algorithm in the omniscient setting.
Extensive experiments with a car driving simulator environment show that
learning progress can be sped up drastically compared to an uninformative
teacher.
Comment: IJCAI'19 paper (extended version)
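The adaptive loop the abstract describes can be sketched in a toy form: an omniscient teacher scores each candidate demonstration by how much it would improve the learner's current estimate, and always teaches the most informative one. Everything below (the feature setup, the `teaching_value` score, the learner's update rule) is an illustrative assumption, not the paper's actual algorithm.

```python
import numpy as np

# Toy setup (hypothetical): the learner estimates reward weights, and
# demonstration i reveals the true weight along feature i.
true_w = np.array([1.0, -0.5, 0.3])   # true reward weights (hidden from learner)
learner_w = np.zeros(3)               # learner's current estimate
demos = np.eye(3)                     # demo i corrects feature i

def teaching_value(w, demo):
    """Illustrative teacher score: how much this demonstration would
    shrink the gap between the learner's estimate and the true weights."""
    step = demo * (true_w - w)        # learner's hypothetical update on that feature
    return np.linalg.norm(true_w - w) - np.linalg.norm(true_w - (w + step))

for t in range(5):
    # Omniscient, adaptive teacher: re-score all demos against the
    # learner's *current* estimate and pick the most informative one.
    best = max(range(len(demos)), key=lambda i: teaching_value(learner_w, demos[i]))
    learner_w += demos[best] * (true_w - learner_w)   # learner incorporates the demo
    print(t, best, np.round(learner_w, 2))
```

Because the teacher adapts to the learner's state, it never repeats an already-learned feature, which is the intuition behind the speed-up over an uninformative (e.g. fixed or random) demonstration sequence.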
Multi-Armed Bandits for Intelligent Tutoring Systems
We present an approach to Intelligent Tutoring Systems which adaptively
personalizes sequences of learning activities to maximize skills acquired by
students, taking into account their limited time and motivational resources. At a
given point in time, the system proposes to the students the activity which
makes them progress fastest. We introduce two algorithms that rely on
empirical estimation of learning progress: RiARiT, which uses information
about the difficulty of each exercise, and ZPDES, which requires much less
knowledge about the problem.
The system is based on the combination of three approaches. First, it
leverages recent models of intrinsically motivated learning by transposing them
to active teaching, relying on empirical estimation of learning progress
provided by specific activities to particular students. Second, it uses
state-of-the-art Multi-Armed Bandit (MAB) techniques to efficiently manage the
exploration/exploitation challenge of this optimization process. Third, it
leverages expert knowledge to constrain and bootstrap initial exploration of
the MAB, while requiring only coarse guidance from the expert and
allowing the system to deal with didactic gaps in its knowledge. The system is
evaluated in a scenario where 7- to 8-year-old schoolchildren learn how to
decompose numbers while manipulating money. Systematic experiments are
presented with simulated students, followed by results of a user study with a
population of 400 schoolchildren.
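The core mechanism, a bandit that samples activities in proportion to the empirical learning progress they currently generate, can be sketched with a simulated student. The activity names, the student model, and all numeric constants below are illustrative assumptions; this is not the published RiARiT/ZPDES implementation.

```python
import random

random.seed(0)

# Hypothetical money-decomposition activities and their hidden usefulness
# for the simulated student (the tutor does NOT see these values).
activities = ["count_coins", "make_change", "compare_sums"]
usefulness = {"count_coins": 0.02, "make_change": 0.08, "compare_sums": 0.01}

skill = 0.1                                 # simulated student skill in [0, 1]
progress = {a: 1.0 for a in activities}     # optimistic init drives exploration
history = {a: [] for a in activities}       # per-activity success record
GAMMA = 0.2                                 # share of uniform exploration

def choose():
    """Sample an activity proportionally to its estimated learning
    progress, mixed with uniform exploration (a ZPDES-style rule)."""
    if random.random() < GAMMA:
        return random.choice(activities)
    r = random.random() * sum(progress.values())
    for a in activities:
        r -= progress[a]
        if r <= 0:
            return a
    return activities[-1]

for step in range(300):
    a = choose()
    skill = min(1.0, skill + usefulness[a] * (1 - skill))   # student learns
    success = 1.0 if random.random() < skill else 0.0       # noisy observation
    history[a].append(success)
    recent = history[a][-10:]
    older = history[a][-20:-10] or [0.0]
    # Empirical learning progress: change in recent vs. older success rate.
    progress[a] = abs(sum(recent) / len(recent) - sum(older) / len(older)) + 1e-3

counts = {a: len(history[a]) for a in activities}
print(counts, round(skill, 2))
```

The tutor only ever observes successes and failures, which is why the approach needs no fine-grained model of exercise difficulty: activities that stop producing progress (too easy or too hard) are sampled less, keeping the student near their zone of proximal development.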
Data Poisoning Attacks in Contextual Bandits
We study offline data poisoning attacks in contextual bandits, a class of
reinforcement learning problems with important applications in online
recommendation and adaptive medical treatment, among others. We provide a
general attack framework based on convex optimization and show that by slightly
manipulating rewards in the data, an attacker can force the bandit algorithm to
pull a target arm for a target contextual vector. The target arm and target
contextual vector are both chosen by the attacker. That is, the attacker can
hijack the behavior of a contextual bandit. We also investigate the feasibility
and the side effects of such attacks, and identify future directions for
defense. Experiments on both synthetic and real-world data demonstrate the
efficiency of the attack algorithm.
Comment: GameSec 201
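The attack idea, slightly perturbing stored rewards so the learned model prefers the attacker's arm at the attacker's context, admits a closed form when each arm's reward model is a ridge regression, since the prediction is linear in the rewards. The setup below (two arms, per-arm ridge models, the margin `eps`) is an illustrative sketch of that convex poisoning problem, not the paper's full framework.

```python
import numpy as np

rng = np.random.default_rng(1)
LAM = 1.0            # ridge regularizer
d, n = 3, 50         # context dimension, offline data points per arm

# Two arms with per-arm linear reward models (a common contextual-bandit
# setup); true parameters and data are synthetic for illustration.
X = {a: rng.normal(size=(n, d)) for a in (0, 1)}
theta_true = {0: np.array([1.0, 0.0, 0.0]), 1: np.array([0.0, 1.0, 0.0])}
y = {a: X[a] @ theta_true[a] + 0.1 * rng.normal(size=n) for a in (0, 1)}

def ridge(Xa, ya):
    """Ridge estimate theta = (X^T X + LAM*I)^{-1} X^T y."""
    return np.linalg.solve(Xa.T @ Xa + LAM * np.eye(d), Xa.T @ ya)

x_star, target = np.array([0.0, 1.0, 0.0]), 0   # attacker-chosen context and arm
eps = 0.05                                       # desired winning margin

# Predicted rewards before the attack: arm 1 wins at x_star.
pred = {a: x_star @ ridge(X[a], y[a]) for a in (0, 1)}

# Minimal-norm perturbation of the target arm's rewards: the prediction
# x_star @ theta is linear in y with gradient c, so the cheapest change
# achieving the margin is gap * c / ||c||^2 (least-squares poisoning).
A = X[target].T @ X[target] + LAM * np.eye(d)
c = X[target] @ np.linalg.solve(A, x_star)      # d(prediction)/d(rewards)
gap = (pred[1 - target] + eps) - pred[target]   # shortfall to overcome
delta = max(gap, 0.0) * c / (c @ c)
y_poisoned = y[target] + delta

pred_after = x_star @ ridge(X[target], y_poisoned)
print(round(float(np.linalg.norm(delta)), 3))   # size of the reward manipulation
```

The perturbation norm is small relative to the rewards themselves, which is the "slightly manipulating rewards" point: a bandit retrained on the poisoned log now pulls the target arm for the target context.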