Personalizing Task-oriented Dialog Systems via Zero-shot Generalizable Reward Function
Task-oriented dialog systems enable users to accomplish tasks using natural
language. State-of-the-art systems respond to users in the same way regardless
of their personalities, although personalizing dialogues can lead to higher
levels of adoption and better user experiences. Building personalized dialog
systems is an important, yet challenging endeavor and only a handful of works
took on the challenge. Most existing works rely on supervised learning
approaches and require laborious and expensive labeled training data for each
user profile. Additionally, collecting and labeling data for each user profile
is virtually impossible. In this work, we propose a novel framework, P-ToD, to
personalize task-oriented dialog systems capable of adapting to a wide range of
user profiles in an unsupervised fashion using a zero-shot generalizable reward
function. P-ToD uses a pre-trained GPT-2 as a backbone model and works in three
phases. Phase one performs task-specific training. Phase two kicks off
unsupervised personalization by leveraging the proximal policy optimization
algorithm that performs policy gradients guided by the zero-shot generalizable
reward function. Our novel reward function can quantify the quality of the
generated responses even for unseen profiles. The optional final phase
fine-tunes the personalized model using a few labeled training examples. We
conduct extensive experimental analysis using the personalized bAbI dialogue
benchmark for five tasks and up to 180 diverse user profiles. The experimental
results demonstrate that P-ToD, even when it has access to zero labeled
examples, outperforms state-of-the-art supervised personalization models and
achieves competitive performance on BLEU and ROUGE metrics when compared to a
strong fully-supervised GPT-2 baseline.
Comment: 11 pages, 4 tables, 31st ACM International Conference on Information
and Knowledge Management (CIKM'22)
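
The central ingredient of the framework described above is a reward that can score generated responses for user profiles never seen during training. The following is a minimal, hypothetical Python sketch of that idea; the zero_shot_reward heuristic, the example profile, and the candidate responses are illustrative stand-ins, not the paper's learned reward model, which in the actual framework guides PPO policy-gradient updates of the GPT-2 policy in phase two.

def zero_shot_reward(profile: dict, response: str) -> float:
    """Toy stand-in: fraction of profile attribute values reflected in the response."""
    response_lower = response.lower()
    hits = sum(1 for value in profile.values() if str(value).lower() in response_lower)
    return hits / max(len(profile), 1)

if __name__ == "__main__":
    # A profile for which no labeled responses exist.
    profile = {"age_group": "elderly", "dietary_preference": "vegetarian"}
    candidates = [
        "How about a lively nightclub that serves steak until 3am?",
        "How about a quiet vegetarian restaurant popular with elderly guests?",
    ]
    # In the described framework, scores like these would serve as the reward
    # signal for PPO in phase two; here they only rank two candidate replies.
    for response in candidates:
        print(f"{zero_shot_reward(profile, response):.2f}  {response}")
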
Reinforcement learning for personalized dialogue management
Language systems have been of great interest to the research community and
have recently reached the mass market through various assistant platforms on
the web. Reinforcement Learning methods that optimize dialogue policies have
seen successes in past years and have recently been extended into methods that
personalize the dialogue, e.g. by taking the personal context of users into
account. These works, however, are limited to personalizing for a single user,
with whom they require multiple interactions, and do not generalize the usage
of context across users. This work introduces a problem in which a generalized usage of
context is relevant and proposes two Reinforcement Learning (RL)-based
approaches to this problem. The first approach uses a single learner and
extends the traditional POMDP formulation of dialogue state with features that
describe the user context. The second approach segments users by context and
then employs a learner per context. We compare these approaches in a benchmark
of existing non-RL and RL-based methods in three established and one novel
application domain of financial product recommendation. We compare the
influence of context and training experiences on performance and find that
learning approaches generally outperform a handcrafted gold standard.
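
To make the two RL approaches concrete, here is a small, hypothetical Python sketch using a toy tabular Q-learner: the first variant folds user-context features into a single learner's state, while the second keeps a separate learner per context segment. The TabularQ class, the financial-advice actions, and the state encodings are illustrative assumptions, not the paper's POMDP formulation.

from collections import defaultdict
import random

class TabularQ:
    """Minimal epsilon-greedy Q-learner over hashable states."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

actions = ["recommend_savings", "recommend_investment"]

# Approach 1: a single learner whose state is the dialogue state extended with
# user-context features, so experience generalizes across similar users.
single = TabularQ(actions)
state = ("slot:income=high", "context:risk_averse")
action = single.act(state)
single.update(state, action, reward=1.0, next_state=state)

# Approach 2: segment users by context and train one learner per segment; each
# learner only ever sees the plain dialogue state for its own segment.
per_context = defaultdict(lambda: TabularQ(actions))
plain_state = ("slot:income=high",)
action = per_context["risk_averse"].act(plain_state)
per_context["risk_averse"].update(plain_state, action, reward=1.0, next_state=plain_state)
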