5 research outputs found

    Learning Adaptive Referring Expression Generation Policies for Spoken Dialogue Systems using Reinforcement Learning

    Abstract: Adaptive generation of referring expressions in dialogues is beneficial for grounding between the dialogue partners. However, hand-coding adaptive REG policies is hard. We present a reinforcement learning framework to automatically learn an adaptive referring expression generation policy for spoken dialogue systems.

    Learning Lexical Alignment Policies for Generating Referring Expressions for Spoken Dialogue Systems

    No full text
    We address the problem that different users have different lexical knowledge of a problem domain, so that automated dialogue systems need to adapt their generation choices online to users' domain knowledge as they encounter them. We approach this problem using policy learning in Markov Decision Processes (MDPs). In contrast to related work, we propose a new statistical user model which incorporates the lexical knowledge of different users. We evaluate this user model by showing that it allows us to learn dialogue policies that automatically adapt their choice of referring expressions online to different users, and that these policies are significantly better than adaptive hand-coded policies for this problem. The learned policies are consistently between 2 and 8 turns shorter than a range of hand-coded but adaptive baseline lexical alignment policies.
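The MDP policy-learning setup described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation: the state space, the simulated user model, the reward values, and all names below are assumptions chosen for illustration. The agent learns, via tabular Q-learning against a simulated user, whether to refer with a technical term ("jargon") or a longer descriptive expression, depending on its estimate of the user's lexical knowledge.

```python
import random

# Illustrative sketch only (assumed state/action/reward design, not the
# paper's system): tabular Q-learning of a referring-expression policy.
ACTIONS = ["jargon", "descriptive"]
STATES = ["unknown", "expert", "novice"]

def simulate_user(expert, action):
    """Assumed user model: experts understand jargon immediately;
    novices need an extra clarification turn when jargon is used."""
    if action == "jargon" and not expert:
        return -2.0   # misunderstanding: extra clarification turn
    if action == "descriptive":
        return -1.0   # always understood, but a longer utterance
    return 0.0        # jargon understood by an expert: cheapest

def learn(episodes=5000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        expert = rng.random() < 0.5      # sample a user type
        state = "unknown"
        for _turn in range(3):
            # epsilon-greedy action selection
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: q[(state, x)]))
            r = simulate_user(expert, a)
            # Assume the user's reaction reveals their type.
            nxt = "expert" if expert else "novice"
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
            state = nxt
    return q

q = learn()
# The learned policy should prefer jargon for experts and
# descriptive expressions for novices.
print(max(ACTIONS, key=lambda a: q[("expert", a)]))   # jargon
print(max(ACTIONS, key=lambda a: q[("novice", a)]))   # descriptive
```

The key design point mirrored from the abstract is that the user's lexical knowledge is part of the learning environment, so the policy adapts its choice of referring expression online rather than following a fixed hand-coded rule.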

    Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)
