Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation
Pre-trained Language Models (PrLMs) have been widely used as backbones in
many Natural Language Processing (NLP) tasks. The common procedure for
utilizing PrLMs is first pre-training on large-scale general corpora with
task-independent LM training objectives, then fine-tuning on task datasets with
task-specific training objectives. Task-independent pre-training enables the
models to learn language representations that are universal to some extent but
fail to capture crucial task-specific features. This leads to an
incompatibility between pre-training and fine-tuning. To address this issue, we
introduce task-specific pre-training on
in-domain task-related corpora with task-specific objectives. This procedure
is placed between the original two stages to enhance the model's understanding
of specific tasks. In this work, we focus on Dialogue-related Natural
Language Processing (DrNLP) tasks and design a Dialogue-Adaptive Pre-training
Objective (DAPO) based on important qualities for assessing dialogues that are
usually ignored by general LM pre-training objectives. PrLMs trained with
DAPO on a large in-domain dialogue corpus are then fine-tuned for downstream
DrNLP tasks. Experimental results show that models with DAPO surpass those with
general LM pre-training objectives and other strong baselines on downstream
DrNLP tasks.
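The dialogue-adaptive objective can be pictured as the standard LM loss augmented with weighted dialogue-quality terms. The sketch below is a minimal illustration of that idea only; the specific quality terms and weights are assumptions, not the paper's actual DAPO formulation.

```python
def dapo_style_loss(lm_loss, quality_losses, quality_weights):
    """Combine a language-modelling loss with weighted dialogue-quality
    losses (e.g. coherence, consistency) into one training objective.
    The quality terms here are illustrative placeholders."""
    return lm_loss + sum(w * l for w, l in zip(quality_weights, quality_losses))

# Example: LM loss 2.0 plus two quality terms weighted 0.5 and 0.2.
total = dapo_style_loss(2.0, [1.0, 0.5], [0.5, 0.2])  # -> 2.6
```

In this reading, the intermediate pre-training stage simply minimizes such a composite loss on the in-domain dialogue corpus before task-specific fine-tuning begins.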
Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation
Recent advances in large-scale pre-training provide large models with the
potential to learn knowledge from the raw text. It is thus natural to ask
whether it is possible to leverage these large models as knowledge bases for
downstream tasks. In this work, we answer the aforementioned question in
unsupervised knowledge-grounded conversation. We explore various methods that
best elicit knowledge from large models. Our human study indicates that, though
hallucinations exist, large models possess the unique advantage of being able
to output common sense and summarize facts that cannot be directly retrieved
from a search engine. To better exploit such generated knowledge in dialogue
generation, we treat the generated knowledge as a noisy knowledge source and
propose posterior-based reweighting as well as a noisy training strategy.
Empirical results on two benchmarks show advantages over the state-of-the-art
methods. Comment: Accepted to EMNLP 2022 Main Conference. The code is publicly
available at
https://github.com/lyy1994/PLM_as_KB/tree/main/projects/plm_as_k
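Treating generated knowledge as a noisy source suggests weighting each candidate snippet by its posterior given the response, i.e. p(k | response) ∝ p(response | k) · p(k). The sketch below shows that normalization step only, under the assumption that prior and likelihood scores are already available; it is not the authors' implementation.

```python
def posterior_reweight(priors, likelihoods):
    """Reweight noisy knowledge candidates by the (unnormalized)
    posterior p(k | response) ∝ p(response | k) * p(k), then
    normalize so the weights sum to one."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two equally likely candidates; the response likelihood breaks the tie.
weights = posterior_reweight([0.5, 0.5], [0.8, 0.2])  # -> [0.8, 0.2]
```

Candidates that better explain the observed response thus contribute more to the grounded generation loss, which dampens the effect of hallucinated snippets.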
Bipartite-play Dialogue Collection for Practical Automatic Evaluation of Dialogue Systems
Automation of dialogue system evaluation is a driving force for the efficient
development of dialogue systems. This paper introduces the bipartite-play
method, a dialogue collection method for automating dialogue system evaluation.
It addresses the limitations of existing dialogue collection methods: (i)
inability to compare with systems that are not publicly available, and (ii)
vulnerability to cheating by intentionally selecting systems to be compared.
Experimental results show that the automatic evaluation using the
bipartite-play method mitigates these two drawbacks and correlates as strongly
with human subjective evaluations as existing methods. Comment: 9 pages, Accepted to the AACL-IJCNLP 2022 Student Research Workshop
(SRW)
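The core of a bipartite collection scheme is that evaluated systems converse only with a fixed pool of partner systems, never with each other. The sketch below shows just that matchup enumeration; the function and variable names are illustrative assumptions, not the paper's code.

```python
from itertools import product

def bipartite_matchups(target_systems, partner_systems):
    """Enumerate all (target, partner) dialogue pairings. Targets never
    talk to each other, so a target need not be publicly available to
    its competitors, and opponents cannot be cherry-picked."""
    return list(product(target_systems, partner_systems))

# Two systems under evaluation, three fixed partners -> 6 dialogues to collect.
pairs = bipartite_matchups(["sys_A", "sys_B"], ["partner_1", "partner_2", "partner_3"])
```

Scoring the collected dialogues per target then yields a ranking that every evaluated system obtained against the same set of interlocutors.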
A Unified Framework for Slot based Response Generation in a Multimodal Dialogue System
Natural Language Understanding (NLU) and Natural Language Generation (NLG)
are the two critical components of every conversational system: the former
understands the user by capturing the necessary information in the form of
slots, and the latter generates an appropriate response in accordance with the
extracted information. Recently, dialogue systems integrated with complementary
information such as images, audio, or video have gained immense popularity. In
this work, we propose an end-to-end framework with the capability to extract
necessary slot values from the utterance and generate a coherent response,
thereby assisting the user to achieve their desired goals in a multimodal
dialogue system having both textual and visual information. The task of
extracting the necessary information is dependent not only on the text but also
on the visual cues present in the dialogue. Similarly, for generation, the
previous dialogue context comprising multimodal information is important for
providing coherent and informative responses. We employ a multimodal
hierarchical encoder using pre-trained DialoGPT and also exploit the knowledge
base (KB) to provide stronger context for both tasks. We also design
a slot attention mechanism to focus on the necessary information in a given
utterance. Finally, a decoder generates the corresponding response for the given
dialogue context and the extracted slot values. Experimental results on the
Multimodal Dialogue Dataset (MMD) show that the proposed framework outperforms
the baseline approaches in both tasks. The code is available at
https://github.com/avinashsai/slot-gpt. Comment: Published in the journal Multimedia Tools and Applications
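A slot attention mechanism of the kind described above can be pictured as dot-product attention of one slot query over the token representations of an utterance, yielding an attention-pooled context vector. The sketch below is a minimal pure-Python illustration under that assumption, not the paper's architecture.

```python
import math

def slot_attention(slot_query, token_vecs):
    """Dot-product attention of a single slot query over utterance token
    vectors: score each token, softmax the scores, and return the
    attention-weighted sum of token vectors."""
    scores = [sum(q * t for q, t in zip(slot_query, vec)) for vec in token_vecs]
    m = max(scores)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(slot_query)
    return [sum(weights[i] * token_vecs[i][d] for i in range(len(token_vecs)))
            for d in range(dim)]

# With identical token vectors, the pooled vector equals that vector.
ctx = slot_attention([1.0, 0.0], [[2.0, 3.0], [2.0, 3.0]])  # -> [2.0, 3.0]
```

In the full system such pooled vectors would feed both the slot-value classifier and the decoder, focusing each on the tokens most relevant to a slot.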