Conversation Disentanglement with Bi-Level Contrastive Learning
Conversation disentanglement aims to group utterances into detached sessions,
which is a fundamental task in processing multi-party conversations. Existing
methods have two main drawbacks. First, they overemphasize pairwise utterance
relations but pay inadequate attention to utterance-to-context relation
modeling. Second, a huge amount of human-annotated data is required for
training, which is expensive to obtain in practice. To address these issues, we
propose a general disentanglement model based on bi-level contrastive learning.
It brings utterances in the same session closer together while encouraging each
utterance to be near its clustered session prototypes in the representation
space. Unlike existing approaches, our disentanglement model works in both the
supervised setting, with labeled data, and the unsupervised setting, when no
such data is available. The proposed method achieves new state-of-the-art
performance in both settings across several public datasets.
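The bi-level objective the abstract describes (pulling same-session utterances together, and pulling each utterance toward its session prototype) can be sketched as a toy numpy loss. This is an illustrative assumption, not the paper's actual formulation: the function name `bi_level_contrastive_loss`, the mean-embedding prototypes, and the temperature `tau` are all hypothetical choices made here for the sketch.

```python
import numpy as np

def _normalize(x):
    # L2-normalize rows so dot products act as cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def bi_level_contrastive_loss(embeddings, sessions, tau=0.1):
    """Toy bi-level contrastive loss (illustrative, not the paper's).

    Utterance level: InfoNCE that treats utterances from the same
    session as positives. Session level: cross-entropy pulling each
    utterance toward its session prototype (the session's mean
    embedding), contrasted against the other sessions' prototypes.
    """
    emb = _normalize(np.asarray(embeddings, dtype=float))
    sessions = np.asarray(sessions)
    n = len(emb)

    # --- utterance level: same-session positives ---
    sim = emb @ emb.T / tau
    utt_loss, count = 0.0, 0
    for i in range(n):
        others = np.arange(n) != i
        logits = sim[i, others]
        pos = sessions[others] == sessions[i]
        if pos.any():  # singleton sessions contribute nothing here
            log_denom = np.log(np.exp(logits).sum())
            utt_loss += float((log_denom - logits[pos]).mean())
            count += 1
    utt_loss /= max(count, 1)

    # --- session level: utterance vs. session prototypes ---
    labels = np.unique(sessions)
    protos = _normalize(
        np.stack([emb[sessions == s].mean(axis=0) for s in labels]))
    psim = emb @ protos.T / tau
    idx = np.searchsorted(labels, sessions)  # prototype index per utterance
    log_denom = np.log(np.exp(psim).sum(axis=1))
    sess_loss = float((log_denom - psim[np.arange(n), idx]).mean())

    return utt_loss + sess_loss
```

On well-separated toy clusters, correct session labels yield a lower loss than shuffled ones, which is the behavior a trainer would exploit; the same structure works in the unsupervised setting by replacing gold labels with cluster assignments.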
Text Style Transfer: A Review and Experimental Evaluation
The stylistic properties of text have intrigued computational linguistics
researchers in recent years. Specifically, researchers have investigated the
Text Style Transfer (TST) task, which aims to change the stylistic properties
of the text while retaining its style-independent content. Over the last few
years, many novel TST algorithms have been developed, while the industry has
leveraged these algorithms to enable exciting TST applications. The field of
TST research has burgeoned because of this symbiosis. This article aims to
provide a comprehensive review of recent research efforts on text style
transfer. More concretely, we create a taxonomy to organize the TST models and
provide a comprehensive summary of the state of the art. We review the existing
evaluation methodologies for TST tasks and conduct a large-scale
reproducibility study where we experimentally benchmark 19 state-of-the-art TST
algorithms on two publicly available datasets. Finally, we expand on current
trends and provide new perspectives on the new and exciting developments in the
TST field.
Deep Learning for Text Style Transfer: A Survey
Text style transfer is an important task in natural language generation,
which aims to control certain attributes in the generated text, such as
politeness, emotion, humor, and many others. It has a long history in the field
of natural language processing, and recently has regained significant
attention thanks to the promising performance brought by deep neural models. In
this paper, we present a systematic survey of the research on neural text style
transfer, spanning over 100 representative articles since the first neural text
style transfer work in 2017. We discuss the task formulation, existing datasets
and subtasks, evaluation, as well as the rich methodologies in the presence of
parallel and non-parallel data. We also provide discussions on a variety of
important topics regarding the future development of this task. Our curated
paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey
Comment: Computational Linguistics Journal 202
- …