497 research outputs found

    Formality Style Transfer Within and Across Languages with Limited Supervision

    While much natural language processing work focuses on analyzing language content, language style also conveys important information about the situational context and purpose of communication. When editing an article, professional editors take the target audience into account to select appropriate word choice and grammar. Similarly, professional translators translate documents for a specific audience and often ask about the expected tone of the content when taking a translation job. Computational models of natural language should consider both meaning and style. Controlling style is an emerging research area in text rewriting and is under-investigated in machine translation. In this dissertation, we present a new perspective that closely connects formality transfer and machine translation: we aim to control style in language generation, with a focus on rewriting English or translating French to English with a desired formality. These are challenging tasks because annotated examples of style transfer are available only in limited quantities. We first address this problem by inducing a lexical formality model based on word embeddings and a small number of representative formal and informal words. This enables us to assign sentential formality scores and to rerank translation hypotheses according to how closely their formality scores match a user-provided formality level. To capture broader formality changes, we then turn to neural sequence-to-sequence models. Joint modeling of formality transfer and machine translation enables formality control in machine translation without dedicated training examples. Along the way, we also improve low-resource neural machine translation.
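
    The seed-based lexical formality model and the reranking step lend themselves to a compact illustration. The following sketch assumes pre-trained word vectors in a plain word-to-vector dictionary; the seed lists and function names are illustrative assumptions, not the dissertation's exact setup.

        import numpy as np

        # Illustrative seed lists; the dissertation's actual representative words are not given here.
        FORMAL_SEEDS = ["therefore", "furthermore", "consequently", "regarding"]
        INFORMAL_SEEDS = ["gonna", "yeah", "kinda", "stuff"]

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def word_formality(word, emb):
            """Mean similarity to formal seeds minus mean similarity to informal seeds.
            `emb` is a plain {word: np.ndarray} dictionary of pre-trained embeddings."""
            formal = np.mean([cosine(emb[word], emb[s]) for s in FORMAL_SEEDS if s in emb])
            informal = np.mean([cosine(emb[word], emb[s]) for s in INFORMAL_SEEDS if s in emb])
            return formal - informal

        def sentence_formality(tokens, emb):
            """Sentential score: average formality of the in-vocabulary tokens."""
            scores = [word_formality(t, emb) for t in tokens if t in emb]
            return float(np.mean(scores)) if scores else 0.0

        def rerank(hypotheses, emb, target_level):
            """Reorder n-best translations by closeness to the requested formality level."""
            return sorted(hypotheses,
                          key=lambda h: abs(sentence_formality(h.split(), emb) - target_level))

    Under this scheme, a sentence's formality is the mean of its word-level scores, and hypotheses from an n-best list are reordered by the distance of that score from the requested formality level.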

    Controllable Text Summarization: Unraveling Challenges, Approaches, and Prospects -- A Survey

    Generic text summarization approaches often fail to address the specific intent and needs of individual users. Recently, scholarly attention has turned to the development of summarization methods that are more closely tailored and controlled to align with specific objectives and user needs. While a growing body of research is devoted to more controllable summarization, no comprehensive survey is available that thoroughly explores the diverse controllable aspects or attributes employed in this context, delves into the associated challenges, and investigates the existing solutions. In this survey, we formalize the Controllable Text Summarization (CTS) task, categorize controllable aspects according to their shared characteristics and objectives, and present a thorough examination of existing methods and datasets within each category. Moreover, based on our findings, we uncover limitations and research gaps, while also delving into potential solutions and future directions for CTS.
    Comment: 19 pages, 1 figure

    Deep Learning for Text Style Transfer: A Survey

    Text style transfer is an important task in natural language generation, which aims to control certain attributes of the generated text, such as politeness, emotion, and humor, among many others. It has a long history in the field of natural language processing, and it has recently regained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, and the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey
    Comment: Computational Linguistics Journal 2022

    Self-supervised learning in natural language processing

    Most natural language processing (NLP) learning algorithms require labeled data. While such data are available for a select number of (mostly English) tasks, labeled data are sparse or non-existent for the vast majority of use cases. To alleviate this, unsupervised learning and a wide array of data augmentation techniques have been developed (Hedderich et al., 2021a). However, unsupervised learning often requires massive amounts of unlabeled data and also fails in difficult (low-resource) data settings, i.e., when there is an increased distance between the source and target data distributions (Kim et al., 2020). Such distributional distance can arise from a domain drift or a large linguistic distance between the source and target data. Unsupervised learning by itself does not exploit the highly informative (labeled) supervisory signals hidden in unlabeled data. In this dissertation, we show that by combining the right unsupervised auxiliary task (e.g., sentence pair extraction) with an appropriate primary task (e.g., machine translation), self-supervised learning can exploit these hidden supervisory signals more efficiently than purely unsupervised approaches, while requiring less labeled data than supervised approaches. Our self-supervised learning approach can be used to learn NLP tasks efficiently, even when the amount of training data is sparse or the data come with strong differences in their underlying distribution, e.g., stemming from unrelated languages. In our general approach, we apply unsupervised learning as an auxiliary task to learn a supervised primary task. Concretely, we focus on the auxiliary task of sentence pair extraction for sequence-to-sequence primary tasks (i.e., machine translation and style transfer), as well as language modeling, clustering, subspace learning, and knowledge integration for primary classification tasks (i.e., hate speech detection and sentiment analysis). For sequence-to-sequence tasks, we show that self-supervised neural machine translation (NMT) achieves competitive results on high-resource language pairs in comparison to unsupervised NMT while requiring less data. Further combining self-supervised NMT with unsupervised-NMT-inspired augmentation techniques makes the learning of low-resource (similar, distant, and unrelated) language pairs possible. Furthermore, using our self-supervised approach, we show how style transfer can be learned without the need for parallel data, generating stylistic rephrasings with the highest overall performance on all tested tasks. For sequence-to-label tasks, we underline the benefit of auxiliary-task-based augmentation over primary-task augmentation. An auxiliary task that proved especially beneficial to primary task performance was subspace learning, which led to impressive gains in (cross-lingual) zero-shot classification performance on similar or distant target tasks, also on similar, distant, and unrelated languages.
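
    As a rough illustration of the auxiliary sentence pair extraction task, the sketch below mines pseudo-parallel pairs from two monolingual corpora by embedding similarity. It substitutes an off-the-shelf multilingual sentence encoder for the model's own internal representations described in the dissertation, and both the encoder name and the similarity threshold are illustrative assumptions.

        # `sentence_transformers` provides the encoder and cosine utilities used below.
        from sentence_transformers import SentenceTransformer, util

        def extract_pairs(src_sents, tgt_sents, threshold=0.8):
            """Mine pseudo-parallel sentence pairs from two monolingual corpora."""
            model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
            src_emb = model.encode(src_sents, convert_to_tensor=True)
            tgt_emb = model.encode(tgt_sents, convert_to_tensor=True)
            sims = util.cos_sim(src_emb, tgt_emb)   # |src| x |tgt| cosine similarity matrix
            pairs = []
            for i, src in enumerate(src_sents):
                j = int(sims[i].argmax())           # best target candidate for this source
                if float(sims[i][j]) >= threshold:  # keep only confident matches
                    pairs.append((src, tgt_sents[j]))
            return pairs                            # fed to the primary task as extra training data

    The extracted pairs then serve as additional supervised training data for the primary sequence-to-sequence task, which is what lets the approach function with little or no parallel data.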

    Findings of the IWSLT 2022 Evaluation Campaign.

    The evaluation campaign of the 19th International Conference on Spoken Language Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline speech translation, (iii) Speech-to-speech translation, (iv) Low-resource speech translation, (v) Multilingual speech translation, (vi) Dialect speech translation, (vii) Formality control for speech translation, and (viii) Isometric speech translation. A total of 27 teams participated in at least one of the shared tasks. This paper details, for each shared task, the purpose of the task, the data that were released, the evaluation metrics that were applied, the submissions that were received, and the results that were achieved.

    Context-Based Personalisation in Neural Machine Translation of Dialogue

    Neural machine translation (NMT) has revolutionised automatic translation and has been instrumental in reducing costs and improving productivity within the translation industry. However, contemporary NMT systems are still primarily designed to translate isolated sentences, disregarding crucial contextual information in the process. This lack of context awareness frequently leads to assumptions about the most likely interpretation of the source text, potentially propagating harmful biases learned from the training data, such as assuming that the average participant in a conversation is male. In the dialogue domain, where the meaning of an utterance may vary depending on what was said before, the environment, the individuals involved, their relationship, and more, translations produced by context-agnostic systems often fall short in capturing the nuances of specific characters or situations. This thesis expands the understanding of contextual NMT and explores its potential applications, with a focus on personalisation. Our methods challenge the prevailing context-agnostic strategy in machine translation and seek to address the aforementioned issues. Our research suggests that by integrating available contextual information into the translation process we can enhance the quality of translation hypotheses. Additionally, we demonstrate that one type of information can be effectively leveraged to enable manipulation of another. Our experiments involve adapting machine translation systems to individual speakers and productions, focusing on combinations of their individual characteristics rather than relying on discrete labels. We also explore personalisation of language models based on context information expressed in this way: to personalise a model for a particular character, we use a combination of their traits. These personalised language models are then used in an evaluation scenario where the context specificity of machine translation hypotheses is expressed as the pointwise mutual information between the proposed text and its original context. Finally, our best personalised NMT system is thoroughly evaluated in a professional multi-modal setting of translating subtitles for TV series on two language pairs: English-to-German and English-to-French. Throughout the thesis, we report on experiments with various types of context in a setting of translation between English and a range of European languages. Our chosen domain is dialogue extracted from TV series and films, due to the availability of context-rich datasets as well as the potential practical application of this research to the work of the industrial partner to this PhD, ZOO Digital. Our research tackles five primary challenges: (i) direct incorporation of extra-textual information into neural machine translation systems, (ii) zero-shot and few-shot control of this information, (iii) reference-free evaluation and analysis of contextual NMT, (iv) personalisation of language models (LMs) and NMT systems using rich sets of speaker and film metadata annotations, and (v) human evaluation of machine translation in a professional post-editing setting. By addressing these challenges, this thesis aims to enhance machine translation in dialogue by ensuring translations are better suited to the specific characters, addressees, and contextual factors involved. The research contributes to the advancement of NMT systems that can effectively account for the personalised nature of dialogue.
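
    The context-specificity measure described above can be approximated with any language model that yields token log-probabilities. The sketch below estimates PMI(hypothesis; context) = log p(hypothesis | context) - log p(hypothesis) with an off-the-shelf causal LM; the model choice ("gpt2") is a stand-in assumption for the personalised language models used in the thesis.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # "gpt2" is an illustrative stand-in, not the thesis's personalised LM.
        tok = AutoTokenizer.from_pretrained("gpt2")
        lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

        def log_prob(text, context=""):
            """Summed log-probability of `text` tokens, optionally conditioned on `context`."""
            ctx_ids = tok(tok.bos_token + context, return_tensors="pt").input_ids
            txt_ids = tok(text, return_tensors="pt").input_ids
            ids = torch.cat([ctx_ids, txt_ids], dim=1)
            with torch.no_grad():
                logp = torch.log_softmax(lm(ids).logits, dim=-1)
            # Logits at position k-1 predict the token at position k, so we score
            # only the positions belonging to `text`.
            return sum(logp[0, k - 1, ids[0, k]].item()
                       for k in range(ctx_ids.shape[1], ids.shape[1]))

        def context_specificity(hypothesis, context):
            """PMI(hypothesis; context) = log p(hypothesis | context) - log p(hypothesis)."""
            return log_prob(hypothesis, context) - log_prob(hypothesis)

    A hypothesis that becomes much more probable once its original context is supplied receives a high score, i.e., it is specific to that context rather than a generic translation.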

    A Study on Context-Aware Document-Level Neural Machine Translation

    Ph.D. dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, College of Engineering, February 2022; advisor: 정교민. Neural machine translation (NMT) has attracted great attention in recent years, as it has yielded state-of-the-art translation quality. Despite their promising results, many current NMT systems are sentence-level, translating each sentence independently. This ignores textual context and thus produces inadequate and inconsistent translations at the document level. To overcome these shortcomings, context-aware NMT (CNMT), which takes contextual sentences as additional input, has been proposed. This dissertation proposes novel methods for improving CNMT systems, as well as an application of CNMT. We first tackle the efficient modeling of multiple contextual sentences in the CNMT encoder. For this purpose, we propose a hierarchical context encoder that encodes contextual sentences from the token level up to the sentence level. This novel architecture enables the model to achieve state-of-the-art translation quality while taking less computation time for training and translation than existing methods. Secondly, we investigate the training of CNMT models, most of which rely on a negative log-likelihood (NLL) objective that does not fully exploit contextual dependencies. To overcome this insufficiency, we introduce coreference-based contrastive learning for CNMT, which generates contrastive examples from coreference chains between the source and target sentences. The proposed method improves the pronoun resolution accuracy of CNMT models as well as overall translation quality. Finally, we investigate an application of CNMT to Korean honorifics, which depend on contextual information for adequate translation. For the English-Korean translation task, we propose to use CNMT models that capture crucial contextual information in the English source document, and we adopt a context-aware post-editing system to exploit context in the Korean target sentences, resulting in more consistent Korean honorific translations.
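
    A minimal PyTorch sketch of a hierarchical context encoder in the spirit described above: each context sentence is encoded at the token level, pooled into a single vector, and the pooled vectors are then encoded at the sentence level. Dimensions, layer counts, and mean pooling are illustrative assumptions, not the dissertation's exact configuration.

        import torch
        import torch.nn as nn

        class HierarchicalContextEncoder(nn.Module):
            """Encode context sentences token by token, pool each to one vector,
            then encode the pooled vectors at the sentence level."""

            def __init__(self, vocab_size, d_model=512, nhead=8, nlayers=2):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, d_model)
                self.token_enc = nn.TransformerEncoder(
                    nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), nlayers)
                self.sent_enc = nn.TransformerEncoder(
                    nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), nlayers)

            def forward(self, ctx_tokens):
                # ctx_tokens: (batch, n_sents, seq_len) token ids of the context sentences
                b, n, s = ctx_tokens.shape
                tok_states = self.token_enc(self.embed(ctx_tokens.reshape(b * n, s)))
                sent_vecs = tok_states.mean(dim=1).reshape(b, n, -1)  # mean-pool per sentence
                return self.sent_enc(sent_vecs)  # (batch, n_sents, d_model) context states

    The returned sentence-level context states can then be attended over by the translation decoder alongside the source encoding, which is what keeps the cost of handling multiple context sentences low.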