
    Argumentation Mining in User-Generated Web Discourse

    The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source codes, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task. Comment: Cite as: Habernal, I. & Gurevych, I. (2017). Argumentation Mining in User-Generated Web Discourse. Computational Linguistics 43(1), pp. 125-179.
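
    The abstract does not describe the learning setup in detail; purely as an illustration of the classification framing, the sketch below treats argument component identification as sentence-level labeling (claim / premise / none) with a bag-of-words baseline. The label set, toy sentences, and scikit-learn pipeline are assumptions made here for illustration, not the paper's actual models or data.

```python
# Minimal sketch: argument component identification as sentence classification.
# Labels and toy examples are illustrative assumptions, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Homeschooling should be banned because children miss out on socialisation.",
    "My neighbour's kids are homeschooled and they struggle to make friends.",
    "The weather was nice last weekend.",
]
train_labels = ["claim", "premise", "none"]  # hypothetical component labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_sentences, train_labels)

# Classify a new sentence from the same (toy) discussion.
print(clf.predict(["Schools provide social contact that home education cannot replace."]))
```

    The actual work operates on argument component spans in noisy Web text with richer features; this toy pipeline only makes the supervised-classification framing concrete.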

    Connective-Lex: A Web-Based Multilingual Lexical Resource for Connectives

    In this paper, we present a tangible outcome of the TextLink network: a joint online database project displaying and linking existing and newly-created lexicons of discourse connectives in multiple languages. We discuss the definition and demarcation of the class of connectives that should be included in such a resource, and present the syntactic, semantic/pragmatic, and lexicographic information we collected. Further, the technical implementation of the database and the search functionality are presented. We discuss how the multilingual integration of several connective lexicons provides added value for linguistic researchers and other users interested in connectives, by allowing crosslinguistic comparison and a direct linking between discourse relational devices in different languages. Finally, we provide pointers for possible future extensions both in breadth (i.e., by adding lexicons for additional languages) and depth (by extending the information provided for each connective item and by strengthening the crosslinguistic links).
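
    The abstract describes the resource rather than its data model; the snippet below is a hypothetical sketch of how a linked multilingual connective entry and a cross-lingual lookup might look. All field names, sense labels, and example entries are our assumptions and do not reflect the actual Connective-Lex schema.

```python
# Hypothetical sketch of a multilingual connective lexicon entry and a
# cross-lingual lookup; field names are illustrative, not Connective-Lex's schema.
from dataclasses import dataclass, field

@dataclass
class ConnectiveEntry:
    lemma: str
    language: str
    syntactic_category: str           # e.g. subordinating conjunction, adverbial
    discourse_relations: list         # e.g. PDTB-style sense labels
    translations: dict = field(default_factory=dict)  # language code -> linked lemma

lexicon = [
    ConnectiveEntry("because", "en", "subordinating conjunction",
                    ["Contingency.Cause.Reason"], {"de": "weil", "fr": "parce que"}),
    ConnectiveEntry("weil", "de", "subordinating conjunction",
                    ["Contingency.Cause.Reason"], {"en": "because"}),
]

def find_equivalents(lemma: str, source_lang: str, target_lang: str):
    """Return target-language connectives linked to a source-language lemma."""
    return [e.translations[target_lang]
            for e in lexicon
            if e.language == source_lang and e.lemma == lemma
            and target_lang in e.translations]

print(find_equivalents("because", "en", "de"))  # ['weil']
```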

    Deep Learning for Text Style Transfer: A Survey

    Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and recently has re-gained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey. Comment: Computational Linguistics Journal, 2022.
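
    As a toy illustration of the kind of automatic evaluation the survey discusses (attribute control checked by a classifier, content preservation measured against the source), the sketch below scores one candidate transfer. The tiny classifier, the overlap measure, and the example sentences are assumptions made here for illustration, not metrics prescribed by the survey.

```python
# Toy sketch of common automatic evaluation for text style transfer:
# (1) attribute accuracy via a separately trained style classifier,
# (2) content preservation via token overlap with the source.
# Classifier, data, and overlap measure are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

style_clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
style_clf.fit(
    ["thanks so much, this is wonderful", "this is terrible and useless",
     "really happy with the result", "what a complete waste of time"],
    ["polite", "rude", "polite", "rude"],
)

def evaluate_transfer(source: str, output: str, target_style: str) -> dict:
    """Score one transferred sentence for attribute control and content overlap."""
    style_ok = style_clf.predict([output])[0] == target_style
    src, out = set(source.lower().split()), set(output.lower().split())
    content_overlap = len(src & out) / max(len(src), 1)
    return {"style_ok": bool(style_ok), "content_overlap": round(content_overlap, 2)}

print(evaluate_transfer("this is terrible and useless",
                        "this is wonderful and useful", "polite"))
```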

    Crowdsource Annotation and Automatic Reconstruction of Online Discussion Threads

    Modern communication relies on electronic messages organized in the form of discussion threads. Emails, IMs, SMS, website comments, and forums are all composed of threads, which consist of individual user messages connected by metadata and discourse coherence to messages from other users. Threads are used to display user messages effectively in a GUI such as an email client, providing a background context for understanding a single message. Many messages are meaningless without the context provided by their thread. However, a number of factors may result in missing thread structure, ranging from user error (replying to the wrong message), to missing metadata (some email clients do not produce/save headers that fully encapsulate thread structure, and conversion of archived threads from one repository to another may also result in lost metadata), to covert use (users may avoid metadata to render discussions difficult for third parties to understand). In the field of security, law enforcement agencies may obtain vast collections of discussion turns that require automatic thread reconstruction to understand. For example, the Enron Email Corpus, obtained by the Federal Energy Regulatory Commission during its investigation of the Enron Corporation, has no inherent thread structure.

    In this thesis, we will use natural language processing approaches to reconstruct threads from message content. Reconstruction based on message content sidesteps the problem of missing metadata, permitting post hoc reorganization and discussion understanding. We will investigate corpora of email threads and Wikipedia discussions. However, there is a scarcity of annotated corpora for this task. Therefore, we also investigate issues faced when creating crowdsourced datasets and learning statistical models from them. Several of our findings are applicable to other natural language machine classification tasks, beyond thread reconstruction.

    We will divide our investigation of discussion thread reconstruction into two parts. First, we explore techniques needed to create a corpus for our thread reconstruction research. Like other NLP pairwise classification tasks such as Wikipedia discussion turn/edit alignment and sentence pair text similarity rating, email thread disentanglement is a heavily class-imbalanced problem, and although the advent of crowdsourcing has reduced annotation costs, the common practice of crowdsourcing redundancy is too expensive for class-imbalanced tasks. As the first contribution of this thesis, we evaluate alternative strategies for reducing crowdsourcing annotation redundancy for class-imbalanced NLP tasks. We also examine techniques to learn the best machine classifier from our crowdsourced labels. In order to reduce noise in training data, most natural language crowdsourcing annotation tasks gather redundant labels and aggregate them into an integrated label, which is provided to the classifier. However, aggregation discards potentially useful information from linguistically ambiguous instances. For the second contribution of this thesis, we show that, for four of five natural language tasks, filtering of the training dataset based on crowdsource annotation item agreement improves task performance, while soft labeling based on crowdsource annotations does not improve task performance. Second, we investigate thread reconstruction as divided into the tasks of thread disentanglement and adjacency recognition.
We present the Enron Threads Corpus, a newly-extracted corpus of 70,178 multi-email threads with emails from the Enron Email Corpus. In the original Enron Emails Corpus, emails are not sorted by thread. To disentangle these threads, and as the third contribution of this thesis, we perform pairwise classification, using text similarity measures on non-quoted texts in emails. We show that i) content text similarity metrics outperform style and structure text similarity metrics in both a class-balanced and class-imbalanced setting, and ii) although feature performance is dependent on the semantic similarity of the corpus, content features are still effective even when controlling for semantic similarity. To reconstruct threads, it is also necessary to identify adjacency relations among pairs. For the forum of Wikipedia discussions, metadata is not available, and dialogue act typologies, helpful for other domains, are inapplicable. As our fourth contribution, via our experiments, we show that adjacency pair recognition can be performed using lexical pair features, without a dialogue act typology or metadata, and that this is robust to controlling for topic bias of the discussions. Yet, lexical pair features do not effectively model the lexical semantic relations between adjacency pairs. To model lexical semantic relations, and as our fifth contribution, we perform adjacency recognition using extracted keyphrases enhanced with semantically related terms. While this technique outperforms a most frequent class baseline, it fails to outperform lexical pair features or tf-idf weighted cosine similarity. Our investigation shows that this is the result of poor word sense disambiguation and poor keyphrase extraction causing spurious false positive semantic connections. In concluding this thesis, we also reflect on open issues and unanswered questions remaining after our research contributions, discuss applications for thread reconstruction, and suggest some directions for future work
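
    To make the pairwise content-similarity idea concrete, the sketch below scores email pairs by tf-idf cosine similarity over their non-quoted bodies and thresholds the score into a same-thread decision. The toy emails, the quote-stripping heuristic, and the fixed threshold are illustrative assumptions; the thesis trains classifiers over a richer feature set.

```python
# Minimal sketch of pairwise thread-disentanglement scoring: tf-idf cosine
# similarity between the non-quoted bodies of two emails, thresholded into a
# same-thread decision. Threshold and toy data are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def strip_quoted(text: str) -> str:
    """Drop quoted material (naively: lines starting with '>')."""
    return "\n".join(l for l in text.splitlines() if not l.lstrip().startswith(">"))

emails = [
    "Can we move the budget review to Thursday?\n> earlier quoted text",
    "Thursday works for the budget review, I'll book a room.",
    "The fantasy football draft is tonight, don't forget.",
]

vectors = TfidfVectorizer().fit_transform([strip_quoted(e) for e in emails])
sims = cosine_similarity(vectors)

THRESHOLD = 0.2  # assumed cut-off; the thesis learns a classifier instead
for i in range(len(emails)):
    for j in range(i + 1, len(emails)):
        print(f"emails {i} and {j}: sim={sims[i, j]:.2f}, "
              f"same thread? {sims[i, j] > THRESHOLD}")
```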

    Knowledge Selection and Ranking Methods Leveraging Conversational Characteristics in Knowledge-Grounded Conversation

    Doctoral dissertation -- Graduate School of Seoul National University, College of Engineering, Department of Electrical and Computer Engineering, August 2022. Advisor: Sang-goo Lee.

    A knowledge grounded conversation (KGC) model aims to generate informative responses relevant to both the conversation history and external knowledge. One of the most important parts of a KGC model is finding the knowledge that provides the basis on which the responses are grounded. If the model selects inappropriate knowledge, it may produce responses that are irrelevant or lack knowledge. In this dissertation, we study methods of leveraging conversational characteristics to select or rank the knowledge for knowledge grounded conversation. In particular, this dissertation presents two novel methods: one focuses on the sequential structure of multi-turn conversation, and the other on utilizing the local context and topic of a long conversation. We first propose two knowledge selection strategies, one of which preserves the sequential matching features while the other encodes the sequential nature of the conversation. Second, we propose a novel knowledge ranking model that composes an appropriate range of relevant documents by exploiting both the topic keywords and the local context of a conversation. In addition, we apply the knowledge ranking model to quote recommendation with our new quote recommendation framework that provides hard negative samples to the model. Our experimental results show that the KGC models based on our proposed knowledge selection and ranking methods outperform competitive models in terms of groundedness and relevance.
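
    As a rough, non-neural illustration of ranking knowledge with both signals described above (the local context and the topic keywords of a conversation), the sketch below scores candidate passages by a weighted combination of tf-idf similarities. The weight, the keyword string, and the passages are assumptions for illustration; the dissertation's actual model is a learned dual-matching re-ranker.

```python
# Illustrative sketch of dual-signal knowledge ranking: combine similarity to
# the local context (last turn) with similarity to topic keywords.
# Weights, keywords, and passages are assumptions, not the dissertation's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "Paris is served by two major international airports.",
    "The Louvre in Paris houses the Mona Lisa.",
]
local_context = "Who built the Eiffel Tower and when was it finished?"
topic_keywords = "Paris landmarks Eiffel Tower"  # assumed topic keywords

vec = TfidfVectorizer().fit(passages + [local_context, topic_keywords])
P = vec.transform(passages)
ctx_sim = cosine_similarity(vec.transform([local_context]), P)[0]
topic_sim = cosine_similarity(vec.transform([topic_keywords]), P)[0]

alpha = 0.7  # assumed weight on the local context
scores = alpha * ctx_sim + (1 - alpha) * topic_sim
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.2f}  {passage}")
```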

    Interactional Slingshots: Providing Support Structure to User Interactions in Hybrid Intelligence Systems

    The proliferation of artificial intelligence (AI) systems has enabled us to engage more deeply and powerfully with our digital and physical environments, from chatbots to autonomous vehicles to robotic assistive technology. Unfortunately, these state-of-the-art systems often fail in contexts that require human understanding, are never-before-seen, or are complex. In such cases, though AI-only approaches cannot solve the full task, their ability to solve a piece of the task can be combined with human effort to become more robust in handling complexity and uncertainty. A hybrid intelligence system—one that combines human and machine skill sets—can make intelligent systems more operable in real-world settings. In this dissertation, we propose the idea of using interactional slingshots as a means of providing support structure to user interactions in hybrid intelligence systems. Much like how gravitational slingshots provide boosts to spacecraft en route to their final destinations, so do interactional slingshots provide boosts to user interactions en route to solving tasks. Several challenges arise: What does this support structure look like? How much freedom does the user have in their interactions? How is user expertise paired with that of the machine? To study this as a tractable socio-technical problem, we explore this idea in the context of data annotation problems, especially in those domains where AI methods fail to solve the overall task. Getting annotated (labeled) data is crucial for successful AI methods, and becomes especially difficult in domains where AI fails, since problems in such domains require human understanding to fully solve, but also present challenges related to annotator expertise, annotation freedom, and context curation from the data. To explore data annotation problems in this space, we develop techniques and workflows whose interactional slingshot support structure harnesses the user's interaction with data. First, we explore providing support in the form of nudging non-expert users' interactions as they annotate text data for the task of creating conversational memory. Second, we add support structure in the form of assisting non-expert users during the annotation process itself for the task of grounding natural language references to objects in 3D point clouds. Finally, we supply support in the form of guiding expert and non-expert users both before and during their annotations for the task of conversational disentanglement across multiple domains. We demonstrate that building hybrid intelligence systems with each of these interactional slingshot support mechanisms—nudging, assisting, and guiding a user's interaction with data—improves annotation outcomes, such as annotation speed, accuracy, and effort level, even when annotators' expertise and skill levels vary. Thesis Statement: By providing support structure that nudges, assists, and guides user interactions, it is possible to create hybrid intelligence systems that enable more efficient (faster and/or more accurate) data annotation. PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163138/1/sairohit_1.pd

    Efficient Neural Methods for Coreference Resolution

    Coreference resolution is a core task in natural language processing and in creating language technologies. Neural methods and models for automatically resolving references have emerged and developed over the last several years. This progress is largely marked by continuous improvements on a single dataset and metric. In this thesis, the assumptions that underlie these improvements are shown to be unrealistic for real-world use due to the computational and data tradeoffs made to achieve apparently high performance. The thesis outlines and proposes solutions to three issues. First, to address the growing memory requirements and restrictions on input document length, a novel, constant memory neural model for coreference resolution is proposed and shown to attain performance comparable to contemporary models. Second, to address the failure of these models to generalize across datasets, continued training is evaluated and shown to be successful for transferring coreference resolution models between domains and languages. Finally, to combat the gains obtained via the use of increasingly large pretrained language models, multitask model pruning can be applied to maintain a single (small) model for multiple datasets. These methods reduce the computational cost of running a model and the annotation cost of creating a model for any arbitrary dataset. As real-world applications continue to demand resolution of coreference, methods that reduce the technical cost of training new models and making predictions are greatly desired, which this thesis addresses
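
    Purely to make the bounded-memory idea concrete, the sketch below clusters mentions incrementally into a fixed number of entity slots using toy embedding vectors. The capacity, similarity threshold, update rule, and toy vectors are our assumptions and are not the thesis's neural model.

```python
# Sketch of constant-memory incremental coreference: mentions arrive one at a
# time and are linked to at most MAX_ENTITIES running entity representations.
# Toy mention vectors, threshold, and update rule are illustrative assumptions.
import numpy as np

MAX_ENTITIES = 2      # fixed memory budget
THRESHOLD = 0.8       # assumed similarity cut-off for linking

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

entities = []         # list of (mean_vector, count), never longer than MAX_ENTITIES
assignments = []      # entity index chosen for each mention

mentions = [np.array(v, dtype=float) for v in
            [[1.0, 0.1], [0.9, 0.2], [0.0, 1.0], [0.1, 0.9]]]  # toy embeddings

for m in mentions:
    sims = [cosine(vec, m) for vec, _ in entities]
    best = int(np.argmax(sims)) if sims else -1
    if sims and (sims[best] >= THRESHOLD or len(entities) == MAX_ENTITIES):
        vec, n = entities[best]                      # link to an existing entity
        entities[best] = ((vec * n + m) / (n + 1), n + 1)
        assignments.append(best)
    else:
        entities.append((m, 1))                      # open a new entity slot
        assignments.append(len(entities) - 1)

print(assignments)  # [0, 0, 1, 1] with these toy vectors
```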