138 research outputs found

    A Recurrent Neural Model with Attention for the Recognition of Chinese Implicit Discourse Relations

    Full text link
    We introduce an attention-based Bi-LSTM for Chinese implicit discourse relations and demonstrate that modeling argument pairs as a joint sequence can outperform word order-agnostic approaches. Our model benefits from a partial sampling scheme and is conceptually simple, yet achieves state-of-the-art performance on the Chinese Discourse Treebank. We also visualize its attention activity to illustrate the model's ability to selectively focus on the relevant parts of an input sequence. Comment: To appear at ACL 2017; code available at https://github.com/sronnqvist/discourse-ablst
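    The core of such a model is attention pooling: the Bi-LSTM's hidden states are scored against a query vector, the scores are softmax-normalized, and the weighted sum becomes the sequence representation. A minimal stdlib-only sketch of that pooling step, using toy hand-written vectors in place of real Bi-LSTM states (the function name and shapes are illustrative assumptions, not the paper's implementation):

    ```python
    import math

    def attention_pool(hidden_states, query):
        """Score each hidden state against a query vector, softmax the
        scores, and return (weights, weighted sum of hidden states).
        hidden_states: list of equal-length float vectors; query: float vector."""
        scores = [sum(h_i * q_i for h_i, q_i in zip(h, query))
                  for h in hidden_states]
        m = max(scores)                       # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]   # softmax over time steps
        dim = len(hidden_states[0])
        pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
                  for d in range(dim)]
        return weights, pooled

    # Toy example: three 2-dimensional "hidden states".
    states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    weights, pooled = attention_pool(states, query=[1.0, 1.0])
    ```

    Visualizing `weights` over the input tokens is what yields the attention heat maps the abstract refers to.
    
    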

    CRPC-DB – A Discourse Bank for Portuguese

    Get PDF

    Linking discourse marker inventories

    Get PDF
    The paper describes the first comprehensive edition of machine-readable discourse marker lexicons. Discourse markers such as and, because, but, though or thereafter are essential communicative signals in human conversation, as they indicate how an utterance relates to its communicative context. Because much of this information is implicit or expressed differently across languages, discourse parsing, context-adequate natural language generation and machine translation are considered particularly challenging aspects of Natural Language Processing. Providing this data in machine-readable, standard-compliant form will thus facilitate such technical tasks and, moreover, allow techniques for translation inference to be explored for this particular group of lexical resources, which was previously largely neglected in the context of Linguistic Linked (Open) Data.
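    A machine-readable marker lexicon of the kind described can be pictured as entries mapping each marker to the discourse senses it may signal, queryable in both directions. A minimal sketch with plain dicts (the field names and sense labels here are illustrative assumptions, not the lexicons' actual schema):

    ```python
    # Toy lexicon: marker -> language and the discourse senses it can signal.
    lexicon = {
        "because":    {"language": "en", "senses": ["Cause"]},
        "but":        {"language": "en", "senses": ["Contrast", "Concession"]},
        "though":     {"language": "en", "senses": ["Concession"]},
        "thereafter": {"language": "en", "senses": ["Succession"]},
    }

    def markers_for_sense(lexicon, sense):
        """Inverse lookup: all markers that can signal a given sense."""
        return sorted(marker for marker, entry in lexicon.items()
                      if sense in entry["senses"])
    ```

    Linking such inventories across languages then reduces to aligning entries that share sense labels, which is what makes translation inference over these resources feasible.
    
    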

    Towards unification of discourse annotation frameworks

    Get PDF
    Funding: The author is funded by a University of St Andrews-China Scholarship Council joint scholarship (NO.202008300012).
    Discourse information is difficult to represent and annotate. Among the major frameworks for annotating discourse information, RST, PDTB and SDRT are widely discussed and used, each having its own theoretical foundation and focus. Corpora annotated under different frameworks vary considerably. To make better use of the existing discourse corpora and achieve the possible synergy of different frameworks, it is worthwhile to investigate the systematic relations between the frameworks and devise methods of unifying them. Although framework unification has long been a topic of discussion, there is currently no comprehensive approach that unifies both discourse structure and discourse relations and evaluates the unified framework intrinsically and extrinsically. We plan to use automatic means for the unification task and evaluate the result with structural complexity measures and downstream tasks. We will also explore the application of the unified framework in multi-task learning and graphical models.

    How compatible are our discourse annotation frameworks? Insights from mapping RST-DT and PDTB annotations

    Get PDF
    Discourse-annotated corpora are an important resource for the community, but they are often annotated according to different frameworks. This makes joint usage of the annotations difficult, preventing researchers from searching the corpora in a unified way, or from using all annotated data jointly to train computational systems. Several theoretical proposals have recently been made for mapping the relational labels of different frameworks to each other, but these proposals have so far not been validated against existing annotations. The two largest discourse relation annotated resources, the Penn Discourse Treebank and the Rhetorical Structure Theory Discourse Treebank, have however been annotated on the same texts, allowing for a direct comparison of the annotation layers. We propose a method for automatically aligning the discourse segments, and then evaluate existing mapping proposals by comparing the empirically observed mappings against the proposed ones. Our analysis highlights the influence of segmentation on subsequent discourse relation labelling, and shows that while agreement between frameworks is reasonable for explicit relations, agreement on implicit relations is low. We identify several sources of systematic discrepancies between the two annotation schemes and discuss consequences for future annotation and for usage of the existing resources.
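    One natural way to align segments annotated under two schemes on the same text is to map each segment to the segment in the other scheme with which it shares the most tokens. A stdlib-only sketch of such a greedy overlap alignment, with segments as (start, end) token-offset pairs (this is an assumption for illustration; the paper's exact alignment procedure may differ):

    ```python
    def overlap(a, b):
        """Number of tokens shared by two half-open spans (start, end)."""
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    def align_segments(segs_a, segs_b):
        """Greedily map each segment index in segs_a to the index of the
        maximally overlapping segment in segs_b (None if no overlap)."""
        alignment = {}
        for i, a in enumerate(segs_a):
            best, best_ov = None, 0
            for j, b in enumerate(segs_b):
                ov = overlap(a, b)
                if ov > best_ov:
                    best, best_ov = j, ov
            alignment[i] = best
        return alignment

    # Two segmentations of the same 10-token text with a shifted boundary.
    rst_segs = [(0, 5), (5, 10)]
    pdtb_segs = [(0, 4), (4, 10)]
    mapping = align_segments(rst_segs, pdtb_segs)
    ```

    Once segments are aligned, the relation labels attached to corresponding segment pairs can be compared directly, which is what makes the empirical evaluation of the mapping proposals possible.
    
    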

    Multilingual Extension of PDTB-Style Annotation: The Case of TED Multilingual Discourse Bank

    Get PDF
    We introduce TED-Multilingual Discourse Bank, a corpus of TED talk transcripts in 6 languages (English, German, Polish, European Portuguese, Russian and Turkish), where the ultimate aim is to provide a clearly described level of discourse structure and semantics in multiple languages. The corpus is manually annotated following the goals and principles of PDTB, involving explicit and implicit discourse connectives, entity relations, alternative lexicalizations and no relations. In the corpus, we also aim to capture the characteristics of spoken language that exist in the transcripts and adapt the PDTB scheme according to our aims; for example, we introduce hypophora. We spot other aspects of spoken discourse, such as the discourse marker use of connectives, to keep them distinct from their discourse connective use. TED-MDB is, to the best of our knowledge, one of the few multilingual discourse treebanks and is hoped to be a source of parallel data for contrastive linguistic analysis as well as language technology applications. We describe the corpus, the annotation procedure and provide preliminary corpus statistics.