
    A Deep Network Model for Paraphrase Detection in Short Text Messages

    This paper is concerned with paraphrase detection. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication, and question answering. Given two sentences, the objective is to detect whether they are semantically identical. An important insight from this work is that existing paraphrase systems perform well on clean texts but do not necessarily deliver good performance on noisy texts. Challenges with paraphrase detection on user-generated short texts, such as tweets, include language irregularity and noise. To cope with these challenges, we propose a novel deep neural network-based approach that relies on coarse-grained sentence modeling using a convolutional neural network and a long short-term memory model, combined with a specific fine-grained word-level similarity matching model. Our experimental results show that the proposed approach outperforms existing state-of-the-art approaches on user-generated noisy social media data, such as Twitter texts, and achieves highly competitive performance on a cleaner corpus.
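
    A minimal sketch of the coarse-grained sentence modeling idea described above, assuming a PyTorch implementation: a CNN over word embeddings feeds an LSTM, and a cosine-similarity head compares the two sentence vectors. All layer sizes and names are illustrative assumptions, and the fine-grained word-level matching component is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, channels=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution captures local n-gram features of the sentence
        self.conv = nn.Conv1d(embed_dim, channels, kernel_size=3, padding=1)
        # the LSTM aggregates the convolved features into one sentence vector
        self.lstm = nn.LSTM(channels, hidden_dim, batch_first=True)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        x = F.relu(self.conv(x.transpose(1, 2)))  # (batch, channels, seq_len)
        _, (h, _) = self.lstm(x.transpose(1, 2))  # final hidden state
        return h[-1]                              # (batch, hidden_dim)

encoder = SentenceEncoder()
s1 = torch.randint(0, 10000, (2, 20))  # two toy sentences, 20 token ids each
s2 = torch.randint(0, 10000, (2, 20))
score = F.cosine_similarity(encoder(s1), encoder(s2))  # paraphrase score per pair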

    The Structured Process Modeling Method (SPMM) : what is the best way for me to construct a process model?

    More and more organizations turn to the construction of process models to support strategic and operational tasks. At the same time, reports indicate quality issues for a considerable part of these models, caused by modeling errors. Therefore, the research described in this paper investigates the development of a practical method to determine and train an optimal process modeling strategy that aims to decrease the number of cognitive errors made during modeling. Such cognitive errors originate in inadequate cognitive processing caused by the inherent complexity of constructing process models. The method helps modelers derive their personal cognitive profile and the related optimal cognitive strategy that minimizes these cognitive failures. The contribution of the research consists of the conceptual method and an automated modeling strategy selection and training instrument. These two artefacts were positively evaluated in a laboratory experiment covering multiple modeling sessions and involving a total of 149 master's students at Ghent University.

    Modelling Discourse-related terminology in OntoLingAnnot’s ontologies

    Recently, computational linguists have shown great interest in discourse annotation in an attempt to capture the internal relations in texts. With this aim, we have formalized the linguistic knowledge associated with discourse into different linguistic ontologies. In this paper, we present the most prominent discourse-related terms and concepts included in the ontologies of the OntoLingAnnot annotation model. They show the different units, values, attributes, relations, layers, and strata included in the discourse annotation level of the OntoLingAnnot model, within which these ontologies are included, used, and evaluated.
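
    Purely as an illustration of the notions enumerated above (units, attributes, relations, layers, strata), the following Python sketch models them as plain data classes. All class and field names here are assumptions made for the example; the actual OntoLingAnnot ontologies are formal linguistic ontologies, not Python objects.

from dataclasses import dataclass, field

@dataclass
class DiscourseUnit:
    text_span: str
    attributes: dict = field(default_factory=dict)  # attribute-value pairs

@dataclass
class DiscourseRelation:
    name: str                # e.g. a coherence relation such as "Cause"
    source: DiscourseUnit
    target: DiscourseUnit

@dataclass
class AnnotationLayer:
    stratum: str             # the stratum this layer belongs to
    units: list = field(default_factory=list)
    relations: list = field(default_factory=list)

u1 = DiscourseUnit("The weather was awful,")
u2 = DiscourseUnit("so the match was cancelled.")
layer = AnnotationLayer("discourse", [u1, u2],
                        [DiscourseRelation("Cause", u1, u2)])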

    Neural Discourse Structure for Text Categorization

    We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses. Comment: ACL 2017 camera-ready version.
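
    The core idea, bottom-up recursive composition over an RST-style tree with attention selecting salient nodes, can be sketched as follows. This is a minimal PyTorch illustration assuming a toy tree encoding and arbitrary dimensions; it is not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 64
compose = nn.Linear(2 * DIM, DIM)  # combines two child vectors into a parent
attend = nn.Linear(DIM, 1)         # scores each node's salience

def encode(tree):
    """tree is either a leaf vector or a pair (left_subtree, right_subtree)."""
    if isinstance(tree, torch.Tensor):
        return tree, [tree]
    left, left_nodes = encode(tree[0])
    right, right_nodes = encode(tree[1])
    parent = torch.tanh(compose(torch.cat([left, right])))
    return parent, left_nodes + right_nodes + [parent]

# toy discourse tree over three elementary-discourse-unit vectors
edus = [torch.randn(DIM) for _ in range(3)]
_, nodes = encode(((edus[0], edus[1]), edus[2]))

stacked = torch.stack(nodes)                         # (num_nodes, DIM)
weights = F.softmax(attend(stacked).squeeze(-1), 0)  # attention over nodes
text_repr = weights @ stacked                        # salience-weighted text vector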

    Deep Multitask Learning for Semantic Dependency Parsing

    We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms. By using efficient, nearly arc-factored inference and a bidirectional LSTM composed with a multi-layer perceptron, our base system is able to significantly improve the state of the art for semantic dependency parsing, without using hand-engineered features or syntax. We then explore two multitask learning approaches: one that shares parameters across formalisms, and one that uses higher-order structures to predict the graphs jointly. We find that both approaches improve performance across formalisms on average, achieving a new state of the art. Our code is open-source and available at https://github.com/Noahs-ARK/NeurboParser. Comment: Proceedings of ACL 2017.
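
    The arc-factored part of the description above can be sketched in PyTorch: a bidirectional LSTM contextualizes the tokens, and a multi-layer perceptron scores every candidate head-dependent pair independently. Dimensions, the thresholding rule, and the omission of arc labels and formalism-specific parameters are simplifying assumptions for the example.

import torch
import torch.nn as nn

class ArcScorer(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # MLP over the concatenated [head; dependent] token representations
        self.mlp = nn.Sequential(nn.Linear(4 * hidden_dim, hidden_dim),
                                 nn.ReLU(), nn.Linear(hidden_dim, 1))

    def forward(self, token_ids):                     # (batch, n)
        h, _ = self.bilstm(self.embed(token_ids))     # (batch, n, 2*hidden)
        n = h.size(1)
        heads = h.unsqueeze(2).expand(-1, -1, n, -1)  # head at position i
        deps = h.unsqueeze(1).expand(-1, n, -1, -1)   # dependent at position j
        pairs = torch.cat([heads, deps], dim=-1)      # (batch, n, n, 4*hidden)
        return self.mlp(pairs).squeeze(-1)            # arc scores (batch, n, n)

scores = ArcScorer()(torch.randint(0, 10000, (1, 6)))
graph = scores > 0  # keep arcs with positive score: a toy decision rule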