
    Annotating the Structure and Semantics of Fables


    Studying discourse structures: practical and methodological concerns

    This paper deals with problems related to discourse analysis within the framework of corpus linguistics, through a linguistic study of procedurality in discourse. Because the study does not concern a specific lexical item, it is difficult to collect data without any preconceived idea, in other words without introducing a bias into the study. The paper proposes a method to address these problems, involving several annotators working on the same texts and merging their proposals to obtain an objective, unified annotation. We show that this step is an integral part of the overall linguistic analysis.

    Annotating the meaning of discourse connectives by looking at their translation: The translation-spotting technique

    The various meanings of discourse connectives like while and however are difficult to identify and annotate, even for trained human annotators. This problem is all the more important because connectives are salient textual markers of cohesion and need to be correctly interpreted for many NLP applications. In this paper, we suggest an alternative route to a reliable annotation of connectives, making use of the information provided by their translations in large parallel corpora. This method thus replaces the difficult explicit reasoning involved in traditional sense annotation with an empirical clustering of the senses emerging from the translations. We argue that this method has the advantage of providing more reliable reference data than traditional sense annotation. In addition, its simplicity allows for the rapid construction of large annotated datasets.
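
    The translation-spotting idea sketched in the abstract can be illustrated with a few lines of Python: occurrences of a connective are grouped by the translation equivalent they receive in an aligned corpus, so that each group approximates one sense. The sentence fragments and French equivalents below are illustrative examples, not data from the paper.

```python
from collections import defaultdict

# (English context containing "while", aligned translation of the connective).
# Toy parallel data for illustration only.
aligned_occurrences = [
    ("while he slept", "pendant que"),    # temporal reading
    ("while I agree", "bien que"),        # concessive reading
    ("while she waited", "pendant que"),  # temporal reading
    ("while it is cheap", "alors que"),   # contrastive reading
]

def cluster_by_translation(occurrences):
    """Group occurrences of a connective by their translation equivalent,
    so each cluster approximates one emerging sense."""
    clusters = defaultdict(list)
    for context, translation in occurrences:
        clusters[translation].append(context)
    return dict(clusters)

clusters = cluster_by_translation(aligned_occurrences)
```

    The appeal of this design is that the annotator never has to reason about sense labels directly; the bilingual signal does the disambiguation, and the clusters can later be mapped onto a sense inventory if needed.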

    The biomedical discourse relation bank

    Background: Identification of discourse relations, such as causal and contrastive relations, between situations mentioned in text is an important task for biomedical text-mining. A biomedical text corpus annotated with discourse relations would be very useful for developing and evaluating methods for biomedical discourse processing. However, little effort has been made to develop such an annotated resource.
    Results: We have developed the Biomedical Discourse Relation Bank (BioDRB), in which we have annotated explicit and implicit discourse relations in 24 open-access full-text biomedical articles from the GENIA corpus. Guidelines for the annotation were adapted from the Penn Discourse TreeBank (PDTB), which has discourse relations annotated over open-domain news articles. We introduced new conventions and modifications to the sense classification. We report reliable inter-annotator agreement of over 80% for all sub-tasks. Experiments for identifying the sense of explicit discourse connectives show the connective itself to be a highly reliable indicator for coarse sense classification (accuracy 90.9% and F1 score 0.89). These results are comparable to results obtained with the same classifier on the PDTB data. With more refined sense classification, performance degrades (accuracy 69.2% and F1 score 0.28), mainly due to sparsity in the data. The size of the corpus was found to be sufficient for identifying the sense of explicit connectives, with classifier performance stabilizing at about 1900 training instances. Finally, the classifier performs poorly when trained on PDTB and tested on BioDRB (accuracy 54.5% and F1 score 0.57).
    Conclusion: Our work shows that discourse relations can be reliably annotated in biomedical text. Coarse sense disambiguation of explicit connectives can be done with high reliability by using just the connective as a feature, but more refined sense classification requires either richer features or more annotated data. The poor performance of a classifier trained in the open domain and tested in the biomedical domain suggests significant differences in the semantic usage of connectives across these domains, provides robust evidence for a biomedical sublanguage for discourse, and underscores the need to develop a specialized biomedical discourse-annotated corpus. The results of our cross-domain experiments are consistent with related work on identifying connectives in BioDRB.
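
    The finding that the connective alone is a strong feature for coarse sense classification corresponds to a very simple baseline: map each connective to its most frequent sense in the training data. The sketch below shows that baseline; the training pairs are invented for illustration and are not BioDRB or PDTB annotations.

```python
from collections import Counter, defaultdict

# Toy (connective, coarse sense) training pairs; illustrative only.
train = [
    ("because", "Contingency"), ("because", "Contingency"),
    ("however", "Comparison"), ("however", "Comparison"),
    ("then", "Temporal"), ("then", "Temporal"), ("then", "Expansion"),
]

def fit_majority_sense(pairs):
    """Map each connective to its most frequent sense in the training data."""
    counts = defaultdict(Counter)
    for connective, sense in pairs:
        counts[connective][sense] += 1
    return {c: senses.most_common(1)[0][0] for c, senses in counts.items()}

model = fit_majority_sense(train)
```

    A baseline like this captures the coarse-sense regularity the abstract reports, but it plainly cannot separate the finer senses of an ambiguous connective such as "then", which is where richer features or more annotated data become necessary.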

    Corpus-driven Semantics of Concession: Where do Expectations Come from?

    Concession is one of the trickiest semantic discourse relations appearing in natural language. Many have tried to sub-categorize Concession and to define formal criteria both to distinguish its subtypes and to distinguish Concession from the (similar) semantic relation of Contrast, but there is still a lack of consensus among the different proposals. In this paper, we focus on those approaches, e.g. (Lagerwerf 1998), (Winter & Rimon 1994), and (Korbayova & Webber 2007), assuming that Concession features two primary interpretations, "direct" and "indirect". We argue that this two-way classification falls short of accounting for the full range of variants identified in naturally occurring data. Our investigation of one thousand Concession tokens in the Penn Discourse Treebank (PDTB) reveals that the interpretation of concessive relations varies according to the source of expectation. Four sources of expectation are identified, each characterized by a different relation holding between the eventuality that raises the expectation and the eventuality describing the expectation. We report a) reliable inter-annotator agreement on the four types of sources identified in the PDTB data, b) a significant improvement on the annotation of previous disagreements on Concession-Contrast in the PDTB, and c) a novel logical account of Concession using basic constructs from Hobbs' (1998) logic. Our proposal offers a uniform framework for the interpretation of Concession while accounting for the different sources of expectation by modifying a single predicate in the proposed formulae.

    An exploratory study using the predicate-argument structure to develop methodology for measuring semantic similarity of radiology sentences

    Indiana University-Purdue University Indianapolis (IUPUI)
    The amount of information produced as electronic free text in healthcare is increasing to levels that humans cannot process to advance their professional practice. Information extraction (IE) is a sub-field of natural language processing whose goal is the data reduction of unstructured free text. Pertinent to IE is an annotated corpus that frames how IE methods should create the logical expression necessary for processing the meaning of text. Most annotation approaches seek to maximize meaning and knowledge by chunking sentences into phrases and mapping these phrases to a knowledge source to create a logical expression. However, these studies consistently have problems addressing semantics, and none has addressed the issue of semantic similarity (or synonymy) to achieve data reduction. A successful methodology for data reduction depends on a framework that can represent currently popular phrasal methods of IE but also fully represent the sentence. This study explores and reports on the benefits, problems, and requirements of using the predicate-argument statement (PAS) as that framework. A convenience sample from a prior study, with ten synsets of 100 unique sentences from radiology reports deemed by domain experts to mean the same thing, provides the text from which PAS structures are formed.

    Sense annotation in the Penn discourse treebank

    An important aspect of discourse understanding and generation involves the recognition and processing of discourse relations. These are conveyed by discourse connectives, i.e., lexical items like because and as a result, or by implicit connectives expressing an inferred discourse relation. The Penn Discourse TreeBank (PDTB) provides annotations of the argument structure, attribution, and semantics of discourse connectives. In this paper, we provide the rationale for the tagset, detailed descriptions of the senses with corpus examples, simple semantic definitions of each type of sense tag, as well as informal descriptions of the inferences allowed at each level.
