36 research outputs found

    Discourse relations and defeasible knowledge

    This paper presents a formal account of the temporal interpretation of text. The distinct natural interpretations of texts with similar syntax are explained in terms of defeasible rules characterising causal laws and Gricean-style pragmatic maxims. Intuitively compelling patterns of defeasible entailment that are supported by the logic in which the theory is expressed are shown to underlie temporal interpretation

    Algorithms for Analysing the Temporal Structure of Discourse

    We describe a method for analysing the temporal structure of a discourse which takes into account the effects of tense, aspect, temporal adverbials and rhetorical structure, and which minimises unnecessary ambiguity in the temporal structure. It is part of a discourse grammar implemented in Carpenter's ALE formalism. The method for building up the temporal structure of the discourse combines constraints and preferences: we use constraints to reduce the number of possible structures, exploiting the HPSG type hierarchy and unification for this purpose; and we apply preferences to choose between the remaining options using a temporal centering mechanism. We end by recommending that an underspecified representation of the structure using these techniques be used to avoid generating the temporal/rhetorical structure until higher-level information can be used to disambiguate. (EACL '95, 8 pages)
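The constrain-then-prefer strategy described above can be sketched in miniature. This is a hypothetical toy, not the paper's ALE/HPSG implementation: candidate temporal structures are orderings of eventualities, hard constraints (standing in for tense and rhetorical-structure effects) prune the candidates, and a numeric preference (standing in for temporal centering) ranks the survivors.

```python
from itertools import permutations

def analyse(events, constraints, preference):
    """Filter candidate orderings by hard constraints, then rank by preference."""
    candidates = [list(p) for p in permutations(events)]
    # Constraints prune impossible temporal structures outright.
    viable = [c for c in candidates if all(con(c) for con in constraints)]
    # Preferences merely choose among what the constraints leave.
    return max(viable, key=preference)

# Toy example (rules invented for illustration): a tense constraint
# forces e1 before e3; the preference keeps e1 and e2 adjacent.
events = ["e1", "e2", "e3"]
constraints = [lambda c: c.index("e1") < c.index("e3")]
preference = lambda c: -abs(c.index("e1") - c.index("e2"))
best = analyse(events, constraints, preference)  # -> ['e1', 'e2', 'e3']
```

The division of labour matters: constraints can never be violated, while preferences only break ties among the viable structures, which is what lets an underspecified representation delay commitment.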

    A Hierarchical Neural Autoencoder for Paragraphs and Documents

    Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent network models. In this paper, we explore an important step toward this generation task: training an LSTM (Long Short-Term Memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserves syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization. (Code for the three models described in this paper can be found at www.stanford.edu/~jiweil/)

    An Ontology Design Pattern for Representing Causality

    The causal pattern is a proposed ontology design pattern for representing the structure of causal relations in a knowledge graph. This pattern is grounded in the concepts defined and used by the CausalAI community, i.e., Causal Bayesian Networks and do-calculus. Specifically, the pattern models three primary concepts: (1) causal relations, (2) causal event roles, and (3) causal effect weights. Two use cases involving a sprinkler system and asthma patients are provided along with their relevant competency questions
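The pattern's three primary concepts can be illustrated with a minimal data-model sketch. This is a hypothetical rendering in plain Python, not the authors' actual OWL axiomatisation; the sprinkler use case comes from the abstract, but the weight value is invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """An eventuality that can fill a causal event role."""
    name: str

@dataclass(frozen=True)
class CausalRelation:
    """One causal relation: two role fillers plus a causal effect weight."""
    cause: Event    # event filling the "cause" role
    effect: Event   # event filling the "effect" role
    weight: float   # causal effect weight, e.g. estimated via do-calculus

# Sprinkler use case, with an invented weight for illustration.
wet_grass = CausalRelation(Event("SprinklerOn"), Event("GrassWet"), 0.9)
```

Reifying the relation as its own node (rather than a bare edge) is what makes room for the effect weight and the explicit role assignments in a knowledge graph.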

    From Narrative to Visual Narrative to Audiovisual Narrative: the Multimodal Discourse Theory Connection (Invited Talk)

    Models of narrative have been proposed from many perspectives and most of these nowadays promote further the notion that narrative is a transmedial phenomenon: i.e., stories can be told making use of distinct and multiple forms of expressions. This raises a range of theoretical and practical questions, as well as rendering the task of providing computational models of narrative both more interesting and more challenging. Central to this endeavour are issues concerned with the potential mutual conditioning of narrative forms and the media employed. Methods are required for isolating narrative properties and mechanisms that may be generalised across media, while at the same time appropriately respecting differences in medial affordances. In this discussion paper I set out a corresponding approach to characterising narrative that draws on a fine-grained formal characterisation of multimodal discourse developed on the basis of both functional and formal linguistic models of discourse, generalised to the multimodal case. After briefly setting out the theoretical principles on which the account builds, I position narrative with respect to the framework and give an example of how audiovisual narratives such as film are accounted for. It will be suggested that a common anchoring in a well specified notion of discourse as an intrinsically multimodal phenomenon offers beneficial new angles on how narratives can be modelled, as well as establishing bridges between humanistic understandings of narrative and complementary computational accounts of narratives involving communicative goal-based planning

    Implicatures and hierarchies of presumptions

    Implicatures are described as particular forms of reasoning from best explanation, in which the paradigm of possible explanations consists of the possible semantic interpretations of a sentence or a word. The need for explanation will be shown to be triggered by conflicts between presumptions, namely the hearer's dialogical expectations and the presumptive sentence meaning. What counts as the best explanation can be established on the grounds of hierarchies of presumptions, dependent on dialogue types and interlocutors' culture

    Reflections on Tense and Aspect in Different Sequence Types and in Different Discourse Genres

    This article presents comparative results from an investigation of temporal and aspectual properties in texts of different sequence types (namely narrative and descriptive; cf. Adam (2001)) and, simultaneously, in texts of different discourse genres. It is argued that aspectual classes (cf. Moens (1987)) and the temporal relations among eventualities depend upon the choice of sequence type: narrative sequences include mainly events and the temporal relation of precedence; descriptive sequences include mainly states and the temporal relation of overlap. It is also pointed out that narrative sequences present a more complex temporal and aspectual structure than descriptive sequences. Furthermore, the results of the analysis of two narrative sequences belonging to two different discourse genres suggest that discourse genre also plays an important role in determining the temporal and aspectual properties of a textual sequence

    Pragmatics and word meaning

    In this paper, we explore the interaction between lexical semantics and pragmatics. We argue that linguistic processing is informationally encapsulated and utilizes relatively simple ‘taxonomic’ lexical semantic knowledge. On this basis, defeasible lexical generalisations deliver defeasible parts of logical form. In contrast, pragmatic inference is open-ended and involves arbitrary real-world knowledge. Two axioms specify when pragmatic defaults override lexical ones. We demonstrate that modelling this interaction allows us to achieve a more refined interpretation of words in a discourse context than either the lexicon or pragmatics could do on their own.

    Knowledge, Causality and Temporal Representation

    In this paper, a formal semantic framework is developed in order to account for the temporal semantics of text. The theory is able to represent and reason about both semantic issues, which are independent of world knowledge (wk), and pragmatic issues, which are not, within a single logical framework. The theory will allow a text's semantic entailments to differ from its pragmatic ones, even though they are all derived within the same logic. I demonstrate that this feature of the theory gives rise to solutions to several puzzles concerning the temporal structure of text.

    1 The Problem

    The purpose of this paper is to provide a formal account of the temporal semantics of text. The chief goal is to explain when a text is temporally coherent: it should not mislead the reader as to the order of the events reported. If John hits Max, causing Max to turn round (to face John), then text (1) reflects this while (2) distorts it:

    (1) John hit Max. Max turned round.
    (2) Max turned roun..
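The interaction between the narrative default ("events follow text order") and a causal law can be sketched as a toy resolution rule. This is an illustrative stand-in, not the paper's logic; the rule names and representation are invented.

```python
# Known causal laws as (cause, effect) pairs: hitting can cause
# turning round, so the cause must temporally precede the effect.
CAUSAL_LAWS = {("hit", "turn_round")}

def temporal_order(first, second):
    """Order two narrated events: a causal law defeats the narrative default."""
    if (second, first) in CAUSAL_LAWS:
        # The later-narrated event is a known cause of the earlier one,
        # so the causal law overrides text order.
        return [second, first]
    # Narrative default: events occur in the order they are narrated.
    return [first, second]

# Text (1) "John hit Max. Max turned round."  -> hit precedes turn
assert temporal_order("hit", "turn_round") == ["hit", "turn_round"]
# Text (2) narrates the turn first, but the causal law still puts the hit first,
# which is why (2) reads as temporally incoherent for this scenario.
assert temporal_order("turn_round", "hit") == ["hit", "turn_round"]
```

The point of the toy is the defeasibility: the narrative default holds only in the absence of a conflicting, more specific causal law.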

    COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

    Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs (CSKG) has been central to these advances as their diverse facts can be used and referenced by machine learning models for tackling new and challenging tasks. At the same time, there remain questions about the quality and coverage of these resources due to the massive scale required to comprehensively encompass general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents. Therefore, we propose a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pretrained language models. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains ~12 absolute points lower than a BART-based knowledge model trained on ATOMIC 2020 despite using over 430x fewer parameters
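The knowledge such a CSKG stores can be pictured as (head, relation, tail) triples. The sketch below is hypothetical: the relation names `xIntent` and `oReact` follow ATOMIC's published inventory, but the specific triples and the lookup helper are invented for illustration.

```python
# Invented ATOMIC-style triples: a head event, a commonsense relation,
# and a free-text tail inference.
triples = [
    ("PersonX pays PersonY a compliment", "xIntent", "to be nice"),
    ("PersonX pays PersonY a compliment", "oReact", "flattered"),
]

def tails(head, relation, kg):
    """Return every tail inference the KG stores for a (head, relation) query."""
    return [t for h, r, t in kg if h == head and r == relation]

intents = tails("PersonX pays PersonY a compliment", "xIntent", triples)
# -> ['to be nice']
```

A knowledge model in the paper's sense is trained to produce plausible tails for heads the graph has never seen, which is exactly what a fixed triple store like this cannot do on its own.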