1,603 research outputs found

    A Montagovian Treatment of Modal Subordination

    Epistemic modality involves complex contextual dependencies that linguists have studied extensively (Roberts 1989; Veltman 1996). We show that these dependencies have a natural treatment within a continuation-style semantics.
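
    As a rough illustration of the continuation-style mechanism, the Python sketch below shows how a discourse referent introduced under a modal in one sentence (as in Roberts' classic "A wolf might come in. It would eat you first.") can be picked up by an anaphor in the next when sentence meanings thread a continuation. It is a toy under strong assumptions: the modal operators themselves are omitted, and all names and types are illustrative rather than taken from the paper.

        # Toy continuation-style semantics: a sentence meaning consumes a
        # context (discourse referents so far) and a continuation (the rest
        # of the discourse). Illustrative only; not the paper's formalism.
        from typing import Callable, List

        Context = List[str]                        # discourse referents introduced so far
        Kont = Callable[[Context], bool]           # continuation: the remaining discourse
        Meaning = Callable[[Context, Kont], bool]  # continuation-style sentence meaning

        def a_wolf_might_come_in(ctx: Context, k: Kont) -> bool:
            # Introduce a referent and evaluate the continuation against the
            # extended context, so later sentences can still refer to it
            # (the mechanism behind modal subordination).
            return k(ctx + ["wolf"])

        def it_would_eat_you(ctx: Context, k: Kont) -> bool:
            # Anaphoric: succeeds only if a "wolf" referent is in context.
            return "wolf" in ctx and k(ctx)

        def sequence(m1: Meaning, m2: Meaning) -> Meaning:
            # Discourse sequencing: m1's continuation runs m2.
            return lambda ctx, k: m1(ctx, lambda ctx2: m2(ctx2, k))

        discourse = sequence(a_wolf_might_come_in, it_would_eat_you)
        print(discourse([], lambda ctx: True))  # True: the anaphor resolves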

    Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax

    We show that both an LSTM and a unitary-evolution recurrent neural network (URN) can achieve encouraging accuracy on two types of syntactic pattern: context-free long-distance agreement, and mildly context-sensitive cross-serial dependencies. This work extends recent experiments on deeply nested context-free long-distance dependencies, with similar results. URNs differ from LSTMs in that they avoid non-linear activation functions and apply matrix multiplication to word embeddings encoded as unitary matrices. This permits them to retain all information from an input string over arbitrary distances, and it causes them to satisfy strict compositionality. URNs constitute a significant advance in the search for explainable models in deep learning applied to NLP.
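
    The core recurrence is simple enough to sketch. Below is a minimal NumPy illustration of the property the abstract describes: words encoded as unitary matrices, with the state evolving by matrix multiplication alone and no non-linear activation. The random-unitary embedding scheme and vocabulary are placeholders, not the paper's trained model.

        import numpy as np

        def random_unitary(n: int, rng: np.random.Generator) -> np.ndarray:
            # QR decomposition of a random complex matrix yields a unitary Q;
            # rescaling columns by unit phases keeps it unitary.
            z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
            q, r = np.linalg.qr(z)
            d = np.diagonal(r)
            return q * (d / np.abs(d))

        rng = np.random.default_rng(0)
        dim = 8
        vocab = {w: random_unitary(dim, rng)
                 for w in ["the", "dog", "dogs", "barks", "bark"]}

        def encode(sentence: list) -> np.ndarray:
            # Pure unitary composition: a product of unitaries is unitary,
            # so no information about earlier words is ever squashed away,
            # however long the input string is.
            state = np.eye(dim, dtype=complex)
            for w in sentence:
                state = vocab[w] @ state
            return state

        h = encode(["the", "dogs", "bark"])
        print(np.allclose(h @ h.conj().T, np.eye(dim)))  # True: unitarity preserved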

    Strategic Conversation

    Models of conversation that rely on a strong notion of cooperation don't apply to strategic conversation, that is, to conversation where the agents' motives don't align, such as courtroom cross-examination and political debate. We provide a game-theoretic framework for analysing both cooperative and strategic conversation. Our analysis features a new notion of safety that applies to implicatures: an implicature is safe when it can be reliably treated as a matter of public record. We explore the safety of implicatures within cooperative and non-cooperative settings. We then provide a symbolic model enabling us (i) to prove a correspondence result between a characterisation of conversation in terms of an alignment of players' preferences and one where Gricean principles of cooperative conversation, like Sincerity, hold, and (ii) to show when an implicature is safe and when it is not.
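
    To make the alignment idea concrete, here is a deliberately simple Python toy, not the paper's symbolic model: two players rank outcomes, and we check whether their orderings coincide on every pair. The intuition the abstract trades on is that under aligned preferences a speaker has no incentive to later disavow an implicature, so it can safely enter the public record, whereas under misaligned preferences (cross-examination, debate) it cannot.

        def aligned(prefs_a: dict, prefs_b: dict) -> bool:
            # Preferences align iff the players order every pair of outcomes
            # the same way (no pair on which their rankings conflict).
            outcomes = list(prefs_a)
            return all(
                (prefs_a[x] - prefs_a[y]) * (prefs_b[x] - prefs_b[y]) >= 0
                for x in outcomes for y in outcomes
            )

        cooperative = aligned({"reveal": 2, "conceal": 1},
                              {"reveal": 2, "conceal": 1})
        strategic = aligned({"reveal": 2, "conceal": 1},
                            {"reveal": 1, "conceal": 2})
        print(cooperative, strategic)  # True False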

    Expectation-based Comprehension: Modeling the Interaction of World Knowledge and Linguistic Experience

    The processing difficulty of each word we encounter in a sentence is affected by both our prior linguistic experience and our general knowledge about the world. Computational models of incremental language processing have, however, been limited in accounting for the influence of world knowledge. We develop an incremental model of language comprehension that constructs rich, probabilistic situation-model representations on a word-by-word basis. To quantify linguistic processing effort, we adopt Surprisal Theory, which asserts that the processing difficulty incurred by a word is proportional to its surprisal, the negative log-probability of the word given its context (Hale, 2001; Levy, 2008). In contrast with typical language-model implementations of surprisal, the proposed model instantiates a novel comprehension-centric metric of surprisal that reflects the likelihood of the unfolding utterance meaning as established after processing each word. Simulations demonstrate that linguistic experience and world knowledge are integrated in the model at the level of interpretation and combine in determining online expectations.
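
    The contrast between the two notions of surprisal can be written down directly. The sketch below uses invented toy probabilities for illustration; the functions are placeholders, not the paper's implementation.

        import math

        def lm_surprisal(p_word_given_prefix: float) -> float:
            # Standard Surprisal Theory (Hale, 2001; Levy, 2008): difficulty
            # at a word is its negative log probability given the prefix.
            return -math.log2(p_word_given_prefix)

        def comprehension_surprisal(p_meaning_after: float,
                                    p_meaning_before: float) -> float:
            # Comprehension-centric variant: surprisal over the probability
            # of the unfolding utterance *meaning* (the situation model),
            # updated as each word is processed.
            return -math.log2(p_meaning_after / p_meaning_before)

        # A frequent word (low form surprisal) can still force an unlikely
        # situation-model update (high meaning surprisal), and vice versa.
        print(lm_surprisal(0.25))                  # 2.0 bits
        print(comprehension_surprisal(0.02, 0.16)) # 3.0 bits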

    Commitments to Preferences in Dialogue

    We propose a method for modelling how dialogue moves influence and are influenced by the agents' preferences. By exploiting discourse structure, we extract constraints on preferences, and dependencies among them, even when they are expressed indirectly. Our method relies on a study of 20 dialogues chosen at random from the Verbmobil corpus. We then test the algorithm's predictions against the judgements of naive annotators on three unseen dialogues, also chosen at random. The average annotator-algorithm agreement and the average inter-annotator agreement show that our method is reliable.
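
    The abstract reports averaged agreement figures without fixing a metric here; as one plausible instantiation, the sketch below computes Cohen's kappa between the algorithm's labels and an annotator's on the same judgements. The label set and data are invented for illustration.

        from collections import Counter

        def cohens_kappa(labels_a: list, labels_b: list) -> float:
            # Chance-corrected agreement between two label sequences.
            assert len(labels_a) == len(labels_b)
            n = len(labels_a)
            observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
            ca, cb = Counter(labels_a), Counter(labels_b)
            expected = sum(ca[l] * cb[l]
                           for l in set(labels_a) | set(labels_b)) / n ** 2
            return (observed - expected) / (1 - expected)

        algo = ["prefers", "prefers", "depends", "none", "prefers"]
        annotator = ["prefers", "depends", "depends", "none", "prefers"]
        print(round(cohens_kappa(algo, annotator), 2))  # 0.69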