
    Computational Approaches to the Pragmatics Problem

    Cummins C, de Ruiter J. Computational Approaches to the Pragmatics Problem. Language and Linguistics Compass. 2014;8(4):133-143.
    Unlike many aspects of human language, pragmatics involves a systematic many-to-many mapping between form and meaning. This renders the computational problems of encoding and decoding meaning especially challenging, both for humans in normal conversation and for artificial dialogue systems that need to understand their users' input. A particularly striking example of this difficulty is the recognition of speech act or dialogue act types. In this review, we discuss why this is a problem and why its solution is potentially relevant both for our understanding of human interaction and for the implementation of artificial systems. We examine some of the theoretical and practical attempts that have been made to overcome this problem and consider how the field might develop in the near future.

    Teaching and learning guide for: Computational approaches to the pragmatics problem

    Cummins C, de Ruiter J. Teaching and Learning Guide for: Computational Approaches to the Pragmatics Problem. Language and Linguistics Compass. 2015;9(8):328-331.

    Reasoning About Pragmatics with Neural Listeners and Speakers

    We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics. Like previous learned approaches to language generation, our model uses a simple feature-driven architecture (here a pair of neural "listener" and "speaker" models) to ground language in the world. Like inference-driven approaches to pragmatics, our model actively reasons about listener behavior when selecting utterances. For training, our approach requires only ordinary captions, annotated without demonstration of the pragmatic behavior the model ultimately exhibits. In human evaluations on a referring expression game, our approach succeeds 81% of the time, compared to a 69% success rate using existing techniques.
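
    As a rough, hedged illustration of the selection step described above, the Python sketch below pairs a learned base speaker (which proposes candidate captions and scores their fluency) with a learned base listener (which scores how well a caption picks out the target scene rather than a distractor); the pragmatic speaker keeps the candidate the listener is most likely to resolve correctly. The class and method names are hypothetical stand-ins, not the authors' code.

from typing import List, Protocol

class BaseSpeaker(Protocol):
    # learned captioner: proposes candidate captions and scores their fluency
    def sample(self, scene, n: int) -> List[str]: ...
    def log_prob(self, caption: str, scene) -> float: ...

class BaseListener(Protocol):
    # learned listener: log-probability that `caption` refers to `target`
    # rather than `distractor`
    def log_prob_target(self, caption: str, target, distractor) -> float: ...

def pragmatic_caption(speaker: BaseSpeaker, listener: BaseListener,
                      target, distractor,
                      n_candidates: int = 100, rationality: float = 1.0) -> str:
    """Keep the candidate caption the listener is most likely to resolve to
    the target scene, traded off against speaker fluency."""
    candidates = speaker.sample(target, n_candidates)

    def score(caption: str) -> float:
        return (rationality * listener.log_prob_target(caption, target, distractor)
                + speaker.log_prob(caption, target))

    return max(candidates, key=score)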

    Unified Pragmatic Models for Generating and Following Instructions

    We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks. Our pragmatics-enabled models reason about why speakers produce certain instructions, and about how listeners will react upon hearing them. Like previous pragmatic models, we use learned base listener and speaker models to build a pragmatic speaker that uses the base listener to simulate the interpretation of candidate descriptions, and a pragmatic listener that reasons counterfactually about alternative descriptions. We extend these models to tasks with sequential structure. Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans) in diverse settings. (NAACL 2018, camera-ready version.)
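
    To make the counterfactual reasoning concrete, the sketch below shows the listener side only: a base listener proposes candidate action sequences for an instruction, and a base speaker rescores each candidate by how plausibly it would have produced that instruction for it; the pragmatic speaker is the symmetric construction with the roles swapped. The interfaces are assumptions for illustration, not the paper's implementation.

from typing import List, Protocol, Sequence

class BaseListener(Protocol):
    # learned instruction follower: propose and score candidate action sequences
    def sample_actions(self, instruction: str, n: int) -> List[Sequence[str]]: ...
    def log_prob(self, actions: Sequence[str], instruction: str) -> float: ...

class BaseSpeaker(Protocol):
    # learned instruction generator: how likely is this instruction as a
    # description of the given action sequence?
    def log_prob(self, instruction: str, actions: Sequence[str]) -> float: ...

def pragmatic_interpret(listener: BaseListener, speaker: BaseSpeaker,
                        instruction: str, n_candidates: int = 20,
                        speaker_weight: float = 1.0) -> Sequence[str]:
    """Choose the candidate action sequence the base speaker would most
    plausibly have described with the observed instruction."""
    candidates = listener.sample_actions(instruction, n_candidates)

    def score(actions: Sequence[str]) -> float:
        return (speaker_weight * speaker.log_prob(instruction, actions)
                + listener.log_prob(actions, instruction))

    return max(candidates, key=score)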

    On an Intuitionistic Logic for Pragmatics

    We reconsider the pragmatic interpretation of intuitionistic logic [21] regarded as a logic of assertions and their justifications and its relations with classical logic. We recall an extension of this approach to a logic dealing with assertions and obligations, related by a notion of causal implication [14, 45]. We focus on the extension to co-intuitionistic logic, seen as a logic of hypotheses [8, 9, 13], and on polarized bi-intuitionistic logic as a logic of assertions and conjectures: looking at the S4 modal translation, we give a definition of a system AHL of bi-intuitionistic logic that correctly represents the duality between intuitionistic and co-intuitionistic logic, correcting a mistake in previous work [7, 10]. A computational interpretation of co-intuitionism as a distributed calculus of coroutines is then used to give an operational interpretation of subtraction. Work on linear co-intuitionism is then recalled, a linear calculus of co-intuitionistic coroutines is defined, and a probabilistic interpretation of linear co-intuitionism is given as in [9]. We also remark that by extending the language of intuitionistic logic we can express the notion of expectation, an assertion that in all situations the truth of p is possible, and that in a logic of expectations the law of double negation holds. Similarly, extending co-intuitionistic logic, we can express the notion of conjecture that p, defined as a hypothesis that in some situation the truth of p is epistemically necessary.
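
    To make the last two notions concrete, the following is a rough modal gloss of the kind the abstract alludes to via the S4 translation; it is an expository assumption under the standard reading of assertions and hypotheses, not a reproduction of the AHL system defined in the paper.

% Illustrative S4-style glosses (expository assumption, not the paper's AHL system)
\[
  \text{assertion that } p \;\rightsquigarrow\; \Box p
  \qquad
  \text{hypothesis that } p \;\rightsquigarrow\; \Diamond p
\]
\[
  \text{expectation of } p \;\rightsquigarrow\; \Box\Diamond p
  \quad\text{(``in all situations the truth of $p$ is possible'')}
\]
\[
  \text{conjecture that } p \;\rightsquigarrow\; \Diamond\Box p
  \quad\text{(``in some situation the truth of $p$ is necessary'')}
\]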

    A preliminary bibliography on focus

    [I]n its present form, the bibliography contains approximately 1100 entries. Bibliographical work is never complete, and the present one is still modest in a number of respects. It is not annotated, and it still contains a lot of mistakes and inconsistencies. It has nevertheless reached a stage which justifies considering the possibility of making it available to the public. The first step towards this is its pre-publication in the form of this working paper. […] The bibliography is less complete for earlier years. For works before 1970, the bibliographies of Firbas and Golkova 1975 and Tyl 1970 may be consulted, which have not been included here.

    Cross-Lingual Adaptation using Structural Correspondence Learning

    Cross-lingual adaptation, a special case of domain adaptation, refers to the transfer of classification knowledge between two languages. In this article we describe an extension of Structural Correspondence Learning (SCL), a recently proposed algorithm for domain adaptation, for cross-lingual adaptation. The proposed method uses unlabeled documents from both languages, along with a word translation oracle, to induce cross-lingual feature correspondences. From these correspondences a cross-lingual representation is created that enables the transfer of classification knowledge from the source to the target language. The main advantages of this approach over other approaches are its resource efficiency and task specificity. We conduct experiments in the area of cross-language topic and sentiment classification involving English as source language and German, French, and Japanese as target languages. The results show a significant improvement of the proposed method over a machine translation baseline, reducing the relative error due to cross-lingual adaptation by an average of 30% (topic classification) and 59% (sentiment classification). We further report on empirical analyses that reveal insights into the use of unlabeled data, the sensitivity with respect to important hyperparameters, and the nature of the induced cross-lingual correspondences.
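
    The sketch below outlines this procedure under stated assumptions (it is not the authors' code): pivot pairs supplied by the word translation oracle define auxiliary occurrence-prediction problems over the unlabeled documents of both languages, and an SVD of the learned pivot predictors yields a low-dimensional cross-lingual space in which the source-language task classifier is trained and later applied to target-language documents. The input names and the choice of scikit-learn estimators are illustrative assumptions.

import numpy as np
from scipy.sparse.linalg import svds
from sklearn.linear_model import LogisticRegression, SGDClassifier

def learn_scl_projection(X_unlabeled, pivot_pairs, k=100):
    """Induce a k-dimensional cross-lingual projection (theta).

    X_unlabeled : sparse doc-term matrix over a joint source+target vocabulary
                  (unlabeled documents from both languages).
    pivot_pairs : list of (source_word_column, target_translation_column) pairs
                  chosen with the help of the word translation oracle.
    """
    weight_vectors = []
    for src_col, tgt_col in pivot_pairs:
        # auxiliary problem: does this pivot (or its translation) occur?
        occurs = np.asarray((X_unlabeled[:, src_col]
                             + X_unlabeled[:, tgt_col]).todense()).ravel() > 0
        X_aux = X_unlabeled.tolil(copy=True)
        X_aux[:, [src_col, tgt_col]] = 0              # mask the pivot itself
        clf = SGDClassifier(loss="modified_huber", alpha=1e-4)
        clf.fit(X_aux.tocsr(), occurs.astype(int))
        weight_vectors.append(clf.coef_.ravel())
    W = np.vstack(weight_vectors).T                   # (n_features, n_pivots)
    theta, _, _ = svds(W, k=k)                        # top-k left singular vectors
    return theta                                      # maps features -> k dimensions

def train_cross_lingual_classifier(X_source_labeled, y_source, theta):
    """Train the task classifier in the induced space; apply it to target
    documents later as clf.predict(X_target @ theta)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_source_labeled @ theta, y_source)
    return clf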