    Semantics as a gateway to language

    This paper presents an account of semantics as a system that integrates conceptual representations into language. I define the semantic system as an interface level of the conceptual system CS that translates conceptual representations into a format that is accessible to language. The analysis I put forward does not treat the makeup of this level as idiosyncratic, but subsumes it under a unified notion of linguistic interfaces. This allows us to understand core aspects of the linguistic-conceptual interface as an instance of a general pattern underlying the correlation of linguistic and non-linguistic structures. In doing so, the model aims to provide a broader perspective on the distinction between, and interaction of, conceptual and linguistic processes, and on the correlation of semantic and syntactic structures.

    Resolving Lexical Ambiguity in Tensor Regression Models of Meaning

    This paper provides a method for improving tensor-based compositional distributional models of meaning by adding an explicit disambiguation step prior to composition. In contrast with previous research, where this hypothesis was tested against relatively simple compositional models, here we use a robust model trained with linear regression. The results of two experiments show the superiority of the prior disambiguation method and suggest that the effectiveness of this approach is model-independent.
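
    As an illustration of the general scheme, a minimal Python sketch of "disambiguate first, then compose" follows. It assumes each verb sense comes with a regression-trained composition matrix and a centroid vector for sense selection; all names, shapes, and data are hypothetical, not the paper's actual implementation.

        import numpy as np

        def cosine(u, v):
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

        def disambiguate(context_vec, senses):
            # Pick the verb sense whose centroid best matches the context.
            return max(senses, key=lambda s: cosine(context_vec, s["centroid"]))

        def compose(subject_vec, object_vec, senses):
            # Step 1: explicit disambiguation prior to composition.
            context = (subject_vec + object_vec) / 2.0
            sense = disambiguate(context, senses)
            # Step 2: compose with the chosen sense's regression-trained map
            # (a matrix standing in for a higher-order tensor).
            args = np.concatenate([subject_vec, object_vec])
            return sense["tensor"] @ args

        # Toy usage with random 50-dimensional vectors and two senses.
        rng = np.random.default_rng(0)
        senses = [{"centroid": rng.normal(size=50),
                   "tensor": rng.normal(size=(50, 100))} for _ in range(2)]
        sentence_vec = compose(rng.normal(size=50), rng.normal(size=50), senses)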

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations constituting our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part takes the form of a dialogue between two fictional characters: Ernest, the 'experimenter', and Mary, the 'computational modeller'. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Multimodal Grounding for Language Processing

    This survey discusses how recent developments in multimodal processing facilitate the conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs, which play a crucial role in the compositional power of language.
    Comment: The paper has been published in the Proceedings of the 27th International Conference on Computational Linguistics. Please refer to this version for citations: https://www.aclweb.org/anthology/papers/C/C18/C18-1197
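
    To make the surveyed "methods for combining multimodal representations" concrete, the sketch below shows three standard fusion strategies: concatenation, weighted addition, and sigmoid gating. The functions and dimensions are illustrative assumptions, not taken from the survey itself.

        import numpy as np

        def fuse_concat(text_vec, image_vec):
            # Early fusion: simple concatenation of the two modalities.
            return np.concatenate([text_vec, image_vec])

        def fuse_weighted(text_vec, image_vec, alpha=0.5):
            # Additive fusion; assumes equal dimensionality.
            return alpha * text_vec + (1 - alpha) * image_vec

        def fuse_gated(text_vec, image_vec):
            # Gated fusion: the text modulates the visual channel element-wise.
            gate = 1.0 / (1.0 + np.exp(-text_vec))  # sigmoid gate
            return gate * image_vec

        # Toy usage with 4-dimensional stand-ins for real embeddings.
        t, v = np.ones(4), np.arange(4.0)
        print(fuse_concat(t, v).shape, fuse_weighted(t, v), fuse_gated(t, v))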

    Inducing a Semantically Annotated Lexicon via EM-Based Clustering

    We present a technique for the automatic induction of slot annotations for subcategorization frames, based on the induction of hidden classes in the EM framework of statistical estimation. The models are empirically evaluated by a general decision test. Induction of slot labeling for subcategorization frames is accomplished by a further application of EM, applied experimentally to frame observations derived from parsing large corpora. We outline an interpretation of the learned representations as theoretical-linguistic decompositional lexical entries.
    Comment: 8 pages, uses colacl.sty. Proceedings of the 37th Annual Meeting of the ACL, 1999
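
    The latent-class EM scheme behind this kind of lexicon induction can be sketched as follows: model verb-noun co-occurrences as p(v, n) = sum_c p(c) p(v|c) p(n|c) and re-estimate parameters from expected counts. The toy counts and class number below are illustrative, not the paper's exact estimation setup.

        import numpy as np

        def em_latent_classes(counts, n_classes, n_iter=50, seed=0):
            # counts: (n_verbs, n_nouns) co-occurrence matrix.
            rng = np.random.default_rng(seed)
            V, N = counts.shape
            p_c = np.full(n_classes, 1.0 / n_classes)
            p_v = rng.dirichlet(np.ones(V), size=n_classes)  # p(v|c)
            p_n = rng.dirichlet(np.ones(N), size=n_classes)  # p(n|c)
            for _ in range(n_iter):
                # E-step: posterior p(c | v, n) for every verb-noun pair.
                joint = p_c[:, None, None] * p_v[:, :, None] * p_n[:, None, :]
                post = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)
                # M-step: re-estimate parameters from expected counts.
                exp_counts = post * counts[None, :, :]
                raw_c = exp_counts.sum(axis=(1, 2))
                p_v = exp_counts.sum(axis=2) / (raw_c[:, None] + 1e-12)
                p_n = exp_counts.sum(axis=1) / (raw_c[:, None] + 1e-12)
                p_c = raw_c / raw_c.sum()
            return p_c, p_v, p_n

        # Toy verb-noun counts with two plausible hidden classes.
        counts = np.array([[5., 1., 0.],
                           [4., 2., 0.],
                           [0., 1., 6.]])
        p_c, p_v, p_n = em_latent_classes(counts, n_classes=2)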

    Metaphoric coherence: Distinguishing verbal metaphor from 'anomaly'

    Get PDF
    Theories and computational models of metaphor comprehension generally circumvent the question of metaphor versus “anomaly” in favor of a treatment of metaphor versus literal language. Making the distinction between metaphoric and “anomalous” expressions is subject to wide variation in judgment, yet humans agree that some potentially metaphoric expressions are much more comprehensible than others. In the context of a program which interprets simple isolated sentences that are potential instances of cross-modal and other verbal metaphor, I consider some possible coherence criteria which must be satisfied for an expression to be “conceivable” metaphorically. Metaphoric constraints on object nominals are represented as abstracted or extended along with the invariant structural components of the verb meaning in a metaphor. This approach distinguishes what is preserved in metaphoric extension from what is “violated”, thus speaking to both “similarity” and “dissimilarity” views of metaphor. The role and potential limits of represented abstracted properties and constraints are discussed as they relate to the recognition of incoherent semantic combinations and the rejection or adjustment of metaphoric interpretations.
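
    A hypothetical toy rendering of the kind of coherence test described here: a nominal that violates a verb's literal selectional constraint but satisfies its abstracted form is classified as metaphoric; one that satisfies neither is rejected as anomalous. The type hierarchy and example below are invented for illustration and are not the program's actual representations.

        # Tiny is-a hierarchy; each concept points to its parent type.
        PARENT = {"liquid": "substance", "substance": "entity",
                  "idea": "abstraction", "abstraction": "entity",
                  "rock": "substance"}

        def is_a(concept, category):
            # Walk up the hierarchy until we hit the category or the root.
            while concept is not None:
                if concept == category:
                    return True
                concept = PARENT.get(concept)
            return False

        def classify(literal_constraint, noun_type, abstracted_constraint):
            if is_a(noun_type, literal_constraint):
                return "literal"
            if is_a(noun_type, abstracted_constraint):
                return "metaphoric"   # constraint preserved in abstracted form
            return "anomalous"        # incoherent semantic combination

        # "The ideas flowed": 'flow' literally selects a liquid, but the
        # abstracted constraint (here simply 'entity') still holds for
        # 'idea', so the reading is metaphoric rather than anomalous.
        print(classify("liquid", "idea", "entity"))   # -> metaphoric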

    Gated-Attention Architectures for Task-Oriented Language Grounding

    To perform tasks specified by natural language instructions, autonomous agents need to extract semantically meaningful representations of language and map them to visual elements and actions in the environment. This problem is called task-oriented language grounding. We propose an end-to-end trainable neural architecture for task-oriented language grounding in 3D environments which assumes no prior linguistic or perceptual knowledge and requires only raw pixels from the environment and the natural language instruction as input. The proposed model combines the image and text representations using a Gated-Attention mechanism and learns a policy to execute the natural language instruction using standard reinforcement and imitation learning methods. We show the effectiveness of the proposed model on unseen instructions as well as unseen maps, both quantitatively and qualitatively. We also introduce a novel environment based on a 3D game engine to simulate the challenges of task-oriented language grounding over a rich set of instructions and environment states.
    Comment: To appear in AAAI-18
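
    A minimal sketch of the fusion step in the spirit of the Gated-Attention mechanism: a sigmoid gate computed from the instruction embedding scales each image feature map element-wise (a Hadamard product). All shapes, layer sizes, and inputs are illustrative assumptions; this is not the authors' released code.

        import numpy as np

        def gated_attention(image_feats, instr_embedding, W, b):
            # image_feats: (channels, H, W) conv features of the observation.
            # instr_embedding: (d,) embedding of the instruction sentence.
            # W, b: projection from the instruction to one gate per channel.
            gate = 1.0 / (1.0 + np.exp(-(W @ instr_embedding + b)))  # (channels,)
            # Expand the gate over the spatial dimensions and multiply.
            return image_feats * gate[:, None, None]

        rng = np.random.default_rng(0)
        fused = gated_attention(rng.normal(size=(64, 8, 8)),   # conv features
                                rng.normal(size=(128,)),       # instruction
                                rng.normal(size=(64, 128)), np.zeros(64))
        # 'fused' would feed the policy trained with RL / imitation learning.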