
    Semantic Composition via Probabilistic Model Theory

    Semantic composition remains an open problem for vector space models of semantics. In this paper, we explain how the probabilistic graphical model used in the framework of Functional Distributional Semantics can be interpreted as a probabilistic version of model theory. Building on this, we explain how various semantic phenomena can be recast in terms of conditional probabilities in the graphical model. This connection between formal semantics and machine learning is helpful in both directions: it gives us an explicit mechanism for modelling context-dependent meanings (a challenge for formal semantics), and also gives us well-motivated techniques for composing distributed representations (a challenge for distributional semantics). We present results on two datasets that go beyond word similarity, showing how these semantically-motivated techniques improve on the performance of vector models.
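The general idea of recasting composition as conditional probability can be illustrated with a toy sketch. This is not the paper's actual graphical model; the predicates, the single "edible" property, and the joint counts below are all invented for illustration:

```python
# Toy sketch: the "composed" meaning of applying two predicates to an
# entity is read off as a conditional probability over the entity's
# properties, estimated here from invented joint counts.

# (pred1_applies, pred2_applies, entity_is_edible) -> count
joint = {
    (True,  True,  True):  8,
    (True,  True,  False): 2,
    (True,  False, True):  3,
    (True,  False, False): 7,
    (False, True,  True):  4,
    (False, True,  False): 6,
    (False, False, True):  1,
    (False, False, False): 9,
}

def cond_prob(edible: bool, pred1: bool, pred2: bool) -> float:
    """P(edible | pred1, pred2) from the joint counts."""
    num = joint[(pred1, pred2, edible)]
    den = joint[(pred1, pred2, True)] + joint[(pred1, pred2, False)]
    return num / den

# Applying both predicates sharpens the probability of the property:
print(cond_prob(True, True, True))  # → 0.8
```

The point of the sketch is only the shape of the computation: phrase meaning as a conditional distribution over properties, rather than a single composed vector.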

    A Discriminative Model for Perceptually-Grounded Incremental Reference Resolution

    Kennington C, Dia L, Schlangen D. A Discriminative Model for Perceptually-Grounded Incremental Reference Resolution. In: Proceedings of the 11th International Conference on Computational Semantics (IWCS) 2015. 2015: 195-205.

    A large part of human communication involves referring to entities in the world, and often these entities are objects that are visually present for the interlocutors. A computer system that aims to resolve such references needs to tackle a complex task: objects and their visual features must be determined, the referring expressions must be recognised, extra-linguistic information such as eye gaze or pointing gestures must be incorporated --- and the intended connection between words and world must be reconstructed. In this paper, we introduce a discriminative model of reference resolution that processes incrementally (i.e., word for word), is perceptually-grounded, and improves when interpolated with information from gaze and pointing gestures. We evaluated our model and found that it performed robustly in a realistic reference resolution task, when compared to a generative model.
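The incremental, word-by-word scoring idea can be sketched as follows. This is a hypothetical toy, not the paper's trained model: the objects, visual features, and word-to-feature weights are all invented:

```python
# Sketch of incremental, perceptually grounded reference resolution:
# candidate objects accumulate scores word by word, so a current best
# hypothesis is available after every word.

# Candidate objects described by simple (invented) visual features.
objects = {
    "obj1": {"red": 1.0, "round": 1.0},
    "obj2": {"red": 0.0, "round": 1.0},
    "obj3": {"red": 1.0, "round": 0.0},
}

# Hypothetical learned word-to-feature weights.
word_weights = {
    "red":  {"red": 2.0},
    "ball": {"round": 1.5},
}

def resolve_incrementally(words):
    scores = {name: 0.0 for name in objects}
    for word in words:                        # process word for word
        weights = word_weights.get(word, {})  # unknown words add nothing
        for name, feats in objects.items():
            scores[name] += sum(w * feats.get(f, 0.0)
                                for f, w in weights.items())
        # The running best hypothesis could trigger early system reactions.
        print(word, "->", max(scores, key=scores.get))
    return max(scores, key=scores.get)

resolve_incrementally(["the", "red", "ball"])  # resolves to "obj1"
```

A discriminative model would learn such weights from data; the sketch only shows why incremental processing yields a usable interpretation before the utterance is complete.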

    Interfacing language, spatial perception and cognition in Type Theory with Records

    We argue that computational modelling of perception, action, language, and cognition introduces several requirements on a formal semantic theory and its practical implementations. Using examples of semantic representations of spatial descriptions, we show how Type Theory with Records (TTR) satisfies these requirements.

    Compositional lexical networks: a case study of the English spatial adjectives

    Most words cannot be given a single precise definition, but instead consist of multiple senses related to each other like members of a family. In cognitive approaches to semantics, this kind of category is described by a lexical network, a diagram in which nodes represent senses and arrows represent sense connections. However, lexical network theory is not compositional: it does not explain how lexical networks are combined together to yield the meanings of phrases and sentences. The aim of this thesis is to develop lexical network theory in a formal, compositional setting. I argue that a traditional approach to formal semantics based on the simply-typed lambda calculus is not rich enough to implement lexical networks because it is unable to type the arrows which link word senses together. Instead, I propose replacing simple type theory with Martin-Löf Dependent Type Theory, and show how this allows for a fully compositional implementation of lexical networks. The resulting theory is applied to the description of the English spatial adjectives - high, low, tall, long, short, deep, shallow, thick and thin. These adjectives are an ideal starting point for studying the interaction between lexical and compositional semantics, since they have been studied extensively from both points of view. I illustrate how a compositional theory of lexical networks can provide an interface by which the insights of cognitive semantics can be imported into formal semantics, and vice versa.
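The basic data structure - nodes as senses, arrows as labelled sense connections - can be sketched in a few lines. The senses and links below are a made-up miniature network for "deep", not taken from the thesis, and the sketch does not capture the dependent typing of arrows that the thesis argues for:

```python
# A minimal, hypothetical lexical network: nodes are word senses,
# directed arrows are sense connections labelled with the kind of
# extension (e.g. metaphor) linking them.

senses = {
    "deep_spatial":   "extending far down from a surface (a deep well)",
    "deep_sound":     "low in pitch (a deep voice)",
    "deep_emotional": "intense, profound (deep sorrow)",
}

# (source sense, target sense, connection type)
arrows = [
    ("deep_spatial", "deep_sound",     "metaphor"),
    ("deep_spatial", "deep_emotional", "metaphor"),
]

def extensions_of(sense):
    """Senses reachable from `sense` by one labelled arrow."""
    return [(tgt, kind) for src, tgt, kind in arrows if src == sense]

print(extensions_of("deep_spatial"))
```

In simple type theory all of these senses would collapse into one type, which is why the arrows (connections between particular senses) cannot themselves be typed; dependent types let the arrow's type mention the senses it connects.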

    Learning to Interpret and Apply Multimodal Descriptions

    Han T. Learning to Interpret and Apply Multimodal Descriptions. Bielefeld: Universität Bielefeld; 2018.

    Enabling computers to understand natural human communication is a goal researchers in artificial intelligence have long aspired to. Since the concept demonstration of “Put-That-There” in the 1980s, significant achievements have been made in developing multimodal interfaces that can process human communication such as speech, eye gaze, facial emotion, co-verbal hand gestures and pen input. State-of-the-art multimodal interfaces are able to process pointing gestures, symbolic gestures with conventional meanings, as well as gesture commands with pre-defined meanings (e.g., circling for “select”). However, in natural communication, co-verbal gestures/pen input rarely convey meanings via conventions or pre-defined rules, but embody meanings relatable to the accompanying speech. For example, in route giving tasks, people often describe landmarks verbally (e.g., two buildings), while demonstrating the relative position with two hands facing each other in space. Interestingly, when the same gesture is accompanied by the utterance a ball, it may indicate the size of the ball. Hence, the interpretation of such co-verbal hand gestures largely depends on the accompanying verbal content. Similarly, when describing objects, while verbal utterances are most convenient for describing colour and category (e.g., a brown elephant), hand-drawn sketches are often deployed to convey iconic information such as the exact shape of the elephant’s trunk, which is typically difficult to encode in language. This dissertation concerns the task of learning to interpret multimodal descriptions composed of verbal utterances and hand gestures/sketches, and to apply the corresponding interpretations to tasks such as image retrieval.
Specifically, we aim to address the following research questions: 1) For co-verbal gestures that embody meanings relatable to the accompanying verbal content, how can we use natural language information to interpret the semantics of such co-verbal gestures, e.g., does a gesture indicate relative position or size? 2) As an integral system of communication, speech and gestures not only bear close semantic relations, but also close temporal relations. To what degree and on which dimensions can hand gestures benefit the task of interpreting multimodal descriptions? 3) While it is obvious that iconic information in hand-drawn sketches enriches verbal content in object descriptions, how do we model the joint contributions of such multimodal descriptions, and to what degree can verbal descriptions compensate for reduced iconic details in hand-drawn sketches? To address the above questions, we first introduce three multimodal description corpora: a spatial description corpus composed of natural language and placing gestures (also referred to as abstract deictics), a multimodal object description corpus composed of natural language and hand-drawn sketches, and an existing corpus, the Bielefeld Speech and Gesture Alignment Corpus (SAGA). We frame the problem of learning gesture semantics as a multi-label classification task using natural language information and hand gesture features. We conducted an experiment with the SAGA corpus. The results show that natural language is informative for the interpretation of hand gestures. Furthermore, we describe a system that models the interpretation and application of spatial descriptions, and explored three variants of representation methods for the verbal content. When representing the verbal content in the descriptions with a set of automatically learned symbols, the system’s performance is on par with representations with manually defined symbols (e.g., pre-defined object properties).
We show that abstract deictic gestures not only lead to better understanding of spatial descriptions, but also result in earlier correct decisions by the system, which can be used to trigger immediate reactions in dialogue systems. Finally, we investigate the interplay of semantics between symbolic (natural language) and iconic (sketches) modes in multimodal object descriptions, where natural language and sketches jointly contribute to the communication. We model the meanings of natural language and sketches with two existing models and combine the meanings from both modalities with a late fusion approach. The results show that even adding reduced sketches (30% of full sketches) can help in the retrieval task. Moreover, in the current setup, natural language descriptions can compensate for around 30% of the reduced sketch detail.
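The late fusion scheme mentioned above can be sketched as follows. The per-modality scores, the image names, and the equal interpolation weight are all assumed for illustration; the dissertation's actual scoring models are not reproduced here:

```python
# Sketch of late fusion for multimodal retrieval: each modality scores
# the candidate images independently, and the scores are only combined
# afterwards, here by linear interpolation.

language_scores = {"img1": 0.9, "img2": 0.4, "img3": 0.2}  # assumed
sketch_scores   = {"img1": 0.4, "img2": 0.8, "img3": 0.1}  # assumed

def late_fusion(lang, sketch, alpha=0.5):
    """Interpolate independently computed modality scores."""
    return {img: alpha * lang[img] + (1 - alpha) * sketch[img]
            for img in lang}

fused = late_fusion(language_scores, sketch_scores)
ranking = sorted(fused, key=fused.get, reverse=True)
print(ranking)  # → ['img1', 'img2', 'img3']
```

Because fusion happens after each modality is scored on its own, a degraded modality (e.g., a reduced sketch) lowers only its own contribution, and the other modality can partially compensate - the effect the abstract reports.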

    Concepts, Frames and Cascades in Semantics, Cognition and Ontology

    This open access book presents novel theoretical, empirical and experimental work exploring the nature of mental representations that support natural language production and understanding, and other manifestations of cognition. One fundamental question raised in the text is whether the requisite knowledge structures can be adequately modeled by means of a uniform representational format, and if so, what exactly is its nature. Frames are a key topic covered, having had a strong impact on the exploration of knowledge representations in artificial intelligence, psychology and linguistics; cascades are a novel development in frame theory. Other key subject areas explored are: concepts and categorization, the experimental investigation of mental representation, as well as cognitive analysis in semantics. This book is of interest to students, researchers, and professionals working on cognition in the fields of linguistics, philosophy, and psychology.

    Exploring sexual exclusivity among individual members of same-sex, male couples in long-term relationships

    Bibliography: leaves 235-261.

    Queer studies have not adequately considered gay men seeking sexual exclusivity within long-term relationships; the emphasis has instead been on understanding evolving queer norms. Homonormativity has informed sexual permissiveness, and accordingly, in contrast to gay men seeking sexual exclusivity, gay male couples have tended to use relationship agreements to stipulate guidelines for extradyadic sex. This study was inspired by my inability, as a counsellor of gay men seeking sexual exclusivity, to provide them with credible insights to better understand their goals. Representing an initial step in generating practical knowledge, it was anticipated that my counselling clients could benefit from an exploration of lived experiences rather than having to rely on theoretical inferences and opinions. “How” and “why” participants maintained sexual exclusivity were the main targets of discovery. Eleven gay Canadian men aged thirty-three and older, in relationships of five years or longer, participated in semi-structured interviews in person or via video chat. Using Kleiman’s (2004) protocol for phenomenological analysis, common units of meaning were coded from interview responses, so that distinct subthemes, contributing to six themes, were identified. These findings included content concerning “seeking positive affects,” “avoiding negative affects,” “factors supporting sexual exclusivity,” “threats to sexual exclusivity,” “rigidity in beliefs,” and “decision-making toward sexual exclusivity.” The first two themes integrated innately to form a meta-theme, “emotional optimization.” An essential insight into how participants maintained sexual exclusivity was their awareness of, and restraint in using, sexually tantalizing visual stimuli, which were the primary risk to sexual exclusivity. Suggestions for gay men desiring sexual exclusivity included discontinuing the use of pornography and cybersex.
    Varied implications for prospective research, clinical practice and support groups were delineated.