5,140 research outputs found

    Visual Salience and Reference Resolution in Simulated 3-D Environments


    A perceptually based computational framework for the interpretation of spatial language

    The goal of this work is to develop a semantic framework to underpin the development of natural language (NL) interfaces for 3-Dimensional (3-D) simulated environments. The thesis of this work is that the computational interpretation of language in such environments should be based on a framework that integrates a model of visual perception with a model of discourse. When interacting with a 3-D environment, users have two main goals: the first is to move around in the simulated environment, and the second is to manipulate objects in the environment. In order to interact with an object through language, users need to be able to refer to the object. There are many different types of referring expression, including definite descriptions, pronominals, demonstratives, one-anaphora, other-expressions, and locative expressions. Some of these expressions are anaphoric (e.g., pronominals, one-anaphora, other-expressions). In order to computationally interpret these, it is necessary to develop, and implement, a discourse model. Interpreting locative expressions requires a semantic model for prepositions and a mechanism for selecting the user's intended frame of reference. Finally, many of these expressions presuppose a visual context; in order to interpret them, this context must be modelled and utilised.

    This thesis develops a perceptually grounded, discourse-based computational model of reference resolution capable of handling anaphoric and locative expressions. There are three novel contributions in this framework: a visual saliency algorithm, a semantic model for locative expressions containing projective prepositions, and a discourse model. The visual saliency algorithm grades the prominence of the objects in the user's view volume at each frame. This algorithm is based on the assumption that objects which are larger and more central to the user's view are more prominent than objects which are smaller or on the periphery of their view. The resulting saliency ratings for each frame are stored in a data structure linked to the NL system's context model. This approach gives the system a visual memory that may be drawn upon in order to resolve references.

    The semantic model for locative expressions defines a computational algorithm for interpreting locatives that contain a projective preposition, specifically the prepositions in front of, behind, to the right of, and to the left of. There are several novel components within this model. First, there is a procedure for handling the issue of frame of reference selection. Second, there is an algorithm for modelling the spatial templates of projective prepositions. This algorithm integrates a topological model with visual perceptual cues. This approach allows us to correctly define the regions described by projective prepositions in the viewer-centred frame of reference, in situations that previous models (Yamada 1993, Gapp 1994a, Olivier et al. 1994, Fuhr et al. 1998) have found problematic. Third, the abstraction used to represent the candidate trajectors of a locative expression ensures that each candidate is ascribed the highest rating possible. This approach guarantees that the candidate trajector that occupies the location with the highest applicability in the preposition's spatial template is selected as the locative's referent.

    The context model extends the work of Salmon-Alt and Romary (2001) by integrating the perceptual information created by the visual saliency algorithm with a model of discourse. Moreover, the context model defines an interpretation process that provides an explicit account of how the visual and linguistic information sources are utilised when attributing a referent to a nominal expression. It is important to note that the context model provides the set of candidate referents and candidate trajectors for the locative expression interpretation algorithm. These are restricted to those objects that the user has seen. The thesis shows that visual salience provides a qualitative control in NL interpretation for 3-D simulated environments and captures interesting and significant effects such as graded judgments. Moreover, it provides an account of how object occlusion impacts on the semantics of projective prepositions that are canonically aligned with the front-back axis in the viewer-centred frame of reference.
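    The saliency grading described above lends itself to a simple implementation. The sketch below is a minimal illustration of that idea, not the thesis's actual algorithm: the ObjectView and VisualContext structures, the 50/50 weighting of size and centrality, and the per-frame dictionary are all assumptions made for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class ObjectView:
    """What the renderer reports about one object in the current frame (hypothetical)."""
    object_id: str
    screen_area: float    # fraction of the viewport the object covers, in [0, 1]
    centre_offset: float  # normalised distance of its centroid from the view centre, in [0, 1]


@dataclass
class VisualContext:
    """Frame-indexed saliency ratings: the system's 'visual memory'."""
    frames: dict = field(default_factory=dict)  # frame number -> {object_id: rating}


def saliency(view: ObjectView, size_weight: float = 0.5) -> float:
    """Combine size and centrality into one prominence rating in [0, 1].

    Larger objects and objects nearer the view centre score higher; the
    50/50 weighting is an illustrative assumption, not the thesis's parameters.
    """
    centrality = 1.0 - min(view.centre_offset, 1.0)
    return size_weight * view.screen_area + (1.0 - size_weight) * centrality


def record_frame(context: VisualContext, frame: int, views: list) -> None:
    """Store this frame's ratings so later references can be resolved against them."""
    context.frames[frame] = {v.object_id: saliency(v) for v in views}
```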

    A false colouring real time visual saliency algorithm for reference resolution in simulated 3-D environments

    In this paper we present a novel false colouring visual saliency algorithm and illustrate how it is used in the Situated Language Interpreter system to resolve natural language references.
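    A minimal sketch of the false-colouring idea, under assumptions of our own: a renderer draws each object in a unique flat colour (represented here as an integer ID buffer that an off-screen pass might produce), and the buffer is read back to grade each object by how many pixels it occupies and how central those pixels are. The buffer layout, the background convention, and the weighting are illustrative, not the Situated Language Interpreter's.

```python
import numpy as np


def false_colour_saliency(id_buffer: np.ndarray, size_weight: float = 0.5) -> dict:
    """Grade objects from a false-coloured render.

    id_buffer: (H, W) integer array with one unique ID per object and 0 for
    background. Because only visible pixels carry an object's ID, occluded
    parts of an object do not contribute to its rating.
    """
    h, w = id_buffer.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalised distance of every pixel from the screen centre (0 = centre, 1 = corner).
    dist = np.hypot((ys - h / 2) / (h / 2), (xs - w / 2) / (w / 2)) / np.sqrt(2)

    ratings = {}
    for obj_id in np.unique(id_buffer):
        if obj_id == 0:
            continue  # skip background
        mask = id_buffer == obj_id
        size = mask.sum() / id_buffer.size    # share of the viewport covered by visible pixels
        centrality = 1.0 - dist[mask].mean()  # how central those visible pixels are
        ratings[int(obj_id)] = size_weight * size + (1 - size_weight) * centrality
    return ratings
```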

    A computational model of the referential semantics of projective prepositions

    In this paper we present a framework for interpreting locative expressions containing the prepositions in front of and behind. These prepositions have different semantics in the viewer-centred and intrinsic frames of reference (Vandeloise, 1991). We define a model of their semantics in each frame of reference. The basis of these models is a novel parameterized continuum function that creates a 3-D spatial template. In the intrinsic frame of reference the origin used by the continuum function is assumed to be known a priori and object occlusion does not impact on the applicability rating of a point in the spatial template. In the viewer-centred frame the location of the spatial template's origin is dependent on the user's perception of the landmark at the time of the utterance and object occlusion is integrated into the model. Where there is an ambiguity with respect to the intended frame of reference, we define an algorithm for merging the spatial templates from the competing frames of reference, based on psycholinguistic observations in (Carlson-Radvansky, 1997).
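    As an illustration only (the paper's parameterized continuum function is not reproduced here), a projective spatial template can be approximated by rating a point against the preposition's canonical axis and, when the frame of reference is ambiguous, blending the intrinsic and viewer-centred templates. The cosine fall-off and the equal blending weight below are assumptions.

```python
import math


def _normalise(v):
    """Scale a 3-D vector to unit length (leave zero vectors untouched)."""
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / n for c in v)


def applicability(point, origin, axis) -> float:
    """Rate a point in [0, 1]: 1.0 on the preposition's canonical axis,
    falling to 0.0 at 90 degrees from it and beyond (a cosine fall-off)."""
    offset = _normalise(tuple(p - o for p, o in zip(point, origin)))
    axis = _normalise(axis)
    return max(0.0, sum(a * b for a, b in zip(offset, axis)))


def merged_applicability(point, origin, intrinsic_axis, viewer_axis,
                         intrinsic_weight=0.5) -> float:
    """Blend the intrinsic and viewer-centred templates when the intended
    frame of reference is ambiguous (the weighting is an assumption)."""
    return (intrinsic_weight * applicability(point, origin, intrinsic_axis)
            + (1 - intrinsic_weight) * applicability(point, origin, viewer_axis))


# e.g. "in front of the chair": intrinsic_axis is the chair's own front axis,
# viewer_axis points from the chair towards the viewer; the candidate trajector
# with the highest (merged) rating is preferred.
```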

    Towards an Indexical Model of Situated Language Comprehension for Cognitive Agents in Physical Worlds

    We propose a computational model of situated language comprehension based on the Indexical Hypothesis that generates meaning representations by translating amodal linguistic symbols to modal representations of beliefs, knowledge, and experience external to the linguistic system. This Indexical Model incorporates multiple information sources, including perceptions, domain knowledge, and short-term and long-term experiences during comprehension. We show that exploiting diverse information sources can alleviate ambiguities that arise from contextual use of underspecific referring expressions and unexpressed argument alternations of verbs. The model is being used to support linguistic interactions in Rosie, an agent implemented in Soar that learns from instruction. Comment: Advances in Cognitive Systems 3 (2014).
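    The strategy of consulting several information sources during comprehension can be sketched as a simple candidate-filtering loop. The source kinds named in the comments (perception, domain knowledge, recent experience) follow the abstract, but the filter-then-intersect strategy and every identifier below are hypothetical, not the Rosie/Soar implementation.

```python
from typing import Callable, Iterable, Set

# An information source maps (expression, current candidates) to the subset it
# considers compatible with the expression.
Source = Callable[[str, Set[str]], Set[str]]


def resolve(expression: str, candidates: Iterable[str], sources: Iterable[Source]) -> Set[str]:
    """Intersect the candidates accepted by each information source.

    A source that would eliminate every remaining candidate is skipped, so a
    single weak cue cannot empty the candidate set outright.
    """
    remaining = set(candidates)
    for source in sources:
        filtered = source(expression, set(remaining)) & remaining
        if filtered:
            remaining = filtered
    return remaining


# Example (hypothetical) sources: perception keeps only currently visible
# objects; domain knowledge keeps objects whose type matches the head noun;
# short-term experience keeps objects mentioned or manipulated recently.
```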

    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we will look at the different methods presented over the past few decades which attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare different methods used to produce different output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction using a skilled artist, others require simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation with the use of varying forms of reference data. This reference data ranges from still photographs and video to 3D polygonal meshes and even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.

    Beyond Intra-modality: A Survey of Heterogeneous Person Re-identification

    An efficient and effective person re-identification (ReID) system relieves the users from painful and boring video watching and accelerates the process of video analysis. Recently, with the explosive demands of practical applications, a lot of research efforts have been dedicated to heterogeneous person re-identification (Hetero-ReID). In this paper, we provide a comprehensive review of state-of-the-art Hetero-ReID methods that address the challenge of inter-modality discrepancies. According to the application scenario, we classify the methods into four categories -- low-resolution, infrared, sketch, and text. We begin with an introduction of ReID, and make a comparison between Homogeneous ReID (Homo-ReID) and Hetero-ReID tasks. Then, we describe and compare existing datasets for performing evaluations, and survey the models that have been widely employed in Hetero-ReID. We also summarize and compare the representative approaches from two perspectives, i.e., the application scenario and the learning pipeline. We conclude with a discussion of some future research directions. Follow-up updates are available at: https://github.com/lightChaserX/Awesome-Hetero-reID. Comment: Accepted by IJCAI 2020. Project URL: https://github.com/lightChaserX/Awesome-Hetero-reID
