Persistence pays off: Paying Attention to What the LSTM Gating Mechanism Persists
Language Models (LMs) are important components in several Natural Language
Processing systems. Recurrent Neural Network LMs composed of LSTM units,
especially those augmented with an external memory, have achieved
state-of-the-art results. However, these models still struggle to process long
sequences which are more likely to contain long-distance dependencies because
of information fading and a bias towards more recent information. In this paper,
we demonstrate an effective mechanism for retrieving information in a memory
augmented LSTM LM based on attending to information in memory in proportion to
the number of timesteps for which the LSTM gating mechanism persisted the information.
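The retrieval idea in this abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: a memory slot counts as "persisted" at a timestep when its forget-gate activation stays above a threshold (the 0.5 cutoff is an assumption), and attention weights are made directly proportional to those persistence counts.

```python
import numpy as np

def persistence_counts(forget_gates, threshold=0.5):
    """forget_gates: (timesteps, slots) array of forget-gate activations.
    Returns, per slot, the number of timesteps the gate kept its value
    above the (assumed) persistence threshold."""
    return (np.asarray(forget_gates) > threshold).sum(axis=0)

def persistence_attention(forget_gates, memory):
    """Read from memory with weights proportional to persistence counts."""
    counts = persistence_counts(forget_gates).astype(float)
    weights = counts / counts.sum()          # proportional, not a softmax
    return weights @ np.asarray(memory)      # weighted read over slots

gates = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.9]]   # 3 timesteps, 2 memory slots
memory = [[1.0, 0.0], [0.0, 1.0]]              # 2 slots, 2-dim contents
read = persistence_attention(gates, memory)    # slot 0 persisted 3x, slot 1 once
```

Here slot 0 receives weight 3/4 and slot 1 weight 1/4, so the read vector is dominated by the longer-persisted slot.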
Exploiting visual salience for the generation of referring expressions
In this paper we present a novel approach to generating
referring expressions (GRE) that is tailored to a model of the visual context the user is attending to. The approach
integrates a new computational model of visual salience in simulated 3-D environments with Dale and Reiter's (1995) Incremental Algorithm. The advantages of our GRE framework are: (1) the context set used by the GRE algorithm is dynamically computed by the visual saliency algorithm as a user navigates through a simulation; (2) the integration of visual salience into the generation process means that in some instances underspecified but sufficiently detailed descriptions of the target object are generated that are shorter than those generated by GRE algorithms which focus purely on adjectival and type attributes; (3) the integration of visual salience into the generation process means that our GRE algorithm will in some instances succeed in generating a description of the target object in situations where GRE algorithms which focus purely on adjectival and type attributes fail.
A false colouring real-time visual saliency algorithm for reference resolution in simulated 3-D environments
In this paper we present a novel false colouring visual
saliency algorithm and illustrate how it is used in the Situated Language Interpreter system to resolve natural language references.
A computational model of the referential semantics of projective prepositions
In this paper we present a framework for interpreting locative expressions containing the prepositions in front of and behind. These prepositions have different semantics in the viewer-centred and intrinsic frames of reference (Vandeloise, 1991). We define a model of their semantics in each frame of reference. The basis of these models is a novel parameterized continuum function that creates a 3-D spatial template. In the intrinsic frame of reference the origin used by the continuum function is assumed to be known a priori and object occlusion does not impact on the applicability rating of a point in the spatial template. In the viewer-centred frame the location of the spatial template’s origin is dependent on the user’s perception of the landmark at the time of the utterance and object
occlusion is integrated into the model. Where there is an ambiguity with respect to the intended frame of reference, we define an algorithm for merging the spatial templates from the competing frames of reference, based on psycholinguistic observations in (Carlson-Radvansky, 1997).
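The merging step described above can be sketched point by point, treating each spatial template as a map from locations to applicability ratings. The weighted-average combination below is only one plausible choice for illustration, not the paper's actual merge function.

```python
def merge_templates(viewer_template, intrinsic_template, w_viewer=0.5):
    """Merge two spatial templates from competing frames of reference.
    Each template maps a point to an applicability rating in [0, 1].
    The averaging scheme is an assumption for illustration."""
    merged = {}
    for point in viewer_template:
        merged[point] = (w_viewer * viewer_template[point]
                         + (1 - w_viewer) * intrinsic_template.get(point, 0.0))
    return merged

viewer = {(0, -1): 0.9, (1, -1): 0.6}     # ratings in the viewer-centred frame
intrinsic = {(0, -1): 0.3, (1, -1): 0.8}  # ratings in the intrinsic frame
merged = merge_templates(viewer, intrinsic)
```

With equal weights, a point rated 0.9 and 0.3 in the two frames receives a merged rating of 0.6, so neither frame's judgement is discarded outright.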
A perceptually based computational framework for the interpretation of spatial language
The goal of this work is to develop a semantic framework to underpin the development of natural language (NL) interfaces for 3 Dimensional (3-D) simulated environments. The thesis of this work is that the computational interpretation of language in such environments should be based on a framework that integrates a model of visual perception with a model of discourse.
When interacting with a 3-D environment, users have two main goals: the first is to move around in the simulated environment and the second is to manipulate objects in the environment. In order to interact with an object through language, users need to be able to refer to the object. There are many different types of referring expressions, including definite descriptions, pronominals, demonstratives, one-anaphora, other-expressions, and locative expressions. Some of these expressions are anaphoric (e.g., pronominals, one-anaphora, other-expressions). In order to computationally interpret these, it is necessary to develop, and implement, a discourse model. Interpreting locative expressions requires a semantic model for prepositions and a mechanism for selecting the user's intended frame of reference. Finally, many of these expressions presuppose a visual context. In order to interpret them, this context must be modelled and utilised.
This thesis develops a perceptually grounded discourse-based computational model of reference resolution capable of handling anaphoric and locative expressions. There are three novel contributions in this framework: a visual saliency algorithm, a semantic model for locative expressions containing projective prepositions, and a discourse model.
The visual saliency algorithm grades the prominence of the objects in the user's view volume at each frame. This algorithm is based on the assumption that objects which are larger and more central to the user's view are more prominent than objects which are smaller or on the periphery of their view. The resulting saliency ratings for each frame are stored in a data structure linked to the NL system’s context model. This approach gives the system a visual memory that may be drawn upon in order to resolve references.
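The grading described above can be sketched with a per-frame ID buffer (a false-colour style render in which each pixel holds the ID of the visible object, 0 meaning background). The particular combination of size and centrality below is an assumed weighting for illustration, not the thesis's formula.

```python
import numpy as np

def saliency_ratings(id_buffer):
    """Grade each visible object by on-screen size and centrality.
    id_buffer: 2-D integer array of visible-object IDs (0 = background)."""
    h, w = id_buffer.shape
    centre = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    max_dist = np.linalg.norm(centre)
    ratings = {}
    for obj_id in np.unique(id_buffer):
        if obj_id == 0:
            continue
        ys, xs = np.nonzero(id_buffer == obj_id)
        size = len(ys) / (h * w)                       # fraction of the view
        centroid = np.array([ys.mean(), xs.mean()])
        centrality = 1.0 - np.linalg.norm(centroid - centre) / max_dist
        ratings[int(obj_id)] = size * centrality       # larger & more central => higher
    return ratings

buf = np.zeros((9, 9), dtype=int)
buf[3:6, 3:6] = 1        # object 1: 3x3 block at the centre of the view
buf[0:2, 0:2] = 2        # object 2: small block in a corner
r = saliency_ratings(buf)
```

In this frame the central 3x3 object outranks the small corner object, matching the assumption that larger, more central objects are more prominent.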
The semantic model for locative expressions defines a computational algorithm for interpreting locatives that contain a projective preposition, specifically the prepositions in front of, behind, to the right of, and to the left of. There are several novel components within this model. First, there is a procedure for handling the issue of frame of reference selection. Second, there is an algorithm for modelling the spatial templates of projective prepositions. This algorithm integrates a topological model with visual perceptual cues. This approach allows us to correctly define the regions described by a projective preposition in the viewer-centred frame of reference in situations that previous models (Yamada 1993, Gapp 1994a, Olivier et al. 1994, Fuhr et al. 1998) have found problematic. Third, the abstraction used to represent the candidate trajectors of a locative expression ensures that each candidate is ascribed the highest rating possible. This approach guarantees that the candidate trajector that occupies the location with the highest applicability in the preposition's spatial template is selected as the locative's referent.
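The selection step described above can be sketched as rating each candidate against the preposition's spatial template and choosing the candidate with the highest applicability. The cosine-based fall-off used here (full applicability on the preposition's axis, zero at and beyond 90 degrees) is a common simplification, not the thesis's exact template model.

```python
import math

def applicability(landmark, point, axis):
    """Rate a point against a projective preposition's directional axis.
    The cosine fall-off is an assumed, simplified spatial template."""
    dx, dy = point[0] - landmark[0], point[1] - landmark[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return 0.0
    cos = (dx * axis[0] + dy * axis[1]) / norm
    return max(cos, 0.0)     # 1.0 on the axis, 0.0 at/beyond 90 degrees

def resolve_locative(landmark, candidates, axis):
    """Select the candidate trajector with the highest applicability."""
    return max(candidates,
               key=lambda name: applicability(landmark, candidates[name], axis))

landmark = (0.0, 0.0)
behind_axis = (0.0, -1.0)                        # assumed "behind" direction
candidates = {"ball": (0.2, -2.0), "box": (2.0, 0.5)}
referent = resolve_locative(landmark, candidates, behind_axis)
```

Here "ball" lies almost directly behind the landmark and is selected; "box" sits off to the side and scores zero.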
The context model extends the work of Salmon-Alt and Romary (2001) by integrating the perceptual information created by the visual saliency algorithm with a model of discourse. Moreover, the context model defines an interpretation process that provides an explicit account of how the visual and linguistic information sources are utilised when attributing a referent to a nominal expression. It is important to note that the context model provides the set of candidate referents and candidate trajectors for the locative expression interpretation algorithm. These are restricted to those objects that the user has seen.
The thesis shows that visual salience provides a qualitative control in NL interpretation for 3-D simulated environments and captures interesting and significant effects such as graded judgments. Moreover, it provides an account of how object occlusion impacts on the semantics of projective prepositions that are canonically aligned with the front-back axis in the viewer-centred frame of reference.
Mind the Gap: Situated Spatial Language a Case-Study in Connecting Perception and Language
This abstract reviews the literature on computational models of spatial semantics and the potential of deep learning models as a useful approach to this challenge.
Fundamentals of Machine Learning for Neural Machine Translation
This paper presents a short introduction to neural networks and how they are used for machine translation, and concludes with some discussion of the current research challenges being addressed by neural machine translation (NMT) research. The primary goal of this paper is to give a no-tears introduction to NMT to readers who do not have a computer science or mathematical background. The secondary goal is to provide the reader with a deep enough understanding of NMT that they can appreciate the strengths and weaknesses of the technology. The paper starts with a brief introduction to standard feed-forward neural networks (what they are, how they work, and how they are trained). This is followed by an introduction to word embeddings (vector representations of words), and then we introduce recurrent neural networks. Once these fundamentals have been introduced, we focus on the components of a standard neural machine translation architecture, namely: encoder networks, decoder language models, and the encoder-decoder architecture.
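The encoder-decoder shape named above can be sketched in a few lines of numpy: an RNN encoder compresses the source sentence into a single vector, and a decoder RNN conditioned on that vector produces target-word probabilities one step at a time. All sizes, weights, and token IDs here are illustrative toy values, not a real NMT model.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, hid_dim, vocab = 4, 5, 7
E = rng.normal(size=(vocab, emb_dim))          # word embeddings
W_enc = rng.normal(size=(hid_dim, hid_dim))    # encoder recurrence weights
U_enc = rng.normal(size=(hid_dim, emb_dim))    # encoder input weights
W_dec = rng.normal(size=(hid_dim, hid_dim))    # decoder recurrence weights
U_dec = rng.normal(size=(hid_dim, emb_dim))    # decoder input weights
V = rng.normal(size=(vocab, hid_dim))          # hidden state -> vocab scores

def encode(source_ids):
    """Read the source sentence left to right into one sentence vector."""
    h = np.zeros(hid_dim)
    for i in source_ids:
        h = np.tanh(W_enc @ h + U_enc @ E[i])
    return h

def decode_step(h, prev_id):
    """One decoder step: update the state, emit a target-word distribution."""
    h = np.tanh(W_dec @ h + U_dec @ E[prev_id])
    scores = V @ h
    probs = np.exp(scores - scores.max())      # numerically stable softmax
    return h, probs / probs.sum()

h = encode([1, 2, 3])                          # toy source token IDs
h, p = decode_step(h, 0)                       # 0 = assumed start-of-sentence id
```

The decoder's output `p` is a probability distribution over the target vocabulary; in a trained system the next word would be sampled or taken by argmax and fed back in as `prev_id` for the following step.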