Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures within which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them. Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table
Definiteness agreement and the pragmatics of reference in the Maltese NP
Maltese noun phrases exhibit a form of ‘definiteness agreement’ between head
noun and modifier. When the noun is definite, an adjectival modifier is often
overtly marked as definite as well. However, the status of this phenomenon
as a case of true morphosyntactic agreement has been disputed, given its
apparent optionality. Not all definite NPs have modifiers which are overtly
marked as definite. Some authors have argued that definiteness marking on
the adjective is in fact pragmatically licensed. The present paper presents a
corpus-based study of the distribution of adjectives with and without definite
marking, and then tests the pragmatic licensing claim through a production
study. Speakers were found to be more likely to use definite adjectives in
referential noun phrases when the adjectives had a specifically contrastive
function. This result is discussed in the context of both theoretical and
psycholinguistic work on the pragmatics of referentiality.
Possessives and beyond : semantics and syntax
Recent work on the semantics of possessives has seen a resurgence of interest in the substantive nature and provenance of the possessive relation (e.g. Barker 1995; Partee and Borschev 1998, 2000a, 2000b; Borschev and Partee 2001; Vikner and Jensen 2002). A more systematic account of these relations is made possible by developments in lexical semantic theories, which have given rise to a weakly polymorphic view of the syntax-lexical semantics interface, whereby lexical items are underspecified to some degree and dependent on the selectional properties of other elements in their immediate syntactic environment (e.g. Pustejovsky 1995, 1998). While various approaches subscribe to some version of these hypotheses, there are important theoretical differences between them with respect to the domain in which knowledge is considered to lie: whether it is encoded in a sort system underlying the lexicon, or whether it is construed as ‘world knowledge’ (cf. Dölling 1995, 1997).
This paper endorses the view that the lexicon should be imputed with a limited amount of knowledge, organised as a sort inheritance hierarchy (Pustejovsky 1995). It attempts to extend the approach to possessive relations proposed by Jensen and Vikner (1994, 2004; Vikner and Jensen, 2002), based on the Generative Lexicon, to a particular class of possessive constructions. Such constructions, exemplified by expressions like a women’s magazine, are often ambiguous between a regular, relational interpretation and an alternative ‘modificational’ interpretation. Anticipating the outcome of the analysis, the latter will be referred to as Generic Possessives (GPs). Focusing on data from Maltese, I will show that the possessor NP in these constructions is kind-denoting. I will argue that the GP expresses a relation holding between the entity denoted by the head noun and putative realizations of the kind denoted by the possessor NP.
What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator?
In neural image captioning systems, a recurrent neural network (RNN) is
typically viewed as the primary `generation' component. This view suggests that
the image features should be `injected' into the RNN. This is in fact the
dominant view in the literature. Alternatively, the RNN can instead be viewed
as only encoding the previously generated words. This view suggests that the
RNN should only be used to encode linguistic features and that only the final
representation should be `merged' with the image features at a later stage.
This paper compares these two architectures. We find that, in general, late
merging outperforms injection, suggesting that RNNs are better viewed as
encoders, rather than generators. Comment: Appears in: Proceedings of the 10th International Conference on Natural Language Generation (INLG'17)
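The contrast between 'inject' and 'merge' architectures can be sketched in a few lines of NumPy. All dimensions, parameter names, and the toy single-layer RNN below are illustrative assumptions, not the paper's actual model; the point is only that in the merge design the RNN encodes the word sequence alone, and image features enter at the output layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only
vocab_size, embed_dim, hidden_dim, img_dim = 50, 16, 32, 64

# Toy parameters for a single-layer RNN used purely as a linguistic encoder
W_emb = rng.normal(size=(vocab_size, embed_dim))
W_xh = rng.normal(size=(embed_dim, hidden_dim)) * 0.1
W_hh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1

def rnn_encode(token_ids):
    """Encode the previously generated words; no image information
    enters the RNN in the merge architecture."""
    h = np.zeros(hidden_dim)
    for t in token_ids:
        h = np.tanh(W_emb[t] @ W_xh + h @ W_hh)
    return h

# 'Merge': linguistic state and image features are fused only at the
# final prediction layer (late fusion), not inside the RNN.
W_out = rng.normal(size=(hidden_dim + img_dim, vocab_size)) * 0.1

def next_word_logits(token_ids, img_feats):
    h = rnn_encode(token_ids)
    merged = np.concatenate([h, img_feats])  # late merge of modalities
    return merged @ W_out

img = rng.normal(size=img_dim)
logits = next_word_logits([3, 7, 1], img)
print(logits.shape)  # one score per vocabulary word
```

In the competing 'inject' design, `img_feats` would instead be fed into the RNN itself (e.g. as its initial state or as an extra input at each step), so the recurrent component would mix visual and linguistic information rather than acting as a pure linguistic encoder.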
Morphological analysis for the Maltese language : the challenges of a hybrid system
Maltese is a morphologically rich language with a hybrid morphological system which features both concatenative and non-concatenative processes. This paper analyses the impact of this hybridity on the performance of machine learning techniques for morphological labelling and clustering. In particular, we analyse a dataset of morphologically related word clusters to evaluate the difference in results for concatenative and non-concatenative clusters. We also describe research carried out in morphological labelling, with a particular focus on the verb category. Two evaluations were carried out, one using an unseen dataset, and the other using a gold standard dataset which was manually labelled. The gold standard dataset was split into concatenative and non-concatenative subsets to analyse the difference in results between the two morphological systems.
Interpreting Vision and Language Generative Models with Semantic Visual Priors
When applied to image-to-text models, interpretability methods often provide
token-by-token explanations: they compute a visual explanation for each
token of the generated sequence. Those explanations are expensive to compute
and unable to comprehensively explain the model's output. As a result, these
methods often require some sort of approximation that eventually leads to
misleading explanations. We develop a framework based on SHAP that allows for
generating comprehensive, meaningful explanations by leveraging the meaning
representation of the output sequence as a whole. Moreover, by exploiting
semantic priors in the visual backbone, we extract an arbitrary number of
features that allow the efficient computation of Shapley values on large-scale
models, while generating highly meaningful visual explanations. We
demonstrate that our method generates semantically more expressive explanations
than traditional methods at a lower compute cost, and that it can be generalized
to other explainability methods.
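The computational point behind reducing many low-level visual inputs to a handful of semantic features can be illustrated with exact Shapley values, which are only tractable when the feature set is small. The feature names, the additive toy value function, and the helper below are hypothetical illustrations, not the paper's actual method:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all coalitions.

    This costs O(2^n) evaluations of value_fn, which is why grouping
    pixels into a few semantic regions (rather than scoring every
    pixel or patch) makes exact computation feasible."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += w * (value_fn(set(S) | {f}) - value_fn(set(S)))
    return phi

# Toy 'model output' over three semantic regions of an image
# (purely illustrative; a real value_fn would query the captioning model)
contrib = {"sky": 0.1, "dog": 0.7, "grass": 0.2}
value = lambda S: sum(contrib[f] for f in S)

phi = shapley_values(list(contrib), value)
print(phi)  # for an additive game, each Shapley value equals its contribution
```

Because the toy game is additive, each region's Shapley value recovers its contribution exactly; with a real model the interactions between regions make the attributions non-trivial, and the small number of semantic features keeps the enumeration affordable.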