Is Structure Necessary for Modeling Argument Expectations in Distributional Semantics?
Despite the number of NLP studies dedicated to thematic fit estimation,
little attention has been paid to the related task of composing and updating
verb argument expectations. The few exceptions have mostly modeled this
phenomenon with structured distributional models, implicitly assuming a
similarly structured representation of events. Recent experimental evidence,
however, suggests that the human processing system could also exploit an
unstructured "bag-of-arguments" type of event representation to predict
upcoming input. In this paper, we re-implement a traditional structured model
and adapt it to compare the different hypotheses concerning the degree of
structure in our event knowledge, evaluating their relative performance in the
task of argument expectation update.
Comment: conference paper, IWC
From Frequency to Meaning: Vector Space Models of Semantics
Computers understand very little of the meaning of human language. This
profoundly limits our ability to give instructions to computers, the ability of
computers to explain their actions to us, and the ability of computers to
analyse and process text. Vector space models (VSMs) of semantics are beginning
to address these limits. This paper surveys the use of VSMs for semantic
processing of text. We organize the literature on VSMs according to the
structure of the matrix in a VSM. There are currently three broad classes of
VSMs, based on term-document, word-context, and pair-pattern matrices, yielding
three classes of applications. We survey a broad range of applications in these
three categories and we take a detailed look at a specific open source project
in each category. Our goal in this survey is to show the breadth of
applications of VSMs for semantics, to provide a new perspective on VSMs for
those who are already familiar with the area, and to provide pointers into the
literature for those who are less familiar with the field.
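As a concrete illustration of the first of the three matrix classes the survey names, here is a minimal sketch (my own toy example, not code from the survey or its open source projects) of a term-document count matrix, with document similarity computed as the cosine of matrix columns:

```python
# Toy term-document matrix: rows = terms, columns = documents.
from collections import Counter
import math

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "vector space models of semantics",
]

# vocabulary = all terms observed across the documents
vocab = sorted({t for d in docs for t in d.split()})

# matrix[i][j] = count of term vocab[i] in document j
matrix = [[Counter(d.split())[t] for d in docs] for t in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def col(j):
    """Column j of the matrix = vector for document j."""
    return [row[j] for row in matrix]

print(cosine(col(0), col(1)))  # docs 0 and 1 share "the", "sat", "on"
print(cosine(col(0), col(2)))  # no shared terms
```

The same construction with a word-context or pair-pattern matrix yields the other two application classes the survey describes; only what the rows and columns index changes.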
A distributional model of semantic context effects in lexical processing
One of the most robust findings of experimental psycholinguistics is that the context in which a word is presented influences the effort involved in processing that word. We present a novel model of contextual facilitation based on word co-occurrence probability distributions, and empirically validate the model through simulation of three representative types of context manipulation: single word priming, multiple-priming and contextual constraint. In our simulations the effects of semantic context are modeled using general-purpose techniques and representations from multivariate statistics, augmented with simple assumptions reflecting the inherently incremental nature of speech understanding. The contribution of our study is to show that special-purpose mechanisms are not necessary in order to capture the general pattern of the experimental results, and that a range of semantic context effects can be subsumed under the same principled account.
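The core idea, facilitation driven by co-occurrence probability rather than a special-purpose priming mechanism, can be sketched as follows. This is my own toy illustration, not the paper's model: the corpus, the word pairs, and the estimator `p_target_given_prime` are all assumptions for demonstration.

```python
# Priming as relative co-occurrence frequency: P(target | prime)
# estimated from a toy corpus of (prime, target) word pairs.
from collections import Counter

corpus = [
    ("doctor", "nurse"), ("doctor", "hospital"), ("doctor", "nurse"),
    ("bread", "butter"), ("bread", "knife"),
]

pair_counts = Counter(corpus)
prime_counts = Counter(prime for prime, _ in corpus)

def p_target_given_prime(target, prime):
    """Co-occurrence probability of target given prime."""
    if prime_counts[prime] == 0:
        return 0.0
    return pair_counts[(prime, target)] / prime_counts[prime]

# a related prime assigns the target higher probability (less
# processing effort) than an unrelated one
print(p_target_given_prime("nurse", "doctor"))
print(p_target_given_prime("nurse", "bread"))
```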