A Compositional Treatment of Polysemous Arguments in Categorial Grammar
We discuss an extension of the standard logical rules (functional application
and abstraction) in Categorial Grammar (CG), in order to deal with some
specific cases of polysemy. We borrow from Generative Lexicon theory, which
proposes the mechanism of {\em coercion} alongside a rich nominal lexical
semantic structure called {\em qualia structure}.
In a previous paper we introduced coercion into the framework of {\em
sign-based} Categorial Grammar and investigated its impact on traditional
Fregean compositionality. In this paper we elaborate on this idea, working
towards the introduction of a new semantic dimension. Whereas current
versions of sign-based Categorial Grammar derive only two representations,
a prosodic one (form) and a logical one (modelling), here we also introduce a
more detailed representation of the lexical semantics. This extra knowledge
serves to account for linguistic phenomena like {\em metonymy\/}. Comment: LaTeX file, 19 pages, uses pubsmacs, pubsbib, pubsarticle, leqn
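The coercion mechanism borrowed from Generative Lexicon theory can be illustrated with a minimal sketch. This is not the paper's formalism: the `QUALIA` table and `apply_fn` helper are invented for illustration, and only the telic role is consulted.

```python
# Toy qualia structures: noun -> {qualia role: event predicate}.
# These entries are illustrative, not drawn from any actual lexicon.
QUALIA = {
    "book": {"telic": "read", "agentive": "write"},
    "cigarette": {"telic": "smoke"},
}

def apply_fn(verb, selected_type, noun):
    """Functional application with coercion: if the verb selects an
    event but the noun denotes an entity, reinterpret the noun via its
    telic quale; otherwise apply the verb directly."""
    if selected_type == "event" and noun in QUALIA:
        event = QUALIA[noun]["telic"]
        return f"{verb}({event}({noun}))"   # coerced reading
    return f"{verb}({noun})"                # plain functional application

print(apply_fn("begin", "event", "book"))       # begin(read(book))
print(apply_fn("enjoy", "event", "cigarette"))  # enjoy(smoke(cigarette))
```

The point of the sketch is only that coercion is a type-driven repair step inserted between the verb and its argument, leaving ordinary functional application untouched when the types already match.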
Ontology and Formal Semantics - Integration Overdue
In this note we suggest that difficulties encountered in natural language semantics are, for the most part, due to the use of mere symbol manipulation systems that are devoid of any content. In such systems there is hardly any link with our common-sense view of the world, and it is quite difficult to envision how one can formally account for the considerable amount of content that is often implicit, but almost never explicitly stated, in our everyday discourse.
The solution, in our opinion, is a compositional semantics grounded in an ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. In the compositional logic we envision there are ontological (or first-intension) concepts and logical (or second-intension) concepts, where the ontological concepts include not only Davidsonian events but other abstract objects as well (e.g., states, processes, properties, activities, attributes, etc.).
It will be demonstrated here that in such a framework, a number of challenges in the semantics of natural language (e.g., metonymy, intensionality, metaphor, etc.) can be properly and uniformly addressed.
Exploring Metaphorical Senses and Word Representations for Identifying Metonyms
A metonym is a word with a figurative meaning, similar to a metaphor. Because
metonyms are closely related to metaphors, we apply features that are used
successfully for metaphor recognition to the task of detecting metonyms. On the
ACL SemEval 2007 Task 8 data with gold standard metonym annotations, our system
achieved 86.45% accuracy on the location metonyms. Our code can be found on
GitHub. Comment: 9 pages, 8 pages content
Processing Metonymy: a Domain-Model Heuristic Graph Traversal Approach
We address here the treatment of metonymic expressions from a knowledge
representation perspective, that is, in the context of a text understanding
system which aims to build a conceptual representation from texts according to
a domain model expressed in a knowledge representation formalism.
We focus in this paper on the part of the semantic analyser which deals with
semantic composition. We explain how we use the domain model to handle metonymy
dynamically, and more generally, to underlie semantic composition, using the
knowledge descriptions attached to each concept of our ontology as a kind of
concept-level, multiple-role qualia structure.
We rely for this on a heuristic path search algorithm that exploits the
graph-based aspects of the conceptual graphs formalism. The methods described
have been implemented and applied to French texts in the medical domain. Comment: 6 pages, LaTeX, one encapsulated PostScript figure, uses colap.sty
(included) and epsf.sty (available from the cmp-lg macro library). To appear
in Coling-9
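The general idea of resolving a metonymic link by searching the domain model can be sketched as a cheapest-path search over a small concept graph. Everything here is hypothetical: the `GRAPH` relations, weights, and the medical-flavoured concepts are invented for illustration, not taken from the paper's ontology.

```python
import heapq

# Toy domain model: concept -> [(relation, target concept, cost)].
# The relation path found between two concepts stands in for the
# implicit link a metonymic expression leaves unexpressed.
GRAPH = {
    "hospital":  [("has-part", "ward", 1), ("employs", "staff", 1)],
    "staff":     [("performs", "treatment", 1)],
    "ward":      [("hosts", "patient", 2)],
    "treatment": [],
    "patient":   [],
}

def cheapest_path(source, target):
    """Dijkstra-style heuristic search returning (cost, relation path)
    for the lowest-cost chain of relations from source to target."""
    queue = [(0, source, [])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for rel, nxt, w in GRAPH.get(node, []):
            heapq.heappush(queue, (cost + w, nxt, path + [(rel, nxt)]))
    return None

# e.g. "the hospital decided to operate": the institution stands in
# for the staff who perform the treatment.
print(cheapest_path("hospital", "treatment"))
# (2, [('employs', 'staff'), ('performs', 'treatment')])
```

The cheapest relation chain plays the role the abstract assigns to the knowledge descriptions attached to concepts: semantic composition succeeds once some sufficiently low-cost path connects the two concepts.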
Polysemy and word meaning: an account of lexical meaning for different kinds of content words
There is an ongoing debate about the meaning of lexical words, i.e., words that contribute content to the meaning of sentences. This debate has coincided with a renewed interest in the study of polysemy, which has taken place mainly in psycholinguistics. There is already a fruitful interbreeding between these two lines of research: the theoretical study of lexical word meaning, on the one hand, and the models of polysemy that psycholinguists propose, on the other. In this paper I aim to deepen this ongoing interbreeding: I examine what is said about polysemy, particularly in the psycholinguistics literature, and then show how what we seem to know about the representation and storage of polysemous senses affects the models we have of lexical word meaning.
On the nature of the lexicon: the status of rich lexical meanings
The main goal of this paper is to show that there are many phenomena that pertain to the construction of truth-conditional compounds, that follow characteristic patterns, and whose explanation requires appealing to knowledge structures organized in specific ways. We review a number of phenomena, ranging from non-homogeneous modification and privative modification to polysemy and co-predication, which indicate that knowledge structures do play a role in obtaining truth-conditions. After that, we show that several extant accounts that invoke rich lexical meanings to explain such phenomena face problems related to inflexibility and lack of predictive power.
We review different ways in which one might react to such problems as regards lexical meanings: go richer, go moderately richer, go thinner, and go moderately thinner. On the face of it, moderate positions look unstable, given the apparent lack of a clear cutoff point between the semantic and the conceptual, but a very thin view and a very rich view may also turn out to be indistinguishable in the long run. As far as we can see, the most pressing open questions concern this last issue: can there be a principled semantic/world knowledge distinction? Where could it be drawn: at some upper level (e.g. enriched qualia structures) or at some basic level (e.g. constraints)? How do parsimony considerations affect these two different approaches? A thin meanings approach postulates intermediate representations whose role in the interpretive process is not clear, while a rich meanings approach to lexical meaning seems to duplicate representations: the same representations that are stored in the lexicon would form part of conceptual representations. Both types of parsimony problems would be solved by assuming a direct relation between word forms and (parts of) conceptual or world knowledge, leading to a view that has been attributed to Chomsky (e.g. by Katz 1980) in which there is just syntax and encyclopedic knowledge.
Fitting, Not Clashing! A Distributional Semantic Model of Logical Metonymy
Logical metonymy interpretation (e.g. begin the book -> writing) has received wide attention in linguistics. Experimental results have shown higher processing costs for metonymic conditions compared with non-metonymic ones (read the book). According to a widely held interpretation, it is the type clash between the event-selecting verb and the entity-denoting object (begin the book) that triggers coercion mechanisms and leads to additional processing effort. We propose an alternative explanation and argue that the extra processing effort is an effect of thematic fit. This is a more economical hypothesis that does not need to postulate a separate type clash mechanism: entity-denoting objects simply have a low fit as objects of event-selecting verbs. We test linguistic datasets from psycholinguistic experiments and find that a structured distributional model of thematic fit, which does not encode any explicit argument type information, is able to replicate all significant experimental findings. This result provides evidence for a graded account of coercion phenomena in which thematic fit accounts for both the trigger of the coercion and the retrieval of the covert event.
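The core measure behind such a model can be sketched as similarity between a candidate object's vector and a prototype built from a verb's typical objects. The tiny vector space below is invented for illustration; real models estimate these vectors from corpus co-occurrence data.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Made-up 3-dimensional distributional vectors for a few words.
VEC = {
    "war":     [0.9, 0.1, 0.0],
    "career":  [0.8, 0.2, 0.1],
    "book":    [0.1, 0.9, 0.3],
    "reading": [0.2, 0.8, 0.4],
}

def thematic_fit(candidate, typical_objects):
    """Fit of a candidate filler = similarity to the centroid of the
    verb's typical objects (its prototype)."""
    vecs = [VEC[w] for w in typical_objects]
    proto = [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]
    return cosine(VEC[candidate], proto)

# Suppose "begin" typically takes objects like "war" and "career":
# the entity "book" fits poorly, the covert event "reading" fits better.
print(thematic_fit("book", ["war", "career"]))
print(thematic_fit("reading", ["war", "career"]))
```

On this toy data the event candidate scores higher than the entity candidate, which mirrors the graded account: no type clash mechanism is needed, only a continuous fit score.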