Veridicity
This paper addresses the problem of assessing the veridicity of textual content. Has an event mentioned in the text really occurred? Who is the source of the information? What is the stance of the author of the text? Does the author indicate whether he believes the source? We survey some of the linguistic conventions that indicate the author's commitment, or lack thereof, to the propositions contained in her text. In particular, we discuss phenomena that have been studied as presuppositions or conventional implicatures in the previous literature. Some of these, such as factive and non-factive verbs, have received extensive attention in the past. Others, such as supplemental expressions (e.g. appositives, parentheticals), have received little previous attention, although they are very common and a rich source of textual inferences. A recent study by Christopher Potts classifies supplemental expressions as conventional implicatures. We agree with Potts on the label but not on what it means. In contrast to Potts, we claim that supplemental expressions cannot always be treated as the author's direct commitments and argue that they do not constitute a basis for a distinction between presuppositions and conventional implicatures. We illustrate some cases of conventional implicature, show how they indicate an author's commitment to the truth of his statements, and briefly state the importance of these distinctions for Information Extraction (IE).
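As a purely illustrative sketch (not the authors' system), the factive/non-factive contrast the abstract mentions can be cast as a toy veridicity marker; the verb lists and the `assess_veridicity` function below are hypothetical examples of how such a lexicon-driven rule might look.

```python
# Toy illustration (hypothetical lexicon): factive verbs commit the author to the
# truth of the embedded clause; non-factive verbs leave it open.
FACTIVE = {"know", "knows", "knew", "regret", "regrets", "realize", "realizes"}
NON_FACTIVE = {"believe", "believes", "claim", "claims", "say", "says", "think", "thinks"}

def assess_veridicity(embedding_verb: str) -> str:
    """Return a coarse veridicity label for a proposition embedded under a verb."""
    verb = embedding_verb.lower()
    if verb in FACTIVE:
        return "committed"      # "Ed knows that the deal closed" -> the deal closed
    if verb in NON_FACTIVE:
        return "uncommitted"    # "Ed claims that the deal closed" -> open
    return "unknown"

print(assess_veridicity("knows"))   # committed
print(assess_veridicity("claims"))  # uncommitted
```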
Coreference Resolution Evaluation Based on Descriptive Specificity
This paper introduces a new evaluation method for the coreference resolution task. Considering that coreference resolution is a matter of linking expressions to discourse referents, we set our evaluation criterion in terms of an evaluation of the denotations assigned to the expressions. This criterion requires that the coreference chains identified in one annotation stand in a one-to-one correspondence with the coreference chains in the other. To determine this correspondence, and with a view to keeping closer to what a human interpretation of the coreference chains would be, we take into account the fact that, in a coreference chain, some expressions are more specific to their referent than others. With this observation in mind, we measure the similarity between the chains in one annotation and the chains in the other, and then compute the optimal similarity between the two annotations. Evaluation then consists in checking whether the denotations assigned to the expressions are correct or not. New measures to analyse errors are also introduced. A comparison with other methods is given at the end of the paper.
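As a hedged sketch of the general idea, an optimal one-to-one correspondence between gold chains and system chains can be computed with an assignment solver. The similarity function below is a plain mention-overlap (Dice) score, not the specificity-weighted measure the paper defines, and the function names are hypothetical.

```python
# Hypothetical sketch: align gold coreference chains with system chains
# one-to-one, maximizing total chain similarity.  A chain is a set of mentions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def chain_similarity(gold: set, system: set) -> float:
    # Placeholder similarity: mention overlap (Dice).  The paper instead weights
    # mentions by how specific they are to their referent.
    if not gold and not system:
        return 1.0
    return 2 * len(gold & system) / (len(gold) + len(system))

def align_annotations(gold_chains, system_chains):
    sim = np.array([[chain_similarity(g, s) for s in system_chains]
                    for g in gold_chains])
    rows, cols = linear_sum_assignment(sim, maximize=True)  # optimal 1-to-1 mapping
    return [(int(r), int(c), float(sim[r, c])) for r, c in zip(rows, cols)]

gold = [{"Mary", "she", "the teacher"}, {"John", "he"}]
system = [{"John", "he", "the teacher"}, {"Mary", "she"}]
print(align_annotations(gold, system))
```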
Unification and Grammatical Theory
This paper informally presents a new view of grammar that has emerged from a number of distinct but related lines of investigation in theoretical and computational linguistics. Under this view, many current linguistic theories, including Lexical-Functional Grammar (LFG), Generalized Phrase Structure Grammar (GPSG), Head-Driven Phrase Structure Grammar (HPSG), and categorial grammar (CG), fall within a general framework of unification grammar. In such theories the linguistic objects under study are associated with linguistic information about those objects, which is modeled by mathematical objects called feature structures. Linguistic phenomena are modeled by constraints of equality over the feature structures; the fundamental operation on feature structures, allowing such systems of equations to be solved, is a simple merging of their information content called unification. Although differences among these theories remain great, this new appreciation of the common threads in research paradigms previously thought ideologically incompatible provides an opportunity for a uniting of efforts and results among these areas, as well as the ability to compare previously incommensurate claims.
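To make the merging operation concrete, here is a minimal sketch, assuming feature structures are represented as nested Python dicts with atomic leaf values; it ignores structure sharing (reentrancy) and is an illustration of the operation, not any of the cited formalisms' actual machinery.

```python
# Minimal illustration (hypothetical representation): a feature structure is a
# nested dict whose leaves are atomic values.  Unification merges the information
# in two structures and fails on conflicting atoms.
class UnificationFailure(Exception):
    pass

def unify(fs1, fs2):
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feature, value in fs2.items():
            result[feature] = unify(result[feature], value) if feature in result else value
        return result
    if fs1 == fs2:            # identical atoms unify with themselves
        return fs1
    raise UnificationFailure(f"{fs1!r} conflicts with {fs2!r}")

np_struct = {"cat": "NP", "agr": {"num": "sg"}}
subject_constraint = {"agr": {"num": "sg", "per": "3"}}
print(unify(np_struct, subject_constraint))
# {'cat': 'NP', 'agr': {'num': 'sg', 'per': '3'}}
# unify(np_struct, {"agr": {"num": "pl"}}) would raise UnificationFailure
```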
Local textual inference: can it be defined or circumscribed?
This paper argues that local textual inferences come in three well-defined varieties (entailments, conventional implicatures/presuppositions, and conversational implicatures) and one less clearly defined one, generally available world knowledge. Based on this taxonomy, it discusses some of the examples in the PASCAL test suite and shows that these examples do not fall into any of these categories. It proposes to enlarge the test suite with examples that are more directly related to the inference patterns discussed.
La polysémie systématique dans la description lexicale
G. Nunberg and A. Zaenen: La polysémie systématique dans la description lexicale
The phenomenon of systematic polysemy offers a fruitful domain for examining the theoretical differences between lexicological and lexicographic approaches to description. We consider here the process that provides for systematic conversion of count to mass nouns in English (a chicken / chicken, an oak / oak, etc.). From the point of view of lexicology, we argue, standard syntactic and pragmatic tests suggest the phenomenon should be described by means of a single unindividuated transfer function that does not distinguish between interpretations (rabbit = « meat » vs. « fur »). From the point of view of lexicography, however, these more precise interpretations are made part of the explicit description via the inclusion of semantic « licences », a mechanism distinct from lexical rules.
Nunberg, Geoffrey; Zaenen, Annie. La polysémie systématique dans la description lexicale. In: Langue française, no. 113, 1997. Aux sources de la polysémie nominale, edited by Pierre Cadiot and Benoît Habert, pp. 12-23.
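As a rough, hypothetical sketch of the contrast the abstract draws, one can oppose a single unindividuated count-to-mass transfer function to explicit, sense-specific licences; the rule, the `LICENCES` table, and its entries below are illustrative assumptions, not the authors' formalization.

```python
# Hypothetical sketch of two descriptive strategies for count-to-mass conversion.

# Strategy 1 (lexicological): one generic transfer function that does not
# choose among possible readings.
def grind(count_noun: str) -> str:
    return f"substance derived from {count_noun}"   # 'rabbit' -> unspecified stuff

# Strategy 2 (lexicographic): explicit licences enumerate conventional readings.
LICENCES = {
    "chicken": ["meat"],
    "rabbit": ["meat", "fur"],
    "oak": ["wood"],
}

def licensed_readings(count_noun: str) -> list:
    return LICENCES.get(count_noun, [])

print(grind("rabbit"))              # substance derived from rabbit
print(licensed_readings("rabbit"))  # ['meat', 'fur']
```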