One Homonym per Translation
The study of homonymy is vital to resolving fundamental problems in lexical
semantics. In this paper, we propose four hypotheses that characterize the
unique behavior of homonyms in the context of translations, discourses,
collocations, and sense clusters. We present a new annotated homonym resource
that allows us to test our hypotheses on existing WSD resources. The results of
the experiments provide strong empirical evidence for the hypotheses. This
study represents a step towards a computational method for distinguishing
between homonymy and polysemy, and constructing a definitive inventory of
coarse-grained senses.
Comment: 8 pages, including references
Pattern Matching and Discourse Processing in Information Extraction from Japanese Text
Information extraction is the task of automatically picking up information of
interest from an unconstrained text. Information of interest is usually
extracted in two steps. First, sentence level processing locates relevant
pieces of information scattered throughout the text; second, discourse
processing merges coreferential information to generate the output. In the
first step, pieces of information are locally identified without recognizing
any relationships among them. A keyword search or simple pattern search can
achieve this purpose. The second step requires deeper knowledge in order to
understand relationships among separately identified pieces of information.
Previous information extraction systems focused on the first step, partly
because they were not required to link up each piece of information with other
pieces. To link the extracted pieces of information and map them onto a
structured output format, complex discourse processing is essential. This paper
reports on a Japanese information extraction system that merges information
using a pattern matcher and discourse processor. Evaluation results show a high
level of system performance that approaches human performance.
Comment: See http://www.jair.org/ for any accompanying file
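The two-step extraction process described above can be illustrated with a minimal sketch. This is not the paper's system (which operates on Japanese text with a full pattern matcher and discourse processor); the patterns, sentences, and merging-by-exact-name heuristic below are all hypothetical simplifications.

```python
import re
from collections import defaultdict

# Step 1: sentence-level processing. Hypothetical regex patterns that
# locate (company, event) facts in isolated English sentences; a real
# system would use a far richer pattern library.
PATTERNS = [
    re.compile(r"(?P<company>[A-Z]\w+(?: [A-Z]\w+)*) announced (?P<event>[\w ]+)"),
    re.compile(r"(?P<event>[\w ]+) was announced by (?P<company>[A-Z]\w+(?: [A-Z]\w+)*)"),
]

def extract_local(sentences):
    """Locate relevant pieces of information, sentence by sentence,
    without recognizing any relationships among them."""
    facts = []
    for sent in sentences:
        for pat in PATTERNS:
            m = pat.search(sent)
            if m:
                facts.append({"company": m.group("company"),
                              "event": m.group("event").strip()})
    return facts

def merge_coreferential(facts):
    """Step 2: discourse processing -- merge separately identified facts
    that refer to the same entity. Coreference is crudely approximated
    here by exact company-name match."""
    merged = defaultdict(set)
    for fact in facts:
        merged[fact["company"]].add(fact["event"])
    return {company: sorted(events) for company, events in merged.items()}

sentences = [
    "Acme Corp announced a merger.",
    "A new factory was announced by Acme Corp.",
]
print(merge_coreferential(extract_local(sentences)))
```

The point of the sketch is the division of labor: step 1 needs only local pattern matching, while step 2 requires relating facts across sentences to produce one structured record per entity.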
Natural language processing
Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems, such as text summarization, information extraction, and information retrieval, including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.
Presupposition, perceptional relativity and translation theory
The intertwining of assertions and presuppositions in utterances affects the way a text is perceived in the source language (SL) and the target language (TL).
Presuppositions can be thought of as shared assumptions that form the background of the asserted meaning. Translating presuppositions as assertions, or vice versa, can distort the thematic meaning of the SL text and produce a text with a different information structure. Since a good translation is concerned not only with transferring the propositional content of the SL text but also with its other semantic and pragmatic components, including thematic meaning, special attention should be accorded to the translation of presupposition. This article examines the intrinsic relation between presupposition and thematic meaning, why the concept is relevant to translation theory, and how
presupposition can affect the structure and understanding of discourse. Unshared presuppositions are major obstacles in translation, as cultural concepts may be conveyed through expressions that yield presuppositions. To attain an optimal proximity to the SL text, presupposition needs to be singled out as a distinct aspect of meaning, and distinctions need to be made between definite and indefinite meaning, topic and comment, topic and focus,
presupposition and entailment, and presupposition and implicature.