Using Multiple Sources of Information for Constraint-Based Morphological Disambiguation
This thesis presents a constraint-based morphological disambiguation approach
that is applicable to languages with complex morphology--specifically
agglutinative languages with productive inflectional and derivational
morphological phenomena. For morphologically complex languages like Turkish,
automatic morphological disambiguation involves selecting, for each token, the
morphological parse(s) with the right set of inflectional and derivational
markers. Our system combines corpus independent hand-crafted constraint rules,
constraint rules that are learned via unsupervised learning from a training
corpus, and additional statistical information obtained from the corpus to be
morphologically disambiguated. The hand-crafted rules are linguistically
motivated and tuned to improve precision without sacrificing recall. In certain
respects, our approach has been motivated by Brill's recent work, but with the
observation that his transformational approach is not directly applicable to
languages like Turkish. Our system also takes a novel approach to unknown-word
processing, employing a secondary morphological processor that recovers any
relevant inflectional and derivational information from a lexical item whose
root is unknown. With this approach, well below 1% of the tokens remain
unknown in the texts we have experimented with. Our results indicate that by
combining these hand-crafted, statistical and learned information sources, we
can attain a recall of 96 to 97% with a corresponding precision of 93 to 94%,
and ambiguity of 1.02 to 1.03 parses per token.

Comment: M.Sc. Thesis submitted to the Department of Computer Engineering and
Information Science, Bilkent University, Ankara, Turkey. Also available as:
ftp://ftp.cs.bilkent.edu.tr/pub/tech-reports/1996/BU-CEIS-9615ps.
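The core idea of constraint-based disambiguation can be illustrated with a minimal sketch. The rule shape, parse notation, and the example rule below are invented for illustration; the thesis's actual rule formalism and feature representation differ:

```python
# Hypothetical sketch of constraint-based morphological disambiguation:
# each token carries several candidate parses, and constraint rules prune
# parses that violate a contextual condition.

def disambiguate(tokens, rules):
    """tokens: list of (surface, [parse, ...]);
    rules: functions (parses, left_ctx, right_ctx) -> surviving parses."""
    result = []
    for i, (surface, parses) in enumerate(tokens):
        left, right = tokens[:i], tokens[i + 1:]
        for rule in rules:
            survivors = rule(parses, left, right)
            if survivors:  # never discard every parse, to protect recall
                parses = survivors
        result.append((surface, parses))
    return result

# Invented example rule: after a determiner-like parse, prefer noun parses.
def prefer_noun_after_det(parses, left, right):
    if left and any("Det" in p for p in left[-1][1]):
        return [p for p in parses if "Noun" in p]
    return parses

toks = [("bu", ["bu+Det", "bu+Pron"]), ("ev", ["ev+Noun", "ev+Verb"])]
print(disambiguate(toks, [prefer_noun_after_det]))
```

Keeping every token's parse list non-empty mirrors the abstract's emphasis on improving precision without sacrificing recall.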
Morphological Disambiguation by Voting Constraints
We present a constraint-based morphological disambiguation system in which
individual constraints vote on matching morphological parses, and
disambiguation of all the tokens in a sentence is performed at the end by
selecting parses that receive the highest votes. This constraint application
paradigm makes the outcome of the disambiguation independent of the rule
sequence, and hence relieves the rule developer from worrying about potentially
conflicting rule sequencing. Our results for disambiguating Turkish indicate
that using about 500 constraint rules and some additional simple statistics, we
can attain a recall of 95-96% and a precision of 94-95% with about 1.01 parses
per token. Our system is implemented in Prolog and we are currently
investigating an efficient implementation based on finite state transducers.

Comment: 8 pages, LaTeX source. To appear in Proceedings of ACL/EACL'97.
Compressed postscript also available as
ftp://ftp.cs.bilkent.edu.tr/pub/ko/acl97.ps.
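The order-independence property of voting constraints can be sketched as follows. The constraint shapes and weights are invented; the paper's actual constraints are linguistically motivated rules over morphological features:

```python
from collections import defaultdict

def vote_disambiguate(tokens, constraints):
    """tokens: list of (surface, [parse, ...]);
    constraints: list of (matcher, weight), matcher(parse, surface) -> bool.
    All constraints cast votes first; parses are selected once at the end,
    so the outcome does not depend on the order the rules are applied in."""
    chosen = []
    for surface, parses in tokens:
        votes = defaultdict(float)
        for matcher, weight in constraints:
            for p in parses:
                if matcher(p, surface):
                    votes[p] += weight
        best = max(votes.values(), default=0.0)
        chosen.append((surface, [p for p in parses if votes[p] == best]))
    return chosen

# Invented toy constraints: prefer noun readings over verb readings.
constraints = [
    (lambda p, s: p.endswith("+Noun"), 2.0),
    (lambda p, s: p.endswith("+Verb"), 1.0),
]
print(vote_disambiguate([("ev", ["ev+Noun", "ev+Verb"])], constraints))
```

Because selection happens after all votes are tallied, adding or reordering constraints never changes intermediate state, which is the rule-sequencing independence the abstract highlights.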
Word sense disambiguation criteria: a systematic study
This article describes the results of a systematic in-depth study of the
criteria used for word sense disambiguation. Our study is based on 60 target
words: 20 nouns, 20 adjectives and 20 verbs. Our results are not always in line
with some practices in the field. For example, we show that omitting
non-content words decreases performance and that bigrams yield better results
than unigrams.
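The unigram/bigram distinction in the study's criteria can be made concrete with a small sketch; the feature extractors below are generic illustrations, not the article's exact criteria:

```python
# Context features for word sense disambiguation: unigrams are individual
# context words, bigrams are adjacent word pairs. Note the study's finding
# that non-content words should be kept, so nothing is filtered out here.

def unigram_features(context):
    return set(context)

def bigram_features(context):
    return {(a, b) for a, b in zip(context, context[1:])}

ctx = ["the", "interest", "rate", "rose"]
print(unigram_features(ctx))
print(bigram_features(ctx))
```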
A Machine learning approach to POS tagging
We have applied inductive learning of statistical decision trees
and relaxation labelling to the Natural Language Processing (NLP)
task of morphosyntactic disambiguation (Part Of Speech Tagging).
The learning process is supervised and obtains a language
model oriented to resolve POS ambiguities. This model consists
of a set of statistical decision trees expressing distribution of
tags and words in some relevant contexts.
The acquired language models are complete enough to be directly
used as sets of POS disambiguation rules, and include more complex
contextual information than simple collections of n-grams usually
used in statistical taggers.
We have implemented a quite simple and fast tagger that has been
tested and evaluated on the Wall Street Journal (WSJ) corpus with
remarkable accuracy.
However, better results can be obtained by translating the trees
into rules to feed a flexible relaxation labelling based tagger.
In this direction we describe a tagger which is able to use
information of any kind (n-grams, automatically acquired constraints,
linguistically motivated manually written constraints, etc.), and in
particular to incorporate the machine learned decision trees.
Simultaneously, we address the problem of tagging when only a small amount of
training material is available, which is crucial in any process of
constructing an annotated corpus from scratch. We show that quite high
accuracy can be achieved with our system in this situation.
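The decision-tree component can be sketched minimally. The tree structure, context questions, and tag distributions below are invented toys; the paper's trees are induced from the WSJ corpus over richer contexts:

```python
# Sketch of a statistical decision tree for POS tagging: internal nodes ask
# yes/no questions about the local context, and each leaf stores a
# distribution over tags rather than a single tag.

class Node:
    def __init__(self, question=None, yes=None, no=None, dist=None):
        self.question, self.yes, self.no, self.dist = question, yes, no, dist

def tag_distribution(node, context):
    while node.dist is None:  # descend until a leaf is reached
        node = node.yes if node.question(context) else node.no
    return node.dist

# context = (previous tag, current word); invented toy tree
tree = Node(
    question=lambda ctx: ctx[0] == "DT",  # "is the previous tag a determiner?"
    yes=Node(dist={"NN": 0.9, "VB": 0.1}),
    no=Node(dist={"VB": 0.6, "NN": 0.4}),
)
print(tag_distribution(tree, ("DT", "run")))
```

Because leaves hold full distributions, such trees can either be read off as disambiguation rules or feed their probabilities into a relaxation-labelling tagger, as the abstract describes.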
Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR) 2007
This is the proceedings of the Workshop on Semantic Content Acquisition and Representation, held in conjunction with NODALIDA 2007, on May 24 2007 in Tartu, Estonia.
Selective Sampling for Example-based Word Sense Disambiguation
This paper proposes an efficient example sampling method for example-based
word sense disambiguation systems. To construct a database of practical size, a
considerable overhead for manual sense disambiguation (overhead for
supervision) is required. In addition, the time complexity of searching a
large-sized database poses a considerable problem (overhead for search). To
counter these problems, our method selectively samples a smaller-sized
effective subset from a given example set for use in word sense disambiguation.
Our method is characterized by the reliance on the notion of training utility:
the degree to which each example is informative for future example sampling
when used for the training of the system. The system progressively collects
examples by selecting those with greatest utility. The paper reports the
effectiveness of our method through experiments on about one thousand
sentences. Compared to experiments with other example sampling methods, our
method reduced both the overhead for supervision and the overhead for search
without degrading the performance of the system.

Comment: 25 pages, 14 Postscript figures.