Models of Co-occurrence
A model of co-occurrence in bitext is a boolean predicate that indicates
whether a given pair of word tokens co-occur in corresponding regions of the
bitext space. Co-occurrence is a precondition for the possibility that two
tokens might be mutual translations. Models of co-occurrence are the glue that
binds methods for mapping bitext correspondence with methods for estimating
translation models into an integrated system for exploiting parallel texts.
Different models of co-occurrence are possible, depending on the kind of bitext
map that is available, the language-specific information that is available, and
the assumptions made about the nature of translational equivalence. Although
most statistical translation models are based on models of co-occurrence,
modeling co-occurrence correctly is more difficult than it may at first appear.
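One way to make the definition concrete, under the assumption that a segment-aligned bitext map is available: two tokens co-occur exactly when they fall in corresponding segments. The Python sketch below is a minimal illustration of that reading; the function name, the range-based segment map, and the token-position interface are invented for the example, not taken from the paper.

```python
from typing import List, Tuple

def cooccur(
    src_pos: int,
    tgt_pos: int,
    segment_map: List[Tuple[range, range]],
) -> bool:
    """Boolean co-occurrence predicate: True iff the two token
    positions fall in corresponding regions of the bitext map.
    (Illustrative; assumes a segment-aligned map.)"""
    return any(
        src_pos in src_region and tgt_pos in tgt_region
        for src_region, tgt_region in segment_map
    )

# Example: source positions 0-9 align with target positions 0-11.
segment_map = [(range(0, 10), range(0, 12)), (range(10, 25), range(12, 30))]
assert cooccur(3, 7, segment_map)
assert not cooccur(3, 20, segment_map)
```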
Automatic Construction of Clean Broad-Coverage Translation Lexicons
Word-level translational equivalences can be extracted from parallel texts by
surprisingly simple statistical techniques. However, these techniques are
easily fooled by indirect associations: pairs of unrelated words whose statistical properties resemble those of mutual translations. Indirect
associations pollute the resulting translation lexicons, drastically reducing
their precision. This paper presents an iterative lexicon cleaning method. On
each iteration, most of the remaining incorrect lexicon entries are filtered
out, without significant degradation in recall. This lexicon cleaning technique
can produce translation lexicons with recall and precision both exceeding 90%, as well as dictionary-sized translation lexicons that are over 99% correct.
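The abstract does not reproduce the filtering criterion, but the iterative shape can be sketched. The hedged Python sketch below drops, on each pass, entries that are not the strongest remaining association for either of their words, one simple way to suppress indirect associations; the scoring interface and the specific filter are assumptions for illustration, not the paper's method.

```python
from typing import Dict, Tuple

def clean_lexicon(
    scores: Dict[Tuple[str, str], float],
    max_iters: int = 10,
) -> Dict[Tuple[str, str], float]:
    """Iteratively filter a noisy translation lexicon: each pass keeps
    only entries that are still the best-scoring link for at least one
    of their two words, a typical signature that separates direct from
    indirect associations.  (Illustrative filter, not the paper's.)"""
    lexicon = dict(scores)
    for _ in range(max_iters):
        best_src: Dict[str, float] = {}
        best_tgt: Dict[str, float] = {}
        for (s, t), v in lexicon.items():
            best_src[s] = max(best_src.get(s, v), v)
            best_tgt[t] = max(best_tgt.get(t, v), v)
        filtered = {
            (s, t): v
            for (s, t), v in lexicon.items()
            if v >= best_src[s] or v >= best_tgt[t]
        }
        if len(filtered) == len(lexicon):
            break  # fixed point: nothing left to filter out
        lexicon = filtered
    return lexicon
```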
Word-to-Word Models of Translational Equivalence
Parallel texts (bitexts) have properties that distinguish them from other
kinds of parallel data. First, most words translate to only one other word.
Second, bitext correspondence is noisy. This article presents methods for
biasing statistical translation models to reflect these properties. Analysis of
the expected behavior of these biases in the presence of sparse data predicts
that they will result in more accurate models. The prediction is confirmed by
evaluation with respect to a gold standard: translation models that are
biased in this fashion are significantly more accurate than a baseline
knowledge-poor model. This article also shows how a statistical translation
model can take advantage of various kinds of pre-existing knowledge that might
be available about particular language pairs. Even the simplest kinds of
language-specific knowledge, such as the distinction between content words and
function words, are shown to reliably boost translation model performance on
some tasks. Statistical models that are informed by pre-existing knowledge
about the model domain combine the best of both the rationalist and empiricist
traditions.
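A common way to encode the one-to-one bias described above is a greedy competitive-linking step: repeatedly accept the highest-scoring pair whose words are both still unlinked. The sketch below shows that idea in miniature; the data layout is assumed for illustration, and the full model involves more than this single step.

```python
from typing import Dict, List, Tuple

def competitive_linking(
    scores: Dict[Tuple[str, str], float],
) -> List[Tuple[str, str]]:
    """Greedy one-to-one linking: repeatedly accept the highest-scoring
    pair whose words are both still unlinked, encoding the bias that
    most words translate to exactly one other word."""
    links: List[Tuple[str, str]] = []
    used_src: set = set()
    used_tgt: set = set()
    for (s, t), _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if s not in used_src and t not in used_tgt:
            links.append((s, t))
            used_src.add(s)
            used_tgt.add(t)
    return links
```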
Automatic Discovery of Non-Compositional Compounds in Parallel Data
Automatic segmentation of text into minimal content-bearing units is an
unsolved problem even for languages like English. Spaces between words offer an
easy first approximation, but this approximation is not good enough for machine
translation (MT), where many word sequences are not translated word-for-word.
This paper presents an efficient automatic method for discovering sequences of
words that are translated as a unit. The method proceeds by comparing pairs of
statistical translation models induced from parallel texts in two languages. It
can discover hundreds of non-compositional compounds on each iteration, and
constructs longer compounds out of shorter ones. Objective evaluation on a
simple machine translation task has shown the method's potential to improve the
quality of MT output. The method makes few assumptions about the data, so it
can be applied to parallel data other than parallel texts, such as word
spellings and pronunciations.
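A hedged sketch of the comparison at the heart of such a method: induce a translation model with a candidate word sequence fused into one token, and keep the candidate if fusing it improves how well the model explains the parallel data. The gain function below is a stand-in parameter; the paper's actual objective and model induction are not reproduced here.

```python
from typing import Callable, List, Set, Tuple

Bitext = List[Tuple[List[str], List[str]]]

def discover_compounds(
    bitext: Bitext,
    candidates: Set[Tuple[str, str]],
    model_gain: Callable[[Bitext], float],
) -> Set[Tuple[str, str]]:
    """Keep each candidate bigram whose fusion into a single token
    raises the translation model's objective on the bitext.
    (Illustrative shape; `model_gain` is an assumed interface.)"""
    base = model_gain(bitext)
    kept = set()
    for w1, w2 in candidates:
        fused = [(fuse(src, w1, w2), tgt) for src, tgt in bitext]
        if model_gain(fused) > base:
            kept.add((w1, w2))
    return kept

def fuse(tokens: List[str], w1: str, w2: str) -> List[str]:
    """Rewrite adjacent occurrences of (w1, w2) as one token 'w1_w2'."""
    out: List[str] = []
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == w1 and tokens[i + 1] == w2:
            out.append(w1 + "_" + w2)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out
```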
Manual Annotation of Translational Equivalence: The Blinker Project
Bilingual annotators were paid to link roughly sixteen thousand corresponding
words between on-line versions of the Bible in modern French and modern
English. These annotations are freely available to the research community from
http://www.cis.upenn.edu/~melamed . The annotations can be used for several
purposes. First, they can be used as a standard data set for developing and
testing translation lexicons and statistical translation models. Second,
researchers in lexical semantics will be able to mine the annotations for
insights about cross-linguistic lexicalization patterns. Third, the annotations
can be used in research into certain recently proposed methods for monolingual
word-sense disambiguation. This paper describes the annotated texts, the
specially-designed annotation tool, and the strategies employed to increase the
consistency of the annotations. The annotation process was repeated five times
by different annotators. Inter-annotator agreement rates indicate that the
annotations are reasonably reliable and that the method is easy to replicate.
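The abstract reports agreement rates without the formula; for readers who want to replicate the reliability check, here is one conventional way to score agreement between word-link sets, the average pairwise Jaccard overlap. This is a hedged stand-in, not necessarily the metric the project used.

```python
from itertools import combinations
from typing import List, Set, Tuple

Link = Tuple[int, int]  # (source word index, target word index)

def pairwise_agreement(annotations: List[Set[Link]]) -> float:
    """Mean Jaccard overlap between every pair of annotators' link
    sets; assumes at least two annotators.  (Illustrative metric,
    not necessarily the one used by the Blinker project.)"""
    ratios = []
    for a, b in combinations(annotations, 2):
        union = a | b
        ratios.append(len(a & b) / len(union) if union else 1.0)
    return sum(ratios) / len(ratios)
```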
Accuracy-based scoring for DOT: towards direct error minimization for data-oriented translation
In this work we present a novel technique to rescore fragments in the Data-Oriented Translation model based on their contribution to translation accuracy. We describe three new rescoring methods, and present the initial results of a pilot experiment on a small subset of the Europarl corpus. This work is a proof of concept, and is the first step in directly optimizing translation decisions solely on the hypothesized accuracy of potential translations resulting from those decisions.
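The rescoring methods themselves are not specified in the abstract. As a hedged sketch of the general shape such a method could take, the code below credits each fragment with the mean accuracy of the output translations whose derivations used it; every name and the accuracy interface are assumptions for illustration, not the paper's procedure.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def accuracy_rescore(
    derivations: List[Tuple[List[str], float]],
) -> Dict[str, float]:
    """Assign each fragment the mean accuracy of the translations whose
    derivations used it.  `derivations` pairs the fragment ids used in
    one output with that output's accuracy against a reference.
    (Illustrative shape only.)"""
    total: Dict[str, float] = defaultdict(float)
    count: Dict[str, int] = defaultdict(int)
    for fragments, accuracy in derivations:
        for frag in fragments:
            total[frag] += accuracy
            count[frag] += 1
    return {frag: total[frag] / count[frag] for frag in total}
```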
Automatic Detection of Omissions in Translations
ADOMIT is an algorithm for Automatic Detection of OMIssions in Translations. The algorithm relies solely on geometric analysis of bitext maps and uses no linguistic information. This property allows it to deal equally well with omissions that do not correspond to linguistic units, such as might result from word-processing mishaps. ADOMIT has proven itself by discovering many errors in a hand-constructed gold standard for evaluating bitext mapping algorithms. Quantitative evaluation on simulated omissions showed that, even with today's poor bitext mapping technology, ADOMIT is a valuable quality control tool for translators and translation bureaus.
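The geometric intuition can be sketched: an omission appears as a stretch of the bitext map where the translation side barely advances while the original side advances a great deal. The Python sketch below flags such stretches; the thresholds and the point-list representation of the map are illustrative assumptions, not ADOMIT's actual procedure.

```python
from typing import List, Tuple

def detect_omissions(
    bitext_map: List[Tuple[int, int]],
    slope_ratio: float = 0.2,
    min_source_span: int = 50,
) -> List[Tuple[int, int]]:
    """Flag candidate omissions in a bitext map given as (source
    position, target position) points sorted by source position:
    report stretches where the target side advances far more slowly
    than the map's global slope predicts.  Purely geometric;
    thresholds are illustrative."""
    (x0, y0), (xn, yn) = bitext_map[0], bitext_map[-1]
    global_slope = (yn - y0) / max(xn - x0, 1)
    omissions = []
    for (x1, y1), (x2, y2) in zip(bitext_map, bitext_map[1:]):
        dx, dy = x2 - x1, y2 - y1
        if dx >= min_source_span and dy < slope_ratio * global_slope * dx:
            omissions.append((x1, x2))  # source span worth reviewing
    return omissions
```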
A Geometric Approach to Mapping Bitext Correspondence
The first step in most corpus-based multilingual NLP work is to construct a
detailed map of the correspondence between a text and its translation. Several
automatic methods for this task have been proposed in recent years. Yet even
the best of these methods can err by several typeset pages. The Smooth
Injective Map Recognizer (SIMR) is a new bitext mapping algorithm. SIMR's
errors are smaller than those of the previous front-runner by more than a
factor of 4. Its robustness has enabled new commercial-quality applications.
The greedy nature of the algorithm makes it independent of memory resources.
Unlike other bitext mapping algorithms, SIMR allows crossing correspondences to
account for word order differences. Its output can be converted quickly and
easily into a sentence alignment. SIMR's output has been used to align over 200
megabytes of the Canadian Hansards for publication by the Linguistic Data
Consortium.
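As a rough picture of what a greedy bitext mapper does, the hedged sketch below builds a map from candidate correspondence points (for example, cognate matches) by keeping points that advance monotonically and stay close to the overall diagonal. It is a simplified stand-in with invented parameters; it omits SIMR's actual chain search, its noise handling, and its support for crossing correspondences.

```python
from typing import List, Tuple

Point = Tuple[int, int]  # (position in text A, position in text B)

def greedy_chain(candidates: List[Point], max_disp: float = 0.05) -> List[Point]:
    """Build a rough bitext map from candidate correspondence points:
    walk them in source order, keeping each point that advances the
    target side and stays close to the line through the extreme
    candidates.  (Simplified illustration, not SIMR itself.)"""
    pts = sorted(candidates)
    (x0, y0), (xn, yn) = pts[0], pts[-1]
    slope = (yn - y0) / max(xn - x0, 1)
    chain: List[Point] = []
    last_y = -1
    for x, y in pts:
        expected = y0 + slope * (x - x0)
        if y > last_y and abs(y - expected) <= max_disp * max(yn - y0, 1):
            chain.append((x, y))
            last_y = y
    return chain
```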