A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena
Word reordering is one of the most difficult aspects of statistical machine
translation (SMT), and a major factor in its quality and efficiency.
Despite the vast amount of research published to date, the interest of the
community in this problem has not decreased, and no single method appears to be
strongly dominant across language pairs. Instead, the choice of the optimal
approach for a new translation task still seems to be mostly driven by
empirical trials. To orientate the reader in this vast and complex research
area, we present a comprehensive survey of word reordering viewed as a
statistical modeling challenge and as a natural language phenomenon. The survey
describes in detail how word reordering is modeled within different
string-based and tree-based SMT frameworks and as a stand-alone task, including
systematic overviews of the literature in advanced reordering modeling. We then
question why some approaches are more successful than others in different
language pairs. We argue that, besides measuring the amount of reordering, it
is important to understand which kinds of reordering occur in a given language
pair. To this end, we conduct a qualitative analysis of word reordering
phenomena in a diverse sample of language pairs, based on a large collection of
linguistic knowledge. Empirical results in the SMT literature are shown to
support the hypothesis that a few linguistic facts can be very useful to
anticipate the reordering characteristics of a language pair and to select the
SMT framework that best suits them.
Comment: 44 pages, to appear in Computational Linguistics
Topic-Specific Sentiment Analysis Can Help Identify Political Ideology
Ideological leanings of an individual can often be gauged by the sentiment
one expresses about different issues. We propose a simple framework that
represents a political ideology as a distribution of sentiment polarities
towards a set of topics. This representation can then be used to detect
ideological leanings of documents (speeches, news articles, etc.) based on the
sentiments expressed towards different topics. Experiments performed using a
widely used dataset show the promise of our proposed approach that achieves
comparable performance to other methods despite being much simpler and more
interpretable.
Comment: Presented at EMNLP Workshop on Computational Approaches to
Subjectivity, Sentiment & Social Media Analysis, 201
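The representation the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual implementation: the topic set, the polarity profiles, and the nearest-profile classifier are all assumptions made for the example.

```python
import math

# Hypothetical sketch: an ideology is represented as a vector of sentiment
# polarities (in [-1, 1]) over a fixed topic set; a document is profiled the
# same way and assigned the ideology with the most similar profile.
TOPICS = ["immigration", "taxes", "healthcare"]

IDEOLOGIES = {
    "A": [-0.6, 0.7, -0.5],  # illustrative polarity profiles, not real data
    "B": [0.5, -0.4, 0.6],
}

def cosine(u, v):
    # Cosine similarity between two polarity profiles.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(doc_profile):
    # Pick the ideology whose topic-sentiment profile is closest.
    return max(IDEOLOGIES, key=lambda k: cosine(doc_profile, IDEOLOGIES[k]))

print(classify([0.4, -0.3, 0.5]))  # → B
```

Part of the appeal the abstract claims (interpretability) is visible here: the classification can be explained directly in terms of per-topic sentiment agreement.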
A Call for Standardization and Validation of Text Style Transfer Evaluation
Text Style Transfer (TST) evaluation is, in practice, inconsistent.
Therefore, we conduct a meta-analysis on human and automated TST evaluation and
experimentation that thoroughly examines existing literature in the field. The
meta-analysis reveals a substantial standardization gap in human and automated
evaluation. In addition, we find a validation gap: only a few automated
metrics have been validated using human experiments. To this end, we thoroughly
scrutinize both the standardization and validation gaps and reveal the resulting
pitfalls. This work also paves the way to close the standardization and
validation gap in TST evaluation by calling out requirements to be met by
future research.
Comment: Accepted to Findings of ACL 202
TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
Masked language models (MLMs) such as BERT and RoBERTa have revolutionized
the field of Natural Language Understanding in the past few years. However,
existing pre-trained MLMs often output an anisotropic distribution of token
representations that occupies a narrow subset of the entire representation
space. Such token representations are not ideal, especially for tasks that
demand discriminative semantic meanings of distinct tokens. In this work, we
propose TaCL (Token-aware Contrastive Learning), a novel continual pre-training
approach that encourages BERT to learn an isotropic and discriminative
distribution of token representations. TaCL is fully unsupervised and requires
no additional data. We extensively test our approach on a wide range of English
and Chinese benchmarks. The results show that TaCL brings consistent and
notable improvements over the original BERT model. Furthermore, we conduct
detailed analysis to reveal the merits and inner workings of our approach.
Comment: Camera-ready for NAACL 202
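The core idea of a token-level contrastive objective can be sketched as follows. This is an assumed simplification in the spirit of the abstract, not TaCL's actual loss: each "student" token representation is pulled toward the corresponding "teacher" token (the positive pair on the diagonal) and pushed away from the teacher's other tokens in the same sequence (negatives), which discourages all tokens from collapsing into a narrow region of the space.

```python
import numpy as np

def token_contrastive_loss(student, teacher, temperature=0.07):
    """InfoNCE-style loss over (seq_len, dim) L2-normalized token vectors.

    Positives are aligned positions (the diagonal of the similarity
    matrix); all other teacher tokens in the sequence act as negatives.
    """
    sim = student @ teacher.T / temperature          # (seq_len, seq_len)
    sim = sim - sim.max(axis=1, keepdims=True)       # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # -log p(positive)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
x /= np.linalg.norm(x, axis=1, keepdims=True)

loss_aligned = token_contrastive_loss(x, x)                      # matched views
loss_shifted = token_contrastive_loss(x, np.roll(x, 1, axis=0))  # misaligned
print(loss_aligned < loss_shifted)  # → True
```

When student and teacher views agree position-by-position the loss is near zero; misaligning them makes the positives weak relative to the negatives and the loss rises, which is the behavior such an objective rewards during continual pre-training.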