395 research outputs found
Emerging methods for conceptual modelling in neuroimaging
This review addresses open theoretical questions about how the mind and brain represent and process concepts, particularly as they are instantiated in particular human languages. Neuroimaging recordings should provide a suitable empirical basis for investigating this topic, but the complexity and variety of language demand appropriate data-driven approaches. We argue for a particular suite of methodologies based on multivariate classification techniques, which have proven to be powerful tools for distinguishing neural and cognitive states in fMRI. We introduce a combination of larger-scale neuroimaging studies with different monolingual and bilingual populations, together with hybrid computational analyses that use encoded implementations of existing theories of conceptual organisation to probe those data. Together, these methodologies hold the promise of holistically eliciting, recording and modelling neural processing during language comprehension and production.
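As an illustration of the multivariate classification approach the review argues for, here is a minimal sketch of cross-validated decoding of two conditions from voxel patterns. The data are synthetic stand-ins for fMRI recordings, and the nearest-centroid decoder is one simple choice among many multivariate classifiers; nothing here is taken from the review itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel" patterns for two conceptual conditions (hypothetical
# data: 40 trials x 50 voxels each, with a small mean shift between them).
n_trials, n_voxels = 40, 50
cond_a = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + 0.8
cond_b = rng.normal(0.0, 1.0, (n_trials, n_voxels)) - 0.8
X = np.vstack([cond_a, cond_b])
y = np.array([0] * n_trials + [1] * n_trials)

def nearest_centroid_cv(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for test in folds:
        train = np.setdiff1d(idx, test)
        # Class centroids estimated on the training trials only.
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

acc = nearest_centroid_cv(X, y)
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Accuracy well above chance (0.5) on held-out trials is the usual evidence that the voxel patterns carry condition information.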
A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena
Word reordering is one of the most difficult aspects of statistical machine
translation (SMT), and an important factor of its quality and efficiency.
Despite the vast amount of research published to date, the interest of the
community in this problem has not decreased, and no single method appears to be
strongly dominant across language pairs. Instead, the choice of the optimal
approach for a new translation task still seems to be mostly driven by
empirical trials. To orientate the reader in this vast and complex research
area, we present a comprehensive survey of word reordering viewed as a
statistical modeling challenge and as a natural language phenomenon. The survey
describes in detail how word reordering is modeled within different
string-based and tree-based SMT frameworks and as a stand-alone task, including
systematic overviews of the literature in advanced reordering modeling. We then
question why some approaches are more successful than others in different
language pairs. We argue that, besides measuring the amount of reordering, it
is important to understand which kinds of reordering occur in a given language
pair. To this end, we conduct a qualitative analysis of word reordering
phenomena in a diverse sample of language pairs, based on a large collection of
linguistic knowledge. Empirical results in the SMT literature are shown to
support the hypothesis that a few linguistic facts can be very useful to
anticipate the reordering characteristics of a language pair and to select the
SMT framework that best suits them.

Comment: 44 pages, to appear in Computational Linguistics
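One standard way to quantify "the amount of reordering" in a language pair, as discussed above, is the normalised Kendall tau distance over the target permutation induced by a word alignment. A minimal sketch, assuming a one-to-one alignment (real alignments with unaligned or multiply-aligned words need extra handling):

```python
def reordering_amount(alignment):
    """Normalised Kendall tau distance for a one-to-one word alignment,
    given as the target index of each source word in source order.
    Returns the fraction of word pairs whose relative order is swapped."""
    n = len(alignment)
    swaps = sum(1 for i in range(n) for j in range(i + 1, n)
                if alignment[i] > alignment[j])
    return swaps / (n * (n - 1) / 2)

# Monotone alignment: no reordering.
print(reordering_amount([0, 1, 2, 3]))   # 0.0
# Toy verb-movement pattern: 1/3 of word pairs are swapped.
print(reordering_amount([0, 3, 1, 2]))
```

A score of 0 means the target preserves source order; values near 1 indicate near-total inversion, which is one signal for choosing a reordering-heavy SMT framework.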
Phraseology in Corpus-Based Translation Studies: A Stylistic Study of Two Contemporary Chinese Translations of Cervantes's Don Quijote
The present work sets out to investigate the stylistic profiles of two modern Chinese versions of
Cervantes’s Don Quijote (I): by Yang Jiang (1978), the first direct translation from Castilian to Chinese,
and by Liu Jingsheng (1995), which is one of the most commercially successful versions of the
Castilian literary classic. This thesis focuses on a detailed linguistic analysis carried out with the help
of the latest textual analytical tools, natural language processing applications and statistical packages.
The type of linguistic phenomenon singled out for study is four-character expressions (FCEXs), which
are a very typical category of Chinese phraseology. The work opens with the creation of a descriptive
framework for the annotation of linguistic data extracted from the parallel corpus of Don Quijote.
Subsequently, the classified and extracted data are put through several statistical tests. The results of
these tests prove to be very revealing regarding the different use of FCEXs in the two Chinese
translations. The computational modelling of the linguistic data would seem to indicate that, among
other findings, while Liu’s use of archaic idioms has followed the general patterns of the original and
also of Yang’s work in the first half of Don Quijote I, noticeable variations begin to emerge in the
second half of Liu’s more recent version. Such an idiosyncratic use of archaisms by Liu, which may be
defined as style shifting or style variation, is then analyzed in quantitative terms through the application
of the proposed context-motivated theory (CMT). The results of applying the CMT-derived statistical
models show that the detected stylistic variation may well point to the internal consistency of the
translator in rendering the second half of Part I of the novel, which reflects his freer, more creative and
experimental style of translation. Through the introduction and testing of quantitative research methods
adapted from corpus linguistics and textual statistics, this thesis has made a major contribution to
methodological innovation in the study of style within the context of corpus-based translation studies
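The extraction step described above can be sketched as a scan over contiguous Chinese character runs, checking each four-character window against a reference idiom lexicon. The lexicon and example sentence below are illustrative assumptions, not data from the thesis:

```python
import re

def fcex_candidates(text, lexicon):
    """Return four-character expressions found in `text`: every
    four-character window of each contiguous CJK run that appears
    in the reference `lexicon` (both inputs hypothetical)."""
    found = []
    for run in re.findall(r"[\u4e00-\u9fff]+", text):
        for i in range(len(run) - 3):
            cand = run[i:i + 4]
            if cand in lexicon:
                found.append(cand)
    return found

lexicon = {"胡思乱想", "异想天开"}        # tiny stand-in idiom list
text = "他整天胡思乱想，总是异想天开。"
print(fcex_candidates(text, lexicon))     # ['胡思乱想', '异想天开']
```

Counts of such matches per text segment are the kind of classified data that the statistical tests in the thesis would then operate on.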
Syntax-based machine translation using dependency grammars and discriminative machine learning
Machine translation has undergone huge improvements since the groundbreaking
introduction of statistical methods in the early 2000s, going from very
domain-specific systems that still performed relatively poorly despite the
painstaking crafting of thousands of ad-hoc rules, to general-purpose
systems automatically trained on large collections of bilingual texts, which
manage to deliver understandable translations that convey the general
meaning of the original input.
These approaches, however, still perform well below the level of human
translators, typically failing to convey detailed meaning and register, and
producing translations that, while readable, are often ungrammatical and
unidiomatic.
This quality gap, which is considerably larger than in most other
natural language processing tasks, has been the focus of research in
recent years, with the development of increasingly sophisticated models that
attempt to exploit the syntactic structure of human languages, leveraging
the technology of statistical parsers as well as advanced machine learning
methods such as margin-based structured prediction algorithms and neural
networks.
The translation software itself has become more complex in order to
accommodate the sophistication of these advanced models: the main translation
engine (the decoder) is now often combined with a pre-processor that
reorders the words of the source sentences into a target-language word order, or
with a post-processor that ranks and selects a translation according
to a fine model from a list of candidate translations generated by a coarse
model.
In this thesis we investigate the statistical machine translation problem
from various angles, focusing on translation from non-analytic languages
whose syntax is best described by fluid non-projective dependency grammars
rather than the relatively strict phrase-structure grammars or projective
dependency grammars which are most commonly used in the literature.
We propose a framework for modeling word reordering phenomena
between language pairs as transitions on non-projective source dependency
parse graphs. We quantitatively characterize reordering phenomena for the
German-to-English language pair as captured by this framework, specifically
investigating the incidence and effects of the non-projectivity of source
syntax and the non-locality of word movement w.r.t. the graph structure.
We evaluate several variants of hand-coded pre-ordering rules in order to
assess the impact of these phenomena on translation quality.
We propose a class of dependency-based source pre-ordering approaches
that reorder sentences based on flexible models trained with SVMs and
several recurrent neural network architectures.
We also propose a class of translation reranking models, both syntax-free
and source dependency-based, which make use of graph echo state networks, a
type of neural network that is highly flexible and requires very few
training resources, overcoming one of the main limitations of neural
network models for natural language processing tasks.
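The non-projectivity of source syntax discussed above corresponds to crossing arcs in the dependency graph. A minimal sketch of detecting it, assuming each token's head is given by index (root marked as -1); this is a generic illustration, not the thesis's own tooling:

```python
def crossing_arcs(heads):
    """Return pairs of crossing dependency arcs; a parse is
    non-projective iff any exist. `heads[i]` is the head index of
    token i, with -1 marking the root."""
    arcs = [(min(i, h), max(i, h)) for i, h in enumerate(heads) if h >= 0]
    crossings = []
    for a, (l1, r1) in enumerate(arcs):
        for l2, r2 in arcs[a + 1:]:
            # Two arcs cross iff exactly one endpoint of each lies
            # strictly inside the other's span.
            if l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1:
                crossings.append(((l1, r1), (l2, r2)))
    return crossings

# Toy non-projective analysis: arc (0,2) crosses arc (1,3).
heads = [2, 3, -1, 2]
print(crossing_arcs(heads))   # [((0, 2), (1, 3))]
```

Counting such crossings over a treebank is one way to quantify how often a language pair forces the modeller outside projective dependency grammars.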
Tackling Sequence to Sequence Mapping Problems with Neural Networks
In Natural Language Processing (NLP), it is important to detect the
relationship between two sequences or to generate a sequence of tokens given
another observed sequence. We refer to problems of modelling sequence pairs
as sequence-to-sequence (seq2seq) mapping problems. A lot of research has
been devoted to finding ways of tackling these problems, with traditional
approaches relying on a combination of hand-crafted features, alignment models,
segmentation heuristics, and external linguistic resources. Although great
progress has been made, these traditional approaches suffer from various
drawbacks, such as complicated pipelines, laborious feature engineering, and the
difficulty of domain adaptation. Recently, neural networks have emerged as a
promising solution to many problems in NLP, speech recognition, and computer
vision. Neural models are powerful because they can be trained end to end,
generalise well to unseen examples, and the same framework can be easily
adapted to a new domain.
The aim of this thesis is to advance the state-of-the-art in seq2seq mapping
problems with neural networks. We explore solutions from three major aspects:
investigating neural models for representing sequences, modelling interactions
between sequences, and using unpaired data to boost the performance of neural
models. For each aspect, we propose novel models and evaluate their efficacy on
various tasks of seq2seq mapping.

Comment: PhD thesis
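The seq2seq framing described above can be sketched as an encoder that folds the source sequence into a context vector and a decoder that emits target tokens conditioned on it. All weights below are random and untrained, and the vocabulary is a toy assumption; this only illustrates the end-to-end dataflow, not any specific model from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy vocabulary and random (untrained) parameters.
vocab = ["<s>", "</s>", "a", "b", "c"]
V, H = len(vocab), 8
E = rng.normal(0, 0.1, (V, H))       # token embeddings
W_enc = rng.normal(0, 0.1, (H, H))   # encoder recurrence
W_dec = rng.normal(0, 0.1, (H, H))   # decoder recurrence
W_out = rng.normal(0, 0.1, (H, V))   # output projection

def encode(tokens):
    """Fold the source sequence into a single context vector."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(E[vocab.index(t)] + W_enc @ h)
    return h

def decode(h, max_len=5):
    """Greedily emit target tokens conditioned on the context vector."""
    out, tok = [], "<s>"
    for _ in range(max_len):
        h = np.tanh(E[vocab.index(tok)] + W_dec @ h)
        tok = vocab[int(np.argmax(h @ W_out))]
        if tok == "</s>":
            break
        out.append(tok)
    return out

print(decode(encode(["a", "b", "c"])))
```

Training end to end means fitting all four weight matrices jointly against paired sequences, which is what removes the separate alignment and segmentation stages of the traditional pipeline.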