Unsupervised Neural Machine Translation with SMT as Posterior Regularization
Without a real bilingual corpus available, unsupervised Neural Machine
Translation (NMT) typically requires pseudo-parallel data generated with the
back-translation method for model training. However, due to weak supervision,
the pseudo data inevitably contain noise and errors that accumulate and are
reinforced in the subsequent training process, degrading translation
performance. To address this issue, we introduce phrase-based Statistical
Machine Translation (SMT) models, which are robust to noisy data, as posterior
regularization to guide the training of unsupervised NMT models in the
the iterative back-translation process. Our method starts from SMT models built
with pre-trained language models and word-level translation tables inferred
from cross-lingual embeddings. Then SMT and NMT models are optimized jointly
and boost each other incrementally in a unified EM framework. In this way, (1)
the negative effect of errors in the iterative back-translation process can be
alleviated promptly, as SMT filters noise from its phrase tables; meanwhile,
(2) NMT can compensate for the lack of fluency inherent in SMT. Experiments
conducted on en-fr and en-de translation tasks show that our method outperforms
strong baselines and achieves new state-of-the-art unsupervised machine
translation performance.
Comment: To be presented at AAAI 2019; 9 pages, 4 figures
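The SMT-side filtering idea above can be illustrated with a toy sketch (hypothetical code, not the authors' implementation): aligned word links in noisy pseudo-parallel data are counted, and rare one-off links are dropped before entering the phrase table that regularizes NMT.

```python
from collections import Counter

def build_filtered_table(pseudo_pairs, min_count=2):
    """Count aligned word pairs in back-translated pseudo data and keep
    only those observed at least `min_count` times (a crude noise filter,
    standing in for real phrase-table estimation)."""
    counts = Counter()
    for src, tgt in pseudo_pairs:
        # naive positional "alignment" for illustration only
        for s, t in zip(src.split(), tgt.split()):
            counts[(s, t)] += 1
    return {pair: c for pair, c in counts.items() if c >= min_count}

pseudo = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("the cat", "le chat"),
    ("a cat", "un chat"),
    ("the hat", "le chapeau"),  # noisy one-off link: ("hat", "chapeau")
]
table = build_filtered_table(pseudo)
# ("the", "le") appears 4 times and ("cat", "chat") 3 times, so both
# survive; one-off links such as ("hat", "chapeau") are filtered out.
```

In the real method the surviving phrase pairs act as a posterior regularizer on NMT training rather than a hard replacement for it.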
Joint Training for Neural Machine Translation Models with Monolingual Data
Monolingual data have been demonstrated to be helpful in improving
translation quality of both statistical machine translation (SMT) systems and
neural machine translation (NMT) systems, especially in resource-poor or domain
adaptation tasks where parallel data are not rich enough. In this paper, we
propose a novel approach to better leverage monolingual data for neural
machine translation by jointly learning source-to-target and target-to-source
NMT models for a language pair with a joint EM optimization method. The
training process starts with two initial NMT models pre-trained on parallel
data for each direction, and these two models are iteratively updated by
incrementally decreasing translation losses on training data. In each iteration
step, both NMT models are first used to translate monolingual data from one
language to the other, forming pseudo-training data of the other NMT model.
Then two new NMT models are learned from the parallel data together with the
pseudo-training data. Both NMT models are expected to improve, and better
pseudo-training data can be generated in the next step. Experimental results on
Chinese-English and English-German translation tasks show that our approach can
simultaneously improve the translation quality of the source-to-target and
target-to-source models, significantly outperforming strong baseline systems
enhanced with monolingual data for model training, including back-translation.
Comment: Accepted by AAAI 201
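The joint iteration above can be sketched in a toy form (illustrative dictionaries stand in for real NMT models; all names are hypothetical): each direction back-translates monolingual text to produce pseudo-training pairs for the other direction.

```python
def word_translate(sentence, table):
    """Word-for-word translation; unknown words pass through unchanged."""
    return " ".join(table.get(w, w) for w in sentence.split())

def joint_round(mono_src, mono_tgt, s2t, t2s):
    """One iteration: each direction back-translates monolingual data
    to create pseudo-training pairs for the *other* direction."""
    # source->target model creates (pseudo_tgt, src) pairs that train t2s
    for src in mono_src:
        pseudo_tgt = word_translate(src, s2t)
        for t, s in zip(pseudo_tgt.split(), src.split()):
            t2s.setdefault(t, s)
    # target->source model creates (pseudo_src, tgt) pairs that train s2t
    for tgt in mono_tgt:
        pseudo_src = word_translate(tgt, t2s)
        for s, t in zip(pseudo_src.split(), tgt.split()):
            s2t.setdefault(s, t)
    return s2t, t2s

s2t = {"cat": "chat"}   # seeds, as if pre-trained on parallel data
t2s = {"chien": "dog"}
s2t, t2s = joint_round(["the cat"], ["le chien"], s2t, t2s)
# t2s gains "chat" -> "cat" (learned from s2t's pseudo data);
# s2t gains "dog" -> "chien" (learned from t2s's pseudo data).
```

Repeating `joint_round` mirrors the EM-style loop: as each direction improves, the pseudo data it generates for the other direction improves too.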
Regularizing Neural Machine Translation by Target-bidirectional Agreement
Although Neural Machine Translation (NMT) has achieved remarkable progress in
the past several years, most NMT systems still suffer from a fundamental
shortcoming shared with other sequence generation tasks: errors made early in
the generation process are fed back as inputs to the model and can be quickly
amplified,
harming subsequent sequence generation. To address this issue, we propose a
novel model regularization method for NMT training, which aims to improve the
agreement between translations generated by left-to-right (L2R) and
right-to-left (R2L) NMT decoders. This goal is achieved by introducing two
Kullback-Leibler divergence regularization terms into the NMT training
objective to reduce the mismatch between output probabilities of L2R and R2L
models. In addition, we also employ a joint training strategy to allow L2R and
R2L models to improve each other in an interactive update process. Experimental
results show that our proposed method significantly outperforms
state-of-the-art baselines on Chinese-English and English-German translation
tasks.
Comment: Accepted by AAAI 201
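The agreement term described above reduces, per output position, to two KL divergences between the decoders' token distributions. A minimal sketch (toy distributions, not the paper's code):

```python
import math

def kl(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over the same vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def agreement_penalty(p_l2r, p_r2l):
    """Symmetric regularizer: KL(L2R || R2L) + KL(R2L || L2R),
    penalizing mismatch between the two decoders' output distributions."""
    return kl(p_l2r, p_r2l) + kl(p_r2l, p_l2r)

p_l2r = [0.7, 0.2, 0.1]   # toy next-token distribution from the L2R decoder
p_r2l = [0.6, 0.3, 0.1]   # corresponding distribution from the R2L decoder
penalty = agreement_penalty(p_l2r, p_r2l)
# the penalty is zero only when the two decoders agree exactly
```

In training, such a penalty would be added (with a weight) to the usual cross-entropy objective of each decoder, so both are pushed toward translations the other also assigns high probability.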
Hypoxic Conditioned Medium from Rat Cerebral Cortical Cells Enhances the Proliferation and Differentiation of Neural Stem Cells Mainly through PI3-K/Akt Pathways
Purpose
To investigate the effects of hypoxic conditioned media from rat cerebral cortical cells on the proliferation and differentiation of neural stem cells (NSCs) in vitro, and to study the roles of PI3-K/Akt and JNK signal transduction pathways in these processes.
Methods
Cerebral cortical cells from neonatal Sprague–Dawley rats were cultured under hypoxic and normoxic conditions; the supernatants were collected and named ‘hypoxic conditioned medium’ (HCM) and ‘normoxic conditioned medium’ (NCM), respectively. We measured the protein levels (by ELISA) of VEGF and BDNF in the conditioned media and their mRNA levels (by RT-PCR) in the cerebral cortical cells. The proliferation (number and size of neurospheres) and differentiation (proportions of neurons and astrocytes among total cells) of NSCs were assessed. LY294002 and SP600125, inhibitors of PI3-K/Akt and JNK, respectively, were applied, and the phosphorylation levels of PI3-K, Akt and JNK were measured by western blot.
Results
The protein levels and mRNA expression of VEGF and BDNF in 4% HCM and 1% HCM were both higher than those in NCM. The efficiency and speed of NSC proliferation were greater in 4% HCM than in 1% HCM. The highest percentage of neurons and the lowest percentage of astrocytes were found in 4% HCM. However, the enhancement of NSC proliferation and neuronal differentiation by 4% HCM was inhibited by LY294002 and SP600125, with LY294002 having the stronger inhibitory effect. The increased phosphorylation levels of PI3-K, Akt and JNK in 4% HCM were blocked by LY294002 and SP600125.
Conclusions
4% HCM promoted NSC proliferation and differentiation into a high percentage of neurons; these processes may act mainly through PI3-K/Akt pathways.