
    Multi-channel Encoder for Neural Machine Translation

    The attention-based encoder-decoder is an effective architecture for neural machine translation (NMT); it typically relies on recurrent neural networks (RNNs) to build the representations that are later accessed by the attentive reader during decoding. This encoder design yields a relatively uniform composition of the source sentence, despite the gating mechanism employed in the encoding RNN. On the other hand, we often want the decoder to take pieces of the source sentence at varying levels of composition suited to its own linguistic structure: for example, we may want to take an entity name in its raw form while taking an idiom as a fully composed unit. Motivated by this demand, we propose the Multi-channel Encoder (MCE), which enhances the encoder with components at different levels of composition. More specifically, in addition to the hidden state of the encoding RNN, MCE takes 1) the original word embedding for raw encoding with no composition, and 2) a particular design of external memory in the style of a Neural Turing Machine (NTM) for more complex composition, while all three encoding strategies are properly blended during decoding. An empirical study on Chinese-English translation shows that our model improves by 6.52 BLEU points over a strong open-source NMT system, DL4MT. On the WMT14 English-French task, our single shallow system achieves BLEU = 38.8, comparable with state-of-the-art deep models. Comment: Accepted by AAAI-201
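The abstract above describes blending three encoding channels (raw word embedding, RNN hidden state, NTM-style memory read) during decoding. The paper's exact gating formula is not given here; the following is a minimal NumPy sketch assuming a simple softmax gate over the three channels. The function name and gating scheme are illustrative, not the authors' implementation:

```python
import numpy as np

def blend_channels(word_emb, rnn_state, ntm_mem, gate_logits):
    """Blend three same-shaped encoding channels with softmax-normalized gates."""
    g = np.exp(gate_logits - np.max(gate_logits))  # stable softmax over 3 gates
    g = g / g.sum()
    return g[0] * word_emb + g[1] * rnn_state + g[2] * ntm_mem
```

With equal gate logits the blend reduces to the plain average of the three channels; in practice the gate logits would be predicted per decoding step from the decoder state.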

    Semi-Supervised Learning for Neural Machine Translation

    While end-to-end neural machine translation (NMT) has made remarkable progress recently, NMT systems rely only on parallel corpora for parameter estimation. Since parallel corpora are usually limited in quantity, quality, and coverage, especially for low-resource languages, it is appealing to exploit monolingual corpora to improve NMT. We propose a semi-supervised approach for training NMT models on the concatenation of labeled (parallel corpora) and unlabeled (monolingual corpora) data. The central idea is to reconstruct the monolingual corpora using an autoencoder, in which the source-to-target and target-to-source translation models serve as the encoder and decoder, respectively. Our approach can exploit monolingual corpora not only of the target language but also of the source language. Experiments on the Chinese-English dataset show that our approach achieves significant improvements over state-of-the-art SMT and NMT systems. Comment: Corrected a typ
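The training objective sketched in this abstract combines a supervised loss on parallel data with autoencoder reconstruction losses on monolingual data in both directions. As a minimal sketch (the function name, argument structure, and weighting term are assumptions, not the paper's notation):

```python
def semi_supervised_loss(parallel_nll, recon_tgt_nll, recon_src_nll, lam=1.0):
    """Supervised translation loss on parallel data, plus autoencoder
    reconstruction losses on target-side and source-side monolingual data.
    lam weights the unsupervised reconstruction terms against the
    supervised term."""
    return parallel_nll + lam * (recon_tgt_nll + recon_src_nll)
```

Here each reconstruction term would come from a round trip (e.g. target sentence -> source-to-target decoder after a target-to-source encoding pass), so both translation models receive gradient signal from monolingual text.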

    Towards Boosting Many-to-Many Multilingual Machine Translation with Large Language Models

    The training paradigm for machine translation has gradually shifted, from learning neural machine translation (NMT) models with extensive parallel corpora to instruction finetuning on multilingual large language models (LLMs) with high-quality translation pairs. In this paper, we focus on boosting many-to-many multilingual translation of LLMs with an emphasis on zero-shot translation directions. We demonstrate that prompt strategies adopted during finetuning are crucial to zero-shot translation and introduce a cross-lingual consistency regularization, XConST, to bridge the representation gap among different languages and improve zero-shot translation performance. XConST is not a new method, but a version of CrossConST (Gao et al., 2023a) adapted for translation instruction finetuning with LLMs. Experimental results on ALMA (Xu et al., 2023), Tower (Team, 2024), and LLaMA-2 (Touvron et al., 2023) show that our approach consistently improves translation performance. Our implementations are available at https://github.com/gpengzhi/CrossConST-LLM
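A cross-lingual consistency regularization of the kind described above typically adds a divergence penalty between the model's output distributions under different prompt conditions to the usual cross-entropy loss. The following NumPy sketch assumes a symmetric-KL penalty; the function names and the exact form of the regularizer are illustrative, not the XConST definition:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """KL(p || q) for two discrete probability distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def consistency_loss(ce_loss, probs_a, probs_b, alpha=1.0):
    """Translation cross-entropy plus a symmetric-KL consistency penalty
    between output distributions produced under two prompt variants."""
    reg = 0.5 * (kl_div(probs_a, probs_b) + kl_div(probs_b, probs_a))
    return ce_loss + alpha * reg
```

When the two distributions agree, the penalty vanishes and only the supervised loss remains; disagreement between prompt variants is penalized, which is the intuition behind bridging the representation gap across languages.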

    Modeling Coherence for Discourse Neural Machine Translation

    Discourse coherence plays an important role in the translation of a text. However, previously reported models mostly focus on improving performance over individual sentences while ignoring cross-sentence links and dependencies, which harms the coherence of the text. In this paper, we propose to use discourse context and a reward to refine translation quality from the discourse perspective. In particular, we first generate the translation of each individual sentence. Next, we deliberate over the preliminary translations and train the model to learn, via a reward teacher, a policy that produces discourse-coherent text. Experimental results on multiple discourse test sets indicate that our model significantly improves translation quality over a state-of-the-art baseline system by +1.23 BLEU. Moreover, our model generates more discourse-coherent text and obtains a +2.2 BLEU improvement when evaluated with discourse metrics. Comment: Accepted by AAAI201
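Training a policy against a reward teacher, as described above, is commonly done with a REINFORCE-style objective: the coherence reward scales the gradient on the sampled translation's log-probability. A minimal sketch under that assumption (the function name, baseline term, and reward source are hypothetical, not the paper's formulation):

```python
def coherence_pg_loss(token_log_probs, reward, baseline=0.0):
    """REINFORCE-style loss for a sampled translation: sequences scoring
    above the baseline on the coherence reward get their log-probability
    pushed up; minimizing this loss maximizes expected reward."""
    advantage = reward - baseline
    return -advantage * sum(token_log_probs)
```

Here the reward teacher would score the deliberated document-level output, and the baseline (e.g. the reward of the preliminary sentence-by-sentence translation) reduces gradient variance.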

    The Differentiation Balance of Bone Marrow Mesenchymal Stem Cells Is Crucial to Hematopoiesis.

    Bone marrow mesenchymal stem cells (BMSCs), an important component and regulator of the bone marrow microenvironment, give rise to hematopoietic-supporting stromal cells and form hematopoietic niches for hematopoietic stem cells (HSCs). However, how BMSC differentiation affects hematopoiesis is poorly understood. In this review, we focus on the role of BMSC differentiation in hematopoiesis. We discuss the role of BMSCs and their progeny in hematopoiesis, and we examine the mechanisms that cause differentiation bias of BMSCs under stress conditions, including aging, irradiation, and chemotherapy. Moreover, the differentiation balance of BMSCs is crucial to hematopoiesis: we highlight the negative effects of BMSC differentiation bias on hematopoietic recovery after bone marrow transplantation, and maintaining this balance is critical for recovery. This review summarises the current understanding of how BMSC differentiation affects hematopoiesis and its potential application in improving hematopoietic recovery after bone marrow transplantation.

    Inhibition of Bacterial Ammonia Oxidation by Organohydrazines in Soil Microcosms

    Hydroxylamine oxidation by hydroxylamine oxidoreductase (HAO) is a key energy-yielding step supporting the growth of ammonia-oxidizing bacteria (AOB). Organohydrazines have been shown to inactivate HAO from Nitrosomonas europaea, and may serve as selective inhibitors to differentiate bacterial from archaeal ammonia oxidation, given the absence of a bacterial HAO gene homolog in known ammonia-oxidizing archaea (AOA). In this study, the effects of three organohydrazines on the activity, abundance, and composition of AOB and AOA were evaluated in soil microcosms. The results indicate that phenylhydrazine and methylhydrazine at a concentration of 100 μmol g−1 dry soil completely suppressed soil nitrification activity. Denaturing gradient gel electrophoresis fingerprinting and sequencing analysis of the bacterial ammonia monooxygenase subunit A gene (amoA) clearly demonstrated that the change in nitrification activity closely parallels the growth of Nitrosomonas europaea-like AOB in soil microcosms. No significant correlation between AOA community structure and nitrification activity was observed across treatments during the incubation period, although incomplete inhibition of nitrification activity occurred in 2-hydroxyethylhydrazine-amended soil microcosms. These findings show that HAO-targeted organohydrazines can effectively inhibit bacterial nitrification in soil, while the mechanism by which organohydrazines affect AOA remains unclear.