25 research outputs found

    Memory-efficient NLLB-200: Language-specific Expert Pruning of a Massively Multilingual Machine Translation Model

    Full text link
    The recently released NLLB-200 is a set of multilingual Neural Machine Translation models that cover 202 languages. The largest model is based on a Mixture of Experts architecture and achieves state-of-the-art results across many language pairs. It contains 54.5B parameters and requires at least four 32GB GPUs just for inference. In this work, we propose a pruning method that enables the removal of up to 80% of experts without further finetuning and with a negligible loss in translation quality, which makes it feasible to run the model on a single 32GB GPU. Further analysis suggests that our pruning metrics can identify language-specific experts.
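
    The abstract does not spell out the pruning metric itself, so the following is only a minimal sketch of the general idea, assuming expert importance is estimated from how often the router selects each expert while decoding data for one language pair; the function and variable names (prune_experts, gate_counts) are hypothetical.

    import numpy as np

    def prune_experts(gate_counts, keep_fraction=0.2):
        """Rank experts by how often the router selected them for a given
        language pair and keep only the top keep_fraction of them.

        gate_counts: array of shape (num_experts,) with routing counts.
        Returns the indices of the experts to keep; the rest could be
        dropped from the checkpoint before loading it on a single GPU.
        """
        num_keep = max(1, int(len(gate_counts) * keep_fraction))
        order = np.argsort(gate_counts)[::-1]        # most-used experts first
        return np.sort(order[:num_keep])

    # Toy example: 128 experts in one MoE layer, with usage counts gathered
    # while translating a development set for a single language pair.
    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=3.0, size=128)
    kept = prune_experts(counts, keep_fraction=0.2)  # roughly 80% of experts removed
    print(f"keeping {len(kept)} of {len(counts)} experts")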

    Modèle de traduction statistique à fragments enrichi par la syntaxe (A syntax-enriched phrase-based statistical machine translation model)

    No full text
    Traditional Statistical Machine Translation models are not aware of linguistic structure: target lexical choices and word order are controlled only by surface statistics learned from the training corpus. Knowledge of linguistic structure can nevertheless be beneficial, since it provides generic information that compensates for data sparsity. The purpose of this work is to study the impact of syntactic information while preserving the general framework of Phrase-Based SMT, using the XIP dependency parser to produce the syntactic analyses. First, we study the integration of syntactic information through a reranking approach. We define features measuring the similarity between the dependency structures of the source and target sentences, as well as features assessing the linguistic coherence of the target sentences. The weight of each feature is learned with a structured perceptron algorithm, which also indicates its importance. The evaluation of several reranking models shows that these features often improve the quality of the translations produced by the baseline model according to manual evaluation, even where automatic metrics do not reflect the gain. Second, we propose models that improve the quality and diversity of the search graph produced by the decoder by filtering out unpromising hypotheses on the basis of the source syntactic structure, either by learning limits on phrase reordering or by decomposing the source sentence into sub-sentences to simplify the translation task. Initial evaluations of these models are promising.
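
    The reranking step described above can be sketched as a structured perceptron over n-best lists: if the hypothesis ranked first by the current weights is not the oracle-best one, the weights are moved towards the oracle's features. The toy features and data below are hypothetical; in the thesis the features would come from the XIP dependency analyses.

    import numpy as np

    def perceptron_rerank_train(nbest_lists, epochs=10, lr=1.0):
        """Structured perceptron for n-best reranking.

        nbest_lists: list of n-best lists; each hypothesis is a pair
        (feature_vector, quality_score). The quality score (e.g. a
        sentence-level metric) picks the oracle hypothesis during training.
        """
        dim = len(nbest_lists[0][0][0])
        w = np.zeros(dim)
        for _ in range(epochs):
            for hyps in nbest_lists:
                feats = np.array([f for f, _ in hyps])
                oracle = max(range(len(hyps)), key=lambda i: hyps[i][1])
                predicted = int(np.argmax(feats @ w))
                if predicted != oracle:              # standard perceptron update
                    w += lr * (feats[oracle] - feats[predicted])
        return w

    # Toy n-best list with three features per hypothesis:
    # [baseline model score, dependency similarity, target-side coherence]
    nbest = [[(np.array([1.2, 0.3, 0.5]), 0.40),
              (np.array([1.0, 0.9, 0.8]), 0.55),
              (np.array([0.8, 0.2, 0.4]), 0.30)]]
    w = perceptron_rerank_train(nbest)
    best = max(nbest[0], key=lambda h: float(h[0] @ w))
    print("reranking weights:", w)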

    SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages

    Full text link
    In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the "curse of multilinguality", these models often opt for scaling up the number of parameters, which makes their use in resource-constrained environments challenging. We introduce SMaLL-100, a distilled version of the M2M-100 (12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks (FLORES-101, Tatoeba, and TICO-19) and demonstrate that it outperforms previous massively multilingual models of comparable size (200-600M parameters) while improving inference latency and memory usage. Additionally, our model achieves results comparable to M2M-100 (1.2B) while being 3.6x smaller and 4.3x faster at inference. Code and pre-trained models: https://github.com/alirezamshi/small100 (accepted to EMNLP 2022).
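
    The "uniform sampling across all language pairs" used for distillation can be illustrated with a short sketch: every pair is drawn with equal probability regardless of corpus size, so low-resource pairs are seen as often as high-resource ones. The corpora and names below are hypothetical; the actual training recipe is in the linked repository.

    import random

    def uniform_pair_batches(parallel_data, batch_size=8, steps=3, seed=0):
        """Yield batches in which each language pair is sampled uniformly,
        regardless of how many sentence pairs it has."""
        rng = random.Random(seed)
        pairs = list(parallel_data)
        for _ in range(steps):
            batch = []
            for _ in range(batch_size):
                pair = rng.choice(pairs)             # uniform over pairs, not over sentences
                batch.append((pair, rng.choice(parallel_data[pair])))
            yield batch

    # Hypothetical corpora of very different sizes.
    corpora = {
        ("en", "fr"): [f"en-fr sentence {i}" for i in range(100_000)],
        ("en", "wo"): [f"en-wo sentence {i}" for i in range(200)],   # low-resource pair
    }
    for batch in uniform_pair_batches(corpora, batch_size=4, steps=1):
        for pair, sentence in batch:
            print(pair, sentence)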

    Long-Tail Theory under Gaussian Mixtures

    Full text link
    We suggest a simple Gaussian mixture model for data generation that complies with Feldman's long-tail theory (2020). We demonstrate that a linear classifier cannot decrease the generalization error below a certain level in the proposed model, whereas a nonlinear classifier with a memorization capacity can. This confirms that, for long-tailed distributions, rare training examples must be considered for optimal generalization to new data. Finally, we show that the performance gap between linear and nonlinear models can be lessened as the tail becomes shorter in the subpopulation frequency distribution, as confirmed by experiments on synthetic and real data (accepted to ECAI 2023).
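
    As a toy illustration of the claim (not the paper's construction), the sketch below draws a two-class Gaussian mixture in which each class has a frequent component and a rare component on the "wrong" side of any linear boundary, then compares a linear model with a 1-nearest-neighbour classifier that can memorize the rare examples; all parameters are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    def sample(n_common, n_rare):
        """Each class mixes a frequent Gaussian with a rare Gaussian placed
        where a linear boundary between the frequent blobs must fail."""
        X0 = np.vstack([rng.normal([-2.0, 0.0], 0.5, size=(n_common, 2)),
                        rng.normal([3.0, 3.0], 0.5, size=(n_rare, 2))])
        X1 = np.vstack([rng.normal([2.0, 0.0], 0.5, size=(n_common, 2)),
                        rng.normal([-3.0, 3.0], 0.5, size=(n_rare, 2))])
        X = np.vstack([X0, X1])
        y = np.array([0] * len(X0) + [1] * len(X1))
        return X, y

    X_train, y_train = sample(500, 10)    # long tail: rare components are scarce in training
    X_test, y_test = sample(500, 500)     # evaluation where the rare subpopulations matter

    linear = LogisticRegression().fit(X_train, y_train)
    memorizer = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    print("linear accuracy:", linear.score(X_test, y_test))
    print("1-NN accuracy:  ", memorizer.score(X_test, y_test))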