199 research outputs found

    Memory-efficient NLLB-200: Language-specific Expert Pruning of a Massively Multilingual Machine Translation Model

    The recently released NLLB-200 is a set of multilingual Neural Machine Translation models that cover 202 languages. The largest model is based on a Mixture-of-Experts architecture and achieves state-of-the-art results across many language pairs. It contains 54.5B parameters and requires at least four 32GB GPUs just for inference. In this work, we propose a pruning method that enables the removal of up to 80% of experts without further finetuning and with a negligible loss in translation quality, which makes it feasible to run the model on a single 32GB GPU. Further analysis suggests that our pruning metrics can identify language-specific experts.
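The pruning idea described above can be sketched as keeping only the most useful experts per Mixture-of-Experts layer. A minimal illustration, assuming per-expert activation counts gathered while decoding a given language pair (the utilization-based criterion and the function name are hypothetical, not the paper's exact metric):

```python
import numpy as np

def prune_experts(expert_counts, keep_fraction=0.2):
    """Return indices of the experts to keep in one MoE layer,
    ranked by how often the router activated them (illustrative metric)."""
    n = len(expert_counts)
    k = max(1, int(round(n * keep_fraction)))
    # indices of the k most frequently activated experts
    keep = np.argsort(expert_counts)[::-1][:k]
    return sorted(keep.tolist())

# toy example: 8 experts, activation counts for one language pair
counts = np.array([120, 3, 980, 45, 2, 610, 15, 300])
print(prune_experts(counts, keep_fraction=0.25))  # → [2, 5]
```

Dropping the pruned experts' weights is what shrinks the memory footprint: with 80% of experts removed, only the retained experts' parameters need to be loaded at inference time.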

    Video Tutorials as Academic Writing and Research Support for Students of International Business

    Many studies have made claims for the positive effects of multimedia in education; however, there is a lack of systematic and comparable research, especially when it comes to video tutorials. This study evaluates the use and benefits of short screencast video tutorials, produced with Camtasia and published on YouTube, in preparing students for research-based writing assignments. The study employs a multi-method research design, comprising an analysis of video-tutorial viewership data from YouTube and a student questionnaire on the perceived benefits of these video tutorials. The data on how the tutorials are used, as well as the questionnaire responses, enable us to highlight which aspects of these tutorials positively affect the learning process. Findings indicate that the use of such tutorials depends more on the type of information included (e.g., theory, instructions, or examples) than on their length (within the range of 3-6 min). Additionally, novice, introductory-level students appear to have received greater benefit from the tutorials than students with some previous academic writing experience.

    Modèle de traduction statistique à fragments enrichi par la syntaxe (Syntax-enriched phrase-based statistical machine translation model)

    No full text
    Traditional Statistical Machine Translation models are not aware of linguistic structure: target lexical choices and word order are controlled only by surface-based statistics learned from the training corpus. However, knowledge of linguistic structure can be beneficial, since it provides generic information that compensates for data sparsity. The purpose of our work is to study the impact of syntactic information while preserving the general framework of Phrase-Based SMT, using a particular dependency parser, XIP, whose performance is well suited to our needs. First, we study the integration of syntactic information using a reranking approach. We define features measuring the similarity between the dependency structures of source and target sentences, as well as features of linguistic coherence of the target sentences. The importance of each feature is assessed by learning its weight through a Structured Perceptron algorithm. The evaluation of several reranking models shows that these features often improve the quality of translations produced by the basic model, in terms of manual evaluations as opposed to automatic measures. Then, we propose different models to increase the quality and diversity of the search graph produced by the decoder, by filtering out uninteresting hypotheses based on the source syntactic structure. This is done either by learning limits on phrase reordering, or by decomposing the source sentence into sub-sentences in order to simplify the translation task. The initial evaluations of these models look promising.
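The structured-perceptron weight learning mentioned in the abstract can be sketched in a few lines. A minimal illustration, assuming an n-best reranking setting with two hypothetical feature names ("dep_match" for dependency-structure similarity, "coherence" for target-side coherence); this is a generic perceptron update, not the thesis's exact training procedure:

```python
def score(weights, feats):
    """Linear score of a candidate translation under the current weights."""
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_update(weights, feats_oracle, feats_best, lr=1.0):
    """One structured-perceptron step: shift weights toward the features of
    the oracle (best-quality) candidate and away from the model's current
    top-scoring candidate."""
    for f, v in feats_oracle.items():
        weights[f] = weights.get(f, 0.0) + lr * v
    for f, v in feats_best.items():
        weights[f] = weights.get(f, 0.0) - lr * v
    return weights

# toy n-best pair: feature vectors for the oracle and the model's choice
oracle = {"dep_match": 0.9, "coherence": 0.8}
model_best = {"dep_match": 0.2, "coherence": 0.5}
w = perceptron_update({}, oracle, model_best)
print(score(w, oracle) > score(w, model_best))  # → True
```

After the update, candidates whose dependency structure better matches the source outrank the model's previous choice, which is exactly what a reranker needs.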

    Bolshev's method of confidence limit construction

    Get PDF
    Confidence intervals and regions for the parameters of a distribution are constructed, following the method due to L. N. Bolshev. This construction method is illustrated with the Poisson, exponential, Bernoulli, geometric, normal, and other parametric distributions.
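As a concrete instance of the kind of construction discussed, exact two-sided confidence limits for a Poisson mean can be obtained by inverting the CDF. A minimal sketch using bisection (this is a generic exact-interval construction for the Poisson case, not Bolshev's specific method):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by summing the pmf."""
    term, total = math.exp(-lam), math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def poisson_ci(x, alpha=0.05):
    """Exact two-sided (1 - alpha) limits for the Poisson mean given an
    observed count x, found by bisection on the monotone CDF."""
    def solve(f, lo, hi):
        # f is True below the boundary and False above it
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(
        lambda lam: poisson_cdf(x - 1, lam) > 1 - alpha / 2, 0.0, x + 50.0)
    upper = solve(
        lambda lam: poisson_cdf(x, lam) > alpha / 2, 0.0, x + 50.0)
    return lower, upper

print(poisson_ci(10))  # ≈ (4.795, 18.390)
```

The lower limit solves P(X ≥ x; λ) = α/2 and the upper limit solves P(X ≤ x; λ) = α/2, which for x = 10 reproduces the classical exact interval (4.795, 18.390).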