Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation
We present an easy and efficient method to extend existing sentence embedding
models to new languages, making it possible to create multilingual versions of
previously monolingual models. The training is based on the idea that a
translated sentence should be mapped to the same location in the vector space
as the original sentence. We use the original (monolingual) model to generate
sentence embeddings for the source language and then train a new system on
translated sentences to mimic the original model. Compared to other methods for
training multilingual sentence embeddings, this approach has several
advantages: it is easy to extend existing models to new languages with
relatively few samples, it is easier to ensure desired properties for the
vector space, and the hardware requirements for training are lower. We demonstrate the
effectiveness of our approach for 50+ languages from various language families.
Code to extend sentence embedding models to more than 400 languages is
publicly available.
Comment: Accepted at EMNLP 2020
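The distillation idea above — place the translation where the teacher placed the source sentence — can be sketched with a toy linear student. The teacher embeddings and translation features here are random stand-ins (in the actual method both sides are transformer encoders); only the objective matches the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: teacher embeddings M(s_i) for 100 source sentences
# (dim 8), and feature vectors for their translations t_i (dim 16).
n, d_feat, d_emb = 100, 16, 8
teacher_emb = rng.normal(size=(n, d_emb))   # fixed teacher targets M(s_i)
trans_feats = rng.normal(size=(n, d_feat))  # features of translated sentences

# Distillation objective:  min_W  sum_i || W^T t_i - M(s_i) ||^2
# i.e. the student must map each translated sentence to the same point
# the teacher assigned to the original sentence. For a linear student
# this is ordinary least squares, solvable in closed form.
W, *_ = np.linalg.lstsq(trans_feats, teacher_emb, rcond=None)
student_emb = trans_feats @ W

mse = np.mean((student_emb - teacher_emb) ** 2)
print(f"distillation MSE: {mse:.4f}")
```

With a real student encoder the same mean-squared-error loss is minimized by gradient descent over parallel (source, translation) pairs rather than in closed form.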
Development of an Automated Scoring Model Using SentenceTransformers for Discussion Forums in Online Learning Environments
Because public datasets are limited, research on automatic essay scoring in Indonesian has been held back, resulting in suboptimal accuracy. The main goal of an essay scoring system is to reduce assessment time, since grading is usually done manually with human judgment. This study uses a discussion forum in an online learning environment to generate an assessment by comparing student responses against the lecturer's rubric for automated essay scoring. A pre-trained SentenceTransformers model that constructs high-quality vector embeddings was proposed to capture the semantic similarity between the responses and the lecturer's rubric. The effectiveness of monolingual and multilingual models was compared. This research aims to determine each model's effectiveness and identify the most appropriate model for Automated Essay Scoring (AES) in paired-sentence Natural Language Processing tasks. The distiluse-base-multilingual-cased-v1 model, evaluated with the Pearson correlation method, obtained the highest performance: a correlation of 0.63 and a mean absolute error (MAE) of 0.70. This indicates that the overall prediction quality is improved compared to earlier regression-based work.
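The scoring-and-evaluation pipeline described above can be sketched in a few lines. The embeddings and human scores below are random placeholders (the paper obtains embeddings from a SentenceTransformers model such as distiluse-base-multilingual-cased-v1, and the rescaling to a 0-100 grade range is an assumption of this sketch); the evaluation metrics, Pearson correlation and MAE, are the ones the study reports:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder embeddings for 6 student responses and one lecturer rubric.
responses = rng.normal(size=(6, 8))
rubric = rng.normal(size=8)

def cosine(batch, vec):
    """Cosine similarity between each row of `batch` and `vec`."""
    return (batch @ vec) / (np.linalg.norm(batch, axis=-1) * np.linalg.norm(vec))

# Predicted score: semantic similarity of each response to the rubric,
# rescaled from [-1, 1] to an assumed 0-100 grading range.
pred = (cosine(responses, rubric) + 1) / 2 * 100

# Hypothetical human scores for the same responses.
human = np.array([80.0, 55.0, 90.0, 40.0, 70.0, 60.0])

# Evaluation as in the paper: Pearson correlation and mean absolute error.
pearson = np.corrcoef(pred, human)[0, 1]
mae = np.mean(np.abs(pred - human))
print(f"Pearson r = {pearson:.2f}, MAE = {mae:.2f}")
```

Swapping the random vectors for `model.encode(...)` outputs from an actual SentenceTransformers model turns this into the monolingual-versus-multilingual comparison the study performs.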