Learning Transfers over Several Programming Languages
Large language models (LLMs) have recently become remarkably good at
improving developer productivity for high-resource programming languages. These
models use two kinds of data: large amounts of unlabeled code samples for
pretraining and relatively smaller amounts of labeled code samples for
fine-tuning or in-context learning. Unfortunately, many programming languages
are low-resource, lacking labeled samples for most tasks and often even lacking
unlabeled samples. Therefore, users of low-resource languages (e.g., legacy or
new languages) miss out on the benefits of LLMs. Cross-lingual transfer
learning uses data from a source language to improve model performance on a
target language. It has been well-studied for natural languages, but has
received little attention for programming languages. This paper reports
extensive experiments on four tasks using a transformer-based LLM and 11 to 41
programming languages to explore the following questions. First, how well does
cross-lingual transfer work for a given task across different language pairs?
Second, given a task and a target language, how should one choose a source language?
Third, which characteristics of a language pair are predictive of transfer
performance? Fourth, how does that answer depend on the given task?

Comment: 16 pages, 5 figures, 5 tables
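To make the transfer setup concrete, here is a minimal sketch (not from the paper) of cross-lingual transfer as the abstract describes it: fine-tune a pretrained code model on labeled samples from a high-resource source language, then evaluate it on a low-resource target language. The checkpoint name, toy examples, task framing as binary classification, and hyperparameters are all illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of cross-lingual transfer for a code classification task.
# Assumptions (not from the paper): a HuggingFace encoder checkpoint,
# toy labeled examples, and default hyperparameters.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "microsoft/codebert-base"  # assumed pretrained code model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Labeled samples in a high-resource *source* language (here: Python), plus a
# small evaluation set in a low-resource *target* language (here: COBOL-like).
source = Dataset.from_dict({
    "code": ["def add(a, b): return a + b", "def add(a, b): return a - b"],
    "label": [1, 0],
})
target = Dataset.from_dict({
    "code": ["ADD A TO B GIVING C.", "SUBTRACT B FROM A GIVING C."],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["code"], truncation=True,
                     padding="max_length", max_length=64)

source = source.map(tokenize, batched=True)
target = target.map(tokenize, batched=True)

# Fine-tune on the source language only, then evaluate on the target language:
# any improvement over an untuned baseline reflects cross-lingual transfer.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xfer", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to=[]),
    train_dataset=source,
    eval_dataset=target,
)
trainer.train()
print(trainer.evaluate())
```

In this framing, choosing the source language amounts to choosing which high-resource dataset to fine-tune on before evaluating on the target, which is the decision the paper's second and third questions address.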