
    Program Repair with Minimal Edits Using CodeT5

    Programmers often struggle to identify and fix bugs in their programs. In recent years, many language models (LMs) have been proposed to fix erroneous programs and support error recovery. However, these LMs tend to generate solutions that differ substantially from the original input programs, which can make the repairs hard for users to comprehend. In this paper, we propose an approach that suggests a correct program with minimal repair edits using CodeT5. We fine-tune a pre-trained CodeT5 on pairs of wrong and correct programs and evaluate its performance against several baseline models. The experimental results show that the fine-tuned CodeT5 achieves a pass@100 of 91.95% and an average edit distance of 6.84 to the most similar correct program, indicating that at least one correct program can be suggested by generating 100 candidate programs. We demonstrate the effectiveness of LMs in suggesting program repairs with minimal edits for solving introductory programming problems.
    Comment: 7 pages, 6 figures, accepted to iCAST 202
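
    The two metrics in the abstract can be sketched in a few lines. Below is a minimal illustration (not the paper's code): the standard Levenshtein edit distance between two program texts, and the unbiased pass@k estimator commonly used for code-generation evaluation, where `n` candidates are sampled and `c` of them pass the tests.

    ```python
    from math import comb

    def edit_distance(a: str, b: str) -> int:
        # Classic Levenshtein dynamic program, row by row.
        m, n = len(a), len(b)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1,          # deletion
                             cur[j - 1] + 1,       # insertion
                             prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
            prev = cur
        return prev[n]

    def pass_at_k(n: int, c: int, k: int) -> float:
        # Unbiased estimator: 1 - C(n-c, k) / C(n, k).
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)
    ```

    With 100 candidates per problem, pass@100 is simply the fraction of problems for which at least one candidate is correct, and the reported edit distance is measured against the closest correct candidate.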

    Multi-Label Code Error Classification Using CodeT5 and ML-KNN

    Programming is an essential skill in computer science and in a wide range of engineering-related disciplines. However, errors in code, often referred to as “bugs”, can be challenging to identify and rectify, both for students who are learning to program and for experienced professionals. These errors can lead to unexpected program behavior. Understanding, finding, and effectively dealing with errors is an integral part of learning to program as well as of software development. To classify such errors, we propose a multi-label error classification of source code that applies the ML-KNN classifier to CodeT5 embeddings. In addition, several deep neural network (DNN) models, including GRU, LSTM, BiLSTM, and BiLSTM-A (with an attention mechanism), are employed as baselines. We trained all models on a large-scale source-code dataset with the original error labels as well as on modified datasets with summarized error labels. The average classification accuracy of the proposed model is 95.91% and 84.77% for the original and summarized error-labeled datasets, respectively; the exact match accuracy is 22.57% and 27.22%, respectively. The comprehensive experimental results show that the proposed approach is promising for multi-label error classification compared with the baseline models. Moreover, the findings and data-driven analytical results hold significant promise for error classification, programming education, and related research.
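
    To make the pipeline concrete, here is a minimal sketch of multi-label prediction over fixed-size code embeddings. This is an assumption-laden simplification: full ML-KNN additionally estimates label priors and posteriors via maximum a posteriori reasoning, whereas this sketch uses a plain k-nearest-neighbour vote per label, with toy 2-D vectors standing in for CodeT5 embeddings.

    ```python
    import numpy as np

    def knn_multilabel_predict(X_train, Y_train, x, k=3, threshold=0.5):
        # Euclidean distance from the query embedding to each training embedding.
        d = np.linalg.norm(X_train - x, axis=1)
        # Indices of the k nearest neighbours.
        nn = np.argsort(d)[:k]
        # Per-label vote: fraction of neighbours that carry each label.
        votes = Y_train[nn].mean(axis=0)
        # A label is predicted when enough neighbours agree.
        return (votes >= threshold).astype(int)

    # Toy stand-ins for CodeT5 embeddings (rows) and binary label vectors,
    # e.g. columns could mean [SyntaxError, NameError].
    X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
    Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
    ```

    In the paper's setting, each source file would be embedded once with CodeT5 and the binary label matrix would cover the full (or summarized) error-label vocabulary.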