Where were the repair ingredients for Defects4J bugs?
A significant body of automated program repair research has built approaches under the redundancy assumption. Patches are then heuristically generated by leveraging repair ingredients (change actions and donor code) that are found in code bases (either the buggy program itself or big code). For example, common change actions (i.e., fix patterns) are frequently mined offline and serve as an important ingredient for many patch generation engines. Although the repetitiveness of code changes has been studied in general, the literature provides little insight into the relationship between the performance of a repair system and the source code base where the change actions were mined. Similarly, donor code is another important repair ingredient, used to concretize patches guided by abstract patterns. Yet, little attention has been paid to where such ingredients can actually be found. Through a large-scale empirical study on the execution results of 24 repair systems evaluated on real-world bugs from Defects4J, we provide a comprehensive view of the distribution of repair ingredients that are relevant for these bugs. In particular, we show that (1) half of the bugs cannot be fixed simply because the relevant repair ingredient is not available in the search space of donor code; (2) bugs that are correctly fixed by literature tools are mostly addressed with shallow change actions; (3) programs with little history of changes can benefit from mining change actions in other programs; (4) parts of the donor code to repair a given bug can be found separately at different search locations; (5) bug-triggering test cases are a rich source for donor code search.
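To make the two ingredient kinds concrete, the following is a minimal, purely illustrative Python sketch of pattern-guided patch generation: an abstract fix pattern (change action) is concretized with donor expressions harvested from the buggy project's own sources. The pattern encoding, file layout, and function names are hypothetical and are not taken from any of the studied repair systems.

```python
# Illustrative sketch only: a toy "wrap in null check" fix pattern is
# concretized with donor expressions crudely harvested from the buggy
# project's own Java sources. Real repair systems use far richer pattern
# representations and donor-code search strategies.
import re
from pathlib import Path

NULL_CHECK_PATTERN = "if ({expr} != null) {{ {stmt} }}"  # abstract change action

def collect_donor_expressions(project_dir: str) -> set[str]:
    """Crudely harvest candidate donor expressions (identifiers and simple
    dotted accesses) from the buggy project's Java files."""
    donors: set[str] = set()
    for src in Path(project_dir).rglob("*.java"):
        text = src.read_text(errors="ignore")
        donors.update(re.findall(r"\b[a-z]\w*(?:\.\w+)*\b", text))
    return donors

def concretize(pattern: str, expr: str, stmt: str) -> str:
    """Instantiate the abstract fix pattern with donor code."""
    return pattern.format(expr=expr, stmt=stmt)

if __name__ == "__main__":
    donors = collect_donor_expressions("path/to/buggy/project")  # placeholder path
    for expr in sorted(donors)[:5]:
        print(concretize(NULL_CHECK_PATTERN, expr, "obj.process();"))
```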
Evaluating Representation Learning of Code Changes for Predicting Patch Correctness in Program Repair
A large body of the automated program repair literature develops
approaches where patches are generated to be validated against an oracle (e.g.,
a test suite). Because such an oracle can be imperfect, the generated patches,
although validated by the oracle, may actually be incorrect. While the state of
the art explores research directions that require dynamic information or rely on
manually crafted heuristics, we study the benefit of learning code
representations to capture deep features that may encode the properties of patch
correctness. Our work mainly investigates different representation learning
approaches for code changes to derive embeddings that are amenable to
similarity computations. We report on findings based on embeddings produced by
pre-trained and re-trained neural networks. Experimental results demonstrate
the potential of embeddings to empower learning algorithms in reasoning about
patch correctness: a machine learning predictor combining BERT transformer-based
embeddings with logistic regression yielded an AUC value of about
0.8 in predicting patch correctness on a deduplicated dataset of 1000 labeled
patches. Our study shows that learned representations can lead to reasonable
performance when compared against the state-of-the-art PATCH-SIM, which
relies on dynamic information. These representations may further be
complementary to features that were carefully (manually) engineered in the
literature.
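As a rough illustration of the kind of pipeline described above (not the authors' exact setup), the sketch below embeds the buggy and patched code fragments with a generic pre-trained BERT model from the transformers library, concatenates the two embeddings as patch features, and trains a logistic regression classifier whose AUC is measured on a held-out split. The model name, feature construction, and dataset format are assumptions made for illustration.

```python
# Minimal sketch, assuming a generic pre-trained BERT and a toy dataset of
# (buggy_code, patched_code, label) triples with label 1 = correct patch.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(code: str) -> np.ndarray:
    """Return the [CLS] embedding of a code fragment."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        output = model(**inputs)
    return output.last_hidden_state[:, 0, :].squeeze(0).numpy()

def patch_features(buggy: str, patched: str) -> np.ndarray:
    """Concatenate embeddings of the buggy and patched code as patch features."""
    return np.concatenate([embed(buggy), embed(patched)])

def evaluate(patches) -> float:
    """Train logistic regression on patch features and report held-out AUC."""
    X = np.stack([patch_features(b, p) for b, p, _ in patches])
    y = np.array([label for _, _, label in patches])
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```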