A Survey of Learning-based Automated Program Repair
Automated program repair (APR) aims to fix software bugs automatically and
plays a crucial role in software development and maintenance. With the recent
advances in deep learning (DL), an increasing number of APR techniques have
been proposed to leverage neural networks to learn bug-fixing patterns from
massive open-source code repositories. Such learning-based techniques usually
treat APR as a neural machine translation (NMT) task, where buggy code snippets
(i.e., source language) are translated into fixed code snippets (i.e., target
language) automatically. Benefiting from the powerful capability of DL to learn
hidden relationships from previous bug-fixing datasets, learning-based APR
techniques have achieved remarkable performance. In this paper, we provide a
systematic survey to summarize the current state-of-the-art research in the
learning-based APR community. We illustrate the general workflow of
learning-based APR techniques and detail the crucial components, including
fault localization, patch generation, patch ranking, patch validation, and
patch correctness phases. We then discuss the widely-adopted datasets and
evaluation metrics and outline existing empirical studies. We discuss several
critical aspects of learning-based APR techniques, such as repair domains,
industrial deployment, and the open science issue. We highlight several
practical guidelines on applying DL techniques for future APR studies, such as
exploring explainable patch generation and utilizing code features. Overall,
our paper can help researchers gain a comprehensive understanding of the
achievements of the existing learning-based APR techniques and promote the
practical application of these techniques. Our artifacts are publicly available
at \url{https://github.com/QuanjunZhang/AwesomeLearningAPR}
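The NMT framing described above can be made concrete with a minimal sketch: buggy code acts as the source language and fixed code as the target, so each training example is a parallel pair of token sequences. The toy tokenizer and function names below are illustrative, not taken from any specific APR system.

```python
# Minimal sketch of treating APR as neural machine translation:
# buggy code = source language, fixed code = target language.
import re

def tokenize(code: str) -> list[str]:
    # Naive split into identifiers and single punctuation characters;
    # real systems typically use learned subword (e.g. BPE) vocabularies.
    return re.findall(r"\w+|[^\w\s]", code)

def make_translation_pair(buggy: str, fixed: str) -> tuple[list[str], list[str]]:
    # One parallel training example for a sequence-to-sequence model.
    return tokenize(buggy), tokenize(fixed)

src, tgt = make_translation_pair("if (x = 0) return;", "if (x == 0) return;")
# src: ['if', '(', 'x', '=', '0', ')', 'return', ';']
```

A real pipeline would mine millions of such pairs from bug-fixing commits and train a Transformer on them; the pair structure, however, is exactly this simple.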
TransRepair: Context-aware Program Repair for Compilation Errors
Automatically fixing compilation errors can greatly improve software development productivity by guiding novice or AI programmers as they write and debug code. Recently, learning-based program repair has gained extensive attention and has become the state of the art in practice, but it still leaves plenty of room for improvement. In this paper, we propose an end-to-end solution, TransRepair, which simultaneously locates the error lines in a C program and generates the correct substitutes. Unlike prior approaches, ours takes into account both the context of the erroneous code and the diagnostic compilation feedback. We devise a Transformer-based neural network to learn repair patterns from the erroneous code together with its context and the diagnostic feedback. To increase the effectiveness of TransRepair, we summarize 5 types and 74 fine-grained sub-types of compilation errors from two
real-world program datasets and the Internet. Then a program corruption
technique is developed to synthesize a large dataset with 1,821,275 erroneous C
programs. Through extensive experiments, we demonstrate that TransRepair outperforms the state of the art in both single-repair and full-repair accuracy. Further analysis sheds light on the strengths and weaknesses of contemporary solutions for future improvement.
Comment: 11 pages, accepted to ASE '2
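The program-corruption idea mentioned above can be sketched simply: take a compilable C snippet and inject a small syntactic error, yielding a (corrupted program, error line) training pair. The operator below is a hypothetical illustration, not one of TransRepair's actual 74 sub-types.

```python
# Hypothetical corruption operator: delete a trailing semicolon from a
# random statement line of a C program to synthesize a compilation error.
import random

def drop_semicolon(lines: list[str], rng: random.Random) -> tuple[list[str], int]:
    # Pick one line ending in ';', strip the semicolon, and return the
    # corrupted program together with the zero-based error line index.
    candidates = [i for i, line in enumerate(lines) if line.rstrip().endswith(";")]
    i = rng.choice(candidates)
    corrupted = list(lines)
    corrupted[i] = corrupted[i].rstrip()[:-1]
    return corrupted, i

program = ["int main() {", "    int x = 1;", "    return x;", "}"]
bad, err_line = drop_semicolon(program, random.Random(0))
```

Applying a catalogue of such operators to a large corpus of correct programs is what makes a dataset on the scale of 1,821,275 erroneous programs feasible without manual labeling.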
Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit
Code intelligence leverages machine learning techniques to extract knowledge
from extensive code corpora, with the aim of developing intelligent tools to
improve the quality and productivity of computer programming. Currently, there
is already a thriving research community focused on code intelligence, with efforts spanning software engineering, machine learning, data mining, natural language processing, and programming languages. In this paper, we
conduct a comprehensive literature review on deep learning for code
intelligence, from the aspects of code representation learning, deep learning
techniques, and application tasks. We also benchmark several state-of-the-art
neural models for code intelligence, and provide an open-source toolkit
tailored for the rapid prototyping of deep-learning-based code intelligence
models. In particular, we examine existing code intelligence models through the lens of code representation learning and provide a comprehensive overview of the present state of code intelligence.
Furthermore, we publicly release the source code and data resources to provide
the community with a ready-to-use benchmark, which can facilitate the
evaluation and comparison of existing and future code intelligence models
(https://xcodemind.github.io). Finally, we point out several challenging and promising directions for future research.
Too Few Bug Reports? Exploring Data Augmentation for Improved Changeset-based Bug Localization
Modern Deep Learning (DL) architectures based on transformers (e.g., BERT,
RoBERTa) are exhibiting performance improvements across a number of natural
language tasks. While such DL models have shown tremendous potential for use in
software engineering applications, they are often hampered by insufficient
training data. Particularly constrained are applications that require
project-specific data, such as bug localization, which aims at recommending
code to fix a newly submitted bug report. Deep learning models for bug
localization require a substantial training set of fixed bug reports, which are available only in limited quantity even in popular and actively developed software projects.
In this paper, we examine the effect of using synthetic training data on
transformer-based DL models that perform a more complex variant of bug
localization, which has the goal of retrieving bug-inducing changesets for each
bug report. To generate high-quality synthetic data, we propose novel data
augmentation operators that act on different constituent components of bug
reports. We also describe a data balancing strategy that aims to create a
corpus of augmented bug reports that better reflects the entire source code
base, because existing bug reports used as training data usually reference a
small part of the code base.
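A minimal sketch of the component-wise augmentation idea, assuming a bug report with separate title and description fields; the field names and the token-drop operator are illustrative placeholders, not the paper's actual operators.

```python
# Toy augmentation operator acting on one constituent component of a
# bug report, producing a synthetic variant for training.
import random

def drop_tokens(text: str, p: float, rng: random.Random) -> str:
    # Randomly delete roughly a fraction p of the tokens.
    tokens = text.split()
    kept = [t for t in tokens if rng.random() >= p]
    return " ".join(kept) if kept else tokens[0]

def augment_report(report: dict, rng: random.Random) -> dict:
    # Apply the operator to one component (here the description),
    # leaving the others untouched.
    return {"title": report["title"],
            "description": drop_tokens(report["description"], 0.2, rng)}

report = {"title": "NPE on startup",
          "description": "app crashes with a null pointer exception when the config file is missing"}
augmented = augment_report(report, random.Random(42))
```

Generating many such variants per report, then sampling them so the synthetic corpus covers under-referenced parts of the code base, captures the augment-then-balance strategy the abstract describes.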
Changeset-based Retrieval of Source Code Artifacts for Bug Localization
Modern software development is extremely collaborative and agile, with unprecedented speed and scale of activity. Popular trends like continuous delivery and continuous deployment aim at building, fixing, and releasing software with greater speed and frequency. Bug localization, which aims to automatically localize bug reports to relevant software artifacts, has the potential to improve software developer efficiency by reducing the time spent on debugging and examining code. To date, this problem has been primarily addressed by applying information retrieval techniques based on static code elements, which are intrinsically unable to reflect how software evolves over time. Furthermore, as prior approaches frequently rely on exact term matching to measure relatedness between a bug report and a software artifact, they are prone to be affected by the lexical gap that exists between natural and programming language.
This thesis explores using software changes (i.e., changesets), instead of static code elements, as the primary data unit for constructing an information retrieval model for bug localization. Changesets, which represent the differences between two consecutive versions of the source code, provide a natural representation of a software change and allow capturing both the semantics of the source code and the semantics of the code modification. To bridge the lexical gap between source code and natural language, this thesis investigates using topic modeling and deep learning architectures that enable creating semantically rich data representations, with the goal of identifying latent connections between bug reports and source code. To show the feasibility of the proposed approaches, this thesis also investigates practical aspects related to using a bug localization tool, such as retrieval delay and training data availability.
The results indicate that the proposed techniques effectively leverage historical data about bugs and their related source code components to improve retrieval accuracy, especially for bug reports that are expressed in natural language with little to no explicit code references. Further improvement in accuracy is observed when the size of the training dataset is increased through the data augmentation and data balancing strategies proposed in this thesis, although the magnitude of the improvement varies with the model architecture. In terms of retrieval delay, the results indicate that the proposed deep learning architecture significantly outperforms prior work and scales well with respect to search space size.
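The retrieval framing above can be illustrated with a toy bag-of-words model: index each changeset's text (diff plus commit message) and rank changesets by cosine similarity against an incoming bug report. The thesis uses topic models and deep encoders to bridge the lexical gap; this stdlib-only sketch only shows the ranking skeleton they plug into, and all names are illustrative.

```python
# Toy changeset-based retrieval: score changesets against a bug report
# with term-frequency cosine similarity and return them ranked.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_changesets(bug_report: str, changesets: dict[str, str]) -> list[str]:
    # Bag-of-words query vs. each changeset's diff + message text.
    query = Counter(bug_report.lower().split())
    scores = {cid: cosine(query, Counter(text.lower().split()))
              for cid, text in changesets.items()}
    return sorted(scores, key=scores.get, reverse=True)

changesets = {
    "c1": "fix null pointer in config loader",
    "c2": "update readme badges",
}
ranking = rank_changesets("crash null pointer when loading config", changesets)
# 'c1' shares terms with the report and ranks first
```

Replacing the `Counter` vectors with topic-model or neural embeddings changes only the scoring function; the index-then-rank structure of the retrieval model stays the same.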
Learning representations for effective and explainable software bug detection and fixing
Software plays an integral role in modern life; hence software bugs, which undermine software quality and reliability, have substantial societal and economic implications. The advent of machine learning and deep learning in software engineering has led to major advances in bug detection and fixing approaches, yet they still fall short of the desired precision and recall. This shortfall arises from the absence of a 'bridge', known as learning code representations, that can transform information from source code into a suitable representation for effective processing via machine and deep learning.
This dissertation builds such a bridge. Specifically, it presents solutions for effectively learning code representations using four distinct methods (context-based, testing results-based, tree-based, and graph-based), thus improving bug detection and fixing approaches, as well as providing developers insight into the foundational reasoning. The experimental results demonstrate that using learned code representations can significantly enhance explainable bug detection and fixing, showcasing the practicability and meaningfulness of the approaches formulated in this dissertation toward improving software quality and reliability.
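One of the four representation families mentioned above is tree-based. A generic illustration using Python's own `ast` module: serialize a function's syntax tree into a sequence of node-type tokens that a learning model could consume. This is a minimal sketch of the general idea, not the dissertation's actual encoding.

```python
# Tree-based code representation sketch: turn source code into a
# sequence of AST node-type names via breadth-first traversal.
import ast

def tree_tokens(source: str) -> list[str]:
    # ast.walk yields nodes in breadth-first order, root first.
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

tokens = tree_tokens("def add(a, b):\n    return a + b\n")
# e.g. starts with ['Module', 'FunctionDef', ...]
```

Unlike a flat token stream, this sequence preserves structural cues (a `Return` inside a `FunctionDef`, a `BinOp` under the return), which is precisely the information tree-based representations aim to expose to the learner.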