Learning to Fix Build Errors with Graph2Diff Neural Networks
Professional software developers spend a significant amount of
time fixing builds, but this has received little attention as a problem in automatic program repair. We present a new deep learning
architecture, called Graph2Diff, for automatically localizing and
fixing build errors. We represent source code, build configuration
files, and compiler diagnostic messages as a graph, and then use a
Graph Neural Network model to predict a diff. A diff specifies how
to modify the code’s abstract syntax tree, represented in the neural
network as a sequence of tokens and of pointers to code locations.
Our network is an instance of a more general abstraction which we
call Graph2Tocopo, which is potentially useful in any development
tool for predicting source code changes. We evaluate the model on
a dataset of over 500k real build errors and their resolutions from
professional developers. Compared to the approach of DeepDelta
[23], our approach tackles the harder task of predicting a more
precise diff but still achieves over double the accuracy.
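The abstract describes predicting a diff as a sequence of tokens and of pointers to code locations. The following sketch is purely illustrative (the class names and the toy diff are my own, not the paper's implementation): it shows how a "Tocopo"-style output sequence can mix vocabulary tokens, copies of input tokens, and pointers to AST nodes, so a single decoded sequence can both name new code and reference existing code positions.

```python
# Illustrative sketch of a token/copy/pointer ("Tocopo") output sequence.
# All names here are hypothetical, not from the Graph2Diff paper.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Token:          # emit a token from the output vocabulary
    text: str

@dataclass(frozen=True)
class Copy:           # copy a token from the input sequence
    src_index: int

@dataclass(frozen=True)
class Pointer:        # point at an AST node, e.g. an edit location
    node_index: int

TocopoStep = Union[Token, Copy, Pointer]

def render(diff: list[TocopoStep], input_tokens: list[str]) -> str:
    """Flatten a Tocopo sequence into readable text for inspection."""
    parts = []
    for step in diff:
        if isinstance(step, Token):
            parts.append(step.text)
        elif isinstance(step, Copy):
            parts.append(input_tokens[step.src_index])
        else:
            parts.append(f"<node:{step.node_index}>")
    return " ".join(parts)

# A toy diff: at AST node 7, insert a call to an identifier copied
# from the input tokens.
input_tokens = ["public", "void", "close", "(", ")"]
diff = [Pointer(7), Token("INSERT"), Copy(2), Token("("), Token(")"), Token(";")]
print(render(diff, input_tokens))  # -> "<node:7> INSERT close ( ) ;"
```

The copy and pointer steps are what let the model handle project-specific identifiers that never appear in a fixed output vocabulary.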
Deep Learning Recommendations for the ACL2 Interactive Theorem Prover
Due to the difficulty of obtaining formal proofs, there is increasing interest in partially or completely automating proof search in interactive theorem provers. Although ACL2 is a theorem prover with an active community and a corpus of 170,000+ theorems, no deep learning system currently exists to help automate theorem proving in it. We have developed a machine learning system that generates recommendations to automatically complete proofs. We show that our system benefits from the copy mechanism introduced in the context of program repair. We make our system directly accessible from within ACL2 and use this interface to evaluate it in a realistic theorem proving environment.
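The copy mechanism the abstract refers to can be sketched as follows. This is a generic pointer/copy mixture in the style used for program repair, with made-up numbers, not the ACL2 system itself: the decoder blends a distribution over its output vocabulary with the attention distribution over input tokens, so it can emit rare lemma names verbatim from the goal.

```python
# Minimal sketch of a copy mechanism: P(w) = p_gen * P_vocab(w)
# + (1 - p_gen) * sum of attention mass on input positions holding w.
# All tokens and probabilities below are hypothetical.

from collections import defaultdict

def copy_mixture(p_vocab, p_attn, input_tokens, p_gen):
    """Mix a vocabulary distribution with a copy distribution."""
    out = defaultdict(float)
    for w, p in p_vocab.items():
        out[w] += p_gen * p
    for pos, tok in enumerate(input_tokens):
        out[tok] += (1.0 - p_gen) * p_attn[pos]
    return dict(out)

# The theorem statement mentions a lemma name that is absent from the
# output vocabulary; copying lets the model still produce it.
input_tokens = ["(", "defthm", "assoc-of-append", ")"]
p_vocab = {"use": 0.6, ":induct": 0.4}   # generation distribution
p_attn  = [0.05, 0.10, 0.80, 0.05]      # attention over input positions
probs = copy_mixture(p_vocab, p_attn, input_tokens, p_gen=0.3)
best = max(probs, key=probs.get)
print(best)  # -> "assoc-of-append"
```

Because the final distribution is a convex combination of two normalized distributions, it remains normalized, and out-of-vocabulary input tokens get nonzero probability.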
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained
on sequences of 16k tokens and show improvements on inputs with up to 100k
tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support
infilling based on surrounding content. Code Llama reaches state-of-the-art
performance among open models on several code benchmarks, with scores of up to
53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python
7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform
every other publicly available model on MultiPL-E. We release Code Llama under
a permissive license that allows for both research and commercial use.
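The infilling capability mentioned above follows the fill-in-the-middle idea: a document is rearranged into prefix/suffix/middle order so the model learns to generate the middle conditioned on both sides. The sketch below uses placeholder sentinel strings, not Code Llama's actual special tokens, and is an assumption about the general technique rather than a description of the released models' exact format.

```python
# Sketch of a fill-in-the-middle (FIM) training transform.
# The <PRE>/<SUF>/<MID> sentinels here are placeholders, not the
# real special tokens used by Code Llama.

def fim_transform(text: str, start: int, end: int) -> str:
    """Rearrange text into prefix-suffix-middle order with sentinels."""
    prefix, middle, suffix = text[:start], text[start:end], text[end:]
    return f"<PRE>{prefix}<SUF>{suffix}<MID>{middle}"

src = "def add(a, b):\n    return a + b\n"
# Mask the expression 'a + b' and ask the model to fill it in.
i = src.index("a + b")
print(fim_transform(src, i, i + len("a + b")))
```

At inference time, a model trained this way is prompted with the prefix and suffix and asked to complete the middle, which is how editor-style completion on surrounding context works.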
Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit
Code intelligence leverages machine learning techniques to extract knowledge
from extensive code corpora, with the aim of developing intelligent tools to
improve the quality and productivity of computer programming. Currently, there
is already a thriving research community focusing on code intelligence, with
efforts spanning software engineering, machine learning, data mining,
natural language processing, and programming languages. In this paper, we
conduct a comprehensive literature review on deep learning for code
intelligence, from the aspects of code representation learning, deep learning
techniques, and application tasks. We also benchmark several state-of-the-art
neural models for code intelligence, and provide an open-source toolkit
tailored for the rapid prototyping of deep-learning-based code intelligence
models. In particular, we examine the existing code intelligence models on
the basis of code representation learning, and provide a comprehensive overview
to enhance comprehension of the present state of code intelligence.
Furthermore, we publicly release the source code and data resources to provide
the community with a ready-to-use benchmark, which can facilitate the
evaluation and comparison of existing and future code intelligence models
(https://xcodemind.github.io). Finally, we point out several challenging
and promising directions for future research.