Context2Name: A Deep Learning-Based Approach to Infer Natural Variable Names from Usage Contexts
Most of the JavaScript code deployed in the wild has been minified, a process
in which identifier names are replaced with short, arbitrary and meaningless
names. Minified code occupies less space, but also makes the code extremely
difficult to manually inspect and understand. This paper presents Context2Name,
a deep learning-based technique that partially reverses the effect of
minification by predicting natural identifier names for minified names. The
core idea is to predict from the usage context of a variable a name that
captures the meaning of the variable. The approach combines a lightweight,
token-based static analysis with an auto-encoder neural network that summarizes
usage contexts and a recurrent neural network that predicts natural names for a
given usage context. We evaluate Context2Name with a large corpus of real-world
JavaScript code and show that it successfully predicts 47.5% of all minified
identifiers while taking only 2.9 milliseconds on average to predict a name. A
comparison with the state-of-the-art tools JSNice and JSNaughty shows that our
approach performs comparably in terms of accuracy while improving in terms of
efficiency. Moreover, Context2Name complements the state-of-the-art by
predicting 5.3% additional identifiers that are missed by both existing tools.
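As a rough illustration of the pipeline described in this abstract, the sketch below pairs an auto-encoder that compresses a usage-context vector with a recurrent decoder that emits name tokens. It is not the authors' implementation: the bag-of-tokens context encoding, the vocabulary sizes, the dimensions, and the fixed name length are all hypothetical placeholders.

# A minimal PyTorch sketch of the idea, not the authors' implementation.
# Assumed/hypothetical: bag-of-tokens context vectors, vocabulary sizes,
# embedding dimensions, and a fixed-length name decoder.
import torch
import torch.nn as nn

CONTEXT_VOCAB = 500   # hypothetical usage-context token vocabulary size
NAME_VOCAB = 200      # hypothetical output name-token vocabulary size
EMBED_DIM = 64
HIDDEN_DIM = 128
MAX_NAME_LEN = 3      # names predicted as short token sequences

class ContextAutoEncoder(nn.Module):
    # Compresses a sparse usage-context vector into a dense summary.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(CONTEXT_VOCAB, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, EMBED_DIM))
        self.decoder = nn.Sequential(
            nn.Linear(EMBED_DIM, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, CONTEXT_VOCAB))

    def forward(self, x):
        summary = self.encoder(x)
        return summary, self.decoder(summary)

class NamePredictor(nn.Module):
    # Recurrent decoder that emits name-token logits from a context summary.
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, NAME_VOCAB)

    def forward(self, summary):
        # Feed the summary at each step; a real decoder would also feed back
        # previously predicted tokens (teacher forcing during training).
        steps = summary.unsqueeze(1).repeat(1, MAX_NAME_LEN, 1)
        hidden, _ = self.rnn(steps)
        return self.out(hidden)  # (batch, MAX_NAME_LEN, NAME_VOCAB)

# Toy usage: one fake usage-context vector -> name-token predictions.
auto_encoder, predictor = ContextAutoEncoder(), NamePredictor()
context = torch.zeros(1, CONTEXT_VOCAB)
context[0, [3, 42, 137]] = 1.0     # tokens seen around the minified variable
summary, _ = auto_encoder(context)
predicted = predictor(summary).argmax(dim=-1)
print(predicted.shape)             # torch.Size([1, 3])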
An Exploratory Study on Code Attention in BERT
Many recent studies in software engineering have introduced deep neural models
based on the Transformer architecture or use Transformer-based Pre-trained
Language Models (PLMs) trained on code. Although these models achieve
state-of-the-art results in many downstream tasks such as code summarization
and bug detection, they are based on Transformers and PLMs, which are mainly
studied in the Natural Language Processing (NLP) field. Current studies apply
the reasoning and practices from NLP to these models of code, despite the
differences between natural languages and programming languages. There is also
limited literature explaining how code is modeled.
Here, we investigate the attention behavior of PLMs on code and compare it
with natural language. We pre-trained BERT, a Transformer-based PLM, on code
and explored what kind of information it learns, both semantic and syntactic.
We run several experiments to analyze the attention values of code constructs
on each other and what BERT learns in each layer. Our analyses show that BERT
pays more attention to syntactic entities, specifically identifiers and
separators, in contrast to the most attended token [CLS] in NLP. This
observation motivated us to leverage identifiers to represent the code sequence
instead of the [CLS] token when used for code clone detection. Our results show
that employing embeddings from identifiers increases the performance of BERT by
605% and 4% F1-score in its lower layers and the upper layers, respectively.
When identifiers' embeddings are used in CodeBERT, a code-based PLM, the
performance is improved by 21-24% in the F1-score of clone detection. The
findings can benefit the research community by using code-specific
representations instead of applying the common embeddings used in NLP, and open
new directions for developing smaller models with similar performance.
Comment: Accepted in ICPC 202
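As a hedged sketch of the identifier-based representation described above, the snippet below mean-pools the embeddings of identifier-like tokens from one BERT layer and compares two code fragments by cosine similarity. It assumes the generic bert-base-uncased checkpoint rather than the authors' code-pretrained model, and it uses a crude regex plus a toy keyword list in place of a real lexer to spot identifiers.

# Sketch of identifier-embedding pooling for clone detection; the checkpoint,
# the regex heuristic, and the keyword list are assumptions, not the paper's setup.
import re
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

KEYWORDS = {"def", "return", "if", "else", "for", "while", "in"}  # toy keyword list

def identifier_embedding(code: str, layer: int = -1) -> torch.Tensor:
    # Mean-pool the embeddings of identifier-like tokens from one BERT layer.
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer][0]      # (seq_len, hidden)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    is_ident = [bool(re.fullmatch(r"[A-Za-z_#]\w*", t.replace("##", "#")))
                and t not in KEYWORDS and t not in ("[CLS]", "[SEP]")
                for t in tokens]
    mask = torch.tensor(is_ident)
    if mask.any():
        return hidden[mask].mean(dim=0)
    return hidden[0]                                           # fall back to [CLS]

def clone_score(code_a: str, code_b: str) -> float:
    # Cosine similarity between identifier-based representations of two snippets.
    a, b = identifier_embedding(code_a), identifier_embedding(code_b)
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

print(clone_score("def add(a, b): return a + b",
                  "def sum_two(x, y): return x + y"))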