Code Completion with Neural Attention and Pointer Networks
Intelligent code completion has become an essential research task to
accelerate modern software development. To facilitate effective code completion
for dynamically typed programming languages, we apply neural language models
trained on large codebases and develop a tailored attention mechanism for
code completion. However, standard neural language models, even with an
attention mechanism, cannot correctly predict out-of-vocabulary (OoV) words,
which restricts code completion performance. In this paper, inspired by the
prevalence of locally repeated terms in program source code and by the
recently proposed pointer copy mechanism, we propose a pointer mixture
network for better predicting OoV words in code completion. Based on the
context, the pointer mixture network learns either to generate a
within-vocabulary word through an RNN component or to copy an OoV word
from the local context through
a pointer component. Experiments on two benchmarked datasets demonstrate the
effectiveness of our attention mechanism and pointer mixture network on the
code completion task.
Comment: Accepted at IJCAI 2018
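To make the generate-vs-copy decision concrete, here is a minimal sketch of a
pointer-mixture head in PyTorch. It is illustrative only, not the authors'
implementation: the module and tensor names (PointerMixture, context_states,
context_token_ids) are assumptions, and the gate follows the standard
pointer-copy formulation of mixing a vocabulary distribution with an
attention-derived copy distribution.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PointerMixture(nn.Module):
        # Sketch of a pointer mixture head (illustrative, not the paper's
        # code): mixes the RNN's vocabulary distribution with a copy
        # distribution over the local context, weighted by a learned gate.
        def __init__(self, hidden_size, vocab_size):
            super().__init__()
            self.vocab_proj = nn.Linear(hidden_size, vocab_size)
            self.gate = nn.Linear(hidden_size, 1)

        def forward(self, h_t, context_states, context_token_ids):
            # h_t: (B, H) current hidden state;
            # context_states: (B, T, H) previous hidden states;
            # context_token_ids: (B, T) token ids, with OoV tokens
            # mapped to extended-vocabulary slots.
            p_vocab = F.softmax(self.vocab_proj(h_t), dim=-1)       # (B, V)
            scores = torch.bmm(context_states,
                               h_t.unsqueeze(-1)).squeeze(-1)       # (B, T)
            p_positions = F.softmax(scores, dim=-1)                 # (B, T)
            # Scatter positional copy weights onto their token ids.
            p_copy = torch.zeros_like(p_vocab)
            p_copy.scatter_add_(1, context_token_ids, p_positions)
            g = torch.sigmoid(self.gate(h_t))                       # (B, 1)
            return g * p_vocab + (1.0 - g) * p_copy

At prediction time, the gate lets the model fall back on copying a locally
repeated identifier whenever the vocabulary distribution cannot express it.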
Open Vocabulary Learning on Source Code with a Graph-Structured Cache
Machine learning models that take computer program source code as input
typically use Natural Language Processing (NLP) techniques. However, a major
challenge is that code is written using an open, rapidly changing vocabulary
due to, e.g., the coinage of new variable and method names. Reasoning over such
a vocabulary is not something for which most NLP methods are designed. We
introduce a Graph-Structured Cache to address this problem; this cache contains
a node for each new word the model encounters with edges connecting each word
to its occurrences in the code. We find that combining this graph-structured
cache strategy with recent Graph-Neural-Network-based models for supervised
learning on code improves the models' performance on a code completion task
and a variable naming task, with over 100% relative improvement on the
latter, at the cost of a moderate increase in computation time.
Comment: Published at the International Conference on Machine Learning
(ICML 2019), 13 pages
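As a rough illustration of the cache idea, the sketch below coins one cache
node per previously unseen word and records an edge to each of its
occurrences. The function name and input format (build_vocab_cache, a list of
(word, occurrence_node) pairs) are assumptions for the example, not the
paper's API.

    from collections import defaultdict

    def build_vocab_cache(token_occurrences):
        # Illustrative sketch of a graph-structured cache: one cache node
        # per new word, with edges from that node to every program-graph
        # node where the word occurs.
        cache_nodes = {}              # word -> cache node id
        edges = defaultdict(list)     # cache node id -> occurrence nodes
        for word, occurrence_node in token_occurrences:
            if word not in cache_nodes:   # first sighting coins a node
                cache_nodes[word] = f"cache:{len(cache_nodes)}"
            edges[cache_nodes[word]].append(occurrence_node)
        return cache_nodes, dict(edges)

    # e.g. the identifier fooBar occurring at graph nodes 3 and 12:
    cache, edges = build_vocab_cache(
        [("fooBar", 3), ("baz", 7), ("fooBar", 12)])

Because the cache grows with the input program rather than with a fixed
vocabulary, new identifiers remain addressable by the model.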
Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
Neural-symbolic computing has become a subject of interest in both academic
and industrial research laboratories. Graph Neural Networks (GNNs) are
widely used in relational and symbolic domains, with applications in
combinatorial optimization, constraint satisfaction,
relational reasoning and other scientific domains. The need for improved
explainability, interpretability and trust of AI systems in general demands
principled methodologies, as suggested by neural-symbolic computing. In this
paper, we review the state of the art on the use of GNNs as a model of
neural-symbolic computing, including the application of GNNs in several
domains as well as their relationship to current developments in
neural-symbolic computing.
Comment: Updated version, draft of accepted IJCAI 2020 Survey Paper
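For readers new to the area, a minimal message-passing layer of the generic
kind such surveys cover is sketched below. It is a textbook
sum-aggregate-and-update step over a dense adjacency matrix, an assumption
for illustration rather than any specific model from the survey.

    import torch
    import torch.nn as nn

    class MessagePassingLayer(nn.Module):
        # Generic GNN layer sketch: each node sums its neighbors' features
        # and updates its own state from the concatenation.
        def __init__(self, dim):
            super().__init__()
            self.update = nn.Linear(2 * dim, dim)

        def forward(self, x, adj):
            # x: (N, D) node features; adj: (N, N) {0,1} adjacency matrix.
            messages = adj @ x            # aggregate neighbor features
            return torch.relu(self.update(
                torch.cat([x, messages], dim=-1)))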