Comparison of Syntactic and Semantic Representations of Programs in Neural Embeddings
Neural approaches to program synthesis and understanding have proliferated in recent years; at the same time, graph-based neural networks have emerged as a promising tool. This work aims to be the first empirical study comparing the effectiveness of natural-language models and static-analysis graph-based models for representing programs in deep learning systems. It compares graph convolutional networks using different graph representations on the task of program embedding, and shows that the sparsity of control flow graphs and the implicit aggregation of graph convolutional networks cause these models to perform worse than naive models. It therefore concludes that simply augmenting purely linguistic or statistical models with formal information does not perform well: the nuanced nature of formal properties introduces more noise than structure for graph convolutional networks.

Comment: 54 pages, Imperial College London Master's thesis
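To make the aggregation argument concrete, here is a minimal sketch (illustrative only, not the thesis's code) of a single graph convolution layer applied to a chain-shaped control flow graph. Because such graphs are sparse, each layer's mean aggregation only mixes a node's features with its immediate neighbours', which hints at why formal structure can contribute little signal per layer:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph convolution layer: H' = D^-1 (A + I) H W.

    Averages each node's features with its neighbours' (self-loops
    included), then applies a linear projection.
    """
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    deg_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)   # inverse degrees
    return (deg_inv * a_hat) @ feats @ weight          # mean-aggregate, project

# A chain-shaped control flow graph: statement i flows to statement i+1.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
feats = np.eye(3)                # one-hot node features
out = gcn_layer(adj, feats, np.eye(3))
print(out)   # each row is an average over the node and its successor
```

With identity weights, node 0's output is an even blend of nodes 0 and 1, showing how a single layer smooths features only along the sparse flow edges.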