
    Automatic repair and type binding of undeclared variables using neural networks

    Over the past few years, there have been significant achievements in applying deep learning to program analysis, thanks to the effectiveness of encoding programs as vector representations. Deep learning has been used to detect security vulnerabilities with generative adversarial networks and to predict hidden software defects from software defect datasets. It has also been used to detect and fix syntax errors made by novice programmers, by training a neural machine translation model on bug-free source code to locate out-of-place tokens and suggest fixes. However, all these approaches require either defect datasets or executable bug-free code samples to train the deep learning model. Our neural network model is trained with neither defect datasets nor bug-free code samples; instead, it learns from the structural semantics of abstract syntax trees (ASTs), where each node represents a construct appearing in the source code. The model fixes one of the most common syntax errors, undeclared variable errors, and infers the variables' type information before compilation. With this approach, the model correctly locates and identifies undeclared variable errors in 81% of the programs in a Prutor dataset of 1,059 programs containing such errors, and correctly infers their data types in 80% of the programs.
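The abstract's core idea, representing each source construct as an AST node and flagging names that are never declared, can be illustrated without any neural model. Below is a minimal, flow-insensitive sketch in Python; the paper itself targets C programs from Prutor, so this function, its name, and its scope handling are illustrative assumptions, not the authors' method:

```python
import ast
import builtins

def undeclared_names(source: str) -> set[str]:
    """Flag names that are loaded somewhere in `source` but never bound
    by an assignment, import, function definition, or parameter.

    Flow-insensitive sketch: a name bound anywhere counts as declared,
    regardless of whether the binding precedes the use.
    """
    tree = ast.parse(source)
    bound = set(dir(builtins))   # treat builtins like print/len as declared
    loaded = []
    for node in ast.walk(tree):  # every AST node is one source construct
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                bound.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.append(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            bound.add(node.name)
            bound.update(a.arg for a in node.args.args)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            bound.update((a.asname or a.name).split(".")[0]
                         for a in node.names)
    return {name for name in loaded if name not in bound}
```

A real repair tool would go further, as the paper does, by also predicting a plausible type for each flagged name; this sketch only performs the location step.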

    sk_p: a neural program corrector for MOOCs

    We present a novel technique for automatic program correction in MOOCs, capable of fixing both syntactic and semantic errors without manual, problem-specific correction strategies. Given an incorrect student program, it generates candidate programs from a distribution of likely corrections and checks each candidate for correctness against a test suite. The key observation is that in MOOCs many programs share similar code fragments, and the seq2seq neural network model, used in the natural-language-processing task of machine translation, can be modified and trained to recover these fragments. Experiments show our scheme corrects 29% of all incorrect submissions and outperforms a state-of-the-art approach that requires manual, problem-specific correction strategies.
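The generate-and-check loop the abstract describes can be sketched in a few lines. This is a hedged illustration, not sk_p's implementation: `correct_program`, the assumption that candidates arrive ranked by model likelihood, and the test-suite representation are all invented for the example.

```python
def correct_program(candidates, test_suite):
    """Return the first candidate program that passes every test.

    `candidates`: iterable of program strings, assumed ranked most-likely
    first (a stand-in for samples from sk_p's seq2seq fragment model).
    `test_suite`: callables that take the executed program's namespace
    and return True on success. Returns None if no candidate passes.
    """
    for program in candidates:
        namespace = {}
        try:
            exec(program, namespace)              # run the candidate
            if all(test(namespace) for test in test_suite):
                return program                    # first passing fix wins
        except Exception:
            continue  # syntax or runtime failure: discard this candidate
    return None
```

Note that `exec` of untrusted student code is unsafe outside a sandbox; a real grader would run each candidate in an isolated process with a timeout.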