10 research outputs found

    Synthesizing Imperative Programs from Examples Guided by Static Analysis

    We present a novel algorithm that synthesizes imperative programs for introductory programming courses. Given a set of input-output examples and a partial program, our algorithm generates a complete program that is consistent with every example. Our key idea is to combine enumerative program synthesis and static analysis, which aggressively prunes out a large search space while guaranteeing to find a correct solution if one exists. We have implemented our algorithm in a tool, called SIMPL, and evaluated it on 30 problems used in introductory programming courses. The results show that SIMPL is able to solve the benchmark problems in 6.6 seconds on average. Comment: The paper is accepted at the Static Analysis Symposium (SAS) '17. The submission version differs somewhat from the arXiv version; the final version will be uploaded once the camera-ready version is ready.
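    The combination described above — enumerating candidate programs while a static analysis prunes candidates before they are ever run — can be sketched in a few lines. This is an illustrative toy, not SIMPL's implementation: the DSL, the `OPS` table, and the interval analysis are all assumptions made for the example.

    ```python
    from itertools import product

    # Tiny straight-line DSL: a program is a sequence of unary ops on the input.
    # Each op carries a concrete semantics and an interval transformer used for
    # static pruning (the DSL and names here are hypothetical, not SIMPL's).
    OPS = {
        "inc": (lambda x: x + 1,
                lambda lo, hi: (lo + 1, hi + 1)),
        "dbl": (lambda x: x * 2,
                lambda lo, hi: (min(2 * lo, 2 * hi), max(2 * lo, 2 * hi))),
        "neg": (lambda x: -x,
                lambda lo, hi: (-hi, -lo)),
    }

    def abstract_ok(prog, examples):
        # Over-approximate: push the interval covering all example inputs
        # through the program; if some required output falls outside the
        # reachable interval, the candidate is pruned without being run.
        lo = min(i for i, _ in examples)
        hi = max(i for i, _ in examples)
        for op in prog:
            lo, hi = OPS[op][1](lo, hi)
        return all(lo <= o <= hi for _, o in examples)

    def concrete_ok(prog, examples):
        # Exact check: the program must reproduce every example.
        for inp, out in examples:
            for op in prog:
                inp = OPS[op][0](inp)
            if inp != out:
                return False
        return True

    def synthesize(examples, max_len=4):
        # Enumerate programs by increasing length, pruning abstractly first.
        for n in range(1, max_len + 1):
            for prog in product(OPS, repeat=n):
                if abstract_ok(prog, examples) and concrete_ok(prog, examples):
                    return prog
        return None

    print(synthesize([(1, 4), (3, 8)]))  # → ('inc', 'dbl')
    ```

    The pruning is sound because every interval transformer over-approximates its concrete op, so a program consistent with all examples is never discarded; the payoff is that most candidates are rejected by a cheap interval computation rather than by concrete evaluation.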

    Verifix: Verified Repair of Programming Assignments

    Automated feedback generation for introductory programming assignments is useful for programming education. Most works try to generate feedback to correct a student program by comparing its behavior with an instructor's reference program on selected tests. In this work, our aim is to generate verifiably correct program repairs as student feedback. The student assignment is aligned and composed with a reference solution in terms of control flow, and differences in data variables are automatically summarized via predicates that relate the variable names. Failed verification attempts for the equivalence of the two programs are exploited to obtain a collection of maxSMT queries, whose solutions point to repairs of the student assignment. We have conducted experiments on student assignments curated from a widely deployed intelligent tutoring system. Our results indicate that we can generate verified feedback for up to 58% of the assignments. More importantly, our system indicates when it is able to generate verified feedback, which is then usable by novice students with high confidence.
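    The maxSMT idea above — prefer repairs that keep as much of the student's code as possible — can be mimicked with a brute-force minimum-edit search. This is a toy stand-in, not Verifix: the two programs are already aligned line by line, agreement is checked by sampling inputs rather than by an SMT equivalence proof, and all names are illustrative.

    ```python
    from itertools import combinations

    def run(stmts, x):
        # Execute a straight-line program given as (variable, expression) pairs.
        env = {"x": x}
        for var, expr in stmts:
            env[var] = eval(expr, {}, dict(env))
        return env["r"]

    # Control-flow-aligned programs over the same variable names.
    reference = [("t", "x * x"), ("r", "t + 1")]
    student   = [("t", "x + x"), ("r", "t + 1")]  # bug: + instead of *

    def min_repair(student, reference, tests=range(-5, 6)):
        # Try repairs in order of increasing cost (number of replaced lines),
        # mimicking maxSMT's preference for preserving the student's own code.
        n = len(student)
        for k in range(n + 1):
            for idxs in combinations(range(n), k):
                patched = [reference[i] if i in idxs else student[i]
                           for i in range(n)]
                if all(run(patched, x) == run(reference, x) for x in tests):
                    return idxs, patched
        return None

    idxs, patched = min_repair(student, reference)
    print(idxs, patched)  # the minimal repair replaces only the first line
    ```

    A real maxSMT encoding would make each "keep this student statement" a soft constraint and the equivalence conditions hard constraints, so the solver itself finds the cheapest consistent repair instead of this exhaustive loop.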

    Hubble Spacer Telescope

    Visualizing a model checker’s run on a model can be useful when trying to gain a deeper understanding of the verification of that particular model. However, it is difficult to formalize the problem that visualization solves, as it varies from person to person. A visualized form of a model checker’s run lets a user pinpoint sections of the run without having to look through the entire log multiple times or knowing in advance what to look for. This thesis presents the Hubble Spacer Telescope (HST), a visualizer for Spacer, an SMT-based Horn-clause solver. HST combines multiple exploration graph views with customizable lemma transformations. It offers a variety of ways to transform lemmas so that users can pick and choose how lemmas are presented: HST’s lemma transformations allow a user to change variable names, rearrange terms in a literal, and rearrange the placement of literals within a lemma, all through programming by example. HST not only visually depicts a Spacer exploration log but also lets users transform the lemmas produced, in ways they hope will make a Spacer model-checking run easier to understand. Given a Spacer exploration log, HST creates a raw exploration graph in which clicking on a node produces the state of the model as well as the lemmas learned from that state. A second graph view summarizes the exploration into its proof obligations. HST uses programming by example to simplify lemma transformations, so users only have to modify a few lemmas to transform all lemmas in an exploration log; users can also choose between multiple transformations to better suit their needs. This thesis evaluates HST through a case study, which demonstrates the extent of the grammar created for lemma transformations. Users can transform disjunctions of literals produced by Spacer into a conditional statement, customized by the contents of the predicate. Since lemma transformations are completely customizable, HST can be tailored to each individual user’s preferences.
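    The programming-by-example step described above — a user edits a few lemmas and the tool generalizes the edit to all of them — can be sketched for the simplest case, variable renaming. This is a minimal illustration, not HST's actual transformation grammar; the lemma syntax and helper names are assumptions.

    ```python
    import re

    TOKEN = re.compile(r"\w+|\S")

    def infer_renaming(before, after):
        # Learn old -> new identifier pairs from one token-aligned example edit.
        b, a = TOKEN.findall(before), TOKEN.findall(after)
        assert len(b) == len(a), "example edit must be token-aligned"
        return {x: y for x, y in zip(b, a) if x != y}

    def apply_renaming(lemma, mapping):
        # Replace whole tokens only, so "x0" never rewrites part of "x01".
        return re.sub(r"\w+", lambda m: mapping.get(m.group(0), m.group(0)), lemma)

    # Two example edits supplied by the user, one per variable to rename.
    mapping = {}
    for before, after in [("(>= x0 0)", "(>= n 0)"),
                          ("(<= x1 10)", "(<= cap 10)")]:
        mapping.update(infer_renaming(before, after))

    print(apply_renaming("(=> (>= x0 0) (<= x1 10))", mapping))
    # → (=> (>= n 0) (<= cap 10))
    ```

    Rearranging terms within a literal or literals within a lemma would require learning a permutation from the example rather than a substitution map, but the workflow is the same: edit a few lemmas by hand, then apply the inferred transformation to every lemma in the log.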

    Semi-supervised Verified Feedback Generation

    Students have enthusiastically taken to online programming lessons and contests. Unfortunately, they tend to struggle due to lack of personalized feedback. There is an urgent need for program analysis and repair techniques capable of handling both the scale and the variations in student submissions, while ensuring quality of feedback. Towards this goal, we present a novel methodology called semi-supervised verified feedback generation. We cluster submissions by solution strategy and ask the instructor to identify or add a correct submission in each cluster. We then verify every submission in a cluster against the instructor-validated submission in the same cluster. If faults are detected in a submission, feedback suggesting fixes to them is generated. Clustering reduces the burden on the instructor and also the variations that have to be handled during feedback generation. The verified feedback generation ensures that only correct feedback is generated. We implemented a tool, named CoderAssist, based on this approach and evaluated it on dynamic programming assignments. We have designed a novel counter-example guided feedback generation algorithm capable of suggesting fixes to all faults in a submission. In an evaluation on 2226 submissions to 4 problems, CoderAssist could generate verified feedback for 1911 (85%) submissions in 1.6s each on average. It does a good job of reducing the burden on the instructor: only one submission had to be manually validated or added for every 16 submissions.
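    The pipeline above — cluster by solution strategy, validate one submission per cluster, then check the rest against it — can be sketched in miniature. This is an illustrative stand-in, not CoderAssist: the strategy signature is a crude keyword fingerprint, and "verification" is replaced by testing on sampled inputs.

    ```python
    def signature(src):
        # Crude strategy fingerprint: which control constructs appear.
        return frozenset(k for k in ("while", "for", "if") if k in src)

    def cluster(submissions):
        # Group submissions sharing a strategy signature.
        groups = {}
        for name, src in submissions.items():
            groups.setdefault(signature(src), []).append(name)
        return groups

    def check(src, ref_fn, tests):
        # Return the inputs on which the submission disagrees with the
        # instructor-validated behavior (stand-in for verified equivalence).
        env = {}
        exec(src, env)
        return [x for x in tests if env["f"](x) != ref_fn(x)]

    # Toy assignment: f(n) = 0 + 1 + ... + n.
    submissions = {
        "alice": "def f(n):\n    s = 0\n    for i in range(n + 1):\n        s += i\n    return s",
        "bob":   "def f(n):\n    s = i = 0\n    while i <= n:\n        s += i\n        i += 1\n    return s",
        "carol": "def f(n):\n    s = 0\n    for i in range(n):\n        s += i\n    return s",  # off-by-one
    }
    ref = lambda n: n * (n + 1) // 2  # instructor-validated behavior

    for sig, members in cluster(submissions).items():
        for name in members:
            bad = check(submissions[name], ref, range(4))
            print(name, "ok" if not bad else f"diverges on inputs {bad}")
    ```

    Clustering first means the instructor validates one representative per strategy rather than every submission, which is where the "one manual validation per 16 submissions" saving comes from; the counter-example inputs returned by `check` are the raw material for fix-suggesting feedback.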