    Synthesizing Imperative Programs from Examples Guided by Static Analysis

    We present a novel algorithm that synthesizes imperative programs for introductory programming courses. Given a set of input-output examples and a partial program, our algorithm generates a complete program that is consistent with every example. Our key idea is to combine enumerative program synthesis and static analysis, which aggressively prunes a large search space while guaranteeing to find a correct solution if one exists. We have implemented our algorithm in a tool, called SIMPL, and evaluated it on 30 problems used in introductory programming courses. The results show that SIMPL is able to solve the benchmark problems in 6.6 seconds on average. (Comment: accepted at the Static Analysis Symposium (SAS) '17; the arXiv version differs somewhat from the submission, and the final version will be uploaded once the camera-ready version is ready.)
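
    The enumerate-and-check loop the abstract builds on can be pictured with a toy example. The sketch below is our own illustration, not SIMPL's implementation: SIMPL prunes with a static analysis over partial imperative programs, while this sketch enumerates arithmetic expressions bottom-up and uses only the standard observational-equivalence pruning (keep one expression per behaviour on the example inputs). The expression grammar and the helper names (grow, synthesize) are invented for the sketch.

        # Toy enumerative synthesis from input-output examples (not SIMPL itself).

        def evaluate(e, x):
            """Evaluate an expression tree over the single variable x."""
            if e == "x":
                return x
            if isinstance(e, int):
                return e
            op, a, b = e
            va, vb = evaluate(a, x), evaluate(b, x)
            return va + vb if op == "add" else va * vb

        def grow(bank):
            """All expressions one operator deeper than the current bank."""
            return [(op, a, b) for op in ("add", "mul") for a in bank for b in bank]

        def synthesize(examples, max_rounds=3):
            xs = tuple(x for x, _ in examples)
            target = tuple(y for _, y in examples)
            bank, seen = [], set()
            candidates = ["x", 1, 2]
            for _ in range(max_rounds):
                for e in candidates:
                    sig = tuple(evaluate(e, x) for x in xs)
                    if sig == target:
                        return e              # consistent with every example
                    if sig not in seen:       # prune behavioural duplicates
                        seen.add(sig)
                        bank.append(e)
                candidates = grow(bank)
            return None

        # Finds an expression equivalent to 2*x + 1 from three examples.
        print(synthesize([(0, 1), (1, 3), (2, 5)]))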

    Automata Tutor v3

    Computer science class enrollments have risen rapidly in the past decade. With current class sizes, standard approaches to grading and providing personalized feedback are no longer possible, and new techniques become both feasible and necessary. In this paper, we present the third version of Automata Tutor, a tool for helping teachers and students in large courses on automata and formal languages. The second version of Automata Tutor supported automatic grading and feedback for finite-automata constructions and has already been used by thousands of users in dozens of countries. This new version supports automated grading and feedback generation for a greatly extended variety of problems, including problems that ask students to create regular expressions, context-free grammars, pushdown automata, and Turing machines corresponding to a given description, and problems about converting between equivalent models (e.g., from regular expressions to nondeterministic finite automata). Moreover, for several problems, this new version also enables teachers and students to automatically generate new problem instances. We also present the results of a survey run on a class of 950 students, which shows very positive results about the usability and usefulness of the tool.
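
    Grading and feedback for automata constructions rest on language-equivalence checks. The sketch below is a hedged illustration, not Automata Tutor's code, and the dictionary encoding of DFAs is our own: it walks the product of a reference DFA and a student DFA breadth-first and returns a shortest string on which the two disagree, which doubles as counterexample feedback.

        from collections import deque

        def find_counterexample(ref, student, alphabet):
            """BFS over the product automaton: return a shortest string accepted
            by exactly one of the two DFAs, or None if they are equivalent."""
            start = (ref["start"], student["start"])
            queue, seen = deque([(start, "")]), {start}
            while queue:
                (p, q), word = queue.popleft()
                if (p in ref["accept"]) != (q in student["accept"]):
                    return word                          # languages differ here
                for a in alphabet:
                    nxt = (ref["delta"][p, a], student["delta"][q, a])
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, word + a))
            return None

        # Reference: strings over {0,1} with an even number of 1s.
        ref = {"start": "e", "accept": {"e"},
               "delta": {("e", "0"): "e", ("e", "1"): "o",
                         ("o", "0"): "o", ("o", "1"): "e"}}
        # A wrong student answer that, among other things, rejects the empty string.
        student = {"start": "s", "accept": {"t"},
                   "delta": {("s", "0"): "s", ("s", "1"): "t",
                             ("t", "0"): "t", ("t", "1"): "s"}}
        print(repr(find_counterexample(ref, student, "01")))  # -> ''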

    Wodel-Edu: a tool for the generation and evaluation of diagram-based exercises

    Creating and grading exercises are recurring tasks within higher education. When these exercises are based on diagrams, like logic circuits, automata or class diagrams, we can represent them as models and use model-driven engineering techniques for the large-scale generation of quizzes, which can be automatically graded. To this end, we propose a domain-independent tool for the generation and automated evaluation of diagram-based exercises called Wodel-Edu. Wodel-Edu is built atop Wodel, an extensible tool for model mutation, and offers seven kinds of diagram exercises. It supports code generation from the exercises for the Moodle platform, the web, and Android and iOS applications. Evaluations from the professor and student perspectives show good results. (Acknowledgements: special gratitude to Andrés Rico-Fernández and Jaime Velázquez Pazos for their help with the Wodel-Edu implementation, building the code generators for the Android and iOS exercise applications, respectively, and to all participants in the evaluation. Project partially funded by the Spanish MICINN (PID2021-122270OB-I00, TED2021-129381B-C21).)
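
    Wodel itself is an EMF-based model-mutation DSL, so the Python sketch below illustrates only the underlying idea rather than Wodel-Edu's implementation: starting from a correct model (here a DFA, in the same encoding as above), apply small random mutations and use the mutants as distractors in a multiple-choice exercise. The mutation kinds and helper names are our own assumptions.

        import random

        def copy_dfa(dfa):
            return {"start": dfa["start"], "accept": set(dfa["accept"]),
                    "delta": dict(dfa["delta"])}

        def mutants(dfa, n=3, seed=0):
            """Random single-edit mutants of a DFA, to be used as distractors.
            (A real generator would discard mutants equivalent to the original.)"""
            rng = random.Random(seed)
            states = sorted({p for p, _ in dfa["delta"]})
            out = []
            for _ in range(n):
                m = copy_dfa(dfa)
                if rng.random() < 0.5:
                    m["accept"] ^= {rng.choice(states)}    # toggle an accepting state
                else:
                    p, a = rng.choice(sorted(m["delta"]))
                    m["delta"][p, a] = rng.choice(states)  # retarget one transition
                out.append(m)
            return out

        even_ones = {"start": "e", "accept": {"e"},
                     "delta": {("e", "0"): "e", ("e", "1"): "o",
                               ("o", "0"): "o", ("o", "1"): "e"}}
        # The correct diagram plus three mutated distractors form one
        # multiple-choice "pick the right automaton" exercise.
        for m in mutants(even_ones):
            print(sorted(m["accept"]), m["delta"])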

    Automated generation and correction of diagram-based exercises for Moodle

    One of the most time-consuming tasks for teachers is creating and correcting exercises to evaluate students. This is normally performed by hand, which incurs high time costs and is error-prone. A way to alleviate this problem is to provide an assistant tool that automates such tasks. In the case of exercises based on diagrams, they can be represented as models to enable their automated model-based generation for any target environment, like web or mobile applications, or learning platforms like Moodle. In this paper, we propose an automated process for synthesizing five types of diagram-based exercises for the Moodle platform. Being model-based, our solution is domain-agnostic (i.e., it can be applied to arbitrary domains like automata, electronics, or software design). We report on its use within a university course on automata theory, as well as evaluations of generality, effectiveness and efficiency, illustrating the benefits of our approach. (Funding: Comunidad de Madrid, Grant/Award Number S2018/TCS-4314; Ministerio de Ciencia e Innovación, Grant/Award Numbers PID2021-122270OB-I00, TED2021-129381B-C21.)
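
    One concrete way to target Moodle, used here purely as an illustration and not necessarily the paper's generator, is Moodle's GIFT text format for importing questions. The sketch renders a multiple-choice question from a DFA model, marking each option string correct (=) or wrong (~) according to whether the automaton accepts it; the DFA encoding and helper names are our own.

        def accepts(dfa, word):
            state = dfa["start"]
            for a in word:
                state = dfa["delta"][state, a]
            return state in dfa["accept"]

        def to_gift(title, text, dfa, options):
            """Render one multiple-choice GIFT question: an option is marked
            correct (=) iff the model DFA accepts it."""
            lines = [f"::{title}:: {text} {{"]
            for w in options:
                lines.append("    " + ("=" if accepts(dfa, w) else "~") + w)
            lines.append("}")
            return "\n".join(lines)

        even_ones = {"start": "e", "accept": {"e"},
                     "delta": {("e", "0"): "e", ("e", "1"): "o",
                               ("o", "0"): "o", ("o", "1"): "e"}}
        print(to_gift("q1", "Which string contains an even number of 1s?",
                      even_ones, ["11", "1", "10", "111"]))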

    Automated Grading and Feedback of Regular Expressions

    With the current expansion of education, there is a need for automated tools to evaluate assignments. As part of this thesis, we have developed a technique to evaluate assignments on regular expressions (regexes). Every student is different, and so is their solution, which makes it hard to grade them all with a single approach. Hence, in addition to the existing techniques, we offer a new way of evaluating regexes, which we call the regex edit distance. The idea is to find the minimal changes to a wrong answer that make its language equivalent to that of a correct answer. This approach is along the lines of the one used by Automata Tutor to grade DFAs. We also spoke to different graders and observed that they were, in some sense, computing the regex edit distance to assign partial credit. Computing the regex edit distance is a PSPACE-hard problem and seems computationally intractable even for college-level submissions. To deal with this intractability, we look at a simpler version of the regex edit distance that can be computed for many college-level submissions. We hypothesize that our version of the regex edit distance is a good metric for evaluating and awarding partial credit for regexes. We ran an initial study and observed a strong relation between the partial credit awarded and our version of the regex edit distance.
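
    Exact regex edit distance is PSPACE-hard, as the abstract notes. The sketch below is a toy approximation of the idea, not the thesis' algorithm: it enumerates all regexes one character-level edit away from the wrong answer and keeps those whose language agrees with the correct answer on all strings up to a bounded length, tested with Python's re module. The alphabet, token set, and length bound are assumptions of the sketch.

        import re
        from itertools import product

        ALPHABET = "ab"            # assumed input alphabet
        TOKENS = "ab()|*"          # edit alphabet for candidate regexes

        def bounded_equal(r1, r2, max_len=6):
            """Approximate L(r1) == L(r2) by checking all strings up to max_len."""
            for n in range(max_len + 1):
                for tup in product(ALPHABET, repeat=n):
                    w = "".join(tup)
                    if bool(re.fullmatch(r1, w)) != bool(re.fullmatch(r2, w)):
                        return False
            return True

        def one_edit_repairs(wrong, correct):
            """Candidates one character edit away from `wrong` whose language
            (approximately) equals that of `correct`."""
            candidates = set()
            for i in range(len(wrong) + 1):
                for t in TOKENS:
                    candidates.add(wrong[:i] + t + wrong[i:])          # insert
                    if i < len(wrong):
                        candidates.add(wrong[:i] + t + wrong[i + 1:])  # substitute
            for i in range(len(wrong)):
                candidates.add(wrong[:i] + wrong[i + 1:])              # delete
            repairs = []
            for c in candidates:
                try:
                    if bounded_equal(c, correct):
                        repairs.append(c)
                except re.error:
                    pass             # skip syntactically invalid mutants
            return repairs

        # The student wrote a*b; the target is a*b*: one insertion repairs it.
        print(one_edit_repairs("a*b", "a*b*"))  # -> ['a*b*']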

    Automated Grading of DFA Constructions

    One challenge in making online education more effective is to develop automatic grading software that can provide meaningful feedback. This paper provides a solution to automatic grading of the standard computation-theory problem that asks a student to construct a deterministic finite automaton (DFA) from a given description of its language. We focus on how to assign partial grades for incorrect answers. Each student's answer is compared to the correct DFA using a hybrid of three techniques devised to capture different classes of errors. First, in an attempt to catch syntactic mistakes, we compute the edit distance between the two DFA descriptions. Second, we consider the entropy of the symmetric difference of the languages of the two DFAs, and compute a score that estimates the fraction of strings on which the student's answer is wrong. Our third technique is aimed at capturing mistakes in reading the problem description. For this purpose, we consider a description language, MOSEL, which adds syntactic sugar to classical Monadic Second-Order Logic and allows defining regular languages in a concise and natural way. We provide algorithms, along with optimizations, for transforming MOSEL descriptions into DFAs and vice versa. These allow us to compute the syntactic edit distance of an incorrect answer from the correct one in terms of their logical representations. We report an experimental study that evaluates hundreds of answers submitted by (real) students, comparing the grades and feedback computed by our tool with those of human graders. Our conclusion is that the tool is able to assign partial grades in a meaningful way, and should be preferred over human graders for both scalability and consistency.
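
    The second of the three techniques, estimating the fraction of strings on which the student's DFA is wrong, can be approximated by counting over the product automaton. The sketch below is our own bounded-length version, not the paper's entropy-based score: for each length up to a bound, it counts how many strings fall in the symmetric difference of the two languages.

        def disagreement_fraction(ref, student, alphabet, max_len=8):
            """Fraction of strings up to max_len in the symmetric difference
            of the two DFA languages, counted via the product automaton."""
            counts = {(ref["start"], student["start"]): 1}
            wrong = total = 0
            for _ in range(max_len + 1):
                for (p, q), c in counts.items():
                    total += c
                    if (p in ref["accept"]) != (q in student["accept"]):
                        wrong += c    # strings on which the answers disagree
                nxt = {}
                for (p, q), c in counts.items():
                    for a in alphabet:
                        key = (ref["delta"][p, a], student["delta"][q, a])
                        nxt[key] = nxt.get(key, 0) + c
                counts = nxt
            return wrong / total

        even_ones = {"start": "e", "accept": {"e"},
                     "delta": {("e", "0"): "e", ("e", "1"): "o",
                               ("o", "0"): "o", ("o", "1"): "e"}}
        odd_ones = {"start": "e", "accept": {"o"}, "delta": even_ones["delta"]}
        print(disagreement_fraction(even_ones, odd_ones, "01"))  # -> 1.0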