
    Hint generation in programming tutors

    Programming is increasingly recognized as a useful and important skill. Online programming courses that have appeared in the past decade have proven extremely popular with a wide audience. Learning in such courses is, however, not as effective as working directly with a teacher, who can provide students with immediate, relevant feedback. The field of intelligent tutoring systems seeks to provide such feedback automatically. Traditionally, tutors have depended on a domain model defined by the teacher in advance. Creating such a model is a difficult task that requires a lot of knowledge-engineering effort, especially in complex domains such as programming. A potential solution to this problem is to use data-driven methods. The idea is to build the domain model by observing how students have solved an exercise in the past. New students can then be given feedback that directs them along successful solution paths. Implementing this approach is particularly challenging for programming domains, since the only directly observable student actions are not easily interpretable. We present two novel approaches to creating a domain model for programming exercises in a data-driven fashion. The first approach models programming as a sequence of textual rewrites and learns rewrite rules for transforming programs. With these rules, new student-submitted programs can be automatically debugged. The second approach uses structural patterns in programs’ abstract syntax trees to learn rules for classifying submissions as correct or incorrect. These rules can be used to find erroneous parts of an incorrect program. Both models support automatic hint generation. We have implemented an online application for learning programming and used it to evaluate both approaches. Results indicate that hints generated using either approach have a positive effect on student performance.
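    The second approach lends itself to a compact illustration. Below is a minimal, hypothetical sketch of the AST-pattern idea using Python's standard ast module: it flags one class of error (a recursive function with no conditional, hence no base case) with a hand-written rule of the kind such systems would learn from past submissions. The rule and all names are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: one hand-written structural rule over a
# program's abstract syntax tree. A data-driven tutor would learn many
# such rules from past student submissions rather than hard-code them.
import ast

def missing_base_case(source: str) -> bool:
    """Flag a recursive function that contains no If node (no base case)."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls_itself = any(
                isinstance(n, ast.Call)
                and isinstance(n.func, ast.Name)
                and n.func.id == node.name
                for n in ast.walk(node)
            )
            has_branch = any(isinstance(n, ast.If) for n in ast.walk(node))
            if calls_itself and not has_branch:
                return True  # structural pattern matches: likely incorrect
    return False

buggy = "def fact(n):\n    return n * fact(n - 1)\n"
if missing_base_case(buggy):
    print("Hint: your recursive function never stops; add a base case.")
```

    A matched rule of this kind localizes the erroneous part of the program, which is exactly what is needed to phrase a hint.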

    Automating Human Tutor-Style Programming Feedback: Leveraging GPT-4 Tutor Model for Hint Generation and GPT-3.5 Student Model for Hint Validation

    Generative AI and large language models hold great promise in enhancing programming education by automatically generating individualized feedback for students. We investigate the role of generative AI models in providing human tutor-style programming hints to help students resolve errors in their buggy programs. Recent works have benchmarked state-of-the-art models for various feedback generation scenarios; however, their overall quality is still inferior to that of human tutors and not yet ready for real-world deployment. In this paper, we seek to push the limits of generative AI models toward providing high-quality programming hints and develop a novel technique, GPT4Hints-GPT3.5Val. As a first step, our technique leverages GPT-4 as a "tutor" model to generate hints; it boosts the generative quality by using symbolic information of failing test cases and fixes in prompts. As a next step, our technique leverages GPT-3.5, a weaker model, as a "student" model to further validate the hint quality; it performs an automatic quality validation by simulating the potential utility of providing this feedback. We show the efficacy of our technique via extensive evaluation using three real-world datasets of Python programs covering a variety of concepts, ranging from basic algorithms to regular expressions and data analysis using the pandas library.
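    The two-stage pipeline can be sketched compactly. The snippet below is a rough approximation under stated assumptions: it uses the OpenAI Python client (v1), assumes the model names "gpt-4" and "gpt-3.5-turbo", and reduces the paper's validation to a single check of whether the simulated student can repair the program given the hint; the authors' actual prompts and quality criteria are more elaborate.

```python
# Rough sketch of a tutor-model / student-model hint pipeline.
# Assumptions: OpenAI v1 Python client; a caller-supplied `passes_tests`
# predicate that runs a candidate repair against the exercise's tests.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def generate_hint(buggy_code: str, failing_test: str) -> str:
    # Tutor model: symbolic information (the failing test) goes into the prompt.
    return ask("gpt-4",
               "You are a programming tutor. Give a one-sentence hint, "
               "without code, to help the student fix their Python program.\n\n"
               f"Program:\n{buggy_code}\n\nFailing test:\n{failing_test}")

def validate_hint(buggy_code: str, hint: str, passes_tests) -> bool:
    # Student model: simulate the hint's utility with a weaker model.
    repair = ask("gpt-3.5-turbo",
                 "You are a student. Using only the hint below, output a "
                 f"corrected version of the program.\n\nHint: {hint}\n\n"
                 f"Program:\n{buggy_code}")
    return passes_tests(repair)
```

    Only hints that survive the validation step would be shown to the student; rejected hints can simply be regenerated.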

    Towards an Intelligent Tutor for Mathematical Proofs

    Computer-supported learning is an increasingly important form of study, since it allows for independent learning and individualized instruction. In this paper, we discuss a novel approach to developing an intelligent tutoring system for teaching textbook-style mathematical proofs. We characterize the particularities of the domain and discuss common ITS design models. Our approach is motivated by phenomena found in a corpus of tutorial dialogs that were collected in a Wizard-of-Oz experiment. We show how an intelligent tutor for textbook-style mathematical proofs can be built on top of an adapted assertion-level proof assistant by reusing representations and proof search strategies originally developed for automated and interactive theorem proving. The resulting prototype was successfully evaluated on a corpus of tutorial dialogs and yields good results.
    Comment: In Proceedings THedu'11, arXiv:1202.453
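    To make the architecture concrete, here is a purely illustrative sketch of the tutoring loop such a design implies: each student utterance is checked by the underlying prover, and the tutor's dialog move depends on the verdict. The verify_step interface is a hypothetical stand-in for the adapted assertion-level proof assistant, not something taken from the paper.

```python
# Hypothetical tutoring loop on top of an assertion-level proof assistant.
# verify_step() is a stand-in: in the paper this role is played by an
# adapted theorem prover reusing automated proof search strategies.
from typing import Literal

Verdict = Literal["follows", "too_big_a_step", "wrong"]

def verify_step(proof_state: list[str], step: str) -> Verdict:
    raise NotImplementedError("stand-in for the adapted proof assistant")

def tutor_reply(proof_state: list[str], step: str) -> str:
    verdict = verify_step(proof_state, step)
    if verdict == "follows":
        proof_state.append(step)  # accept the assertion into the proof
        return "Correct, please continue."
    if verdict == "too_big_a_step":
        return "That is true, but please justify the intermediate steps."
    return "That does not follow from what you have established so far."
```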

    Authoring Example-based Tutors for Procedural Tasks

    Researchers who have worked on authoring systems for intelligent tutoring systems (ITSs) have examined how examples may form the basis for authoring. In this chapter, we describe several such systems, consider their commonalities and differences, and reflect on the merit of such an approach. It is perhaps not surprising that several tutor developers have explored how examples can be used in the authoring process. In a broader context, educators and researchers have long known the power of examples in learning new material. Students can gather much information by poring over a worked example, applying what they learn to novel problems. Often these worked examples prove more powerful than direct instruction in the domain. For example, Reed and Bolstad (1991) found that students learning solely from worked examples exhibited much greater learning gains than those taught through procedure-based instruction. By extension, since tutor authoring can be considered to be teaching a tabula rasa tutor, authoring by use of examples may be as powerful as directly programming the instruction, while being easier to do.
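    As a toy illustration of authoring by example, the sketch below records an author's demonstrated solution steps and then tutors by matching student input against them, in the spirit of example-tracing tutors; the data structures and the equation-solving task are invented for illustration, not taken from any system described in the chapter.

```python
# Toy example-based authoring: the author demonstrates the steps once,
# and the tutor checks student work against the recorded demonstration.
# All content here is illustrative (a made-up "solve 2x + 3 = 7" task).
demonstrated_steps = [
    {"action": "subtract 3 from both sides", "result": "2x = 4"},
    {"action": "divide both sides by 2", "result": "x = 2"},
]

def check_student_step(step_index: int, student_result: str) -> str:
    expected = demonstrated_steps[step_index]
    if student_result.strip() == expected["result"]:
        return "Correct."
    return f"Not quite. Hint: {expected['action']}."

print(check_student_step(0, "2x = 4"))  # Correct.
print(check_student_step(1, "x = 4"))   # Not quite. Hint: divide both sides by 2.
```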

    A metacognitive feedback scaffolding system for pedagogical apprenticeship

    This thesis addresses the issue of how to help staff in universities learn to give feedback, with the main focus on helping teaching assistants (TAs) learn to give feedback while marking programming assignments. The result is an innovative approach that has been implemented in a novel computer support system called McFeSPA. The design of McFeSPA is based on an extensive review of the research literature on feedback. McFeSPA has been developed based on relevant work in educational psychology and Artificial Intelligence in EDucation (AIED), e.g., scaffolding the learner, ideas about andragogy, feedback patterns, research into the nature and quality of feedback, and cognitive apprenticeship. McFeSPA draws on work on feedback patterns that have been proposed within the Pedagogical Patterns Project (PPP) to provide guidance on structuring the feedback report given to the student by the TA. The design also draws on the notion of andragogy to support the TA. McFeSPA is the first Intelligent Tutoring System (ITS) that supports adults learning to help students by giving quality feedback. The approach taken is more than a synthesis of these key ideas: the scaffolding framework has been implemented both for the domain of programming and for the feedback domain itself, and the programming domain has been structured both for training TAs to give better feedback and as a framework for the analysis of students’ performance. The construction of feedback was validated by a small group of TAs, who employed McFeSPA in a realistic situation; the system scaffolds the TA and then fades that support. The approach to helping TAs become better feedback givers, as instantiated in McFeSPA, has been validated through an experimental study with a small group of TAs using a triangulation approach. We found that our participants learned differently by using McFeSPA. The evaluation indicates that 1) providing content scaffolding (i.e., detailed feedback about the content using contingent hints) in McFeSPA can help almost all TAs increase their knowledge and understanding of the issues in learning to give feedback; 2) providing metacognitive scaffolding (i.e., prompts at each level of a contingent hint, as well as general pop-up messages shown while using the system, that encourage participants to give good feedback) helped all TAs reflect on and rethink their skills in giving feedback; and 3) once the TAs had acquired knowledge about giving quality feedback, McFeSPA's adaptable fading allowed them to continue learning on their own without support.
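    Two of the mechanisms the abstract describes, contingent hints that become more specific on each request and scaffolding that fades as competence grows, can be sketched in a few lines. This is an illustrative reconstruction, not McFeSPA's implementation; the hint texts, the competence score, and the fading threshold are all assumptions.

```python
# Illustrative sketch of contingent hints with fading (not McFeSPA's code).
# Each repeated request yields a more specific hint; once the TA's
# estimated competence passes a threshold, the scaffolding fades away.
HINT_LADDER = [
    "Consider how specific your feedback to the student is.",
    "Point to the exact line and explain why it is a problem.",
    "Example: 'Line 12 loops once too often; compare range(n) and range(n+1).'",
]

def next_hint(requests_so_far: int, competence: float,
              fade_threshold: float = 0.8) -> str | None:
    if competence >= fade_threshold:
        return None  # faded: the TA now works without support
    level = min(requests_so_far, len(HINT_LADDER) - 1)
    return HINT_LADDER[level]
```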