
    Towards an Intelligent Tutor for Mathematical Proofs

    Computer-supported learning is an increasingly important form of study, since it allows for independent learning and individualized instruction. In this paper, we discuss a novel approach to developing an intelligent tutoring system for teaching textbook-style mathematical proofs. We characterize the particularities of the domain and discuss common ITS design models. Our approach is motivated by phenomena found in a corpus of tutorial dialogs that were collected in a Wizard-of-Oz experiment. We show how an intelligent tutor for textbook-style mathematical proofs can be built on top of an adapted assertion-level proof assistant by reusing representations and proof search strategies originally developed for automated and interactive theorem proving. The resulting prototype was successfully evaluated on a corpus of tutorial dialogs and yields good results.
    Comment: In Proceedings THedu'11, arXiv:1202.453
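    To make the assertion-level idea concrete, here is a small hypothetical Lean sketch (my own illustration, not code or notation from the paper): each step applies a whole hypothesis or lemma as a single inference, matching the granularity of a textbook proof rather than low-level natural deduction.

```lean
-- Hypothetical illustration (not from the paper): assertion-level steps
-- apply whole lemmas/hypotheses at once, the way a textbook proof does.
example (P Q R : Prop) (hpq : P → Q) (hqr : Q → R) (hp : P) : R :=
  hqr (hpq hp)  -- "P gives Q by hpq; then Q gives R by hqr"
```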

    Learning gain differences between ChatGPT and human tutor generated algebra hints

    Large Language Models (LLMs), such as ChatGPT, are quickly advancing AI to the frontiers of practical consumer use and leading industries to re-evaluate how they allocate resources for content production. Authoring of open educational resources and hint content within adaptive tutoring systems is labor intensive. Should LLMs like ChatGPT produce educational content on par with human-authored content, the implications would be significant for further scaling of computer tutoring system approaches. In this paper, we conduct the first learning gain evaluation of ChatGPT by comparing the efficacy of its hints with hints authored by human tutors, with 77 participants across two algebra topic areas, Elementary Algebra and Intermediate Algebra. We find that 70% of hints produced by ChatGPT passed our manual quality checks and that both the human and ChatGPT conditions produced positive learning gains. However, gains were only statistically significant for human tutor-created hints. Learning gains from human-created hints were substantially and statistically significantly higher than those from ChatGPT hints in both topic areas, though ChatGPT participants in the Intermediate Algebra experiment were near ceiling at pre-test and thus not even with the control group. We discuss the limitations of our study and suggest several future directions for the field. Problem and hint content used in the experiment is provided for replicability.
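    As a rough sketch of the kind of learning-gain comparison described above (all scores, sample sizes, and test choices below are invented placeholders, not the study's data or analysis code), one might compute per-condition gains and test them like this:

```python
# Hypothetical sketch of a learning-gain analysis; the scores below are
# invented placeholders, not data from the study.
from scipy import stats

# pre/post test scores per participant, grouped by hint condition
human = {"pre": [0.40, 0.35, 0.50, 0.45], "post": [0.70, 0.65, 0.80, 0.60]}
gpt = {"pre": [0.42, 0.38, 0.55, 0.48], "post": [0.55, 0.50, 0.70, 0.52]}

def gains(cond):
    # raw learning gain: post-test score minus pre-test score, per participant
    return [post - pre for pre, post in zip(cond["pre"], cond["post"])]

g_human, g_gpt = gains(human), gains(gpt)

# within-condition: is each condition's mean gain significantly above zero?
print(stats.ttest_1samp(g_human, 0.0))
print(stats.ttest_1samp(g_gpt, 0.0))

# between conditions: Welch's t-test on the two sets of gains
print(stats.ttest_ind(g_human, g_gpt, equal_var=False))
```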

    The Design and Use of Tools for Teaching Logic


    Automatic Generation of Intelligent Tutoring Capabilities via Educational Data Mining

    Intelligent Tutoring Systems (ITSs) that adapt to an individual student’s needs have shown significant improvement in achievement over non-adaptive instruction (Murray 1999). This improvement occurs due to the individualized instruction and feedback that an ITS provides. In order to achieve the benefits that ITSs provide, we must find a way to simplify their creation. Therefore, we have created methods that can use data to automatically generate hints to adapt computer-aided instruction to help individual students. Our MDP method uses data from past student attempts on a given problem to generate a graph of the likely paths students take to solve it. These graphs can be used by educators to understand clearly how students are solving the problem, or to provide hints for new students working the problem by pointing them down a successful solution path. We introduce the Hint Factory, an implementation of the MDP method in an actual tutor used to solve logic proofs. We show that the Hint Factory can successfully help students solve more problems, and that students with access to hints are more likely to attempt harder problems than those without hints. In addition, we have enhanced the MDP method by creating a “utility” function that allows MDPs to be created when the problem solution may not be labeled. We show that this utility function performs as well as the traditional MDP method for our logic problems. We also created a Bayesian Knowledge Base to combine the information from multiple MDPs into a single corpus that allows the Hint Factory to provide hints on new problems where no student data exists. Finally, we applied the MDP method to create models for other domains, including Stoichiometry and Algebra. This work shows that it is possible to use data to automatically create ITS capabilities, primarily hint generation, in ways that help students solve more, and more difficult, problems, and it builds a foundation for effective visualization and exploration of student work for both teachers and researchers.
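    The MDP idea can be pictured with a minimal sketch (state names, rewards, and the fixed number of value-iteration sweeps below are my own illustrative assumptions, not the Hint Factory implementation): past attempts define a transition graph, value iteration scores each state by how reliably it leads to a solution, and a hint points the student to the highest-value successor.

```python
# Minimal sketch of MDP-based hint generation; data and rewards are
# illustrative assumptions, not the Hint Factory's actual model.
from collections import defaultdict

attempts = [            # hypothetical logged solution paths ("G" = solved)
    ["s0", "s1", "s2", "G"],
    ["s0", "s1", "G"],
    ["s0", "s3"],       # an unsuccessful attempt that stalls at s3
]

# build the transition graph observed in past student work
transitions = defaultdict(set)
for path in attempts:
    for a, b in zip(path, path[1:]):
        transitions[a].add(b)

GOAL_REWARD, STEP_COST, GAMMA = 100.0, -1.0, 0.9
V = defaultdict(float)
V["G"] = GOAL_REWARD

for _ in range(50):     # value iteration (fixed sweeps for simplicity)
    for s, succs in transitions.items():
        V[s] = STEP_COST + GAMMA * max(V[t] for t in succs)

def hint(state):
    # suggest the next state along the highest-value observed path
    succs = transitions.get(state)
    return max(succs, key=V.get) if succs else None

print(hint("s0"))  # -> "s1": the step most likely to lead to a solution
```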

    Automated Feedback for Learning Code Refactoring


    Exploring the visualization of student behavior in interactive learning environments

    My research combines Interactive Learning Environments (ILEs), Educational Data Mining (EDM) and Information Visualization (InfoVis) to inform analysts, educators and researchers about user behavior in software, specifically in computer-based educational systems (CBEs), which include intelligent tutoring systems, computer-aided instruction tools, and educational games. InVis is a novel visualization technique and tool I created for exploring, navigating, and understanding user interaction data. InVis reads in user-interaction data logged from students using educational systems and constructs an Interaction Network from those logs. Using this data, InVis provides an interactive environment that allows instructors and education researchers to navigate and explore it, building new insights and discoveries about student learning. I conducted a three-part user study, which included a quantitative task analysis, qualitative feedback, and a validated usability survey. Through this study, I show that creating an Interaction Network and visualizing it with InVis is an effective means of providing information to users about student behavior. In addition, I provide four use cases describing how InVis has been used to confirm hypotheses and debug software tutors. A major challenge in visualizing and exploring an Interaction Network is its complexity: there are too many nodes and edges to understand the data efficiently. In a typical Interaction Network for twenty students, it is common to have hundreds of nodes, which has proven to be too many to make sense of. I present a network reduction method, based on edge frequencies, which lowers the number of edges and nodes by roughly 90% while maintaining the most important elements of the Interaction Network. Next, I compare the results of this method with three alternative approaches and show that the reduction method produces the preferred results. I also present an ordering-detection method for identifying solution-path redundancy caused by the order of student actions. This method further reduces the number of nodes and edges and moves the resulting network toward the structure of a simple graph. Understanding successful student solutions is only a portion of the behavior we are interested in as researchers and educators using computer-based educational systems; student difficulties are also important. To address areas of student difficulty, I present three different methods and two visual representations to draw the user's attention to nodes where students had difficulty. Those methods include presenting the nodes with the highest number of successful students, the nodes with the highest number of failing students, and the expected difficulty of each state. Combined with a visual representation, these methods can draw users' focus to potentially important nodes containing areas of difficulty for students. Lastly, I present the latest version of the InVis tool, a platform for investigating student behavior in computer-based educational systems. Through the continued use of this tool, new researchers can investigate many new hypotheses, research questions, and student behaviors, with the potential to facilitate a wide range of new discoveries.
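    A small sketch of the edge-frequency reduction (the data and threshold below are hypothetical; this is not the InVis code): keep only edges that enough students traversed, then drop any node left without edges.

```python
# Sketch of frequency-based Interaction Network reduction; the data and
# threshold are hypothetical, not from InVis.
from collections import Counter

def reduce_network(edges, min_count=3):
    # edges: one (from_state, to_state) pair per logged student transition,
    # so duplicates encode how many students took that step
    freq = Counter(edges)
    kept_edges = {e for e, n in freq.items() if n >= min_count}
    kept_nodes = {node for e in kept_edges for node in e}
    return kept_nodes, kept_edges

edges = ([("start", "s1")] * 18 + [("s1", "done")] * 15
         + [("start", "s9")] * 2)          # rare detour via s9
nodes, strong_edges = reduce_network(edges)
print(nodes)         # {'start', 's1', 'done'} -- the rare detour is pruned
print(strong_edges)  # {('start', 's1'), ('s1', 'done')}
```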

    Evaluating the Effectiveness of Tutorial Dialogue Instruction in an Exploratory Learning Context

    [Proceedings of] ITS 2006, 8th International Conference on Intelligent Tutoring Systems, 26-30 June 2006, Jhongli, Taoyuan County, Taiwan.
    In this paper we evaluate the instructional effectiveness of tutorial dialogue agents in an exploratory learning setting. We hypothesize that the creative nature of an exploratory learning environment creates an opportunity for the benefits of tutorial dialogue to be more clearly evidenced than in previously published studies. In a previous study we showed an advantage for tutorial dialogue support in an exploratory learning environment where that support was administered by human tutors [9]. Here, using a similar experimental setup and materials, we evaluate the effectiveness of tutorial dialogue agents modeled after the human tutors from that study. The results from this study provide evidence of a significant learning benefit of the dialogue agents.
    This project is supported by the ONR Cognitive and Neural Sciences Division, Grant number N000140410107.

    Hint generation in programming tutors

    Programming is increasingly recognized as a useful and important skill. Online programming courses that have appeared in the past decade have proven extremely popular with a wide audience. Learning in such courses is, however, not as effective as working directly with a teacher, who can provide students with immediate, relevant feedback. The field of intelligent tutoring systems seeks to provide such feedback automatically. Traditionally, tutors have depended on a domain model defined by the teacher in advance. Creating such a model is a difficult task that requires a lot of knowledge-engineering effort, especially in complex domains such as programming. A potential solution to this problem is to use data-driven methods. The idea is to build the domain model by observing how students have solved an exercise in the past. New students can then be given feedback that directs them along successful solution paths. Implementing this approach is particularly challenging for programming domains, since the only directly observable student actions are not easily interpretable. We present two novel approaches to creating a domain model for programming exercises in a data-driven fashion. The first approach models programming as a sequence of textual rewrites, and learns rewrite rules for transforming programs. With these rules, new student-submitted programs can be automatically debugged. The second approach uses structural patterns in programs’ abstract syntax trees to learn rules for classifying submissions as correct or incorrect. These rules can be used to find the erroneous parts of an incorrect program. Both models support automatic hint generation. We have implemented an online application for learning programming and used it to evaluate both approaches. Results indicate that hints generated using either approach have a positive effect on student performance.
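    To illustrate the second, AST-based approach (a simplified sketch; the pattern below is a hand-picked stand-in for a rule that would normally be learned from past submissions), one can extract crude parent-child node-type patterns and flag a submission whose tree contains a pattern associated with incorrect solutions:

```python
# Simplified sketch of AST-pattern matching for classifying submissions;
# the "buggy" pattern here is a hand-picked stand-in for a learned rule.
import ast

def node_patterns(source):
    # yield (parent_type, child_type) pairs -- a crude structural pattern
    tree = ast.parse(source)
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            yield (type(parent).__name__, type(child).__name__)

# stand-in for a learned rule: comparing against a bare constant literal
BUGGY = ("Compare", "Constant")

submission = "def done(xs):\n    return len(xs) == True\n"
if BUGGY in set(node_patterns(submission)):
    print("hint: check the comparison on the return line")
```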