58,747 research outputs found

    Exploring machine learning methods to automatically identify students in need of assistance

    Copyright 2015 ACM. Methods for automatically identifying students in need of assistance have been studied for decades. Initially, the work was based on somewhat static factors such as students' educational background and results from various questionnaires, while more recently, constantly accumulating data such as progress with course assignments and behavior in lectures has gained attention. We contribute to this work with results on early detection of students in need of assistance, and provide a starting point for using machine learning techniques on naturally accumulating programming process data. When combining source code snapshot data recorded from students' programming process with machine learning methods, we are able to detect high- and low-performing students with high accuracy as early as after the very first week of an introductory programming course. A comparison of our results with prominent methods for predicting students' performance using source code snapshot data is also provided. This early information on students' performance is beneficial from multiple viewpoints. Instructors can target their guidance to struggling students early on and provide more challenging assignments for high-performing students. Moreover, students who perform poorly in the introductory programming course, but who nevertheless pass, can be monitored more closely in their future studies.
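    The abstract above does not spell out the exact features or classifier; the sketch below is only a minimal illustration of the general approach, assuming hypothetical per-student features aggregated from first-week snapshots (snapshot count, assignments attempted, compile-error ratio) and an off-the-shelf scikit-learn classifier, not the paper's actual pipeline.

        # Minimal sketch of week-1 performance prediction from snapshot data.
        # Feature names and the choice of classifier are illustrative
        # assumptions; the paper's actual features and models may differ.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # One row per student: [snapshot_count, assignments_attempted, compile_error_ratio]
        X = np.array([
            [120, 9, 0.15],
            [ 35, 3, 0.60],
            [ 80, 7, 0.25],
            [ 20, 2, 0.70],
            [150, 10, 0.10],
            [ 40, 4, 0.55],
        ])
        # Label: 1 = high-performing at course end, 0 = in need of assistance.
        y = np.array([1, 0, 1, 0, 1, 0])

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, cv=3)  # toy data, illustrative only
        print("cross-validated accuracy:", scores.mean())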

    Automata Tutor v3

    Computer science class enrollments have rapidly risen in the past decade. With current class sizes, standard approaches to grading and providing personalized feedback are no longer possible and new techniques become both feasible and necessary. In this paper, we present the third version of Automata Tutor, a tool for helping teachers and students in large courses on automata and formal languages. The second version of Automata Tutor supported automatic grading and feedback for finite-automata constructions and has already been used by thousands of users in dozens of countries. This new version of Automata Tutor supports automated grading and feedback generation for a greatly extended variety of new problems, including problems that ask students to create regular expressions, context-free grammars, pushdown automata and Turing machines corresponding to a given description, and problems about converting between equivalent models - e.g., from regular expressions to nondeterministic finite automata. Moreover, for several problems, this new version also enables teachers and students to automatically generate new problem instances. We also present the results of a survey run on a class of 950 students, which shows very positive results about the usability and usefulness of the tool.
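    The abstract does not detail the grading algorithms. As one illustration of a building block that automated grading of finite-automaton constructions can rely on, the sketch below checks whether a student DFA and a reference DFA accept the same language by searching the product automaton; the helper and its interface are assumptions for illustration, not Automata Tutor's actual grading code (which also computes partial grades and feedback).

        # Sketch: language equivalence of two complete DFAs over the same
        # alphabet, via BFS over pairs of states (product construction).
        # Hypothetical helper, not Automata Tutor's implementation.
        from collections import deque

        def dfa_equivalent(dfa_a, dfa_b, alphabet):
            """Each DFA is (start_state, accepting_set, transitions),
            where transitions maps (state, symbol) -> state."""
            start_a, accept_a, delta_a = dfa_a
            start_b, accept_b, delta_b = dfa_b
            seen = {(start_a, start_b)}
            queue = deque(seen)
            while queue:
                qa, qb = queue.popleft()
                # A counterexample string exists if exactly one state accepts.
                if (qa in accept_a) != (qb in accept_b):
                    return False
                for sym in alphabet:
                    nxt = (delta_a[(qa, sym)], delta_b[(qb, sym)])
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return True

        # Toy check: both DFAs accept binary strings ending in '1'.
        d1 = ("s", {"t"}, {("s", "0"): "s", ("s", "1"): "t",
                           ("t", "0"): "s", ("t", "1"): "t"})
        d2 = ("p", {"q"}, {("p", "0"): "p", ("p", "1"): "q",
                           ("q", "0"): "p", ("q", "1"): "q"})
        print(dfa_equivalent(d1, d2, {"0", "1"}))  # True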

    Early identification of novice programmers' challenges in coding using machine learning techniques

    It is well known that many first-year undergraduate university students struggle with learning to program. Educational Data Mining (EDM) applies machine learning and statistics to information generated from educational settings. In this PhD project, EDM is used to study first-semester novice programmers, using data collected from students as they work on computers to complete their normal weekly laboratory exercises. Analysis of the generated snapshots has shown the potential for early identification of students who later struggle in the course. The aim of this study is to propose a method for early identification of "at risk" students while providing suggestions on how they can improve their coding style. This PhD project is in its final year.

    LEARNING HOW STUDENTS ARE LEARNING IN PROGRAMMING LAB SESSIONS

    Programming lab sessions help students learn to program in a practical way. Although these sessions are typically valuable to students, it is not uncommon for some participants to fall behind throughout the sessions and leave without fully grasping the concepts covered. In my thesis, I present LabEX, a system for instructors to understand students' progress and learning experience during programming lab sessions. LabEX utilizes statistical techniques that help distinguish struggling students and understand their degree of struggle. LabEX also helps instructors provide in-situ feedback to students with its real-time code review. LabEX was evaluated in an entry-level programming course taken by more than two hundred students at UNIST, establishing that it increases the quality of programming lab sessions.
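    The abstract does not spell out LabEX's statistical techniques. The sketch below shows one plausible flavor of in-session struggle detection, flagging students whose live progress falls well below the rest of the class; the progress metric, threshold, and function name are assumptions for illustration, not LabEX's actual method.

        # Sketch: flag students whose in-lab progress lags far behind the class.
        # The metric (tests passed so far) and the MAD-based threshold are
        # illustrative assumptions, not LabEX's actual statistics.
        import statistics

        def flag_struggling(progress, k=2.0):
            """progress: dict of student id -> tests passed so far.
            Returns ids more than k robust deviations below the class median."""
            values = list(progress.values())
            med = statistics.median(values)
            mad = statistics.median(abs(v - med) for v in values) or 1.0
            return [s for s, v in progress.items() if v < med - k * mad]

        progress = {"alice": 8, "bob": 7, "carol": 2, "dave": 9, "eve": 8}
        print(flag_struggling(progress))  # ['carol']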

    Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps

    Concept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization. Comment: Published at EMNLP 2017.
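    The abstract does not reproduce the proposed evaluation protocol. As a rough illustration of evaluating summaries that take the form of concept maps, the sketch below scores a predicted map against a reference map by strict overlap of (concept, relation, concept) propositions; strict string matching is a simplifying assumption here, whereas the released protocol matches propositions more permissively.

        # Sketch: score a predicted concept map against a reference map by
        # strict proposition overlap. Strict matching is a simplification;
        # the released evaluation protocol is more permissive.
        def proposition_f1(predicted, reference):
            """Each map is a set of (concept, relation, concept) triples."""
            if not predicted or not reference:
                return 0.0
            overlap = len(predicted & reference)
            precision = overlap / len(predicted)
            recall = overlap / len(reference)
            if precision + recall == 0:
                return 0.0
            return 2 * precision * recall / (precision + recall)

        reference = {("concept maps", "represent", "important information"),
                     ("crowdsourcing", "determines", "important elements")}
        predicted = {("concept maps", "represent", "important information"),
                     ("concept maps", "summarize", "document collections")}
        print(round(proposition_f1(predicted, reference), 2))  # 0.5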