    Mining Student Submission Information to Refine Plagiarism Detection

    Plagiarism is becoming an increasingly important issue in introductory programming courses. There are several tools to assist with plagiarism detection, but they are not effective for more basic programming assignments, like those in introductory courses. The proliferation of autograding platforms creates an opportunity to capture additional information about how students develop the solutions to their programming assignments. In this research, we identify how to extract information from an online autograding platform, Mimir Classroom, that can be useful in revealing patterns in solution development. We explore how and to what extent this additional information can be used to better support instructors when identifying cases of probable plagiarism. We have developed a tool that takes the raw student assignment submissions from Mimir, analyzes them, and produces data sets and visualizations that help instructors refine information extracted by existing plagiarism detection platforms. The instructors can then take this information to further investigate any probable cases of plagiarism that have been found by the tool. Our main goal is to give insight into student behaviors and identify signals that can be effective indicators of plagiarism. Furthermore, the framework can enable the analysis of other aspects of students’ solution development processes that may be useful when reasoning about their learning. As an initial exploration scenario of the framework developed in this work, we have used student code submissions from the CSCE 121: Introduction to Program Design and Concepts course at Texas A&M University. We experimented with submissions from the Fall 2018 and Fall 2019 offerings of the course.
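
    The abstract does not specify the analysis itself; as a purely illustrative sketch (not the authors' tool), the snippet below shows the kind of submission-pattern signals such a framework might compute from per-submission records exported by an autograding platform. The CSV field names (student_id, timestamp, source) and the chosen signals are assumptions, not Mimir Classroom's actual export schema.

```python
"""Hypothetical sketch: derive simple solution-development signals from
per-submission records, to help an instructor decide which students'
work merits a closer manual plagiarism review."""
import csv
import difflib
from collections import defaultdict
from datetime import datetime


def load_submissions(path):
    """Group submissions by student and sort each student's list by time.
    Assumes a CSV with student_id, timestamp (ISO 8601), and source columns."""
    by_student = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_student[row["student_id"]].append(
                (datetime.fromisoformat(row["timestamp"]), row["source"])
            )
    for subs in by_student.values():
        subs.sort(key=lambda s: s[0])
    return by_student


def development_signals(subs):
    """Per-student signals: number of submissions, working span in hours,
    and the largest one-step change between consecutive submissions
    (a sudden large rewrite can be worth a manual look)."""
    times = [t for t, _ in subs]
    span_hours = (times[-1] - times[0]).total_seconds() / 3600 if len(times) > 1 else 0.0
    max_jump = 0.0
    for (_, prev), (_, curr) in zip(subs, subs[1:]):
        similarity = difflib.SequenceMatcher(None, prev, curr).ratio()
        max_jump = max(max_jump, 1.0 - similarity)
    return {
        "submissions": len(subs),
        "span_hours": round(span_hours, 2),
        "max_code_jump": round(max_jump, 2),
    }


if __name__ == "__main__":
    for student, subs in load_submissions("submissions.csv").items():
        print(student, development_signals(subs))
```

    Signals like these would complement, not replace, pairwise similarity scores from existing plagiarism detectors; the point of the framework described above is to add development-process context to those scores.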