3 research outputs found

    Investigating the Essential of Meaningful Automated Formative Feedback for Programming Assignments

    This study investigated the essential elements of meaningful automated feedback for programming assignments. Three types of feedback were tested: (a) What's wrong - what the test cases were testing and which of them failed, (b) Gap - comparisons between expected and actual outputs, and (c) Hint - suggestions on how to fix the problem when test cases failed. 46 students taking a CS2 course participated in the study. They were divided into three groups, each receiving a different feedback configuration: (1) Group One - What's wrong, (2) Group Two - What's wrong + Gap, (3) Group Three - What's wrong + Gap + Hint. The study found that simply knowing which test cases failed did not help students sufficiently and might encourage system-gaming behavior. Hints were not found to affect student performance or students' use of the automated feedback. Based on these findings, the study provides practical guidance on the design of automated feedback.
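    The tiered feedback configurations described in this abstract map naturally onto a test-report generator. The sketch below (Python; names such as TestCase and feedback are hypothetical illustrations, not the study's actual system) shows how a single failed test case could be rendered at each of the three levels:

        # Hypothetical sketch: one test case rendered at the three feedback
        # levels from the study -- (a) What's wrong, (b) Gap, (c) Hint.
        from dataclasses import dataclass
        from typing import Callable, Optional

        @dataclass
        class TestCase:
            description: str            # what the test case is testing
            run: Callable[[], object]   # executes the student's code
            expected: object            # expected output
            hint: Optional[str] = None  # advice shown only at the Hint level

        def feedback(test: TestCase, level: int) -> str:
            """Render feedback: 1 = What's wrong, 2 = + Gap, 3 = + Hint."""
            actual = test.run()
            if actual == test.expected:
                return "PASS: " + test.description
            lines = ["FAIL: " + test.description]           # (a) What's wrong
            if level >= 2:                                  # (b) Gap
                lines.append("  expected: %r" % (test.expected,))
                lines.append("  actual:   %r" % (actual,))
            if level >= 3 and test.hint:                    # (c) Hint
                lines.append("  hint: " + test.hint)
            return "\n".join(lines)

        # Example with a deliberately wrong student solution
        def add(a, b):
            return a - b

        case = TestCase("add(2, 3) returns the sum", lambda: add(2, 3), 5,
                        hint="Check the operator used to combine the operands.")
        print(feedback(case, level=3))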

    Combating anonymousness in populous CS1 and CS2 courses

    No full text

    Efficient Use of Teaching Technologies with Programming Education

    Learning and teaching programming are challenging tasks that can be facilitated by different teaching technologies. Visualization systems are software systems that help students form proper mental models of executed program code. They provide visual and textual cues that help students abstract the meaning of a piece of program code or an algorithm. Students also need to practice the skill of programming constantly by implementing programming assignments. These can be assessed automatically by other computer programs, but parts of the evaluation need to be assessed manually by teachers or teaching assistants.
    There are many existing tools that provide partial solutions to the practical problems of programming courses: visualizing program code, assessing student submissions automatically, or rubrics that help keep manual assessment consistent. Taking these tools into use is not straightforward. To succeed, the teacher needs to find suitable tools and integrate them properly into the course infrastructure so that they support the whole learning process. As many programming courses are mass courses, there is a constant struggle between providing sufficient personal guidance and feedback and retaining a reasonable workload for the teacher.
    This work answers the question "How can the teaching of programming be effectively assisted using teaching technologies?" As a solution, different learning taxonomies are presented from a Computer Science perspective and applied to visualization examples so that the examples can better support deeper knowledge and the whole learning process within a programming course. Then, different parts of the assessment process for programming assignments are studied to find the best practices for supporting the process, especially when multiple graders are used, in order to maintain objectivity, consistency, and a reasonable workload in grading.
    The results show that teaching technologies can be a valuable aid for the teacher in supporting students' learning and in the practical organization of the course, without hindering learning results or the personalized feedback students receive on their assignments. This thesis presents new visualization categories that allow deeper cognitive development, together with examples of how to integrate them efficiently into the course infrastructure. It also presents a survey of computer-assisted assessment tools and assessable features that teachers can use in their programming assignments. Finally, the concept of rubric-based assessment tools is introduced to facilitate the manual part of assessing programming assignments.
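    As an illustration of the rubric-based assessment idea mentioned in this abstract, the following sketch shows how a small rubric with named levels could be scored; the data structures, criteria, and point values are assumptions for illustration only, not the thesis's actual tool:

        # Minimal sketch of a rubric with weighted criteria; the structure and
        # names are illustrative, not taken from the thesis.
        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class Criterion:
            name: str
            levels: Dict[str, float]   # level label -> points awarded

        @dataclass
        class Rubric:
            criteria: List[Criterion]

            def score(self, selections: Dict[str, str]) -> float:
                """Sum the points of the level chosen for each criterion.
                Shared, named levels keep grading consistent across graders."""
                return sum(c.levels[selections[c.name]] for c in self.criteria)

        rubric = Rubric([
            Criterion("Correctness", {"all tests pass": 5.0,
                                      "most tests pass": 3.0,
                                      "few tests pass": 1.0}),
            Criterion("Code style", {"idiomatic": 2.0,
                                     "readable": 1.0,
                                     "hard to follow": 0.0}),
        ])

        # One grader's selections for a single submission
        print(rubric.score({"Correctness": "most tests pass",
                            "Code style": "readable"}))   # 4.0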