Automatic marking of Shell programs for students' coursework assessment

Abstract

The number of students in any programming language course is usually large; classes of more than 100 students are not uncommon in some universities. The member of staff teaching such a course therefore has to mark, perhaps weekly, a very large number of programming assignments, and manual marking and assessment is an arduous task. The aim of this work is to describe a computer system for the automatic marking and assessment of students' programs written in the Unix Bourne Shell. In this study, a student's program is assessed by testing its dynamic correctness and its maintainability. To check dynamic correctness, the program is run against sets of input data supplied by the teacher; to assess maintainability, the program is tested statically: the program text is analysed, and its typographic style and complexity are measured. The typographic assessment in this system is adaptable, so that it can reflect the change of emphasis as a course progresses. This study presents the results generated from the assessment of a typical class of students in a Shell programming course. Experience with the development of the typographic assessment system has been generally positive. The results show that it is feasible to automate the assessment of this quality factor, as well as dynamic testing, that realistic grading can be achieved, and that useful feedback can be obtained. The system is useful both to the students learning programming in Shell (Arthur, L. J. and Burns, T., 1996) and to the staff teaching the course. Although the work here focuses on the Bourne Shell (Bourne, S. R., 1987), the study remains valid, with little or no change, for all other shells. The method used can also be applied, with some modification, to other programming languages. Furthermore, this method is not limited to university teaching; it can also be used in other fields for software quality assessment.
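
To make the dynamic-correctness step concrete, the following is a minimal sketch, not the system described in this work, of how a student's submission might be run against teacher-supplied test cases in the Bourne Shell. The tests/*.in and tests/*.out naming convention and the file layout are assumptions made purely for this illustration.

    #!/bin/sh
    # Hypothetical marking sketch: run the student's script once per
    # teacher-supplied input file (tests/N.in) and compare its output
    # with the corresponding expected output file (tests/N.out).
    student_script=$1
    passed=0
    total=0

    for input in tests/*.in
    do
        total=`expr $total + 1`
        expected=`echo "$input" | sed 's/\.in$/.out/'`
        # Run the submission on the test input and capture its output.
        sh "$student_script" < "$input" > /tmp/actual.$$ 2>/dev/null
        # Count the case as passed if the output matches exactly.
        if cmp -s /tmp/actual.$$ "$expected"
        then
            passed=`expr $passed + 1`
        fi
    done
    rm -f /tmp/actual.$$

    echo "Dynamic correctness: $passed of $total test cases passed"

A real harness of this kind would also need to bound the run time of each submission and record per-case feedback for the student; the static (typographic and complexity) assessment is a separate analysis of the program text and is not shown here.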