
    Automatic marking of Shell programs for students' coursework assessment

    The number of students in any programming language course is usually large; more than 100 students is not uncommon in some universities. The member of staff teaching such a course has to mark, perhaps weekly, a very large number of program assignments, so manual marking and assessment is an arduous task. The aim of this work is to describe a computer system for the automatic marking and assessment of students' programs written in the Unix Bourne Shell. In this study, a student's program is assessed by testing its dynamic correctness and its maintainability. To check dynamic correctness, the program is run against sets of input data supplied by the teacher, whereas for maintainability the program is tested statically: its text is analysed, and its typographic style and complexity are measured. The typographic assessment in this system is adaptable, to reflect the change of emphasis as a course progresses. This study presents the results generated from the assessment of a typical class of students in a Shell programming course. The experience with the development of the typographic assessment system has been generally positive: the results show that it is feasible to automate the assessment of this quality factor, as well as dynamic testing, that realistic grading can be achieved, and that useful feedback can be obtained. The system is useful both to the students learning Shell programming (Arthur, L. J. and Burns, T., 1996) and to the staff teaching the course. Although the work here focuses on the Bourne Shell (Bourne, S. R., 1987), the study remains valid, with little or no change, for all other shells, and the method can also be applied, with some modification, to other programming languages. Furthermore, the method is not limited to university teaching; it can also be used in other fields for software quality assessment.
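    The dynamic-correctness check described above is straightforward to sketch. The Python fragment below is a minimal illustration, not the authors' system: it assumes a hypothetical test layout of paired case*.in / case*.out files and scores a student's script by the fraction of cases whose output matches the expected output.

```python
import subprocess
from pathlib import Path

def run_dynamic_tests(script: Path, test_dir: Path, timeout: int = 5) -> float:
    """Run a student's shell script once per teacher-supplied input file and
    compare its stdout with the matching expected-output file.
    Returns the fraction of test cases passed."""
    cases = sorted(test_dir.glob("*.in"))  # assumed layout: case1.in / case1.out
    passed = 0
    for case in cases:
        expected = case.with_suffix(".out").read_text()
        try:
            with case.open() as stdin:
                result = subprocess.run(
                    ["sh", str(script)],  # Bourne-compatible shell
                    stdin=stdin,
                    capture_output=True,
                    text=True,
                    timeout=timeout,
                )
            if result.stdout == expected:
                passed += 1
        except subprocess.TimeoutExpired:
            pass  # a non-terminating program simply fails this case
    return passed / len(cases) if cases else 0.0
```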

    Automatic Image Marking Process

    Efficient evaluation of student programs and timely processing of feedback is a critical challenge for faculty. Despite persistent efforts and significant advances in this field, there is still room for improvement. The present study therefore analyses a system for the automatic assessment and marking of computer science students' programming assignments, which saves teachers and lecturers time and effort because answers are marked automatically and results are returned within a very short period of time. The study develops a statistical framework that relates image keywords to image characteristics using optical character recognition (OCR), and then compares the students' submitted answers with the optimal results. The comparison is based on Latent Semantic Analysis (LSA), and the experimental results show that this simple yet effective technique achieves high efficiency and accuracy in automatic marking.
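    The marking pipeline this abstract describes, OCR followed by an LSA comparison against a model answer, can be sketched with standard tools. The snippet below is an illustrative reconstruction, not the study's code: it assumes the OCR stage has already produced plain text, and it uses scikit-learn's TF-IDF plus truncated SVD (the usual LSA construction) to score each student answer by cosine similarity to the model answer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def lsa_mark(model_answer: str, student_answers: list[str], k: int = 2) -> list[float]:
    """Score each student answer by its cosine similarity to the model
    answer in a k-dimensional latent semantic space (k must be smaller
    than the vocabulary size)."""
    docs = [model_answer] + student_answers
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    # Truncated SVD over a TF-IDF matrix is the standard LSA construction.
    latent = TruncatedSVD(n_components=k).fit_transform(tfidf)
    return cosine_similarity(latent[:1], latent[1:])[0].tolist()
```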

    Customizable and scalable automated assessment of C/C++ programming assignments

    The correction of exercises in programming courses is a laborious task that has traditionally been performed manually. This situation, in turn, delays students' access to feedback that can contribute significantly to their training as future professionals. Over the years, several approaches have been proposed to automate the assessment of students' programs. Static analysis is a well-known technique that can partially simulate the process of manual code review performed by lecturers. As such, it is a plausible option for assessing whether students' solutions meet the requirements imposed on the assignments. However, implementing a personalized analysis beyond the rules included in existing tools may be a complex task for the lecturer without a mechanism that guides the work. In this paper, we present a method to provide automated and specific feedback that immediately informs students about their mistakes in programming courses. To that end, we developed the CAC++ library, which enables the construction of tailored static analysis programs for C/C++ assignments. The library allows great flexibility and personalization of verifications, so they can be adjusted to each particular task, overcoming the limitations of most existing assessment tools. Our approach to providing specific feedback has been evaluated over a period of three academic years in a course on object-oriented programming. The library allowed lecturers to reduce the size of the static analysis programs developed for this course. During this period, the academic results improved and undergraduates positively valued the aid offered when undertaking the implementation of assignments. Funding: Universidad de Cádiz, Grant/Award Numbers: sol-201500054192-tra, sol-201600064680-tra; Ministerio de Ciencia, Innovación y Universidades, Grant/Award Number: RTI2018-093608-B-C33; European Regional Development Fund.
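    CAC++ itself is a C/C++ static-analysis library, and its API is not shown in the abstract. Purely as an illustration of the kind of tailored, per-assignment rule such a library encodes, here is a naive Python sketch that flags banned calls in a source file; the rule set is hypothetical, and a real checker would inspect an AST rather than raw text.

```python
import re
from pathlib import Path

# Hypothetical per-assignment rule: library calls banned in this task.
FORBIDDEN_CALLS = {"gets", "system"}

def check_submission(path: Path) -> list[str]:
    """Return human-readable feedback messages for one C/C++ source file.
    A deliberately naive, text-based illustration of a tailored check."""
    feedback = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for fn in FORBIDDEN_CALLS:
            if re.search(rf"\b{fn}\s*\(", line):
                feedback.append(f"{path.name}:{lineno}: call to '{fn}' is not allowed")
    return feedback
```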

    Teaching programming through paperless assignments: an empirical evaluation of instructor feedback

    This paper considers how the facilities afforded by electronic assignment handling can contribute to the quality of Internet-based teaching of programming. It reports a study comparing the nature, form, and quality of feedback provided by instructors on 90 paper and electronic assignments in an introductory CS course, and notes effective strategies for electronic marking.

    Exploring Problem Solving Paths in a Java Programming Course

    Assessment of students’ programming submissions has been the focus of many studies. Although final submissions capture the whole program, they often tell very little about how it was developed. In this paper, we look at intermediate programming steps using a unique dataset that captures a series of snapshots showing how students developed their programs over time. We assessed each of these intermediate steps and performed a fine-grained, concept-based analysis on each step to identify the most common programming paths. Analysis of the results showed that most students tend to build the program incrementally and improve its correctness as they go. This finding provides evidence that intermediate programming steps are important and need to be taken into account, not only to improve user modelling in educational programming systems, but also to provide better feedback to students.
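    How "incremental" development might be detected from such snapshots is easy to sketch. The function below is a hypothetical illustration, not the paper's analysis: given chronologically ordered snapshots and a scoring callable, it records the correctness trajectory and checks that scores rarely drop between consecutive steps.

```python
from typing import Callable, Sequence

def correctness_path(
    snapshots: Sequence[str],
    run_tests: Callable[[str], float],  # fraction of tests passed, in [0, 1]
    tolerance: float = 0.05,
) -> tuple[list[float], bool]:
    """Score each chronologically ordered snapshot and report whether the
    student built the program incrementally (scores never drop by more
    than `tolerance` between consecutive snapshots)."""
    scores = [run_tests(code) for code in snapshots]
    incremental = all(b >= a - tolerance for a, b in zip(scores, scores[1:]))
    return scores, incremental
```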

    Formative computer based assessment in diagram based domains

    This research argues that the formative assessment of student coursework in free-form, diagram-based domains can be automated using CBA techniques in a way which is both feasible and useful. Formative assessment is that form of assessment in which the objective is to assist the process of learning undertaken by the student. The primary deliverable associated with formative assessment is feedback. CBA courseware provides facilities to implement the full lifecycle of an exercise through an integrated, online system. This research demonstrates that CBA offers unique opportunities for student learning through formative assessment, including allowing students to correct their solutions over a larger number of submissions than it would be feasible to allow within the context of traditional assessment forms. The approach to research involves two main phases. The first phase involves designing and implementing an assessment course using the CourseMarker / DATsys CBA system. This system, in common with many other examples of CBA courseware, was intended primarily to conduct summative assessment. The benefits and limitations of the system are identified. The second phase identifies three extensions to the architecture which encapsulate the difference in requirements between summative assessment and formative assessment, presents a design for the extensions, documents their implementation as extensions to the CourseMarker / DATsys architecture, and evaluates their contribution. The three extensions are novel extensions for free-form CBA which allow the assessment of the aesthetic layout of student diagrams, the marking of student solutions where multiple model solutions are acceptable, and the prioritisation and truncation of feedback prior to its presentation to the student. Evaluation results indicate that the student learning process can be assisted through formative assessment which is automated using CBA courseware. The students learn through an iterative process in which feedback upon a submitted coursework solution is used by the student to improve their solution, after which they may re-submit and receive further feedback.
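    The third extension, prioritisation and truncation of feedback, reduces to ordering messages by importance and capping how many are shown per submission. A minimal sketch under assumed conventions (lower number means higher priority; the field names are hypothetical, not the CourseMarker / DATsys API):

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    message: str
    priority: int  # assumed convention: lower value = more important

def prioritise(items: list[Feedback], limit: int = 5) -> list[Feedback]:
    """Order feedback by importance and truncate it before presentation,
    so a student sees only the most useful messages per submission."""
    return sorted(items, key=lambda f: f.priority)[:limit]
```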

    Computer-based assessment system for e-learning applied to programming education

    Integrated master's thesis. Engenharia Informática e Computação, Faculdade de Engenharia, Universidade do Porto. 201