    Software Verification and Graph Similarity for Automated Evaluation of Students' Assignments

    In this paper we promote introducing software verification and control flow graph similarity measurement into the automated evaluation of students' programs. We present a new grading framework that merges the results obtained by combining these two approaches with results obtained by automated testing, leading to improved quality and precision of automated grading. The two approaches are also useful for providing comprehensible feedback that can help students improve the quality of their programs. We also present our corresponding tools, which are publicly available and open source. The tools are based on the LLVM low-level intermediate code representation, so they can be applied to a number of programming languages. Experimental evaluation of the proposed grading framework is performed on a corpus of university students' programs written in the programming language C. The results of the experiments show that automatically generated grades are highly correlated with manually determined grades, suggesting that the presented tools can find real-world applications in studying and grading
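
    As a rough illustration of the grading idea described above, the sketch below (in Python) blends an automated-testing score with a control flow graph similarity score. The edge-set Jaccard measure and the weights w_tests/w_cfg are illustrative assumptions, not the paper's actual LLVM-based similarity measure or its tuning.

    # Hypothetical sketch: combine test results with CFG similarity into one grade.
    def cfg_similarity(cfg_a, cfg_b):
        """Jaccard similarity over CFG edges; each CFG is a set of (src, dst) pairs."""
        edges_a, edges_b = set(cfg_a), set(cfg_b)
        if not edges_a and not edges_b:
            return 1.0
        return len(edges_a & edges_b) / len(edges_a | edges_b)

    def combined_grade(tests_passed, tests_total, student_cfg, reference_cfgs,
                       w_tests=0.7, w_cfg=0.3):
        """Blend the testing score with the best CFG match against model solutions."""
        test_score = tests_passed / tests_total
        cfg_score = max(cfg_similarity(student_cfg, ref) for ref in reference_cfgs)
        return 100 * (w_tests * test_score + w_cfg * cfg_score)

    # Example: a submission passes 8/10 tests and partially matches one model CFG.
    student = {(0, 1), (1, 2), (2, 3), (1, 3)}
    models = [{(0, 1), (1, 2), (2, 3)}, {(0, 1), (1, 3)}]
    print(round(combined_grade(8, 10, student, models), 1))  # 78.5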

    Programming assignments automatic grading: review of tools and implementations

    Automatic grading of programming assignments is an important topic in academic research. It aims at improving the level of feedback given to students and optimizing the professor's time. Several studies have reported the development of software tools to support this process, so it is helpful to get a quick, clear view of their key features. This paper reviews an ample set of tools for automatic grading of programming assignments. They are divided into the most important mature tools, which have remarkable features, and those built recently, with new features. The review includes the definition and description of key features, e.g. supported languages, underlying technology, infrastructure, etc. The two kinds of tools allow a temporal comparative analysis. This analysis shows good improvements in this research field, including security, broader language support, plagiarism detection, etc. On the other hand, the lack of a grading model for assignments is identified as an important gap in the reviewed tools. Thus, a characterization of evaluation metrics for grading programming assignments is provided as a first step towards such a model. Finally, new paths in this research field are proposed
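
    The grading-model gap noted above can be made concrete with a small sketch: a weighted combination of common evaluation metrics. The metric names, weights, and the plagiarism rule here are assumptions for illustration only; the review characterizes candidate metrics but does not fix a model.

    # Hypothetical grading model: weighted sum of per-metric scores in [0, 1].
    WEIGHTS = {"correctness": 0.6, "style": 0.2, "efficiency": 0.2}

    def grade(metrics, plagiarism_detected=False):
        """metrics maps a metric name to a score in [0, 1]."""
        if plagiarism_detected:        # many tools report plagiarism as a separate outcome
            return 0.0
        score = sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)
        return round(100 * score, 1)

    print(grade({"correctness": 0.9, "style": 0.7, "efficiency": 0.5}))  # 78.0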

    Improving Grading and Feedback of Programming Assignments Using Version Control: An Experience Report

    Leaving meaningful, actionable feedback that students will read and, most importantly, follow up on is essential for strengthening their programming skills. In addition, being proficient with version control platforms, such as git, is a desired skill in industry. Could a marriage between the two, leaving meaningful feedback on student submissions in a version control system, lead them to become better programmers while improving the timeliness and quality of instructors' feedback? This experience report describes how we used GitHub Classroom for programming assignment submission and assessment in CS2. We provide examples of typical feedback using various assessment mechanisms, describe the submission process for students and the assessment process for instructors, and reflect on students' reception of the process and its value, in terms of time and quality, for the instructor

    Investigating the Essential of Meaningful Automated Formative Feedback for Programming Assignments

    This study investigated the essentials of meaningful automated feedback for programming assignments. Three different types of feedback were tested: (a) What's wrong - what the test cases were testing and which failed, (b) Gap - comparisons between expected and actual outputs, and (c) Hint - hints on how to fix problems if test cases failed. 46 students taking a CS2 course participated in this study. They were divided into three groups, and the feedback configuration for each group was different: (1) Group One - What's wrong, (2) Group Two - What's wrong + Gap, (3) Group Three - What's wrong + Gap + Hint. This study found that simply knowing what failed did not help students sufficiently, and might stimulate system-gaming behavior. Hints were not found to have an impact on student performance or on their use of automated feedback. Based on the findings, this study provides practical guidance on the design of automated feedback
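
    A minimal sketch (in Python) of the three feedback configurations described above, assuming a simple test-case record and illustrative hint text; neither is the study's actual instrument.

    # Feedback levels: 1 = What's wrong, 2 = + Gap, 3 = + Hint.
    def run_test(func, case, level):
        actual = func(*case["inputs"])
        passed = actual == case["expected"]
        lines = [f"Test '{case['name']}': {'passed' if passed else 'FAILED'}"]   # What's wrong
        if not passed and level >= 2:                                            # Gap
            lines.append(f"  expected {case['expected']!r}, got {actual!r}")
        if not passed and level >= 3 and case.get("hint"):                       # Hint
            lines.append(f"  hint: {case['hint']}")
        return "\n".join(lines)

    # Example: a buggy absolute-value function under the Group Three configuration.
    buggy_abs = lambda x: x    # forgets to negate negative inputs
    case = {"name": "negative input", "inputs": (-4,), "expected": 4,
            "hint": "check the branch for x < 0"}
    print(run_test(buggy_abs, case, level=3))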

    Automatic Image Marking Process

    Efficient evaluation of student programs and timely processing of feedback is a critical challenge for faculty. Despite persistent efforts and significant advances in this field, there is still room for improvement. Therefore, the present study aims to analyse a system for automatic assessment and marking of computer science students' programming assignments in order to save teachers' and lecturers' time and effort, since answers are marked automatically and the results returned within a very short period of time. The study develops a statistical framework that relates image keywords to image characteristics based on optical character recognition (OCR) and then provides analysis by comparing the students' submitted answers with the optimal results. The method is based on Latent Semantic Analysis (LSA), and the experimental results achieve high efficiency and accuracy with this simple yet effective technique for automatic marking
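
    A minimal sketch of the comparison step described above, using TF-IDF plus truncated SVD (classic LSA) to score OCR'd student answers against a model answer. The OCR step is stubbed out (texts are given directly) and the 0.6 acceptance threshold is an illustrative assumption, not the study's actual pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    model_answer = "a binary search repeatedly halves the sorted array until the key is found"
    student_answers = [
        "binary search halves a sorted array each step until it finds the key",  # close match
        "bubble sort swaps adjacent elements until the list is sorted",          # off topic
    ]

    docs = [model_answer] + student_answers
    tfidf = TfidfVectorizer().fit_transform(docs)                            # term weighting
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # latent space

    scores = cosine_similarity(lsa[:1], lsa[1:])[0]
    for answer, score in zip(student_answers, scores):
        verdict = "accept" if score > 0.6 else "flag for manual review"
        print(f"{score:.2f}  {verdict}: {answer[:40]}")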

    Ethical Implementation of an Automated Essay Scoring (AES) System: A Case Study of Student and Instructor Use, Satisfaction, and Perceptions of AES in a Business Law Course

    A pilot study of a vendor-provided automated essay scoring system was conducted in a Business Law class of 27 students. Students answered a business law fact pattern question which was reviewed and graded by the textbook vendor utilizing artificial intelligence software. Students were surveyed on their use, satisfaction, perceptions, and technical issues while utilizing the Write Experience automated essay scoring (AES) software. The instructor also chronicles the adoption, setup, and use of the AES. Also detailed are the advantages and disadvantages of utilizing such software in an undergraduate course environment where some students may not be technologically adept or may lack motivation to experiment with a new testing procedure

    A Restful Framework for Writing, Running, and Evaluating Code in Multiple Academic Settings

    In academia, students and professors want a well-structured, well-implemented framework for writing and running code in both testing and learning environments. The limitations of the paper-and-pencil medium have led to the creation of many different online grading systems. However, no known system provides all of the essential features our client is interested in. Our system, developed in conjunction with Doctor Halterman, offers the ability to build modules from flat files, allows code to be compiled and run in the browser, provides users with immediate feedback, supports multiple languages, and offers a module designed specifically for an examination environment
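
    A minimal sketch of the kind of REST endpoint such a framework exposes: accept source code, run it, and return immediate feedback. The /run route, the Python-only runner, and the absence of sandboxing are assumptions for illustration, not the system described here.

    import subprocess, sys
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/run", methods=["POST"])
    def run_submission():
        payload = request.get_json()          # e.g. {"code": "print(2+2)", "expected": "4"}
        try:
            result = subprocess.run([sys.executable, "-c", payload["code"]],
                                    capture_output=True, text=True, timeout=5)
        except subprocess.TimeoutExpired:
            return jsonify({"status": "timeout", "feedback": "Program ran too long."})
        passed = result.stdout.strip() == payload.get("expected", "").strip()
        return jsonify({"status": "ok", "passed": passed,
                        "stdout": result.stdout, "stderr": result.stderr})

    if __name__ == "__main__":
        app.run(port=5000)   # then POST JSON submissions to http://localhost:5000/run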

    Authoring and Sharing of Programming Exercises

    In recent years, a number of exercises have been developed and published for educating students in the field of Computer Science. But these exercises exist in their own silos: there is no apparent mechanism to share them among researchers and instructors in an effective and efficient manner. Moreover, the developers of these programming exercises generally use a proprietary system for automatic submission and grading of the exercises. Each of these systems dictates the persistent format of an exercise, which may not be interoperable with other automatic submission and grading systems. This project provides a solution to this problem by modeling a programming exercise as a Learning Object metadata definition. This metadata definition describes the learning resource in terms of its contents, classifications, lifecycle, and several other relevant properties. A Learning Object (LO) is persisted in a repository along with its metadata. This repository supports simple and advanced queries to retrieve LOs and export them to various commercially available or home-grown e-learning systems. In a simple query, keywords given by the user are matched against a number of metadata elements, whereas an advanced query allows a user to specify values for specific metadata elements
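
    A sketch of modeling an exercise as a Learning Object with metadata, together with the simple keyword query described above. The field names loosely follow IEEE LOM-style categories (general, classification, lifecycle); the exact schema and query behaviour here are assumptions for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class ExerciseLO:
        title: str
        description: str
        language: str
        classification: list = field(default_factory=list)   # topic keywords
        lifecycle: str = "draft"                              # draft / published / retired

    def simple_query(repository, keywords):
        """Return LOs whose title, description, or classification match any keyword."""
        hits = []
        for lo in repository:
            haystack = " ".join([lo.title, lo.description, *lo.classification]).lower()
            if any(kw.lower() in haystack for kw in keywords):
                hits.append(lo)
        return hits

    repo = [
        ExerciseLO("Linked list reversal", "Reverse a singly linked list in place.",
                   "C", ["data structures", "pointers"], "published"),
        ExerciseLO("FizzBuzz", "Classic control-flow warm-up.", "Python", ["loops"]),
    ]
    print([lo.title for lo in simple_query(repo, ["pointers"])])  # ['Linked list reversal']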
