
    Interface-based Programming Assignments and Automatic Grading of Java Programs

    AutoGrader is a framework developed at Miami University for the automatic grading of student programming assignments written in the Java programming language. AutoGrader leverages the abstract concept of interfaces, embodied by the Java interface language construct, in both the assignment and grading of programming assignments. The use of interfaces reinforces the role of procedural abstraction in object-oriented programming and allows for a common API to all student code. This common API then enables automatic grading of program functionality. AutoGrader provides a simple instructor API and enables the automatic testing of student code through the Java language features of interfaces and reflection. AutoGrader also supports static code analysis using PMD [4] to detect possible bugs, dead code, and suboptimal or overcomplicated code. While AutoGrader is written in and only handles Java programs, this style of automated grading is adaptable to any language that supports (or can mimic) named interfaces and/or abstract functions and that also supports runtime reflection.
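    A minimal sketch of the interface-and-reflection pattern the abstract describes (the names StackADT and StudentStack are illustrative assumptions, not AutoGrader's actual API): the instructor publishes an interface, students implement it, and the grader loads the submitted class by name at runtime and exercises it through the common interface.

    import java.lang.reflect.Constructor;

    // Hypothetical assignment interface published by the instructor.
    interface StackADT {
        void push(int value);
        int pop();
        boolean isEmpty();
    }

    public class ReflectiveGrader {
        public static void main(String[] args) throws Exception {
            // Load the student's submission by class name and instantiate it
            // through its no-argument constructor via reflection.
            Class<?> submitted = Class.forName("StudentStack"); // hypothetical student class
            Constructor<?> ctor = submitted.getDeclaredConstructor();
            StackADT stack = (StackADT) ctor.newInstance();

            // Exercise the implementation through the common interface.
            int score = 0;
            stack.push(42);
            if (!stack.isEmpty()) score++;
            if (stack.pop() == 42) score++;
            if (stack.isEmpty()) score++;

            System.out.println("Functionality score: " + score + "/3");
        }
    }

    Because every submission is reached only through the shared interface, the same test driver applies to all students without editing their code.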

    Programming assignments automatic grading: review of tools and implementations

    Automatic grading of programming assignments is an important topic in academic research. It aims at improving the level of feedback given to students and optimizing the professor's time. Several studies have reported the development of software tools to support this process, so it is helpful to get a quick and clear overview of their key features. This paper reviews an ample set of tools for automatic grading of programming assignments. They are divided into the most important mature tools, which have remarkable features, and those built recently, with new features. The review includes the definition and description of key features, e.g. supported languages, used technology, infrastructure, etc. The two kinds of tools allow making a temporal comparative analysis. This analysis shows good improvements in this research field; these include security, broader language support, plagiarism detection, etc. On the other hand, the lack of a grading model for assignments is identified as an important gap in the reviewed tools. Thus, a characterization of evaluation metrics to grade programming assignments is provided as a first step towards such a model. Finally, new paths in this research field are proposed.

    A tool assisting teachers in automating the assessment of programming assignments

    Automating the assessment of programming assignments brings benefits for both students and teachers, since it helps the former gain timely feedback and releases the latter from tedious tasks. The related literature in the domain has usually focused on the assessment process and the tools required for it, proposing libraries and systems that teachers can use in this process. However, few of them have worked towards reducing the effort and time teachers require to properly set up new assessment processes. This paper describes our experience with the analysis and design of a new tool to support teachers in visually developing automatic graders of programming assignments, introducing the underlying concepts and technologies and presenting the system architecture.

    Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises

    © ACM 2015. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published at http://dx.doi.org/10.1145/2729094.2742615
    Automated marking of multiple-choice exams is of great interest in university courses with a large number of students. For this reason, it has been systematically implemented in almost all universities. Automatic assessment of source code is, however, less extended. There are several reasons for that. One reason is that almost all existing systems are based on output comparison with a gold standard: if the output is the expected one, the code is correct; otherwise, it is reported as wrong, even if there is only one typo in the code. Moreover, why it is wrong remains a mystery. In general, assessment tools treat the code as a black box, and they only assess the externally observable behavior. In this work we introduce a new code assessment method that also verifies properties of the code, thus allowing the code to be marked even if it is only partially correct. We also report on the use of this system in a real university context, showing that the system automatically assesses around 50% of the work.
    This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economía y Competitividad (Secretaría de Estado de Investigación, Desarrollo e Innovación) under grant TIN2013-44742-C4-1-R and by the Generalitat Valenciana under grant PROMETEOII2015/013. David Insa was partially supported by the Spanish Ministerio de Educación under FPU grant AP2010-4415.
    Insa Cabrera, D.; Silva, J. (2015). Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises. ACM. https://doi.org/10.1145/2729094.2742615
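    The property-checking idea can be illustrated with a small sketch (this is not the authors' library or DSL; the method name, sample input, and weights are invented for illustration): instead of an all-or-nothing output comparison, the grader checks individual properties of the submission and accumulates partial credit for each one that holds.

    import java.lang.reflect.Field;
    import java.lang.reflect.Method;
    import java.lang.reflect.Modifier;

    public class PropertyChecks {

        // Award partial credit for each property of the submission that holds.
        public static double grade(Class<?> submission) {
            double score = 0.0;

            try {
                // Property 1: a public static method long factorial(int) exists.
                Method m = submission.getMethod("factorial", int.class);
                if (m.getReturnType() == long.class && Modifier.isStatic(m.getModifiers())) {
                    score += 0.4;

                    // Property 2: it returns a correct value on one sample input.
                    Object result = m.invoke(null, 5);
                    if (result.equals(120L)) {
                        score += 0.4;
                    }
                }
            } catch (ReflectiveOperationException e) {
                // The method is missing or fails; later properties can still score.
            }

            // Property 3: the class exposes no public non-final fields.
            boolean encapsulated = true;
            for (Field f : submission.getFields()) {
                if (!Modifier.isFinal(f.getModifiers())) {
                    encapsulated = false;
                }
            }
            if (encapsulated) {
                score += 0.2;
            }

            return score; // a value between 0.0 and 1.0
        }
    }

    A submission with the right method signature but a wrong result still earns partial credit, which is the key difference from pure gold-standard output comparison.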

    Software Verification and Graph Similarity for Automated Evaluation of Students' Assignments

    In this paper we promote introducing software verification and control flow graph similarity measurement into the automated evaluation of students' programs. We present a new grading framework that merges results obtained by a combination of these two approaches with results obtained by automated testing, leading to improved quality and precision of automated grading. These two approaches are also useful in providing comprehensible feedback that can help students improve the quality of their programs. We also present our corresponding tools, which are publicly available and open source. The tools are based on the LLVM low-level intermediate code representation, so they can be applied to a number of programming languages. Experimental evaluation of the proposed grading framework is performed on a corpus of university students' programs written in the programming language C. Results of the experiments show that automatically generated grades are highly correlated with manually determined grades, suggesting that the presented tools can find real-world applications in studying and grading.
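    A minimal sketch of the score-combination idea (the weights and the inputs are assumptions for illustration, not the paper's calibrated model): the final grade blends the fraction of passed tests with a control-flow-graph similarity score computed against reference solutions.

    public class CombinedGrade {

        // Blend dynamic testing with static similarity evidence.
        // testsPassed/testsTotal come from an automated test run;
        // cfgSimilarity is a value in [0, 1] from comparing the submission's
        // control flow graph with the closest reference solution.
        public static double grade(int testsPassed, int testsTotal,
                                    double cfgSimilarity, double testWeight) {
            double testScore = testsTotal == 0 ? 0.0 : (double) testsPassed / testsTotal;
            return testWeight * testScore + (1.0 - testWeight) * cfgSimilarity;
        }

        public static void main(String[] args) {
            // Example: 7 of 10 tests pass, CFG similarity 0.9, tests weighted 70%.
            System.out.printf("grade = %.2f%n", grade(7, 10, 0.9, 0.7)); // prints grade = 0.76
        }
    }

    The structural score rewards a submission whose control flow matches a correct solution even when some tests fail, which is what lets the framework grade partially working programs more fairly than testing alone.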

    Automatic Image Marking Process

    Efficient evaluation of student programs and timely processing of feedback is a critical challenge for faculty. Despite persistent efforts and significant advances in this field, there is still room for improvement. Therefore, the present study aims to analyse a system for the automatic assessment and marking of computer science students' programming assignments in order to save teachers or lecturers time and effort, since the answers are marked automatically and the results are returned within a very short period of time. The study develops a statistical framework to relate image keywords to image characteristics based on optical character recognition (OCR) and then provides analysis by comparing the students' submitted answers with the optimal results. The method is based on Latent Semantic Analysis (LSA), and the experimental results show high efficiency and accuracy from this simple yet effective technique for automatic marking.
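    The comparison step can be illustrated with a small sketch (a simplification: full LSA first maps term-frequency vectors into a reduced concept space via SVD, which is omitted here): once the OCR'd student answer and the model answer are represented as vectors in a common space, cosine similarity between them yields the score used for marking.

    public class CosineSimilarity {

        // Cosine similarity between two equally-sized vectors, e.g. a student
        // answer and the model answer after projection into a common space.
        public static double similarity(double[] a, double[] b) {
            double dot = 0.0, normA = 0.0, normB = 0.0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                normA += a[i] * a[i];
                normB += b[i] * b[i];
            }
            if (normA == 0.0 || normB == 0.0) {
                return 0.0; // an empty answer shares nothing with the model answer
            }
            return dot / (Math.sqrt(normA) * Math.sqrt(normB));
        }

        public static void main(String[] args) {
            double[] student = {1, 0, 2, 1};
            double[] model   = {1, 1, 2, 0};
            System.out.printf("similarity = %.3f%n", similarity(student, model)); // prints similarity = 0.833
        }
    }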

    The BOSS system for on-line submission and assessment of computing assignments

    Practical computing courses which involve significant amounts of programming continue to suffer from increasing student numbers. This makes their delivery and management more difficult to achieve effectively with the available resources. One solution to this problem is to develop methods for automating the submission and testing of student programs to support the marking effort and to enable the division of marking tasks among several individuals while ensuring consistency and rigour throughout. We have developed such methods in our system, called BOSS, and have successfully deployed different versions of it on several courses over a number of years. Here, we describe the original system and its recent enhancements, and discuss the benefits it has provided us with, both in terms of administration and in improving the learning process.

    Deepening computer programming skills by using web-based peer assessment

    Peer assessment is a method of motivating students, in which students mark and provide feedback on other students' work. This paper reports on the design and implementation of a novel web-based peer assessment system for computer programming courses, and discusses its deployment on a large programming module. The results indicate that this peer assessment system has successfully helped students to develop their understanding of computer programming.