
    Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises

    Automated marking of multiple-choice exams is of great interest in university courses with large numbers of students, and for this reason it has been adopted systematically in almost all universities. Automatic assessment of source code is, however, far less widespread. One reason is that almost all existing systems are based on output comparison with a gold standard: if the output is the expected one, the code is correct; otherwise it is reported as wrong, even if there is only a single typo in the code, and why it is wrong remains a mystery. In general, assessment tools treat the code as a black box and assess only its externally observable behavior. In this work we introduce a new code assessment method that also verifies properties of the code, making it possible to mark code even when it is only partially correct. We also report on the use of this system in a real university context, showing that the system automatically assesses around 50% of the work.
    Insa Cabrera, D.; Silva, J. (2015). Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises. ACM. https://doi.org/10.1145/2729094.2742615
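
    To make the contrast with pure output comparison concrete, the following is a minimal sketch of property-based marking, assuming a hypothetical StudentSolution class with a static factorial method; the property list and mark weights are illustrative and are not the library or DSL presented in the paper. Each verified property contributes a fraction of the mark, so a submission with the right structure but a failing corner case still earns partial credit.

        import java.lang.reflect.Method;
        import java.lang.reflect.Modifier;

        // Hypothetical sketch: awards partial credit per verified property instead of
        // a single pass/fail output comparison. Class, method, and weights are
        // illustrative assumptions, not the paper's DSL.
        public class PropertyBasedMarker {

            public static void main(String[] args) throws Exception {
                double mark = 0.0;
                Class<?> cls;
                try {
                    cls = Class.forName("StudentSolution");   // student-submitted class
                    mark += 0.2;                              // property 1: class exists
                } catch (ClassNotFoundException e) {
                    System.out.println("Mark: 0.0 (class not found)");
                    return;
                }

                Method m = null;
                try {
                    m = cls.getMethod("factorial", int.class); // property 2: expected signature
                    if (m.getReturnType() == long.class && Modifier.isStatic(m.getModifiers())) {
                        mark += 0.3;
                    }
                } catch (NoSuchMethodException e) {
                    // signature missing: behavioral properties cannot be checked
                }

                if (m != null) {
                    // property 3: behavior on sample inputs, each worth a fraction of the mark
                    int[] inputs = {0, 1, 5};
                    long[] expected = {1L, 1L, 120L};
                    for (int i = 0; i < inputs.length; i++) {
                        try {
                            Object result = m.invoke(null, inputs[i]);
                            if (result instanceof Long && (Long) result == expected[i]) {
                                mark += 0.5 / inputs.length;
                            }
                        } catch (Exception e) {
                            // a failing case loses only its own fraction of the mark
                        }
                    }
                }
                System.out.printf("Mark: %.2f%n", mark);
            }
        }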

    Software Verification and Graph Similarity for Automated Evaluation of Students' Assignments

    Get PDF
    In this paper we promote introducing software verification and control flow graph similarity measurement into the automated evaluation of students' programs. We present a new grading framework that merges results obtained by a combination of these two approaches with results obtained by automated testing, leading to improved quality and precision of automated grading. These two approaches are also useful in providing comprehensible feedback that can help students to improve the quality of their programs. We also present our corresponding tools, which are publicly available and open source. The tools are based on the LLVM low-level intermediate code representation, so they can be applied to a number of programming languages. Experimental evaluation of the proposed grading framework is performed on a corpus of university students' programs written in the programming language C. The results show that automatically generated grades are highly correlated with manually determined grades, suggesting that the presented tools can find real-world application in studying and grading.
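
    As a rough illustration of how such evidence might be merged, the sketch below combines a test pass rate with a structural similarity between control flow graphs; the Jaccard measure over edge labels and the 0.6/0.4 weights are assumptions for illustration, not the paper's LLVM-based metrics.

        import java.util.HashSet;
        import java.util.Set;

        // Illustrative only: a Jaccard similarity over CFG edges stands in for the
        // paper's graph-similarity measure, and the 0.6/0.4 weights are assumptions.
        public class CombinedGrader {

            // Each edge is encoded as "fromBlock->toBlock".
            static double cfgSimilarity(Set<String> studentEdges, Set<String> referenceEdges) {
                Set<String> union = new HashSet<>(studentEdges);
                union.addAll(referenceEdges);
                if (union.isEmpty()) return 1.0;
                Set<String> intersection = new HashSet<>(studentEdges);
                intersection.retainAll(referenceEdges);
                return (double) intersection.size() / union.size();
            }

            static double grade(double testPassRate, Set<String> studentEdges, Set<String> referenceEdges) {
                // Weighted combination of dynamic (testing) and static (similarity) evidence.
                return 0.6 * testPassRate + 0.4 * cfgSimilarity(studentEdges, referenceEdges);
            }

            public static void main(String[] args) {
                Set<String> student = Set.of("entry->loop", "loop->loop", "loop->exit");
                Set<String> reference = Set.of("entry->loop", "loop->loop", "loop->exit", "entry->exit");
                System.out.printf("Grade: %.2f%n", grade(0.75, student, reference));
            }
        }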

    Evaluation of a tool for Java structural specification checking

    Although a number of tools for evaluating Java code functionality and style exist, little work has been done in a distance learning context on automated marking of Java programs with respect to structural specifications. Such automated checks support human markers in assessing students' work and evaluating their own marking; online automated marking; students checking code before submitting it for marking; and question setters evaluating the completeness of the questions they set. This project developed and evaluated a prototype tool that performs an automated check of a Java program's correctness with respect to a structural specification. Questionnaires and interviews were used to gather feedback on the usefulness of the tool as a marking aid to humans, and on its potential usefulness to students for self-assessment when working on their assignments. Markers were asked to compare the usefulness of structural specification testing with that of other kinds of support, including syntax error assistance, style checking and functionality testing. Initial results suggest that most markers using the structural specification checking tool found it to be useful, and some reported that it increased their accuracy in marking. Reasons for not using the tool included lack of time and the simplicity of the assignment it was trialled on. Some reservations were expressed about reliance on tools for assessment, both for markers and for students. The need for advice on incorporating tools into the marking workflow is suggested.
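
    The kind of structural specification check described can be sketched with the standard reflection API; the required BankAccount structure below is invented for illustration and is not the prototype tool's specification format.

        import java.lang.reflect.Field;
        import java.lang.reflect.Modifier;
        import java.util.ArrayList;
        import java.util.List;

        // Illustrative sketch: reports which parts of a required structure a student
        // class satisfies, independently of whether the code behaves correctly.
        public class StructuralChecker {

            public static void main(String[] args) {
                List<String> problems = new ArrayList<>();
                Class<?> cls;
                try {
                    cls = Class.forName("BankAccount");        // hypothetical required class
                } catch (ClassNotFoundException e) {
                    System.out.println("Missing class: BankAccount");
                    return;
                }

                try {
                    Field balance = cls.getDeclaredField("balance");  // required private field
                    if (balance.getType() != double.class) problems.add("balance should be a double");
                    if (!Modifier.isPrivate(balance.getModifiers())) problems.add("balance should be private");
                } catch (NoSuchFieldException e) {
                    problems.add("missing field: balance");
                }

                try {
                    cls.getMethod("deposit", double.class);           // required public method
                } catch (NoSuchMethodException e) {
                    problems.add("missing public method: deposit(double)");
                }

                if (!Comparable.class.isAssignableFrom(cls)) {
                    problems.add("class should implement Comparable");
                }

                System.out.println(problems.isEmpty() ? "Structure OK" : "Structural issues: " + problems);
            }
        }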

    Marking complex assignments using peer assessment with an electronic voting system and an automated feedback tool

    The work described in this paper relates to the development and use of a range of initiatives for marking complex master's-level assignments involving the development of web applications. In the past such assignments have proven difficult to mark since they assess a range of skills including programming, human-computer interaction and design. Based on the experience of several years marking such assignments, the module delivery team adopted an approach whereby the students marked each other's practical work using an electronic voting system (EVS). The results of this are presented in the paper along with a statistical comparison with the tutors' marking, providing evidence for the efficacy of the approach. The second part of the assignment related to theory and documentation and was marked by the tutors using an automated feedback tool. It was found that the time to mark the work was reduced by more than 30% in all cases compared to previous years. More importantly, it was possible to provide good-quality individual feedback to learners rapidly: feedback was delivered to all students within three weeks of the submission date.

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    Computing with CodeRunner at Coventry University: Automated summative assessment of Python and C++ code

    CodeRunner is a free, open-source Moodle plugin for automatically marking student code. We describe our experience using CodeRunner for summative assessment in our first-year undergraduate programming curriculum at Coventry University. We use it to assess both Python 3 and C++14 code (CodeRunner also supports other languages). We give examples of our questions and report on how key metrics have changed following its use at Coventry. Comment: 4 pages. Accepted for presentation at CEP2
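
    CodeRunner-style marking is, at its core, test-case based: the student's program is run against instructor-written cases and awarded the marks attached to each case it passes. The sketch below models that loop in plain Java with hypothetical types; it is not the plugin's actual question or template format.

        import java.util.List;
        import java.util.function.Function;

        // Hypothetical sketch of test-case based marking; these types are illustrative,
        // not CodeRunner's API.
        public class TestCaseMarker {

            record TestCase(String input, String expectedOutput, double marks) {}

            // Runs the student's solution (modelled here as a function from input to output)
            // against every test case and totals the marks for exact-output matches.
            static double mark(Function<String, String> studentSolution, List<TestCase> cases) {
                double total = 0.0;
                for (TestCase tc : cases) {
                    String actual = studentSolution.apply(tc.input()).trim();
                    if (actual.equals(tc.expectedOutput())) {
                        total += tc.marks();
                    }
                }
                return total;
            }

            public static void main(String[] args) {
                List<TestCase> cases = List.of(
                        new TestCase("2 3", "5", 1.0),
                        new TestCase("10 -4", "6", 1.0));
                // A toy "submission" that adds two integers read from its input line.
                Function<String, String> submission = line -> {
                    String[] parts = line.trim().split("\\s+");
                    return String.valueOf(Integer.parseInt(parts[0]) + Integer.parseInt(parts[1]));
                };
                System.out.println("Marks: " + mark(submission, cases) + " / 2.0");
            }
        }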

    Automata Tutor v3

    Computer science class enrollments have risen rapidly in the past decade. With current class sizes, standard approaches to grading and providing personalized feedback are no longer possible, and new techniques become both feasible and necessary. In this paper, we present the third version of Automata Tutor, a tool for helping teachers and students in large courses on automata and formal languages. The second version of Automata Tutor supported automatic grading and feedback for finite-automata constructions and has already been used by thousands of users in dozens of countries. This new version of Automata Tutor supports automated grading and feedback generation for a greatly extended variety of new problems, including problems that ask students to create regular expressions, context-free grammars, pushdown automata and Turing machines corresponding to a given description, and problems about converting between equivalent models (e.g., from regular expressions to nondeterministic finite automata). Moreover, for several problems, this new version also enables teachers and students to automatically generate new problem instances. We also present the results of a survey run on a class of 950 students, which shows very positive results about the usability and usefulness of the tool.
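
    One of the checks underlying such grading, deciding whether a student's finite automaton accepts the same language as a reference automaton, can be sketched by exploring the product automaton. The DFA encoding below is an assumption for illustration and is not Automata Tutor's internal representation, which also has to produce feedback rather than a plain verdict.

        import java.util.ArrayDeque;
        import java.util.HashSet;
        import java.util.Set;

        // Illustrative sketch: decides whether a student DFA accepts the same language
        // as a reference DFA by exploring reachable state pairs of the product automaton.
        public class DfaEquivalence {

            // delta[state][symbol] = next state; accepting[state] marks final states.
            record Dfa(int[][] delta, boolean[] accepting, int start) {}

            static boolean equivalent(Dfa a, Dfa b, int alphabetSize) {
                Set<Long> visited = new HashSet<>();
                ArrayDeque<long[]> queue = new ArrayDeque<>();
                queue.add(new long[]{a.start(), b.start()});
                while (!queue.isEmpty()) {
                    long[] pair = queue.poll();
                    int p = (int) pair[0], q = (int) pair[1];
                    if (!visited.add(((long) p << 32) | q)) continue;
                    // A reachable pair that disagrees on acceptance witnesses inequivalence.
                    if (a.accepting()[p] != b.accepting()[q]) return false;
                    for (int c = 0; c < alphabetSize; c++) {
                        queue.add(new long[]{a.delta()[p][c], b.delta()[q][c]});
                    }
                }
                return true;
            }

            public static void main(String[] args) {
                // Reference: binary strings with an even number of 1s (alphabet {0, 1}).
                Dfa reference = new Dfa(new int[][]{{0, 1}, {1, 0}}, new boolean[]{true, false}, 0);
                // Student attempt: same language written with a redundant third state.
                Dfa student = new Dfa(new int[][]{{0, 1}, {2, 0}, {2, 0}}, new boolean[]{true, false, false}, 0);
                System.out.println("Equivalent: " + equivalent(student, reference, 2));
            }
        }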