
    Early experiences of computer‐aided assessment and administration when teaching computer programming

    This paper describes early experiences with the Ceilidh system, currently being piloted at over 30 institutions of higher education. Ceilidh is a course-management system for teaching computer programming whose core is an auto-assessment facility. This facility automatically marks students' programs from a range of perspectives, and may be used in an iterative manner, enabling students to work towards a target level of attainment. Ceilidh also includes extensive course-administration and progress-monitoring facilities, as well as support for other forms of assessment, including short-answer marking and the collation of essays for later hand-marking. The paper discusses the motivation for developing Ceilidh, outlines its major facilities, and summarizes experiences of developing and actually using it at the coal-face over three years of teaching.
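
    The abstract does not detail Ceilidh's marking internals, but the core of this style of auto-assessment is output comparison: run the student's program on stored test input and compare what it prints against a reference. A minimal sketch in Java, with hypothetical class and file names:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of output-comparison auto-marking (hypothetical names,
// not Ceilidh's actual implementation). Runs a compiled student class on a
// stored test input and compares its trimmed output with the expected output.
public class OutputMarker {
    public static boolean mark(String studentClass, Path input, Path expected)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder("java", studentClass)
                .redirectInput(input.toFile())
                .redirectErrorStream(true)
                .start();
        String actual = new String(p.getInputStream().readAllBytes()).trim();
        p.waitFor();
        return actual.equals(Files.readString(expected).trim());
    }

    public static void main(String[] args) throws Exception {
        boolean ok = mark("StudentSolution",
                Path.of("tests/case1.in"), Path.of("tests/case1.out"));
        System.out.println(ok ? "PASS" : "FAIL");
    }
}
```

    Looped over many such test cases, this yields the iterative workflow the abstract describes: students resubmit until enough cases pass to reach the target level of attainment.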

    Automated Feedback for 'Fill in the Gap' Programming Exercises

    Timely feedback is a vital component of the learning process. It is especially important for beginning students in Information Technology, since many have not yet formed an effective internal model of a computer that they can use to construct viable knowledge. Research has shown that learning efficiency increases when students receive immediate feedback. Automatic analysis of student programs has the potential to provide that immediate feedback and to assist teaching staff in the marking process. This paper describes a “fill in the gap” programming analysis framework which tests students' solutions, gives feedback on their correctness, detects logic errors, and provides hints on how to fix them. The framework is currently used with the Environment for Learning to Program (ELP) system at Queensland University of Technology (QUT); however, it can be integrated into any existing online learning environment or programming Integrated Development Environment (IDE).
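
    The framework itself is not reproduced in the abstract; the sketch below illustrates, under assumed names that are not ELP's API, how a gap checker can splice a student's fragment into template code, compile it, and test it by reflection, printing a hint when the logic is wrong.

```java
import java.lang.reflect.Method;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.ToolProvider;

// Sketch of a "fill in the gap" checker. The student supplies only the body
// of max(); we splice it into a template, compile the result, and test it.
public class GapChecker {
    static final String TEMPLATE =
        "public class Answer {\n" +
        "  public static int max(int a, int b) {\n" +
        "    /*GAP*/\n" +
        "  }\n" +
        "}\n";

    public static void main(String[] args) throws Exception {
        String studentFragment = "return a > b ? a : b;";  // the submitted gap
        Path dir = Files.createTempDirectory("gap");
        Path src = dir.resolve("Answer.java");
        Files.writeString(src, TEMPLATE.replace("/*GAP*/", studentFragment));

        // Compile the spliced source; a nonzero status means it did not compile.
        int status = ToolProvider.getSystemJavaCompiler()
                .run(null, null, null, src.toString());
        if (status != 0) { System.out.println("Does not compile"); return; }

        // Load the class and run a small test, printing a hint on failure.
        try (URLClassLoader cl = new URLClassLoader(
                new java.net.URL[]{ dir.toUri().toURL() })) {
            Method max = cl.loadClass("Answer").getMethod("max", int.class, int.class);
            int got = (int) max.invoke(null, 3, 7);
            System.out.println(got == 7 ? "Correct"
                : "Logic error: max(3, 7) returned " + got + ", expected 7");
        }
    }
}
```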

    Program analysis and evaluation using QUIMERA

    In recent years, a new challenge has arisen within the programming community: programming contests. Programming contests vary slightly in their rules, but all are intended to assess competitors' ability to solve problems using a computer. These contests raise three kinds of challenge: creating a good problem statement (for the members of the scientific committee); solving the problem well (for the programmers); and finding a fair way to assess the results (for the judges). This paper presents QUIMERA, a web-based application intended to be a full programming-contest management system as well as an automatic judge. Besides the traditional dynamic approach to program evaluation, QUIMERA also provides static analysis of the program for a finer-grained assessment of solutions. The static analysis builds on technology developed for compilers and language-based tools and is supported by source-code analysis and software metrics.
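
    The abstract does not specify which software metrics QUIMERA computes. Purely as an illustration of the static side, the sketch below derives a crude cyclomatic-complexity proxy by counting branching constructs, the kind of source-level measure a static pass can report alongside pass/fail dynamic results:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: approximates cyclomatic complexity by counting
// branching keywords and short-circuit operators in the source text.
// A real judge would work on a parse tree rather than raw text.
public class MetricSketch {
    private static final Pattern BRANCHES =
        Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

    public static int complexity(String source) {
        Matcher m = BRANCHES.matcher(source);
        int count = 1;               // one path through straight-line code
        while (m.find()) count++;
        return count;
    }

    public static void main(String[] args) throws Exception {
        String src = Files.readString(Path.of(args[0]));
        System.out.println("approx. cyclomatic complexity: " + complexity(src));
    }
}
```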

    Building Robust E-learning Software Systems Using Web Technologies

    Building a robust e-learning software platform represents a major challenge for both the project manager and the development team. Since the functionality of these software systems improves and grows by the day, several aspects must be taken into consideration (e.g. workflows, use-cases, or alternative scenarios) in order to create a well-standardized and fully functional integrated learning management system. The paper focuses on a model of implementation for an e-learning software system, analyzing its features and functional mechanisms and exemplifying an implementation algorithm. A list of some of the most widely used web technologies (both server-side and client-side) is analyzed, and major security flaws of web applications are also discussed.
    Keywords: E-learning, E-testing, Web Technology, Software System, Web Platform

    Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises

    © ACM 2015. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published at http://dx.doi.org/10.1145/2729094.2742615

    Automated marking of multiple-choice exams is of great interest in university courses with a large number of students, and for this reason it has been adopted systematically in almost all universities. Automatic assessment of source code, however, is less widespread, for several reasons. One reason is that almost all existing systems are based on output comparison against a gold standard: if the output is as expected, the code is correct; otherwise it is reported as wrong, even if there is only one typo in the code, and why it is wrong remains a mystery. In general, assessment tools treat the code as a black box, and they only assess its externally observable behavior. In this work we introduce a new code-assessment method that also verifies properties of the code, thus allowing code to be marked even if it is only partially correct. We also report on the use of this system in a real university context, showing that the system automatically assesses around 50% of the work.

    This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economía y Competitividad (Secretaría de Estado de Investigación, Desarrollo e Innovación) under grant TIN2013-44742-C4-1-R, and by the Generalitat Valenciana under grant PROMETEOII/2015/013. David Insa was partially supported by the Spanish Ministerio de Educación under FPU grant AP2010-4415.

    Insa Cabrera, D.; Silva, J. (2015). Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises. ACM. https://doi.org/10.1145/2729094.2742615
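
    The paper's library and DSL are not reproduced in the abstract above. As a minimal sketch of the underlying idea, grading individual properties of submitted code so that partially correct work still earns credit, the plain-Java fragment below checks a hypothetical student class by reflection; the Property interface and the gcd example are assumptions, not the authors' API.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Illustrative sketch, not the paper's library or DSL: each Property checks
// one aspect of a submission and contributes to the mark independently, so
// code earns partial credit even when some checks fail.
public class PropertyMarker {

    interface Property {
        String name();
        boolean holds(Class<?> c) throws Exception;
    }

    // Property: the class declares a static method with the given signature.
    static Property hasStaticMethod(String name, Class<?>... params) {
        return new Property() {
            public String name() { return "declares static " + name; }
            public boolean holds(Class<?> c) throws Exception {
                Method m = c.getDeclaredMethod(name, params);
                return Modifier.isStatic(m.getModifiers());
            }
        };
    }

    // Property: calling the (int, int) method with args yields expected.
    static Property returns(String name, Object expected, Object... args) {
        return new Property() {
            public String name() { return name + "(...) == " + expected; }
            public boolean holds(Class<?> c) throws Exception {
                Method m = c.getDeclaredMethod(name, int.class, int.class);
                return expected.equals(m.invoke(null, args));
            }
        };
    }

    public static double mark(Class<?> submission, Property... props) {
        int passed = 0;
        for (Property p : props) {
            boolean ok;
            try { ok = p.holds(submission); } catch (Exception e) { ok = false; }
            System.out.printf("%-28s %s%n", p.name(), ok ? "ok" : "FAILED");
            if (ok) passed++;
        }
        return 100.0 * passed / props.length;  // partial credit, not all-or-nothing
    }

    public static void main(String[] args) throws Exception {
        Class<?> submission = Class.forName(args[0]);  // e.g. a student's Gcd class
        double score = mark(submission,
            hasStaticMethod("gcd", int.class, int.class),
            returns("gcd", 6, 12, 18),
            returns("gcd", 1, 7, 13));
        System.out.printf("score: %.0f%%%n", score);
    }
}
```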

    Automatic Grading of Programming Assignments

    Solving practical problems is one of the important aspects of learning a programming language, but the assessment of programming problems is not straightforward: it involves time-consuming and tedious steps to compile and test each solution. In this project, I have developed an online tool, Javabrat, that allows students and language learners to practice Java and Scala problems. Javabrat automatically assesses the user's program and provides instant feedback. Users can also contribute their own programming problems to the existing problem set. I have also developed a plugin for the Moodle learning management system. This plugin allows instructors to create Java programming assignments in Moodle and facilitates automatic grading of the Java problems.
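
    Javabrat's grading interface is not described in detail in the abstract. Assuming instructor-written JUnit 4 tests (a common choice for tools of this kind, not necessarily Javabrat's), a grader can run them programmatically and turn the pass rate into a score, with hints drawn from the failure messages:

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Hypothetical sketch of unit-test-based grading: run an instructor-written
// JUnit 4 test class ("GcdTest" is an assumed name) against the student's
// code and convert the pass rate into a score. Requires junit on the classpath.
public class JUnitGrader {
    public static void main(String[] args) throws Exception {
        Result r = JUnitCore.runClasses(Class.forName("GcdTest"));
        int total = r.getRunCount();
        int passed = total - r.getFailureCount();
        System.out.printf("Passed %d/%d tests (score %.0f%%)%n",
                passed, total, 100.0 * passed / Math.max(total, 1));
        r.getFailures().forEach(f -> System.out.println("Hint: " + f.getMessage()));
    }
}
```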