7 research outputs found

    Increasing the similarity of programming code structures to accelerate the marking process in a new semi-automated assessment approach

    The increased number of students in higher education learning programming languages makes the efficient and effective assessment of student work more important. Academic researchers have therefore focused on automating the marking of programming assignments. However, the fully automated approach to marking has its issues. This study provides an approach geared towards reducing marking time while providing comprehensive, effective and consistent feedback on novice programmers’ code scripts. To assess novices’ code scripts, a new semi-automated assessment approach has been developed. This paper focuses on the semi-automated assessment of programming code segments, in particular on increasing the similarity between code segments using generic rules. The code segments referred to are ‘for’ and ‘while’ loops and the sequential parts of code scripts. The initial results and findings for the proposed approach are positive and point to the need for further research in this area.
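The core idea above, applying generic rules so that structurally equivalent loop segments become identical and can share one marker comment, can be sketched as follows. This is a minimal illustration with made-up rules (whitespace collapsing, loop-variable and literal placeholders), not the paper's actual rule set:

```python
import re

def normalize_segment(code: str) -> str:
    """Apply generic rewriting rules so structurally similar loop
    segments compare as equal (hypothetical rules for illustration)."""
    norm = code.strip()
    # Rule 1: collapse all whitespace runs to a single space.
    norm = re.sub(r"\s+", " ", norm)
    # Rule 2: map common single-letter loop variables to a placeholder.
    norm = re.sub(r"\b[ijk]\b", "VAR", norm)
    # Rule 3: treat numeric literals as interchangeable bounds.
    norm = re.sub(r"\b\d+\b", "NUM", norm)
    return norm

a = "for i in range(10): total += i"
b = "for j in range(20):  total += j"
print(normalize_segment(a) == normalize_segment(b))  # True
```

Once two segments normalize to the same string, feedback written for one can be offered for the other, which is what accelerates marking.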

    Improving marking efficiency for longer programming solutions based on a semi-automated assessment approach

    In recent years, many students in higher education have begun to learn programming languages. In doing so they complete a variety of programming tasks of varying degrees of complexity. Students need consistent and personalised feedback to develop their programming skills. Human markers can provide personalised feedback using traditional manual approaches to assessment, but they may provide inconsistent feedback (especially for long programming solutions), since marking the programming solutions of multiple students can represent a significant workload. While fully automated assessment systems are best placed to provide consistent feedback, they may not provide sufficiently personalised feedback for novice programmers. This study develops a novel semi-automated assessment approach to improve the efficiency of human markers and to increase the consistency of feedback for both short and long programming solutions. It advocates the reuse of human markers’ comments for similar code snippets, defined in this study as segmented marking. New full and partial marking models are developed based on segmented marking and are tested by expert markers. The findings show that the two models are similar in efficiency, but that the partial marking approach potentially offers improved efficiency for longer programming solutions. This finding has significant potential to reduce time spent on marking throughout the sector, with significant impact on both resourcing and the timeliness of feedback.
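The comment-reuse mechanism at the heart of segmented marking can be sketched as a simple cache keyed on a normalised code segment. The class and function names here are illustrative, not taken from the published tool:

```python
class CommentBank:
    """Reuse a human marker's feedback for segments already seen,
    so identical snippets always receive identical comments."""

    def __init__(self):
        self._bank = {}  # normalised segment -> stored comment

    def mark(self, segment: str, ask_marker):
        key = " ".join(segment.split())   # crude whitespace normalisation
        if key in self._bank:
            return self._bank[key]        # reuse -> consistent feedback
        comment = ask_marker(segment)     # the human marks it once
        self._bank[key] = comment
        return comment

bank = CommentBank()
first = bank.mark("for i in range(n): s += i",
                  lambda seg: "Correct accumulator loop")
second = bank.mark("for i in range(n):  s += i",
                   lambda seg: "never called")   # cache hit instead
print(first, "|", second)
```

The second call never reaches the marker: the cached comment is returned, which is what makes the feedback consistent across students.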

    Adaptive assessment in the class of programming

    Diploma thesis, University of Macedonia, Thessaloniki, 2009. This paper presents P.A.T. (Programming Adaptive Testing), a computerized adaptive testing system for assessing students’ programming knowledge. P.A.T. was used by 73 students in two high school programming classes. The study found that it helped increase students’ cognitive-domain skills.
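The basic loop of computerized adaptive testing, selecting each next item to match the current ability estimate and updating that estimate after each answer, can be sketched as below. This is a toy selection rule, not P.A.T.'s actual algorithm:

```python
def next_item(items, ability):
    """Pick the item whose difficulty is closest to the current
    ability estimate (the core idea of adaptive item selection)."""
    return min(items, key=lambda it: abs(it["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    # Move the estimate up after a correct answer, down after a wrong one.
    return ability + step if correct else ability - step

items = [{"id": 1, "difficulty": 0.2},
         {"id": 2, "difficulty": 0.5},
         {"id": 3, "difficulty": 0.9}]
ability = 0.4
chosen = next_item(items, ability)        # item 2 (difficulty 0.5)
ability = update_ability(ability, True)   # estimate rises to 0.9
```

Real systems replace the fixed step with an item-response-theory model, but the select/answer/update cycle is the same.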

    Efficient Use of Teaching Technologies with Programming Education

    Learning and teaching programming are challenging tasks that can be facilitated by using different teaching technologies. Visualization systems are software systems that can be used to help students form proper mental models of executed program code. They provide different visual and textual cues that help students abstract the meaning of a program code or an algorithm. Students also need to constantly practice the skill of programming by implementing programming assignments. These can be automatically assessed by other computer programs, but parts of the evaluation need to be assessed manually by teachers or teaching assistants. There are many existing tools that provide partial solutions to the practical problems of programming courses: visualizing program code, assessing student submissions automatically, or rubrics that help keep manual assessment consistent. Taking these tools into use is not straightforward. To succeed, the teacher needs to find suitable tools and properly integrate them into a course infrastructure that supports the whole learning process. As many programming courses are mass courses, there is a constant struggle between providing sufficient personal guidance and feedback and retaining a reasonable workload for the teacher. This work answers the question "How can the teaching of programming be effectively assisted using teaching technologies?" As a solution, different learning taxonomies are presented from a Computer Science perspective and applied to visualization examples so that the examples can better support deeper knowledge and the whole learning process within a programming course.
Then, different parts of the assessment process of programming assignments are studied to find the best practices for supporting the process, especially when multiple graders are used, to maintain objectivity, consistency and a reasonable workload in the grading. The results of the work show that teaching technologies can be a valuable aid for the teacher in supporting students' learning and in the practical organization of the course, without hindering the learning results or the personalized feedback students receive on their assignments. This thesis presents new visualization categories that allow deeper cognitive development, with examples of how to integrate them efficiently into the course infrastructure. It also presents a survey of computer-assisted assessment tools and assessable features for teachers to use in their programming assignments. Finally, the concept of rubric-based assessment tools is introduced to facilitate the manual assessment of programming assignments.
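The rubric-based assessment concept mentioned above can be sketched as a small data structure that caps each criterion at its maximum, helping multiple graders stay within the same bounds. The criteria and point values here are hypothetical:

```python
# A hypothetical rubric: criterion -> (max points, descriptor).
rubric = {
    "correctness": (5, "Program produces the expected output"),
    "style":       (3, "Naming and layout follow conventions"),
    "comments":    (2, "Code is adequately documented"),
}

def grade(scores: dict) -> tuple:
    """Total a submission against the rubric, clamping each
    criterion to its maximum so graders cannot exceed the scale."""
    total = sum(min(scores.get(c, 0), mx) for c, (mx, _) in rubric.items())
    maximum = sum(mx for mx, _ in rubric.values())
    return total, maximum

# A grader who over-awards "style" (4 of 3) is clamped to the scale.
print(grade({"correctness": 5, "style": 4, "comments": 1}))  # (9, 10)
```

Sharing one rubric structure among graders is what makes manual assessment consistent across a grading team.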

    Assessing essay assessment: teacher-developed assessment criteria and rubrics. Intra- and inter-rater reliability and teachers' opinions on it

    Rater reliability plays a key role in essay assessment, which has to be valid, reliable and effective. The aims of this study are: to determine intra- and inter-rater reliability variations based on two sets of grades that five teachers/raters produced by assessing argumentative essays written by 10 students learning French as a foreign language, first in accordance with criteria they had developed themselves and then with a rubric; to understand the criteria they used in the assessment process; and to note what the raters/teachers who used rubrics for the first time within the scope of this study think about rubrics. The quantitative data set revealed that intra-rater reliability between the grades assigned through the use of teacher-developed criteria and the rubrics is low, that inter-rater reliability is likewise low for the grades based on teacher-developed criteria, and that inter-rater reliability is more consistent for assessments completed through the use of rubrics. Qualitative data obtained during individual interviews showed that raters employed different criteria. During the second round of individual interviews, following the use of rubrics, raters noted that rubrics helped them become more objective, contributed positively to the assessment process, and can be utilized to support students’ learning and to enhance teachers’ instruction.

    Semi-automated assessment of programming languages for novice programmers

    There has recently been an increased emphasis on the importance of learning programming languages, not only in higher education but also in secondary schools. Students in a variety of departments, such as physics, mathematics and engineering, have also started learning programming languages as part of their academic courses. Assessment of students’ programming solutions is therefore important for developing their programming skills. Many Computer Based Assessment (CBA) systems utilise multiple-choice questions (MCQs) to evaluate students’ performance. However, MCQs lack the ability to comprehensively assess students’ knowledge, so other forms of programming solutions are required. This research aims to develop a semi-automated assessment framework for novice programmers, utilising a computer to support the marking process. The research also focuses on ensuring the consistency of feedback. A novel marking process model is developed based on the semi-automated assessment approach, which supports a new way of marking termed ‘segmented marking’. A study is carried out to investigate and demonstrate the feasibility of the segmented marking technique. The marking process model is then refined based on the results of the feasibility study, and two novel marking process models are presented based on segmented marking, namely the full-marking and partial-marking process models. The Case-Based Reasoning (CBR) cycle is adopted in the marking process models in order to ensure the consistency of feedback. User interfaces for the prototype marking tools (full and partial) are designed and developed based on the marking process models and the user interface design requirements. The experimental results show that the full and partial marking techniques are feasible for use in formative assessment.
Furthermore, the results also highlight that the tools are capable of providing consistent and personalised feedback and that they considerably reduce markers’ workload.
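The CBR cycle adopted above (retrieve a similar past case, reuse its solution, let the marker revise it, retain the result) can be sketched for feedback marking as follows. The class, the token-overlap similarity, and all names are illustrative assumptions, not the thesis's implementation:

```python
class FeedbackCBR:
    """Minimal sketch of the CBR cycle (retrieve, reuse, revise,
    retain) applied to marker feedback on code segments."""

    def __init__(self):
        self.cases = []  # retained (segment, feedback) pairs

    def retrieve(self, segment):
        # Nearest past case by crude token-overlap (Jaccard) similarity.
        def sim(case):
            a, b = set(segment.split()), set(case[0].split())
            return len(a & b) / max(len(a | b), 1)
        return max(self.cases, key=sim, default=None)

    def solve(self, segment, revise=None):
        case = self.retrieve(segment)                    # retrieve
        feedback = case[1] if case else "No similar case; mark manually."
        if revise:
            feedback = revise(feedback)                  # revise
        self.cases.append((segment, feedback))           # retain
        return feedback                                  # reuse

cbr = FeedbackCBR()
cbr.solve("while x < n: x += 1", revise=lambda _: "Loop bound correct")
result = cbr.solve("while y < n: y += 1")  # reuses the retained comment
print(result)
```

Because every reused comment comes from a retained case, two similar segments receive the same feedback, which is how the cycle enforces consistency.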

    A semi-automatic computer-aided assessment framework for primary mathematics

    Assessment and feedback processes shape students’ behaviour, learning and skill development. Computer-aided assessments are increasingly being used to support problem-solving, marking and feedback activities. However, many computer-aided assessment environments only replicate traditional pencil-and-paper tasks: attention is on grading and providing feedback on the final product of assessment tasks rather than on the process of problem solving. Focusing on steps and problem-solving processes can help teachers to diagnose strengths and weaknesses, discover problem-solving strategies, and provide appropriate feedback to students. This thesis presents a semi-automatic framework for capturing and marking students’ solution steps in the context of elementary school mathematics. The first focus is on providing an interactive touch-based tool, called MuTAT, to facilitate interactive problem solving for students. The second focus is on providing a marking tool, named Marking Assistant, which utilises the case-based reasoning artificial intelligence methodology to carry out marking and feedback activities more efficiently and consistently. Results from studies carried out with students showed that the MuTAT prototype tool was usable, and that performance scores on it were comparable to those obtained when paper and pencil were used. More importantly, the MuTAT provided more explicit information on the problem-solving process, showing the students’ thinking. The captured data allowed for the detection of the arithmetic strategies used by the students. Exploratory studies conducted using the Marking Assistant prototype showed that savings of 26% in marking time can be achieved compared to traditional paper-and-pencil marking and feedback. The broad feedback capabilities the research tools provide can enable teachers to evaluate whether intended learning outcomes are being achieved, and thus decide on required pedagogical interventions.
The implications of these results are that innovative CAA environments can enable more direct and engaging assessments, which can reduce staff workload while improving the quality of assessment and feedback for students.
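Marking the captured solution steps rather than only the final answer, as the framework above advocates, can be sketched like this. The step encoding and the worked example are hypothetical, not drawn from the MuTAT data:

```python
def mark_steps(steps, expected):
    """Mark each recorded solution step against a model solution,
    so the teacher can see where a strategy went wrong."""
    feedback = []
    for i, (got, want) in enumerate(zip(steps, expected), 1):
        feedback.append((i, got == want))  # (step number, correct?)
    return feedback

# Hypothetical pupil solving 24 + 38 by partitioning into tens and units;
# the error at step 2 propagates, even though step 3 is internally valid.
pupil = ["20+30=50", "4+8=11", "50+11=61"]
model = ["20+30=50", "4+8=12", "50+12=62"]
result = mark_steps(pupil, model)
print(result)  # [(1, True), (2, False), (3, False)]
```

A product-only marker would just report the wrong total (61); the step-level record shows the pupil's partitioning strategy was sound and only one number fact failed.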