
    AI-Enhanced Auto-Correction of Programming Exercises: How Effective is GPT-3.5?

    Timely formative feedback is considered one of the most important drivers of effective learning, yet delivering timely, individualized feedback is particularly challenging in large classes in higher education. Recently, Large Language Models such as GPT-3 have become publicly available and have shown promising results on tasks such as code generation and code explanation. This paper investigates the potential of AI to provide personalized code correction and generate feedback. Based on existing student submissions for two different real-world assignments, the correctness of the AI-aided e-assessment is investigated, as well as characteristics of the generated feedback such as fault localization, correctness of hints, and code style suggestions. The results show that 73% of the submissions were correctly identified as either correct or incorrect; in 59% of these cases, GPT-3.5 also generated effective, high-quality feedback. However, GPT-3.5 exhibited weaknesses in its evaluation, including localizing errors that were not the actual errors, or even hallucinating errors. Implications and potential new usage scenarios are discussed.
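The grading workflow the abstract describes can be sketched in two steps: build a prompt asking the model for a correctness verdict plus feedback, then parse the structured reply. The prompt wording and reply format below are assumptions for illustration, not the paper's actual setup; the network call to the model is omitted.

```python
# Hypothetical sketch of an LLM-aided grading loop. The prompt text and the
# "VERDICT:" reply convention are illustrative assumptions, not the authors'
# actual protocol. The call to the model API itself is left out.

def build_grading_prompt(assignment: str, submission: str) -> str:
    """Compose a prompt asking the model to judge correctness and give feedback."""
    return (
        "You are grading a programming exercise.\n"
        f"Task description:\n{assignment}\n\n"
        f"Student submission:\n{submission}\n\n"
        "Reply with a first line 'VERDICT: correct' or 'VERDICT: incorrect', "
        "followed by concise feedback (fault location, hints, style notes)."
    )

def parse_verdict(reply: str) -> tuple[bool, str]:
    """Split a model reply into (is_correct, feedback_text)."""
    first, _, rest = reply.partition("\n")
    is_correct = first.strip().lower() == "verdict: correct"
    return is_correct, rest.strip()

# Parsing step applied to a mocked model reply:
ok, feedback = parse_verdict("VERDICT: incorrect\nOff-by-one error in the loop bound.")
```

Forcing a fixed first-line verdict keeps the "correct/incorrect" classification machine-readable while leaving the free-form feedback intact for the student.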

    Graphical enhancements of the Virtual Programming Lab

    It is generally recognised that providing consistent, meaningful written feedback is not an easy task, especially when dealing with large classes. Feedback needs to be effective, meaning it has to be appropriate and timely, and it needs to be individual where possible. Automated feedback within Computer Science has been around since the 1960s, with the main goals in relation to computer programming being to implement an automatic assessment tool that provides consistent feedback and alleviates examiners' workloads. The Virtual Programming Lab, a plugin for the Virtual Learning Environment Moodle, is one such tool that allows for automated feedback on computer code. This paper presents enhancements to the Virtual Programming Lab that have been developed to make interacting with the tool more user-friendly and to provide more graphical feedback to teachers. The enhancements provide in-depth graphical feedback on assessment grades within a class, as well as teacher-focused graphical views offering more in-depth analysis of assessment submissions. The feedback discussed shows that the enhancements were all positively received, with respondents highlighting the benefits of each.
    Mooney, A.; Hegart Kelly, E. (2018). Graphical enhancements of the Virtual Programming Lab. Editorial Universitat Politècnica de València. 645-653. http://ocs.editorial.upv.es/index.php/HEAD/HEAD18 https://doi.org/10.4995/HEAD18.2018.8054

    Developing Applications to Automatically Grade Introductory Visual Basic Courses

    There are many unique challenges associated with introductory programming courses. For novice programmers, the challenges of their first programming class can lead to a great deal of stress and frustration. Regular programming assignments are often key to developing an understanding of best practices and the coding process, and students need practice with these new concepts to reinforce the underlying principles. Providing timely and consistent feedback on these assignments can be a challenge for instructors, particularly in large classes, and plagiarism is also a concern. Unfortunately, traditional tools are not well suited to introductory courses. This paper describes how AppGrader, a static code assessment tool, can be used to address the challenges of an introductory programming class. The tool assesses students' understanding and application of programming fundamentals as defined in the current ACM/IEEE Information Technology Curriculum Guidelines. Results from a bench test and directions for future research are provided.

    Timely Student Feedback

    Students are not only interested in their grades; they are also interested in feedback (Mulliner & Tucker, 2017), as it is an important element of their learning cycle (Gibbons et al., 2018). Students and lecturers agree that for feedback to be effective, it must be returned quickly so that it can be acted on within the context of learning (Denton et al., 2008; Mulliner & Tucker, 2017). However, delivering timely and effective feedback can be a burden on lecturers, particularly those responsible for large classes or in the early stages of their careers. These and similar challenges contribute to the low approval ratings that students give feedback across our national third-level institutions (Gibbons et al., 2018). This report explores the benefits and functionality of alternative applications that can help address these challenges and give feedback (1) within class, using TurningPoint, (2) within one to two weeks, using Peergrade, and (3) within fifteen working days, using the virtual learning environment (VLE) Brightspace. These timeframes were chosen because research suggests that a policy of providing feedback within fifteen working days can increase student satisfaction (Mulliner & Tucker, 2017). The benefits of these applications for both students and lecturers are established here, and a poster artefact has been developed to provide a condensed summary of the applications along with additional resources to help ensure their successful implementation.

    USING LIVE STUDENT PEER ASSESSMENT WITH AUTOMATED INSTANT FEEDBACK

    Peer assessment and feedback enable students to develop objectivity in relation to standards, which can then be transferred to their own work (Liu & Carless, 2006). However, providing feedback, particularly in large classes, can be labour intensive (e.g. collating scores and comments). As such, it can be challenging to provide effective feedback in a timely manner, which has been shown to promote retention and the correction of inaccurate responses (Epstein et al., 2002). We have recently used the online student data and engagement system SRES (Liu et al., 2017) to run peer assessments of student oral presentations within our undergraduate chemistry laboratories. Students grade their peers' presentations in real time via mobile devices; these grades are captured by SRES alongside the academics' grading. The system automatically collates both students' and academics' scores and immediately posts the grade and feedback to the Learning Management System (LMS) of the presenting student(s). Students have immediate access to this feedback to construct self-reflections or to discuss their performance with their teacher while the experience is still "fresh". We will discuss its implementation and how it addresses topics such as mitigating academic misconduct, improving student engagement, and reducing the academic burden of running these assessments.
    REFERENCES
    Epstein, M. L., Lazarus, A. D., Calvano, T. B., et al. (2002). Immediate Feedback Assessment Technique Promotes Learning and Corrects Inaccurate First Responses. The Psychological Record, 52, 187-201.
    Liu, D. Y. T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In Learning Analytics: Fundaments, Applications, and Trends, Peña-Ayala, A., Ed. Springer International Publishing.
    Liu, N., & Carless, D. (2006). Peer feedback: the learning element of peer assessment. Teaching in Higher Education, 11(3), 279-290.
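The collation step this abstract describes — merging peer and academic scores into one posted grade — can be sketched as a simple weighted blend. The 50/50 weighting and the function name below are illustrative assumptions; the SRES system's actual combination rule is not stated in the abstract.

```python
# Illustrative sketch of combining peer and academic scores into one grade.
# The equal weighting is an assumption, not SRES's documented behaviour.

def combine_scores(peer_scores: list[float], academic_scores: list[float],
                   academic_weight: float = 0.5) -> float:
    """Blend the mean peer score with the mean academic score."""
    peer_mean = sum(peer_scores) / len(peer_scores)
    academic_mean = sum(academic_scores) / len(academic_scores)
    return academic_weight * academic_mean + (1 - academic_weight) * peer_mean

# Three peer marks and one academic mark for a presentation:
grade = combine_scores([7.0, 8.0, 9.0], [8.0])
```

Averaging the peer scores before blending keeps a large class of peer markers from outweighing the academic's single mark.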

    Designing for Ballet Classes: Identifying and Mitigating Communication Challenges Between Dancers and Teachers

    Dancer-teacher communication in a ballet class can be challenging: ballet is one of the most complex forms of movement, and learning happens through multi-faceted interactions with studio tools (mirror, barre, and floor) and the teacher. We conducted an interview-based qualitative study with seven ballet teachers and six dancers, followed by an open-coded analysis, to explore the communication challenges that arise while teaching and learning in the ballet studio. We identified key communication issues, including adapting to multi-level dancer expertise, transmitting and realigning development goals, providing personalized corrections and feedback, maintaining the state of flow, and communicating how to properly use tools in the environment. We discuss design implications for crafting technological interventions aimed at mitigating these communication challenges.

    A novel real-time detailed feedback collection and interaction tool for large classes

    CONTEXT Existing approaches to interactive student feedback and question collection have two major drawbacks. First, instructors receive only generic feedback, typically at the end of a teaching period. Second, students are not given the opportunity to provide content-specific feedback or raise questions on specific sections of content during the lecture. The latter is especially important in large classes with students from non-English-speaking backgrounds, who are generally less likely to actively participate in discussions. PURPOSE This paper aims to develop and test a user-friendly, automated platform with an efficient data management structure that facilitates real-time collection and addressing of students' feedback on specific parts of the topics discussed during class. APPROACH A web-based software platform has been developed using annotation technologies, which lets students provide anonymous, content-specific comments on lecture materials. The software equips lecturers with a real-time platform for retrieving comments classified by area, type, and frequency, and contains features that enable teachers to assess both teaching and learning performance analytically. RESULTS Trial use of the platform increased student engagement in class discussions. This trend was observed particularly amongst international students, who gained the confidence to raise more questions compared to traditional teaching methods. Concurrently, the lecturer dramatically reduced the response time to students' queries to nearly real time by receiving classified comments attached to particular parts of the lecture. CONCLUSIONS The software allows for structured analysis of course materials and students' feedback, which can be further used to update teaching standards. Moreover, it improves teacher-student relationships through timely and purposeful addressing of instructional issues.
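The retrieval step described in the APPROACH — comments classified by area, type, and frequency — amounts to grouping annotations per lecture section and counting each comment type. A minimal sketch of that aggregation, with illustrative field names (the platform's actual data schema is not given in the abstract):

```python
# Hypothetical sketch of collating annotation comments per lecture section.
# The "section" and "type" field names are illustrative assumptions.
from collections import Counter, defaultdict

def collate(comments: list[dict]) -> dict[str, Counter]:
    """Count comment types per lecture section, giving frequency per area."""
    by_section: dict[str, Counter] = defaultdict(Counter)
    for c in comments:
        by_section[c["section"]][c["type"]] += 1
    return dict(by_section)

# Two questions on one slide, one clarification request on another:
summary = collate([
    {"section": "slide-3", "type": "question"},
    {"section": "slide-3", "type": "question"},
    {"section": "slide-7", "type": "clarification"},
])
```

A per-section frequency table like this is what lets a lecturer see, in real time, which part of the lecture is generating the most questions.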

    Mobile reflections (MoRe) pilot, developing reflection within initial teacher training for students with dyslexia

    The MoRe (Mobile Reflections) pilot was designed to explore whether freely available Web 2.0 technology and mobile phones could help dyslexic student teachers develop reflective skills by capturing their reflections as audio within a shared online learning space.

    What Really Matters: Assessing Individual Problem-Solving Performance in the Context of Biological Sciences

    The evaluation of higher-level cognitive skills can augment traditional discipline-based knowledge testing by providing timely assessment of individual student problem-solving abilities that are critical for success in any professional development program. However, the widespread acceptance and implementation of higher-level cognitive skills analysis has been delayed by the lack of rapid, valid, and reliable quantified-scoring techniques. At the University of New Mexico School of Medicine, Department of Biochemistry & Molecular Biology, we have developed an examination format that can be routinely and sequentially implemented for both formative and summative assessments of individual students in large classes. Rather than providing results in terms of an individual student's knowledge base in a single academic discipline or group of disciplines, this type of examination provides information on performance in the application of specific problem-solving skills, which we term "domains," to a contextual clinical or scientific problem. These domains, derived from the scientific method, are tested across various academic disciplines and are reported in terms of the following: initial and sequential hypothesis generation, investigation of these hypotheses, evaluation of newly acquired data, integration of basic science mechanisms with new information to explain the basis of the problem, and reflection on one's own professional development in the context of the examination. The process for criterion-referenced quantified grading of the examination is outlined in this paper. This process involves relatively rapid scoring and permits the timely use of the resulting information for individual student feedback as well as curricular improvement. Data regarding grading consistency and comparisons with other measures of student performance are also presented.
    An analysis of the performance characteristics of this examination, which has been used for over 10 years in a variety of course settings, indicates that it is valid, reliable, and usable. As such, the methodology is now routinely used in several undergraduate and graduate level biochemistry classes to monitor the development of individual student problem-solving abilities.

    About using Mobile Reflections

    A short guide.