
    PeerWise - The Marmite of Veterinary Student Learning

    PeerWise is a free online student-centred collaborative learning tool with which students anonymously author, answer, and evaluate multiple choice questions (MCQs). Features such as commenting on questions, rating questions and comments, and appearing on leaderboards can encourage healthy competition, engage students in reflection and debate, and enhance their communication skills. PeerWise has been used in diverse subject areas but never previously in Veterinary Medicine. The Veterinary undergraduates at the University of Glasgow are a distinct cohort: academically gifted and often highly strategic in their learning due to time pressures and the volume of course material. In 2010-11 we introduced PeerWise into 1st year Veterinary Biomolecular Sciences in the Glasgow Bachelor of Veterinary Medicine and Surgery programme. To scaffold PeerWise use, a short interactive session introduced students to the tool and to the basic principles of good MCQ authorship. Students were asked to author four and answer forty MCQs throughout the academic year. Participation was encouraged by an allocation of up to 5% of the final year mark and inclusion of student-authored questions in the first summative examination. Our analysis focuses on the engagement of the class with the tool and their perceptions of its use. All 141 students in the class engaged with PeerWise, and the majority contributed beyond what was stipulated. Student engagement with PeerWise prior to a summative exam was positively correlated with exam score, a relationship that was highly significant (p < 0.001). Student perceptions of PeerWise were predominantly positive, with explicit recognition of its value as a learning and revision tool, and more than two thirds of the class agreed that question authoring and answering reinforced their learning. There was clear polarisation of views, however, and those students who did not like PeerWise were vociferous in their dislike, the biggest criticism being the lack of moderation by staff.
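    The reported engagement-performance relationship lends itself to a simple statistical check. The sketch below shows how such a correlation could be tested in Python with scipy; the data are invented for illustration, and the study's own engagement metric is not reproduced here.

```python
# Minimal sketch: testing whether pre-exam PeerWise activity correlates
# with exam score, as the abstract reports. All data below are invented.
from scipy.stats import pearsonr

# Hypothetical per-student counts of PeerWise actions before the exam
# (questions authored + answered + comments) and exam scores (%).
engagement = [12, 45, 30, 8, 60, 25, 40, 15, 55, 33]
exam_score = [52, 78, 65, 48, 85, 60, 72, 55, 80, 68]

r, p = pearsonr(engagement, exam_score)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A positive r with p < 0.001 would match the relationship the study reports.
```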

    Enhancing university student engagement using online multiple choice questions and answers

    For many education providers, student engagement can be a major issue. Given the positive correlation between engagement and good performance, providers are continually looking for ways to engage students in the learning process. The growth of student digital literacy, the wide proliferation of online tools and the understanding of why online gaming can be addictive have combined to create a set of tools that providers can leverage to enhance engagement. One such tool is PeerWise, https://peerwise.cs.auckland.ac.nz/, an online multiple choice question (MCQ) and answer tool in which students create questions that are answered by other students. Why use MCQs? Using MCQs tests knowledge, provides reassurance of learning, identifies gaps and makes this data available to both student and provider. Students use this information to focus their time on areas requiring additional work [1], benefiting from the early feedback provided. Formative assessments using MCQs are beneficial in preparing students for summative testing and are appreciated and liked by students [2]. Providers can use this information to determine how the material is being received and react accordingly. Students use PeerWise to create MCQs that are answered, rated and commented on by their peers. Students' engagement in PeerWise earns trophies for contributing, for regular use and for providing feedback, all of which act to stimulate further engagement, following the principles of gamification. Bournemouth University, a public university in the UK with over 18,000 students, has been embedding PeerWise in undergraduate and postgraduate units since 2014. The results experienced by Bournemouth University have been beneficial and correlate with other studies of PeerWise use [3][4]. A statistically significant improvement was seen for one cohort of students compared with the previous year, in which PeerWise was not used. However, no correlation was found between PeerWise participation and a student's unit mark. The processes followed by Bournemouth University, and the advantages and disadvantages, backed by qualitative and quantitative data, will be presented so that other institutions can gain an informed view of the merits of PeerWise for their own teaching and learning environments.
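    The trophies the abstract describes are an instance of threshold-based gamification. The sketch below illustrates the idea with a hypothetical badge-awarding function; PeerWise's actual award rules are not given in the abstract, so all thresholds and badge names here are invented.

```python
# Hypothetical illustration of threshold-based gamification, in the spirit
# of the trophies the abstract describes. Thresholds and names are invented;
# PeerWise's real award rules are not specified here.
def award_badges(questions_authored: int, answers_given: int,
                 comments_left: int, active_days: int) -> list[str]:
    badges = []
    if questions_authored >= 5:
        badges.append("Question Author")
    if answers_given >= 50:
        badges.append("Prolific Answerer")
    if comments_left >= 10:
        badges.append("Helpful Reviewer")     # rewards feedback to peers
    if active_days >= 20:
        badges.append("Regular Contributor")  # rewards sustained use
    return badges

print(award_badges(6, 72, 14, 25))
```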

    Asynchronous Assistance: a Social Network Analysis of Influencing Peer Interactions in PeerWise

    This mixed-methods, investigative case study explored student patterns of use within the online PeerWise platform to identify the most influential activities and to build a model capable of predicting performance based on those activities. PeerWise is designed to facilitate student peer-to-peer engagement through creating, answering and rating multiple choice questions; this study sought to understand the relationship between student engagement in PeerWise and learning performance. To address the research question, various usage metrics were explored, visualised and modelled, using social network analysis with Gephi, Tableau and Python. These findings were subsequently analysed in light of the qualitative survey data gathered. Evaluation of the most significant activity metrics led to rich data visualisations and identified the activities that influenced academic performance in this study. The key qualitative and quantitative findings converged on answering questions as having the greatest positive impact on learner performance. Furthermore, from a quantitative perspective, Average Comment Length and Average Explanation Length correlated positively with superior academic performance. Qualitatively, the motivating nature of the PeerWise community also engaged learners. The key limitation, the size of the data set within this investigative case study, suggests further research with additional student cohorts, as part of an action research paradigm, to broaden these findings.
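    The network side of such an analysis can be sketched in a few lines. The example below builds a directed graph of peer interactions from invented events and computes degree centralities with networkx; the library choice and the data are assumptions, as the study's exact Python pipeline is not given.

```python
# Minimal sketch of the social-network-analysis idea: an edge from student
# A to B when A answers or comments on B's question, weighted by count.
# networkx and the toy events are assumptions, not the study's own pipeline.
import networkx as nx

events = [  # (actor, question_author, interaction_type) -- invented data
    ("s1", "s2", "answer"), ("s1", "s3", "comment"),
    ("s2", "s3", "answer"), ("s4", "s1", "answer"),
    ("s3", "s1", "comment"), ("s4", "s2", "answer"),
]

G = nx.DiGraph()
for actor, author, kind in events:
    # accumulate interaction counts as edge weights
    w = G.get_edge_data(actor, author, {"weight": 0})["weight"]
    G.add_edge(actor, author, weight=w + 1)

# In-degree centrality: whose questions attract the most engagement.
print(nx.in_degree_centrality(G))
# Out-degree centrality: who answers/comments the most (the activity the
# study found most associated with performance).
print(nx.out_degree_centrality(G))
```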

    The role of community feedback in the student example authoring process: An evaluation of AnnotEx

    This paper explores a new approach to engaging students in authoring educational content. This approach was implemented in the AnnotEx (Example Annotator) system, which allows students to annotate computer programming examples with line-by-line explanations and review annotations produced by their peers. A controlled study of AnnotEx presented in this paper evaluated the impact of the community peer-reviewing process on the quality of produced annotations and on student learning. The study confirmed that community feedback increases the volume and the quality of produced annotations and positively affects the work of weaker students. The peer-rating process enabled the community to distinguish good annotations from bad ones. Peer comments provided efficient guidelines for improving annotations and caused a significant increase in quality.
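    The peer-rating mechanism can be illustrated with a minimal aggregation: average the peer ratings for each annotation and use the mean to separate good annotations from weak ones. The data, the 1-5 scale and the cut-off below are invented; AnnotEx's actual rating scheme is not specified in the abstract.

```python
# Minimal sketch: aggregate peer ratings per annotation so the community
# average separates good from bad annotations. All values are invented.
from collections import defaultdict
from statistics import mean

ratings = [  # (annotation_id, peer_rating on a 1-5 scale) -- invented data
    ("a1", 5), ("a1", 4), ("a1", 5),
    ("a2", 2), ("a2", 1), ("a2", 2),
    ("a3", 4), ("a3", 3),
]

by_annotation = defaultdict(list)
for ann_id, score in ratings:
    by_annotation[ann_id].append(score)

for ann_id, scores in sorted(by_annotation.items()):
    avg = mean(scores)
    label = "good" if avg >= 3.5 else "needs revision"  # arbitrary cut-off
    print(f"{ann_id}: mean rating {avg:.2f} -> {label}")
```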

    Using Peerwise to improve engagement and learning.

    This paper assesses the experiences of Bournemouth University in using the online multiple choice question (MCQ) tool, PeerWise, in student learning and engagement. MCQs are excellent for developing and testing knowledge, providing reassurance and identifying development needs. The creation of MCQs reinforces learning by tasking students to generate challenging questions. PeerWise supports self-direction and flexibility, which is embraced by students. Bournemouth University started embedding PeerWise within teaching units in 2014. The intention was to transform the approach of students towards the non-assessed elements of the unit. PeerWise was used in an undergraduate business unit of 50 students over a 15-week period. A total of 804 questions were created and 3,345 answers recorded. 10% of the unit marks were allocated to PeerWise use. Qualitative feedback from students was very positive. Correlation analysis showed a very weak relationship (0.120) between the number of questions answered and the overall unit mark. Self-assessment of the change in learning was statistically significantly better for students who used PeerWise compared with those who did not. Overall, the evaluation of PeerWise was positive, with many lessons learnt. Six recommendations for the further use of PeerWise were developed, including improving the scaffolding given to students, refining the way quality is assessed and developing evaluation criteria.

    Comparing the effect of EDPA and FDPA on university students’ HOTS

    A study was conducted on 120 students from two classes studying the Cognitive Sciences and Ethics course at Universiti Utara Malaysia. One class was treated with an editable drill and practice application (EDPA) while the other class received a fixed drill and practice application (FDPA). The purpose was to assess the effects of EDPA and FDPA on higher order thinking skills (HOTS). The main difference between the two applications is that EDPA allows students to add and modify items based on personal inquiries while FDPA does not. The literature suggests that students, if given the opportunity to ask questions, tend to come up with both basic and deep questions. While basic questions allow students to acquire only basic knowledge, deeper questions allow them to garner reflective skills, which in turn should develop better HOTS. This led to the assumption that the use of EDPA is more effective than FDPA in promoting HOTS. Based on the independent-groups t-test results, it was concluded that there was a significant difference in HOTS scores between the EDPA and FDPA groups. The results showed that students who were subjected to EDPA had better HOTS scores than those subjected to FDPA.
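    The reported analysis is a standard independent-groups t-test. A minimal sketch, with invented HOTS scores standing in for the study's data:

```python
# Minimal sketch of the reported analysis: an independent-groups t-test
# comparing HOTS scores between the EDPA and FDPA classes. All scores
# below are invented; the study's actual data are not reproduced.
from scipy.stats import ttest_ind

edpa_scores = [78, 85, 72, 90, 81, 76, 88, 83]  # hypothetical HOTS scores
fdpa_scores = [65, 70, 62, 74, 68, 71, 66, 69]

t, p = ttest_ind(edpa_scores, fdpa_scores)
print(f"t = {t:.2f}, p = {p:.4f}")
# p < 0.05 would indicate a significant difference, consistent with the
# study's conclusion that EDPA students scored higher on HOTS.
```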

    E-assessment: Past, present and future

    This review of e-assessment takes a broad definition, including any use of a computer in assessment, whilst focusing on computer-marked assessment. Drivers include an increased variety of assessed tasks and the provision of instantaneous feedback, as well as increased objectivity and resource saving. From the early use of multiple-choice questions and machine-readable forms, computer-marked assessment has developed to encompass sophisticated online systems, which may incorporate interoperability and be used in students' own homes. Systems have been developed by universities, by companies and as part of virtual learning environments. Some of the disadvantages of selected-response question types can be alleviated by techniques such as confidence-based marking. The use of electronic response systems ('clickers') in classrooms can be effective, especially when coupled with peer discussion. Student authoring of questions can also encourage dialogue around learning. More sophisticated computer-marked assessment systems have enabled mathematical questions to be broken down into steps and have provided targeted and increasing feedback. Systems that use computer algebra and provide answer matching for short-answer questions are discussed. Computer-adaptive tests use a student's responses to previous questions to alter the subsequent form of the test. More generally, e-assessment includes the use of peer-assessment and assessed e-portfolios, blogs, wikis and forums. Predictions for the future include the use of e-assessment in MOOCs (massive open online courses); the use of learning analytics; a blurring of the boundaries between teaching, assessment and learning; and the use of e-assessment to free human markers to assess what they can assess more authentically.
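    Confidence-based marking can be made concrete with one widely cited scheme (Gardner-Medwin's certainty-based marking), in which higher confidence raises both the reward for a correct answer and the penalty for a wrong one. A minimal sketch:

```python
# Sketch of one widely cited confidence-based marking scheme
# (certainty-based marking): higher confidence raises the reward for a
# correct answer but also the penalty for a wrong one, discouraging both
# blind guessing and under-reporting of knowledge.
MARKS = {  # confidence level -> (mark if correct, mark if wrong)
    1: (1, 0),    # low confidence: safe but low reward
    2: (2, -2),   # medium confidence
    3: (3, -6),   # high confidence: big reward, big penalty
}

def cbm_mark(correct: bool, confidence: int) -> int:
    reward, penalty = MARKS[confidence]
    return reward if correct else penalty

print(cbm_mark(True, 3))   # 3
print(cbm_mark(False, 3))  # -6
```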

    Detecting students-at-risk in computer programming classes with learning analytics from students’ digital footprints

    Different sources of data about students, ranging from static demographics to dynamic behaviour logs, can be harnessed from a variety of sources at Higher Education Institutions. Combining these sources assembles a rich digital footprint for each student, which can enable institutions to better understand student behaviour and to better prepare for guiding students towards reaching their academic potential. This paper presents a new research methodology to automatically detect students 'at risk' of failing an assignment in computer programming modules (courses) and to simultaneously support adaptive feedback. By leveraging historical student data, we built predictive models using students' offline (static) information, including student characteristics and demographics, and online (dynamic) resources, using programming and behaviour activity logs. Predictions are generated weekly during the semester. Overall, the predictive and personalised feedback helped to reduce the gap between the lower- and higher-performing students. Furthermore, students praised the prediction and the personalised feedback, conveying strong recommendations for future students to use the system. We also found that students who followed their personalised guidance and recommendations performed better in examinations.
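    The weekly prediction step can be sketched with a simple classifier over combined static and dynamic features. Logistic regression, the feature set and all data below are assumptions for illustration; the paper's own models and features are not reproduced here.

```python
# Minimal sketch of the at-risk prediction idea: combine static features
# (e.g., prior grade) with weekly activity features (e.g., submissions,
# logins) to predict failing the next assignment. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: prior_grade, submissions_this_week, logins_this_week
X_train = np.array([
    [72, 5, 14], [55, 1, 3], [80, 6, 20], [48, 0, 2],
    [65, 3, 9],  [40, 1, 1], [90, 7, 25], [58, 2, 5],
])
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = failed the assignment

model = LogisticRegression().fit(X_train, y_train)

# Weekly run: flag students whose predicted failure risk exceeds 0.5,
# so personalised feedback can be sent before the assignment deadline.
X_week = np.array([[60, 1, 4], [85, 6, 18]])
risk = model.predict_proba(X_week)[:, 1]
print([f"{r:.2f}" for r in risk])
```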

    On the Quality of Crowdsourced Programming Assignments

    Crowdsourcing has been used in computer science education to alleviate teachers' workload in creating course content, and as a learning and revision method for students through its use in educational systems. Tools that utilize crowdsourcing can be a great way for students to further familiarize themselves with course concepts, all while creating new content for their peers and for future course iterations. In this study, student-created programming assignments from the second week of an introductory Java programming course are examined alongside the peer reviews these assignments received. The quality of the assignments and the peer reviews is inspected, for example by comparing the peer reviews with expert reviews using inter-rater reliability. The purpose of this study is to inspect what kinds of programming assignments novice students create, and whether the same novice students can act as reliable reviewers. While it is not possible to draw definite conclusions from the results of this study, due to limitations concerning the usability of the tool, the results seem to indicate that novice students are able to recognise differences in programming assignment quality, especially with sufficient guidance and well thought-out instructions.
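    The peer-versus-expert comparison the abstract mentions is typically done with an agreement statistic such as Cohen's kappa. A minimal sketch with invented quality labels; the study's actual rating scale is not given in the abstract.

```python
# Minimal sketch of the inter-rater reliability check: compare peer and
# expert quality labels for the same assignments with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Quality labels for ten assignments (e.g., 0 = poor, 1 = ok, 2 = good).
peer_labels   = [2, 1, 2, 0, 1, 2, 1, 0, 2, 1]
expert_labels = [2, 1, 1, 0, 1, 2, 2, 0, 2, 1]

kappa = cohen_kappa_score(peer_labels, expert_labels)
print(f"Cohen's kappa = {kappa:.2f}")
# Values near 1 indicate strong agreement; moderate values would still
# suggest novices can recognise quality differences, as the study found.
```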