15 research outputs found

    A validation study of students' end comments: Comparing comments by students, a writing instructor, and a content instructor

    In order to include more writing assignments in large classrooms, some instructors have been utilizing peer review. However, many instructors are hesitant to use peer review because they are uncertain whether students are capable of providing reliable and valid ratings and comments. Previous research has shown that students are in fact capable of rating their peers' papers reliably and with the same accuracy as instructors. On the other hand, relatively little research has focused on the quality of students' comments. This study is the first in-depth analysis of students' comments in comparison with a writing instructor's and a content instructor's comments. Over 1400 comment segments, provided by undergraduates, a writing instructor, and a content instructor, were coded for the presence of 29 different feedback features. Overall, our results support the use of peer review: students' comments seem to be fairly similar to instructors' comments. Based on the main differences between students and the two types of instructors, we draw implications for training students and instructors on providing feedback. Specifically, students should be trained to focus on content issues, while content instructors should be encouraged to provide more solutions and explanations.

    Augmenting Assessment with Learning Analytics

    Learning analytics as currently deployed has tended to consist of large-scale analyses of available learning process data to provide descriptive or predictive insight into behaviours. What is sometimes missing in this analysis is a connection to human-interpretable, actionable, diagnostic information. To gain traction, learning analytics researchers should work within existing good practice, particularly in assessment, where high-quality assessments are designed to provide both student and educator with diagnostic or formative feedback. Such a model keeps the human in the analytics design and implementation loop, by supporting student, peer, tutor, and instructor sense-making of assessment data, while adding value from computational analyses.

    Writing in natural sciences: Understanding the effects of different types of reviewers on the writing process

    In undergraduate natural science courses, two types of evaluators are commonly used to assess student writing: graduate-student teaching assistants (TAs) or peers. The current study examines how well these approaches to evaluation support student writing. The differences between the two possible evaluators are likely to affect multiple aspects of the writing process: first draft quality, amount and types of feedback provided, amount and types of revisions, and final draft quality. Therefore, we examined how these aspects of the writing process were affected when undergraduate students wrote papers to be evaluated by a group of peers versus their TA. Several interesting results were found. First, the quality of the students' first draft was higher when they were writing for their peers than when writing for their TA. In terms of feedback, students provided longer comments, and they also focused more on the prose than the TAs did. Finally, more revisions were made if the students received feedback from their peers, especially prose revisions. Despite all of the benefits seen with peers as evaluators, there was only a moderate difference in final draft quality. This result indicates that while peer review is helpful, there continues to be a need for research regarding how to enhance its benefits.

    The effects of skill diversity on commenting and revisions

    The use of peer assessment to evaluate students' writing is one recommended method that makes writing assignments possible in large content classes (i.e., more than 75 students). However, many instructors and students worry about whether students of all ability levels are capable of helping their peers. We examine how ability pairing (e.g., a high-ability student with a high-ability student versus a high-ability student with a low-ability student) changes key characteristics of feedback, to determine which pairings are likely to benefit students most. A web-based reciprocal peer-review system was used to facilitate the peer review of students' writing of two papers. Over 1,100 comments given to writers by their peers were coded for several relevant categories: type of feedback, type of criticism, focus of problem, focus of solution, and implementation. Overall, creating peer-review groups such that students receive feedback from someone of a dissimilar ability appeared to be most beneficial. High-ability writers received similar kinds of feedback from high-ability and low-ability peers. By contrast, low-ability writers received more comments identifying problems with substance issues from high-ability reviewers. In addition, low-ability writers implemented a higher percentage of the comments from high-ability reviewers. © 2012 Springer Science+Business Media B.V.

    Student Perception of Scalable Peer-Feedback Design in Massive Open Online Courses

    There is a scarcity of research on scalable peer-feedback design and on students' perceptions of peer feedback, and consequently on their use in Massive Open Online Courses (MOOCs). To address this gap, this study explored a peer-feedback design with the purpose of gaining insight into student perceptions as well as providing design guidelines. The findings of this pilot study indicate that peer-feedback training focused on clarity and transparency, with the possibility to practice beforehand, increases students' willingness to participate in future peer-feedback activities and training, and improves their perceived usefulness of, preparedness for, and general attitude towards peer feedback. The results of this pilot will be used as a basis for future large-scale experiments comparing different designs.