5 research outputs found

    Impact of Annotation Difficulty on Automatically Detecting Problem Localization of Peer-Review Feedback

    We believe that providing assessment of students' reviewing performance will enable students to improve the quality of their peer reviews. We focus on assessing one particular aspect of the textual feedback contained in a peer review – the presence or absence of problem localization; feedback containing problem localization has been shown to be associated with increased understanding and implementation of the feedback. While in prior work we demonstrated the feasibility of learning to predict problem localization using linguistic features automatically extracted from textual feedback, we hypothesize that inter-annotator disagreement on labeling problem localization might impact both the accuracy and the content of the predictive models. To test this hypothesis, we compare the use of feedback examples where problem localization is labeled with differing levels of annotator agreement, for both training and testing our models. Our results show that when models are trained and tested using only feedback where annotators agree on problem localization, the models both perform with high accuracy and contain rules involving just two simple linguistic features. In contrast, when training and testing using feedback examples where annotators both agree and disagree, model performance drops slightly, but the learned rules capture more subtle patterns of problem localization.
    Keywords: problem localization in text comments, data mining of peer reviews, inter-annotator agreement, natural language processing
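
    Below is a minimal sketch, in Python with scikit-learn, of the kind of agreement-filtered training/testing comparison the abstract describes. The file name, feature columns, and agreement flag are hypothetical placeholders, not the authors' actual data or feature set.

        # Hypothetical data: one row per feedback comment, with linguistic features,
        # a problem-localization label, and an inter-annotator agreement flag.
        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        df = pd.read_csv("feedback_annotations.csv")   # placeholder file name
        features = ["mentions_location", "negation_count", "quoted_text", "comment_length"]

        def accuracy(subset):
            """Cross-validated accuracy of a shallow, rule-like decision tree."""
            clf = DecisionTreeClassifier(max_depth=3, random_state=0)
            return cross_val_score(clf, subset[features], subset["localized"], cv=10).mean()

        agreed_only = df[df["annotators_agree"]]       # examples where annotators agree
        print("agreement-only accuracy:", accuracy(agreed_only))
        print("all-examples accuracy:  ", accuracy(df))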

    Evaluating topic-word review analysis for understanding student peer review performance

    © 2013 International Educational Data Mining Society. All rights reserved. Topic modeling is widely used for content analysis of textual documents. While the mined topic terms are considered a semantic abstraction of the original text, little work has evaluated how accurately humans interpret them in the context of an application built on those terms. Previously, we proposed RevExplore, an interactive peer-review analytic tool that supports teachers in making sense of large volumes of student peer reviews. To better evaluate the functionality of RevExplore, in this paper we take a closer look at its Natural Language Processing component, which automatically compares two groups of reviews at the topic-word level. We employ a user study to evaluate our topic extraction method, as well as the topic-word analysis approach, in the context of educational peer-review analysis. Our results show that the proposed method is better than a baseline at capturing student reviewing/writing performance. While users generally identify student writing/reviewing performance correctly, participants who have prior teaching or peer-review experience tend to perform better on our review exploration tasks and report higher satisfaction with the proposed review analysis approach.
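
    As a rough illustration (not RevExplore's implementation), the sketch below extracts the top topic words for two groups of reviews so the word lists can be contrasted side by side; the grouping labels and review texts are placeholders.

        # Extract the highest-weighted topic words per group of reviews for comparison.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        def top_topic_words(reviews, n_topics=2, n_words=6):
            vec = CountVectorizer(stop_words="english")
            counts = vec.fit_transform(reviews)
            lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(counts)
            vocab = vec.get_feature_names_out()
            return [[vocab[i] for i in topic.argsort()[::-1][:n_words]] for topic in lda.components_]

        # Placeholder reviews; in practice each group might hold reviews of, say,
        # high-graded vs. low-graded papers.
        group_a = ["The argument in section two lacks supporting evidence.",
                   "Claims are not backed by citations; add sources for the statistics."]
        group_b = ["Nice job overall, maybe fix a few typos.",
                   "Well written, I have no major suggestions."]
        for name, group in [("group A", group_a), ("group B", group_b)]:
            print(name, "->", top_topic_words(group))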

    Natural language processing techniques for researching and improving peer feedback

    Peer review has been viewed as a promising solution for improving students' writing, which remains a great challenge for educators. However, one core problem with peer review of writing is that potentially useful feedback from peers is not always presented in ways that lead to revision. Our prior investigations found that whether students implement feedback is significantly correlated with two feedback features: localization information and concrete solutions. However, hand-coding feedback for these features is time-intensive for researchers and instructors. We apply data mining and Natural Language Processing techniques to automatically code reviews for these feedback features. Our results show that it is feasible to provide intelligent support to peer review systems to automatically assess students' reviewing performance with respect to problem localization and solution. We also show that similar research conclusions about helpfulness perceptions of feedback across students and different expert types can be drawn from automatically coded data and from hand-coded data. © Earli
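
    The sketch below conveys the general idea under stated assumptions: the keyword cue lists and training pairs are made up for illustration and are not the paper's actual linguistic features or data.

        import re
        from sklearn.linear_model import LogisticRegression

        # Illustrative cues only; the paper's features come from NLP analysis,
        # not from these hand-picked regular expressions.
        LOCATION_CUES = re.compile(r"\b(page|paragraph|section|sentence|line|intro|conclusion)\b", re.I)
        SOLUTION_CUES = re.compile(r"\b(should|could|suggest|recommend|consider|add|remove|try)\b", re.I)

        def features(comment):
            return [len(LOCATION_CUES.findall(comment)),   # how often a location is named
                    len(SOLUTION_CUES.findall(comment)),   # how often a fix is proposed
                    len(comment.split())]                  # comment length as a crude proxy

        # Hypothetical hand-coded training pairs: (comment, contains problem localization?)
        train = [("The claim on page 2 needs a citation.", 1),
                 ("Great paper, I enjoyed reading it.", 0),
                 ("Paragraph 3 is confusing; consider splitting it.", 1),
                 ("Good work overall.", 0)]
        clf = LogisticRegression().fit([features(c) for c, _ in train], [y for _, y in train])
        print(clf.predict([features("Section 4 should cite the original study.")]))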

    Helpfulness Guided Review Summarization

    User-generated online reviews are an important information resource in people's everyday lives. As the review volume grows explosively, the ability to automatically identify and summarize useful information from reviews becomes essential for providing analytic services in many review-based applications. While prior work on review summarization focused on different review perspectives (e.g., topics, opinions, sentiment), the helpfulness of reviews is an important informativeness indicator that has been explored less frequently. In this thesis, we investigate automatic review helpfulness prediction and exploit review helpfulness for review summarization in distinct review domains. We explore two paths for predicting review helpfulness in a general setting: one is by tailoring existing helpfulness prediction techniques to a new review domain; the other is by using a general representation of review content that reflects review helpfulness across domains. For the first, we explore educational peer reviews and show how peer-review domain knowledge can be introduced to a helpfulness model developed for product reviews to improve prediction performance. For the second, we characterize review language usage, content diversity, and helpfulness-related topics with respect to different content sources using computational linguistic features. For review summarization, we propose to leverage user-provided helpfulness assessment during content selection in two ways: 1) using the review-level helpfulness ratings directly to filter out unhelpful reviews, and 2) developing sentence-level helpfulness features via supervised topic modeling for sentence selection. As a demonstration, we implement our methods within an extractive multi-document summarization framework and evaluate them in three user studies. Results show that our helpfulness-guided summarizers outperform the baseline in both human and automated evaluation for camera reviews and movie reviews. For educational peer reviews, however, the preference for helpfulness-guided summaries depends on student writing performance and prior teaching experience.
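
    The following is a simplified sketch of helpfulness-guided content selection; the thesis operates within a full extractive multi-document summarization framework, and the ratings, threshold, and centroid-similarity scorer here are illustrative assumptions only.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer

        def summarize(reviews, helpfulness, min_rating=4, n_sentences=2):
            """Drop reviews rated unhelpful, then pick sentences closest to the TF-IDF centroid."""
            kept = [r for r, score in zip(reviews, helpfulness) if score >= min_rating]
            sentences = [s.strip() for r in kept for s in r.split(".") if s.strip()]
            tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
            centroid = np.asarray(tfidf.mean(axis=0))      # average sentence vector
            scores = (tfidf @ centroid.T).ravel()          # similarity to the centroid
            top = sorted(np.argsort(scores)[::-1][:n_sentences])
            return [sentences[i] for i in top]

        # Placeholder camera reviews with user-provided helpfulness ratings (1-5).
        reviews = ["Battery life is short. The lens is sharp and focuses quickly.",
                   "Bad.",
                   "Video mode drains the battery fast. Menus are easy to navigate."]
        print(summarize(reviews, helpfulness=[5, 1, 4]))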

    Identifying problem localization in peer-review feedback

    In this paper, we use supervised machine learning to automatically identify problem localization in peer-review feedback. Using five features extracted via Natural Language Processing techniques, the learned model significantly outperforms a standard baseline. Our work suggests that it is feasible for future tutoring systems to generate assessments regarding the use of localization in student peer reviews. © 2010 Springer-Verlag