Systematic Review of Approaches to Improve Peer Assessment at Scale
Peer Assessment, the task of peers analyzing and commenting on a student's writing, is at the core of all educational components, both on campus and in MOOCs. However, given the sheer scale of MOOCs and their inherently personalized, open-ended learning, automatic grading and tools that assist grading at scale are highly important. Previously we presented a survey on the tasks of post classification and knowledge tracing, ending with a brief review of Peer Assessment (PA) and some of its initial problems. In this review we continue the review of PA from the perspective of improving the review process itself. As such, the rest of this review focuses on three facets of PA, namely auto-grading and peer assessment tools (we look only at how peer reviews and auto-grading are carried out), strategies to handle rogue reviews, and peer review improvement using Natural Language Processing. The consolidated set of papers and resources used is released at https://github.com/manikandan-ravikiran/cs6460-Survey-2.
Comment: This is a review assignment and a work in progress; expected to be updated regularly.
Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining
There is an increasing focus on model-based dialog evaluation metrics such as ADEM, RUBER, and the more recent BERT-based metrics. These models aim to assign a high score to all relevant responses and a low score to all irrelevant responses. Ideally, such models should be trained using multiple relevant and irrelevant responses for any given context. However, no such data is publicly available, and hence existing models are usually trained using a single relevant response and multiple randomly selected responses from other contexts (random negatives).
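
As a rough illustration of this random-negatives training setup, here is a minimal sketch with invented toy data (not the authors' code):

    import random

    # Toy corpus: each entry pairs a dialog context with its single gold response.
    corpus = [
        {"context": "How was your weekend?", "response": "Great, I went hiking."},
        {"context": "Did you finish the report?", "response": "Almost, one section left."},
        {"context": "Any plans for dinner?", "response": "Thinking of ordering pizza."},
    ]

    def make_training_pairs(corpus, num_negatives=2, seed=0):
        """Build (context, response, label) triples: label 1 for the gold
        response, label 0 for responses randomly drawn from other contexts."""
        rng = random.Random(seed)
        pairs = []
        for i, ex in enumerate(corpus):
            pairs.append((ex["context"], ex["response"], 1))
            others = [c for j, c in enumerate(corpus) if j != i]
            for neg in rng.sample(others, k=min(num_negatives, len(others))):
                pairs.append((ex["context"], neg["response"], 0))  # random negative
        return pairs

    for ctx, resp, label in make_training_pairs(corpus):
        print(label, ctx, "->", resp)

Each context thus contributes one positive pair and several negative pairs whose responses come from unrelated contexts.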
To allow for better training and robust evaluation of model-based metrics, we introduce the DailyDialog++ dataset, consisting of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context.
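
A record in such a dataset could look like the following; the field names and example responses are invented here to illustrate the described structure, and the released format may differ:

    # Hypothetical record mirroring the described structure: one context,
    # five relevant responses, and five adversarial irrelevant responses
    # that share surface vocabulary with the context.
    record = {
        "context": ["Hi!", "Hey, long time no see.", "Yeah, how have you been?"],
        "relevant_responses": [
            "Pretty good, just busy with work.",
            "I have been great, thanks for asking.",
            "Not bad, I moved to a new city.",
            "Doing well. How about you?",
            "Busy, but in a good way.",
        ],
        "adversarial_irrelevant_responses": [
            "Long books take me a long time to finish.",
            "You can see the sea from my window.",
            "Time flies when the train is on time.",
            "I have not seen that movie yet.",
            "Hay is stacked in the barn every summer.",
        ],
    }
    assert len(record["relevant_responses"]) == 5
    assert len(record["adversarial_irrelevant_responses"]) == 5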
Using this dataset, we first show that even in the presence of multiple correct references, n-gram based metrics and embedding based metrics do not perform well at separating relevant responses from even random negatives. While model-based metrics perform better than n-gram and embedding based metrics on random negatives, their performance drops substantially when evaluated on adversarial examples.
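
To make this separation test concrete, one might measure how often a scoring function ranks a relevant response above an irrelevant one; the bag-of-words cosine below is a crude stand-in for the n-gram and embedding based metrics actually evaluated, and all data is invented:

    from collections import Counter
    from math import sqrt

    def cosine_overlap(a: str, b: str) -> float:
        """Crude bag-of-words cosine similarity, standing in for an
        embedding-based relevance score between a context and a response."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[w] * vb[w] for w in va)
        norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
        return dot / norm if norm else 0.0

    def separation_accuracy(triples):
        """Fraction of (context, relevant, irrelevant) triples in which the
        relevant response scores strictly higher than the irrelevant one."""
        hits = sum(
            cosine_overlap(ctx, pos) > cosine_overlap(ctx, neg)
            for ctx, pos, neg in triples
        )
        return hits / len(triples)

    triples = [
        ("how was the hiking trip", "the trip was great we hiked all day",
         "my report is due on friday"),
        ("did you finish the report", "yes i finished the report",
         "the hiking trail was muddy"),
    ]
    print(separation_accuracy(triples))  # 1.0 on these toy triples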
To check if large-scale pretraining could help, we propose a new BERT-based evaluation metric called DEB, which is pretrained on 727M Reddit conversations and then finetuned on our dataset.
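
A BERT-based metric of this general flavor can be sketched as a sequence-pair classifier. This is a loose approximation of the recipe, not DEB's actual architecture or released checkpoint; in practice the model would be pretrained on conversational data and finetuned on DailyDialog++ before its scores mean anything:

    # Minimal sketch of a BERT-style model-based metric: score a
    # (context, response) pair with a sequence-pair classifier.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2  # label 1 = relevant, 0 = irrelevant
    )
    model.eval()

    def relevance_score(context: str, response: str) -> float:
        """Probability that `response` is relevant to `context`. Meaningful
        only after the classifier head has been finetuned on labeled pairs."""
        inputs = tokenizer(context, response, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        return torch.softmax(logits, dim=-1)[0, 1].item()

    print(relevance_score("How was your weekend?", "Great, I went hiking."))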
DEB significantly outperforms existing models, showing better correlation with human judgements and better performance on random negatives (88.27% accuracy). However, its performance again drops substantially when evaluated on adversarial responses, thereby highlighting that even large-scale pretrained evaluation models are not robust to the adversarial examples in our dataset. The dataset and code are publicly available.
Comment: Accepted for publication in TACL