Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation
Predicting ratings from text reviews is a rapidly growing research area with
a wide range of applications in natural language processing. However, most
existing methods either use hand-crafted features or learn features with deep
learning from a flat text corpus, ignoring the hierarchical structure of the
data. In this paper, we propose a Hierarchical Bi-directional Self-attention
Network framework (HabNet) for paper review rating prediction and
recommendation, which can serve as an effective decision-making tool for the
academic paper review process. Specifically, we
leverage the hierarchical structure of the paper reviews with three levels of
encoders: sentence encoder (level one), intra-review encoder (level two) and
inter-review encoder (level three). Each encoder first derives a contextual
representation at its own level and then builds a higher-level representation
from it. After training, the model identifies useful predictors for the final
acceptance decision and helps uncover inconsistencies between numerical review
ratings and the sentiment conveyed in the review text.
Furthermore, we introduce two new metrics for evaluating models under data
imbalance. Extensive experiments on a publicly available dataset (PeerRead)
and our own collected dataset (OpenReview) demonstrate the superiority of the
proposed approach compared with state-of-the-art methods.

Comment: Accepted by COLING 202
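The three-level encoding described in the abstract (sentence, intra-review, inter-review) can be illustrated with a minimal sketch. This is not the paper's implementation: it uses plain single-head scaled dot-product self-attention plus attention pooling at each level, with random toy data, and all function names and the query vectors `w1`, `w2`, `w3` are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (seq_len, d). Single-head scaled dot-product self-attention
    # producing a contextual representation for each position.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ X

def attention_pool(H, w):
    # Collapse a sequence of contextual vectors H (seq_len, d) into a
    # single higher-level vector via a query vector w (random here;
    # learned in a real model).
    a = softmax(H @ w)      # attention weights, shape (seq_len,)
    return a @ H            # pooled vector, shape (d,)

rng = np.random.default_rng(0)
d = 8
# Toy paper: 2 reviews, each with 3 sentences of 4 word vectors.
paper = [[rng.normal(size=(4, d)) for _ in range(3)] for _ in range(2)]
w1, w2, w3 = (rng.normal(size=d) for _ in range(3))

# Level 1: sentence encoder -> one vector per sentence.
sent_vecs = [[attention_pool(self_attention(S), w1) for S in review]
             for review in paper]
# Level 2: intra-review encoder -> one vector per review.
rev_vecs = [attention_pool(self_attention(np.stack(sv)), w2)
            for sv in sent_vecs]
# Level 3: inter-review encoder -> one vector for the whole paper.
paper_vec = attention_pool(self_attention(np.stack(rev_vecs)), w3)
print(paper_vec.shape)  # (8,)
```

In a full model, the pooled paper vector would feed a classifier for the acceptance decision, and the per-review vectors could be compared against the numerical ratings to surface rating/sentiment inconsistencies.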