
Grounding truth via ordinal annotation

Abstract

The question of how best to annotate affect within available content has been a milestone challenge for affective computing. Appropriate methods and tools addressing that question can provide better estimations of the ground truth which, in turn, may lead to more efficient affect detection and more reliable models of affect. This paper introduces a rank-based real-time annotation tool, which we name AffectRank, and compares it against the popular rating-based real-time FeelTrace tool through a proof-of-concept video annotation experiment. The results obtained suggest that the proposed rank-based (ordinal) annotation approach yields significantly higher inter-rater reliability and, thereby, a better approximation of the underlying ground truth. The key findings of the paper demonstrate that the current dominant practice in continuous affect annotation via rating-based labeling is detrimental to advancements in the field of affective computing.

The authors would like to thank all annotators who participated in the reported experiments. We would also like to thank Gary Hili and Ryan Abela for providing access to the Eryi dataset. The work is supported, in part, by the EU-funded FP7 ICT iLearnRW project (project no: 318803).
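To illustrate the intuition behind the abstract's claim, the sketch below (not taken from the paper; the window length, dead-zone threshold, and agreement measure are illustrative assumptions) converts a continuous, FeelTrace-style rating trace into ordinal "up/down/stable" change labels per time window and then compares two annotators' labels. Two annotators may disagree strongly on absolute rating values yet agree perfectly on the direction of change, which is the kind of effect that can raise inter-rater reliability for ordinal labels.

```python
# Minimal sketch (assumptions: window-based annotation, three ordinal labels,
# simple percent agreement as a stand-in for the reliability statistics used
# in the paper).
from typing import List

def ratings_to_ordinal(ratings: List[float], threshold: float = 0.0) -> List[int]:
    """Convert a continuous rating trace (one value per window) into ordinal
    change labels: +1 (up), -1 (down), 0 (stable). The threshold is a
    hypothetical dead zone that ignores tiny fluctuations."""
    labels = []
    for prev, curr in zip(ratings, ratings[1:]):
        diff = curr - prev
        if diff > threshold:
            labels.append(1)
        elif diff < -threshold:
            labels.append(-1)
        else:
            labels.append(0)
    return labels

def pairwise_agreement(a: List[int], b: List[int]) -> float:
    """Proportion of windows on which two annotators give the same ordinal label."""
    assert len(a) == len(b) and len(a) > 0
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Two annotators rate the same clip on a continuous scale: their absolute
# values differ, but their ordinal (change) labels coincide.
annotator_1 = [0.10, 0.30, 0.55, 0.50, 0.20]
annotator_2 = [0.40, 0.60, 0.90, 0.82, 0.50]
print(pairwise_agreement(ratings_to_ordinal(annotator_1, 0.03),
                         ratings_to_ordinal(annotator_2, 0.03)))  # -> 1.0
```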
