
Don't classify ratings of affect; rank them!

How should affect be appropriately annotated, and how should machine learning best be employed to map manifestations of affect to affect annotations? What is the use of ratings of affect for the study of affective computing, and how should we treat them? These are the key questions this paper attempts to address by investigating the impact of dissimilar representations of annotated affect on the efficacy of affect modelling. In particular, we compare several different binary-class and pairwise preference representations for automatically learning from ratings of affect. The representations are compared and tested on three datasets: one synthetic dataset (testing “in vitro”) and two affective datasets (testing “in vivo”). The synthetic dataset couples a number of attributes with generated rating values. The two affective datasets contain physiological and contextual user attributes, and speech attributes, respectively; these attributes are coupled with ratings of various affective and cognitive states. The main results of the paper suggest that ratings (when used) should be naturally transformed to ordinal (ranked) representations for obtaining more reliable and generalisable models of affect. The findings of this paper have a direct impact on affect annotation and modelling research but, most importantly, challenge the traditional state-of-practice in affective computing and psychometrics at large.
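To illustrate the contrast the abstract draws, the following minimal sketch (not the paper's code; function names and the toy data are illustrative assumptions) shows how the same set of affect ratings can be turned either into binary class labels around a threshold or into pairwise (ordinal) preferences, where each pair orders the higher-rated example before the lower-rated one.

```python
from itertools import combinations

def to_binary_classes(ratings, threshold):
    """Binary-class representation: 1 if the rating exceeds the threshold, else 0."""
    return [1 if r > threshold else 0 for r in ratings]

def to_pairwise_preferences(features, ratings):
    """Ordinal (pairwise preference) representation.

    For every pair of examples with unequal ratings, emit
    (features_preferred, features_other), with the higher-rated example first.
    Ties carry no ordinal information and are skipped.
    """
    pairs = []
    for i, j in combinations(range(len(ratings)), 2):
        if ratings[i] > ratings[j]:
            pairs.append((features[i], features[j]))
        elif ratings[j] > ratings[i]:
            pairs.append((features[j], features[i]))
    return pairs

if __name__ == "__main__":
    # Toy data: three examples with 2-dimensional attribute vectors
    # and affect ratings on a 1-5 scale (purely illustrative).
    features = [[0.2, 0.7], [0.9, 0.1], [0.5, 0.5]]
    ratings = [2, 5, 3]

    print(to_binary_classes(ratings, threshold=3))   # [0, 1, 0]
    print(to_pairwise_preferences(features, ratings))
    # [([0.9, 0.1], [0.2, 0.7]), ([0.5, 0.5], [0.2, 0.7]), ([0.9, 0.1], [0.5, 0.5])]
```

The ordinal pairs could then feed a preference-learning method (e.g. a rank-based model), whereas the binary labels would feed a standard classifier; the paper's argument is that the former preserves the relative nature of ratings and yields more reliable, generalisable models.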