Instead of relying on a single ground truth for language processing tasks, several recent studies have examined how to represent and predict the labels assigned by individual annotators. However, these studies often have little or no information about the annotators, or work with only a small set of them. In this work, we examine a corpus of
social media posts about conflict from a set of 13k annotators and 210k
judgements of social norms. We provide a novel experimental setup that applies
personalization methods to the modeling of annotators and compare their
effectiveness for predicting the perception of social norms. We further
analyze performance across subsets of social situations that vary by the
closeness of the relationship between the parties in conflict, and assess where
personalization helps the most.
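One common family of personalization methods represents each annotator with a learned embedding that is combined with the post's text features before classification. The sketch below illustrates this idea in a minimal form; the dimensions, random weights, and the `predict` helper are all hypothetical and stand in for a trained model, not the specific architecture used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 annotators, 8-dim text features, 3-dim annotator
# embeddings. In a real model these would be learned jointly from data.
n_annotators, text_dim, emb_dim = 4, 8, 3
annotator_emb = rng.normal(size=(n_annotators, emb_dim))  # per-annotator vectors
w = rng.normal(size=text_dim + emb_dim)                   # shared classifier weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(text_features, annotator_id):
    """P(this annotator judges the post as a norm violation)."""
    # Concatenate shared text features with the annotator's embedding,
    # so the same post can receive different predictions per annotator.
    x = np.concatenate([text_features, annotator_emb[annotator_id]])
    return sigmoid(w @ x)

post = rng.normal(size=text_dim)  # stand-in for encoded post text
probs = [predict(post, a) for a in range(n_annotators)]
```

Because the annotator embedding enters the classifier directly, the same post yields annotator-specific probabilities, which is the core of embedding-based personalization.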