Labelling user data is a central part of the design and evaluation of
pervasive systems that aim to support the user through situation-aware
reasoning. It is essential both in designing and training the system to
recognise and reason about the situation, either through the definition of a
suitable situation model in knowledge-driven applications, or through the
preparation of training data for learning tasks in data-driven models. Hence,
the quality of annotations can have a significant impact on the performance of
the derived systems. Labelling is also vital for validating and quantifying the
performance of applications. In particular, comparative evaluations require the
production of benchmark datasets based on high-quality and consistent
annotations. With pervasive systems relying increasingly on large datasets for
designing and testing models of users' activities, the process of data
labelling is becoming a major concern for the community. In this work we
present a qualitative and quantitative analysis of the challenges associated
with the annotation of user data, together with possible strategies for addressing these
challenges. The analysis was based on the data gathered during the 1st
International Workshop on Annotation of useR Data for UbiquitOUs Systems
(ARDUOUS) and consisted of brainstorming, annotation, and questionnaire data
gathered during the talks, poster session, live annotation session, and
discussion session.