On strategies for budget-based online annotation in human activity recognition

Abstract

Bootstrapping activity recognition systems in ubiquitous and mobile computing scenarios often comes with the challenge of obtaining reliable ground truth annotations. A promising approach to overcoming these difficulties is to obtain online activity annotations directly from users. However, such direct engagement has its limitations, as users typically show only limited tolerance for unwanted interruptions such as prompts for annotations. In this paper we explore the effectiveness of approaches to online, user-based annotation of activity data. Our central assumption is the existence of a fixed, limited budget of annotations a user is willing to provide. We evaluate different strategies for spending such a budget most effectively. Using the Opportunity benchmark we simulate online annotation scenarios for a variety of budget configurations, and we show that effective online annotation can still be achieved with reduced annotation effort.
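The core setting described above can be illustrated with a minimal simulation: a labeled activity stream is processed online, and a querying strategy decides at each step whether to spend one unit of a fixed annotation budget by prompting the (simulated) user. The stream format, the `uniform` strategy, and all names here are illustrative assumptions, not the paper's actual protocol or evaluation code.

```python
def annotate_stream(stream, budget, strategy):
    """Simulate online annotation under a fixed label budget.

    stream: iterable of (features, true_label) pairs arriving in order.
    budget: maximum number of annotations the user will provide.
    strategy: callable (t, features, spent, budget) -> bool deciding
              whether to prompt the user at time step t.
    Returns a dict mapping time step -> label obtained from the user.
    """
    labels = {}
    spent = 0
    for t, (x, y) in enumerate(stream):
        if spent >= budget:
            break  # budget exhausted; no further prompts
        if strategy(t, x, spent, budget):
            labels[t] = y  # the prompted user supplies the ground-truth label
            spent += 1
    return labels


def uniform(interval):
    """Hypothetical baseline strategy: prompt at fixed intervals,
    spreading the budget evenly across the stream."""
    return lambda t, x, spent, budget: t % interval == 0


# Toy stream of 100 samples cycling through 4 activity classes.
stream = [((i,), i % 4) for i in range(100)]
labels = annotate_stream(stream, budget=10, strategy=uniform(10))
```

Other strategies (e.g. prompting only when a model is uncertain) plug into the same `strategy` slot, which is what makes budget configurations directly comparable in such a simulation.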