
Gathering realistic authentication performance data through field trials

Abstract

Most evaluations of novel authentication mechanisms have been conducted under laboratory conditions. We argue that the results of short-term usage under laboratory conditions do not predict user performance “in the wild”, because there is insufficient time between enrolment and testing, the number of authentications is low, and authentication is presented as a primary task rather than the secondary task it is “in the wild”. User-generated reports of performance, on the other hand, provide subjective data, so reports on frequency of use, time intervals, and success or failure of authentication are subject to the vagaries of users’ memories. Studies on authentication that provide objective performance data under real-world conditions are rare. In this paper, we present our experiences with a study method that tries to control the frequency and timing of authentication and collects reliable performance data, while at the same time maintaining the ecological validity of the authentication context. We describe the development of an authentication server called APET, which allows us to prompt users enrolled in trial cohorts to authenticate at controlled intervals, and report our initial experiences with trials. We conclude by discussing the remaining challenges in obtaining reliable performance data through a field trial method such as this one.
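The core idea described above, prompting enrolled participants at controlled intervals and logging each authentication outcome objectively on the server side, can be sketched as follows. This is a minimal illustrative sketch only; the class and field names (`PromptScheduler`, `AuthAttempt`, `success_rate`) are hypothetical and are not taken from the APET implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List


@dataclass
class AuthAttempt:
    """One server-side record of a prompted authentication (hypothetical schema)."""
    participant_id: str
    prompted_at: datetime
    succeeded: bool


@dataclass
class PromptScheduler:
    """Schedules prompts at a controlled interval and logs objective outcomes,
    rather than relying on participants' self-reports."""
    interval: timedelta
    log: List[AuthAttempt] = field(default_factory=list)

    def next_prompt(self, last_prompt: datetime) -> datetime:
        # The experimenter, not the participant, controls when the next
        # authentication happens.
        return last_prompt + self.interval

    def record(self, participant_id: str, prompted_at: datetime,
               succeeded: bool) -> None:
        # Timestamps and outcomes are logged server-side, so frequency,
        # intervals, and success rates do not depend on participant memory.
        self.log.append(AuthAttempt(participant_id, prompted_at, succeeded))

    def success_rate(self) -> float:
        if not self.log:
            return 0.0
        return sum(a.succeeded for a in self.log) / len(self.log)
```

For example, with a two-day interval a participant prompted on 1 January would next be prompted on 3 January, and the success rate is computed directly from the logged attempts rather than from recall.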
