Better the Devil You Know: A User Study of Two CAPTCHAs and a Possible Replacement
CAPTCHAs are difficult for humans to use, causing frustration. Alternatives have been proposed, but user studies equate usability to solvability. We consider the user perspective to include workload and context of use. We assess traditional text-based CAPTCHAs alongside PlayThru, a 'gamified' verification mechanism, and NoBot, which uses face biometrics. A total of 87 participants were tasked with ticket-buying across three conditions: (1) all three mechanisms in comparison, and NoBot three times, (2) on a laptop, and (3) on a tablet. A range of quantitative and qualitative measurements explored the user perspective. Quantitative results showed that participants completed reCAPTCHAs quickest, followed by PlayThru and NoBot. Participants were critical of NoBot in comparison but praised it in isolation. Despite reporting negative experiences with reCAPTCHAs, they were the preferred mechanism, due to familiarity and a sense of security and control. Although NoBot was slower, participants praised its completion speeds, but regarded using personal images as an invasion of privacy
"I don’t like putting my face on the Internet!": An acceptance study of face biometrics as a CAPTCHA replacement
Biometric technologies have the potential to reduce the effort involved in securing personal activities online, such as purchasing goods and services. Verifying that a user session on a website is attributable to a real human is one candidate application, especially as the existing CAPTCHA technology is burdensome and can frustrate users. Here we examine the viability of biometrics as part of the consumer experience in this space. We invited 87 participants to take part in a lab study, using a realistic ticket-buying website with a range of human verification mechanisms including a face biometric technology. User perceptions and acceptance of the various security technologies were explored through interviews and a range of questionnaires within the study. The results show that some users wanted reassurance that their personal image will be protected or discarded after verifying, whereas others felt that if they saw enough people using face biometrics they would feel assured that it was trustworthy. Face biometrics were seen by some participants to be more suitable for high-security contexts, and by others as providing extra personal data that had unacceptable privacy implications
Towards robust experimental design for user studies in security and privacy
Background: Human beings are an integral part of computer security, whether we actively participate or simply build the systems. Despite this importance, understanding users and their interaction with security is a blind spot for most security practitioners and designers. / Aim: Define principles for conducting experiments into usable security and privacy, to improve study robustness and usefulness. / Data: The authors' experiences conducting several research projects, complemented with a literature survey. / Method: We extract principles based on relevance to the advancement of the state of the art. We then justify our choices by providing published experiments as cases where the principles are and are not followed in practice, to demonstrate the impact. Each principle is a discipline-specific instantiation of desirable experiment-design elements as previously established in the domain of philosophy of science. / Results: Five high-priority principles: (i) give participants a primary task; (ii) incorporate realistic risk; (iii) avoid priming the participants; (iv) perform double-blind experiments whenever possible; and (v) think carefully about how meaning is assigned to the terms threat model, security, privacy, and usability. / Conclusion: The principles do not replace researcher acumen or experience; however, they can provide a valuable service by facilitating evaluation, guiding younger researchers and students, and marking a baseline common language for discussing further improvements
Applying Cognitive Control Modes to Identify Security Fatigue Hotspots
Security tasks can burden the individual, to the extent that security fatigue promotes habits that undermine security. Here we revisit a series of user-centred studies which focus on security mechanisms as part of regular routines, such as two-factor authentication. By examining routine security behaviours, these studies expose perceived contributors and consequences of security fatigue, and the strategies that a person may adopt when feeling overburdened by security. Behaviours and strategies are framed according to a model of cognitive control modes, to explore the role of human performance and error in producing security fatigue. Security tasks are then considered in terms of modes such as unconscious routines and knowledge-based ad-hoc approaches. Conscious attention can support adaptation to novel security situations, but is error-prone and tiring; both simple security routines and technology-driven automation can minimise effort, but may miss cues from the environment that a nuanced response is required
Usable biometrics for an ageing population
In this chapter, we examine the implications of ageing for the usability of biometric solutions. We first set out what usability means, and which factors need to be considered when designing a solution that is ‘usable’. We review usability successes and issues with past biometric techniques, in the context of a set of solutions, before considering how usability will be affected for ageing users because of the physical and cognitive changes they undergo. Finally, we identify the opportunities and challenges that ageing presents for researchers, developers and operators of biometric systems
Dead on Arrival: Recovering from Fatal Flaws in Email Encryption Tools
Background. Since Whitten and Tygar’s seminal study of PGP 5.0 in 1999, there have been continuing efforts to produce email encryption tools for adoption by a wider user base, where these efforts vary in how well they consider the usability and utility needs of prospective users. Aim. We conducted a study aiming to assess the user experience of two open-source encryption software tools – Enigmail and Mailvelope. Method. We carried out a three-part user study (installation, home use, and debrief) with two groups of users using either Enigmail or Mailvelope. Users had access to help during installation (installation guide and experimenter with domain-specific knowledge), and were set a primary task of organising a mock flash mob using encrypted emails in the course of a week. Results. Participants struggled to install the tools – they would not have been able to complete installation without help. Even with help, setup time was around 40 minutes. Participants using Mailvelope failed to encrypt their initial emails due to usability problems. Participants said they were unlikely to continue using the tools after the study, indicating that their creators must also consider utility. Conclusions. Through our mixed study approach, we conclude that Mailvelope and Enigmail had too many software quality and usability issues to be adopted by mainstream users. Methodologically, the study made us rethink the role of the experimenter as that of a helper assisting novice users with setting up a demanding technology
Don't work. Can't work? Why it's time to rethink security warnings
As the number of Internet users has grown, so have the security threats that they face online. Security warnings are one key strategy for trying to warn users about those threats; but recently, it has been questioned whether they are effective. We conducted a study in which 120 participants brought their own laptops to a usability test of a new academic article summary tool. They encountered a PDF download warning for one of the papers. All participants noticed the warning, but 98 (81.7%) downloaded the PDF file that triggered it. There was no significant difference between responses to a brief generic warning, and a longer specific one. The participants who heeded the warning were overwhelmingly female, and either had previous experience with viruses or lower levels of computing skills. Our analysis of the reasons for ignoring warnings shows that participants have become desensitised by frequent exposure and false alarms, and think they can recognise security risks. At the same time, their answers revealed some misunderstandings about security threats: for instance, they rely on anti-virus software to protect them from a wide range of threats, and do not believe that PDF files can infect their machine with viruses. We conclude that security warnings in their current forms are largely ineffective, and will remain so, unless the number of false positives can be reduced
"They brought in the horrible key ring thing!" Analysing the Usability of Two-Factor Authentication in UK Online Banking
To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions -- especially those adding a token -- to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary kept over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvement are (i) reducing the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience
The Security Blanket of the Chat World: An Analytic Evaluation and a User Study of Telegram
The computer security community has advocated widespread adoption of secure communication tools to protect personal privacy. Several popular communication tools have adopted end-to-end encryption (e.g., WhatsApp, iMessage), or promoted security features as selling points (e.g., Telegram, Signal). However, previous studies have shown that users may not understand the security features of the tools they are using, and may not be using them correctly. In this paper, we present a study of Telegram using two complementary methods: (1) a lab-based user study (11 novices and 11 Telegram users), and (2) a hybrid analytical approach combining cognitive walk-through and heuristic evaluation to analyse Telegram's user interface. Participants who use Telegram feel secure because they feel they are using a secure tool, but in reality Telegram offers limited security benefits to most of its users. Most participants develop a habit of using the less secure default chat mode at all times. We also uncover several user interface design issues that impact security, including technical jargon, inconsistent use of terminology, and making some security features clear and others not. For instance, use of the end-to-end-encrypted Secret Chat mode requires both the sender and recipient to be online at the same time, and Secret Chat does not support group conversations
- …