5 research outputs found
A Framework for Reasoning About the Human in the Loop
Many secure systems rely on a “human in the loop” to perform security-critical functions. However, humans often
fail in their security roles. Whenever possible, secure system designers should find ways of keeping humans out of
the loop. However, there are some tasks for which feasible or cost-effective alternatives to humans are not available.
In these cases secure system designers should engineer their systems to support the humans in the loop and maximize
their chances of performing their security-critical functions successfully. We propose a framework for reasoning
about the human in the loop that provides a systematic approach to identifying potential causes for human failure.
This framework can be used by system designers to identify problem areas before a system is built and proactively
address deficiencies. System operators can also use this framework to analyze the root cause of security
failures that have been attributed to “human error.” We provide examples to illustrate the applicability of this
framework to a variety of secure systems design problems, including anti-phishing warnings and password policies.
Behavioral Response to Phishing Risk
Tools that aim to combat phishing attacks must take into account
how and why people fall for them in order to be effective. This
study reports a pilot survey of 232 computer users to reveal
predictors of falling for phishing emails, as well as trusting
legitimate emails. Previous work suggests that people may be
vulnerable to phishing schemes because their awareness of the
risks is not linked to perceived vulnerability or to useful strategies
in identifying phishing emails. In this survey, we explore what
factors are associated with falling for phishing attacks in a role-play
exercise. Our data suggest that deeper understanding of the
web environment, such as being able to correctly interpret URLs
and understanding what a lock signifies, is associated with less
vulnerability to phishing attacks. Perceived severity of the
consequences does not predict behavior. These results suggest that
educational efforts should aim to increase users’ intuitive
understanding, rather than merely warning them about risks.
Lessons Learned from the Deployment of a Smartphone-Based Access-Control System
Grey is a smartphone-based system by which a user can exercise
her authority to gain access to rooms in our university building,
and by which she can delegate that authority to other users. We present
findings from a trial of Grey, with emphasis on how common
usability principles manifest themselves in a smartphone-based security
application. In particular, we demonstrate aspects of the system
that gave rise to failures, misunderstandings, misperceptions,
and unintended uses; network effects and new flexibility enabled by
Grey; and the implications of these for user behavior. We argue that
the manner in which usability principles emerged in the context of
Grey can inform the design of other such applications.
P3P Deployment on Websites
We studied the deployment of computer-readable privacy policies encoded using
the standard W3C Platform for Privacy Preferences (P3P) format to inform
questions about P3P’s usefulness to end users and researchers. We found that P3P
adoption is increasing overall and that P3P adoption rates greatly vary across industries.
We found that P3P had been deployed on 10% of the sites returned in the
top-20 results of typical searches, and on 21% of the sites returned in the top-20
results of e-commerce searches. We examined a set of over 5,000 web sites in both
2003 and 2006 and found that P3P deployment among these sites increased over
that time period, although we observed decreases in some sectors. In the Fall of 2007
we observed 470 new P3P policies created over a two month period. We found high
rates of syntax errors among P3P policies, but much lower rates of critical errors
that prevent a P3P user agent from interpreting them. We also found that most P3P
policies have discrepancies with their natural language counterparts. Some of these
discrepancies can be attributed to ambiguities, while others cause the two policies
to have completely different meanings. Finally, we show that the privacy policies of
P3P-enabled popular websites are similar to the privacy policies of popular websites
that do not use P3P.
Getting Users to Pay Attention to Anti-Phishing Education: Evaluation of Retention and Transfer
Educational materials designed to teach users not to fall for
phishing attacks are widely available but are often ignored by
users. In this paper, we extend an embedded training methodology
using learning science principles in which phishing education is
made part of a primary task for users. The goal is to motivate
users to pay attention to the training materials. In embedded
training, users are sent simulated phishing attacks and trained after
they fall for the attacks. Prior studies tested users immediately
after training and demonstrated that embedded training improved
users’ ability to identify phishing emails and websites. In the
present study, we tested users to determine how well they retained
knowledge gained through embedded training and how well they
transferred this knowledge to identify other types of phishing
emails. We also compared the effectiveness of the same training
materials delivered via embedded training and delivered as regular
email messages. In our experiments, we found that: (a) users learn
more effectively when the training materials are presented after
users fall for the attack (embedded) than when the same training
materials are sent by email (non-embedded); (b) users retain and
transfer more knowledge after embedded training than after non-embedded
training; and (c) users with higher Cognitive Reflection
Test (CRT) scores are more likely than users with lower CRT
scores to click on the links in the phishing emails from companies
with which they have no account.