
    You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings

    Many popular web browsers now include active phishing warnings, since research has shown that passive warnings are often ignored. In this laboratory study we examine the effectiveness of these warnings and investigate if, how, and why they fail users. We simulated a spear-phishing attack to expose users to browser warnings. We found that 97% of our sixty participants fell for at least one of the phishing messages that we sent them. However, we also found that when presented with the active warnings, 79% of participants heeded them, which was not the case for the passive warning that we tested, where only one participant heeded the warnings. Using a model from the warning sciences, we analyze how users perceive warning messages and offer suggestions for creating more effective phishing warnings.

    Phinding Phish: Evaluating Anti-Phishing Tools

    There are currently dozens of freely available tools to combat phishing and other web-based scams, many of which are web browser extensions that warn users when they are browsing a suspected phishing site. We developed an automated test bed for testing anti-phishing tools. We used 200 verified phishing URLs from two sources and 516 legitimate URLs to test the effectiveness of 10 popular anti-phishing tools. Only one tool was able to consistently identify more than 90% of phishing URLs correctly; however, it also incorrectly identified 42% of legitimate URLs as phish. The performance of the other tools varied considerably depending on the source of the phishing URLs. Of these remaining tools, only one correctly identified over 60% of phishing URLs from both sources. Performance also changed significantly depending on the freshness of the phishing URLs tested. Thus we demonstrate that the source of phishing URLs and the freshness of the URLs tested can significantly impact the results of anti-phishing tool testing. We also demonstrate that many of the tools we tested were vulnerable to simple exploits. In this paper we describe our anti-phishing tool test bed, summarize our findings, and offer observations about the effectiveness of these tools as well as ways they might be improved.
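The core accounting in an evaluation like this is a true-positive rate over known phishing URLs and a false-positive rate over known legitimate URLs. The sketch below is illustrative only, not the authors' actual test bed; the `detection_rates` function and all URLs are hypothetical.

```python
def detection_rates(verdicts, labels):
    """Given one tool's boolean verdicts ({url: flagged_as_phish}) and
    ground-truth labels ({url: "phish" or "legit"}), return the fraction
    of phishing URLs correctly flagged (true-positive rate) and the
    fraction of legitimate URLs incorrectly flagged (false-positive rate)."""
    phish = [u for u, lab in labels.items() if lab == "phish"]
    legit = [u for u, lab in labels.items() if lab == "legit"]
    tpr = sum(verdicts[u] for u in phish) / len(phish)
    fpr = sum(verdicts[u] for u in legit) / len(legit)
    return tpr, fpr

# Hypothetical labeled URL set and one tool's verdicts.
labels = {"http://phish1.example": "phish",
          "http://phish2.example": "phish",
          "http://bank.example": "legit",
          "http://shop.example": "legit"}
verdicts = {"http://phish1.example": True,   # correctly flagged
            "http://phish2.example": False,  # missed phish
            "http://bank.example": True,     # false alarm
            "http://shop.example": False}    # correctly passed

tpr, fpr = detection_rates(verdicts, labels)
print(tpr, fpr)  # 0.5 0.5
```

Reporting both rates together is what exposes trade-offs like the one in the abstract: a tool catching over 90% of phish is not usable if it also flags 42% of legitimate sites.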

    Crying Wolf: An Empirical Study of SSL Warning Effectiveness

    Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.

    P3P Deployment on Websites

    We studied the deployment of computer-readable privacy policies encoded using the standard W3C Platform for Privacy Preferences (P3P) format to inform questions about P3P's usefulness to end users and researchers. We found that P3P adoption is increasing overall and that P3P adoption rates vary greatly across industries. We found that P3P had been deployed on 10% of the sites returned in the top-20 results of typical searches, and on 21% of the sites returned in the top-20 results of e-commerce searches. We examined a set of over 5,000 websites in both 2003 and 2006 and found that P3P deployment among these sites increased over that time period, although we observed decreases in some sectors. In the Fall of 2007 we observed 470 new P3P policies created over a two-month period. We found high rates of syntax errors among P3P policies, but much lower rates of critical errors that prevent a P3P user agent from interpreting them. We also found that most P3P policies have discrepancies with their natural language counterparts. Some of these discrepancies can be attributed to ambiguities, while others cause the two policies to have completely different meanings. Finally, we show that the privacy policies of P3P-enabled popular websites are similar to the privacy policies of popular websites that do not use P3P.
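P3P policies are XML documents (namespace `http://www.w3.org/2002/01/P3Pv1`), so the most basic "critical" error a user agent can hit is a policy that does not parse at all. The following is a hedged sketch of that distinction, not the study's actual validator; the sample policy is illustrative and deliberately not schema-complete, and `check_policy` is a hypothetical helper.

```python
import xml.etree.ElementTree as ET

P3P_NS = "{http://www.w3.org/2002/01/P3Pv1}"

# Illustrative (not schema-complete) P3P policy fragment.
policy_xml = """\
<POLICIES xmlns="http://www.w3.org/2002/01/P3Pv1">
  <POLICY name="example" discuri="http://www.example.com/privacy.html">
    <STATEMENT>
      <PURPOSE><current/></PURPOSE>
      <RECIPIENT><ours/></RECIPIENT>
      <RETENTION><stated-purpose/></RETENTION>
      <DATA-GROUP><DATA ref="#dynamic.clickstream"/></DATA-GROUP>
    </STATEMENT>
  </POLICY>
</POLICIES>"""

def check_policy(xml_text):
    """Return the number of STATEMENT elements if the policy parses,
    or None if it is malformed in a way no user agent could recover from."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return None
    return len(root.findall(f".//{P3P_NS}STATEMENT"))

print(check_policy(policy_xml))        # 1
print(check_policy(policy_xml[:-3]))   # None (truncated, unparseable XML)
```

A real P3P user agent does far more than well-formedness checking (schema validation, vocabulary checks, comparison against user preferences), which is why the study could separate frequent minor syntax errors from the rarer errors that actually block interpretation.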

    Do or do not, there is no try: User engagement may not improve security outcomes

    Computer security problems often occur when there are disconnects between users' understanding of their role in computer security and what is expected of them. To help users make good security decisions more easily, we need insights into the challenges they face in their daily computer usage. We built and deployed the Security Behavior Observatory (SBO) to collect data on user behavior and machine configurations from participants' home computers. Combining SBO data with user interviews, this paper presents a qualitative study comparing users' attitudes, behaviors, and understanding of computer security to the actual states of their computers. Qualitative inductive thematic analysis of the interviews produced "engagement" as the overarching theme, whereby participants with greater engagement in computer security and maintenance did not necessarily have more secure computer states. Thus, user engagement alone may not be predictive of computer security. We identify several other themes that inform future directions for better design and research into security interventions. Our findings emphasize the need for better understanding of how users' computers get infected, so that we can more effectively design user-centered mitigations.