
    Why Users (Don’t) Use Password Managers at a Large Educational Institution

    We quantitatively investigated the current state of Password Manager (PM) usage and general password habits at a large, private university in the United States. Building on prior qualitative findings from SOUPS 2019, we surveyed n=277 faculty, staff, and students, finding that 77% of our participants already use PMs, and that users of third-party PMs, as opposed to browser-based PMs, were significantly less likely to reuse their passwords across accounts. The largest factor encouraging PM adoption is perceived ease-of-use, indicating that communication and institutional campaigns should focus more on usability factors. Additionally, our work indicates the need for design improvements for browser-based PMs to discourage password reuse, as they are more widely adopted.
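
    The reuse comparison above is a standard two-group categorical analysis. As a minimal illustrative sketch (the paper does not publish analysis code, and the counts below are invented), the difference in reuse rates between browser-based and third-party PM users could be checked with a chi-squared test of independence:

```python
# Hypothetical sketch: test whether password reuse is independent of PM type.
# The counts are made up for illustration; they are not the paper's data.
from scipy.stats import chi2_contingency

# Rows: PM type; columns: [reuses passwords, does not reuse]
contingency = [
    [120, 40],  # browser-based PM users (invented counts)
    [45, 72],   # third-party PM users (invented counts)
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value would support the claim that reuse rates differ by PM type.
```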

    I Think They're Trying To Tell Me Something: Advice Sources and Selection for Digital Security

    Users receive a great deal of digital- and physical-security advice every day. Indeed, if we implemented all the security advice we received, we would never leave our houses or use the Internet. Instead, users selectively choose some advice to accept and some (most) to reject; however, it is unclear whether they are effectively prioritizing what is most important or most useful. If we can understand from where users take security advice and how they subsequently develop security behaviors, we can develop more effective security interventions. As a first step, we conducted 25 semi-structured interviews of security-sensitive users (those who deal with sensitive data or hold security clearances) and general users. These interviews resulted in several key findings: (1) users' main sources of digital-security advice include IT professionals, workplaces, and negative events, whether experienced personally or retold through TV; (2) users determine whether to accept digital-security advice based on the trustworthiness of the advice source, as they feel inadequately equipped to evaluate the advice content; (3) users reject advice for many reasons, from believing that someone else is responsible for their security to finding that the advice contains too much marketing material or threatens their privacy; and (4) security-sensitive users differ from general users in a number of ways, including feeling that digital-security advice is more useful in their day-to-day lives and relying heavily on their workplace as a source of security information. These and our other findings inform a set of design recommendations for enhancing the efficacy of digital-security advice.

    Where is the Digital Divide? A Survey of Security, Privacy, and Socioeconomics

    The behavior of the least-secure user can influence security and privacy outcomes for everyone else. Thus, it is important to understand the factors that influence the security and privacy of a broad variety of people. Prior work has suggested that users with differing socioeconomic status (SES) may behave differently; however, no research has examined how SES, advice sources, and resources relate to the security and privacy incidents users report. To address this question, we analyze a 3,000-respondent, census-representative telephone survey. We find that, contrary to prior assumptions, people with lower educational attainment report as many or fewer incidents than more-educated people, and that users' experiences are significantly correlated with their advice sources, regardless of SES or resources.

    Designing a Data-Driven Survey System: Leveraging Participants’ Online Data to Personalize Surveys

    User surveys are essential to user-centered research in many fields, including human-computer interaction (HCI). Survey personalization—specifically, adapting questionnaires to the respondents' profiles and experiences—can improve the reliability and quality of responses. However, popular survey platforms lack usable mechanisms for seamlessly importing participants' data from other systems. This paper explores the design of a data-driven survey system to fill this gap. First, we conducted formative research, including a literature review and a survey of researchers (n = 52), to understand researchers' practices, experiences, needs, and interests in a data-driven survey system. We then designed and implemented a minimum viable product called Data-Driven Surveys (DDS), which enables including respondents' data from online service accounts (Fitbit, Instagram, Spotify, GitHub, etc.) in survey questions, answers, and flow/logic. Our system is free, open source, and extensible. It can enhance the survey research experience for both researchers and respondents.
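
    To make the personalization idea concrete, here is a minimal Python sketch of interpolating a respondent's account data into a question template. It uses GitHub's public REST API as the data source; the function and question wording are hypothetical and are not DDS's actual interface (DDS connects respondents' accounts and supports several services):

```python
# Hypothetical illustration of data-driven survey personalization:
# fetch a respondent's public GitHub profile and fill a question template.
# This is not DDS's real API; it only demonstrates the concept.
import requests

def personalized_question(github_username: str) -> str:
    resp = requests.get(
        f"https://api.github.com/users/{github_username}", timeout=10
    )
    resp.raise_for_status()
    repo_count = resp.json()["public_repos"]  # field in GitHub's REST API
    return (
        f"You have {repo_count} public repositories on GitHub. "
        "How often do you review security alerts for them?"
    )

print(personalized_question("octocat"))
```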

    A Summary of Survey Methodology Best Practices for Security and Privacy Researchers

    "Given a choice between dancing pigs and security, users will pick dancing pigs every time," warns an oft-cited quote from well-known security researcher Bruce Schneier. This issue of understanding how to make security tools and mechanisms work better for humans (often categorized as usability, broadly construed) has become increasingly important over the past 17 years, as illustrated by the growing body of research. Usable security and privacy research has improved our understanding of how to help users stay safe from phishing attacks, and control access to their accounts, as just three examples. One key technique for understanding and improving how human decision making affects security is the gathering of self-reported data from users. This data is typically gathered via survey and interview studies, and serves to inform the broader security and privacy community about user needs, behaviors, and beliefs. The quality of this data, and the validity of subsequent research results, depends on the choices researchers make when designing their experiments. Contained here is a set of essential guidelines for conducting self-report usability studies distilled from prior work in survey methodology and related fields. Other fields that rely on self-report data, such as the health and social sciences, have established guidelines and recommendations for collecting high quality self-report data

    How Well Do My Results Generalize? Comparing Security and Privacy Survey Results from MTurk and Web Panels to the U.S.

    Security and privacy researchers often rely on data collected from Amazon Mechanical Turk (MTurk) to evaluate security tools, to understand users' privacy preferences, to measure online behavior, and for other studies. While the demographics of MTurk are broader than some other options, researchers have also recently begun to use census-representative web panels to sample respondents with more representative demographics. Yet, we know little about whether security and privacy results from either of these data sources generalize to a broader population. In this paper, we compare the results of a survey about security and privacy knowledge, experiences, advice, and internet behavior distributed using MTurk (n=480), a nearly census-representative web panel (n=428), and a probabilistic telephone sample (n=3,000) statistically weighted to be accurate within 2.7% of the true prevalence in the U.S. Surprisingly, we find that MTurk responses are slightly more representative of the U.S. population than are responses from the census-representative panel, except for users who hold no more than a high-school diploma or who are 50 years of age or older. Further, we find that statistically weighting MTurk responses to balance demographics does not significantly improve generalizability. This leads us to hypothesize that differences between MTurkers and the general public are due not to demographics, but to differences in factors such as internet skill. Overall, our findings offer tempered encouragement for researchers using MTurk samples and enhance our ability to appropriately contextualize and interpret the results of crowdsourced security and privacy research.
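
    For readers unfamiliar with the weighting step mentioned above, the following is a hedged sketch of simple post-stratification: each response is re-weighted so that the sample's demographic cells match target population shares. The shares and outcome variable below are placeholders, not census figures or the paper's data:

```python
# Illustrative post-stratification weighting (placeholder numbers throughout).
import pandas as pd

sample = pd.DataFrame({
    "age_group": ["18-29", "18-29", "30-49", "50+", "30-49"],
    "had_incident": [1, 0, 0, 1, 0],  # hypothetical survey outcome
})

# Placeholder population shares (not real census figures).
population_share = {"18-29": 0.20, "30-49": 0.33, "50+": 0.47}

# Weight each respondent by (population share) / (sample share) of their cell.
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)

weighted_rate = (
    (sample["had_incident"] * sample["weight"]).sum() / sample["weight"].sum()
)
print(f"weighted incident rate: {weighted_rate:.1%}")
```

    In practice researchers typically rake across several demographic variables at once rather than a single one, but the weight-as-ratio-of-shares idea is the same.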