    "Yeah, it does have a...Windows `98 Vibe'': Usability Study of Security Features in Programmable Logic Controllers

    Programmable Logic Controllers (PLCs) drive industrial processes critical to society, e.g., water treatment and distribution, and electricity and fuel networks. Search engines such as Shodan have highlighted that PLCs are often left exposed to the Internet, with misconfigured security settings among the main reasons. This raises the question of why these misconfigurations occur and, specifically, whether the usability of security controls plays a part. To date, the usability of configuring PLC security mechanisms has not been studied. We present the first investigation, through a task-based study and subsequent semi-structured interviews (N=19). We explore the usability of PLC connection configurations and two key security mechanisms, i.e., access levels and user administration. We find that unfamiliar labels, layouts, and misleading terminology exacerbate an already complex process of configuring security mechanisms. Our results uncover various (mis)perceptions about the security controls and show how design constraints, e.g., safety and the lack of regular updates (due to the long-term nature of such systems), pose significant challenges to the realization of modern HCI and usability principles. Based on these findings, we provide design recommendations to bring usable security in industrial settings on par with its IT counterpart.
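
    As an illustrative aside on how such Internet exposure is found, below is a minimal sketch using the official shodan Python library to count exposed hosts speaking common PLC protocols; the API key placeholder and the port-based queries (Siemens S7 on 102, Modbus/TCP on 502) are assumptions for illustration, not from the paper.

        # Minimal sketch: counting Internet-exposed hosts that speak common
        # PLC protocols via the Shodan API. The API key is a placeholder and
        # the port-based queries are illustrative assumptions.
        import shodan

        API_KEY = "YOUR_SHODAN_API_KEY"  # hypothetical placeholder
        api = shodan.Shodan(API_KEY)

        QUERIES = {
            "Siemens S7": "port:102",  # S7comm, common on Siemens PLCs
            "Modbus": "port:502",      # Modbus/TCP, widespread in ICS
        }

        for name, query in QUERIES.items():
            try:
                # count() returns totals without downloading result pages
                total = api.count(query)["total"]
                print(f"{name}: {total} exposed hosts")
            except shodan.APIError as err:
                print(f"{name}: query failed ({err})")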

    Multiuser Privacy and Security Conflicts in the Cloud

    Collaborative cloud platforms make it easier and more convenient for multiple users to work together on files (Google Docs, Office 365) and to store and share them (Dropbox, OneDrive). However, this can lead to privacy and security conflicts between the users involved, for instance when a user adds someone to a shared folder or changes its permissions. Such multiuser conflicts (MCs), though known in the literature to occur, have not yet been studied in depth. In this paper, we report a study with 1,050 participants about MCs they experienced in the cloud. We show which MCs arise when multiple users work together in the cloud, how and why they arise, their prevalence and severity, their consequences for users, and how users work around them. We derive recommendations for designing mechanisms to help users avoid, mitigate, and resolve MCs in the cloud.
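
    The abstract does not formalize MCs, but as a purely hypothetical sketch of one conflict type it mentions (a user adding someone to a shared folder without consulting the other collaborators), the folder model and method below are invented for illustration.

        # Purely hypothetical model of one MC type from the abstract: a user
        # adds someone to a shared folder without consulting the others.
        from dataclasses import dataclass, field

        @dataclass
        class SharedFolder:
            owner: str
            permissions: dict[str, str] = field(default_factory=dict)  # user -> "edit"/"view"

            def change_permission(self, actor: str, target: str, level: str) -> list[str]:
                """Apply a permission change and return the other collaborators
                it affects, i.e., the potential parties to a multiuser conflict."""
                affected = [u for u in self.permissions if u not in (actor, target)]
                self.permissions[target] = level
                return affected

        folder = SharedFolder(owner="alice",
                              permissions={"alice": "edit", "bob": "edit", "carol": "view"})
        # Bob adds Dave with edit rights; Alice and Carol were not consulted.
        print(folder.change_permission(actor="bob", target="dave", level="edit"))
        # -> ['alice', 'carol']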

    Voice App Developer Experiences with Alexa and Google Assistant: Juggling Risks, Liability, and Security

    Voice applications (voice apps) are a key element in Voice Assistant ecosystems such as Amazon Alexa and Google Assistant, as they provide assistants with a wide range of capabilities that users can invoke with a voice command. Most voice apps, however, are developed by third parties (i.e., not by Amazon/Google) and are included in the ecosystem through marketplaces akin to smartphone app stores, but with crucial differences: e.g., the voice app code is not hosted by the marketplace and is not run on the local device. Previous research has studied the security and privacy issues of voice apps in the wild, finding evidence of bad practices by voice app developers. However, developers' perspectives are yet to be explored. In this paper, we report a qualitative study of the experiences of voice app developers and the challenges they face. Our findings suggest that: 1) developers face several risks due to liability pushed onto them by the more powerful voice assistant platforms, which are linked to negative privacy and security outcomes on those platforms; and 2) there are key issues around monetization, privacy, design, and testing rooted in problems with the voice app certification process. We discuss the implications of our results for voice app developers, platforms, regulators, and research on voice app development and certification. Comment: To be presented at USENIX Security 202
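
    To make concrete the marketplace difference the abstract highlights (voice app code is neither hosted by the marketplace nor run on the local device), here is a heavily simplified sketch of a third-party skill backend as a developer-hosted HTTP endpoint; the Flask setup is an assumption, and the request-signature verification a real Alexa skill requires is omitted.

        # Heavily simplified sketch of a developer-hosted Alexa skill backend
        # (assumption: Flask). The marketplace only lists the skill; this code
        # runs on the developer's own server and receives user requests as
        # JSON. Request-signature verification, mandatory in production, is
        # omitted for brevity.
        from flask import Flask, request, jsonify

        app = Flask(__name__)

        def speech(text: str) -> dict:
            # Wrap text in the Alexa JSON response envelope.
            return {
                "version": "1.0",
                "response": {
                    "outputSpeech": {"type": "PlainText", "text": text},
                    "shouldEndSession": True,
                },
            }

        @app.route("/alexa", methods=["POST"])
        def alexa_webhook():
            req = request.get_json()["request"]
            if req["type"] == "LaunchRequest":
                return jsonify(speech("Welcome to the example skill."))
            if req["type"] == "IntentRequest":
                return jsonify(speech(f"You invoked {req['intent']['name']}."))
            return jsonify(speech("Goodbye."))

        if __name__ == "__main__":
            app.run(port=5000)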

    Co-creating a Transdisciplinary Map of Technology-mediated Harms, Risks and Vulnerabilities: Challenges, Ambivalences and Opportunities

    The phrase "online harms" has emerged in recent years out of a growing political willingness to address the ethical and social issues associated with the use of the Internet and digital technology at large. The broad landscape that surrounds online harms gathers a multitude of disciplinary, sectoral, and organizational efforts while raising myriad challenges and opportunities for crossing entrenched boundaries. In this paper, we draw lessons from a journey of co-creating a transdisciplinary knowledge infrastructure within a large research initiative animated by the online harms agenda. We begin with a reflection on the implications of mapping, taxonomizing, and constructing knowledge infrastructures, and a brief review of how online harm and adjacent themes have been theorized and classified in the literature to date. Grounded in our own experience of co-creating a map of online harms, we then argue that the map, and the process of mapping, perform three mutually constitutive functions, acting simultaneously as method, medium, and provocation. We draw lessons from how an open-ended approach to mapping, despite not guaranteeing consensus, can foster productive debate and collaboration in ethically and politically fraught areas of research. We end with a call for CSCW research to surface and engage with the multiple temporalities, social lives, and political sensibilities of knowledge infrastructures. Comment: 21 pages, 8 figures; to appear in the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing, October 13-18, 2023, Minneapolis, MN, US

    Privacy Norms for Smart Home Personal Assistants

    Smart Home Personal Assistants (SPA) have a complex ecosystem that enables them to carry out various tasks on behalf of the user with just voice commands. SPA capabilities are continually growing, with over a hundred thousand third-party skills in Amazon Alexa covering several categories, from tasks within the home (e.g., managing smart devices) to tasks beyond its boundaries (e.g., purchasing online, booking a ride). In the SPA ecosystem, information flows through several entities, including SPA providers, third-party skill providers, smart device providers, other users, and external parties. Prior studies have not explored privacy norms in the SPA ecosystem, i.e., the acceptability of these information flows. In this paper, we study privacy norms in SPAs based on Contextual Integrity through a large-scale study with 1,738 participants. We also study the influence that the Contextual Integrity parameters and personal factors have on privacy norms. Further, we identify similarities among the studied norms in terms of their Contextual Integrity parameters in order to distill more general privacy norms, which could be useful, for instance, to establish suitable privacy defaults in SPA. We finally provide recommendations for SPA and third-party skill providers based on the privacy norms studied.

    [Note]: If you would like to use the dataset, please remember to cite our paper:
    - Noura Abdi, Xiao Zhan, Kopo M. Ramokapane, and Jose Such. 2021. Privacy Norms for Smart Home Personal Assistants. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14. https://dl.acm.org/doi/10.1145/3411764.3445122

    The following two publications, in AIES and ECAI, were the first to use this dataset. We encourage you to read them and consider citing them in acknowledgment of that pioneering use:
    - Xiao Zhan, Stefan Sarkadi, Natalia Criado, and Jose Such. 2022. A Model for Governing Information Sharing in Smart Assistants. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES '22). 11 pages. https://doi.org/10.1145/3514094.3534129
    - Xiao Zhan, Stefan Sarkadi, and Jose Such. 2023. Privacy-enhanced AI Assistants based on Dialogues and Case Similarity. In Proceedings of the 2023 European Conference on Artificial Intelligence (ECAI). https://kclpure.kcl.ac.uk/ws/portalfiles/portal/224456858/264Zhan.pdf

    [Other research using our dataset]:
    - Marc Serramia Amoros, William Seymour, Natalia Criado, and Michael Luck. 2023. Predicting Privacy Preferences for Smart Devices as Norms. In The 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS). IFAAMAS. https://arxiv.org/pdf/2302.10650.pdf

    Here we list all the supplementary materials relevant to the paper.
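
    As a rough, hypothetical sketch (not the paper's dataset encoding), an information flow of the kind whose acceptability the study measures can be written down directly in terms of its Contextual Integrity parameters.

        # Hypothetical sketch: an SPA information flow written in terms of its
        # Contextual Integrity (CI) parameters. Field values and the
        # acceptability scale are illustrative assumptions, not the paper's
        # actual dataset encoding.
        from dataclasses import dataclass

        @dataclass(frozen=True)  # frozen -> hashable, usable as a dict key
        class InformationFlow:
            sender: str                  # e.g., the SPA provider
            recipient: str               # e.g., a third-party skill provider
            subject: str                 # whose data flows
            information_type: str        # e.g., voice recordings
            transmission_principle: str  # condition under which data flows

        flow = InformationFlow(
            sender="SPA provider",
            recipient="third-party skill provider",
            subject="user",
            information_type="voice recordings",
            transmission_principle="with explicit consent",
        )
        # A norm as an assumed average acceptability rating on a -2..+2 scale;
        # such ratings could back privacy defaults, as the abstract suggests.
        norms = {flow: 1.2}
        print(norms[flow])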
