    "Yeah, it does have a...Windows `98 Vibe'': Usability Study of Security Features in Programmable Logic Controllers

    Programmable Logic Controllers (PLCs) drive industrial processes critical to society, e.g., water treatment and distribution, electricity and fuel networks. Search engines (e.g., Shodan) have highlighted that PLCs are often left exposed to the Internet, one of the main reasons being the misconfiguration of security settings. This leads to the question: why do these misconfigurations occur and, specifically, does the usability of security controls play a part? To date, the usability of configuring PLC security mechanisms has not been studied. We present the first investigation, through a task-based study and subsequent semi-structured interviews (N=19). We explore the usability of PLC connection configurations and two key security mechanisms (i.e., access levels and user administration). We find that the use of unfamiliar labels, layouts and misleading terminology exacerbates an already complex process of configuring security mechanisms. Our results uncover various (mis)perceptions about the security controls and how design constraints, e.g., safety and the lack of regular updates (due to the long-term nature of such systems), pose significant challenges to the realization of modern HCI and usability principles. Based on these findings, we provide design recommendations to bring usable security in industrial settings on par with its IT counterpart.
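    As background for the exposure the abstract alludes to, the sketch below shows roughly how Internet-exposed PLCs are surfaced with Shodan's Python client. This is a minimal illustration, not from the paper: the API key placeholder and the choice of port 102 (Siemens S7 communication) as an example PLC-related query are assumptions.

```python
# Minimal sketch: counting Internet-exposed PLCs with Shodan's Python client.
# Assumptions not from the paper: the API key placeholder and the use of
# port 102 (Siemens S7 communication) as an example PLC-related query.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # hypothetical placeholder key

# Count hosts that answer on port 102, commonly associated with Siemens S7 PLCs.
result = api.count("port:102")
print(f"Hosts matching the query: {result['total']}")
```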

    Multiuser Privacy and Security Conflicts in the Cloud

    Collaborative cloud platforms make it easier and more convenient for multiple users to work together on files (Google Docs, Office 365) and to store and share them (Dropbox, OneDrive). However, this can lead to privacy and security conflicts between the users involved, for instance when a user adds someone to a shared folder or changes its permissions. Such multiuser conflicts (MCs), though known in the literature to happen, have not yet been studied in depth. In this paper, we report a study with 1,050 participants about MCs they experienced in the cloud. We show which MCs arise when multiple users work together in the cloud, how and why they arise, their prevalence and severity, their consequences for users, and how users work around them. We derive recommendations for designing mechanisms that help users avoid, mitigate, and resolve MCs in the cloud.
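    To make the shared-folder scenario concrete, the toy model below sketches how a single user's permission change silently affects everyone else, which is the root of an MC. All names here (SharedFolder, propose_change) are illustrative assumptions, not a mechanism proposed in the paper or an API of any real cloud platform.

```python
# Toy model of a multiuser conflict (MC) in a shared cloud folder.
# All names (SharedFolder, propose_change) are illustrative assumptions,
# not an API from the paper or from any real cloud platform.
from dataclasses import dataclass, field

@dataclass
class SharedFolder:
    owner: str
    collaborators: set[str] = field(default_factory=set)
    permissions: dict[str, str] = field(default_factory=dict)  # user -> role

    def propose_change(self, actor: str, user: str, role: str) -> None:
        """Apply a permission change, warning the other collaborators first."""
        affected = (self.collaborators | {self.owner}) - {actor}
        if affected:
            # One user's action silently affects everyone else: the root of an MC.
            print(f"Notify {sorted(affected)}: {actor} sets {user} -> {role}")
        self.permissions[user] = role
        self.collaborators.add(user)

folder = SharedFolder(owner="alice", collaborators={"bob"})
folder.propose_change(actor="alice", user="carol", role="editor")
```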

    Voice App Developer Experiences with Alexa and Google Assistant: Juggling Risks, Liability, and Security

    Voice applications (voice apps) are a key element in Voice Assistant ecosystems such as Amazon Alexa and Google Assistant, as they provide assistants with a wide range of capabilities that users can invoke with a voice command. Most voice apps, however, are developed by third parties (i.e., not by Amazon/Google), and they are included in the ecosystem through marketplaces akin to smartphone app stores but with crucial differences, e.g., the voice app code is not hosted by the marketplace and is not run on the local device. Previous research has studied the security and privacy issues of voice apps in the wild, finding evidence of bad practices by voice app developers. However, developers' perspectives are yet to be explored. In this paper, we report a qualitative study of the experiences of voice app developers and the challenges they face. Our findings suggest that: 1) developers face several risks due to liability pushed onto them by the more powerful voice assistant platforms, which are linked to negative privacy and security outcomes on those platforms; and 2) there are key issues around monetization, privacy, design, and testing rooted in problems with the voice app certification process. We discuss the implications of our results for voice app developers, platforms, regulators, and research on voice app development and certification.
    Comment: To be presented at USENIX Security 202
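    To illustrate the architecture described above, where a third-party voice app's code runs on the developer's own endpoint rather than on the device or in the marketplace, here is a minimal sketch of an Alexa skill backend written as an AWS Lambda handler. The intent name "HelloIntent" and the reply text are illustrative assumptions; the response envelope follows the Alexa Skills Kit JSON format.

```python
# Minimal sketch of a third-party Alexa skill backend, deployed as an AWS
# Lambda handler on the developer's own infrastructure -- the marketplace
# lists the skill but never hosts or runs this code. The intent name
# "HelloIntent" and the reply text are illustrative assumptions.

def handler(event, context):
    """Entry point Alexa invokes over HTTPS for each user utterance."""
    request = event.get("request", {})
    if (request.get("type") == "IntentRequest"
            and request.get("intent", {}).get("name") == "HelloIntent"):
        text = "Hello from a third-party voice app."
    else:
        text = "Sorry, I did not catch that."

    # Alexa Skills Kit response envelope (version 1.0 JSON format).
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```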

    Co-creating a Transdisciplinary Map of Technology-mediated Harms, Risks and Vulnerabilities: Challenges, Ambivalences and Opportunities

    The phrase "online harms" has emerged in recent years out of a growing political willingness to address the ethical and social issues associated with the use of the Internet and digital technology at large. The broad landscape that surrounds online harms gathers a multitude of disciplinary, sectoral and organizational efforts while raising myriad challenges and opportunities for crossing entrenched boundaries. In this paper we draw lessons from a journey of co-creating a transdisciplinary knowledge infrastructure within a large research initiative animated by the online harms agenda. We begin with a reflection on the implications of mapping, taxonomizing and constructing knowledge infrastructures, and a brief review of how online harm and adjacent themes have been theorized and classified in the literature to date. Grounded in our own experience of co-creating a map of online harms, we then argue that the map -- and the process of mapping -- perform three mutually constitutive functions, acting simultaneously as method, medium and provocation. We draw lessons from how an open-ended approach to mapping, despite not guaranteeing consensus, can foster productive debate and collaboration in ethically and politically fraught areas of research. We end with a call for CSCW research to surface and engage with the multiple temporalities, social lives and political sensibilities of knowledge infrastructures.
    Comment: 21 pages, 8 figures; to appear in the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing, October 13-18, 2023, Minneapolis, MN, USA
