
    Ethical guidelines for nudging in information security & privacy

    There has recently been an upsurge of interest in the deployment of behavioural economics techniques in the information security and privacy domain. In this paper, we first consider the nature of one particular intervention, the nudge, and the way it exercises its influence. We contemplate the ethical ramifications of nudging, in its broadest sense, deriving general principles for ethical nudging from the literature. We extrapolate these principles to the deployment of nudging in information security and privacy. We explain how researchers can use these guidelines to ensure that they satisfy the ethical requirements during nudge trials in information security and privacy. Our guidelines also provide guidance to ethics review boards that are required to evaluate nudge-related research.

    Lessons learned from evaluating eight password nudges in the wild

    Background. The tension between security and convenience, when creating passwords, is well established. It is a tension that often leads users to create poor passwords. For security designers, three mitigation strategies exist: issuing passwords, mandating minimum strength levels, or encouraging better passwords. The first strategy prompts recording, the second reuse, but the third merits further investigation. It seemed promising to explore whether users could be subtly nudged towards stronger passwords. Aim. The aim of the study was to investigate the influence of visual nudges on self-chosen password length and/or strength. Method. A university application, enabling students to check course dates and review grades, was used to support two consecutive empirical studies over the course of two academic years. In total, 497 and 776 participants, respectively, were randomly assigned either to a control or an experimental group. Whereas the control group received no intervention, the experimental groups were presented with different visual nudges on the registration page of the web application whenever passwords were created. The experimental groups’ password strengths and lengths were then compared with those of the control group. Results. No impact of the visual nudges could be detected, in terms of either password strength or length. The ordinal score metric used to calculate password strength led to a decrease in variance and test power, so the inability to detect an effect does not definitively indicate that no such effect exists. Conclusion. We cannot conclude that the nudges had no effect on password strength. It might well be that an actual effect went undetected due to the experimental design choices. Another possible explanation for our result is that password choice is influenced by the user’s task, cognitive budget, goals and pre-existing routines; a simple visual nudge might not have the power to overcome these forces. Our lessons learned therefore recommend the use of a richer password strength quantification measure, and acknowledgement of the user’s context, in future studies.

    Nudging folks towards stronger password choices: providing certainty is the key

    Persuading people to choose strong passwords is challenging. One way to influence password strength, as and when people are making the choice, is to tweak the choice architecture to encourage stronger choices. A variety of choice architecture manipulations, i.e. “nudges”, have been trialled by researchers with a view to strengthening the overall password profile. None has made much of a difference so far. Here we report on our design of an influential behavioural intervention tailored to the password choice context: a hybrid nudge that significantly prompted stronger passwords. We carried out three longitudinal studies to analyse the efficacy of a range of “nudges” by manipulating the password choice architecture of an actual university web application. The first and second studies tested the efficacy of several simple visual framing “nudges”. Password strength did not budge. The third study tested expiration dates directly linked to password strength. This manipulation delivered a positive result: significantly longer and stronger passwords. Our main conclusion was that the final, successful nudge provided participants with absolute certainty as to the benefit of a stronger password, and that it was this certainty that made the difference.

    Guidelines for ethical nudging in password authentication

    Nudging has been adopted by many disciplines in the last decade in order to achieve behavioural change. Information security is no exception. A number of attempts have been made to nudge end-users towards stronger passwords. Here we report on our deployment of an enriched nudge, displayed to participants on the system enrolment page when a password has to be chosen. The enriched nudge was successful in that participants chose significantly longer and stronger passwords. One thing that struck us as we designed and tested this nudge was that we were unable to find any nudge-specific ethical guidelines to inform our experimentation in this context. This led us to reflect on the ethical implications of nudge testing, specifically in the password authentication context. We mined the nudge literature and derived a number of core principles of ethical nudging. We tailored these to the password authentication context, and then showed how they can be applied by assessing the ethics of our own nudge. We conclude with a set of preliminary guidelines, derived from our study, to inform other researchers planning to deploy nudge-related techniques in this context.

    POINTER: a GDPR-compliant framework for human pentesting (for SMEs)

    Penetration tests have become a valuable tool in any organisation’s arsenal for detecting vulnerabilities in its technical defences. Many organisations now also “penetration test” their employees, assessing their resilience and ability to repel human-targeted attacks. There are two problems with current frameworks: (1) few have been developed with SMEs in mind, and (2) many deploy spear phishing, thereby invading employee privacy, which could be illegal under the European General Data Protection Regulation (GDPR). We therefore propose the PoinTER (Prepare TEst Remediate) Human Pentesting Framework. We subjected this framework to expert review, and present it here to open a discourse on the issue of formulating a GDPR-compliant Privacy-Respecting Employee Pentest for SMEs.

    A Critical Look at Decentralized Personal Data Architectures

    While the Internet was conceived as a decentralized network, the most widely used web applications today tend toward centralization. Control increasingly rests with centralized service providers who, as a consequence, have also amassed unprecedented amounts of data about the behaviors and personalities of individuals. Developers, regulators, and consumer advocates have looked to alternative decentralized architectures as the natural response to threats posed by these centralized services. The result has been a great variety of solutions that include personal data stores (PDS), infomediaries, Vendor Relationship Management (VRM) systems, and federated and distributed social networks. And yet, for all these efforts, decentralized personal data architectures have seen little adoption. This position paper attempts to account for these failures, challenging the accepted wisdom in the web community on the feasibility and desirability of these approaches. We start with a historical discussion of the development of various categories of decentralized personal data architectures. Then we survey the main ideas to illustrate the common themes among these efforts. We tease apart the design characteristics of these systems from the social values that they (are intended to) promote. We use this understanding to point out numerous drawbacks of the decentralization paradigm, some inherent and others incidental. We end with recommendations for designers of these systems for working towards goals that are achievable, but perhaps more limited in scope and ambition.

    Technology, autonomy, and manipulation

    Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers: first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.

    ATM and cashpoint art: what’s at stake in designing against crime

    When Hammersmith Police approached the Design Against Crime Research Centre (DACRC) at the University of the Arts London for help in dealing with theft and fraud targeting users of ATMs, the DACRC team looked sideways, beyond traditional ‘security solutions’, collaborating with artist Steve Russell to help find new and creative ways of influencing behaviour around “cashpoints”. Hammersmith Police contacted DACRC because Prof. Lorraine Gamman, who directs the Centre, has written about design against pickpocketing and bag theft, and works closely with businesses in her role as advisor to the Home Office’s “Design Technology Alliance Against Crime”.