79 research outputs found

    Effective online privacy mechanisms with persuasive communication

    Get PDF
    This thesis contributes to research by taking a social-psychological perspective on managing privacy online. It proposes to support the effort required to form a mental model for evaluating a context against privacy attitudes, or to ease that effort by biasing the activation of privacy attitudes. Because privacy is a behavioural concept, human-computer interaction design plays a major role in supporting and contributing to end users’ ability to manage their privacy online. However, unless privacy attitudes are activated or made accessible, end users’ behaviour will not necessarily match their attitudes. This perspective helps explain why online privacy mechanisms have long been found to be ineffective. Privacy academics and practitioners were queried for their opinions on aspects of usable privacy design. An evaluation of existing privacy mechanisms (social network services, internet browsers’ privacy tabs and e-commerce websites) against privacy experts’ requirements reveals that these mechanisms do not provide for the social-psychological processes of privacy management. This is evidenced by communication breakdowns within the interaction design, the absence of the disclosure-privacy dialectical tension, and a lack of disclosure context and visibility of privacy means. The thesis draws on established research in social psychology on the attitude-behaviour relationship. It proposes persuasive communication to support the privacy management process, that is, to enable end-user control of privacy while preserving typical usability criteria such as minimum effort and ease of use. An experimental user study in an e-commerce context provides evidence that, in the presence of persuasive triggers supporting the disclosure-privacy dialectic within a context of disclosure, end users can engage in privacy behaviour that matches their privacy concerns.
Reminders for privacy actions with a message that is personally relevant or carries a privacy argument result in significantly more privacy behaviour than a simple reminder. However, reminders with an attractive source that is not linked to privacy can distract end users from privacy behaviour, such that the observed response is similar to that for the simple reminder. This finding is significant for the research space, since it supports the use of persuasive communication within the human-computer interaction of privacy designs as a powerful tool for enabling attitude activation and accessibility, so that cognitive evaluation of an attitude object can be triggered and end users have a higher likelihood of responding with privacy behaviour. It also supports the view that privacy designs which do not consider their interaction with privacy attitudes, or their influence on behaviour, can turn out to be ineffective even when they meet typical usability criteria. More research into the social-psychological aspects of online privacy management would benefit the research space. Further research could determine the strength of the activated or accessible privacy attitude produced by particular persuasive triggers and the extent of the resulting privacy behaviour. Longitudinal studies could also help in better understanding online privacy behaviour and in designing more effective and usable online privacy mechanisms.

    How Can and Would People Protect From Online Tracking?

    Get PDF
    Online tracking is complex and users find it challenging to protect themselves from it. While the academic community has extensively studied systems and users for tracking practices, the link between data protection regulations, websites’ practices of presenting privacy-enhancing technologies (PETs), and how users learn about PETs and practice them is not clear. This paper takes a multidimensional approach to find such a link. We conduct a study to evaluate the 100 top EU websites, where we find that information about PETs is provided far beyond the cookie notice. We also find that opting out from privacy settings is not as easy as opting in, and becomes even more difficult (if not impossible) when the user decides to opt out of previously accepted privacy settings. In addition, we conduct an online survey with 614 participants across three countries (UK, France, Germany) to gain a broad understanding of users’ tracking protection practices. We find that users mostly learn about PETs for tracking protection via their own research or with the help of family and friends. We find a disparity between what websites offer as tracking protection and the ways individuals report protecting themselves. Observing such a disparity sheds light on why current policies and practices are ineffective in supporting the use of PETs by users.

    Cybercrimes in the aftermath of COVID-19: Present concerns and future directions

    Get PDF
    Cybercrimes are broadly defined as criminal activities carried out using computers or computer networks. Given the rapid and considerable shifts in Internet use and the impact of the COVID-19 pandemic on cybercrime rates, online behaviours have attracted increased public and policy attention. In this article, we map the landscape of cybercrime in the UK by first reviewing legislation and policy, then examining barriers to reporting and addressing investigative challenges. Given the indisputable rise in cybercrime and its mental-health impacts, we propose a four-facet approach for research and practice in this field, with an eye to systemic shifts and strategies to combat cybercrime holistically: community alliances and social support, state intervention, and infrastructural sensitivity to user diversity. Lastly, empirical evidence from research guides the design of data-driven technology and the provision of advice and interventions for a safer digital landscape, hence the importance of more informative research.

    ‘We’re not that gullible!’ Revealing dark pattern mental models of 11-12 year-old Scottish children

    Get PDF
    Deceptive techniques known as dark patterns specifically target online users. Children are particularly vulnerable as they may lack the skills to recognise and resist these deceptive attempts. To be effective, interventions to forewarn and forearm should build on a comprehensive understanding of children’s existing mental models. To this end, we carried out a study with 11-12 year-old Scottish children to reveal their mental models of dark patterns. They were acutely aware of online deception, referring to deployers as being ‘up to no good’. Yet, they were overly vigilant and construed worst-case outcomes, with even a benign warning triggering suspicion. We recommend that, rather than focusing on specific instances of dark patterns in awareness raising, interventions should prioritise improving children’s understanding of the characteristics of, and the motivations behind, deceptive online techniques. By so doing, we can help them to develop a more robust defence against these deceptive practices.

    Towards an equitable digital society:artificial intelligence (AI) and corporate digital responsibility (CDR)

    Get PDF
    In the digital era, we witness the increasing use of artificial intelligence (AI) to solve problems while improving productivity and efficiency. Yet, inevitably, costs are involved in delegating power to algorithmically based systems, some of whose workings are opaque and unobservable and thus termed the “black box”. Central to understanding the “black box” is to acknowledge that the algorithm is not mendaciously undertaking this action; it is simply using the recombination afforded to scaled computable machine-learning algorithms. Yet an algorithm with arbitrary precision can easily reconstruct such characteristics and make life-changing decisions, particularly in financial services (credit scoring, risk assessment, etc.), and it can be difficult to establish whether this was done in a fair manner reflecting the values of society. If we permit AI to make life-changing decisions, what are the opportunity costs, data trade-offs, and implications for social, economic, technical, legal, and environmental systems? We find that over 160 ethical AI principles exist, urging organisations to act responsibly to avoid causing digital societal harms. This maelstrom of guidance, none of which is compulsory, serves to confuse rather than guide. We need to think carefully about how we implement these algorithms and delegate decisions and data usage in the absence of human oversight and AI governance. The paper seeks to harmonise and align approaches, illustrating the opportunities and threats of AI, while raising awareness of Corporate Digital Responsibility (CDR) as a potential collaborative mechanism to demystify governance complexity and to establish an equitable digital society.

    Identifying and Supporting Financially Vulnerable Consumers in a Privacy-Preserving Manner: A Use Case Using Decentralised Identifiers and Verifiable Credentials

    Get PDF
    Vulnerable individuals have a limited ability to make reasonable financial decisions and choices and, thus, the level of care that is appropriate to be provided to them by financial institutions may be different from that required for other consumers. Therefore, identifying vulnerability is of central importance for the design and effective provision of financial services and products. However, validating the information that customers share and respecting their privacy are both particularly important in finance, and this poses a challenge for identifying and caring for vulnerable populations. This position paper examines the potential of the combination of two emerging technologies, Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs), for the identification of vulnerable consumers in finance in an efficient and privacy-preserving manner.

    Comment: Published in the ACM CHI 2021 workshop on Designing for New Forms of Vulnerability
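    As a hedged illustration of the data structure the abstract above refers to, the sketch below models a W3C-style Verifiable Credential as a plain Python dictionary. The field names follow the W3C VC data model, but the issuer DID, subject DID, and the "financialVulnerability" claim are hypothetical, and a real credential would also carry a cryptographic proof section that is omitted here.

    ```python
    # Minimal sketch of a Verifiable Credential attesting a single attribute.
    # All DIDs and the claim name are hypothetical examples, not from the paper.
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": "did:example:support-charity",       # hypothetical issuer DID
        "credentialSubject": {
            "id": "did:example:consumer-123",          # hypothetical subject DID
            "financialVulnerability": True,            # the single attested claim
        },
    }

    def subject_is_vulnerable(vc: dict) -> bool:
        """Read only the attested attribute; the verifier learns nothing else
        about the subject, which is the privacy-preserving property at stake."""
        return bool(vc.get("credentialSubject", {}).get("financialVulnerability"))

    print(subject_is_vulnerable(credential))  # True
    ```

    The point of the design is selective disclosure: a financial institution checking such a credential learns one yes/no attribute vouched for by a trusted issuer, rather than the underlying personal circumstances.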
