
    Implicit Contextual Integrity in Online Social Networks

    Many real incidents demonstrate that users of Online Social Networks need mechanisms that help them manage their interactions by increasing their awareness of the different contexts that coexist in Online Social Networks, preventing them from exchanging inappropriate information in those contexts, and stopping sensitive information from being disseminated from some contexts to others. Contextual Integrity is a privacy theory that conceptualises the appropriateness of information sharing based on the contexts in which that information is to be shared. Computational models of Contextual Integrity assume the existence of well-defined contexts, in which individuals enact pre-defined roles and information sharing is governed by an explicit set of norms. However, contexts in Online Social Networks are known to be implicit, unknown a priori, and ever-changing; users' relationships are constantly evolving; and the information-sharing norms are implicit. This makes current Contextual Integrity models unsuitable for Online Social Networks. In this paper, we propose the first computational model of Implicit Contextual Integrity, presenting an information model for Implicit Contextual Integrity as well as a so-called Information Assistant Agent that uses the information model to learn implicit contexts, relationships, and information-sharing norms in order to help users avoid inappropriate information exchanges and undesired information disseminations. Through an experimental evaluation, we validate the properties of the proposed model.
    In particular, Information Assistant Agents are shown to: (i) infer the information-sharing norms even if only a small proportion of the users follow the norms and in the presence of malicious users; (ii) help reduce the exchange of inappropriate information and the dissemination of sensitive information with only a partial view of the system and of the information received and sent by their users; and (iii) minimise the burden on users in terms of raising unnecessary alerts.
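    The norm-inference behaviour described above can be illustrated with a minimal frequency-based sketch. This is a hypothetical toy, not the paper's actual model: the class name, the threshold parameter, and the counting scheme are all assumptions made for illustration. The idea is that an agent with only a partial view counts how often an information type is shared in a context, and infers a prohibition norm when the observed share rate is low, which remains robust when norm-followers merely outnumber deviating or malicious users.

```python
from collections import defaultdict

class InformationAssistantAgent:
    """Hypothetical sketch of implicit norm inference. The agent only
    observes the messages its own user sends and receives (a partial
    view of the system) and never relies on pre-defined norms."""

    def __init__(self, threshold=0.3):
        # Share rates below this threshold are treated as evidence of an
        # implicit "do not share" norm (threshold value is an assumption).
        self.threshold = threshold
        # (context, topic) -> [times shared, times observed]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, context, topic, was_shared):
        entry = self.counts[(context, topic)]
        entry[0] += int(was_shared)
        entry[1] += 1

    def should_alert(self, context, topic):
        shared, total = self.counts[(context, topic)]
        if total == 0:
            return False  # no evidence yet: avoid raising unnecessary alerts
        return shared / total < self.threshold

agent = InformationAssistantAgent()
# Most observed users avoid sharing health information in a work context...
for _ in range(8):
    agent.observe("work", "health", was_shared=False)
# ...while a minority (possibly malicious) do share it.
for _ in range(2):
    agent.observe("work", "health", was_shared=True)

print(agent.should_alert("work", "health"))  # True: inferred norm says don't share
```

    Because the inference is a simple majority signal, the two deviating observations above do not flip the inferred norm, mirroring property (i): norms can be inferred in the presence of non-compliant or malicious users.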

    Bait the hook to suit the phish, not the phisherman: A field experiment on security networks of teams to withstand spear phishing attacks on online social networks

    In this paper, we present research in progress on a field experiment conducted to observe the collective security behavior of teams targeted by a spear phishing attack on online social networks. To observe how security networks take shape within teams, fifteen honeypot profiles were created to send spear phishing messages, after an initial bonding period of eight weeks, to a target group of 76 people. The experiment simulated regular communication on online social networks among three teams of an international organization. The team members were engaged in personal, individual chats on an online social network and later confronted with an unexpected and unforeseen spear phishing message. As previous research has shown, many factors influence spear phishing susceptibility, but collective security behavior has so far been neglected. This work evaluates how security networks form, which factors shape those networks, and what efforts protect against spear phishing attacks.

    Capturing Users’ Privacy Expectations To Design Better Smart Car Applications

    Smart cars learn from gathered operating data to add value to the users’ driving experience and to increase security. Not only users benefit from these data-driven services; various actors in the associated ecosystem are also able to optimize their business models based on smart car related information. Continuous collection of data can defy users’ privacy expectations, which may lead to reluctant usage of, or even refusal to accept, services offered by smart car providers. This paper investigates users’ privacy expectations using a vignette study, in which participants judge variations of smart car applications that differ with respect to factors such as data transmission and the type of information transferred. We expect to identify application-dependent privacy expectations that ultimately yield insights on how to design smart car applications and associated business models that respect users’ privacy expectations.

    Multiparty Privacy in Social Media


    SHAPE: A Framework for Evaluating the Ethicality of Influence

    Agents often exert influence when interacting with humans and non-human agents. However, the ethical status of such influence is often unclear. In this paper, we present the SHAPE framework, which lists reasons why influence may be unethical. We draw on literature from descriptive and moral philosophy and connect it to machine learning to help guide ethical considerations when developing algorithms with potential influence. Lastly, we explore mechanisms for governing algorithmic systems that influence people, inspired by mechanisms used in journalism, human subject research, and advertising.
    Comment: An earlier version of this paper was accepted at EUMAS 202

    The Psychology of Privacy in the Digital Age

    Privacy is a psychological topic suffering from historical neglect – a neglect that is increasingly consequential in an era of social media connectedness, mass surveillance, and the permanence of our electronic footprint. Despite fundamental changes in the privacy landscape, social and personality psychology journals remain largely unrepresented in debates on the future of privacy. By contrast, in disciplines such as computer science and media and communication studies, which engage directly with socio-technical developments, interest in privacy has grown considerably. In our review of this interdisciplinary literature we suggest four domains of interest to psychologists: sensitivity to individual differences in privacy disposition; the claim that privacy is fundamentally based in social interactions; the claim that privacy is inherently contextual; and the suggestion that privacy is as much about psychological groups as it is about individuals. Moreover, we propose a framework to enable progression to more integrative models of the psychology of privacy in the digital age, and in particular suggest that a group- and social-relations-based approach to privacy is needed.

    'I make up a silly name': Understanding Children's Perception of Privacy Risks Online

    Children under 11 are often regarded as too young to comprehend the implications of online privacy. Perhaps as a result, little research has focused on younger children's risk recognition and coping. Such knowledge is, however, critical for designing effective safeguarding mechanisms for this age group. Through 12 focus group studies with 29 children aged 6-10 from UK schools, we examined how children described privacy risks related to their use of tablet computers and what information they used to identify threats. We found that children could identify and articulate certain privacy risks well, such as information oversharing or revealing real identities online; however, they were less aware of other risks, such as online tracking or game promotions. Our findings offer promising directions for supporting children's awareness of cyber risks and their ability to protect themselves online.
    Comment: 13 pages, 1 figure