
    Capturing Users’ Privacy Expectations To Design Better Smart Car Applications

    Smart cars learn from gathered operating data to add value to the users’ driving experience and increase security. Users are not the only ones who benefit from these data-driven services; various actors in the associated ecosystem can also optimize their business models based on smart-car-related information. Continuous collection of data can defy users’ privacy expectations, which may lead to reluctant usage or even refusal of the services offered by smart car providers. This paper investigates users’ privacy expectations using a vignette study, in which participants judge variations of smart car applications that differ with respect to factors such as data transmission and the type of information transferred. We expect to identify application-dependent privacy expectations that ultimately yield insights on how to design smart car applications and associated business models that respect users’ privacy expectations.
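    The abstract does not list the study’s factor levels; as a hedged illustration of the vignette technique itself, the sketch below generates a full factorial of vignette texts over the two named factors, with hypothetical levels (Python):

    from itertools import product

    # Hypothetical factor levels: the abstract names the factors "data
    # transmission" and "type of information transferred" but not their levels.
    factors = {
        "information_type": ["location", "driving behavior", "vehicle diagnostics"],
        "data_transmission": [
            "remains in the vehicle",
            "is transmitted to the manufacturer",
            "is shared with third-party service providers",
        ],
    }

    TEMPLATE = "A smart car application collects {information_type} data, which {data_transmission}."

    # Full factorial design: one vignette per combination of factor levels.
    vignettes = [
        TEMPLATE.format(**dict(zip(factors, levels)))
        for levels in product(*factors.values())
    ]

    for number, text in enumerate(vignettes, start=1):
        print(f"Vignette {number}: {text}")

    Crossing all levels yields nine vignettes here; comparing participants’ judgments across factor levels is what lets application-dependent expectations be separated from general privacy attitudes.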

    Your Privacy Is Your Friend's Privacy: Examining Interdependent Information Disclosure on Online Social Networks

    The highly interactive nature of interpersonal communication on online social networks (OSNs) impels us to think about privacy as a communal matter, with users' private information being revealed by not only their own voluntary disclosures, but also the activities of their social ties. The current privacy literature has identified two types of information disclosures in OSNs: self-disclosure, i.e., the disclosure of an OSN user's private information by him/herself; and co-disclosure, i.e., the disclosure of the user's private information by other users. Although co-disclosure has been increasingly identified as a new source of privacy threat inherent to the OSN context, few systematic attempts have been made to provide a framework for understanding the commonalities and distinctions between self- vs. co-disclosure, especially pertaining to different types of private information. To address this gap, this paper presents a data-driven study that builds upon an innovative measurement for quantifying the extent to which others' co-disclosure could lead to actual privacy harm. The results demonstrate the significant harm caused by co-disclosure and illustrate the differences between the identity elements revealed through self- and co-disclosure.

    Protecting Privacy on Social Media: Is Consumer Privacy Self-Management Sufficient?

    Among the existing solutions for protecting privacy on social media, a popular doctrine is privacy self-management, which asks users to directly control the sharing of their information through privacy settings. While most existing research focuses on whether a user makes informed and rational decisions on privacy settings, we address a novel yet important question: whether these settings are indeed effective in practice. Specifically, we conduct an observational study on the effect of the most prominent privacy setting on Twitter, the protected mode. Our results show that, even after setting an account to protected, real-world account owners still have private information continuously disclosed, mostly through tweets posted by the owner’s connections. This illustrates a fundamental limit of privacy self-management: its inability to control the peer-disclosure of private information by an individual’s friends. Our results also point to a potential remedy: a comparative study before vs. after an account became protected shows a substantial decrease in peer-disclosure in posts where other users proactively mention the protected user, but no significant change when other users are reacting to the protected user’s posts. In addition, peer-disclosure through explicit specification, such as the direct mentioning of a user’s location, decreases sharply, but no significant change occurs for implicit inference, such as the disclosure of a birthday through the date of a “happy birthday” message. The design implication is that online social networks should provide support alerting users to potential peer-disclosure through implicit inference, especially when a user is reacting to the activities of a user in protected mode.
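    The design implication above calls for alert support; as a hedged, minimal sketch (not the authors’ system), the Python code below shows how a platform might separate explicit specification from implicit inference when flagging potential peer-disclosure. The attribute names and patterns are hypothetical:

    import re

    # Explicit specification: the private attribute is stated outright,
    # e.g. directly mentioning a user's location.
    EXPLICIT_PATTERNS = {
        "location": re.compile(r"\bis (?:at|in) [A-Z][\w ]+"),
    }

    # Implicit inference: the attribute is never stated but can be inferred,
    # e.g. a "happy birthday" message reveals a birthday via the post date.
    IMPLICIT_PATTERNS = {
        "birthday": re.compile(r"\bhappy birthday\b", re.IGNORECASE),
    }

    def peer_disclosure_alerts(post_text):
        """Return warnings for potential peer-disclosure in one post."""
        alerts = []
        for attribute, pattern in EXPLICIT_PATTERNS.items():
            if pattern.search(post_text):
                alerts.append(f"explicit disclosure of {attribute}")
        for attribute, pattern in IMPLICIT_PATTERNS.items():
            if pattern.search(post_text):
                alerts.append(f"implicit disclosure of {attribute} (inferable from the post date)")
        return alerts

    print(peer_disclosure_alerts("Happy birthday @alice!"))
    # -> ['implicit disclosure of birthday (inferable from the post date)']
    print(peer_disclosure_alerts("@alice is at Central Park right now"))
    # -> ['explicit disclosure of location']

    Per the study’s finding, explicit cases are already deterred once an account is protected; it is the implicit ones that warrant proactive alerts of this kind.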

    A contextual approach to information privacy research

    In this position article, we synthesize various knowledge gaps in information privacy scholarship and propose a research agenda that promotes greater cross‐disciplinary collaboration within the iSchool community and beyond. We start by critically examining Westin’s conceptualization of information privacy and argue for a contextual approach that holds promise for overcoming some of Westin’s weaknesses. We then highlight three contextual considerations for studying privacy—digital networks, marginalized populations, and the global context—and close by discussing how these considerations advance privacy theorization and technology design.

    Measuring Individuals’ Concerns over Collective Privacy on Social Networking Sites

    With the rise of social networking sites (SNSs), individuals not only disclose personal information but also share private information concerning others online. While shared information is co-constructed by self and others, personal and collective privacy boundaries become blurred. Thus there is an increasing concern over information privacy beyond the individual level. Drawing on Communication Privacy Management theory, we conceptualize individuals’ concerns over collective privacy on SNSs along three distinctive dimensions—collective information access, control, and diffusion—and develop a scale of collective SNS privacy concern (SNSPC) through empirical validation. Structural model analyses confirm the three-dimensional structure of collective SNSPC and indicate perceived risk and propensity to value privacy as two antecedents. We discuss key findings, implications, and future research directions for theorizing and examining privacy as a collective issue.

    Turning Privacy Inside Out

    The problem of theorizing privacy moves on two levels, the first consisting of an inadequate conceptual vocabulary and the second consisting of an inadequate institutional grammar. Privacy rights are supposed to protect individual subjects, and so conventional ways of understanding privacy are subject-centered, but subject-centered approaches to theorizing privacy also wrestle with deeply embedded contradictions. And privacy’s most enduring institutional failure modes flow from its insistence on placing the individual and individualized control at the center. Strategies for rescuing privacy from irrelevance involve inverting both established ways of talking about privacy rights and established conventions for designing institutions to protect them. In terms of theory, turning privacy inside out entails focusing on the conditions that are needed to produce sufficiently private and privacy-valuing subjects. Institutionally, turning privacy inside out entails focusing on the design, production, and operational practices most likely to instantiate and preserve those conditions.