    Understanding Privacy Switching Behaviour on Twitter

    Changing a Twitter account's privacy setting between public and protected changes the visibility of past tweets. By inspecting the privacy setting of over 100K Twitter users over 3 months, we noticed that over 40% of those users change their privacy setting at least once, with around 16% changing it over 5 times. This motivated us to explore the reasons why people switch their privacy setting. We studied these switching phenomena quantitatively by comparing the tweeting behaviour of users when public vs protected, and qualitatively using two follow-up surveys (n=100, n=324) to understand potential reasoning behind the observed behaviours. Our quantitative analysis shows that users who switch privacy settings mention others and share hashtags more when their setting is public. Our surveys highlighted that users turn protected to share personal content and regulate boundaries, while they turn public to interact with others in ways prevented by being protected.
    Comment: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)

    Owning and Sharing: Privacy Perceptions of Smart Speaker Users

    Preserving Privacy as Social Responsibility in Online Social Networks

    Online social networks provide an environment for their users to share content with others, where the user who shares a content item is put in charge, generally ignoring others who might be affected by it. However, content shared by one user can very well violate the privacy of other users. To remedy this, ideally, all users who are related to a content item should get a say in how it is shared. Recent approaches advocate the use of agreement technologies to enable the stakeholders of a post to discuss its privacy configuration. This allows related individuals to express concerns so that various privacy violations are avoided up front. Existing techniques try to establish an agreement on a single post. However, most of the time, agreement should be established over multiple posts, such that a user can tolerate slight breaches of privacy in return for the right to share posts themselves in future interactions. As a result, users can help each other preserve their privacy, viewing this as their social responsibility. This article develops a reciprocity-based negotiation mechanism for reaching privacy agreements among users and introduces a negotiation architecture that combines semantic privacy rules with utility functions. We evaluate our approach over multiagent simulations with software agents that mimic users based on a user study.
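The reciprocity idea in the abstract — tolerating a small privacy breach now in exchange for the right to share later — can be sketched as a toy two-agent negotiation. This is a minimal illustrative sketch only: the `Agent` class, the `debt` bookkeeping, and the `harm`/`threshold` parameters are hypothetical simplifications, not the article's actual architecture or utility functions.

```python
# Toy sketch of reciprocity-based concession between two stakeholders of a post.
# Everything here (names, the scalar "harm" model, the credit mechanism) is an
# assumption for illustration, not the paper's semantic-rule architecture.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    # Reciprocity ledger: how much this agent owes each other agent
    # for concessions they granted in past negotiations.
    debt: dict = field(default_factory=dict)

    def decide(self, poster: "Agent", harm: float, threshold: float = 0.5) -> bool:
        """Called by a stakeholder whose privacy a post would harm.
        Concede (allow the post) if the harm is slight, or if this agent
        owes the poster enough reciprocity credit to cover it."""
        owed = self.debt.get(poster.name, 0.0)
        if harm <= threshold or owed >= harm:
            if harm > threshold:
                # Spend previously accumulated credit on this concession.
                self.debt[poster.name] = owed - harm
            # The poster now owes this stakeholder a concession in return.
            poster.debt[self.name] = poster.debt.get(self.name, 0.0) + harm
            return True
        return False

alice, bob = Agent("alice"), Agent("bob")
print(alice.decide(bob, harm=0.3))  # slight harm: alice concedes, bob owes her
print(bob.decide(alice, harm=0.7))  # harm exceeds what bob owes: he refuses
print(bob.decide(alice, harm=0.2))  # slight harm: bob concedes in return
```

Over repeated interactions the ledger pushes both agents toward balanced give-and-take, which is the intuition behind establishing agreement over multiple posts rather than negotiating each post in isolation.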