10 research outputs found

    Understanding Privacy Switching Behaviour on Twitter

    Changing a Twitter account's privacy setting between public and protected changes the visibility of past tweets. By inspecting the privacy settings of over 100K Twitter users over 3 months, we observed that over 40% of those users changed their setting at least once, with around 16% changing it more than 5 times. This motivated us to explore why people switch their privacy setting. We studied this switching behaviour quantitatively, by comparing how users tweet when public versus protected, and qualitatively, using two follow-up surveys (n=100, n=324) to understand the reasoning behind the observed behaviours. Our quantitative analysis shows that users who switch privacy settings mention others and share hashtags more when their setting is public. Our surveys highlight that users turn protected to share personal content and regulate boundaries, while they turn public to interact with others in ways that being protected prevents.
    Comment: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)

    Owning and Sharing: Privacy Perceptions of Smart Speaker Users


    Analysing privacy in online social media

    People share a wide variety of information on social media, including personal and sensitive information, without understanding the size of their audience, which can cause privacy complications. The networked nature of the platforms further exacerbates these complications: information can be shared beyond the owner's control. People also struggle to reach only their intended audience using the privacy settings the platforms provide. In this thesis, I analyse potential privacy violations caused by social media users and their networks, as well as the usage and understanding of privacy settings. I focus on Twitter, whose privacy setting is a simple binary between public and protected.

    The first part of my studies investigates personal information disclosures by networks through congratulatory messages. I analyse these messages and detect various types of life events, including relationships, illness, familial matters, and birthdays. I show that public replies are enough to infer the content of the original message, even if the event subject hides or deletes it. I then focus on birthdays, one of the most popular life events, where a disclosed date of birth has security implications in addition to privacy ones. I show that over 1K users have their date of birth exposed daily, and that 10% of these users have protected their tweets. I also show that users react positively to congratulatory messages even though these posts potentially disclose personal and sensitive information.

    In the second part of my thesis, I focus on privacy settings on Twitter. I quantify the usage patterns of privacy settings and investigate the reasons for changing them between public and protected in a mixed-method study. I show that there is a set of users who frequently use the privacy settings the platform provides, and that users turn protected to share personal content and regulate boundaries, while they turn public to interact with others in ways that being protected prevents.

    In the last stage of the thesis, I investigate how well users understand the information and tweet visibility of different account types by conducting a user survey. I show that users are aware of the visibility of their profile information and individual tweets, but the visibility of followed topics, lists, and interactions with protected accounts is confusing. Fewer than a third of the survey participants were aware that a reply by a public account to a protected account's tweet is publicly visible. Surprisingly, having a protected account did not lead to a better understanding of information or tweet visibility.

    Actual functionality and users' understanding of it should align so that users can take the right actions for the level of privacy protection they want in online social networks. I show that even with simple privacy settings, users have difficulty understanding the reach of their posts; the implications of interactions between users need to be clearly relayed. I give design suggestions to increase this awareness and to give users better tools to manage their boundaries. I conclude the thesis with general implications of the studies conducted and possible future directions.
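
    To make the reply-visibility finding concrete, here is a minimal sketch (my own illustration, not code from the thesis) of the rules involved: a protected account's tweets are visible only to its approved followers, while a public account's tweets, including its replies to protected accounts, are visible to everyone.

```python
# Minimal model of the Twitter visibility rules described in the abstract.
# The Account/Tweet structures are hypothetical; only the rules are sourced.
from dataclasses import dataclass, field


@dataclass
class Account:
    handle: str
    protected: bool = False
    followers: set[str] = field(default_factory=set)  # approved followers only


@dataclass
class Tweet:
    author: Account
    text: str
    reply_to: "Tweet | None" = None  # thread context, irrelevant to visibility

    def visible_to(self, viewer: str) -> bool:
        """Visibility depends only on the author's own setting, not on the
        thread: a public reply to a protected tweet stays publicly visible."""
        if not self.author.protected:
            return True
        return viewer == self.author.handle or viewer in self.author.followers


alice = Account("alice", protected=True, followers={"bob"})
bob = Account("bob", protected=False)

original = Tweet(alice, "visible to my followers only")
reply = Tweet(bob, "happy birthday!", reply_to=original)

assert not original.visible_to("carol")  # protected tweet hidden from non-followers
assert reply.visible_to("carol")         # but the public reply leaks the context
```

    The point the sketch encodes is exactly what confused survey participants: visibility follows the replying account's setting, so a public reply can expose the context of a protected conversation.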

    Preserving Privacy as Social Responsibility in Online Social Networks

    Online social networks provide an environment for their users to share content with others, where the user who shares a content item is put in charge, generally ignoring others who might be affected by it. However, content shared by one user can very well violate the privacy of other users. To remedy this, all users related to a piece of content should ideally get a say in how it is shared. Recent approaches advocate the use of agreement technologies that enable the stakeholders of a post to discuss its privacy configuration, letting related individuals express concerns so that privacy violations are avoided up front. Existing techniques try to establish an agreement on a single post. Most of the time, however, agreement should be established over multiple posts, so that a user can tolerate slight breaches of privacy in return for the right to share posts themselves in future interactions. As a result, users can help each other preserve their privacy, viewing this as their social responsibility. This article develops a reciprocity-based negotiation for reaching privacy agreements among users and introduces a negotiation architecture that combines semantic privacy rules with utility functions. We evaluate our approach in multiagent simulations with software agents that mimic users based on a user study.
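
    As a rough illustration of the reciprocity idea, the sketch below (my own simplification; the agent structure, credit ledger, and tolerance parameter are hypothetical stand-ins for the article's semantic privacy rules and utility functions) has a post owner propose audiences in order of preference, with the stakeholder conceding small utility losses and banking them as credit toward future negotiations.

```python
# Hypothetical sketch of reciprocity-based privacy negotiation between two
# agents; not the article's actual architecture, which uses semantic rules.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    # Utility this agent assigns to each candidate audience for a post.
    utility: dict[str, float] = field(default_factory=dict)
    # Net concessions this agent owes to others (the reciprocity ledger).
    credit: dict[str, float] = field(default_factory=dict)
    tolerance: float = 0.2  # max utility loss accepted without any credit


def negotiate(owner: Agent, stakeholder: Agent, audiences: list[str]) -> str:
    """The owner proposes audiences in order of its own preference; the
    stakeholder accepts once its utility loss fits within its tolerance
    plus whatever reciprocity credit it owes the owner."""
    best = max(audiences, key=lambda a: stakeholder.utility[a])
    for proposal in sorted(audiences, key=lambda a: owner.utility[a], reverse=True):
        loss = stakeholder.utility[best] - stakeholder.utility[proposal]
        owed = stakeholder.credit.get(owner.name, 0.0)
        if loss <= stakeholder.tolerance + owed:
            # Record the concession: the owner now owes the stakeholder.
            owner.credit[stakeholder.name] = owner.credit.get(stakeholder.name, 0.0) + loss
            stakeholder.credit[owner.name] = max(0.0, owed - loss)
            return proposal
    return "private"  # no agreement: fall back to the most restrictive option


alice = Agent("alice", utility={"public": 1.0, "friends": 0.6, "private": 0.1})
bob = Agent("bob", utility={"public": 0.2, "friends": 0.9, "private": 1.0})

print(negotiate(alice, bob, ["public", "friends", "private"]))  # -> "friends"
```

    An agent that concedes today accumulates credit it can spend to win a concession later, which is the reciprocal give-and-take over multiple posts that the article builds on.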
