5 research outputs found

    Quantifying the invisible audience in social networks

    When you share content in an online social network, who is listening? Users have scarce information about who actually sees their content, making their audience seem invisible and difficult to estimate. However, understanding this invisible audience can impact both science and design, since perceived audiences influence content production and self-presentation online. In this paper, we combine survey and large-scale log data to examine how well users’ perceptions of their audience match their actual audience on Facebook. We find that social media users consistently underestimate their audience size for their posts, guessing that their audience is just 27% of its true size. Qualitative coding of survey responses reveals folk theories that attempt to reverse-engineer audience size using feedback and friend count, though none of these approaches are particularly accurate. We analyze audience logs for 222,000 Facebook users’ posts over the course of one month and find that publicly visible signals (friend count, likes, and comments) vary widely and do not strongly indicate the audience of a single post. Despite the variation, users typically reach 61% of their friends each month. Together, our results begin to reveal the invisible undercurrents of audience attention and behavior in online social networks.
    Authored by Michael S. Bernstein, Eytan Bakshy, Moira Burke and Brian Karrer.
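    As a rough illustration of the perceived-versus-actual comparison described above, the sketch below computes the ratio of guessed to logged audience size over paired survey and log records. It is not the authors' analysis code; the data values are invented, and only the 27% figure comes from the abstract.

```python
# Hypothetical illustration of comparing survey-reported audience guesses
# with logged audience sizes (values are made up, not from the paper).
from statistics import median

paired_observations = [
    # (respondent's guessed audience size, audience size from logs)
    (20, 80),
    (15, 50),
    (40, 160),
    (10, 35),
]

ratios = [guess / actual for guess, actual in paired_observations]
print(f"Median perceived/actual audience ratio: {median(ratios):.2f}")
# The paper reports that guesses come to roughly 27% of the true audience;
# the toy numbers above are chosen only to show the computation.
```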

    Security and privacy in online social networking: Risk perceptions and precautionary behaviour

    A quantitative behavioural online study examined a set of hazards that correspond with the security and privacy settings of the major global online social network (Facebook). These settings concern access to a user's account and access to the user's shared information (both security), as well as regulation of the user's information-sharing and the user's regulation of others' information-sharing in relation to the user (both privacy). We measured 201 non-student UK users' perceptions of risk and other risk dimensions, and their precautionary behaviour. First, perceptions of risk and dread were highest, and precautionary behaviour was most common, for hazards related to users' regulation of information-sharing. Other hazards were perceived as less risky and less precaution was taken against them, even though they can lead to breaches of users' security or privacy. Second, consistent with existing theory, significant predictors of perceived risk were attitude towards sharing information on Facebook, dread, voluntariness, catastrophic potential and Internet experience; significant predictors of precautionary behaviour were perceived risk, control, voluntariness and Internet experience. Methodologically, the findings emphasise the need for non-aggregated analysis; practically, they point to interventions that promote safe online social-network use.
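    For readers unfamiliar with this kind of analysis, the sketch below shows one conventional way to regress perceived risk on the rated risk dimensions named above. It is an assumption-laden illustration, not the authors' code: the CSV file and column names are hypothetical.

```python
# Minimal sketch of a multiple regression predicting perceived risk from
# rated risk dimensions; the file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("risk_ratings.csv")  # assumed: one row per respondent and hazard

model = smf.ols(
    "perceived_risk ~ attitude + dread + voluntariness"
    " + catastrophic_potential + internet_experience",
    data=df,
).fit()
print(model.summary())  # coefficients, p-values, and R-squared
```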

    Practical, appropriate, empirically-validated guidelines for designing educational games

    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for doing so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. This paper is intended both to focus educational games designers on the features of games that are genuinely useful for education and to introduce a successful form of teaching with which this audience may not yet be familiar.

    Understanding and designing for interactional privacy needs within social networking sites

    "Interpersonal boundary regulation" is a way to optimize social interactions when sharing and connecting through Social Networking Sites (SNSs). The theoretical foundation of much of my research comes from Altman's work on privacy management in the physical world. Altman believed that "we should attempt to design responsive environments, which permit easy alternation between a state of separateness and a state of togetherness" (1975). In contrast, Mark Zuckerberg, Facebook's CEO, claims that sharing is the new "social norm" for Facebook's 800 million users (Facebook 2011), and it is Facebook's job to enable "frictionless sharing" (Matyszczyk 2010). My research focuses on reconciling this rift between social media sharing and privacy by examining interpersonal boundary regulation within SNSs as a means to align privacy needs with social networking goals. To do this, I performed an in-depth feature-oriented domain analysis (Kang, Cohen et al. 1990) across five popular SNS interfaces and 21 SNS user interviews to understand boundary mechanisms unique to these environments and their associated challenges. From this, I created a taxonomy of different interpersonal boundaries users manage within their SNSs, identified interface features that directly supported these boundary mechanisms, and uncovered coping behaviors for when interface features were inadequate or inappropriately leveraged. By better understanding this dynamic, we can begin to build new interfaces to help support and possibly even correct some of the maladaptive social behaviors exhibited within SNSs. Finally, I conducted two empirical studies that quantitatively validated some of the relationships in my theoretical model of the interpersonal boundary regulation process within SNSs. Specifically, I examined the role of risk awareness, feature awareness, burden, and desired privacy level on SNS privacy behaviors. I also examined the relationship between privacy outcomes and SNS goals of connecting and sharing with others. Through this research, I show that boundary regulation allows SNS users to reap the benefits of social networking while simultaneously protecting their privacy

    Investigating Obfuscation as a Tool to Enhance Photo Privacy on Social Networks Sites

    Photos, which contain rich visual information, can be a source of privacy issues. Privacy issues associated with photos include identification of people, inference attacks, location disclosure, and sensitive information leakage. However, photo privacy is often hard to achieve because the content in the photos is both what makes them valuable to viewers and what causes privacy concerns. Photo sharing often occurs via Social Network Sites (SNSs). Photo privacy is difficult to achieve via SNSs for two main reasons: first, SNSs seldom notify users of sensitive content in their photos that might cause privacy leakage; second, the recipient-control tools available on SNSs are not effective. The only solution that existing SNSs (e.g., Facebook, Flickr) provide is control over who receives a photo. This solution allows users to withhold the entire photo from certain viewers while sharing it with others. The idea is that if viewers cannot see a photo, then privacy risk is minimized. However, withholding or self-censoring photos is not always the solution people want. In some cases, people want to be able to share photos, or parts of photos, even when they have privacy concerns about them. To provide better online photo-privacy protection options for users, we leverage a behavioral theory of privacy that identifies and focuses on two key elements that influence privacy: information content and information recipient. This theory provides a vocabulary for discussing key aspects of privacy and helps us organize our research around these two parameters through a series of studies. In my thesis, I describe five studies I have conducted. First, I focus on the content parameter to identify what portions of an image are considered sensitive and are therefore candidates to be obscured to increase privacy. I provide a taxonomy of content sensitivity that can help designers of photo-privacy mechanisms understand what categories of content users consider sensitive. Then, focusing on the recipient parameter, I describe how elements of the taxonomy are associated with users' sharing preferences for different categories of recipients (e.g., colleagues vs. family members). Second, focusing on controlling photo content disclosure, I invented privacy-enhancing obfuscations, evaluated their effectiveness against human recognition, and studied how they affect the viewing experience. Third, after discovering that avatar and inpainting are two promising obfuscation methods, I studied whether they remain robust when de-identifying both familiar and unfamiliar people, since viewers are likely to know the people in SNS photos. Additionally, I quantified the prevalence of self-reported photo self-censorship and discovered that privacy-preserving obfuscations might be useful for combating it. Building on these studies, I proposed a privacy-enhanced photo-sharing interface that helps users identify potentially sensitive content and provides obfuscation options. To evaluate the interface, I compared the proposed obfuscation approach with two other approaches: a control condition that mimics the current Facebook photo-sharing interface, and an interface that provides a privacy warning about potentially sensitive content. The results show that our proposed system performs better than the other two in reducing perceived privacy risks, increasing willingness to share, and enhancing usability. Overall, our research will benefit privacy researchers, online social network designers, policymakers, computer vision researchers, and anyone who has or wants to share photos online.
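    To make the idea of region-level obfuscation concrete, here is a minimal sketch using Pillow. The thesis found avatar replacement and inpainting the most promising obfuscations; a Gaussian blur is used below only as a simple stand-in, and the file name and coordinates are hypothetical.

```python
# Illustrative region-level photo obfuscation (a simple blur stand-in for
# the avatar/inpainting methods studied in the thesis).
from PIL import Image, ImageFilter

def obscure_region(path, box, radius=12):
    """Blur one sensitive region of a photo and return the edited image.

    box is a (left, upper, right, lower) pixel rectangle.
    """
    img = Image.open(path).convert("RGB")
    blurred = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
    img.paste(blurred, box)
    return img

# Hypothetical usage: obscure a face-sized region before sharing.
# obscure_region("photo.jpg", (120, 60, 220, 180)).save("photo_obscured.jpg")
```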