
    Literature Overview - Privacy in Online Social Networks

    In recent years, Online Social Networks (OSNs) have become an important part of daily life for many. Users build explicit networks to represent their social relationships, either existing or new. Users also often upload and share a plethora of information related to their personal lives. The potential privacy risks of such behavior are often underestimated or ignored. For example, users often disclose personal information to a larger audience than intended. Users may even post information about others without their consent. A lack of experience and awareness among users, as well as a lack of proper tools and design in the OSNs themselves, perpetuates the situation. This paper aims to provide insight into such privacy issues by looking at OSNs, their associated privacy risks, and existing research into solutions. The final goal is to help identify research directions for the Kindred Spirits project.

    User's Privacy in Recommendation Systems Applying Online Social Network Data, A Survey and Taxonomy

    Recommender systems have become an integral part of many social networks and extract knowledge from a user's personal and sensitive data, both explicitly, with the user's knowledge, and implicitly. This trend has created major privacy concerns, as users are mostly unaware of what data is being used, how much of it, and how securely. In this context, several works have addressed privacy concerns around the use of online social network data and recommender systems. This paper surveys the main privacy concerns, measurements, and privacy-preserving techniques used in large-scale online social networks and recommender systems. It draws on prior work on security, privacy preservation, statistical modeling, and datasets to provide an overview of the technical difficulties and problems associated with privacy preservation in online social networks.
    Comment: 26 pages, IET book chapter on big data recommender systems.

    Investigating Obfuscation as a Tool to Enhance Photo Privacy on Social Networks Sites

    Photos, which contain rich visual information, can be a source of privacy issues. Privacy issues associated with photos include identification of people, inference attacks, location disclosure, and sensitive information leakage. However, photo privacy is often hard to achieve because the content of a photo is both what makes it valuable to viewers and what causes privacy concerns. Photo sharing often occurs via Social Network Sites (SNSs). Photo privacy is difficult to achieve via SNSs for two main reasons: first, SNSs seldom notify users of sensitive content in their photos that might cause privacy leakage; second, the recipient control tools available on SNSs are not effective. The only solution that existing SNSs (e.g., Facebook, Flickr) provide is control over who receives a photo. This solution allows users to withhold the entire photo from certain viewers while sharing it with others. The idea is that if viewers cannot see a photo, then privacy risk is minimized. However, withholding or self-censoring photos is not always the solution people want. In some cases, people want to share photos, or parts of photos, even when they have privacy concerns about them. To provide better online photo privacy protection options for users, we leverage a behavioral theory of privacy that identifies and focuses on two key elements that influence privacy -- information content and information recipient. This theory provides a vocabulary for discussing key aspects of privacy and helps us organize our research around these two parameters through a series of studies. In my thesis, I describe five studies I have conducted. First, I focus on the content parameter to identify what portions of an image are considered sensitive and are therefore candidates to be obscured to increase privacy.
    I provide a taxonomy of content sensitivity that can help designers of photo-privacy mechanisms understand what categories of content users consider sensitive. Then, focusing on the recipient parameter, I describe how elements of the taxonomy are associated with users' sharing preferences for different categories of recipients (e.g., colleagues vs. family members). Second, focusing on controlling photo content disclosure, I invented privacy-enhancing obfuscations, evaluated their effectiveness against human recognition, and studied how they affect the viewing experience. Third, after discovering that avatar and inpainting are two promising obfuscation methods, I studied whether they remain robust when de-identifying both familiar and unfamiliar people, since viewers are likely to know the people in OSN photos. Additionally, I quantified the prevalence of self-reported photo self-censorship and discovered that privacy-preserving obfuscations might be useful for combating it. Building on the knowledge gained from these studies, I proposed a privacy-enhanced photo-sharing interface that helps users identify potentially sensitive content and provides obfuscation options. To evaluate the interface, I compared the proposed obfuscation approach with two other approaches: a control condition that mimics the current Facebook photo-sharing interface, and an interface that provides a privacy warning about potentially sensitive content. The results show that our proposed system outperforms the other two in reducing perceived privacy risks, increasing willingness to share, and enhancing usability. Overall, our research will benefit privacy researchers, online social network designers, policymakers, computer vision researchers, and anyone who has shared or wants to share photos online.

    Privacy Intelligence: A Survey on Image Sharing on Online Social Networks

    Image sharing on online social networks (OSNs) has become an indispensable part of daily social activities, but it has also led to an increased risk of privacy invasion. The recent image leaks from popular OSN services and the abuse of personal photos using advanced algorithms (e.g., DeepFake) have prompted the public to rethink individual privacy needs when sharing images on OSNs. However, OSN image sharing itself is relatively complicated, and the systems currently in place to manage privacy in practice are labor-intensive yet fail to provide personalized, accurate, and flexible privacy protection. As a result, a more intelligent environment for privacy-friendly OSN image sharing is in demand. To fill the gap, we contribute a systematic survey of 'privacy intelligence' solutions that target modern privacy issues related to OSN image sharing. Specifically, we present a high-level analysis framework based on the entire lifecycle of OSN image sharing to address the various privacy issues and solutions facing this interdisciplinary field. The framework is divided into three main stages: local management, online management, and social experience. At each stage, we identify typical sharing-related user behaviors and the privacy issues they generate, and review representative intelligent solutions. The resulting analysis describes an intelligent privacy-enhancing chain for closed-loop privacy management. We also discuss the challenges and future directions at each stage, as well as in publicly available datasets.
    Comment: 32 pages, 9 figures. Under review.

    Mining social network data for personalisation and privacy concerns: A case study of Facebook’s Beacon

    The popular success of online social networking sites (SNS) such as Facebook makes them a hugely tempting resource for businesses engaged in personalised marketing through data mining. The use of personal information, willingly shared between online friends' networks, intuitively appears to be a natural extension of current advertising strategies such as word-of-mouth and viral marketing. However, the use of SNS data for personalised marketing has provoked outrage amongst SNS users and sharply highlighted the issue of privacy concern. This paper inverts the traditional approach to personalisation by conceptualising the limits of data mining in social networks using privacy concern as the guide. A qualitative investigation of 95 blogs containing 568 comments was collected during the failed launch of Beacon, a third-party marketing initiative by Facebook. Thematic analysis resulted in the development of a taxonomy of privacy concerns, which offers a concrete means for online businesses to better understand the SNS business landscape, especially with regard to the limits of the use and acceptance of personalised marketing in social networks.

    A User Study of a Wearable System to Enhance Bystanders’ Facial Privacy

    The privacy of users and their information is becoming increasingly important with the growth and pervasive use of mobile devices such as wearables, mobile phones, drones, and Internet of Things (IoT) devices. Many of these mobile devices are equipped with cameras that enable users to take pictures and record videos whenever they wish. In many such cases, bystanders' privacy is not a concern, and as a result, audio and video of bystanders are often captured without their consent. We present results from a user study in which 21 participants were asked to use a wearable system called FacePET, developed to enhance bystanders' facial privacy by providing a way for bystanders to protect their own privacy rather than relying on external systems for protection. While past work in the literature has focused on the privacy perceptions of bystanders photographed in public or shared spaces, there has been no research focused on user perceptions of bystander-based wearable devices for enhancing privacy. Thus, in this work, we focus on user perceptions of the FacePET device and similar wearables to enhance bystanders' facial privacy. In our study, we found that 16 participants would use FacePET or similar devices to enhance their facial privacy, and 17 participants agreed that if smart glasses had features to conceal users' identities, such devices would become more popular.

    Facebook Users' Attitudes towards Secondary Use of Personal Information

    This paper reports on a study of how user attitudes towards institutional privacy change after users are exposed to the potential inferences that can be made from information disclosed on Facebook. Two sets of focus group sessions were conducted with Facebook users. In the first set of three sessions, users were shown, at a general level, what can be inferred from posts using prototype software called DataBait. In the second set of three sessions, users experienced the potential inferences drawn from their own actual Facebook profiles using the DataBait tool. The findings suggest that participants' attitudes towards secondary use of information shifted from affective to cognitive when they were exposed to potential third-party inferences based on their own personal information. This observation calls for more research into online tools that allow users to manage, and dynamically educate themselves about, their own disclosure practices.