    Can Privacy-Aware Lifelogs Alter Our Memories?

    The abundance of automatically triggered lifelogging cameras is a privacy threat to bystanders. Countering this by deleting photos limits relevant memory cues and the informative content of lifelogs. An alternative is to obfuscate bystanders, but it is not clear how this impacts the lifelogger's recall of memories. We report on a study comparing the effects on memory recall of viewing 1) unaltered photos, 2) photos with blurred people, and 3) a subset of the photos that remains after deleting private ones. Findings show that obfuscated content helps users recall a large amount of content, but it also results in recalling less accurate details, which can sometimes mislead the user. Our work informs the design of privacy-aware lifelogging systems that maximize recall, and steers the discussion about ubiquitous technologies that could alter human memories.
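
    As a concrete illustration of the blurring condition, below is a minimal sketch of bystander obfuscation, assuming OpenCV's bundled Haar cascade face detector. Note the study blurred whole people, whereas this sketch blurs detected faces only as a simpler stand-in, and the file names are placeholders.

```python
import cv2

def blur_bystanders(in_path: str, out_path: str) -> None:
    """Blur every detected face in a lifelog photo (face-only stand-in
    for the whole-person blurring compared in the study)."""
    img = cv2.imread(in_path)
    if img is None:
        raise FileNotFoundError(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        roi = img[y:y + h, x:x + w]
        # Heavy Gaussian blur over the face region; kernel size must be odd.
        img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imwrite(out_path, img)

# Placeholder file names for illustration only.
blur_bystanders("lifelog_frame.jpg", "lifelog_frame_blurred.jpg")
```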

    DeepFakes for Privacy: Investigating Perceptions and Effectiveness of State-of-the-Art Privacy-Enhancing Face Obfuscation Methods

    There are many contexts in which a person’s face needs to be obfuscated for privacy, such as in social media posts. We present a user-centered analysis of the effectiveness of DeepFakes for obfuscation using synthetically generated faces, and compare it with state-of-the-art obfuscation methods: blurring, masking, pixelating, and replacement with avatars. For this, we conducted an online survey (N=110) and found that DeepFake obfuscation is a viable alternative to state-of-the-art obfuscation methods; it is as effective as masking and avatar obfuscation in concealing the identities of individuals in photos. At the same time, DeepFakes blend well with their surroundings and are as aesthetically pleasing as blurring and pixelating. We discuss how DeepFake obfuscation can enhance privacy protection without negatively impacting the photo’s aesthetics.
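
    For reference, here is a minimal sketch of two of the baseline obfuscation methods compared in the paper, pixelating and masking, applied to a face crop; the DeepFake and avatar replacements require generative models or artwork and are out of scope for a sketch. The random array stands in for a detected face region.

```python
import cv2
import numpy as np

def pixelate(face: np.ndarray, blocks: int = 8) -> np.ndarray:
    """Pixelate: shrink the face to a coarse blocks x blocks grid,
    then blow it back up with nearest-neighbour interpolation."""
    h, w = face.shape[:2]
    small = cv2.resize(face, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

def mask(face: np.ndarray) -> np.ndarray:
    """Mask: replace the face region with a solid black box."""
    return np.zeros_like(face)

# Dummy 128x128 BGR "face" as a placeholder for a detected face crop.
face = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
obfuscated = {"pixelated": pixelate(face), "masked": mask(face)}
```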

    StyleGAN as a Utility-Preserving Face De-identification Method

    Face de-identification methods have been proposed to preserve users' privacy by obscuring their faces. These methods, however, can degrade the quality of photos, and they usually do not preserve the utility of faces, i.e., their age, gender, pose, and facial expression. Recently, GANs such as StyleGAN have been proposed that generate realistic, high-quality imaginary faces. In this paper, we investigate the use of StyleGAN in generating de-identified faces through style mixing. We examined this de-identification method for preserving utility and privacy by implementing several face detection, verification, and identification attacks and conducting a user study. The results from our extensive experiments, human evaluation, and comparison with two state-of-the-art methods, i.e., CIAGAN and DeepPrivacy, show that StyleGAN performs on par or better than these methods, preserving users' privacy and images' utility. In particular, the results of the machine learning-based experiments show that StyleGAN0-4 preserves utility better than CIAGAN and DeepPrivacy while preserving privacy at the same level. StyleGAN0-3 preserves utility at the same level while providing more privacy. In this paper, for the first time, we also performed a carefully designed user study to examine both privacy- and utility-preserving properties of StyleGAN0-3, 0-4, and 0-5, as well as CIAGAN and DeepPrivacy, from the human observers' perspectives. Our statistical tests showed that participants tend to verify and identify StyleGAN0-5 images more easily than DeepPrivacy images. All the methods but StyleGAN0-5 had significantly lower identification rates than CIAGAN. Regarding utility, as expected, StyleGAN0-5 performed significantly better in preserving some attributes. Among all methods, on average, participants believed that gender was preserved the most, while naturalness was preserved the least.
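
    The abstract does not spell out the "StyleGAN0-k" notation; a plausible reading, assumed in the sketch below, is that style layers 0 through k of the original image's W+ latent are swapped for those of a random identity, while the remaining layers keep the original styles (age, pose, expression). The generator itself and the GAN inversion step are omitted; dimensions follow a typical StyleGAN2 face model.

```python
import numpy as np

NUM_LAYERS, W_DIM = 18, 512   # typical StyleGAN2 config for 1024x1024 faces

def style_mix(w_original: np.ndarray, w_random: np.ndarray,
              last_swapped_layer: int) -> np.ndarray:
    """Build a W+ latent for 'StyleGAN0-k' style mixing (assumed reading):
    coarse style layers 0..k come from a random identity, the remaining
    layers keep the original image's styles."""
    assert w_original.shape == (NUM_LAYERS, W_DIM)
    assert w_random.shape == (NUM_LAYERS, W_DIM)
    w_mixed = w_original.copy()
    w_mixed[: last_swapped_layer + 1] = w_random[: last_swapped_layer + 1]
    return w_mixed

# 'StyleGAN0-4' in the paper's notation: swap style layers 0 through 4.
w_orig = np.random.randn(NUM_LAYERS, W_DIM)  # in practice: from GAN inversion
w_rand = np.random.randn(NUM_LAYERS, W_DIM)  # in practice: mapping(z), z ~ N(0, I)
w_deid = style_mix(w_orig, w_rand, last_swapped_layer=4)
# w_deid would then be fed to the StyleGAN synthesis network (not shown).
```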

    Multiuser Privacy and Security Conflicts in the Cloud

    Collaborative cloud platforms make it easier and more convenient for multiple users to work together on files (GoogleDocs, Office365) and store and share them (Dropbox, OneDrive). However, this can lead to privacy and security conflicts between the users involved, for instance when a user adds someone to a shared folder or changes its permissions. Such multiuser conflicts (MCs), though known from the literature to occur, have not yet been studied in depth. In this paper, we report a study with 1,050 participants about MCs they experienced in the cloud. We show which MCs arise when multiple users work together in the cloud, how and why they arise, how prevalent and severe they are, what consequences they have for users, and how users work around them. We derive recommendations for designing mechanisms to help users avoid, mitigate, and resolve MCs in the cloud.
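
    To make the notion of an MC concrete, the following toy model (hypothetical, not from the paper) represents a shared folder and flags the other stakeholders affected when one user unilaterally grants, changes, or revokes access.

```python
from dataclasses import dataclass, field

@dataclass
class SharedFolder:
    owner: str
    members: dict = field(default_factory=dict)  # user -> "read" or "write"

    def change_access(self, actor: str, target: str, level) -> list:
        """Apply an access change and return the other stakeholders it
        affects -- candidates for the multiuser conflicts (MCs) the paper
        studies, where one user's action impacts everyone sharing the data."""
        stakeholders = [self.owner] + list(self.members)
        affected = [u for u in stakeholders if u not in (actor, target)]
        if level is None:
            self.members.pop(target, None)   # revoke access
        else:
            self.members[target] = level     # grant or change access
        return affected

folder = SharedFolder(owner="alice", members={"bob": "write", "carol": "read"})
# Bob unilaterally adds Dave: Alice and Carol share the folder's data
# but were never consulted -- a potential MC.
print(folder.change_access(actor="bob", target="dave", level="write"))
# ['alice', 'carol']
```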

    The cardboard box study: understanding collaborative data management in the connected home

    The home is a site marked by the increasing collection and use of personal data, whether online or from connected devices. This trend is accompanied by new data protection regulation and the development of privacy enhancing technologies (PETs) that seek to enable individual control over the processing of personal data. However, a great deal of the data generated within the connected home is interpersonal in nature and cannot therefore be attributed to an individual. The cardboard box study adapts the technology probe approach to explore with potential end users the salience of a PET called the Databox and to understand the challenge of collaborative rather than individual data management in the home. The cardboard box study was designed as an ideation card game and conducted with 22 households distributed around the UK, providing us with 38 participants. Demographically, our participants were of varying ages and had a variety of occupational backgrounds and differing household situations. The study makes it perspicuous that privacy is not a ubiquitous concern within the home, as a great deal of data is shared by default between people living together; that when privacy is occasioned, it performs a distinct social function concerned with human security and the safety and integrity of people rather than of devices and data; and that current ‘interdependent privacy’ solutions that seek to support collaborative data management are not well aligned with the ways access control is negotiated and managed within the home.

    Privacy Intelligence: A Survey on Image Sharing on Online Social Networks

    Image sharing on online social networks (OSNs) has become an indispensable part of daily social activities, but it has also led to an increased risk of privacy invasion. The recent image leaks from popular OSN services and the abuse of personal photos using advanced algorithms (e.g. DeepFake) have prompted the public to rethink individual privacy needs when sharing images on OSNs. However, OSN image sharing itself is relatively complicated, and systems currently in place to manage privacy in practice are labor-intensive yet fail to provide personalized, accurate and flexible privacy protection. As a result, a more intelligent environment for privacy-friendly OSN image sharing is in demand. To fill the gap, we contribute a systematic survey of 'privacy intelligence' solutions that target modern privacy issues related to OSN image sharing. Specifically, we present a high-level analysis framework based on the entire lifecycle of OSN image sharing to address the various privacy issues and solutions facing this interdisciplinary field. The framework is divided into three main stages: local management, online management and social experience. At each stage, we identify typical sharing-related user behaviors, the privacy issues generated by those behaviors, and review representative intelligent solutions. The resulting analysis describes an intelligent privacy-enhancing chain for closed-loop privacy management. We also discuss the challenges and future directions existing at each stage, as well as in publicly available datasets.
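
    As a rough illustration of the survey's three-stage analysis framework, the sketch below models the lifecycle stages and a behavior-issue-solution chain. The concrete entries are illustrative examples, not an enumeration taken from the survey.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    """The survey's three lifecycle stages for OSN image sharing."""
    LOCAL_MANAGEMENT = "local management"      # before upload: capture, editing
    ONLINE_MANAGEMENT = "online management"    # at upload: audience, permissions
    SOCIAL_EXPERIENCE = "social experience"    # after sharing: tags, reshares

@dataclass
class PrivacyConcern:
    stage: Stage
    behavior: str    # the sharing-related user behavior
    issue: str       # the privacy issue that behavior generates
    solution: str    # a representative intelligent solution

# Illustrative entries only (hypothetical examples, not from the survey).
chain = [
    PrivacyConcern(Stage.LOCAL_MANAGEMENT, "photo capture with bystanders",
                   "bystander exposure", "automatic face obfuscation"),
    PrivacyConcern(Stage.ONLINE_MANAGEMENT, "choosing a sharing audience",
                   "over-sharing", "learned per-photo policy suggestions"),
    PrivacyConcern(Stage.SOCIAL_EXPERIENCE, "being tagged by others",
                   "interdependent privacy loss", "co-owner conflict mediation"),
]
```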