
    Automated Privacy Protection for Mobile Device Users and Bystanders in Public Spaces

    As smartphones have gained popularity over recent years, they have provided users with convenient access to services and integrated sensors that were previously only available through larger, stationary computing devices. This trend of ubiquitous, mobile devices provides unparalleled convenience and productivity for users who wish to perform everyday actions such as taking photos, participating in social media, reading emails, or checking online banking transactions. However, the increasing use of mobile devices in public spaces has negative implications for users' own privacy and, in some cases, that of bystanders around them. Specifically, digital photography trends in public have negative implications for bystanders who can be captured inadvertently in users’ photos. Those who are captured often have no knowledge of being photographed and have no control over how photos of them are distributed. To address this growing issue, a novel system is proposed for protecting the privacy of bystanders captured in public photos. A fully automated approach to accurately distinguish the intended subjects from strangers is explored. A feature-based classification scheme utilizing entire photos is presented. Additionally, the privacy-minded case of only utilizing local face images, with no contextual information from the original image, is explored with a convolutional neural network-based classifier. Three methods of face anonymization are implemented and compared: black boxing, Gaussian blurring, and pose-tolerant face swapping. To validate these methods, a comprehensive user survey is conducted to understand the difference in viability between them. Beyond photography, the privacy of mobile device users can also be impacted in public spaces, as visual eavesdropping or “shoulder surfing” attacks on device screens become feasible. Malicious individuals can easily glean personal data from smartphone and mobile device screens while they are accessed visually.
In order to protect displayed user content, a novel, sensor-based visual eavesdropping detection scheme using integrated device cameras is proposed. To selectively obfuscate private content while an attacker is nearby, a dynamic scheme for detecting and hiding private content is also developed utilizing User-Interface-as-an-Image (UIaaI). A deep, convolutional object detection network is trained and utilized to identify sensitive content under this scheme. To allow users to customize the types of content to hide, dynamic training sample generation is introduced to retrain the content detection network with very few original UI samples. Web applications are also considered, with a Chrome browser extension that automates the detection and obfuscation of sensitive web page fields through HTML parsing and CSS injection.
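The black-boxing and Gaussian-blurring anonymization methods compared above can be illustrated with a minimal sketch. This is not the thesis's implementation: the grayscale nested-list image, the face-region coordinates, and the fixed 3x3 kernel are illustrative assumptions (a real system would operate on detected face boxes in an RGB photo).

```python
import math

def black_box(image, top, left, height, width):
    """Replace the face region with solid black (pixel value 0)."""
    out = [row[:] for row in image]
    for r in range(top, top + height):
        for c in range(left, left + width):
            out[r][c] = 0
    return out

def gaussian_blur(image, top, left, height, width, sigma=1.0):
    """Blur only the face region with a normalized 3x3 Gaussian kernel."""
    kernel = [[math.exp(-(dr * dr + dc * dc) / (2 * sigma * sigma))
               for dc in (-1, 0, 1)] for dr in (-1, 0, 1)]
    total = sum(sum(row) for row in kernel)
    kernel = [[v / total for v in row] for row in kernel]

    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(top, top + height):
        for c in range(left, left + width):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr = min(max(r + dr, 0), rows - 1)  # clamp at image borders
                    cc = min(max(c + dc, 0), cols - 1)
                    acc += kernel[dr + 1][dc + 1] * image[rr][cc]
            out[r][c] = acc
    return out
```

Black boxing destroys all information in the region, while blurring preserves coarse structure; the user survey in the abstract compares exactly this kind of trade-off.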
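The HTML-parsing-plus-CSS-injection idea behind the browser extension can be sketched in a few lines. The watch-list of field names and the blur rule are assumptions for illustration, not the actual extension's logic, and Python's standard `html.parser` stands in for the extension's in-browser DOM access.

```python
from html.parser import HTMLParser

SENSITIVE_NAMES = {"password", "ssn", "card-number"}  # assumed watch-list

class SensitiveFieldFinder(HTMLParser):
    """Collect ids of <input> fields that look sensitive."""
    def __init__(self):
        super().__init__()
        self.sensitive_ids = []

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        a = dict(attrs)
        if a.get("name") in SENSITIVE_NAMES or a.get("type") == "password":
            if "id" in a:
                self.sensitive_ids.append(a["id"])

def obfuscation_css(ids):
    """One CSS rule per sensitive field, to inject while an onlooker is near."""
    return "\n".join(f"#{i} {{ filter: blur(8px); }}" for i in ids)

page = """
<form>
  <input id="user" name="username" type="text">
  <input id="pw" name="password" type="password">
</form>
"""
finder = SensitiveFieldFinder()
finder.feed(page)
print(obfuscation_css(finder.sensitive_ids))  # → #pw { filter: blur(8px); }
```

Injecting (and later removing) such a stylesheet lets the page stay interactive while the sensitive fields are unreadable to a shoulder surfer.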

    Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained "Hard Faces"

    Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% in Average Precision.
    Comment: 11 pages, 7 figures, to be presented at CRV 201
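The "atrous" (dilated) convolution trick mentioned in the abstract can be shown in one dimension: the same 3-tap filter covers a wider receptive field as the dilation rate grows, with no extra parameters. The signal and filter values below are illustrative, not from the paper.

```python
def atrous_conv1d(signal, kernel, rate):
    """Valid-mode 1-D convolution with dilation `rate` between kernel taps."""
    span = (len(kernel) - 1) * rate  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[k] * signal[i + k * rate]
                       for k in range(len(kernel))))
    return out

# rate=1 is ordinary convolution; rate=2 reads every other sample,
# doubling the receptive field of the same 3-tap kernel.
print(atrous_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], 1))  # → [6, 9, 12, 15]
print(atrous_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], 2))  # → [9, 12]
```

Running the three parallel MP-RPN paths at different effective receptive fields is what lets a single network propose both very small and very large faces.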

    Context-aware person identification in personal photo collections

    Identifying the people in photos is an important need for users of photo management systems. We present MediAssist, one such system, which facilitates browsing, searching and semi-automatic annotation of personal photos, using analysis of both image content and the context in which the photo is captured. This semi-automatic annotation includes annotation of the identity of people in photos. In this paper, we focus on such person annotation, and propose person identification techniques based on a combination of context and content. We propose language modelling and nearest neighbour approaches to context-based person identification, in addition to novel face colour and image colour content-based features (used alongside face recognition and body patch features). We conduct a comprehensive empirical study of these techniques using the real private photo collections of a number of users, and show that combining context- and content-based analysis improves performance over content or context alone.
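The context-plus-content fusion this abstract describes can be sketched as a weighted score combination: each candidate identity gets a content score (e.g. face similarity) and a context score (e.g. co-occurrence with the photo's time and place), and a weighted sum picks the winner. The scores, names, and weight below are illustrative assumptions, not MediAssist's actual model.

```python
def identify(content_scores, context_scores, alpha=0.6):
    """Return the identity maximizing alpha*content + (1-alpha)*context."""
    fused = {name: alpha * content_scores[name]
                   + (1 - alpha) * context_scores.get(name, 0.0)
             for name in content_scores}
    return max(fused, key=fused.get)

# Face recognition alone slightly prefers "bob", but "alice" is far more
# likely given the photo's context (time, location, past co-occurrence).
content = {"alice": 0.50, "bob": 0.55}
context = {"alice": 0.90, "bob": 0.10}
print(identify(content, context))             # → alice (fusion)
print(identify(content, context, alpha=1.0))  # → bob (content only)
```

This mirrors the paper's finding: the combined score can overturn an ambiguous content-only decision when context is informative.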

    Human-Machine CRFs for Identifying Bottlenecks in Holistic Scene Understanding

    Recent trends in image understanding have pushed for holistic scene understanding models that jointly reason about various tasks such as object detection, scene recognition, shape analysis, contextual reasoning, and local appearance based classifiers. In this work, we are interested in understanding the roles of these different tasks in improved scene understanding, in particular semantic segmentation, object detection and scene recognition. Towards this goal, we "plug in" human subjects for each of the various components in a state-of-the-art conditional random field model. Comparisons among various hybrid human-machine CRFs give us indications of how much "head room" there is to improve scene understanding by focusing research efforts on various individual tasks.
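The "plug-in" methodology can be illustrated with a toy simulation: model each pipeline component as a noisy predictor, replace one with a perfect (human) oracle, and measure the accuracy gain, i.e. the head room that component offers. The components, label set, accuracies, and fusion rule below are illustrative assumptions, not the paper's CRF.

```python
import random

random.seed(0)  # deterministic toy experiment

# Ground-truth labels for 1,000 image regions.
truth = [random.choice(["road", "sky", "car"]) for _ in range(1000)]

def noisy_component(labels, accuracy):
    """Simulate a component that recovers the true label with given accuracy."""
    classes = ["road", "sky", "car"]
    return [t if random.random() < accuracy else random.choice(classes)
            for t in labels]

def pipeline_accuracy(segmenter_acc, detector_acc):
    """Final accuracy under a crude fusion: both components must be right."""
    seg = noisy_component(truth, segmenter_acc)
    det = noisy_component(truth, detector_acc)
    correct = sum(1 for t, s, d in zip(truth, seg, det) if s == t and d == t)
    return correct / len(truth)

all_machine = pipeline_accuracy(0.7, 0.8)
human_segmenter = pipeline_accuracy(1.0, 0.8)  # "plug in" a human segmenter
headroom = human_segmenter - all_machine       # gain from perfecting that part
```

Repeating this swap for each component ranks them by head room, which is the comparison the hybrid human-machine CRFs perform on real models.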