Deep Feature-based Face Detection on Mobile Devices
We propose a deep feature-based face detector for mobile devices to detect
a user's face acquired by the front-facing camera. The proposed method is able to
detect faces in images containing extreme pose and illumination variations as
well as partial faces. The main challenge in developing deep feature-based
algorithms for mobile devices is the constrained nature of the mobile platform
and the non-availability of CUDA enabled GPUs on such devices. Our
implementation takes into account the special nature of the images captured by
the front-facing camera of mobile devices and exploits the GPUs present in
mobile devices without CUDA-based frameworks to meet these challenges.
Comment: ISBA 201
Partial Face Detection and Illumination Estimation
Face analysis has long been a crucial component of many security applications. In this work, we propose and explore face analysis algorithms applicable to two different security problems, namely Active Authentication and Image Tampering Detection. In the first section, we propose two algorithms, “Deep Feature based Face Detection for Mobile Devices” and “DeepSegFace”, that are useful in detecting partial faces such as those seen in typical Active Authentication scenarios. In the second section, we propose an algorithm to detect discrepancies in illumination conditions given two face images, and use that as an indication to decide whether an image has been tampered with by transplanting faces. We also extend the illumination detection algorithm by proposing an adversarial data augmentation scheme. We show the efficacy of the proposed algorithms by evaluating them on multiple datasets.
Automated Privacy Protection for Mobile Device Users and Bystanders in Public Spaces
As smartphones have gained popularity over recent years, they have provided users convenient access to services and integrated sensors that were previously only available through larger, stationary computing devices. This trend of ubiquitous, mobile devices provides unparalleled convenience and productivity for users who wish to perform everyday actions such as taking photos, participating in social media, reading emails, or checking online banking transactions. However, the increasing use of mobile devices in public spaces by users has negative implications for their own privacy and, in some cases, that of bystanders around them.
Specifically, digital photography trends in public have negative implications for bystanders who can be captured inadvertently in users’ photos. Those who are captured often have no knowledge of being photographed and have no control over how photos of them are distributed. To address this growing issue, a novel system is proposed for protecting the privacy of bystanders captured in public photos. A fully automated approach to accurately distinguish the intended subjects from strangers is explored. A feature-based classification scheme utilizing entire photos is presented. Additionally, the privacy-minded case of only utilizing local face images with no contextual information from the original image is explored with a convolutional neural network-based classifier. Three methods of face anonymization are implemented and compared: black boxing, Gaussian blurring, and pose-tolerant face swapping. To validate these methods, a comprehensive user survey is conducted to understand the difference in viability between them.
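Two of the anonymization baselines compared above, black boxing and Gaussian blurring, can be sketched concisely. The following is a minimal illustration, not the thesis implementation: it assumes a face bounding box has already been detected, uses illustrative function names, and omits the pose-tolerant face-swapping method entirely.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def black_box(image, box):
    """Anonymize by overwriting the face region with black pixels.
    box = (top, left, height, width); image is an H x W x C array."""
    t, l, h, w = box
    out = image.copy()
    out[t:t + h, l:l + w] = 0
    return out

def gaussian_blur_face(image, box, sigma=8.0):
    """Anonymize by heavily blurring only the face region."""
    t, l, h, w = box
    out = image.copy()
    face = out[t:t + h, l:l + w].astype(float)
    # Blur each color channel of the cropped face independently.
    blurred = np.stack(
        [gaussian_filter(face[..., c], sigma=sigma) for c in range(face.shape[-1])],
        axis=-1,
    )
    out[t:t + h, l:l + w] = blurred.astype(image.dtype)
    return out
```

In practice, a large sigma (relative to the face size) is needed for blurring to resist recognition, which is one reason such methods are compared against each other in the user survey.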
Beyond photographing, the privacy of mobile device users can sometimes be impacted in public spaces, as visual eavesdropping or “shoulder surfing” attacks on device screens become feasible. Malicious individuals can easily glean personal data from smartphone and mobile device screens while they are accessed visually. In order to protect displayed user content, a novel, sensor-based visual eavesdropping detection scheme using integrated device cameras is proposed. In order to selectively obfuscate private content while an attacker is nearby, a dynamic scheme for detecting and hiding private content is also developed utilizing User-Interface-as-an-Image (UIaaI). A deep, convolutional object detection network is trained and utilized to identify sensitive content under this scheme. To allow users to customize the types of content to hide, dynamic training sample generation is introduced to retrain the content detection network with very few original UI samples. Web applications are also considered with a Chrome browser extension which automates the detection and obfuscation of sensitive web page fields through HTML parsing and CSS injection.
Active User Authentication for Smartphones: A Challenge Data Set and Benchmark Results
In this paper, automated user verification techniques for smartphones are
investigated. A unique non-commercial dataset, the University of Maryland
Active Authentication Dataset 02 (UMDAA-02) for multi-modal user authentication
research is introduced. This paper focuses on three sensors - front camera,
touch sensor and location service while providing a general description for
other modalities. Benchmark results for face detection, face verification,
touch-based user identification and location-based next-place prediction are
presented, which indicate that more robust methods fine-tuned to the mobile
platform are needed to achieve satisfactory verification accuracy. The dataset
will be made available to the research community for promoting additional
research.
Comment: 8 pages, 12 figures, 6 tables. Best poster award at BTAS 201
Active Authentication using an Autoencoder regularized CNN-based One-Class Classifier
Active authentication refers to the process in which users are unobtrusively
monitored and authenticated continuously throughout their interactions with
mobile devices. Generally, an active authentication problem is modelled as a
one class classification problem due to the unavailability of data from the
impostor users. Normally, the enrolled user is considered as the target class
(genuine) and the unauthorized users are considered as unknown classes
(impostor). We propose a convolutional neural network (CNN) based approach for
one class classification in which zero-centered Gaussian noise and an
autoencoder are used to model the pseudo-negative class and to regularize the
network to learn meaningful feature representations for one class data,
respectively. The overall network is trained using a combination of the
cross-entropy and the reconstruction error losses. A key feature of the
proposed approach is that any pre-trained CNN can be used as the base network
for one class classification. Effectiveness of the proposed framework is
demonstrated using three publicly available face-based active authentication
datasets and it is shown that the proposed method achieves superior performance
compared to the traditional one class classification methods. The source code
is available at: github.com/otkupjnoz/oc-acnn.
Comment: Accepted and to appear at AFGR 201
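The training objective described in this abstract, cross-entropy against a zero-centered Gaussian pseudo-negative class plus an autoencoder reconstruction term, can be sketched as follows. This is a simplified NumPy illustration under stated assumptions, not the authors' released code (see their repository for that): `classify` stands in for the one-class classification head, `recon_feats` for the autoencoder's reconstruction of the target features, and the weighting `lam` is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_loss(target_feats, recon_feats, classify, sigma=0.1, lam=1.0):
    """One-class training loss: binary cross-entropy on (target vs. Gaussian
    pseudo-negative) features plus autoencoder reconstruction error.
    `classify` maps a feature batch to probabilities of the target class."""
    # Zero-centered Gaussian noise serves as the pseudo-negative class,
    # standing in for the unavailable impostor data.
    pseudo_neg = rng.normal(0.0, sigma, size=target_feats.shape)
    feats = np.concatenate([target_feats, pseudo_neg])
    labels = np.concatenate([np.ones(len(target_feats)),
                             np.zeros(len(pseudo_neg))])
    p = np.clip(classify(feats), 1e-7, 1 - 1e-7)
    ce = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    # Reconstruction term regularizes the features of the one (target) class.
    recon = np.mean((target_feats - recon_feats) ** 2)
    return ce + lam * recon
```

In the paper's setting the features would come from a pre-trained CNN base network; here any feature array works, which reflects the abstract's point that the approach is base-network agnostic.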