Unconstrained Face Verification using Deep CNN Features
In this paper, we present an algorithm for unconstrained face verification
based on deep convolutional features and evaluate it on the newly released
IARPA Janus Benchmark A (IJB-A) dataset. The IJB-A dataset contains real-world
unconstrained faces of 500 subjects with full pose and illumination
variations, making it much harder than the traditional Labeled Faces in the
Wild (LFW) and YouTube Faces (YTF) datasets. The deep convolutional neural
network (DCNN) is trained on the CASIA-WebFace dataset. Extensive experiments
on the IJB-A dataset are provided.
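Face verification with deep features typically reduces to comparing two embedding vectors against a similarity threshold. The following is a minimal sketch of that comparison step, not the paper's actual pipeline; the toy feature vectors and the threshold value are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(feat_a, feat_b, threshold=0.5):
    """Declare 'same identity' when similarity exceeds the threshold.
    The threshold here is an arbitrary illustrative choice."""
    return cosine_similarity(feat_a, feat_b) >= threshold

# Toy 4-D "deep features" standing in for DCNN embeddings of two face images
f1 = np.array([0.2, 0.8, 0.1, 0.4])
f2 = np.array([0.25, 0.75, 0.05, 0.45])
print(verify(f1, f2))  # nearly parallel vectors -> True
```

In practice the threshold is tuned on a validation set to trade off true- and false-positive rates, which is exactly what benchmark protocols such as IJB-A measure.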
Deep Learning Face Representation by Joint Identification-Verification
The key challenge of face recognition is to develop effective feature
representations that reduce intra-personal variations while enlarging
inter-personal differences. In this paper, we show that this can be well
addressed with deep learning, using both face identification and verification
signals as supervision. The Deep IDentification-verification features
(DeepID2) are learned with carefully designed deep convolutional networks. The
face identification task increases inter-personal variations by drawing
DeepID2 features extracted from different identities apart, while the face
verification task reduces intra-personal variations by pulling DeepID2
features extracted from the same identity together; both are essential to face
recognition. The learned DeepID2 features generalize well to new identities
unseen in the training data. On the challenging LFW dataset, 99.15% face
verification accuracy is achieved. Compared with the best previous deep
learning result on LFW, the error rate is reduced by 67%.
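The two supervisory signals described above can be sketched as a softmax cross-entropy term (identification) plus a contrastive term (verification). This is a simplified illustration of the idea, not the DeepID2 training code; the margin, the weighting factor `lam`, and the toy inputs are assumptions.

```python
import numpy as np

def identification_loss(logits, label):
    """Softmax cross-entropy: encourages features of different
    identities to be separable (inter-personal variation up)."""
    z = logits - logits.max()                     # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def verification_loss(f_i, f_j, same_identity, margin=1.0):
    """Contrastive term: pulls same-identity features together and
    pushes different-identity features at least `margin` apart."""
    d = np.linalg.norm(f_i - f_j)
    if same_identity:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

def joint_loss(logits, label, f_i, f_j, same_identity, lam=0.05):
    """Weighted sum of the identification and verification signals;
    `lam` balances the two and is a hypothetical value here."""
    return identification_loss(logits, label) + \
        lam * verification_loss(f_i, f_j, same_identity)

# Toy example: 3-class logits and a pair of 2-D features
logits = np.array([2.0, 0.5, 0.1])
loss = joint_loss(logits, 0, np.array([1.0, 2.0]), np.array([1.1, 1.9]), True)
```

Identical features from the same identity contribute zero verification loss, while well-separated features from different identities also contribute zero, which is the pull-together / push-apart behavior the abstract describes.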
CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping
With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), massive amounts of private and sensitive information are stored on these devices. To prevent unauthorized access, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), yet users may still suffer from various attacks, such as password theft, shoulder surfing, smudge attacks, and forged-biometrics attacks. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system that leverages unique cardiac biometrics extracted from the built-in cameras readily available in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in a fingertip pressed against the built-in camera. To mitigate the impact of varying ambient lighting conditions and human movements in practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is deployed to derive user-specific cardiac features, and a feature transformation scheme grounded in Principal Component Analysis (PCA) is developed to enhance the robustness of the cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects demonstrate that CardioCam achieves effective and reliable user verification, with over 99% average true positive rate (TPR) while keeping the false positive rate (FPR) as low as 4%
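The PCA-based feature transformation mentioned above can be sketched as projecting each cardiac feature vector onto the top principal components of the training data. This is a generic PCA sketch under assumed dimensions (25 subjects, 8 morphological features), not CardioCam's actual scheme.

```python
import numpy as np

def pca_transform(features, n_components=2):
    """Project feature vectors onto their top principal components.
    Returns the projected data, the mean, and the component matrix."""
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered data: rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return centered @ components.T, mean, components

# Hypothetical data: 25 subjects x 8 morphological cardiac features
rng = np.random.default_rng(0)
feats = rng.normal(size=(25, 8))
projected, mean, components = pca_transform(feats, n_components=2)
print(projected.shape)  # (25, 2)
```

Projecting onto the dominant components discards low-variance directions, which tend to carry measurement noise (e.g., from lighting or motion), so the retained features are more stable across capture sessions.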