Finger Vein Recognition Based on (2D)2 PCA and Metric Learning
Finger vein recognition is a promising biometric recognition technology that verifies identity via the vein patterns in the fingers. In this paper, (2D)2 PCA is applied to extract features of finger veins, on the basis of which a new recognition method is proposed in conjunction with metric learning. It learns a KNN classifier for each individual, unlike traditional methods that employ a fixed threshold for all individuals. In addition, the SMOTE technique is adopted to address the class-imbalance problem. Our experiments show that the proposed method is effective, achieving a recognition rate of 99.17%.
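To make the feature-extraction step concrete, here is a minimal sketch of (2D)^2 PCA on synthetic data. Only the bilateral projection is shown; the per-individual KNN classifiers, metric learning, and SMOTE stages from the abstract are omitted, and the `two_d_squared_pca` helper, array shapes, and dimensions are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def two_d_squared_pca(images, k_row=4, k_col=4):
    """(2D)^2 PCA: project each image matrix from both sides.

    images: (M, m, n) stack of image matrices.
    Returns (M, k_row, k_col) features plus the two projection bases.
    """
    M, m, n = images.shape
    centered = images - images.mean(axis=0)
    # Column-direction scatter matrix (classic 2DPCA), n x n
    G = sum(A.T @ A for A in centered) / M
    # Row-direction scatter matrix (alternative 2DPCA), m x m
    H = sum(A @ A.T for A in centered) / M
    # eigh returns eigenvalues in ascending order; keep the leading eigenvectors
    _, X = np.linalg.eigh(G)
    _, Z = np.linalg.eigh(H)
    X = X[:, ::-1][:, :k_col]   # n x k_col column basis
    Z = Z[:, ::-1][:, :k_row]   # m x k_row row basis
    # Bilateral projection Z^T A X compresses rows and columns at once
    features = np.stack([Z.T @ A @ X for A in centered])
    return features, X, Z

rng = np.random.default_rng(0)
imgs = rng.random((50, 24, 16))   # 50 synthetic 24x16 "vein images"
feats, X, Z = two_d_squared_pca(imgs)
```

A per-individual classifier would then operate on the compact `feats` matrices (e.g. flattened to vectors), with learned distances between them replacing the single fixed threshold of traditional matching.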
MaLP: Manipulation Localization Using a Proactive Scheme
Advancements in the generation quality of various Generative Models (GMs) have
made it necessary to not only perform binary manipulation detection but also
localize the modified pixels in an image. However, prior works termed as
passive for manipulation localization exhibit poor generalization performance
over unseen GMs and attribute modifications. To combat this issue, we propose a
proactive scheme for manipulation localization, termed MaLP. We encrypt the
real images by adding a learned template. If the image is manipulated by any
GM, this added protection from the template not only aids binary detection but
also helps in identifying the pixels modified by the GM. The template is
learned by leveraging local and global-level features estimated by a two-branch
architecture. We show that MaLP performs better than prior passive works. We
also show the generalizability of MaLP by testing on 22 different GMs,
providing a benchmark for future research on manipulation localization.
Finally, we show that MaLP can be used as a discriminator for improving the
generation quality of GMs. Our models/codes are available at
www.github.com/vishal3477/pro_loc.
Comment: Published at Conference on Computer Vision and Pattern Recognition 202
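To see why an added template enables localization, consider the toy numpy sketch below. It is our illustration, not the paper's two-branch architecture: a random pattern stands in for the learned template, a GM "edit" is simulated by overwriting one block, and block-wise correlation with the template flags the edited region. The template strength is exaggerated so the toy separates cleanly; in MaLP the template is learned to remain imperceptible, and the unprotected image is estimated at test time rather than known.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
strength = 0.3   # exaggerated for the toy; a learned template stays imperceptible
template = strength * rng.standard_normal((H, W))  # stand-in for the learned template

real = rng.random((H, W))
protected = real + template   # "encrypt" the real image with the template

# Simulate a GM manipulation: one 16x16 block is rewritten, wiping the template.
manipulated = protected.copy()
manipulated[16:32, 16:32] = rng.random((16, 16))

# Localization idea: wherever the image is untouched, the residual still
# carries the template; correlate block-wise and flag low-scoring blocks.
residual = manipulated - real   # at test time, 'real' would be estimated
corr = residual * template
k = 16
score = np.add.reduceat(np.add.reduceat(corr, np.arange(0, H, k), axis=0),
                        np.arange(0, W, k), axis=1)
mask = score < 0.5 * np.median(score)  # manipulated blocks lose the template signal
```

The same correlation signal that localizes edits also supports binary detection: an image whose every block scores low was never protected or was wholly regenerated.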
Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images
State-of-the-art (SOTA) Generative Models (GMs) can synthesize
photo-realistic images that are hard for humans to distinguish from genuine
photos. We propose to perform reverse engineering of GMs to infer the model
hyperparameters from the images generated by these models. We define a novel
problem, "model parsing", as estimating GM network architectures and training
loss functions by examining their generated images -- a task seemingly
impossible for human beings. To tackle this problem, we propose a framework
with two components: a Fingerprint Estimation Network (FEN), which estimates a
GM fingerprint from a generated image by training with four constraints to
encourage the fingerprint to have desired properties, and a Parsing Network
(PN), which predicts network architecture and loss functions from the estimated
fingerprints. To evaluate our approach, we collect a fake image dataset with
K images generated by GMs. Extensive experiments show encouraging
results in parsing the hyperparameters of the unseen models. Finally, our
fingerprint estimation can be leveraged for deepfake detection and image
attribution, as we show by reporting SOTA results on both the recent Celeb-DF
and image attribution benchmarks.
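As a rough illustration of the fingerprint idea, the sketch below (ours, not the paper's method) uses a fixed Laplacian high-pass residual as a stand-in fingerprint and nearest-centroid matching of its row spectrum to attribute an image to one of two synthetic "GMs", each stamping a distinct periodic trace. The actual FEN learns its filter under four constraints, and the PN regresses architecture and loss hyperparameters rather than performing simple attribution; the trace amplitudes here are exaggerated for clarity.

```python
import numpy as np

def highpass_fingerprint(img):
    """Crude stand-in for the FEN: a Laplacian residual, on the intuition
    that GM fingerprints live in high spatial frequencies."""
    pad = np.pad(img, 1, mode='edge')
    return (4 * img - pad[:-2, 1:-1] - pad[2:, 1:-1]
                    - pad[1:-1, :-2] - pad[1:-1, 2:])

def spectrum(img):
    """Mean row-wise magnitude spectrum of the fingerprint."""
    return np.abs(np.fft.rfft(highpass_fingerprint(img), axis=1)).mean(axis=0)

rng = np.random.default_rng(1)
H = W = 32
cols = np.arange(W)
# Two toy "GMs", each leaving a distinct (exaggerated) periodic trace.
gm_a = lambda x: x + 0.5 * np.cos(np.pi * cols)        # period-2 trace
gm_b = lambda x: x + 0.5 * np.cos(np.pi * cols / 2)    # period-4 trace

# Fingerprint centroids estimated from each GM's outputs.
cent_a = np.mean([spectrum(gm_a(rng.random((H, W)))) for _ in range(20)], axis=0)
cent_b = np.mean([spectrum(gm_b(rng.random((H, W)))) for _ in range(20)], axis=0)

# Attribute an unseen image to its source GM by nearest centroid.
s = spectrum(gm_a(rng.random((H, W))))
pred = 'A' if np.linalg.norm(s - cent_a) < np.linalg.norm(s - cent_b) else 'B'
```

Extending attribution to "model parsing" would mean regressing from such fingerprints to architecture and loss hyperparameters instead of to a source label, which is what the Parsing Network does.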