Open-set Face Recognition with Neural Ensemble, Maximal Entropy Loss and Feature Augmentation
Open-set face recognition refers to a scenario in which biometric systems have incomplete knowledge of all existing subjects. They are therefore expected to prevent face samples of unregistered subjects from being identified as previously enrolled identities. This watchlist context adds a demanding requirement: irrelevant faces must be dismissed so that the system focuses on subjects of interest. In response, this work introduces a novel method that associates an ensemble of compact neural networks with a margin-based cost function that exploits additional samples. Supplementary negative samples can be obtained from external databases or synthesized at the representation level at training time with a new mix-up feature augmentation approach. Deep neural networks pre-trained on large face datasets serve as the preliminary feature extraction module. Experiments on the well-known LFW and IJB-C datasets show that the approach boosts both closed- and open-set identification rates.
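The mix-up feature augmentation mentioned above can be sketched as follows. This is a minimal illustration, assuming the standard mix-up recipe of convexly combining deep features of two different identities with Beta-distributed coefficients; the function name and `alpha` parameter are ours, not from the paper.

```python
import numpy as np

def mixup_negative_features(feats_a, feats_b, alpha=0.2, rng=None):
    """Synthesize negative samples at the representation level by
    convexly combining deep features of two different identities.

    feats_a, feats_b: (batch, dim) arrays of deep feature vectors.
    Mixing coefficients are drawn per sample from Beta(alpha, alpha),
    as in standard mix-up (assumed form).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha, size=(feats_a.shape[0], 1))
    # Each synthetic feature lies on the segment between the two inputs.
    return lam * feats_a + (1.0 - lam) * feats_b
```

Because the result is a convex combination, every synthetic feature stays inside the segment spanned by its two parent representations; such points belong to no enrolled identity and can serve as extra negatives during training.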
Open-set face recognition with maximal entropy and Objectosphere loss
Open-set face recognition characterizes a scenario where unknown individuals, unseen during the training and enrollment stages, appear at operation time. This work concentrates on watchlists, an open-set task that is expected to operate at a low false-positive identification rate and generally includes only a few enrollment samples per identity. We introduce a compact adapter network that benefits from additional negative face images when combined with distinct cost functions, such as the Objectosphere Loss (OS) and the proposed Maximal Entropy Loss (MEL). MEL modifies the traditional cross-entropy loss to increase the entropy for negative samples and attaches a penalty to known target classes to encourage gallery specialization. The proposed approach adopts deep neural networks (DNNs) pre-trained for face recognition as feature extractors. The adapter network then takes deep feature representations and acts as a substitute for the output layer of the pre-trained DNN, enabling agile domain adaptation. Promising results have been achieved following open-set protocols on three datasets (LFW, IJB-C, and UCCS), as well as state-of-the-art performance when supplementary negative data is properly selected to fine-tune the adapter network.
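A minimal sketch of an MEL-style objective, under stated assumptions: negative samples (labeled -1 here) are trained against a uniform target, which is equivalent to maximizing softmax entropy, and the penalty on known target classes is assumed to take an additive-margin form applied to the target logit. The exact formulation is in the paper; the names and the `margin` value are illustrative only.

```python
import numpy as np

def maximal_entropy_loss(logits, label, margin=0.35):
    """Sketch of a Maximal-Entropy-style loss for one sample.

    logits: unnormalized class scores over the known (gallery) classes.
    label:  index of the known class, or -1 for a negative sample.
    """
    z = np.asarray(logits, dtype=float).copy()
    if label >= 0:
        # Assumed additive-margin penalty on the known target class,
        # pushing the model toward gallery specialization.
        z[label] -= margin
        z -= z.max()                       # numerical stability
        log_p = z - np.log(np.exp(z).sum())
        return -log_p[label]               # penalized cross-entropy
    # Negative sample: cross-entropy against the uniform distribution.
    # This is minimized when the softmax is uniform, i.e. at maximal entropy.
    z -= z.max()
    log_p = z - np.log(np.exp(z).sum())
    return -log_p.mean()
```

For a negative sample, the loss equals log(n_classes) plus the KL divergence from the uniform distribution, so it is smallest when the network is maximally uncertain about unknown faces.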