Multi-view Face Detection Using Deep Convolutional Neural Networks
In this paper we consider the problem of multi-view face detection. While
there has been significant research on this problem, current state-of-the-art
approaches for this task require annotation of facial landmarks, e.g. TSM [25],
or annotation of face poses [28, 22]. They also require training dozens of
models to fully capture faces in all orientations, e.g. 22 models in HeadHunter
method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method
that does not require pose/landmark annotation and is able to detect faces in a
wide range of orientations using a single model based on deep convolutional
neural networks. The proposed method has minimal complexity; unlike other
recent deep learning object detection methods [9], it does not require
additional components such as segmentation, bounding-box regression, or SVM
classifiers. Furthermore, we analyzed scores of the proposed face detector for
faces in different orientations and found that 1) the proposed method is able
to detect faces from different angles and can handle occlusion to some extent,
2) there seems to be a correlation between dis- tribution of positive examples
in the training set and scores of the proposed face detector. The latter
suggests that the proposed methods performance can be further improved by using
better sampling strategies and more sophisticated data augmentation techniques.
Evaluations on popular face detection benchmark datasets show that our
single-model face detector algorithm has similar or better performance compared
to the previous methods, which are more complex and require annotations of
either different poses or facial landmarks.
Comment: in International Conference on Multimedia Retrieval 2015 (ICMR)
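The single-model approach described above amounts to scoring densely sampled windows with one classifier and suppressing overlapping detections, with no per-pose models or bounding-box regression. A minimal sliding-window sketch of that pipeline (the `score_fn` stub, window size, stride, and thresholds are illustrative assumptions, not the paper's actual network or settings):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.3):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop heavily overlapping rivals, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[[iou(boxes[i], boxes[j]) < thresh for j in rest]]
    return keep

def detect(image, score_fn, win=32, stride=8, thresh=0.5):
    """Score one fixed-size window at every grid position with a single
    scoring model; multi-scale detection would rescale the image, not
    add extra per-pose models."""
    H, W = image.shape[:2]
    boxes, scores = [], []
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            s = score_fn(image[y:y + win, x:x + win])
            if s >= thresh:
                boxes.append((x, y, x + win, y + win))
                scores.append(s)
    keep = nms(np.array(boxes), np.array(scores))
    return [boxes[i] for i in keep]
```

In the paper's setting `score_fn` would be a deep convolutional network evaluated fully convolutionally rather than crop by crop; the sliding grid plus NMS structure is the same.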
Accelerated face detector training using the PSL framework
We train a face detection system using the PSL framework [1] which combines the AdaBoost
learning algorithm and Haar-like features. We demonstrate the ability of this framework to
overcome some of the challenges inherent in training classifiers that are structured in cascades
of boosted ensembles (CoBE). The PSL classifiers are compared to Viola-Jones-style
cascaded classifiers. We establish the ability of the PSL framework to produce
classifiers in a complex domain in a significantly reduced time frame. They also
comprise fewer boosted ensembles, albeit at the price of increased false detection
rates on our test dataset. We also report on results from a more diverse set of
experiments carried out on the PSL framework in order to shed more insight into
the effects of variations in its adjustable training parameters.
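The AdaBoost core that cascade frameworks like PSL build on can be sketched with 1-D threshold stumps standing in for Haar-like feature responses. This is an illustrative toy of the boosting step only, not the PSL implementation or a cascade:

```python
import numpy as np

def train_adaboost(X, y, rounds=5):
    """AdaBoost with 1-D threshold stumps as weak learners (a stand-in
    for thresholded Haar-like feature responses). Labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # example weights, updated each round
    ensemble = []
    for _ in range(rounds):
        best = None
        # exhaustive search for the stump with lowest weighted error
        for j in range(d):
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, sign)
        err, j, t, sign = best
        err = max(err, 1e-10)            # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = sign * np.where(X[:, j] >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, t, sign))
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)
```

A cascade of boosted ensembles (CoBE) chains several such classifiers, each rejecting easy negatives early; the training-time cost of that chaining is what the PSL framework targets.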
Selective Refinement Network for High Performance Face Detection
High performance face detection remains a very challenging problem,
especially when many tiny faces are present. This paper presents a novel
single-shot face detector, named Selective Refinement Network (SRN), which
introduces novel two-step classification and regression operations selectively
into an anchor-based face detector to reduce false positives and improve
location accuracy simultaneously. In particular, the SRN consists of two
modules: the Selective Two-step Classification (STC) module and the Selective
Two-step Regression (STR) module. The STC aims to filter out most simple
negative anchors from low level detection layers to reduce the search space for
the subsequent classifier, while the STR is designed to coarsely adjust the
locations and sizes of anchors from high level detection layers to provide
better initialization for the subsequent regressor. Moreover, we design a
Receptive Field Enhancement (RFE) block to provide more diverse receptive
fields, which helps to better capture faces in some extreme poses. As a
consequence, the proposed SRN detector achieves state-of-the-art performance on
all the widely used face detection benchmarks, including AFW, PASCAL face,
FDDB, and WIDER FACE datasets. Code will be released to facilitate further
studies on the face detection problem.
Comment: The first two authors have equal contributions. Corresponding author: Shifeng Zhang ([email protected])
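The two selective modules can be illustrated with a toy NumPy sketch: STC as score-based pruning of easy negative anchors, and STR as applying the standard (dx, dy, dw, dh) box-delta parameterization common to anchor-based detectors. Function names, the threshold value, and the delta encoding are assumptions for illustration, not the released SRN code:

```python
import numpy as np

def stc_filter(anchors, first_scores, thresh=0.01):
    """Selective Two-step Classification, sketched: drop anchors whose
    first-pass objectness score falls below `thresh`, so the second-stage
    classifier only sees the surviving (harder) candidates."""
    keep = first_scores >= thresh
    return anchors[keep], first_scores[keep]

def str_refine(anchors, deltas):
    """Selective Two-step Regression, sketched: apply coarse first-stage
    offsets (dx, dy, dw, dh) to anchors given as (cx, cy, w, h), giving
    the second-stage regressor a better-initialized starting box."""
    cx = anchors[:, 0] + deltas[:, 0] * anchors[:, 2]
    cy = anchors[:, 1] + deltas[:, 1] * anchors[:, 3]
    w = anchors[:, 2] * np.exp(deltas[:, 2])
    h = anchors[:, 3] * np.exp(deltas[:, 3])
    return np.stack([cx, cy, w, h], axis=1)
```

In SRN itself, STC is applied selectively on the low-level detection layers (where negative anchors dominate) and STR on the high-level layers (where anchor initialization matters most); the sketch above only shows the per-anchor operations.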