CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection
Robust face detection in the wild is an essential component of many facial
analysis tasks, such as unconstrained face recognition, periocular
recognition, facial landmarking and pose estimation, facial expression
recognition, and 3D facial model construction. Although the face detection
problem has been intensely studied for decades and has numerous commercial
applications, it still fails in some real-world scenarios because of
challenges such as heavy facial occlusion, extremely low resolution, strong
illumination, extreme pose variation, and image or video compression
artifacts. In this paper, we present a face detection approach named
Contextual Multi-Scale Region-based Convolutional Neural Network (CMS-RCNN)
to robustly address these problems. Like other region-based CNNs, the
proposed network consists of a region proposal component and a
region-of-interest (RoI) detection component. Beyond that baseline, however,
our network makes two main contributions that are key to its state-of-the-art
face detection performance. First, multi-scale information is aggregated in
both the region proposal and RoI detection stages to deal with tiny face
regions. Second, the network performs explicit contextual reasoning about the
body, inspired by the human vision system. The proposed approach is
benchmarked on two recent, challenging face detection databases: the WIDER
FACE dataset, which contains a high degree of variability, and the Face
Detection Dataset and Benchmark (FDDB). The experimental results show that
our approach, trained on the WIDER FACE dataset, outperforms strong baselines
on WIDER FACE by a large margin and consistently achieves competitive results
on FDDB against recent state-of-the-art face detection methods.
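The multi-scale grouping idea above can be illustrated with a small sketch: pool the same candidate face box from feature maps at several strides and concatenate the pooled results, so a tiny face still receives fine-grained features from the earlier, higher-resolution maps. The map shapes, strides, and box below are illustrative placeholders, not the paper's actual network.

```python
import numpy as np

def max_pool_to(region, out):
    """Max-pool a (C, H, W) region into a fixed (C, out, out) grid."""
    c, h, w = region.shape
    pooled = np.empty((c, out, out))
    for i in range(out):
        y0 = i * h // out
        y1 = min(max((i + 1) * h // out, y0 + 1), h)
        for j in range(out):
            x0 = j * w // out
            x1 = min(max((j + 1) * w // out, x0 + 1), w)
            pooled[:, i, j] = region[:, y0:y1, x0:x1].max(axis=(1, 2))
    return pooled

def multi_scale_roi_features(feature_maps, strides, box, out=3):
    """Pool one face box from each feature map, concatenate along channels."""
    feats = []
    for fm, s in zip(feature_maps, strides):
        bx0, by0, bx1, by1 = box
        x0, y0 = bx0 // s, by0 // s
        x1, y1 = max(bx1 // s, x0 + 1), max(by1 // s, y0 + 1)
        feats.append(max_pool_to(fm[:, y0:y1, x0:x1], out))
    return np.concatenate(feats, axis=0).ravel()

rng = np.random.default_rng(0)
maps = [rng.normal(size=(8, 16, 16)),   # stride-4 map of a 64x64 image
        rng.normal(size=(16, 8, 8)),    # stride-8 map
        rng.normal(size=(32, 4, 4))]    # stride-16 map
# A small 12x12-pixel face box still yields a fixed-length descriptor.
vec = multi_scale_roi_features(maps, [4, 8, 16], box=(8, 8, 20, 20))
```

Note that at the coarsest stride the box covers barely one cell, which is exactly why features from the finer maps are grouped in as well.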
Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition
Two approaches are proposed for cross-pose face recognition, one is based on
the 3D reconstruction of facial components and the other is based on the deep
Convolutional Neural Network (CNN). Unlike most 3D approaches that consider
holistic faces, the proposed approach considers 3D facial components. It
segments a 2D gallery face into components, reconstructs the 3D surface for
each component, and recognizes a probe face by component features. The
segmentation is based on the landmarks located by a hierarchical algorithm that
combines the Faster R-CNN for face detection and the Reduced Tree Structured
Model for landmark localization. The core part of the CNN-based approach is a
revised VGG network. We study performance under different training-set
settings, including synthesized data from the 3D reconstruction, real-life
data from an in-the-wild database, and both types of data combined. We also
investigate the performance of the network when it is employed as a
classifier or as a feature extractor. The two recognition approaches and the
fast landmark localization are evaluated in extensive experiments and
compared to state-of-the-art methods to demonstrate their efficacy.
Comment: 14 pages, 12 figures, 4 tables
Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition
This paper presents a self-supervised method for visual detection of the
active speaker in a multi-person spoken interaction scenario. Active speaker
detection is a fundamental prerequisite for any artificial cognitive system
attempting to acquire language in social settings. The proposed method is
intended to complement the acoustic detection of the active speaker, thus
improving the system robustness in noisy conditions. The method can detect an
arbitrary number of possibly overlapping active speakers based exclusively on
visual information about their faces. Furthermore, the method does not rely
on external annotations, in keeping with the constraints of cognitive
development. Instead, the
method uses information from the auditory modality to support learning in the
visual domain. This paper reports an extensive evaluation of the proposed
method using a large multi-person face-to-face interaction dataset. The results
show good performance in a speaker dependent setting. However, in a speaker
independent setting the proposed method yields a significantly lower
performance. We believe that the proposed method represents an essential
component of any artificial cognitive system or robotic platform engaging in
social interactions.
Comment: 10 pages, IEEE Transactions on Cognitive and Developmental Systems
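The self-supervision described above can be sketched as follows: the acoustic module's (noisy) voice-activity decisions serve as pseudo-labels for training a visual classifier, so no human annotation is required. The feature vectors, the "speaking" generative rule, and the 10% acoustic error rate below are all invented placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy per-face visual features (standing in for e.g. mouth-region motion).
n, d = 500, 6
visual = rng.normal(size=(n, d))
speaking = visual[:, 0] > 0.3            # ground truth, unseen by the model

# Pseudo-labels from the acoustic module, corrupted with 10% errors.
flip = rng.random(n) < 0.1
pseudo = np.where(flip, ~speaking, speaking).astype(float)

# Train a logistic classifier on visual features using audio pseudo-labels.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(visual @ w + b)))   # predicted probability
    g = (p - pseudo) / n                          # cross-entropy gradient
    w -= 2.0 * (visual.T @ g)
    b -= 2.0 * g.sum()

pred = (visual @ w + b) > 0
acc = (pred == speaking).mean()          # agreement with the true labels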
Face Prediction Model for an Automatic Age-invariant Face Recognition System
Automated face recognition and identification softwares are becoming part of
our daily life; it finds its abode not only with Facebook's auto photo tagging,
Apple's iPhoto, Google's Picasa, Microsoft's Kinect, but also in Homeland
Security Department's dedicated biometric face detection systems. Most of these
automatic face identification systems fail where the effects of aging come into
the picture. Little work exists in the literature on the subject of face
prediction that accounts for aging, which is a vital part of the computer face
recognition systems. In recent years, individual face components' (e.g. eyes,
nose, mouth) features based matching algorithms have emerged, but these
approaches are still not efficient. Therefore, in this work we describe a Face
Prediction Model (FPM), which predicts human face aging or growth related image
variation using Principle Component Analysis (PCA) and Artificial Neural
Network (ANN) learning techniques. The FPM captures the facial changes, which
occur with human aging and predicts the facial image with a few years of gap
with an acceptable accuracy of face matching from 76 to 86%.Comment: 3 pages, 2 figure
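The PCA half of the pipeline above can be sketched in a few lines: project faces onto a small eigenface basis, learn a mapping from young-face coefficients to aged-face coefficients, and reconstruct pixels from the predicted coefficients. The random "faces", the component count, and the least-squares linear map (a stand-in for the paper's ANN) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for aligned grayscale face image pairs.
n, d, k = 50, 16 * 16, 10                # samples, pixels, PCA components
young = rng.normal(size=(n, d))
aged = young + 0.1 * rng.normal(size=(n, d))   # fake "aging" for illustration

# PCA: top-k principal components ("eigenfaces") of the young set.
mean = young.mean(axis=0)
_, _, Vt = np.linalg.svd(young - mean, full_matrices=False)
basis = Vt[:k]
coeff_y = (young - mean) @ basis.T       # each face as k coefficients
coeff_a = (aged - mean) @ basis.T

# Predictor in PCA space: the paper trains an ANN here; a least-squares
# linear map is used purely as a stand-in for the learned mapping.
W, *_ = np.linalg.lstsq(coeff_y, coeff_a, rcond=None)

def predict_aged(face):
    c = (face - mean) @ basis.T          # project into PCA space
    return mean + (c @ W) @ basis        # map coefficients, reconstruct

pred = predict_aged(young[0])
```

Working in the k-dimensional coefficient space rather than on raw pixels is what keeps the learned aging map small enough to train from few examples.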
Component-based Face Detection in Colour Images
Abstract: Face detection is an important process in many applications such as face recognition, person identification and tracking, and access control. The technique used for face detection depends on how a face is modelled. In this paper, a face is defined as a skin region and a lips region that meet certain geometrical criteria. Thus, the face detection system has three main components: a skin detection module, a lips detection module, and a face verification module. Multi-layer perceptron (MLP) neural networks were used for the skin and lips detection modules. To test the face detection system, two databases were created. The images in the first database, called In-house, were taken under a controlled environment, while those in the second database, called WWW, were collected from the World Wide Web and as such have no restriction on lighting, head pose, or background. The system achieved correct detection rates of 87 and 80 percent on the In-house and WWW databases respectively.
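An MLP pixel classifier of the kind used in the skin detection module can be sketched as a single-hidden-layer network trained by gradient descent on labelled RGB samples. The synthetic colour clusters, hidden-layer size, and learning rate below are illustrative choices, not the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy RGB pixel data standing in for labelled skin / non-skin samples.
skin = rng.normal([0.8, 0.55, 0.45], 0.05, size=(200, 3))
non_skin = rng.uniform(0.0, 1.0, size=(200, 3))
X = np.vstack([skin, non_skin])
y = np.concatenate([np.ones(200), np.zeros(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-hidden-layer MLP trained with plain gradient descent.
h, lr = 8, 0.5
W1 = rng.normal(0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)

for _ in range(2000):
    A1 = sigmoid(X @ W1 + b1)                  # hidden activations
    out = sigmoid(A1 @ W2 + b2).ravel()        # skin probability per pixel
    grad_out = (out - y) / len(y)              # cross-entropy gradient
    gW2 = A1.T @ grad_out[:, None]
    gb2 = grad_out.sum(keepdims=True)
    back = grad_out[:, None] @ W2.T * A1 * (1 - A1)   # backprop to layer 1
    W1 -= lr * (X.T @ back); b1 -= lr * back.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2

def is_skin(rgb):
    """Classify RGB rows (values in [0, 1]) as skin (True) or not."""
    return sigmoid(sigmoid(rgb @ W1 + b1) @ W2 + b2).ravel() > 0.5
```

In a full system the same classifier is run per pixel and the resulting skin mask is passed on to the geometric face verification stage.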