Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition
Two approaches are proposed for cross-pose face recognition, one is based on
the 3D reconstruction of facial components and the other is based on the deep
Convolutional Neural Network (CNN). Unlike most 3D approaches that consider
holistic faces, the proposed approach considers 3D facial components. It
segments a 2D gallery face into components, reconstructs the 3D surface for
each component, and recognizes a probe face by component features. The
segmentation is based on the landmarks located by a hierarchical algorithm that
combines the Faster R-CNN for face detection and the Reduced Tree Structured
Model for landmark localization. The core part of the CNN-based approach is a
revised VGG network. We study the performances with different settings on the
training set, including the synthesized data from 3D reconstruction, the
real-life data from an in-the-wild database, and both types of data combined.
We investigate the performances of the network when it is employed as a
classifier or designed as a feature extractor. The two recognition approaches
and the fast landmark localization are evaluated in extensive experiments, and
compared to state-of-the-art methods to demonstrate their efficacy. Comment: 14 pages, 12 figures, 4 tables
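The component-based recognition idea above (match a probe to a gallery face by comparing per-component features rather than a single holistic descriptor) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the component names and the use of averaged cosine similarity are assumptions for the example.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def component_match_score(gallery, probe):
    """Score a probe face against a gallery face component-by-component.

    gallery, probe: dicts mapping a component name (e.g. "eyes", "nose";
    hypothetical labels) to that component's feature vector. The combined
    score here is a simple average of per-component cosine similarities.
    """
    scores = [cosine(gallery[c], probe[c]) for c in gallery]
    return sum(scores) / len(scores)
```

Comparing per component lets an occluded or badly reconstructed region degrade only its own term rather than the whole descriptor.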
Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions
Head-pose estimation has many applications, such as social event analysis,
human-robot and human-computer interaction, driving assistance, and so forth.
Head-pose estimation is challenging because it must cope with changing
illumination conditions, variabilities in face orientation and in appearance,
partial occlusions of facial landmarks, as well as bounding-box-to-face
alignment errors. We propose tu use a mixture of linear regressions with
partially-latent output. This regression method learns to map high-dimensional
feature vectors (extracted from bounding boxes of faces) onto the joint space
of head-pose angles and bounding-box shifts, such that they are robustly
predicted in the presence of unobservable phenomena. We describe in detail the
mapping method that combines the merits of unsupervised manifold learning
techniques and of mixtures of regressions. We validate our method with three
publicly available datasets and we thoroughly benchmark four variants of the
proposed algorithm with several state-of-the-art head-pose estimation methods. Comment: 12 pages, 5 figures, 3 tables
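The prediction side of a mixture of linear regressions can be sketched as below: each mixture component is an affine expert, and a query is mapped to the responsibility-weighted combination of the experts' outputs. This is an illustrative Gaussian-gated variant only; the paper's partially-latent output and its training procedure are not reproduced here, and all parameter names are assumptions.

```python
import numpy as np

def mixture_regression_predict(x, weights, means, covs, As, bs):
    """Predict y from x with K Gaussian-gated affine experts.

    weights[k], means[k], covs[k]: mixing weight and Gaussian parameters
    of component k over the input space; As[k], bs[k]: the k-th affine
    map. The output is sum_k r_k(x) * (As[k] @ x + bs[k]), where r_k is
    the posterior responsibility of component k for x.
    """
    K, d = len(weights), x.shape[0]
    resp = np.empty(K)
    for k in range(K):
        diff = x - means[k]
        inv = np.linalg.inv(covs[k])
        logdet = np.linalg.slogdet(covs[k])[1]
        # Log-density of x under Gaussian component k.
        logp = -0.5 * (diff @ inv @ diff + logdet + d * np.log(2 * np.pi))
        resp[k] = weights[k] * np.exp(logp)
    resp /= resp.sum()
    return sum(resp[k] * (As[k] @ x + bs[k]) for k in range(K))
```

With the pose angles and bounding-box shifts stacked in the output vector, one such model can predict both jointly, which is the spirit of mapping features onto the joint pose/shift space.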
Multi-Modal Classifiers for Open-Vocabulary Object Detection
The goal of this paper is open-vocabulary object detection (OVOD)
– building a model that can detect objects beyond the set of
categories seen at training, thus enabling the user to specify categories of
interest at inference without the need for model retraining. We adopt a
standard two-stage object detector architecture, and explore three ways for
specifying novel categories: via language descriptions, via image exemplars, or
via a combination of the two. We make three contributions: first, we prompt a
large language model (LLM) to generate informative language descriptions for
object classes, and construct powerful text-based classifiers; second, we
employ a visual aggregator on image exemplars that can ingest any number of
images as input, forming vision-based classifiers; and third, we provide a
simple method to fuse information from language descriptions and image
exemplars, yielding a multi-modal classifier. When evaluating on the
challenging LVIS open-vocabulary benchmark we demonstrate that: (i) our
text-based classifiers outperform all previous OVOD works; (ii) our
vision-based classifiers perform as well as text-based classifiers in prior
work; (iii) using multi-modal classifiers perform better than either modality
alone; and finally, (iv) our text-based and multi-modal classifiers yield
better performance than a fully-supervised detector.Comment: ICML 2023, project page:
https://www.robots.ox.ac.uk/vgg/research/mm-ovod
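A common way to realize the fusion step described above is to build each classifier as a unit vector in a shared embedding space and combine the text- and vision-based vectors before classifying region features by cosine similarity. The sketch below assumes that simple shared-space setup; the averaging-based fusion and the function names are illustrative, not the paper's aggregator.

```python
import numpy as np

def l2norm(v):
    """Project a vector onto the unit sphere."""
    return v / np.linalg.norm(v)

def fuse_classifier(text_emb, exemplar_embs):
    """Hypothetical multi-modal classifier for one category: average the
    L2-normalized image-exemplar embeddings into a vision-based vector,
    add the normalized text-based vector, and renormalize."""
    vision = l2norm(np.mean([l2norm(e) for e in exemplar_embs], axis=0))
    return l2norm(l2norm(text_emb) + vision)

def classify(region_feat, classifiers):
    """Assign a region feature to the category with the highest cosine
    score; classifiers maps category name -> unit classifier vector."""
    q = l2norm(region_feat)
    scores = {name: float(q @ w) for name, w in classifiers.items()}
    return max(scores, key=scores.get)
```

Because categories are just vectors handed to `classify`, novel categories can be added at inference time from a description, from exemplars, or from both, without retraining the detector.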