Template Adaptation for Face Verification and Identification
Face recognition performance evaluation has traditionally focused on
one-to-one verification, popularized by the Labeled Faces in the Wild dataset
for imagery and the YouTubeFaces dataset for videos. In contrast, the newly
released IJB-A face recognition dataset unifies evaluation of one-to-many face
identification with one-to-one face verification over templates, or sets of
imagery and videos for a subject. In this paper, we study the problem of
template adaptation, a form of transfer learning to the set of media in a
template. Extensive performance evaluations on IJB-A show a surprising result,
that perhaps the simplest method of template adaptation, combining deep
convolutional network features with template-specific linear SVMs, outperforms
the state-of-the-art by a wide margin. We study the effects of template size,
negative set construction and classifier fusion on performance, then compare
template adaptation to convolutional networks with metric learning, 2D and 3D
alignment. Our unexpected conclusion is that these other methods, when combined
with template adaptation, all achieve nearly the same top performance on IJB-A
for template-based face verification and identification.
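The template adaptation recipe described above can be sketched as follows (a minimal illustration, assuming precomputed CNN features; the function names, the symmetric scoring rule, and the regularization value are illustrative assumptions, not details from the paper): each template gets its own linear SVM, trained with the template's media features as positives against a large fixed negative set, and verification scores two templates against each other's classifiers.

```python
import numpy as np
from sklearn.svm import LinearSVC

def adapt_template(template_feats, negative_feats, C=10.0):
    """Train a template-specific linear SVM.

    Positives: CNN features of the template's images/video frames.
    Negatives: a large fixed set of features from other subjects.
    (C=10.0 is an illustrative choice, not the paper's setting.)
    """
    X = np.vstack([template_feats, negative_feats])
    y = np.concatenate([np.ones(len(template_feats)),
                        -np.ones(len(negative_feats))])
    return LinearSVC(C=C).fit(X, y)

def verify(clf_a, clf_b, feats_a, feats_b):
    """Symmetric verification score: each template's SVM scores the
    other template's mean-pooled feature; average the two margins."""
    s_ab = clf_a.decision_function(feats_b.mean(axis=0, keepdims=True))[0]
    s_ba = clf_b.decision_function(feats_a.mean(axis=0, keepdims=True))[0]
    return 0.5 * (s_ab + s_ba)
```

On well-separated features, two templates of the same subject should score higher under this rule than templates of different subjects.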
K-Space at TRECVID 2008
In this paper we describe K-Space's participation in
TRECVid 2008 in the interactive search task. For 2008
the K-Space group performed one of the largest interactive
video information retrieval experiments conducted
in a laboratory setting. We had three institutions participating
in a multi-site multi-system experiment. In
total 36 users participated, 12 each from Dublin City
University (DCU, Ireland), University of Glasgow (GU,
Scotland) and Centrum Wiskunde and Informatica (CWI,
the Netherlands). Three user interfaces were developed,
two from DCU which were also used in 2007 as well as
an interface from GU. All interfaces leveraged the same
search service. Using a Latin squares arrangement, each
user completed 12 topics, yielding 6 runs per site, 18 in
total. We officially submitted 3 of these runs to NIST
for evaluation, along with an additional expert run using
a 4th system. Our submitted runs performed around
the median. In this paper we present an overview of
the search system used, the experimental setup and a
preliminary analysis of our results.
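The Latin squares arrangement mentioned above can be sketched with a generic cyclic construction (this is a standard design for balancing topic order across users, not the actual K-Space assignment): each row is a user (or user group), each column a session position, and each topic block appears exactly once per row and once per column, so no topic systematically benefits from being seen first or last.

```python
def latin_square(n):
    """Cyclic n x n Latin square: entry (i, j) = (i + j) mod n.

    Rows = users, columns = session positions, entries = topic
    blocks; every block occurs exactly once in each row and column,
    balancing order effects across the user panel.
    """
    return [[(i + j) % n for j in range(n)] for i in range(n)]
```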
Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos
Despite rapid advances in face recognition, there remains a clear gap between
the performance of still image-based face recognition and video-based face
recognition, due to the vast difference in visual quality between the domains
and the difficulty of curating diverse large-scale video datasets. This paper
addresses both of those challenges, through an image to video feature-level
domain adaptation approach, to learn discriminative video frame
representations. The framework utilizes large-scale unlabeled video data to
reduce the gap between different domains while transferring discriminative
knowledge from large-scale labeled still images. Given a face recognition
network that is pretrained in the image domain, the adaptation is achieved by
(i) distilling knowledge from the network to a video adaptation network through
feature matching, (ii) performing feature restoration through synthetic data
augmentation and (iii) learning a domain-invariant feature through a domain
adversarial discriminator. We further improve performance through a
discriminator-guided feature fusion that boosts high-quality frames while
eliminating those degraded by video domain-specific factors. Experiments on the
YouTube Faces and IJB-A datasets demonstrate that each module contributes to
our feature-level domain adaptation framework and substantially improves video
face recognition performance to achieve state-of-the-art accuracy. We
demonstrate qualitatively that the network learns to suppress diverse artifacts
in videos such as pose, illumination or occlusion without being explicitly
trained for them.
Comment: accepted for publication at the International Conference on Computer
Vision (ICCV) 2017
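One plausible reading of the discriminator-guided feature fusion step described in this abstract can be sketched as follows (the softmax weighting, temperature parameter, and L2-normalization are illustrative assumptions, not the paper's exact formulation): per-frame features are averaged with weights derived from the domain discriminator's confidence that a frame is image-like, so high-quality frames dominate and degraded ones are suppressed.

```python
import numpy as np

def fuse_frames(frame_feats, disc_scores, temperature=1.0):
    """Fuse per-frame features into a single video representation.

    frame_feats: (T, D) array of frame embeddings.
    disc_scores: (T,) discriminator logits; higher means the frame
                 looks more image-like (higher quality).
    Weights are a softmax over the logits, so degraded frames
    (low scores) contribute little to the fused feature.
    """
    w = np.exp(disc_scores / temperature)
    w = w / w.sum()                        # softmax over frames
    fused = (w[:, None] * frame_feats).sum(axis=0)
    return fused / np.linalg.norm(fused)   # L2-normalize, as is common
```

With a very confident discriminator, the fused feature approaches the best frame's embedding; with uniform scores it reduces to a plain average.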