A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild"
Recently, technologies such as face detection, facial landmark localisation
and face recognition and verification have matured enough to provide effective
and efficient solutions for imagery captured under arbitrary conditions
(referred to as "in-the-wild"). This is partially attributed to the fact that
comprehensive "in-the-wild" benchmarks have been developed for face detection,
landmark localisation and recognition/verification. A very important technology
that has not been thoroughly evaluated yet is deformable face tracking
"in-the-wild". Until now, the performance has mainly been assessed
qualitatively by visually assessing the result of a deformable face tracking
technology on short videos. In this paper, we perform the first, to the best of
our knowledge, thorough evaluation of state-of-the-art deformable face tracking
pipelines using the recently introduced 300VW benchmark. We evaluate many
different architectures focusing mainly on the task of on-line deformable face
tracking. In particular, we compare the following general strategies: (a)
generic face detection plus generic facial landmark localisation, (b) generic
model free tracking plus generic facial landmark localisation, as well as (c)
hybrid approaches using state-of-the-art face detection, model free tracking
and facial landmark localisation technologies. Our evaluation reveals future
avenues for further research on the topic.

Comment: E. Antonakos and P. Snape contributed equally and have joint second
authorship.
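
As a rough illustration of the three strategies compared above, the sketch below wires the components into a per-frame loop. The helper names (detect_face, init_tracker, update_tracker, localise_landmarks) and the failure-recovery rule in the hybrid branch are hypothetical placeholders, not the paper's released code.

    def track_video(frames, strategy="detection"):
        """Per-frame landmark estimates under one of the three pipelines."""
        landmarks, tracker = [], None
        for i, frame in enumerate(frames):
            if strategy == "detection":
                # (a) generic face detection run independently on every frame
                box = detect_face(frame)
            elif strategy == "model_free":
                # (b) model-free tracking initialised once, then propagated
                if i == 0:
                    tracker = init_tracker(frame, detect_face(frame))
                box = update_tracker(tracker, frame)
            else:
                # (c) hybrid: track, but re-detect whenever tracking fails
                box = update_tracker(tracker, frame) if tracker else None
                if box is None:
                    box = detect_face(frame)
                    tracker = init_tracker(frame, box)
            # generic facial landmark localisation inside the estimated box
            landmarks.append(localise_landmarks(frame, box))
        return landmarks
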
Motion deblurring of faces
Face analysis is a core part of computer vision, in which remarkable progress
has been observed in the past decades. Current methods achieve recognition and
tracking with invariance to fundamental modes of variation such as
illumination, 3D pose, and expression. Notwithstanding, a far less studied mode
of variation is motion blur, which presents substantial challenges for face
analysis. Recent approaches either make oversimplifying
assumptions, e.g. in cases of joint optimization with other tasks, or fail to
preserve the highly structured shape/identity information. Therefore, we
propose a data-driven method that encourages identity preservation. The
proposed model includes two parallel streams (sub-networks): the first deblurs
the image, the second implicitly extracts and projects the identity of both the
sharp and the blurred image into similar subspaces. We devise a method for
creating realistic motion blur by averaging a variable number of frames to
train our model. The averaged images originate from the new 2MF2 dataset with 10
million facial frames, which we introduce for the task. Considering deblurring
as an intermediate step, we utilize the deblurred outputs to conduct thorough
experimentation on high-level face analysis tasks, i.e. landmark localization
and face verification. The experimental evaluation demonstrates the superiority
of our method.
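
The blur-synthesis step lends itself to a short sketch: averaging a variable number of consecutive sharp frames approximates motion blur. The window sizes and the choice of a near-central frame as the sharp target are illustrative assumptions, not the paper's exact recipe.

    import numpy as np

    def synthesise_blur(frames, rng=None):
        """Average a random window of consecutive frames into one blurry image."""
        rng = rng or np.random.default_rng()
        n = int(rng.integers(3, min(len(frames), 15) + 1))  # variable window size
        start = int(rng.integers(0, len(frames) - n + 1))
        window = np.stack(frames[start:start + n]).astype(np.float32)
        blurred = window.mean(axis=0)     # temporal average approximates blur
        sharp = frames[start + n // 2]    # a frame near the window centre
        return blurred.astype(np.uint8), sharp
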
Multilinear Operator Networks
Despite the remarkable capabilities of deep neural networks in image
recognition, the dependence on activation functions remains a largely
unexplored area and has yet to be eliminated. On the other hand, Polynomial
Networks are a class of models that do not require activation functions but
have yet to perform on par with modern architectures. In this work, we aim to
close this gap and propose MONet, which relies solely on multilinear operators.
The core layer of MONet, called Mu-Layer, captures multiplicative interactions
of the elements of the input token. MONet captures high-degree interactions of
the input elements and we demonstrate the efficacy of our approach on a series
of image recognition and scientific computing benchmarks. The proposed model
outperforms prior polynomial networks and performs on par with modern
architectures. We believe that MONet can inspire further research on models
that use entirely multilinear operations.

Comment: International Conference on Learning Representations, Poster (2024).
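
Since the abstract does not spell the layer out, the following is only a hedged reading of what an activation-free multiplicative layer can look like, in the style of earlier polynomial networks: two linear projections combined by a Hadamard product plus a linear skip path. The class name MuLikeLayer and all widths are assumptions, not the paper's exact Mu-Layer.

    import torch
    import torch.nn as nn

    class MuLikeLayer(nn.Module):
        """Activation-free layer capturing multiplicative interactions
        (an illustrative stand-in, not the paper's exact Mu-Layer)."""
        def __init__(self, dim):
            super().__init__()
            self.u = nn.Linear(dim, dim)
            self.v = nn.Linear(dim, dim)
            self.skip = nn.Linear(dim, dim)  # linear path, eases optimisation

        def forward(self, x):
            # Hadamard product of two linear views of the same token
            return self.u(x) * self.v(x) + self.skip(x)

    # Stacking k such layers yields input interactions of degree up to 2**k.
    block = nn.Sequential(*[MuLikeLayer(64) for _ in range(3)])
    out = block(torch.randn(8, 16, 64))      # (batch, tokens, channels)
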
Unsupervised Controllable Generation with Self-Training
Recent generative adversarial networks (GANs) are able to generate impressive
photo-realistic images. However, controllable generation with GANs remains a
challenging research problem. Achieving controllable generation requires
semantically interpretable and disentangled factors of variation. It is
challenging to achieve this goal using simple fixed distributions such as
a Gaussian distribution. Instead, we propose an unsupervised framework to learn a
distribution of latent codes that control the generator through self-training.
Self-training provides iterative feedback during GAN training, from the
discriminator to the generator, and progressively improves the proposal of the
latent codes as training proceeds. The latent codes are sampled from a latent
variable model that is learned in the feature space of the discriminator. We
consider a normalized independent component analysis model and learn its
parameters through tensor factorization of the higher-order moments. Our
framework exhibits better disentanglement compared to other variants such as
the variational autoencoder, and is able to discover semantically meaningful
latent codes without any supervision. We demonstrate empirically on both cars
and faces datasets that each group of elements in the learned code controls a
mode of variation with a semantic meaning, e.g. pose or background change. We
also demonstrate with quantitative metrics that our method generates better
results compared to other approaches.
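
The self-training procedure reduces to a simple control flow: train the GAN, embed generated samples in the discriminator's feature space, refit the latent-code model, and resample. All helper names below (gan_step, fit_latent_model, sample_codes) are hypothetical placeholders; only the loop structure is implied by the abstract.

    def self_train(G, D, data, rounds=10):
        codes = sample_codes(model=None)           # initial proposal of codes
        for _ in range(rounds):
            G, D = gan_step(G, D, data, codes)     # usual adversarial updates
            feats = D.features(G(codes))           # discriminator feature space
            # normalised ICA fitted via tensor factorisation of higher-order
            # moments of the features (as described in the abstract above)
            model = fit_latent_model(feats)
            codes = sample_codes(model=model)      # progressively better codes
        return G, model
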
On the Convergence of Encoder-only Shallow Transformers
In this paper, we aim to build the global convergence theory of encoder-only
shallow Transformers in a realistic setting, from the perspective of
architectures, initialization, and scaling, under a finite-width regime. The
difficulty lies in how to tackle the softmax in the self-attention mechanism,
the core ingredient of the Transformer. In particular, we diagnose the scaling scheme,
carefully tackle the input/output of softmax, and prove that quadratic
overparameterization is sufficient for global convergence of our shallow
Transformers under He/LeCun initializations commonly used in practice. In addition,
neural tangent kernel (NTK) based analysis is also given, which facilitates a
comprehensive comparison. Our theory demonstrates a separation in the
importance of different scaling schemes and initializations. We believe our
results can pave the way for a better understanding of modern Transformers,
particularly of their training dynamics.
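
For concreteness, a shallow encoder-only Transformer of the kind this analysis targets might look as follows: one softmax self-attention layer and a linear read-out, with He-style initialisation. The width, scaling, and pooling choices here are illustrative assumptions, not the paper's exact setting.

    import math
    import torch
    import torch.nn as nn

    class ShallowEncoder(nn.Module):
        def __init__(self, dim, width):
            super().__init__()
            self.q = nn.Linear(dim, width, bias=False)
            self.k = nn.Linear(dim, width, bias=False)
            self.v = nn.Linear(dim, width, bias=False)
            self.out = nn.Linear(width, 1, bias=False)
            for lin in (self.q, self.k, self.v, self.out):
                nn.init.kaiming_normal_(lin.weight)        # He initialisation

        def forward(self, x):                              # x: (batch, tokens, dim)
            scores = self.q(x) @ self.k(x).transpose(-2, -1)
            attn = torch.softmax(scores / math.sqrt(self.k.out_features), dim=-1)
            return self.out(attn @ self.v(x)).mean(dim=1)  # pooled scalar output
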
The first Facial Landmark Tracking in-the-Wild Challenge: benchmark and results
Detection and tracking of faces in image sequences is among the most well-studied problems at the intersection of statistical machine learning and computer vision. Often, tracking and detection methodologies use a rigid representation to describe the facial region, hence they can neither capture nor exploit the non-rigid facial deformations, which are crucial for countless applications (e.g., facial expression analysis, facial motion capture, high-performance face recognition, etc.). Usually, the non-rigid deformations are captured by locating and tracking the positions of a set of fiducial facial landmarks (e.g., eyes, nose, mouth, etc.). Recently, we witnessed a burst of research in automatic facial landmark localisation in static imagery. This is partly attributed to the availability of a large amount of annotated data, much of which has been provided by the first facial landmark localisation challenge (also known as the 300-W challenge). Even though well-established benchmarks now exist for facial landmark localisation in static imagery, to the best of our knowledge, there is no established benchmark for assessing the performance of facial landmark tracking methodologies that contains an adequate number of annotated face videos. In conjunction with ICCV 2015, we ran the first competition/challenge on facial landmark tracking in long-term videos. In this paper, we present the first benchmark for long-term facial landmark tracking, containing currently over 110 annotated videos, and we summarise the results of the competition.
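
Benchmarks in this family are typically scored with a normalised point-to-point error; the sketch below shows one common variant (mean landmark distance normalised by the interocular distance). The exact protocol used by the challenge is specified in the paper, so treat this only as an illustration.

    import numpy as np

    def pt_pt_error(pred, gt, left_eye, right_eye):
        """Mean point-to-point error for one frame, normalised by the
        interocular distance. pred, gt: (n_landmarks, 2) arrays."""
        interocular = np.linalg.norm(gt[left_eye] - gt[right_eye])
        return np.linalg.norm(pred - gt, axis=1).mean() / interocular
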