Learning Grimaces by Watching TV
Unlike computer vision systems, which require explicit supervision,
humans can learn facial expressions by observing people in their environment.
In this paper, we look at how similar capabilities could be developed in
machine vision. As a starting point, we consider the problem of relating facial
expressions to objectively measurable events occurring in videos. In
particular, we consider a gameshow in which contestants play to win significant
sums of money. We extract events affecting the game and corresponding facial
expressions objectively and automatically from the videos, obtaining large
quantities of labelled data for our study. We also develop state-of-the-art
deep neural networks for facial expression recognition on benchmarks such as
FER and SFEW 2.0, showing that pre-training on face verification data can
be highly beneficial for this task. Then, we extend these models to use facial
expressions to predict events in videos and learn nameable expressions from
them. The dataset and emotion recognition models are available at
http://www.robots.ox.ac.uk/~vgg/data/facevalue
Comment: British Machine Vision Conference (BMVC) 2016
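A minimal sketch of the transfer-learning recipe the abstract describes: a network pretrained on face verification is repurposed for expression recognition by swapping its head and fine-tuning. This is not the authors' code; the backbone, feature dimension, and seven-way label space are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPRESSIONS = 7  # e.g. a basic-expression label set as in FER

def build_expression_model(backbone: nn.Module, feat_dim: int) -> nn.Module:
    # `backbone` is assumed to be a face-verification CNN that outputs a
    # flat (B, feat_dim) feature; we replace its verification head with a
    # small expression classifier and fine-tune the whole model.
    return nn.Sequential(backbone, nn.Linear(feat_dim, NUM_EXPRESSIONS))

def finetune_step(model, optimizer, faces, labels):
    # faces: batch of cropped face images; labels: expression indices
    logits = model(faces)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```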
Learnable PINs: Cross-Modal Embeddings for Person Identity
We propose and investigate an identity sensitive joint embedding of face and
voice. Such an embedding enables cross-modal retrieval from voice to face and
from face to voice. We make the following four contributions: first, we show
that the embedding can be learnt from videos of talking faces, without
requiring any identity labels, using a form of cross-modal self-supervision;
second, we develop a curriculum learning schedule for hard negative mining
targeted to this task, that is essential for learning to proceed successfully;
third, we demonstrate and evaluate cross-modal retrieval for identities unseen
and unheard during training over a number of scenarios and establish a
benchmark for this novel task; finally, we show an application of using the
joint embedding for automatically retrieving and labelling characters in TV
dramas.
Comment: To appear in ECCV 2018
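A minimal sketch of a cross-modal hinge loss with tunable hard-negative mining, in the spirit of the curriculum described above. The margin, the `frac_hard` schedule, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(face_emb, voice_emb, margin=0.6, frac_hard=0.5):
    # face_emb, voice_emb: (B, D) L2-normalised embeddings; row i of each
    # comes from the same talking-face track, so pair (i, i) is the positive.
    sim = face_emb @ voice_emb.t()                       # cosine similarities
    pos = sim.diag()                                     # matched face-voice pairs
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float("-inf"))           # exclude positives
    # Curriculum over negative difficulty: raise `frac_hard` during training
    # so the loss gradually focuses on the hardest (most similar) negatives.
    k = max(1, int(frac_hard * (sim.size(0) - 1)))
    hard_neg = neg.topk(k, dim=1).values.mean(dim=1)
    return F.relu(margin - pos + hard_neg).mean()
```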
Simple Baselines for Interactive Video Retrieval with Questions and Answers
To date, the majority of video retrieval systems have been optimized for a
"single-shot" scenario in which the user submits a query in isolation, ignoring
previous interactions with the system. Recently, there has been renewed
interest in interactive systems to enhance retrieval, but existing approaches
are complex and deliver limited gains in performance. In this work, we revisit
this topic and propose several simple yet effective baselines for interactive
video retrieval via question-answering. We employ a VideoQA model to simulate
user interactions and show that this enables the productive study of the
interactive retrieval task without access to ground truth dialogue data.
Experiments on MSR-VTT, MSVD, and AVSD show that our framework using
question-based interaction significantly improves the performance of text-based
video retrieval systems.
Comment: ICCV 2023, project page:
https://github.com/kevinliang888/IVR-QA-baseline
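A minimal sketch of the question-answering retrieval loop, under the assumption of two hypothetical components: a `retriever` exposing `score(query, videos)` and a `user_sim` whose `answer(question)` is played by a VideoQA model applied to the target video, standing in for a real user as the abstract describes.

```python
def interactive_retrieval(query, videos, questions, retriever, user_sim):
    # Each round, ask a question about the (unknown) target video; the
    # simulated user answers, and the answer augments the text query.
    for question in questions:
        answer = user_sim.answer(question)
        query = f"{query} {question} {answer}"
    scores = retriever.score(query, videos)   # one score per candidate video
    ranking = sorted(range(len(videos)), key=lambda i: -scores[i])
    return ranking                            # best-matching videos first
```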
Small steps and giant leaps: Minimal Newton solvers for Deep Learning
We propose a fast second-order method that can be used as a drop-in
replacement for current deep learning solvers. Compared to stochastic gradient
descent (SGD), it only requires two additional forward-mode automatic
differentiation operations per iteration, at a computational cost comparable
to two standard forward passes, and is easy to implement. Our method
addresses long-standing issues with current second-order solvers, which invert
an approximate Hessian matrix at every iteration, either exactly or via
conjugate-gradient methods, a procedure that is both costly and sensitive to
noise. Instead, we
propose to keep a single estimate of the gradient projected by the inverse
Hessian matrix, and update it once per iteration. This estimate has the same
size as the parameter vector and is similar to the momentum variable commonly
used in SGD. No
estimate of the Hessian is maintained. We first validate our method, called
CurveBall, on small problems with known closed-form solutions (noisy Rosenbrock
function and degenerate 2-layer linear networks), where current deep learning
solvers seem to struggle. We then train several large models on CIFAR and
ImageNet, including ResNet and VGG-f networks, where we demonstrate faster
convergence with no hyperparameter tuning. Code is available.
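A minimal sketch of a CurveBall-style step in PyTorch. The paper obtains the required Hessian-vector product with forward-mode automatic differentiation and adapts beta and rho in closed form; here a double-backward Hessian-vector product and fixed hyperparameters stand in, so this illustrates the update rather than reproducing the authors' implementation.

```python
import torch

def curveball_step(loss_fn, w, z, beta=0.01, rho=0.9):
    # w: flat parameter tensor with requires_grad=True
    # z: persistent estimate of the gradient projected by the inverse Hessian
    loss = loss_fn(w)
    (grad,) = torch.autograd.grad(loss, w, create_graph=True)
    # Hessian-vector product H z via a second differentiation pass.
    (hz,) = torch.autograd.grad(grad, w, grad_outputs=z)
    z = (rho * z - beta * (hz + grad)).detach()  # one interleaved update of z
    with torch.no_grad():
        w += z                                   # take the step
    return z, loss.item()

# Toy usage on a convex quadratic: z is initialised to zeros and carried
# across iterations, exactly like an SGD momentum buffer.
w = torch.randn(10, requires_grad=True)
z = torch.zeros_like(w)
for _ in range(50):
    z, loss = curveball_step(lambda p: (p ** 2).sum(), w, z)
```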
Disentangled Speech Embeddings using Cross-modal Self-supervision
The objective of this paper is to learn representations of speaker identity
without access to manually annotated data. To do so, we develop a
self-supervised learning objective that exploits the natural cross-modal
synchrony between faces and audio in video. The key idea behind our approach is
to tease apart, without annotation, the representations of linguistic content
and speaker identity. We construct a two-stream architecture which: (1) shares
low-level features common to both representations; and (2) provides a natural
mechanism for explicitly disentangling these factors, offering the potential
for greater generalisation to novel combinations of content and identity and
ultimately producing speaker identity representations that are more robust. We
train our method on a large-scale audio-visual dataset of talking heads 'in the
wild', and demonstrate its efficacy by evaluating the learned speaker
representations for standard speaker recognition performance.
Comment: ICASSP 2020. The first three authors contributed equally to this work.
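A minimal sketch of the two-stream idea: shared low-level features feeding separate content and identity heads. Layer sizes and names are illustrative assumptions, and the training losses (fine-grained audio-visual synchrony for content, cross-modal identity matching for the speaker stream) are omitted.

```python
import torch.nn as nn

class TwoStreamAudioNet(nn.Module):
    def __init__(self, emb_dim=256):
        super().__init__()
        # Shared low-level trunk over a 1-channel spectrogram input.
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Content head: trained with fine-grained audio-visual synchrony.
        self.content_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, emb_dim))
        # Identity head: trained to match the face of the same speaker, a
        # signal stable over time, disentangling identity from the words said.
        self.identity_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, emb_dim))

    def forward(self, spec):
        feats = self.trunk(spec)               # features shared by both factors
        return self.content_head(feats), self.identity_head(feats)
```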
NamedMask: Distilling Segmenters from Complementary Foundation Models
The goal of this work is to segment and name regions of images without access
to pixel-level labels during training. To tackle this task, we construct
segmenters by distilling the complementary strengths of two foundation models.
The first, CLIP (Radford et al. 2021), exhibits the ability to assign names to
image content but lacks an accessible representation of object structure. The
second, DINO (Caron et al. 2021), captures the spatial extent of objects but
has no knowledge of object names. Our method, termed NamedMask, begins by using
CLIP to construct category-specific archives of images. These images are
pseudo-labelled with a category-agnostic salient object detector bootstrapped
from DINO, then refined by category-specific segmenters using the CLIP archive
labels. Thanks to the high quality of the refined masks, we show that a
standard segmentation architecture trained on these archives with appropriate
data augmentation achieves impressive semantic segmentation abilities for both
single-object and multi-object images. As a result, our proposed NamedMask
performs favourably against a range of prior work on five benchmarks including
the VOC2012, COCO and large-scale ImageNet-S datasets.
Comment: Tech report. Code: https://github.com/NoelShin/namedmas
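A minimal sketch of the archive-building stage using OpenAI's `clip` package: CLIP ranks unlabelled images against a category name, and the top matches form that category's archive. The helper name, prompt template, and `top_k` value are assumptions rather than the NamedMask code.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

def build_archive(images, image_feats, category, model, device, top_k=500):
    # image_feats: precomputed, L2-normalised CLIP image embeddings, (N, D).
    text = clip.tokenize([f"a photo of a {category}"]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(text).float()
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    scores = (image_feats @ text_feat.t()).squeeze(1)  # cosine similarity per image
    k = min(top_k, scores.numel())
    idx = scores.topk(k).indices                       # best-matching images
    return [images[i] for i in idx]                    # the category-specific archive
```

Each archive image is then pseudo-labelled with the DINO-bootstrapped salient-object mask and the archive's category name, and a standard segmentation architecture is trained on the resulting (image, mask, name) triples.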