Contrastive Audio-Visual Masked Autoencoder
In this paper, we first extend the recent Masked Auto-Encoder (MAE) model
from a single modality to audio-visual multi-modalities. Subsequently, we
propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining
contrastive learning and masked data modeling, two major self-supervised
learning frameworks, to learn a joint and coordinated audio-visual
representation. Our experiments show that the contrastive audio-visual
correspondence learning objective not only enables the model to perform
audio-visual retrieval tasks, but also helps the model learn a better joint
representation. As a result, our fully self-supervised pretrained CAV-MAE
achieves a new SOTA accuracy of 65.9% on VGGSound, and is comparable with the
previous best supervised pretrained model on AudioSet in the audio-visual event
classification task. Code and pretrained models are at
https://github.com/yuangongnd/cav-mae.
Comment: Accepted at ICLR 2023 as a notable top 25% paper.
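The core idea of pairing contrastive learning with masked reconstruction can be illustrated with the contrastive half: a symmetric InfoNCE loss where matched audio and visual embeddings are positives. This is a minimal numpy sketch, not the paper's implementation; the embedding dimensions, temperature, and `info_nce` helper are illustrative assumptions.

```python
import numpy as np

def info_nce(a, v, tau=0.07):
    """Symmetric contrastive (InfoNCE) loss between audio embeddings `a`
    and visual embeddings `v`; matched pairs share the same row index.
    A minimal sketch of the audio-visual correspondence objective."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    logits = a @ v.T / tau                  # (B, B) cosine-similarity matrix
    labels = np.arange(len(a))              # positives lie on the diagonal

    def xent(l):
        # numerically stable cross-entropy with the diagonal as the target
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average audio-to-video and video-to-audio directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
# a slightly perturbed copy stands in for the matched visual embedding
loss = info_nce(a, a + 0.01 * rng.normal(size=(4, 8)))
```

In the full model this term would be summed with the masked-autoencoder reconstruction loss over the unmasked patches.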
Generative Face Completion
In this paper, we propose an effective face completion algorithm using a deep
generative model. Different from well-studied background completion, the face
completion task is more challenging as it often requires generating
semantically new pixels for the missing key components (e.g., eyes and mouths)
that contain large appearance variations. Unlike existing nonparametric
algorithms that search for patches to synthesize, our algorithm directly
generates content for missing regions based on a neural network. The model is
trained with a combination of a reconstruction loss, two adversarial losses and
a semantic parsing loss, which ensures pixel faithfulness and local-global
content consistency. With extensive experimental results, we demonstrate
qualitatively and quantitatively that our model is able to deal with a large
area of missing pixels in arbitrary shapes and generate realistic face
completion results.
Comment: Accepted by CVPR 2017
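The abstract describes a training objective built from a reconstruction loss, two adversarial losses, and a semantic parsing loss. The sketch below shows one plausible way to combine such terms; the `completion_loss` function, the loss weights, and the use of local/global discriminator scores are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def completion_loss(pred, target, mask, d_local, d_global,
                    parse_pred, parse_gt, w_adv=0.001, w_parse=0.1):
    """Hypothetical weighted objective mirroring the abstract's description:
    hole-weighted reconstruction + local/global adversarial terms + a
    semantic-parsing cross-entropy. `d_local`/`d_global` are discriminator
    scores in (0, 1) for the completed image; `parse_*` hold per-pixel
    class probabilities."""
    rec = np.abs((pred - target) * mask).mean()              # L1 over the hole
    adv = -np.log(d_local + 1e-8) - np.log(d_global + 1e-8)  # fool both critics
    parse = -(parse_gt * np.log(parse_pred + 1e-8)).sum(axis=-1).mean()
    return rec + w_adv * adv + w_parse * parse

# toy usage: a 4x4 image with a 2x2 missing region
pred = np.zeros((4, 4))
target = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                         # 1 marks missing pixels
parse_gt = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = completion_loss(pred, target, mask, d_local=0.9, d_global=0.9,
                       parse_pred=parse_gt, parse_gt=parse_gt)
```

The relative weights would in practice be tuned so that the adversarial and parsing terms guide realism without overwhelming pixel fidelity.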
Self-Supervised Feature Learning by Learning to Spot Artifacts
We introduce a novel self-supervised learning method based on adversarial
training. Our objective is to train a discriminator network to distinguish real
images from images with synthetic artifacts, and then to extract features from
its intermediate layers that can be transferred to other data domains and
tasks. To generate images with artifacts, we pre-train a high-capacity
autoencoder and then we use a damage and repair strategy: First, we freeze the
autoencoder and damage the output of the encoder by randomly dropping its
entries. Second, we augment the decoder with a repair network, and train it in
an adversarial manner against the discriminator. The repair network helps
generate more realistic images by inpainting the dropped feature entries. To
make the discriminator focus on the artifacts, we also make it predict what
entries in the feature were dropped. We demonstrate experimentally that
features learned by creating and spotting artifacts achieve state-of-the-art
performance on several benchmarks.
Comment: CVPR 2018 (spotlight)
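The "damage" half of the damage-and-repair strategy amounts to randomly zeroing entries of the frozen encoder's output; the discriminator is then also asked to predict which entries were dropped. A minimal sketch of that damaging step, with an assumed `damage` helper and drop probability:

```python
import numpy as np

rng = np.random.default_rng(1)

def damage(features, drop_prob=0.5, rng=rng):
    """Randomly zero out encoder-feature entries ('damage'). The repair
    network would inpaint these entries, while the discriminator both
    classifies real vs. repaired images and predicts this drop mask."""
    mask = rng.random(features.shape) < drop_prob   # True = dropped entry
    damaged = np.where(mask, 0.0, features)
    return damaged, mask

f = rng.normal(size=(2, 16))        # toy batch of encoder features
damaged, mask = damage(f)
```

The drop mask doubles as the supervision signal for the discriminator's auxiliary prediction head.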
A vector quantized masked autoencoder for speech emotion recognition
Recent years have seen remarkable progress in speech emotion recognition
(SER), thanks to advances in deep learning techniques. However, the limited
availability of labeled data remains a significant challenge in the field.
Self-supervised learning has recently emerged as a promising solution to
address this challenge. In this paper, we propose the vector quantized masked
autoencoder for speech (VQ-MAE-S), a self-supervised model that is fine-tuned
to recognize emotions from speech signals. The VQ-MAE-S model is based on a
masked autoencoder (MAE) that operates in the discrete latent space of a
vector-quantized variational autoencoder. Experimental results show that the
proposed VQ-MAE-S model, pre-trained on the VoxCeleb2 dataset and fine-tuned on
emotional speech data, outperforms an MAE working on the raw spectrogram
representation and other state-of-the-art methods in SER.
Comment: https://samsad35.github.io/VQ-MAE-Speech
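The discrete latent space that VQ-MAE-S masks over comes from a vector-quantized VAE: each continuous latent is snapped to its nearest codebook entry. This is a minimal sketch of that quantization step only; the `vector_quantize` helper and the tiny codebook are illustrative assumptions.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector in `z` to its nearest codebook entry,
    yielding the discrete tokens a masked autoencoder can operate on."""
    # squared distances between every latent and every codebook vector
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)          # discrete token per latent
    return codebook[idx], idx

# toy 2-entry codebook in a 2-D latent space
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
q, idx = vector_quantize(z, codebook)
```

Masking then hides a subset of the token indices `idx`, and the MAE is trained to recover them from the visible tokens.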