12,674 research outputs found
Unsupervised Learning of Semantic Audio Representations
Even in the absence of any explicit semantic annotation, vast collections of
audio recordings provide valuable information for learning the categorical
structure of sounds. We consider several class-agnostic semantic constraints
that apply to unlabeled nonspeech audio: (i) noise and translations in time do
not change the underlying sound category, (ii) a mixture of two sound events
inherits the categories of the constituents, and (iii) the categories of events
in close temporal proximity are likely to be the same or related. Without
labels to ground them, these constraints are incompatible with classification
loss functions. However, they may still be leveraged to identify geometric
inequalities needed for triplet loss-based training of convolutional neural
networks. The result is low-dimensional embeddings of the input spectrograms
that recover 41% and 84% of the performance of their fully-supervised
counterparts when applied to downstream query-by-example sound retrieval and
sound event classification tasks, respectively. Moreover, in
limited-supervision settings, our unsupervised embeddings double the
state-of-the-art classification performance. Comment: Submitted to ICASSP 2018
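The triplet-based training described above can be pictured generically: embeddings of an anchor clip, a constraint-derived positive (e.g. a noisy or time-shifted version of the anchor), and an unrelated negative are pushed to satisfy a margin inequality. A minimal sketch with stand-in tensors, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge loss enforcing d(a, p) + margin <= d(a, n) in embedding space."""
    d_pos = F.pairwise_distance(anchor, positive)  # anchor-positive distance
    d_neg = F.pairwise_distance(anchor, negative)  # anchor-negative distance
    return F.relu(d_pos - d_neg + margin).mean()

# Random stand-ins for embedded spectrograms (batch of 8, 128-d embeddings);
# unit-normalising keeps distances comparable across triplets.
a, p, n = (F.normalize(torch.randn(8, 128)) for _ in range(3))
loss = triplet_loss(a, p, n)
```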
Predicting the Effectiveness of Self-Training: Application to Sentiment Classification
The goal of this paper is to investigate the connection between the
performance gain that can be obtained by self-training and the similarity
between the corpora used in this approach. Self-training is a semi-supervised
technique designed to increase the performance of machine learning algorithms
by automatically classifying instances of a task and adding these as additional
training material to the same classifier. In the context of language processing
tasks, this training material is mostly an (annotated) corpus. Unfortunately,
self-training does not always lead to a performance increase, and whether it
will do so is largely unpredictable. We show that the similarity between corpora can
be used to identify those setups for which self-training can be beneficial. We
consider this research as a step in the process of developing a classifier that
is able to adapt itself to each new test corpus that it is presented with.
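The self-training procedure itself is straightforward; a minimal sketch with a scikit-learn classifier and an assumed confidence threshold (the corpus-similarity check proposed above would be applied before committing to the loop):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Iteratively add confidently auto-labelled instances to the training set."""
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold      # confident predictions only
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])  # grow the training material
        y_lab = np.concatenate([y_lab, clf.classes_[proba[keep].argmax(axis=1)]])
        X_unlab = X_unlab[~keep]
    return clf
```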
Objects that Sound
In this paper our objectives are, first, networks that can embed audio and
visual inputs into a common space that is suitable for cross-modal retrieval;
and second, a network that can localize the object that sounds in an image,
given the audio signal. We achieve both these objectives by training from
unlabelled video using only audio-visual correspondence (AVC) as the objective
function. This is a form of cross-modal self-supervision from video.
To this end, we design new network architectures that can be trained for
cross-modal retrieval and localizing the sound source in an image, by using the
AVC task. We make the following contributions: (i) show that audio and visual
embeddings can be learnt that enable both within-mode (e.g. audio-to-audio) and
between-mode retrieval; (ii) explore various architectures for the AVC task,
including those for the visual stream that ingest a single image, or multiple
images, or a single image and multi-frame optical flow; (iii) show that the
semantic object that sounds within an image can be localized (using only the
sound, no motion or flow information); and (iv) give a cautionary tale on how
to avoid undesirable shortcuts in the data preparation. Comment: Appears in: European Conference on Computer Vision (ECCV) 2018
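One way to realise the shared embedding space described above is a distance-based correspondence head: both modalities are embedded, L2-normalised, and the scalar distance between them is classified as corresponding or not. A sketch with placeholder sub-networks (layer sizes and names are assumptions, not the published architecture):

```python
import torch.nn as nn
import torch.nn.functional as F

class AVCHead(nn.Module):
    def __init__(self, vision_net, audio_net):
        super().__init__()
        self.vision_net = vision_net     # image -> (B, d) features
        self.audio_net = audio_net       # spectrogram -> (B, d) features
        self.classify = nn.Linear(1, 2)  # scalar distance -> correspond / not

    def forward(self, image, spectrogram):
        v = F.normalize(self.vision_net(image))         # unit-norm visual embedding
        a = F.normalize(self.audio_net(spectrogram))    # unit-norm audio embedding
        dist = F.pairwise_distance(v, a, keepdim=True)  # (B, 1)
        return self.classify(dist)                      # AVC logits

# Because the same normalised embeddings carry the decision, ranking items by
# Euclidean distance to a query supports both within- and between-mode retrieval.
```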
Collaborative Feature Learning from Social Media
Image feature representation plays an essential role in image recognition and
related tasks. The current state-of-the-art feature learning paradigm is
supervised learning from labeled data. However, this paradigm requires
large-scale category labels, which limits its applicability to domains where
labels are hard to obtain. In this paper, we propose a new data-driven feature
learning paradigm which does not rely on category labels. Instead, we learn
from user behavior data collected on social media. Concretely, we use image
relationships discovered in a latent space derived from the user behavior data to
guide the image feature learning. We collect a large-scale image and user
behavior dataset from Behance.net. The dataset consists of 1.9 million images
and over 300 million view records from 1.9 million users. We validate our
feature learning paradigm on this dataset and find that the learned feature
significantly outperforms state-of-the-art image features at capturing
image similarities. We also show that the learned feature performs
competitively on various recognition benchmarks.
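As an illustration of the latent-space idea, one simple (assumed) instantiation factorises the sparse user-by-image view matrix so that co-viewed images land near each other; the resulting vectors could then serve as regression targets for an image feature network. This is a sketch, not the authors' pipeline:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Toy view records: views[u, i] = 1 if user u viewed image i.
rows = np.array([0, 0, 1, 2, 2])
cols = np.array([0, 1, 1, 2, 3])
views = csr_matrix((np.ones(5), (rows, cols)), shape=(3, 4))

# Truncated SVD yields low-rank user and image factors; the scaled rows of Vt
# give each image a position in the shared latent space.
U, s, Vt = svds(views, k=2)
image_vectors = (Vt * s[:, None]).T   # (n_images, k) latent targets
```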
Weakly-Supervised Neural Text Classification
Deep neural networks are gaining increasing popularity for the classic text
classification task, due to their strong expressive power and reduced need
for feature engineering. Despite this appeal, neural text
classification models suffer from the lack of training data in many real-world
applications. Although many semi-supervised and weakly-supervised text
classification models exist, they cannot be easily applied to deep neural
models and support only limited types of supervision. In this paper, we
propose a weakly-supervised method that addresses the lack of training data in
neural text classification. Our method consists of two modules: (1) a
pseudo-document generator that leverages seed information to generate
pseudo-labeled documents for model pre-training, and (2) a self-training module
that bootstraps on real unlabeled data for model refinement. Our method has the
flexibility to handle different types of weak supervision and can be easily
integrated into existing deep neural models for text classification. We have
performed extensive experiments on three real-world datasets from different
domains. The results demonstrate that our proposed method achieves strong
performance without requiring excessive training data and significantly
outperforms baseline methods. Comment: CIKM 2018 Full Paper
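The pseudo-document generator can be pictured with a toy version: documents are sampled from a background vocabulary mixed with class seed words, and carry their class as a pseudo-label for pre-training. Vocabulary, seeds, and mixing rate below are illustrative assumptions:

```python
import random

vocab = ["game", "player", "market", "stock", "team", "price", "coach", "fund"]
seeds = {"sports": ["game", "team", "coach"], "finance": ["stock", "market", "fund"]}

def pseudo_doc(label, length=20, seed_prob=0.4):
    """Sample a pseudo-labelled document biased toward the class seed words."""
    words = [random.choice(seeds[label]) if random.random() < seed_prob
             else random.choice(vocab) for _ in range(length)]
    return " ".join(words), label

doc, label = pseudo_doc("sports")  # feeds the pre-training module
```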
Look, Listen and Learn
We consider the question: what can be learnt by looking at and listening to a
large number of unlabelled videos? There is a valuable, but so far untapped,
source of information contained in the video itself -- the correspondence
between the visual and the audio streams, and we introduce a novel
"Audio-Visual Correspondence" learning task that makes use of this. Training
visual and audio networks from scratch, without any additional supervision
other than the raw unconstrained videos themselves, is shown to successfully
solve this task, and, more interestingly, result in good visual and audio
representations. These features set the new state-of-the-art on two sound
classification benchmarks, and perform on par with the state-of-the-art
self-supervised approaches on ImageNet classification. We also demonstrate that
the network is able to localize objects in both modalities, as well as perform
fine-grained recognition tasks. Comment: Appears in: IEEE International Conference on Computer Vision (ICCV) 2017
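Training data for the AVC task can be constructed for free from unlabelled video: a positive pairs a frame with audio from the same moment, a negative pairs it with audio from a different video. A minimal sketch, assuming per-timestep aligned frames and audio clips:

```python
import random

def make_avc_pairs(videos):
    """videos: list of (frames, audio) sequences aligned per time step.
    Returns (frame, audio_clip, label) triples; needs at least two videos."""
    pairs = []
    for i, (frames, audio) in enumerate(videos):
        t = random.randrange(len(frames))
        pairs.append((frames[t], audio[t], 1))   # corresponding pair
        j = random.choice([k for k in range(len(videos)) if k != i])
        other_frames, other_audio = videos[j]
        pairs.append((frames[t], random.choice(other_audio), 0))  # mismatched pair
    return pairs
```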