Webly Supervised Learning of Convolutional Networks
We present an approach to utilize large amounts of web data for learning
CNNs. Specifically, inspired by curriculum learning, we present a two-step
approach for CNN training. First, we use easy images to train an initial visual
representation. We then use this initial CNN and adapt it to harder, more
realistic images by leveraging the structure of data and categories. We
demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on
ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly
supervised learning by localizing objects in web images and training an R-CNN-style
detector. It achieves the best performance on VOC 2007 among methods that use no
VOC training data. Finally, we show our approach is quite robust to noise
and performs comparably even when we use image search results from March 2013
(the pre-CNN image search era).
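The two-step schedule above can be sketched as follows. This is a minimal, illustrative stand-in: `difficulty`, `train`, and the dictionary "weights" are hypothetical placeholders for the paper's actual CNN training and easy/hard image ranking, not its implementation.

```python
# Sketch of a two-stage curriculum schedule (hypothetical helper names).
# Stage 1 trains on "easy" examples only; stage 2 continues from those
# weights on the full, harder set.

def difficulty(example):
    # Placeholder difficulty score: here, just the example's noise field.
    return example["noise"]

def train(examples, weights=None):
    # Stand-in for CNN training: counts how many examples shaped the model.
    weights = dict(weights or {"seen": 0})
    weights["seen"] += len(examples)
    return weights

def two_stage_train(examples, easy_fraction=0.5):
    ordered = sorted(examples, key=difficulty)
    cutoff = int(len(ordered) * easy_fraction)
    stage1 = train(ordered[:cutoff])         # easy images first
    stage2 = train(ordered, weights=stage1)  # then adapt to all images
    return stage2

data = [{"noise": n} for n in (0.9, 0.1, 0.5, 0.3)]
model = two_stage_train(data)
print(model["seen"])  # 2 easy + 4 total = 6
```

The key design point is that stage 2 starts from stage 1's weights rather than from scratch, so the easy-image representation anchors the adaptation to harder data.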
Learning Finer-class Networks for Universal Representations
Many real-world visual recognition use cases cannot directly benefit from
state-of-the-art CNN-based approaches due to the scarcity of annotated data.
The usual way to deal with this is to transfer a representation pre-learned on
a large annotated source task to the target task of interest. This raises the
question of how "universal" the original representation is, that is to say,
how directly it adapts to many different target tasks. To improve such
universality, the state of the art trains networks on a diversified source
problem, modified by adding either generic or specific categories to the
initial set of categories. In this vein, we propose a method that exploits
finer classes than the most specific existing ones, for which no annotation is
available. We rely on unsupervised learning and a
bottom-up split-and-merge strategy. We show that our method learns more
universal representations than the state of the art, leading to significantly
better results on 10 target tasks from multiple domains, using several network
architectures, either alone or combined with networks learned at a coarser
semantic level.
Comment: British Machine Vision Conference (BMVC) 201
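A bottom-up split-and-merge pass can be sketched on toy 1-D features. This is illustrative only: the paper clusters learned CNN features without annotations, whereas `split` here is a tiny 2-means on floats and `merge` recombines clusters whose centroids ended up too close, both hypothetical simplifications.

```python
# Toy sketch of a bottom-up split-and-merge over 1-D features
# (illustrative; the paper operates on unlabeled learned CNN features).

def split(values, iters=10):
    # 2-means on a list of floats: split one coarse class into two finer ones.
    c0, c1 = min(values), max(values)
    a, b = [], []
    for _ in range(iters):
        a = [v for v in values if abs(v - c0) <= abs(v - c1)]
        b = [v for v in values if abs(v - c0) > abs(v - c1)]
        if a:
            c0 = sum(a) / len(a)
        if b:
            c1 = sum(b) / len(b)
    return [a, b]

def merge(clusters, min_gap=1.0):
    # Merge finer clusters whose centroids are closer than min_gap.
    merged = []
    for cl in clusters:
        if not cl:
            continue
        centroid = sum(cl) / len(cl)
        for other in merged:
            if abs(centroid - sum(other) / len(other)) < min_gap:
                other.extend(cl)
                break
        else:
            merged.append(list(cl))
    return merged

coarse_class = [0.1, 0.2, 0.15, 5.0, 5.2, 5.1]
finer = merge(split(coarse_class))
print(len(finer))  # two finer sub-classes survive the merge
```

The merge step is what keeps the split honest: if the unsupervised split produced sub-classes that are not actually distinct, they collapse back into one.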
Channel-Wise Contrastive Learning for Learning with Noisy Labels
In real-world datasets, noisy labels are pervasive. The challenge of learning
with noisy labels (LNL) is to train a classifier that discerns the actual
classes from given instances. For this, the model must identify features
indicative of the authentic labels. While research indicates that genuine label
information is embedded in the learned features of even inaccurately labeled
data, it is often intertwined with noise, complicating its direct application.
Addressing this, we introduce channel-wise contrastive learning (CWCL). This
method distinguishes authentic label information from noise by undertaking
contrastive learning across diverse channels. Unlike conventional instance-wise
contrastive learning (IWCL), CWCL tends to yield more nuanced and resilient
features aligned with the authentic labels. Our strategy is twofold: firstly,
using CWCL to extract pertinent features to identify cleanly labeled samples,
and secondly, progressively fine-tuning using these samples. Evaluations on
several benchmark datasets validate our method's superiority over existing
approaches.
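The channel-wise versus instance-wise distinction can be sketched with a standard InfoNCE loss applied along columns (channels) of a feature matrix instead of rows (instances). This is a toy illustration under assumed data, not the paper's CWCL formulation; `features_view1`/`features_view2` are hypothetical two-view features.

```python
import math

# Toy contrast along channels instead of instances (illustrative;
# the paper's CWCL operates on learned CNN feature maps).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.5):
    # Standard InfoNCE: pull the positive view close, push negatives away.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

def channels(matrix):
    # Transpose: rows are instances, columns are channels.
    return [list(col) for col in zip(*matrix)]

# Two augmented views of the same two instances, three channels each.
features_view1 = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]]
features_view2 = [[0.9, 0.1, 0.4], [0.1, 0.9, 0.6]]

ch1, ch2 = channels(features_view1), channels(features_view2)
# Channel 0 of view 1 vs its counterpart in view 2 (positive) and the
# remaining channels of view 2 (negatives).
loss = info_nce(ch1[0], ch2[0], ch2[1:])
```

Pairing each channel with its counterpart across views, rather than each instance with its augmentation, is the switch from IWCL to a channel-wise contrast.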
Leveraging Crowdsourcing Data For Deep Active Learning - An Application: Learning Intents in Alexa
This paper presents a generic Bayesian framework that enables any deep
learning model to actively learn from targeted crowds. Our framework inherits
from recent advances in Bayesian deep learning, and extends existing work by
considering the targeted crowdsourcing approach, where multiple annotators with
unknown expertise contribute an uncontrolled amount (often limited) of
annotations. Our framework leverages the low-rank structure in annotations to
learn individual annotator expertise, which then helps to infer the true labels
from noisy and sparse annotations. It provides a unified Bayesian model to
simultaneously infer the true labels and train the deep learning model in order
to reach an optimal learning efficacy. Finally, our framework exploits the
uncertainty of the deep learning model during prediction as well as the
annotators' estimated expertise to minimize the number of required annotations
and annotators for optimally training the deep learning model.
We evaluate the effectiveness of our framework for intent classification in
Alexa (Amazon's personal assistant), using both synthetic and real-world
datasets. Experiments show that our framework can accurately learn annotator
expertise, infer true labels, and effectively reduce the amount of annotations
in model training as compared to state-of-the-art approaches. We further
discuss the potential of our proposed framework in bridging machine learning
and crowdsourcing towards improved human-in-the-loop systems.
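The core idea of jointly estimating annotator expertise and true labels can be sketched as a one-step, Dawid-Skene-style weighted vote. This is a drastic simplification of the paper's Bayesian model: the `(annotator, item, label)` triples and intent names are invented for illustration.

```python
from collections import Counter

# Toy sketch: estimate annotator accuracy from agreement with an initial
# consensus, then re-infer labels weighting each vote by that accuracy.

def majority_vote(annotations, item):
    votes = [lab for (ann, it, lab) in annotations if it == item]
    return Counter(votes).most_common(1)[0][0]

def annotator_accuracy(annotations, items):
    consensus = {it: majority_vote(annotations, it) for it in items}
    acc = {}
    for ann in {a for (a, _, _) in annotations}:
        labeled = [(it, lab) for (a, it, lab) in annotations if a == ann]
        agree = sum(1 for it, lab in labeled if consensus[it] == lab)
        acc[ann] = agree / len(labeled)
    return acc

def infer_labels(annotations, items):
    acc = annotator_accuracy(annotations, items)
    labels = {}
    for it in items:
        weights = Counter()
        for ann, item, lab in annotations:
            if item == it:
                weights[lab] += acc[ann]   # expert votes count for more
        labels[it] = weights.most_common(1)[0][0]
    return labels

# Hypothetical (annotator, item, label) triples; "bob" is unreliable.
annotations = [
    ("ann", 1, "play_music"), ("bob", 1, "play_music"),
    ("ann", 2, "set_alarm"),  ("bob", 2, "play_music"),
    ("cat", 2, "set_alarm"),
    ("ann", 3, "set_alarm"),  ("bob", 3, "play_music"),
    ("cat", 3, "set_alarm"),
]
labels = infer_labels(annotations, {1, 2, 3})
```

The full framework iterates this coupling (labels refine expertise estimates, expertise refines labels) inside a Bayesian model; one weighted pass already shows why sparse annotations from unreliable annotators get discounted.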
Detecting Sarcasm in Multimodal Social Platforms
Sarcasm is a peculiar form of sentiment expression, where the surface
sentiment differs from the implied sentiment. The detection of sarcasm in
social media platforms has been applied in the past mainly to textual
utterances where lexical indicators (such as interjections and intensifiers),
linguistic markers, and contextual information (such as user profiles, or past
conversations) were used to detect the sarcastic tone. However, modern social
media platforms allow users to create multimodal messages where audiovisual
content is integrated with the text, making the analysis of any single
modality in isolation incomplete. In our work, we first study the relationship
between the textual and
visual aspects in multimodal posts from three major social media platforms,
i.e., Instagram, Tumblr and Twitter, and we run a crowdsourcing task to
quantify the extent to which images are perceived as necessary by human
annotators. Moreover, we propose two different computational frameworks to
detect sarcasm that integrate the textual and visual modalities. The first
approach exploits visual semantics trained on an external dataset, and
concatenates the semantics features with state-of-the-art textual features. The
second method adapts a visual neural network initialized with parameters
trained on ImageNet to multimodal sarcastic posts. Results show the positive
effect of combining modalities for the detection of sarcasm across platforms
and methods.
Comment: 10 pages, 3 figures, final version published in the Proceedings of
ACM Multimedia 201
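The first fusion strategy, concatenating visual semantics with textual features before classification, can be sketched in a few lines. All feature extractors and field names here are hypothetical placeholders, not the paper's actual features.

```python
# Minimal sketch of concatenation-based multimodal fusion
# (hypothetical feature extractors and field names).

def extract_text_features(post):
    # Placeholder: e.g. interjection and intensifier counts.
    return [post["exclamations"], post["intensifiers"]]

def extract_visual_features(post):
    # Placeholder for visual semantics from an externally trained model.
    return post["visual_semantics"]

def fused_features(post):
    # Concatenate the two modalities into one vector for a classifier.
    return extract_text_features(post) + extract_visual_features(post)

post = {"exclamations": 2, "intensifiers": 1, "visual_semantics": [0.7, 0.1]}
x = fused_features(post)
print(len(x))  # 2 textual + 2 visual dimensions
```

Concatenation keeps the modalities' features intact and leaves the classifier to learn their interaction, in contrast to the second method, which adapts the visual network itself.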
Feedback-prop: Convolutional Neural Network Inference under Partial Evidence
We propose an inference procedure for deep convolutional neural networks
(CNNs) when partial evidence is available. Our method consists of a general
feedback-based propagation approach (feedback-prop) that boosts the prediction
accuracy for an arbitrary set of unknown target labels when the values for a
non-overlapping arbitrary set of target labels are known. We show that existing
models trained in a multi-label or multi-task setting can readily take
advantage of feedback-prop without any retraining or fine-tuning. Our
feedback-prop inference procedure is general, simple, reliable, and works on
different challenging visual recognition tasks. We present two variants of
feedback-prop based on layer-wise and residual iterative updates. We experiment
using several multi-task models and show that feedback-prop is effective in all
of them. Our results unveil a previously unreported but interesting dynamic
property of deep CNNs. We also present an associated technical approach that
takes advantage of this property for inference under partial evidence in
general visual recognition tasks.
Comment: Accepted to CVPR 201
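The feedback-prop idea can be illustrated with a linear stand-in for a network layer: iteratively update an intermediate activation until the outputs match the known labels, then read off the unknown ones. The weight matrix, learning rate, and label indices below are toy assumptions, not the paper's models.

```python
# Toy feedback-prop sketch: refine an intermediate activation h so the
# outputs agree with the known labels, then predict the unknown label
# (a linear stand-in for the layer-wise variant).

W = [[1.0, 0.0],   # label 0 depends on h[0]
     [0.0, 1.0],   # label 1 depends on h[1]
     [0.5, 0.5]]   # label 2 depends on both

def forward(h):
    return [sum(w * x for w, x in zip(row, h)) for row in W]

def feedback_prop(known, h=None, lr=0.1, steps=200):
    # known: {label_index: observed value}; h is the activation refined.
    h = list(h or [0.0, 0.0])
    for _ in range(steps):
        out = forward(h)
        for i, target in known.items():
            err = out[i] - target
            for j in range(len(h)):
                h[j] -= lr * err * W[i][j]   # gradient of squared error
    return forward(h)

# Labels 0 and 1 are observed; label 2 is inferred from the refined h.
out = feedback_prop({0: 1.0, 1: 1.0})
```

No weights are retrained: only the activation moves, which is why existing multi-label models can use this inference procedure as-is.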