Image classification by visual bag-of-words refinement and reduction
This paper presents a new framework for visual bag-of-words (BOW) refinement
and reduction to overcome the drawbacks associated with the visual BOW model
which has been widely used for image classification. Although very influential
in the literature, the traditional visual BOW model has two distinct drawbacks.
Firstly, for efficiency purposes, the visual vocabulary is commonly constructed
by directly clustering the low-level visual feature vectors extracted from
local keypoints, without considering the high-level semantics of images. That
is, the visual BOW model still suffers from the semantic gap, and thus may lead
to significant performance degradation in more challenging tasks (e.g. social
image classification). Secondly, typically thousands of visual words are
generated to obtain better performance on a relatively large image dataset. Due
to such a large vocabulary size, the subsequent image classification may take
an excessive amount of time. To overcome the first drawback, we develop a graph-based
method for visual BOW refinement by exploiting the tags (easy to access
although noisy) of social images. Moreover, for efficient image
classification, we further reduce the refined visual BOW model to a much
smaller size through semantic spectral clustering. Extensive experimental
results show the promising performance of the proposed framework for visual BOW
refinement and reduction.
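For concreteness, a minimal sketch of the vocabulary-reduction step might look like the following; the precomputed word-word affinity matrix (e.g. derived from tag-based semantics) and the target vocabulary size are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch (not the authors' code): reduce a visual vocabulary by
# spectral clustering over an assumed word-word semantic affinity matrix,
# then merge the BOW counts of words that fall into the same cluster.
import numpy as np
from sklearn.cluster import SpectralClustering

def reduce_bow(bow_hists, word_affinity, reduced_size=200):
    """bow_hists: (n_images, n_words); word_affinity: (n_words, n_words)."""
    sc = SpectralClustering(n_clusters=reduced_size, affinity="precomputed",
                            assign_labels="kmeans", random_state=0)
    word_labels = sc.fit_predict(word_affinity)

    # Sum the counts of all visual words assigned to the same cluster.
    reduced = np.zeros((bow_hists.shape[0], reduced_size))
    for w, c in enumerate(word_labels):
        reduced[:, c] += bow_hists[:, w]
    return reduced
```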
Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation
Image annotation aims to annotate a given image with a variable number of
class labels corresponding to diverse visual concepts. In this paper, we
address two main issues in large-scale image annotation: 1) how to learn a rich
feature representation suitable for predicting a diverse set of visual concepts
ranging from objects and scenes to abstract concepts; 2) how to annotate an image
with the optimal number of class labels. To address the first issue, we propose
a novel multi-scale deep model for extracting rich and discriminative features
capable of representing a wide range of visual concepts. Specifically, a novel
two-branch deep neural network architecture is proposed which comprises a very
deep main network branch and a companion feature fusion network branch designed
for fusing the multi-scale features computed from the main branch. The deep
model is also made multi-modal by taking noisy user-provided tags as model
input to complement the image input. For tackling the second issue, we
introduce a label quantity prediction auxiliary task to the main label
prediction task to explicitly estimate the optimal label number for a given
image. Extensive experiments are carried out on two large-scale image
annotation benchmark datasets and the results show that our method
significantly outperforms the state-of-the-art.
Comment: Submitted to IEEE TI
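To illustrate the overall shape of such a model, here is a minimal PyTorch sketch (assumptions only, not the paper's architecture or layer sizes) of a two-branch network with a tag-embedding input, a label-score head, and a label-quantity head.

```python
# Minimal sketch: a main branch yields multi-scale features, a companion
# branch fuses them, noisy tags are embedded as a second modality, and two
# heads predict per-label scores and the label count (all sizes assumed).
import torch
import torch.nn as nn

class TwoBranchAnnotator(nn.Module):
    def __init__(self, num_labels=1000, tag_vocab=5000, dim=256):
        super().__init__()
        # Main branch: three conv stages giving feature maps at three scales.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(128, 256, 3, 2, 1), nn.ReLU())
        # Companion fusion branch: project each scale to a common dim and sum.
        self.proj = nn.ModuleList([nn.Linear(c, dim) for c in (64, 128, 256)])
        # Tag modality: bag-of-tags embedding.
        self.tag_embed = nn.EmbeddingBag(tag_vocab, dim, mode="mean")
        # Heads: multi-label scores and label-quantity regression.
        self.label_head = nn.Linear(2 * dim, num_labels)
        self.count_head = nn.Linear(2 * dim, 1)

    def forward(self, image, tag_ids, tag_offsets):
        feats, x = [], image
        for stage, proj in zip((self.stage1, self.stage2, self.stage3), self.proj):
            x = stage(x)
            feats.append(proj(x.mean(dim=(2, 3))))  # pool each scale globally
        visual = torch.stack(feats, dim=0).sum(dim=0)
        tags = self.tag_embed(tag_ids, tag_offsets)
        fused = torch.cat([visual, tags], dim=-1)
        return self.label_head(fused), self.count_head(fused).squeeze(-1)
```

At inference, the predicted label count would determine how many of the top-scoring labels to keep for a given image.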
Improvable Gap Balancing for Multi-Task Learning
In multi-task learning (MTL), gradient balancing has recently attracted more
research interest than loss balancing since it often leads to better
performance. However, loss balancing is much more efficient than gradient
balancing, and thus it is still worth further exploration in MTL. Note that
prior studies typically ignore that there exist varying improvable gaps across
multiple tasks, where the improvable gap per task is defined as the distance
between the current training progress and the desired final training progress.
Therefore, after loss balancing, the performance imbalance still arises in many
cases. In this paper, following the loss balancing framework, we propose two
novel improvable gap balancing (IGB) algorithms for MTL: one takes a simple
heuristic, and the other (for the first time) deploys deep reinforcement
learning for MTL. Particularly, instead of directly balancing the losses in
MTL, both algorithms choose to dynamically assign task weights for improvable
gap balancing. Moreover, we combine IGB and gradient balancing to show the
complementarity between the two types of algorithms. Extensive experiments on
two benchmark datasets demonstrate that our IGB algorithms lead to the best
results in MTL via loss balancing and achieve further improvements when
combined with gradient balancing. Code is available at
https://github.com/YanqiDai/IGB4MTL.
Comment: Accepted for the 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023).
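As a rough illustration of the idea (not the paper's exact IGB algorithms), one could weight the per-task losses by a normalised improvable gap, taken here as the distance between a task's current loss and an estimate of its achievable final loss.

```python
# Illustrative heuristic only (assumed form of the gap, not the paper's
# algorithm): tasks with larger improvable gaps receive larger loss weights.
import torch

def igb_weights(current_losses, final_loss_estimates, eps=1e-8):
    """current_losses, final_loss_estimates: 1-D tensors, one entry per task."""
    gaps = (current_losses - final_loss_estimates).clamp(min=0.0)
    weights = gaps / (gaps.sum() + eps)        # larger gap -> larger weight
    return weights * len(current_losses)       # keep total scale ~ num_tasks

# Usage in a training step (task_losses: list of scalar tensors,
# final_estimates: hypothetical per-task estimates of achievable loss):
# w = igb_weights(torch.stack([l.detach() for l in task_losses]), final_estimates)
# total_loss = (w * torch.stack(task_losses)).sum()
```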
LGDN: Language-Guided Denoising Network for Video-Language Modeling
Video-language modeling has attracted much attention with the rapid growth of
web videos. Most existing methods assume that the video frames and text
description are semantically correlated, and focus on video-language modeling
at video level. However, this hypothesis often fails for two reasons: (1) With
the rich semantics of video contents, it is difficult to cover all frames with
a single video-level description; (2) A raw video typically has
noisy/meaningless information (e.g., scenery shot, transition or teaser).
Although a number of recent works deploy attention mechanisms to alleviate this
problem, the irrelevant/noisy information still makes it very difficult to
address. To overcome this challenge, we propose an efficient and effective
model, termed Language-Guided Denoising Network (LGDN), for video-language
modeling. Different from most existing methods that utilize all extracted video
frames, LGDN dynamically filters out the misaligned or redundant frames under
the language supervision and obtains only 2--4 salient frames per video for
cross-modal token-level alignment. Extensive experiments on five public
datasets show that our LGDN outperforms the state-of-the-art methods by large margins.
We also provide a detailed ablation study to reveal the critical importance of
solving the noise issue, in the hope of inspiring future video-language work.
Comment: Accepted by NeurIPS202
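A minimal sketch of language-guided frame selection (an assumed interface, not the released LGDN code) could score each frame against the sentence embedding and keep only the top few salient frames for token-level alignment.

```python
# Sketch under assumptions: frame and text embeddings already computed by
# some encoder; keep the k frames most similar to the text.
import torch
import torch.nn.functional as F

def select_salient_frames(frame_embs, text_emb, k=4):
    """frame_embs: (num_frames, dim); text_emb: (dim,). Returns (k, dim), indices."""
    sims = F.cosine_similarity(frame_embs, text_emb.unsqueeze(0), dim=-1)
    topk = sims.topk(min(k, frame_embs.size(0)))
    return frame_embs[topk.indices], topk.indices
```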