Joint Intermodal and Intramodal Label Transfers for Extremely Rare or Unseen Classes
In this paper, we present a label transfer model from texts to images for
image classification tasks. Image classification is often much more
challenging than text classification. On one hand, labeled text data is more
widely available than labeled images for classification tasks. On the other
hand, text data tends to have natural semantic interpretability and is often
more directly related to class labels. In contrast, image features are not
directly related to the concepts inherent in class labels. One of our goals in
this paper is to develop a model that reveals the functional relationships
between text and image features so as to directly transfer intermodal and
intramodal labels to annotate images. This is implemented by
learning a transfer function as a bridge to propagate the labels between two
multimodal spaces. However, the intermodal label transfers could be undermined
by blindly transferring the labels of noisy texts to annotate images. To
mitigate this problem, we present an intramodal label transfer process, which
complements the intermodal label transfer by transferring the image labels
instead when relevant text is absent from the source corpus. In addition, we
generalize the intermodal label transfer to the zero-shot learning scenario,
in which only text examples are available to label unseen classes of images,
without any positive image examples. We evaluate our algorithm on an image
classification task and show its effectiveness compared with the other
algorithms.
Comment: The paper has been accepted by IEEE Transactions on Pattern Analysis
and Machine Intelligence. It will appear in a future issue.
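The core idea of the abstract can be illustrated with a minimal sketch. Assuming (hypothetically) that the transfer function is a linear map from text-feature space to image-feature space learned by ridge regression on paired text/image examples, intermodal label transfer then amounts to labeling each query image with the label of its nearest mapped text; the paper's actual formulation and the noise-handling intramodal step are not reproduced here.

```python
import numpy as np

# Hypothetical toy data: text features with labels, plus paired and query images.
rng = np.random.default_rng(0)
d_t, d_i = 20, 30
X_text = rng.normal(size=(50, d_t))          # labeled text features
y_text = rng.integers(0, 3, size=50)         # text labels (3 classes)
X_img_paired = rng.normal(size=(50, d_i))    # images paired with the texts
X_img_query = rng.normal(size=(10, d_i))     # unlabeled images to annotate

# Learn a linear transfer function W: text space -> image space (ridge regression).
lam = 1.0
W = np.linalg.solve(X_text.T @ X_text + lam * np.eye(d_t),
                    X_text.T @ X_img_paired)

# Intermodal transfer: map texts into image space and label each query
# image with the label of its nearest mapped text.
mapped = X_text @ W                                            # (50, d_i)
dists = np.linalg.norm(X_img_query[:, None, :] - mapped[None, :, :], axis=2)
labels = y_text[np.argmin(dists, axis=1)]
print(labels.shape)  # (10,)
```

In the zero-shot setting described above, the same mapped-text prototypes can stand in for unseen classes that have no positive image examples.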
Structure propagation for zero-shot learning
The key to zero-shot learning (ZSL) is finding an information transfer model
that bridges the gap between images and semantic information (texts or
attributes). Existing ZSL methods usually construct the compatibility function
between images and class labels while considering the relevance of the
semantic classes (the manifold structure of semantic classes). However, the
relationship among image classes (the manifold structure of image classes) is
also very important for constructing the compatibility model. Because of the
unseen classes, it is difficult to capture the relationship among image
classes, so the manifold structure of image classes is often ignored in ZSL.
To let the manifold structure of image classes and that of semantic classes
complement each other, we propose structure propagation (SP) to improve ZSL
classification performance. SP jointly considers the manifold structure of
image classes and that of semantic classes to approximate the intrinsic
structure of object classes. Moreover, SP can describe the constraint
condition between the compatibility function and these manifold structures,
balancing the influence of the structure propagation iterations. The SP
solution provides not only unseen class labels but also the relationship
between the two manifold structures that encodes the positive transfer in
structure propagation. Experimental results demonstrate that SP attains
promising results on the AwA, CUB, Dogs and SUN databases.
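A minimal sketch of the propagation idea, under simplifying assumptions not taken from the paper: two class-level affinity matrices (one from the image manifold, one from the semantic manifold) are blended and used in a standard iterative label-propagation update, so that seen-class labels spread to unseen classes through both structures.

```python
import numpy as np

def row_normalize(A):
    """Normalize each row of a nonnegative affinity matrix to sum to 1."""
    return A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n = 8  # toy class set: first 5 classes seen, last 3 unseen

# Hypothetical class affinities from the two manifolds.
S_img = row_normalize(np.abs(rng.normal(size=(n, n))) + np.eye(n))
S_sem = row_normalize(np.abs(rng.normal(size=(n, n))) + np.eye(n))
S = 0.5 * S_img + 0.5 * S_sem   # jointly consider both structures

# Initial label scores: one-hot rows for seen classes, zeros for unseen.
F = np.zeros((n, n))
F[:5] = np.eye(n)[:5]
Y = F.copy()

alpha = 0.8
for _ in range(50):  # propagate scores through the blended structure
    F = alpha * S @ F + (1 - alpha) * Y

unseen_scores = F[5:]   # propagated label scores for the 3 unseen classes
print(unseen_scores.shape)  # (3, 8)
```

The equal 0.5/0.5 blend is arbitrary here; the paper's constraint between the compatibility function and the manifold structures is what actually balances the two terms.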
Adaptive Locality Preserving Regression
This paper proposes a novel discriminative regression method, called adaptive
locality preserving regression (ALPR) for classification. In particular, ALPR
aims to learn a more flexible and discriminative projection that not only
preserves the intrinsic structure of data, but also possesses the properties of
feature selection and interpretability. To this end, we introduce a target
learning technique to adaptively learn a more discriminative and flexible
target matrix rather than the pre-defined strict zero-one label matrix for
regression. Then a locality preserving constraint, regularized by the
adaptively learned weights, is further introduced to guide the projection
learning, which is beneficial for learning a more discriminative projection
and avoiding overfitting.
Moreover, we replace the conventional Frobenius norm with the l2,1
norm to constrain the projection, which enables the method to adaptively
select the most important features from the original high-dimensional data for
feature extraction. In this way, the negative influence of redundant features
and noise residing in the original data can be greatly reduced. Besides, the
proposed method has good interpretability for features owing to the
row-sparsity property of the l2,1 norm. Extensive experiments conducted on a
synthetic database with manifold structure and many real-world databases prove
the effectiveness of the proposed method.
Comment: The paper has been accepted by IEEE Transactions on Circuits and
Systems for Video Technology (TCSVT), and the code is available at
https://drive.google.com/file/d/1iNzONkRByIaUhXwdEhOkkh_0d2AAXNE8/vie
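To make the row-sparsity mechanism concrete, here is a hedged sketch, not the ALPR algorithm itself: a plain l2,1-regularized regression toward fixed zero-one targets (ALPR instead learns a flexible target matrix adaptively), solved by the standard iteratively reweighted least-squares trick in which the l2,1 regularizer becomes a diagonal reweighting. Rows of the projection with small norm correspond to features the model suppresses.

```python
import numpy as np

def l21_norm(W):
    """Sum of the l2 norms of the rows of W."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

rng = np.random.default_rng(2)
n, d, c = 100, 15, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, c, size=n)
T = np.eye(c)[y]   # strict zero-one targets (ALPR would relax these adaptively)

lam, eps = 1.0, 1e-8
# Warm-start with a ridge solution so row norms are nonzero.
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ T)

for _ in range(30):
    # IRLS step: the l2,1 penalty acts as a diagonal reweighting D.
    D = np.diag(1.0 / (2.0 * np.sqrt(np.sum(W ** 2, axis=1)) + eps))
    W = np.linalg.solve(X.T @ X + lam * D, X.T @ T)

row_norms = np.sqrt(np.sum(W ** 2, axis=1))
print(row_norms.shape)  # (15,); small-norm rows mark less important features
```

Ranking features by `row_norms` is what gives the interpretability the abstract refers to: discarded features have rows driven toward zero.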