Improving Semantic Embedding Consistency by Metric Learning for Zero-Shot Classification
This paper addresses the task of zero-shot image classification. The key
contribution of the proposed approach is to control the semantic embedding of
images -- one of the main ingredients of zero-shot learning -- by formulating
it as a metric learning problem. The optimized empirical criterion associates
two types of sub-task constraints: metric discriminating capacity and accurate
attribute prediction. This results in a novel expression of zero-shot learning
not requiring the notion of class in the training phase: only pairs of
image/attributes, augmented with a consistency indicator, are given as ground
truth. At test time, the learned model can predict the consistency of a test
image with a given set of attributes, allowing flexible ways to produce
recognition inferences. Despite its simplicity, the proposed approach gives
state-of-the-art results on four challenging datasets used for zero-shot
recognition evaluation.
Comment: in ECCV 2016, Oct 2016, Amsterdam, Netherlands. 201
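The consistency-based inference described above can be sketched in a toy form. This is a hypothetical illustration, not the paper's exact criterion: the data, dimensions, learning rate, and the hinge-style pairwise loss are all assumptions. The idea it shows is the one in the abstract: training uses only image/attribute pairs with a consistency indicator (no class notion), and at test time an image is scored for consistency against candidate attribute signatures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 3 classes in a 5-D feature space, each with
# a 4-D binary attribute signature (all values are illustrative).
D, A = 5, 4
attr = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]], dtype=float)
proto = rng.normal(size=(3, D))

def sample(c, n=30):
    return proto[c] + 0.1 * rng.normal(size=(n, D))

X = np.vstack([sample(c) for c in range(3)])
y = np.repeat(np.arange(3), 30)

def consistency(W, x, a):
    # Negative squared distance between the embedded image W @ x and
    # the attribute vector a: higher means "more consistent".
    return -np.sum((W @ x - a) ** 2)

# Pairwise training: consistent pairs (indicator +1) pull the embedding
# toward the attribute vector; inconsistent pairs (indicator -1) push it
# away with a hinge-style margin. No class label is used directly.
W = 0.01 * rng.normal(size=(A, D))
margin, lr = 1.0, 0.01
for _ in range(200):
    for x, c in zip(X, y):
        diff = W @ x - attr[c]          # consistent pair: reduce distance
        W -= lr * 2 * np.outer(diff, x)
        c_neg = (c + 1) % 3             # inconsistent pair for illustration
        diff_n = W @ x - attr[c_neg]
        if np.sum(diff_n ** 2) < margin:  # only push while inside the margin
            W += lr * 2 * np.outer(diff_n, x)

def predict(W, x, candidate_attrs):
    # Flexible inference: pick the candidate attribute signature the
    # test image is most consistent with.
    scores = [consistency(W, x, a) for a in candidate_attrs]
    return int(np.argmax(scores))
```

At test time the candidate set can contain attribute signatures of classes never seen during training, which is what makes the consistency formulation usable for zero-shot recognition.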
Multi-Label Learning with Label Enhancement
The task of multi-label learning is to predict a set of relevant labels for
an unseen instance. Traditional multi-label learning algorithms treat each
class label as a logical indicator of whether the corresponding label is
relevant or irrelevant to the instance, i.e., +1 represents relevant and -1
represents irrelevant. Such a label, represented by -1 or +1, is called a
logical label. Logical labels cannot reflect differences in label importance.
However, for real-world multi-label learning problems, the importance of each
possible label is generally different. In real applications, it is difficult
to obtain the label importance information directly. Thus, we need a method
to reconstruct the essential label importance from the logical multi-label
data. To solve this problem, we assume that each multi-label instance is
described by a vector of latent real-valued labels, which can reflect the
importance of the corresponding labels. Such labels are called numerical
labels. The process of reconstructing the numerical labels from the logical
multi-label data by utilizing the logical label information and the
topological structure in the feature space is called Label Enhancement. In
this paper, we propose a novel multi-label learning framework called LEMLL,
i.e., Label Enhanced Multi-Label Learning, which incorporates regression of
the numerical labels and label enhancement into a unified framework.
Extensive comparative studies validate that the performance of multi-label
learning can be improved significantly with label enhancement and that LEMLL
can effectively reconstruct latent label importance information from logical
multi-label data.
Comment: ICDM 201
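The core idea of label enhancement, turning logical ±1 labels into graded numerical labels using the topological structure of the feature space, can be sketched minimally. This is a hypothetical illustration, not LEMLL itself: the k-nearest-neighbour smoothing pass and all parameters below are assumptions standing in for the paper's joint optimisation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: 20 points in 2-D with two mutually exclusive
# logical labels in {-1, +1}, decided by the sign of the first feature.
X = rng.normal(size=(20, 2))
L = np.where(X[:, 0] > 0, 1.0, -1.0).reshape(-1, 1)
L = np.hstack([L, -L])  # shape (n_instances, n_labels)

def enhance(X, L, k=5, alpha=0.5):
    """One smoothing pass: blend each instance's logical labels with the
    mean labels of its k nearest neighbours in feature space, so that
    instances near a class boundary get softer (less important) labels."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                           # exclude self
    nbrs = np.argsort(d, axis=1)[:, :k]                   # k nearest neighbours
    neigh_mean = L[nbrs].mean(axis=1)                     # local label average
    return (1 - alpha) * L + alpha * neigh_mean           # numerical labels

U = enhance(X, L)
```

The output stays in [-1, 1]: an instance deep inside a class keeps a value near ±1, while one whose neighbours disagree gets a value nearer 0, which is the kind of graded importance signal the numerical labels are meant to carry.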
Semi-supervised Learning for Ordinal Kernel Discriminant Analysis
Ordinal classification considers those classification problems where the
labels of the variable to predict follow a given order. Naturally, labelled
data is scarce or difficult to obtain in this type of problem because, in
many cases, ordinal labels are given by a user or expert (e.g. in
recommendation systems). Firstly, this paper develops a new strategy for
ordinal classification where both labelled and unlabelled data are used in
the model construction step (a scheme which is referred to as semi-supervised
learning). More specifically, the ordinal version of kernel discriminant
learning is extended for this setting considering the neighbourhood
information of unlabelled data, which is proposed to be computed in the
feature space induced by the kernel function. Secondly, a new method for
semi-supervised kernel learning is devised in the context of ordinal
classification, which is combined with our developed classification strategy
to optimise the kernel parameters. The experiments conducted compare 6
different approaches for semi-supervised learning in the context of ordinal
classification on a battery of 30 datasets, showing 1) the good synergy of
the ordinal version of discriminant analysis and the use of unlabelled data
and 2) the advantage of computing distances in the feature space induced by
the kernel function.
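The second finding rests on a standard identity: the squared distance between two points in the feature space induced by a kernel k can be computed without ever constructing the feature map, since ||phi(x) - phi(z)||^2 = k(x, x) + k(z, z) - 2 k(x, z). A minimal sketch of that computation follows; the RBF kernel and the gamma value are illustrative assumptions, not necessarily the paper's choices.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """RBF kernel matrix k(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None] - Z[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def feature_space_dist(X, Z, gamma=1.0):
    """Squared distances in the kernel-induced feature space:
    ||phi(x) - phi(z)||^2 = k(x,x) + k(z,z) - 2 k(x,z).
    For the RBF kernel, k(x,x) = 1 for every x."""
    Kxx = np.ones(len(X))
    Kzz = np.ones(len(Z))
    return Kxx[:, None] + Kzz[None, :] - 2 * rbf(X, Z, gamma)
```

Nearest neighbours of an unlabelled point can then be found with `np.argsort` over a row of this matrix, so the neighbourhood information used by the semi-supervised extension lives in the same feature space as the discriminant itself.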