Efficient Decomposed Learning for Structured Prediction
Structured prediction is the cornerstone of several machine learning
applications. Unfortunately, in structured prediction settings with expressive
inter-variable interactions, exact inference-based learning algorithms, e.g.
Structural SVM, are often intractable. We present a new approach, Decomposed
Learning (DecL), which performs efficient learning by restricting the inference
step to a limited part of the structured output space. We provide characterizations
based on the structure, target parameters, and gold labels, under which DecL is
equivalent to exact learning. We then show that in real-world settings, where
our theoretical assumptions may not completely hold, DecL-based algorithms are
significantly more efficient than, and as accurate as, exact learning.
Comment: ICML 2012
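A minimal sketch of the core idea, assuming a chain-structured sequence model with unary and pairwise scores (the function names and perceptron-style setting are illustrative, not the authors' code): loss-augmented inference is restricted to labelings within Hamming distance k of the gold labeling, instead of searching the full exponential output space.

```python
import itertools
import numpy as np

def score(y, unary, pairwise):
    # Chain-structured score: per-position unary terms plus pairwise transitions.
    s = sum(unary[i, y[i]] for i in range(len(y)))
    return s + sum(pairwise[y[i], y[i + 1]] for i in range(len(y) - 1))

def decl_loss_augmented_argmax(gold, unary, pairwise, k=1):
    # DecL-style restricted inference: search only labelings that differ
    # from the gold labeling in at most k positions, not the full space.
    n, n_labels = unary.shape
    best, best_val = list(gold), -np.inf
    for positions in itertools.combinations(range(n), k):
        for labels in itertools.product(range(n_labels), repeat=k):
            y = list(gold)
            for p, l in zip(positions, labels):
                y[p] = l
            hamming = sum(a != b for a, b in zip(y, gold))  # loss augmentation
            val = score(y, unary, pairwise) + hamming
            if val > best_val:
                best, best_val = y, val
    return best

rng = np.random.default_rng(0)
unary, pairwise = rng.normal(size=(6, 3)), rng.normal(size=(3, 3))
print(decl_loss_augmented_argmax([0, 1, 2, 1, 0, 2], unary, pairwise, k=2))
```

A structured perceptron or structural SVM would call this restricted argmax in place of exact loss-augmented inference during training; the paper's characterizations describe when this substitution is equivalent to exact learning.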
Sub-Classifier Construction for Error Correcting Output Code Using Minimum Weight Perfect Matching
Multi-class classification is required by many real-world problems, and one
promising technique for it is the Error Correcting Output Code. We propose a
method for constructing the Error Correcting Output Code that finds a suitable
combination of positive and negative classes to encode the binary classifiers.
The minimum weight perfect matching algorithm is applied to find the optimal
pairs of class subsets, using generalization performance as the weighting
criterion. With our method, each
subset of classes with positive and negative labels is appropriately combined
for learning the binary classifiers. Experimental results show that our
technique gives significantly higher performance than traditional methods,
including the dense random code and the sparse random code, in terms of both
accuracy and classification time. Moreover, our method requires a significantly
smaller number of binary classifiers than One-Versus-One while maintaining
accuracy.
Comment: 7 pages, 3 figures
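The pairing step can be sketched with an off-the-shelf matching routine; here edge weights are cross-validated error estimates for each pairwise binary classifier, standing in for the paper's generalization-performance criterion (the SVC base learner and synthetic data are assumptions):

```python
import numpy as np
import networkx as nx
from itertools import combinations
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

def pair_classes(X, y, classes):
    # Complete graph over classes; each edge is weighted by an estimated
    # generalization error of the binary classifier separating the pair.
    G = nx.Graph()
    for a, b in combinations(classes, 2):
        mask = np.isin(y, [a, b])
        err = 1.0 - cross_val_score(SVC(), X[mask], y[mask], cv=3).mean()
        G.add_edge(a, b, weight=err)
    # Minimum weight perfect matching pairs every class with the partner
    # its binary classifier separates most reliably.
    return nx.min_weight_matching(G)

X, y = make_classification(n_samples=400, n_classes=4, n_informative=8,
                           random_state=0)
print(pair_classes(X, y, classes=range(4)))  # e.g. {(0, 2), (1, 3)}
```

The full construction would merge the matched pairs into larger positive/negative subsets and repeat the matching to fill out the code matrix.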
Large-scale Multi-label Learning with Missing Labels
The multi-label classification problem has generated significant interest in
recent years. However, existing approaches do not adequately address two key
challenges: (a) the ability to tackle problems with a large number (say
millions) of labels, and (b) the ability to handle data with missing labels. In
this paper, we directly address both these problems by studying the multi-label
problem in a generic empirical risk minimization (ERM) framework. Our
framework, despite its simplicity, surprisingly encompasses several recent
label-compression methods, which can be derived as special cases of our
method. To optimize the ERM problem, we develop techniques that exploit the
structure of specific loss functions, such as the squared loss, to
offer efficient algorithms. We further show that our learning framework admits
formal excess risk bounds even in the presence of missing labels. Our risk
bounds are tight and demonstrate better generalization for low-rank-promoting
trace-norm regularization than for (rank-insensitive)
Frobenius norm regularization. Finally, we present extensive empirical results
on a variety of benchmark datasets and show that our methods perform
significantly better than existing label compression based methods and can
scale up to very large datasets such as the Wikipedia dataset.
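A toy sketch of the ERM setup under the squared loss (illustrative, not the paper's optimized solver): parametrize the linear model as a low-rank factorization W = U Vᵀ, penalize the factors' Frobenius norms, which is the standard surrogate for trace-norm regularization of W, and take gradient steps that touch only the observed entries of the label matrix.

```python
import numpy as np

def fit_lowrank_multilabel(X, Y, observed, k=10, lam=0.1, lr=0.01, iters=200):
    # Low-rank linear model W = U @ V.T; the squared loss is computed only
    # on observed label entries (observed is a 0/1 mask the shape of Y).
    # Penalizing ||U||_F^2 + ||V||_F^2 promotes low rank of W, as a
    # surrogate for the trace norm.
    n, d = X.shape
    L = Y.shape[1]
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(d, k))
    V = rng.normal(scale=0.1, size=(L, k))
    for _ in range(iters):
        R = (X @ U @ V.T - Y) * observed       # residual, masked to observed
        U -= lr * (X.T @ (R @ V) / n + lam * U)
        V -= lr * (R.T @ (X @ U) / n + lam * V)
    return U, V

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))
Y = (X @ rng.normal(size=(30, 50)) > 0).astype(float)
mask = (rng.random(Y.shape) < 0.6).astype(float)   # 40% of labels missing
U, V = fit_lowrank_multilabel(X, Y, mask)
```

Alternating the two factor updates in this Gauss-Seidel fashion is a common simplification; the paper develops faster solvers that exploit the structure of the squared loss.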
Random forests with random projections of the output space for high dimensional multi-label classification
We adapt the idea of random projections applied to the output space, so as to
enhance tree-based ensemble methods in the context of multi-label
classification. We show how the time complexity of learning can be reduced
without affecting the computational complexity or accuracy of predictions. We
also show that random output-space projections can be used to reach different
bias-variance tradeoffs and that, over a broad panel of benchmark problems,
this may lead to improved accuracy while significantly reducing the
computational burden of the learning stage.
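A simplified compress-regress-decode sketch of the idea (the paper itself uses projections inside the tree-growing criterion, keeping leaves in the original label space; the helper name, projection size, and 0.5 threshold are assumptions):

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestRegressor
from sklearn.random_projection import GaussianRandomProjection

def rf_on_projected_outputs(X, Y, X_test, m=25, seed=0):
    # Randomly project the (n, L) label matrix down to m dimensions,
    # fit a forest on the compressed targets, then decode predictions
    # back to label space via the projection's pseudo-inverse.
    proj = GaussianRandomProjection(n_components=m, random_state=seed)
    Z = proj.fit_transform(Y)                           # (n, m) targets
    rf = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X, Z)
    P = proj.components_                                # (m, L)
    Y_scores = rf.predict(X_test) @ np.linalg.pinv(P).T
    return (Y_scores > 0.5).astype(int)

X, Y = make_multilabel_classification(n_samples=300, n_classes=50,
                                      n_labels=5, random_state=0)
preds = rf_on_projected_outputs(X[:250], Y[:250], X[250:])
print(preds.shape)  # (50, 50): 50 test rows, 50 labels
```

Training the forest on m-dimensional targets instead of L-dimensional ones is where the learning-stage savings come from; prediction cost and accuracy depend on how faithfully the random projection preserves the label-space geometry.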