Pareto-Path Multi-Task Multiple Kernel Learning
A traditional and intuitively appealing Multi-Task Multiple Kernel Learning
(MT-MKL) method is to optimize the sum (thus, the average) of objective
functions with a (partially) shared kernel function, which allows information
sharing amongst tasks. We point out that the obtained solution corresponds to a
single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO)
problem, which considers the concurrent optimization of all task objectives
involved in the Multi-Task Learning (MTL) problem. Motivated by this last
observation and arguing that the former approach is heuristic, we propose a
novel Support Vector Machine (SVM) MT-MKL framework that considers an
implicitly-defined set of conic combinations of task objectives. We show that
solving our framework produces solutions along a path on the aforementioned PF
and that it subsumes the optimization of the average of objective functions as
a special case. Using algorithms we derived, we demonstrate through a series of
experimental results that the framework is capable of achieving better
classification performance compared to other similar MTL approaches.
Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems
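A toy sketch of the scalarization idea behind this abstract (not the paper's SVM MT-MKL framework): minimizing a conic (nonnegative-weighted) combination of two convex task objectives yields one point on the Pareto front per weight choice, and sweeping the weights traces a path along it. The objectives and closed-form minimizer below are illustrative assumptions.

```python
# Two convex "task objectives" (illustrative stand-ins, not the paper's SVM losses).
def f1(x):  # task 1 objective, minimized at x = 0
    return x ** 2

def f2(x):  # task 2 objective, minimized at x = 1
    return (x - 1) ** 2

def argmin_conic(lam1, lam2):
    # Minimizer of lam1*f1(x) + lam2*f2(x), in closed form:
    # d/dx [lam1*x^2 + lam2*(x-1)^2] = 0  =>  x = lam2 / (lam1 + lam2)
    return lam2 / (lam1 + lam2)

# Sweeping the conic weights traces a path of Pareto-optimal solutions.
path = [argmin_conic(1.0 - t, t) for t in (0.25, 0.5, 0.75)]
print(path)                # [0.25, 0.5, 0.75]
print(argmin_conic(1, 1))  # 0.5 -- equal weights: the average-of-objectives case
```

Equal weights recover the traditional "optimize the average of the objectives" solution as a single point on that path, which is the special case the abstract says the framework subsumes.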
A pragmatic approach to multi-class classification
We present a novel hierarchical approach to multi-class classification which
is generic in that it can be applied to different classification models (e.g.,
support vector machines, perceptrons), and makes no explicit assumptions about
the probabilistic structure of the problem as it is usually done in multi-class
classification. By adding a cascade of additional classifiers, each of which
receives the previous classifier's output in addition to regular input data,
the approach harnesses unused information that manifests itself in the form of,
e.g., correlations between predicted classes. Using multilayer perceptrons as a
classification model, we demonstrate the validity of this approach by testing
it on a complex ten-class 3D gesture recognition task.
Comment: European Symposium on Artificial Neural Networks (ESANN), Apr 2015, Bruges, Belgium
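A minimal sketch of the cascade wiring described above, under illustrative assumptions: the classes and the nearest-centroid base classifier are hypothetical stand-ins for the paper's MLPs, chosen only to show how each added stage receives the previous stage's class scores alongside the original input.

```python
class NearestCentroid:
    """Tiny stand-in base classifier (hypothetical; the paper uses MLPs)."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.centroids = {
            c: [sum(col) / sum(1 for lab in y if lab == c)
                for col in zip(*[x for x, lab in zip(X, y) if lab == c])]
            for c in self.classes
        }
        return self

    def scores(self, x):
        # Negative squared distance to each class centroid (higher = closer).
        return [-sum((a - b) ** 2 for a, b in zip(x, self.centroids[c]))
                for c in self.classes]

    def predict(self, x):
        s = self.scores(x)
        return self.classes[s.index(max(s))]

class Cascade:
    """Each later stage sees: original features + previous stage's class scores."""

    def __init__(self, n_stages=2):
        self.stages = [NearestCentroid() for _ in range(n_stages)]

    def fit(self, X, y):
        feats = X
        for stage in self.stages:
            stage.fit(feats, y)
            # Augment the features with this stage's outputs for the next stage,
            # exposing correlations between predicted classes as extra input.
            feats = [f + stage.scores(f) for f in feats]
        return self

    def predict(self, x):
        feats = x
        for stage in self.stages[:-1]:
            feats = feats + stage.scores(feats)
        return self.stages[-1].predict(feats)

X = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]
model = Cascade(n_stages=2).fit(X, y)
print(model.predict([0.05, 0.1]))  # 0
print(model.predict([0.95, 1.0]))  # 1
```

The key point is the feature augmentation in `fit` and `predict`: later stages can exploit structure in the earlier stage's predictions that the raw inputs alone do not expose.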
Solving for multi-class using orthogonal coding matrices
A common method of generalizing binary to multi-class classification is the
error correcting code (ECC). ECCs may be optimized in a number of ways, for
instance by making them orthogonal. Here we test two types of orthogonal ECCs
on seven different datasets using three types of binary classifier and compare
them with three other multi-class methods: 1 vs. 1, one-versus-the-rest and
random ECCs. The first type of orthogonal ECC, in which the codes contain no
zeros, admits a fast and simple method of solving for the probabilities.
Orthogonal ECCs are always more accurate than random ECCs as predicted by
recent literature. Improvments in uncertainty coefficient (U.C.) range between
0.4--17.5% (0.004--0.139, absolute), while improvements in Brier score between
0.7--10.7%. Unfortunately, orthogonal ECCs are rarely more accurate than 1 vs.
1. Disparities are worst when the methods are paired with logistic regression,
with orthogonal ECCs never beating 1 vs. 1. When the methods are paired with
SVM, the losses are less significant, peaking at 1.5% relative (0.011 absolute)
in uncertainty coefficient and 6.5% in Brier score. Orthogonal ECCs are always
the fastest of the five multi-class methods when paired with linear
classifiers. When paired with a piecewise linear classifier, whose
classification speed does not depend on the number of training samples,
classifications using orthogonal ECCs were always more accurate than the
remaining three methods and also faster than 1 vs. 1. Losses against 1 vs. 1
here were higher, peaking at 1.9% (0.017, absolute), in U.C. and 39% in Brier
score. Gains in speed ranged between 1.1% and over 100%. Whether the speed
increase is worth the penalty in accuracy will depend on the application.
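A hypothetical sketch of the zero-free orthogonal ECC mentioned above, with an assumed 4-class code built from ±1 rows of a Hadamard-style matrix. Because distinct rows are orthogonal and C·Cᵀ = n·I, the least-squares recovery of class scores from the binary classifiers' outputs collapses to one inner product per class, which is the kind of "fast and simple solve" the abstract alludes to.

```python
# 4 classes x 4 binary problems; entries in {-1, +1} (no zeros), rows orthogonal.
CODES = [
    [+1, +1, +1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]

def decode(r):
    """Map binary-classifier outputs r (each in [-1, +1]) to class scores.

    With orthogonal +/-1 rows, C C^T = n I, so the least-squares solution of
    C^T p = r is simply p_k = <row_k, r> / n -- no matrix inversion needed.
    """
    n = len(r)
    return [sum(c * o for c, o in zip(row, r)) / n for row in CODES]

# Clean outputs matching class 2's codeword decode back to class 2.
scores = decode([+1, +1, -1, -1])
print(scores.index(max(scores)))  # 2
```

With noisy real classifier outputs the inner products degrade gracefully, and the argmax over class scores still serves as the decoded prediction.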