Towards meta-learning for multi-target regression problems
Several multi-target regression methods have been developed in recent years
aiming at improving predictive performance by exploring inter-target correlations
within the problem. However, none of these methods outperforms the others on
all problems. This motivates the development of automatic approaches to
recommend the most suitable multi-target regression method. In this paper, we
propose a meta-learning system to recommend the best predictive method for a
given multi-target regression problem. We performed experiments with a
meta-dataset generated from a total of 648 synthetic datasets. These datasets
were created to explore distinct inter-target characteristics toward
recommending the most promising method. In experiments, we evaluated four
different algorithms with different biases as meta-learners. Our meta-dataset
is composed of 58 meta-features based on statistical information, correlation
characteristics, linear landmarking, and the distribution and smoothness of
the data, and has four different meta-labels. Results showed that the induced
meta-models were able to recommend the best method for different base-level
datasets with a balanced accuracy above 70% using a Random Forest
meta-model, which statistically outperformed the meta-learning baselines.
Comment: To appear at the 8th Brazilian Conference on Intelligent Systems
(BRACIS).
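The recommendation step described above can be pictured as an ordinary classification task over the meta-dataset. The sketch below is a minimal, hypothetical illustration, not the paper's pipeline: rows stand for base-level datasets described by 58 meta-features, the meta-label is the index of the best-performing method, and a Random Forest meta-model is scored with balanced accuracy. The random data and the four-way label are placeholders for the paper's actual meta-features and meta-labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_datasets, n_meta_features = 648, 58

# Hypothetical meta-features (statistical, correlation, landmarking, ...).
X_meta = rng.normal(size=(n_datasets, n_meta_features))
# Hypothetical meta-label: which of four candidate MTR methods performed best.
y_meta = rng.integers(0, 4, size=n_datasets)

meta_model = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(meta_model, X_meta, y_meta,
                         scoring="balanced_accuracy", cv=10)
print(f"balanced accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```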
Combining Kernel Functions in Supervised Learning Models.
The research activity has mainly dealt with supervised Machine Learning algorithms,
specifically within the context of kernel methods. A kernel function is a positive definite
function mapping data from the original input space into a higher dimensional Hilbert
space. Unlike classical linear methods, where problems are solved by seeking a
linear function that separates points in the input space, kernel methods all share
the same basic idea: the original input data are mapped into a higher-dimensional feature
space whose new coordinates are never computed explicitly; only the inner products of input
points are. In this way, kernel methods make it possible to deal with non-linearly separable
data while still using linear models in the feature space, that is, Machine Learning
methods that use a linear function to determine the best fit for a given set of data.
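As an illustration of that mapping, the short sketch below (not taken from the thesis) shows the standard "kernel trick" for a degree-2 polynomial kernel: the inner product in the higher-dimensional feature space coincides with a function evaluated directly on the original inputs, so the new coordinates never have to be formed.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel on R^2."""
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

def poly2_kernel(x, z):
    """The same inner product, evaluated directly in the input space."""
    return float(np.dot(x, z)) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

# Both quantities coincide: <phi(x), phi(z)> == (x . z)^2
print(np.dot(phi(x), phi(z)))   # 1.0
print(poly2_kernel(x, z))       # 1.0
```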
Instead of employing one single kernel function, Multiple Kernel Learning algorithms
tackle the problem of selecting kernel functions by using a combination of preset base
kernels. Infinite Kernel Learning further extends this idea by exploiting a combination
of possibly infinitely many base kernels. The core idea of this research is to utilize a novel
complex combination of kernel functions in existing or modified supervised
Machine Learning frameworks. Specifically, we considered two frameworks: Extreme
Learning Machine, having the structure of classical feedforward Neural Networks but
being characterized by hidden-node variables that are randomly assigned at the beginning of
the algorithm; Support Vector Machine, a class of linear algorithms based on the idea
of separating data with a hyperplane having as wide a margin as possible. The first
proposed model extends the classical Extreme Learning Machine formulation using a
combination of possibly infinitely many base kernels, presenting a two-step algorithm.
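As a rough illustration of this first model, the sketch below combines a few preset RBF base kernels with fixed weights inside a kernel-based Extreme Learning Machine. The hand-picked weights `mu`, the gammas, and the ridge-style solution are assumptions standing in for the two-step algorithm that learns the (possibly infinite) combination; none of the values come from the thesis.

```python
import numpy as np

def rbf(X, Z, gamma):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Z, gammas, mu):
    """Convex combination of preset RBF base kernels."""
    return sum(m * rbf(X, Z, g) for g, m in zip(gammas, mu))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=100)
X_test = rng.normal(size=(10, 5))

gammas = [0.1, 1.0, 10.0]        # hypothetical preset base kernels
mu = np.array([0.5, 0.3, 0.2])   # fixed combination weights (sum to 1)
C = 10.0                         # regularisation strength

# Kernel ELM solution: beta = (I/C + K)^(-1) y, prediction = K(test, train) @ beta.
K = combined_kernel(X_train, X_train, gammas, mu)
beta = np.linalg.solve(np.eye(len(X_train)) / C + K, y_train)
y_pred = combined_kernel(X_test, X_train, gammas, mu) @ beta
print(y_pred[:3])
```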
The second result uses a preexisting multi-task kernel function in a novel Support
Vector Machine framework. Multi-task learning is the Machine Learning problem
of solving more than one task at the same time, with the main goal of taking into
account the existing multi-task relationships. To be able to use the existing multi-task
kernel function, we had to construct a new framework based on the classical Support
Vector Machine one, taking care of every multi-task correlation factor.
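To make that last point concrete, the sketch below plugs a simple multi-task kernel, a shared term plus a same-task term in the spirit of the Evgeniou and Pontil construction, into an off-the-shelf SVM through a precomputed Gram matrix. This kernel form, the `rho` parameter, and the toy two-task data are assumptions for illustration only; the thesis relies on a different, preexisting multi-task kernel and on a modified SVM framework rather than scikit-learn's SVC.

```python
import numpy as np
from sklearn.svm import SVC

def multitask_gram(X, tasks_x, Z, tasks_z, rho=0.5):
    """K((x, t), (z, s)) = (1 + rho * [t == s]) * <x, z>."""
    base = X @ Z.T                                           # shared linear part
    same_task = (tasks_x[:, None] == tasks_z[None, :]).astype(float)
    return (1.0 + rho * same_task) * base

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
tasks = rng.integers(0, 2, size=60)            # two hypothetical related tasks
y = (X[:, 0] + 0.3 * tasks > 0).astype(int)    # toy binary labels

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(multitask_gram(X, tasks, X, tasks), y)

X_new = rng.normal(size=(5, 4))
tasks_new = np.array([0, 1, 0, 1, 0])
print(clf.predict(multitask_gram(X_new, tasks_new, X, tasks)))
```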