Online Matrix Completion with Side Information
We give an online algorithm and prove novel mistake and regret bounds for
online binary matrix completion with side information. The mistake bounds we
prove are of the form $\tilde{O}(D/\gamma^2)$. The term $1/\gamma^2$ is
analogous to the usual margin term in SVM (perceptron) bounds. More
specifically, if we assume that there is some factorization of the underlying
$m \times n$ matrix into $P Q^\top$, where the rows of $P$ are interpreted
as "classifiers" in $\mathbb{R}^d$ and the rows of $Q$ as "instances" in
$\mathbb{R}^d$, then $\gamma$ is the maximum (normalized) margin over all
factorizations $P Q^\top$ consistent with the observed matrix. The
quasi-dimension term $D$ measures the quality of side information. In the
presence of vacuous side information, $D = m + n$. However, if the side
information is predictive of the underlying factorization of the matrix, then
in an ideal case, $D \in O(k + \ell)$, where $k$ is the number of distinct row
factors and $\ell$ is the number of distinct column factors. We additionally
provide a generalization of our algorithm to the inductive setting. In this
setting, we provide an example where the side information is not directly
specified in advance. For this example, we again bound the quasi-dimension in
terms of the number of distinct row and column factors.
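The margin quantity $\gamma$ described above can be illustrated concretely: for a $\pm 1$ matrix $M$ consistent with a factorization $PQ^\top$, it is the smallest normalized agreement between any classifier row and instance row. A minimal sketch (the function name and the rank-1 example are ours, not the paper's):

```python
import numpy as np

def normalized_margin(P, Q, M):
    """Normalized margin of the factorization P Q^T with respect to the
    +/-1 matrix M: min_ij M_ij <p_i, q_j> / (||p_i|| ||q_j||)."""
    scores = P @ Q.T  # raw factorization scores, one per matrix entry
    norms = np.linalg.norm(P, axis=1)[:, None] * np.linalg.norm(Q, axis=1)[None, :]
    return np.min(M * scores / norms)

# Rank-1 example: M is consistent with (P, Q) since sign(P Q^T) == M.
P = np.array([[1.0], [-1.0]])  # "classifiers"
Q = np.array([[1.0], [1.0]])   # "instances"
M = np.sign(P @ Q.T)
print(normalized_margin(P, Q, M))  # 1.0 -- factors perfectly aligned
```

A margin close to 1 means every entry of $M$ is predicted with high confidence by some consistent factorization; the mistake bound degrades as $1/\gamma^2$ when no such well-separated factorization exists.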
Multi-Target Prediction: A Unifying View on Problems and Methods
Multi-target prediction (MTP) is concerned with the simultaneous prediction
of multiple target variables of diverse type. Due to its enormous application
potential, it has developed into an active and rapidly expanding research field
that combines several subfields of machine learning, including multivariate
regression, multi-label classification, multi-task learning, dyadic prediction,
zero-shot learning, network inference, and matrix completion. In this paper, we
present a unifying view on MTP problems and methods. First, we formally discuss
commonalities and differences between existing MTP problems. To this end, we
introduce a general framework that covers the above subfields as special cases.
As a second contribution, we provide a structured overview of MTP methods. This
is accomplished by identifying a number of key properties, which distinguish
such methods and determine their suitability for different types of problems.
Finally, we also discuss a few challenges for future research.
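The common structure across these subfields can be made concrete with a toy data layout: every MTP problem observes (instance, target, value) triples, and the subfields differ mainly in which cells are observed and which axes carry side information. The class and field names below are illustrative, not the paper's notation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MTPDataset:
    """Dyadic view of a multi-target prediction problem: observed
    (instance, target, value) triples plus optional side information
    on either axis."""
    triples: list                             # [(instance_id, target_id, value), ...]
    instance_features: Optional[dict] = None  # side info on instances (rows)
    target_features: Optional[dict] = None    # side info on targets (columns)

# Multi-label classification: every instance is paired with every label;
# only instances carry features.
mlc = MTPDataset(triples=[("x1", "label_a", 1), ("x1", "label_b", 0)],
                 instance_features={"x1": [0.3, 0.7]})

# Matrix completion: only a sparse subset of cells is observed, and
# neither axis has features.
mc = MTPDataset(triples=[("user1", "item9", 5)])
```

Zero-shot learning then corresponds to triples whose target ids never appear at training time but do have `target_features`, which is what makes generalization to new targets possible.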
A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression
Many machine learning problems can be formulated as predicting labels for a
pair of objects. Problems of that kind are often referred to as pairwise
learning, dyadic prediction or network inference problems. During the last
decade kernel methods have played a dominant role in pairwise learning. They
still obtain a state-of-the-art predictive performance, but a theoretical
analysis of their behavior has been underexplored in the machine learning
literature.
In this work we review and unify existing kernel-based algorithms that are
commonly used in different pairwise learning settings, ranging from matrix
filtering to zero-shot learning. To this end, we focus on closed-form efficient
instantiations of Kronecker kernel ridge regression. We show that independent
task kernel ridge regression, two-step kernel ridge regression and a linear
matrix filter arise naturally as a special case of Kronecker kernel ridge
regression, implying that all these methods implicitly minimize a squared loss.
In addition, we analyze universality, consistency and spectral filtering
properties. Our theoretical results provide valuable insights into assessing
the advantages and limitations of existing pairwise learning methods.
Comment: arXiv admin note: text overlap with arXiv:1606.0427
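The closed-form efficiency of Kronecker kernel ridge regression comes from a standard linear-algebra trick: the system $(K_t \otimes K_x + \lambda I)\,\mathrm{vec}(A) = \mathrm{vec}(Y)$ can be solved via separate eigendecompositions of the two kernel factors, never forming the Kronecker product. A generic sketch of that trick (function and variable names are ours, not the paper's):

```python
import numpy as np

def kron_krr_dual(Kx, Kt, Y, lam):
    """Solve (Kt kron Kx + lam*I) vec(A) = vec(Y) without forming the
    Kronecker product, via eigendecomposition of each kernel factor."""
    wx, Qx = np.linalg.eigh(Kx)    # instance-side kernel
    wt, Qt = np.linalg.eigh(Kt)    # task-side kernel
    S = Qx.T @ Y @ Qt              # rotate into the joint eigenbasis
    S /= np.outer(wx, wt) + lam    # elementwise shrinkage by eigenvalue pairs
    return Qx @ S @ Qt.T           # rotate back: vec(A) solves the system

rng = np.random.default_rng(0)
X, T = rng.standard_normal((5, 3)), rng.standard_normal((4, 2))
Kx, Kt = X @ X.T, T @ T.T          # linear kernels on instances and tasks
Y = rng.standard_normal((5, 4))

A = kron_krr_dual(Kx, Kt, Y, lam=0.1)
# Agrees with the naive solve on the explicit (n*m) x (n*m) system.
naive = np.linalg.solve(np.kron(Kt, Kx) + 0.1 * np.eye(20), Y.flatten(order="F"))
print(np.allclose(A.flatten(order="F"), naive))  # True
```

The factored solve costs $O(n^3 + m^3)$ instead of $O((nm)^3)$, which is what makes these Kronecker methods practical on dyadic data.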
ARDA: Automatic Relational Data Augmentation for Machine Learning
Automatic machine learning (AML) is a family of techniques to automate the
process of training predictive models, aiming to both improve performance and
make machine learning more accessible. While many recent works have focused on
aspects of the machine learning pipeline like model selection, hyperparameter
tuning, and feature selection, relatively few works have focused on automatic
data augmentation. Automatic data augmentation involves finding new features
relevant to the user's predictive task with minimal "human-in-the-loop"
involvement.
We present ARDA, an end-to-end system that takes as input a dataset and a
data repository, and outputs an augmented dataset such that training a
predictive model on this augmented dataset results in improved performance. Our
system has two distinct components: (1) a framework to search and join data
with the input data, based on various attributes of the input, and (2) an
efficient feature selection algorithm that prunes out noisy or irrelevant
features from the resulting join. We perform an extensive empirical evaluation
of different system components and benchmark our feature selection algorithm on
real-world datasets.
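The augmentation-then-pruning loop the abstract describes can be sketched as a greedy filter over candidate joined columns: keep a feature only if it improves a held-out validation score. This is a generic stand-in for illustration, not ARDA's actual selection algorithm, and all names below are ours:

```python
import numpy as np

def greedy_augment(X_base, candidates, y, score):
    """Keep each candidate joined feature only if it improves the
    validation score (higher is better)."""
    X, best = X_base, score(X_base, y)
    for col in candidates:
        X_try = np.column_stack([X, col])
        s = score(X_try, y)
        if s > best:                      # prune noisy/irrelevant features
            X, best = X_try, s
    return X, best

def holdout_score(X, y, n_train=60):
    """Negative validation MSE of a least-squares fit."""
    w, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - X[n_train:] @ w
    return -np.mean(resid ** 2)

rng = np.random.default_rng(0)
X_base = rng.standard_normal((100, 2))
signal = rng.standard_normal(100)         # a genuinely useful joined feature
noise = rng.standard_normal(100)          # an irrelevant joined feature
y = X_base @ np.array([1.0, -2.0]) + 3.0 * signal + 0.5 * rng.standard_normal(100)

X_aug, _ = greedy_augment(X_base, [noise, signal], y, holdout_score)
# The signal column survives the pruning step.
print(any(np.allclose(X_aug[:, j], signal) for j in range(X_aug.shape[1])))  # True
```

Real systems must also handle the join-discovery half of the problem (which tables and keys to join on) and scale the selection step to thousands of candidate columns, which is where the efficient algorithm described in the abstract comes in.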