A Consistent Regularization Approach for Structured Prediction
We propose and analyze a regularization approach for structured prediction
problems. We characterize a large class of loss functions that allows structured outputs to be naturally embedded in a linear space. We exploit this fact to
design learning algorithms using a surrogate loss approach and regularization
techniques. We prove universal consistency and finite sample bounds
characterizing the generalization properties of the proposed methods.
Experimental results are provided to demonstrate the practical usefulness of
the proposed approach. Comment: 39 pages, 2 Tables, 1 Figure.
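As a rough illustration of the surrogate-loss idea, the sketch below learns kernel-ridge weights and then decodes a structured output by minimizing a loss-weighted combination over a candidate set. The Gaussian kernel, Hamming loss, and toy permutation outputs are illustrative assumptions, not the paper's exact setup.

```python
# Rough sketch of a surrogate/loss-weighted decoding scheme for structured
# prediction (illustrative; kernel, loss, and toy outputs are assumptions).
import itertools
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_weights(X_train, reg=1e-3, sigma=1.0):
    # Kernel ridge "scores": alpha(x) = (K + n*reg*I)^{-1} k_x.
    n = X_train.shape[0]
    K = gaussian_kernel(X_train, X_train, sigma)
    K_reg_inv = np.linalg.inv(K + n * reg * np.eye(n))
    return lambda x: K_reg_inv @ gaussian_kernel(X_train, x[None, :], sigma).ravel()

def decode(weights, Y_train, candidates, loss):
    # Return the candidate output minimizing the weighted empirical loss.
    scores = [sum(w * loss(c, y) for w, y in zip(weights, Y_train)) for c in candidates]
    return candidates[int(np.argmin(scores))]

# Toy usage: outputs are permutations of 3 items, loss is the Hamming distance.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
perms = [np.array(p) for p in itertools.permutations(range(3))]
Y = [perms[0] if x[0] > 0 else perms[-1] for x in X]   # synthetic structured labels
hamming = lambda a, b: float(np.mean(a != b))
alpha = fit_weights(X)
print(decode(alpha(np.array([1.0, 0.0])), Y, perms, hamming))
```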
Learning with SGD and Random Features
Sketching and stochastic gradient methods are arguably the most common
techniques to derive efficient large scale learning algorithms. In this paper,
we investigate their application in the context of nonparametric statistical
learning. More precisely, we study the estimator defined by stochastic gradient descent with mini-batches and random features. The latter can be seen as a form of nonlinear sketching and can be used to define approximate kernel methods. The
considered estimator is not explicitly penalized/constrained and regularization
is implicit. Indeed, our study highlights how different parameters, such as the number of features, iterations, step-size, and mini-batch size, control the
learning properties of the solutions. We do this by deriving optimal finite
sample bounds, under standard assumptions. The obtained results are
corroborated and illustrated by numerical experiments.
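A minimal sketch of the kind of estimator discussed above: mini-batch SGD on random Fourier features with no explicit penalty, so that the number of features, step size, mini-batch size, and number of epochs play the role of the implicit regularization parameters. All constants and the Gaussian-kernel feature map are illustrative assumptions.

```python
# Sketch of mini-batch SGD on random Fourier features (Gaussian kernel),
# with no explicit penalty: regularization is implicit in the parameters.
import numpy as np

def random_features(X, W, b):
    # Random Fourier features approximating a Gaussian kernel.
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def sgd_rf(X, y, n_features=200, step=0.5, batch=10, epochs=5, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    w = np.zeros(n_features)
    n = X.shape[0]
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            Phi = random_features(X[idx], W, b)
            grad = Phi.T @ (Phi @ w - y[idx]) / len(idx)   # squared-loss gradient
            w -= step * grad
    return lambda Xnew: random_features(Xnew, W, b) @ w

# Toy usage on a 1-d nonlinear regression problem.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
predict = sgd_rf(X, y)
print("training MSE:", float(np.mean((predict(X) - y) ** 2)))
```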
Consistent Multitask Learning with Nonlinear Output Relations
Key to multitask learning is exploiting relationships between different tasks
to improve prediction performance. If the relations are linear, regularization
approaches can be used successfully. However, in practice assuming the tasks to
be linearly related might be restrictive, and allowing for nonlinear structures
is a challenge. In this paper, we tackle this issue by casting the problem
within the framework of structured prediction. Our main contribution is a novel
algorithm for learning multiple tasks that are related by a system of nonlinear equations their joint outputs must satisfy. We show that the
algorithm is consistent and can be efficiently implemented. Experimental
results show the potential of the proposed method. Comment: 25 pages, 1 figure, 2 tables.
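To make the constraint-based formulation concrete, the sketch below predicts joint task outputs by minimizing a weighted squared distance to training outputs over a set defined by a nonlinear equation (here, the unit circle). The placeholder weights and the scipy-based constrained solve are assumptions for illustration, not the paper's algorithm; in a full estimator the weights would come from a kernel scheme as in the structured prediction sketch above.

```python
# Sketch of multitask prediction with a nonlinear output relation:
# predict by minimizing a weighted squared distance to training outputs
# over the constraint set C = {y : F(y) = 0} (here, the unit circle).
import numpy as np
from scipy.optimize import minimize

def constrained_predict(alpha, Y_train, constraint, y0):
    # argmin_y sum_i alpha_i * ||y - y_i||^2  subject to  constraint(y) = 0
    objective = lambda y: float(np.sum(alpha * np.sum((Y_train - y) ** 2, axis=1)))
    res = minimize(objective, y0, constraints=[{"type": "eq", "fun": constraint}])
    return res.x

rng = np.random.default_rng(0)
Y_train = rng.normal(size=(20, 2))
alpha = rng.dirichlet(np.ones(20))                   # placeholder weights
on_circle = lambda y: y[0] ** 2 + y[1] ** 2 - 1.0    # nonlinear relation F(y) = 0
print(constrained_predict(alpha, Y_train, on_circle, y0=np.array([1.0, 0.0])))
```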
Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes
We consider stochastic gradient descent (SGD) for least-squares regression
with potentially several passes over the data. While several passes have been
widely reported to perform practically better in terms of predictive
performance on unseen data, the existing theoretical analysis of SGD suggests
that a single pass is statistically optimal. While this is true for
low-dimensional easy problems, we show that for hard problems, multiple passes
lead to statistically optimal predictions while a single pass does not; we also
show that in these hard models, the optimal number of passes over the data
increases with sample size. In order to define the notion of hardness and show
that our predictive performances are optimal, we consider potentially
infinite-dimensional models and notions typically associated with kernel methods,
namely, the decay of eigenvalues of the covariance matrix of the features and
the complexity of the optimal predictor as measured through the covariance
matrix. We illustrate our results on synthetic experiments with non-linear
kernel methods and on a classical benchmark with a linear model.
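A minimal sketch contrasting single-pass and multi-pass SGD for least squares, with the number of passes treated as the tuning parameter discussed above. The step size and synthetic data are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of single- vs multi-pass SGD for least-squares regression.
import numpy as np

def sgd_least_squares(X, y, passes=1, step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(passes):
        for i in rng.permutation(n):                  # one pass = one epoch
            w -= step * (X[i] @ w - y[i]) * X[i]      # single-sample gradient step
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=500)
for p in (1, 5, 20):
    w = sgd_least_squares(X, y, passes=p)
    print(p, "passes, parameter error:", float(np.linalg.norm(w - w_true)))
```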
Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction
We study the interplay between surrogate methods for structured prediction
and techniques from multitask learning designed to leverage relationships
between surrogate outputs. We propose an efficient algorithm based on trace
norm regularization which, differently from previous methods, does not require
explicit knowledge of the coding/decoding functions of the surrogate framework.
As a result, our algorithm can be applied to the broad class of problems in
which the surrogate space is large or even infinite dimensional. We study
excess risk bounds for trace norm regularized structured prediction, which imply consistency and learning rates for our estimator. We also identify relevant
regimes in which our approach can enjoy better generalization performance than
previous methods. Numerical experiments on ranking problems indicate that
enforcing low-rank relations among surrogate outputs may indeed provide a
significant advantage in practice. Comment: 42 pages, 1 table.
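The low-rank coupling can be illustrated with a finite-dimensional stand-in: trace-norm (nuclear-norm) regularized multi-output linear regression solved by proximal gradient with singular-value soft-thresholding. This is a sketch of the general technique under simplifying assumptions, not the paper's surrogate-space estimator.

```python
# Sketch of trace-norm regularized multi-output regression via proximal gradient.
import numpy as np

def svt(W, tau):
    # Proximal operator of tau * ||W||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_regression(X, Y, lam=0.5, iters=500):
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2              # 1 / Lipschitz constant of the gradient
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / n                  # gradient of the squared loss
        W = svt(W - step * grad, step * lam)          # proximal (soft-threshold) step
    return W

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))
W_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 10))   # rank-2 task structure
Y = X @ W_true + 0.1 * rng.normal(size=(200, 10))
W_hat = trace_norm_regression(X, Y, lam=0.5)
print("estimated rank:", np.linalg.matrix_rank(W_hat))
```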
Structured Prediction for CRiSP Inverse Kinematics Learning with Misspecified Robot Models
With the recent advances in machine learning, problems that traditionally
would require accurate modeling to be solved analytically can now be
successfully approached with data-driven strategies. Among these, computing the
inverse kinematics of a redundant robot arm poses a significant challenge due
to the non-linear structure of the robot, the hard joint constraints and the
non-invertible kinematics map. Moreover, most learning algorithms consider a
completely data-driven approach, even though useful information about the structure of the robot is often available and should be exploited. In this
work, we present a simple, yet effective, approach for learning the inverse
kinematics. We introduce a structured prediction algorithm that combines a
data-driven strategy with the model provided by a forward kinematics function
-- even when this function is misspecified -- to accurately solve the problem.
The proposed approach ensures that predicted joint configurations are well
within the robot's constraints. We also provide statistical guarantees on the
generalization properties of our estimator as well as an empirical evaluation
of its performance on trajectory reconstruction tasks. Comment: Accepted for publication in IEEE Robotics and Automation Letters (2021) and for presentation at the IEEE International Conference on Robotics and Automation (2021). Updated funding information.
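A toy sketch of the model-aided decoding step described above: the predicted joint configuration minimizes a data term (weighted distance to training configurations) plus a forward-kinematics consistency term, subject to joint limits. The planar two-link arm, joint bounds, placeholder weights, and trade-off constant are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of model-aided decoding for inverse kinematics with joint limits.
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(q, l1=1.0, l2=1.0):
    # End-effector position of a planar 2-link arm (possibly misspecified lengths).
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def predict_configuration(target, alpha, Q_train, mu=1.0, q0=np.array([0.1, 0.5])):
    data_term = lambda q: np.sum(alpha * np.sum((Q_train - q) ** 2, axis=1))
    model_term = lambda q: np.sum((forward_kinematics(q) - target) ** 2)
    objective = lambda q: float(data_term(q) + mu * model_term(q))
    bounds = [(-np.pi, np.pi), (0.0, np.pi)]          # joint limits (illustrative)
    return minimize(objective, q0, bounds=bounds).x

rng = np.random.default_rng(4)
Q_train = rng.uniform([-np.pi, 0.0], [np.pi, np.pi], size=(30, 2))
alpha = rng.dirichlet(np.ones(30))                    # placeholder kernel weights
q_hat = predict_configuration(np.array([1.2, 0.8]), alpha, Q_train)
print(q_hat, forward_kinematics(q_hat))
```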
Kernel Instrumental Variable Regression
Instrumental variable (IV) regression is a strategy for learning causal
relationships in observational data. If measurements of input X and output Y
are confounded, the causal relationship can nonetheless be identified if an
instrumental variable Z is available that influences X directly, but is
conditionally independent of Y given X and the unmeasured confounder. The
classic two-stage least squares algorithm (2SLS) simplifies the estimation
problem by modeling all relationships as linear functions. We propose kernel
instrumental variable regression (KIV), a nonparametric generalization of 2SLS,
modeling relations among X, Y, and Z as nonlinear functions in reproducing
kernel Hilbert spaces (RKHSs). We prove the consistency of KIV under mild
assumptions, and derive conditions under which convergence occurs at the
minimax optimal rate for unconfounded, single-stage RKHS regression. In doing
so, we obtain an efficient ratio between training sample sizes used in the
algorithm's first and second stages. In experiments, KIV outperforms state-of-the-art alternatives for nonparametric IV regression. Comment: 41 pages, 11 figures. Advances in Neural Information Processing Systems, 2019.
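A simplified, finite-dimensional sketch of the two-stage structure described above, using random Fourier features in place of the RKHS feature maps and sample splitting between the two stages. This approximates the spirit of KIV under simplifying assumptions rather than reproducing its exact algorithm; all constants and the toy confounded data are illustrative.

```python
# Sketch of a two-stage, kernelized IV procedure with random Fourier features.
# Stage 1: ridge-regress features of X on features of Z (first split).
# Stage 2: ridge-regress Y on the stage-1 predicted features (second split).
import numpy as np

def rff(V, W, b):
    # Random Fourier features approximating a Gaussian kernel.
    return np.sqrt(2.0 / W.shape[1]) * np.cos(V @ W + b)

def two_stage_iv(X, Y, Z, n_feat=100, lam1=1e-3, lam2=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(Y)
    half = n // 2
    Wx, bx = rng.normal(size=(X.shape[1], n_feat)), rng.uniform(0, 2 * np.pi, n_feat)
    Wz, bz = rng.normal(size=(Z.shape[1], n_feat)), rng.uniform(0, 2 * np.pi, n_feat)
    # Stage 1: conditional feature map of X given Z, via ridge regression.
    Pz1, Px1 = rff(Z[:half], Wz, bz), rff(X[:half], Wx, bx)
    A = np.linalg.solve(Pz1.T @ Pz1 + half * lam1 * np.eye(n_feat), Pz1.T @ Px1)
    # Stage 2: regress Y on the predicted features of X.
    M = rff(Z[half:], Wz, bz) @ A
    beta = np.linalg.solve(M.T @ M + (n - half) * lam2 * np.eye(n_feat), M.T @ Y[half:])
    return lambda Xnew: rff(Xnew, Wx, bx) @ beta

# Toy confounded data: U confounds X and Y; Z shifts X but not Y directly.
rng = np.random.default_rng(5)
n = 1000
U = rng.normal(size=n)
Z = rng.normal(size=(n, 1))
X = (Z[:, 0] + U + 0.1 * rng.normal(size=n)).reshape(-1, 1)
Y = np.sin(X[:, 0]) + U + 0.1 * rng.normal(size=n)
f_hat = two_stage_iv(X, Y, Z)
print("MSE to structural function:", float(np.mean((f_hat(X) - np.sin(X[:, 0])) ** 2)))
```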