Multi-Target Prediction: A Unifying View on Problems and Methods
Multi-target prediction (MTP) is concerned with the simultaneous prediction
of multiple target variables of diverse type. Due to its enormous application
potential, it has developed into an active and rapidly expanding research field
that combines several subfields of machine learning, including multivariate
regression, multi-label classification, multi-task learning, dyadic prediction,
zero-shot learning, network inference, and matrix completion. In this paper, we
present a unifying view on MTP problems and methods. First, we formally discuss
commonalities and differences between existing MTP problems. To this end, we
introduce a general framework that covers the above subfields as special cases.
As a second contribution, we provide a structured overview of MTP methods. This
is accomplished by identifying a number of key properties, which distinguish
such methods and determine their suitability for different types of problems.
Finally, we also discuss a few challenges for future research.
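The multivariate-regression special case of this framework can be made concrete with a small sketch; the data, dimensions, and ridge penalty below are invented for illustration, and joint prediction of all targets is shown in its simplest closed form:

```python
import numpy as np

# Multivariate ridge regression: the simplest MTP special case, where m target
# variables are predicted jointly from the same feature matrix. All values here
# are synthetic and illustrative.
rng = np.random.default_rng(0)
n, d, m = 100, 5, 3                      # instances, features, targets
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, m))
Y = X @ W_true + 0.1 * rng.normal(size=(n, m))

lam = 1.0                                # ridge penalty, shared across targets
# Closed-form solution of min_W ||Y - XW||_F^2 + lam ||W||_F^2
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

mse = np.mean((Y - X @ W) ** 2)
```

Coupling the columns of W (for example with a low-rank or shared-sparsity penalty) moves this toward the multi-task and matrix-completion corners of the framework surveyed above.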
Kernel-based Inference of Functions over Graphs
The study of networks has witnessed explosive growth over the past decades,
with several ground-breaking methods introduced. A particularly interesting
problem -- and one prevalent in several fields of study -- is that of
inferring a function defined over the nodes of a network. This work presents a versatile
kernel-based framework for tackling this inference problem that naturally
subsumes and generalizes the reconstruction approaches put forth recently by
the signal processing on graphs community. Both the static and the dynamic
settings are considered along with effective modeling approaches for addressing
real-world problems. The analytical discussion herein is complemented by a set
of numerical examples, which showcase the effectiveness of the presented
techniques, as well as their merits relative to state-of-the-art methods.

Comment: To be published as a chapter in `Adaptive Learning Methods for
Nonlinear System Modeling', Elsevier Publishing, Eds. D. Comminiello and J.C.
Principe (2018). This chapter surveys recent work on kernel-based inference
of functions over graphs, including arXiv:1612.03615, arXiv:1605.07174, and
arXiv:1711.0930
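A hedged sketch of the kind of kernel-based inference the chapter covers (not its exact formulation): kernel ridge regression over graph nodes, with the kernel taken as a regularized inverse of the graph Laplacian. The graph, signal, and regularization weights are invented for illustration.

```python
import numpy as np

# Infer a function on the nodes of a 6-node path graph from 3 observed nodes,
# using a Laplacian-based graph kernel (illustrative values throughout).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0       # path-graph adjacency
L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
K = np.linalg.inv(L + 0.01 * np.eye(6))   # regularized-inverse-Laplacian kernel

f_true = np.linspace(0.0, 1.0, 6)         # a smooth signal on the nodes
obs = [0, 2, 5]                           # indices of observed nodes
mu = 0.01                                 # kernel ridge regularization weight

# Kernel ridge estimate: solve (K_oo + mu*I) alpha = y on the observed block,
# then extend the estimate to all nodes through the kernel columns.
alpha = np.linalg.solve(K[np.ix_(obs, obs)] + mu * np.eye(len(obs)), f_true[obs])
f_hat = K[:, obs] @ alpha
```

Because this kernel penalizes variation along edges, the estimate interpolates the observed values smoothly across the graph; other spectral functions of the Laplacian give the diffusion and band-limited kernels common in the signal processing on graphs literature.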
Goal Set Inverse Optimal Control and Iterative Re-planning for Predicting Human Reaching Motions in Shared Workspaces
To enable safe and efficient human-robot collaboration in shared workspaces,
it is important for the robot to predict how a human will move when performing
a task. While predicting human motion for tasks not known a priori is very
challenging, we argue that single-arm reaching motions for known tasks in
collaborative settings (which are especially relevant for manufacturing) are
indeed predictable. Two hypotheses underlie our approach for predicting such
motions: First, that the trajectory the human performs is optimal with respect
to an unknown cost function, and second, that human adaptation to their
partner's motion can be captured well through iterative re-planning with the
above cost function. The key to our approach is thus to learn a cost function
which "explains" the motion of the human. To do this, we gather example
trajectories from pairs of participants performing a collaborative assembly
task using motion capture. We then use Inverse Optimal Control to learn a cost
function from these trajectories. Finally, we predict reaching motions from the
human's current configuration to a task-space goal region by iteratively
re-planning a trajectory using the learned cost function. Our planning
algorithm is based on the trajectory optimizer STOMP; it plans for a 23-DoF
human kinematic model and accounts for the presence of a moving collaborator
and obstacles in the environment. Our results suggest that in most cases, our
method outperforms baseline methods when predicting motions. We also show that
our method outperforms baselines for predicting human motion when a human and a
robot share the workspace.

Comment: 12 pages. Accepted for publication in IEEE Transactions on Robotics, 201
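A heavily simplified, hedged sketch of STOMP-style stochastic trajectory optimization, on a 1-D toy problem rather than the paper's 23-DoF human model; the cost terms, noise scale, and greedy acceptance step are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

# STOMP-style update: sample noisy rollouts around the current trajectory,
# weight them by exponentiated negative cost, and average. Toy 1-D problem.
rng = np.random.default_rng(1)
T = 20
traj = np.linspace(0.0, 1.0, T)           # straight-line initialization

def cost(x):
    smooth = np.sum(np.diff(x, 2) ** 2)                   # acceleration penalty
    obstacle = np.sum(np.exp(-((x - 0.5) ** 2) / 0.01))   # soft obstacle at 0.5
    return smooth + 0.1 * obstacle

c = cost(traj)
for _ in range(100):
    eps = 0.05 * rng.normal(size=(8, T))  # 8 noisy rollouts per iteration
    eps[:, 0] = eps[:, -1] = 0.0          # keep start and goal fixed
    rollouts = traj + eps
    costs = np.array([cost(r) for r in rollouts])
    w = np.exp(-10.0 * (costs - costs.min()))             # probability weights
    w /= w.sum()
    candidate = w @ rollouts
    if cost(candidate) < c:               # greedy acceptance (a simplification)
        traj, c = candidate, cost(candidate)
```

In the prediction setting described above, a loop of this kind would be re-run from the human's updated configuration at each step, which is what iterative re-planning refers to; here a single optimization from a fixed start stands in for that.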
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
We analyze a class of estimators based on convex relaxation for solving
high-dimensional matrix decomposition problems. The observations are noisy
realizations of a linear transformation of the sum of an
(approximately) low rank matrix with a second matrix
endowed with a complementary form of low-dimensional structure;
this set-up includes many statistical models of interest, including factor
analysis, multi-task regression, and robust covariance estimation. We derive a
general theorem that bounds the Frobenius norm error for an estimate of the
matrix pair obtained by solving a convex optimization
problem that combines the nuclear norm with a general decomposable regularizer.
Our results utilize a "spikiness" condition that is related to but milder than
singular vector incoherence. We specialize our general result to two cases that
have been studied in past work: low rank plus an entrywise sparse matrix, and
low rank plus a columnwise sparse matrix. For both models, our theory yields
non-asymptotic Frobenius error bounds for both deterministic and stochastic
noise matrices, and applies to matrices that can be exactly or
approximately low rank, and matrices that can be exactly or
approximately sparse. Moreover, for the case of stochastic noise matrices and
the identity observation operator, we establish matching lower bounds on the
minimax error. The sharpness of our predictions is confirmed by numerical
simulations.

Comment: 41 pages, 2 figures
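A hedged sketch of the low-rank-plus-entrywise-sparse model analyzed above, solved here by alternating proximal steps (singular value thresholding for the nuclear norm, soft thresholding for the l1 term) rather than the paper's exact estimator; the data, thresholds, and identity observation operator are illustrative choices.

```python
import numpy as np

# Decompose Y = L + S into a low rank part L and an entrywise sparse part S by
# alternating the proximal operators of the nuclear norm and the l1 norm.
# Synthetic data; thresholds are illustrative, not tuned as in the paper.
rng = np.random.default_rng(2)
n, r = 30, 2
L_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-2 component
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05                            # ~5% corrupted entries
S_true[mask] = rng.normal(scale=5.0, size=mask.sum())
Y = L_true + S_true                                         # identity observations

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

L, S = np.zeros_like(Y), np.zeros_like(Y)
tau, lam = 1.0, 0.5
for _ in range(100):
    L = svt(Y - S, tau)                                      # low rank update
    S = np.sign(Y - L) * np.maximum(np.abs(Y - L) - lam, 0)  # sparse update

rel_err = np.linalg.norm(Y - L - S) / np.linalg.norm(Y)
```

The columnwise-sparse variant studied in the paper would replace the entrywise soft threshold with a groupwise (column-norm) threshold, the proximal operator of the corresponding block norm.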