Matrix completion and extrapolation via kernel regression
Matrix completion and extrapolation (MCEX) are addressed here over
reproducing kernel Hilbert spaces (RKHSs) in order to account for prior
information present in the available data. Aiming at a fast, low-complexity
solver, the task is formulated as a kernel ridge regression. The resultant
MCEX algorithm also admits an online implementation, while the adopted class
of kernel functions encompasses several existing approaches to MC with
prior information. Numerical tests on synthetic and real datasets show that the
novel approach runs faster than widespread methods such as alternating
least squares (ALS) or stochastic gradient descent (SGD), and that the recovery
error is reduced, especially when dealing with noisy data.
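The kernel-ridge formulation of matrix completion described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's algorithm: the toy matrix, the RBF kernels over row/column indices, and all parameter values are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: a smooth rank-1 matrix (hypothetical data)
n_r, n_c = 20, 20
M = np.sin(np.linspace(0, 3, n_r))[:, None] * np.cos(np.linspace(0, 3, n_c))[None, :]

# Observe roughly 40% of the entries
mask = rng.random(M.shape) < 0.4
rows, cols = np.nonzero(mask)
y = M[mask]

# RBF kernels over row/column indices encode a smoothness prior
def rbf(a, b, gamma=0.1):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

Kr = rbf(np.arange(n_r), np.arange(n_r))
Kc = rbf(np.arange(n_c), np.arange(n_c))

# Kernel ridge regression: product kernel between observed entries,
# then a single regularized linear solve for the coefficients
K_obs = Kr[np.ix_(rows, rows)] * Kc[np.ix_(cols, cols)]
lam = 1e-3
alpha = np.linalg.solve(K_obs + lam * np.eye(len(y)), y)

# Predict every entry, including the unobserved (extrapolated) ones
M_hat = np.einsum('it,jt,t->ij', Kr[:, rows], Kc[:, cols], alpha)
```

Because the fit reduces to one linear solve over the observed entries, it avoids the iterative factor updates of ALS or SGD, which is the source of the speed advantage the abstract claims.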
Application Performance Modeling via Tensor Completion
Performance tuning, software/hardware co-design, and job scheduling are among
the many tasks that rely on models to predict application performance. We
propose and evaluate low-rank tensor decomposition for modeling application
performance. We discretize the input and configuration domains of an
application using regular grids. Application execution times mapped within
grid-cells are averaged and represented by tensor elements. We show that
low-rank canonical-polyadic (CP) tensor decomposition is effective in
approximating these tensors. We further show that this decomposition enables
accurate extrapolation of unobserved regions of an application's parameter
space. We then employ tensor completion to optimize a CP decomposition given a
sparse set of observed execution times. We consider alternative
piecewise/grid-based models and supervised learning models for six applications
and demonstrate that CP decomposition optimized using tensor completion offers
higher prediction accuracy and better memory efficiency for high-dimensional
performance modeling.
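The core idea, fitting a CP decomposition to only the observed tensor entries, can be sketched as follows. This is an illustrative toy, not the paper's method: the tensor sizes, rank, masked-gradient-descent optimizer, and all constants are assumptions chosen here for a self-contained demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy tensor with exact rank-2 CP structure, standing in
# for grid-averaged execution times over three discretized parameters
I, J, K, R = 8, 8, 8, 2
A0, B0, C0 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

mask = rng.random(T.shape) < 0.3          # sparse set of observed entries

def masked_loss(A, B, C):
    resid = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)
    return resid, 0.5 * np.sum(resid ** 2)

# Tensor completion: fit CP factors to the observed entries only,
# here via plain gradient descent on the masked squared error
A, B, C = (0.5 * rng.random((d, R)) for d in (I, J, K))
resid, loss0 = masked_loss(A, B, C)
lr = 0.02
for _ in range(4000):
    A -= lr * np.einsum('ijk,jr,kr->ir', resid, B, C)
    B -= lr * np.einsum('ijk,ir,kr->jr', resid, A, C)
    C -= lr * np.einsum('ijk,ir,jr->kr', resid, A, B)
    resid, loss_now = masked_loss(A, B, C)

# The reconstructed tensor extrapolates the unobserved grid cells
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

The memory-efficiency claim follows from the factorization: the full tensor has I·J·K entries while the CP model stores only (I+J+K)·R parameters, a gap that widens rapidly in higher dimensions.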
Gaussian-Process-based Robot Learning from Demonstration
Endowed with higher levels of autonomy, robots are required to perform
increasingly complex manipulation tasks. Learning from demonstration is
emerging as a promising paradigm for transferring skills to robots. It allows
task constraints to be learned implicitly by observing the motion executed by a
human teacher, which can enable adaptive behavior. We present a novel
Gaussian-Process-based learning from demonstration approach. This probabilistic
representation makes it possible to generalize over multiple demonstrations and
to encode the variability along the different phases of the task. In this
paper, we address how Gaussian Processes can be used to effectively learn a
policy from trajectories in task space. We also present a method to efficiently
adapt the policy to fulfill new requirements, and to modulate the robot
behavior as a function of task variability. This approach is illustrated
through a real-world application using the TIAGo robot.
Comment: 8 pages, 10 figures
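Standard GP regression already captures the two ingredients the abstract highlights, generalization over demonstrations and a variability estimate, and can be sketched as below. This is a generic GP-regression illustration under assumed settings (1-D trajectories, RBF kernel, hand-picked lengthscale and noise level), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical demonstrations: three noisy 1-D trajectories of the
# same motion, standing in for teacher demonstrations in task space
t_demo = np.tile(np.linspace(0.0, 1.0, 25), 3)
y_demo = np.sin(2 * np.pi * t_demo) + 0.05 * rng.standard_normal(t_demo.size)

def rbf(a, b, ell=0.1):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# GP posterior: the mean is the learned policy, while the variance
# encodes the variability observed across the demonstrations
sigma_n = 0.05
K = rbf(t_demo, t_demo) + sigma_n ** 2 * np.eye(t_demo.size)
t_query = np.linspace(0.0, 1.0, 100)
Ks = rbf(t_query, t_demo)
alpha = np.linalg.solve(K, y_demo)
policy_mean = Ks @ alpha
policy_cov = rbf(t_query, t_query) - Ks @ np.linalg.solve(K, Ks.T)
policy_std = np.sqrt(np.clip(np.diag(policy_cov), 0.0, None))
```

A controller can then track `policy_mean` while using `policy_std` to relax tracking stiffness in phases where the demonstrations disagree, which is one way the described modulation by task variability can be realized.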
Generalization error bounds for kernel matrix completion and extrapolation
© 2020 IEEE. Prior information can be incorporated in matrix completion to improve estimation accuracy and to extrapolate the missing entries. Reproducing kernel Hilbert spaces provide tools to leverage the said prior information and to derive more reliable algorithms. This paper analyzes the generalization error of such approaches, and presents numerical tests confirming the theoretical results.
- …