Matrix completion and extrapolation via kernel regression
Matrix completion and extrapolation (MCEX) are dealt with here over
reproducing kernel Hilbert spaces (RKHSs) in order to account for prior
information present in the available data. Aiming at a fast, low-complexity
solver, the task is formulated as a kernel ridge regression. The resulting MCEX
algorithm also admits an online implementation, and the adopted class of kernel
functions encompasses several existing approaches to MC with
prior information. Numerical tests on synthetic and real datasets show that the
novel approach performs faster than widespread methods such as alternating
least squares (ALS) or stochastic gradient descent (SGD), and that the recovery
error is reduced, especially when dealing with noisy data.
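To make the kernel ridge regression formulation concrete, below is a minimal NumPy sketch of kernel-based completion: each column is imputed by kernel ridge regression over its observed rows, with a row kernel matrix standing in for the prior information. The function name, the column-wise treatment, the Gaussian kernel in the toy usage, and the regularization weight are illustrative assumptions rather than the MCEX algorithm from the paper.

```python
import numpy as np

def krr_complete_columns(X, mask, K_rows, lam=1e-2):
    """Impute missing entries column by column with kernel ridge regression.

    X      : (m, n) matrix; unobserved entries may hold arbitrary values
    mask   : (m, n) boolean array, True where the entry is observed
    K_rows : (m, m) kernel matrix encoding prior similarity between rows
    lam    : ridge regularization weight (illustrative default)
    """
    X_hat = X.astype(float).copy()
    for j in range(X.shape[1]):
        obs = np.flatnonzero(mask[:, j])      # observed row indices in column j
        mis = np.flatnonzero(~mask[:, j])     # missing row indices in column j
        if obs.size == 0 or mis.size == 0:
            continue
        K_oo = K_rows[np.ix_(obs, obs)]       # kernel among observed rows
        K_mo = K_rows[np.ix_(mis, obs)]       # kernel: missing rows vs. observed rows
        alpha = np.linalg.solve(K_oo + lam * np.eye(obs.size), X[obs, j])
        X_hat[mis, j] = K_mo @ alpha          # KRR prediction for the missing entries
    return X_hat

# Toy usage: low-rank ground truth, Gaussian kernel built from (assumed) row side information
rng = np.random.default_rng(0)
U, V = rng.normal(size=(60, 3)), rng.normal(size=(40, 3))
X_true = U @ V.T
mask = rng.random(X_true.shape) < 0.5
D = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)
K_rows = np.exp(-D / 2.0)
X_hat = krr_complete_columns(np.where(mask, X_true, 0.0), mask, K_rows)
```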
Graph Convolutional Matrix Completion
We consider matrix completion for recommender systems from the point of view
of link prediction on graphs. Interaction data such as movie ratings can be
represented by a bipartite user-item graph with labeled edges denoting observed
ratings. Building on recent progress in deep learning on graph-structured data,
we propose a graph auto-encoder framework based on differentiable message
passing on the bipartite interaction graph. Our model shows competitive
performance on standard collaborative filtering benchmarks. In settings where
complementary feature information or structured data such as a social network
is available, our framework outperforms recent state-of-the-art methods.
Comment: 9 pages, 3 figures, updated with additional experimental evaluation
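As a rough illustration of the message-passing idea, the sketch below runs one round of rating-specific message passing on a dense bipartite rating matrix and scores each candidate rating level with a bilinear decoder, loosely following the described graph auto-encoder. It is a simplified NumPy approximation (dense adjacency, no dropout or normalization tricks, random untrained weights); all names and shapes are assumptions.

```python
import numpy as np

def gcmc_forward(R_obs, X_u, X_v, W_msg, W_dec):
    """One round of bipartite message passing plus a bilinear rating decoder.

    R_obs : (n_users, n_items) int array; 0 = unobserved, 1..R = observed rating level
    X_u   : (n_users, d_in) raw user features
    X_v   : (n_items, d_in) raw item features
    W_msg : (R, d_in, d_hid) per-rating message weights
    W_dec : (R, d_hid, d_hid) per-rating bilinear decoder weights
    Returns (n_users, n_items, R) unnormalized logits over rating levels.
    """
    n_ratings, _, d_hid = W_msg.shape
    H_u = np.zeros((X_u.shape[0], d_hid))
    H_v = np.zeros((X_v.shape[0], d_hid))
    for r in range(1, n_ratings + 1):
        A_r = (R_obs == r).astype(float)                   # edges carrying rating r
        deg_u = np.maximum(A_r.sum(1, keepdims=True), 1.0)
        deg_v = np.maximum(A_r.sum(0, keepdims=True).T, 1.0)
        H_u += (A_r @ (X_v @ W_msg[r - 1])) / deg_u        # item -> user messages
        H_v += (A_r.T @ (X_u @ W_msg[r - 1])) / deg_v      # user -> item messages
    H_u, H_v = np.maximum(H_u, 0.0), np.maximum(H_v, 0.0)  # ReLU activation
    return np.stack([H_u @ W_dec[r] @ H_v.T for r in range(n_ratings)], axis=-1)

# Toy usage with random weights (training would fit W_msg and W_dec to observed ratings)
rng = np.random.default_rng(0)
R_obs = rng.integers(0, 6, size=(8, 12))                   # ratings 1..5, 0 = missing
X_u, X_v = np.eye(8), np.eye(12)[:, :8]                    # toy features, d_in = 8
W_msg = rng.normal(size=(5, 8, 4)) * 0.1
W_dec = rng.normal(size=(5, 4, 4)) * 0.1
logits = gcmc_forward(R_obs, X_u, X_v, W_msg, W_dec)       # shape (8, 12, 5)
```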
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
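To illustrate the tensor train (TT) format emphasized in the monograph, here is a minimal NumPy sketch of the standard TT-SVD procedure: sequential truncated SVDs peel off one core per mode, and contracting the cores approximately reproduces the original tensor. The rank cap and helper names are illustrative assumptions.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Decompose a dense tensor into tensor-train (TT) cores via sequential truncated SVDs.

    T        : d-way numpy array
    max_rank : cap on every internal TT rank (illustrative truncation rule)
    Returns a list of d cores; core k has shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1.
    """
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)                    # unfold along the first mode
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, s.size)
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))           # last core absorbs the remainder
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor (for checking the approximation)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])

# Toy usage: the relative error shrinks as max_rank grows
rng = np.random.default_rng(0)
T = rng.normal(size=(4, 5, 6, 7))
cores = tt_svd(T, max_rank=3)
rel_err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
```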