On best rank one approximation of tensors
In this paper we suggest a new algorithm for the computation of a best rank
one approximation of tensors, called alternating singular value decomposition.
This method is based on the computation of maximal singular values and the
corresponding singular vectors of matrices. We also introduce a modification
for this method and the alternating least squares method, which ensures that
alternating iterations will always converge to a semi-maximal point. (A
critical point in several vector variables is semi-maximal if it is maximal
with respect to each vector variable, while other vector variables are kept
fixed.) We present several numerical examples that illustrate the computational performance of the new method in comparison with the alternating least squares method.
Comment: 17 pages and 6 figures
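The alternating-SVD idea described in the abstract can be sketched as follows for a 3-tensor: fix one mode vector, contract the tensor along that mode, and update the other two mode vectors simultaneously from the leading singular pair of the resulting matrix. This is an illustrative reading of the abstract, not the authors' reference implementation; the semi-maximality safeguard mentioned above is omitted, and all names are hypothetical.

```python
import numpy as np

def best_rank_one_asvd(T, iters=50, seed=0):
    """Sketch of an alternating-SVD iteration for a best rank-one
    approximation of a 3-tensor T (illustrative, not the authors'
    reference code; the convergence safeguard is omitted)."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    # random unit starting vectors for the three modes
    x = rng.standard_normal(n1); x /= np.linalg.norm(x)
    y = rng.standard_normal(n2); y /= np.linalg.norm(y)
    z = rng.standard_normal(n3); z /= np.linalg.norm(z)
    for _ in range(iters):
        # fix z: contract mode 3, update (x, y) from the top singular pair
        M = np.einsum('ijk,k->ij', T, z)
        U, s, Vt = np.linalg.svd(M)
        x, y = U[:, 0], Vt[0]
        # fix x: contract mode 1, update (y, z)
        M = np.einsum('ijk,i->jk', T, x)
        U, s, Vt = np.linalg.svd(M)
        y, z = U[:, 0], Vt[0]
        # fix y: contract mode 2, update (x, z)
        M = np.einsum('ijk,j->ik', T, y)
        U, s, Vt = np.linalg.svd(M)
        x, z = U[:, 0], Vt[0]
    # value of the trilinear form at the computed critical point
    sigma = np.einsum('ijk,i,j,k->', T, x, y, z)
    return sigma, x, y, z
```

For an exactly rank-one tensor the iteration recovers the factors (up to sign) in a single sweep; on general tensors it converges to a critical point, which motivates the semi-maximality modification discussed in the paper.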
Fast truncation of mode ranks for bilinear tensor operations
We propose a fast algorithm for mode-rank truncation of the result of a bilinear operation on 3-tensors given in the Tucker or canonical form. If the arguments and the result have mode sizes n and mode ranks r, the computation cost is polynomial in n and r. The algorithm is based on the cross approximation of Gram matrices, and the accuracy of the resulting Tucker approximation is limited by the square root of machine precision.
Comment: 9 pages, 2 tables. Submitted to Numerical Linear Algebra and Applications, special issue for the ICSMT conference, Hong Kong, January 201
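The Gram-matrix route to mode-rank truncation, and the accuracy limit noted in the abstract, can be illustrated on a dense tensor. The paper works on Tucker/canonical representations and uses cross approximation of the Gram matrices, which this sketch does not reproduce; the function name and defaults are assumptions for illustration. Because the Gram matrix's eigenvalues are squared singular values, singular values below roughly the square root of machine precision cannot be resolved.

```python
import numpy as np

def truncate_mode(T, mode, eps=1e-6):
    """Truncate one mode rank of a dense tensor via the Gram matrix of
    the mode unfolding (illustrative sketch; the paper's algorithm works
    in Tucker/canonical form with cross approximation instead).

    Returns (U, core) with T approximately equal to core contracted
    with U along `mode`."""
    n = T.shape[mode]
    A = np.moveaxis(T, mode, 0).reshape(n, -1)   # mode unfolding
    G = A @ A.T                                  # n x n Gram matrix
    w, V = np.linalg.eigh(G)                     # ascending eigenvalues
    # eigenvalues are squared singular values, hence the eps**2 threshold:
    # this is why accuracy is limited to about sqrt(machine precision)
    keep = w > (eps ** 2) * w.max()
    U = V[:, keep][:, ::-1]                      # dominant mode subspace
    core = np.tensordot(U.T, T, axes=(1, mode))  # project onto the subspace
    core = np.moveaxis(core, 0, mode)
    return U, core
```

Contracting `core` with `U` along the truncated mode reconstructs the tensor up to the discarded singular values.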
Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback
Although the implicit feedback based recommendation problem - when only the
user history is available but there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit feedback case. State-of-the-art algorithms that are efficient in the
explicit case cannot be straightforwardly adapted to the implicit case if
scalability is to be maintained. There are few if any implicit feedback
benchmark datasets, so new ideas are usually tested on explicit
benchmarks. In this paper, we propose a generic context-aware implicit feedback
recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor
factorization learning method that scales linearly with the number of non-zero
elements in the tensor. The method also allows us to incorporate diverse
context information into the model while maintaining its computational
efficiency. In particular, we present two such context-aware implementation
variants of iTALS. The first incorporates seasonality and distinguishes
user behavior in different time intervals. The second views the user
history as sequential information and can recognize usage
patterns typical of certain groups of items, e.g. it can automatically tell
apart product types or categories that are typically purchased repeatedly
(collectibles, grocery goods) from those purchased once (household appliances).
Experiments performed on three implicit datasets (two proprietary ones and an
implicit variant of the Netflix dataset) show that integrating context-aware
information into the state-of-the-art implicit recommender algorithm via our
factorization framework significantly improves recommendation quality.
Comment: Accepted for ECML/PKDD 2012, presented on 25th September 2012,
Bristol, UK
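The scalability claim - ALS for implicit feedback scaling linearly with the number of non-zero elements - can be illustrated in the two-dimensional special case, in the spirit of confidence-weighted implicit ALS. This is not the authors' iTALS code (which factorizes a tensor with context dimensions); all names and parameters below are illustrative. The trick is to precompute the Gram term over all items once per sweep and add per-user corrections only for the observed (non-zero) events.

```python
import numpy as np

def implicit_als(events, n_users, n_items, k=8, alpha=40.0,
                 reg=0.1, iters=10, seed=0):
    """Weighted ALS for implicit feedback, matrix special case
    (illustrative sketch, not the iTALS tensor algorithm).

    events: list of (user, item, feedback) triples; every unobserved
    cell implicitly has preference 0 with confidence 1, observed cells
    have preference 1 with confidence 1 + alpha * feedback."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((n_users, k)) * 0.1
    Q = rng.standard_normal((n_items, k)) * 0.1
    by_user, by_item = {}, {}
    for u, i, r in events:
        by_user.setdefault(u, []).append((i, r))
        by_item.setdefault(i, []).append((u, r))
    I = reg * np.eye(k)
    for _ in range(iters):
        QtQ = Q.T @ Q                      # shared over all users: covers the
        for u in range(n_users):           # confidence-1 background in one shot
            A, b = QtQ + I, np.zeros(k)
            for i, r in by_user.get(u, []):   # only non-zero events touched
                c = 1.0 + alpha * r
                A += (c - 1.0) * np.outer(Q[i], Q[i])
                b += c * Q[i]                 # observed preference is 1
            P[u] = np.linalg.solve(A, b)
        PtP = P.T @ P                      # symmetric update for items
        for i in range(n_items):
            A, b = PtP + I, np.zeros(k)
            for u, r in by_item.get(i, []):
                c = 1.0 + alpha * r
                A += (c - 1.0) * np.outer(P[u], P[u])
                b += c * P[u]
            Q[i] = np.linalg.solve(A, b)
    return P, Q
```

Each sweep costs O(nnz * k^2) for the corrections plus the k x k solves, so the per-iteration work is linear in the number of non-zero feedback events; iTALS applies the same precomputation idea per context dimension of the tensor.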