Matrix recovery using Split Bregman
In this paper we address the problem of recovering a matrix with inherent
low-rank structure from its lower-dimensional projections. This problem is
frequently encountered in a wide range of areas, including pattern recognition,
wireless sensor networks, control systems, recommender systems, and image/video
reconstruction. Both in theory and in practice, the preferred way to solve
the low-rank matrix recovery problem is via nuclear norm minimization. In this
paper, we propose a Split Bregman algorithm for nuclear norm minimization. The
use of the Bregman technique improves the convergence speed of our algorithm and
gives a higher success rate. The reconstruction accuracy is also much better,
even when only a small number of linear measurements is available.
Our claims are supported by empirical results obtained with our algorithm and
by comparison to other existing methods for matrix recovery. The algorithms
are compared in terms of NMSE, execution time, and success rate for varying
ranks and sampling ratios.
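The workhorse inside nuclear-norm solvers of this kind is the singular value thresholding (SVT) operator, the proximal operator of the nuclear norm. The sketch below shows only that per-iteration step; the paper's full Split Bregman algorithm additionally maintains Bregman auxiliary variables and enforces the measurement constraints, which are omitted here. Variable names and the threshold value are illustrative.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: soft-threshold the singular values
    of X by tau. This is the proximal operator of the nuclear norm and
    the core per-iteration step in Split Bregman style solvers."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)  # shrink, zeroing small singular values
    return (U * s_thr) @ Vt

# A rank-3 matrix plus small noise: thresholding removes the noise
# directions and restores a low-rank estimate.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
noisy = L + 0.01 * rng.standard_normal((20, 20))
recovered = svt(noisy, tau=0.5)
print(np.linalg.matrix_rank(recovered, tol=1e-8))
```

Because the noise singular values lie well below the threshold while the signal singular values lie well above it, the recovered matrix has rank 3.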
Budget-Constrained Item Cold-Start Handling in Collaborative Filtering Recommenders via Optimal Design
It is well known that collaborative filtering (CF) based recommender systems
provide better modeling of users and items with considerable rating
history. The lack of historical ratings gives rise to the user and the item
cold-start problems; the latter is the main focus of this work. Most of the
current literature addresses this problem by integrating content-based
recommendation techniques to model the new item. However, in many cases such
content is not available, and the question arises whether this problem can
be mitigated using CF techniques only. We formalize this problem as an
optimization problem: given a new item, a pool of available users, and a budget
constraint, select which users to assign the task of rating the new item
so as to minimize the prediction error of our model. We show that the
objective function is monotone-supermodular, and propose efficient optimal
design based algorithms that attain an approximation to its optimum. Our
findings are verified by an empirical study using the Netflix dataset, where
the proposed algorithms outperform several baselines for the problem at hand.
Comment: 11 pages, 2 figures
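A budget-constrained user-selection scheme of this flavor can be sketched with a greedy D-optimal design heuristic: repeatedly pick the user whose latent factor vector most increases the log-determinant of the information matrix, i.e. most shrinks the uncertainty of the new item's factor estimate. Note this is an illustrative stand-in, not the paper's exact objective or approximation algorithm; the user factor matrix and the regularization constant are assumptions.

```python
import numpy as np

def greedy_d_optimal(user_factors, budget, reg=1.0):
    """Greedily select `budget` users under a D-optimality criterion.
    Each step adds the user whose rank-one update u u^T maximizes the
    log-det gain of the regularized information matrix A."""
    n, k = user_factors.shape
    A = reg * np.eye(k)                # regularized information matrix
    chosen = []
    for _ in range(budget):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            u = user_factors[i]
            # log-det gain of A + u u^T via the matrix determinant lemma
            gain = np.log1p(u @ np.linalg.solve(A, u))
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        A += np.outer(user_factors[best], user_factors[best])
    return chosen

# Hypothetical latent factors for 50 candidate users, dimension 5.
rng = np.random.default_rng(1)
U = rng.standard_normal((50, 5))
sel = greedy_d_optimal(U, budget=5)
print(sel)
```

Greedy selection is the natural baseline here because monotone (super/sub)modular structure is exactly what gives such one-step-at-a-time schemes provable approximation guarantees.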
Large-scale Dynamic Network Representation via Tensor Ring Decomposition
Large-scale Dynamic Networks (LDNs) are becoming increasingly important in
the Internet age. The dynamic nature of these networks captures the
evolution of the network structure and how edge weights change over time,
posing unique challenges for data analysis and modeling. A Latent Factorization
of Tensors (LFT) model facilitates efficient representation learning for an LDN,
but existing LFT models are almost all based on Canonical Polyadic
Factorization (CPF). This work therefore proposes a model based on Tensor Ring
(TR) decomposition for efficient representation learning for an LDN.
Specifically, we incorporate the principle of single latent factor-dependent,
non-negative, and multiplicative update (SLF-NMU) into the TR decomposition
model, and analyze the particular bias form of TR decomposition. Experimental
studies on two real LDNs demonstrate that the proposed method achieves higher
accuracy than existing models.
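For readers unfamiliar with the format, a Tensor Ring decomposition represents each entry of a d-way tensor as the trace of a product of core slices, x[i1,...,id] = trace(G1[:, i1, :] @ ... @ Gd[:, id, :]), with the last core's rank wrapping back to the first. The sketch below evaluates a single entry; the core shapes and the interpretation of the modes (e.g. node x node x time) are illustrative assumptions, and the SLF-NMU training rule is not shown.

```python
import numpy as np

def tr_entry(cores, idx):
    """Evaluate one entry of a Tensor Ring decomposition.
    Each core Gk has shape (r_k, n_k, r_{k+1}); the ring closes
    because r_{d+1} = r_1, so the chained product is square and
    its trace gives the tensor entry."""
    M = cores[0][:, idx[0], :]
    for core, i in zip(cores[1:], idx[1:]):
        M = M @ core[:, i, :]  # chain the per-mode slices
    return np.trace(M)

# A toy 3-way tensor, e.g. node x node x time, with TR ranks (2, 3, 2).
rng = np.random.default_rng(2)
cores = [rng.standard_normal((2, 4, 3)),
         rng.standard_normal((3, 4, 2)),
         rng.standard_normal((2, 5, 2))]
x = tr_entry(cores, (1, 2, 3))
```

Unlike CPF, whose single shared rank couples all modes, the TR format assigns a rank to each bond of the ring, which is the flexibility the abstract's move away from CPF is after.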
- …