
    Permutation Models for Collaborative Ranking

    We study the problem of collaborative filtering where ranking information is available. Focusing on the core of the collaborative ranking process, the user and their community, we propose new models for representing the underlying permutations and predicting ranks. The first approach is based on the assumption that the user makes successive choices of items in a stage-wise manner. In particular, we extend the Plackett-Luce model in two ways: introducing parameter factoring to account for user-specific contributions, and modelling the latent community in a generative setting. The second approach relies on a log-linear parameterisation, which relaxes the discrete-choice assumption but makes learning and inference considerably more involved. We propose MCMC-based learning and inference methods and derive linear-time prediction algorithms.
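
    The stage-wise choice assumption above corresponds to the standard Plackett-Luce likelihood: at each stage the next item is picked from the remaining ones with probability proportional to its worth. A minimal sketch of that base likelihood follows; the function name, the NumPy/SciPy usage, and the toy worths are illustrative assumptions, and the paper's user-specific factoring and latent-community extensions are not reproduced here.

        import numpy as np
        from scipy.special import logsumexp

        def plackett_luce_loglik(ranking, log_worth):
            # ranking: item indices ordered from most to least preferred
            # log_worth: log of the Plackett-Luce worth parameter of each item
            s = np.asarray(log_worth)[np.asarray(ranking)]
            # at stage t, item ranking[t] is chosen among the items still unranked
            return float(sum(s[t] - logsumexp(s[t:]) for t in range(len(s))))

        # toy usage: three items with log-worths 2.0, 0.0 and -1.0
        print(plackett_luce_loglik([0, 1, 2], [2.0, 0.0, -1.0]))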

    Expert Mining Collaborative Filtering Recommendation Algorithm Based on Signal Fluctuation

    This paper proposes an advanced expert collaborative filtering recommendation algorithm. Although ordinary expert-system filtering algorithms have improved the recommendation accuracy of collaborative filtering to a certain extent, they do not screen experts by their level of expertise, and the credibility of experts varies. This paper therefore proposes an expert mining system based on signal fluctuations, which uses signal processing techniques to assess each expert's level. The method introduces a kurtosis factor: treating a user's rating sequence as a random discrete signal, it randomly reorders the ratings k times, averages the resulting kurtosis values, and takes this average kurtosis as the credibility of the expert user. Through experiments on multiple datasets, including MovieLens, Jester, Book-Crossing, and Last.fm, we demonstrate the effectiveness and reliability of our method.
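
    Read literally, the credibility score is the kurtosis of a user's rating sequence averaged over k random reorderings. A minimal sketch of that computation is given below; the function name, the choice of k, and SciPy's default (Fisher) kurtosis convention are assumptions, since the abstract does not fix them.

        import numpy as np
        from scipy.stats import kurtosis

        def expert_credibility(ratings, k=20, seed=0):
            # treat the rating sequence as a discrete signal, shuffle it k times,
            # and use the average kurtosis as the user's credibility score
            rng = np.random.default_rng(seed)
            ratings = np.asarray(ratings, dtype=float)
            values = [kurtosis(rng.permutation(ratings)) for _ in range(k)]
            return float(np.mean(values))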

    Distributed information consensus filters for simultaneous input and state estimation

    This paper describes distributed information filtering in which a network of sensor nodes is required to simultaneously estimate the input and state of a linear discrete-time system in a collaborative manner. Our purpose is to develop a consensus strategy in which sensor nodes communicate within the network through a sequence of Kalman iterations and data diffusion. A novel recursive information filter is proposed by integrating the input estimation error into the measurement data and weighted information matrices. In the fusion process, the local state filters exchange estimation information using a consensus averaging algorithm, which penalizes disagreement in a dynamic manner. A simulation example is provided to compare the performance of the distributed information filter with the optimal Gillijns–De Moor algorithm.
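
    Only the consensus-averaging component is sketched below: each node nudges its local information pair (vector y_i, matrix Y_i) toward its neighbours' values, which is how disagreement is penalized during fusion. The consensus rate epsilon, the adjacency representation, and the variable names are illustrative assumptions; the full input-and-state information filter from the paper is not reproduced.

        import numpy as np

        def consensus_step(info_vecs, info_mats, adjacency, epsilon=0.1):
            # one consensus-averaging iteration over the sensor network:
            # every node moves its information vector/matrix toward its neighbours
            n = len(info_vecs)
            new_vecs, new_mats = [], []
            for i in range(n):
                y, Y = info_vecs[i].copy(), info_mats[i].copy()
                for j in range(n):
                    if i != j and adjacency[i, j]:
                        y += epsilon * (info_vecs[j] - info_vecs[i])
                        Y += epsilon * (info_mats[j] - info_mats[i])
                new_vecs.append(y)
                new_mats.append(Y)
            return new_vecs, new_mats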

    Multi-Feature Discrete Collaborative Filtering for Fast Cold-start Recommendation

    Hashing is an effective technique for addressing the large-scale recommendation problem, thanks to its high computational and storage efficiency when calculating user preferences over items. However, existing hashing-based recommendation methods still suffer from two important problems: 1) Their recommendation process relies mainly on user-item interactions and a single specific content feature; when the interaction history or that content feature is unavailable (the cold-start problem), their performance deteriorates seriously. 2) Existing methods learn hash codes with relaxed optimization or adopt discrete coordinate descent to solve for binary hash codes directly, which results in significant quantization loss or consumes considerable computation time. In this paper, we propose a fast cold-start recommendation method, called Multi-Feature Discrete Collaborative Filtering (MFDCF), to solve these problems. Specifically, a low-rank self-weighted multi-feature fusion module is designed to adaptively project the multiple content features into binary yet informative hash codes by fully exploiting their complementarity. Additionally, we develop a fast discrete optimization algorithm to directly compute the binary hash codes with simple operations. Experiments on two public recommendation datasets demonstrate that MFDCF outperforms state-of-the-art methods in various aspects.
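
    The efficiency argument rests on scoring users and items through binary hash codes, where ranking by the inner product of ±1 codes is equivalent to ranking by Hamming distance. The sketch below illustrates only that scoring step with random codes; it is not MFDCF's learned multi-feature fusion or its discrete optimizer.

        import numpy as np

        def hamming_scores(user_code, item_codes):
            # for r-bit codes in {-1, +1}^r: Hamming(b_u, b_v) = (r - b_u . b_v) / 2,
            # so larger inner products mean smaller Hamming distances
            return item_codes @ user_code

        # toy usage with random 16-bit codes
        rng = np.random.default_rng(0)
        user = rng.choice([-1, 1], size=16)
        items = rng.choice([-1, 1], size=(100, 16))
        top5 = np.argsort(-hamming_scores(user, items))[:5]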

    Discrete Factorization Machines for Fast Feature-based Recommendation

    User and item side-information features are crucial for accurate recommendation. However, the large number of feature dimensions, e.g., usually larger than 10^7, results in expensive storage and computational cost. This prohibits fast recommendation, especially in mobile applications where computational resources are very limited. In this paper, we develop a generic feature-based recommendation model, called Discrete Factorization Machine (DFM), for fast and accurate recommendation. DFM binarizes the real-valued model parameters (e.g., float32) of every feature embedding into binary codes (e.g., boolean), and thus supports efficient storage and fast user-item score computation. To avoid the severe quantization loss of binarization, we propose a convergent updating rule that resolves the challenging discrete optimization of DFM. Through extensive experiments on two real-world datasets, we show that 1) DFM consistently outperforms state-of-the-art binarized recommendation models, and 2) DFM shows very competitive performance compared to its real-valued counterpart (FM), demonstrating minimal quantization loss. This work was accepted by IJCAI 2018.
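
    To make the binarization idea concrete, the sketch below sign-binarizes real-valued feature embeddings and evaluates the second-order factorization-machine term with the usual linear-time identity. Plain sign() binarization stands in for DFM's convergent discrete updating rule, so the names and the binarization step are assumptions, not the paper's algorithm.

        import numpy as np

        def binarize(V):
            # naive sign binarization of real-valued embeddings into {-1, +1} codes
            # (DFM instead learns the codes directly to avoid quantization loss)
            return np.where(V >= 0, 1.0, -1.0)

        def fm_pairwise_score(x, V):
            # second-order FM term sum_{i<j} <v_i, v_j> x_i x_j computed via
            # 0.5 * (||sum_i x_i v_i||^2 - sum_i x_i^2 ||v_i||^2)
            s = V.T @ x
            per_feature = (V ** 2).T @ (x ** 2)
            return 0.5 * float(np.sum(s ** 2 - per_feature))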

    Hierarchical Compound Poisson Factorization

    Non-negative matrix factorization models based on a hierarchical Gamma-Poisson structure capture user and item behavior effectively in extremely sparse data sets, making them an ideal choice for collaborative filtering applications. Hierarchical Poisson factorization (HPF) in particular has proved successful for scalable recommendation systems with extreme sparsity. HPF, however, suffers from a tight coupling of the sparsity model (absence of a rating) and the response model (the value of the rating), which limits the expressiveness of the latter. Here, we introduce hierarchical compound Poisson factorization (HCPF), which retains the favorable Gamma-Poisson structure of HPF and its scalability to high-dimensional, extremely sparse matrices. More importantly, HCPF decouples the sparsity model from the response model, allowing us to choose the most suitable distribution for the response. HCPF can capture binary, non-negative discrete, non-negative continuous, and zero-inflated continuous responses. We compare HCPF with HPF on nine discrete and three continuous data sets and conclude that HCPF captures the relationship between sparsity and response better than HPF. To appear in Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016 (JMLR: W&CP).
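
    The shared Gamma-Poisson backbone, and the way a compound response decouples the sparsity model from the response model, can be sketched generatively as below. The hyperparameter values and the Gaussian element distribution are illustrative assumptions; HCPF supports several response distributions and is fitted to data rather than sampled in practice.

        import numpy as np

        def sample_gamma_poisson(n_users, n_items, k=10, a=0.3, b=1.0, seed=0):
            # Gamma priors on user preferences and item attributes,
            # Poisson counts on their inner products (the sparsity model)
            rng = np.random.default_rng(seed)
            theta = rng.gamma(a, 1.0 / b, size=(n_users, k))
            beta = rng.gamma(a, 1.0 / b, size=(n_items, k))
            counts = rng.poisson(theta @ beta.T)
            # compound response (sketch): the observed value is the sum of
            # `count` i.i.d. Gaussian element draws; a zero count yields a zero entry
            mu, sigma = 1.0, 0.5
            response = np.where(
                counts > 0,
                rng.normal(loc=counts * mu, scale=np.sqrt(np.maximum(counts, 1)) * sigma),
                0.0,
            )
            return counts, response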