
    A Broad Learning Approach for Context-Aware Mobile Application Recommendation

    With the rapid development of mobile apps, the large number of apps available in application stores makes it challenging for users to locate appropriate ones. Providing accurate mobile app recommendations has therefore become an imperative task. Conventional approaches mainly focus on learning users' preferences and app features to predict user-app ratings. However, most of them do not consider the interactions among apps' contextual information. To address this issue, we propose a broad learning approach for Context-Aware app recommendation with Tensor Analysis (CATA). Specifically, we utilize a tensor-based framework to effectively integrate users' preferences, app category information, and multi-view features to improve app rating prediction. The multidimensional structure is employed to capture the hidden relationships between multiple app categories and multi-view features. We develop an efficient factorization method that applies Tucker decomposition to learn the full-order interactions among multiple categories and features. Furthermore, we employ a group $\ell_1$-norm regularization to learn the group-wise feature importance of each view with respect to each app category. Experiments on two real-world mobile app datasets demonstrate the effectiveness of the proposed method.
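    A minimal sketch of the idea described above, not the authors' implementation: a rating is modeled as the contraction of a small Tucker core tensor with a user factor, an app-category factor, and a projected multi-view feature vector, and a group l1 (l2,1) penalty is placed on per-view weight blocks. All names, ranks, and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_categories, n_feature_dims = 100, 10, 32
r_u, r_c, r_f = 5, 4, 6                      # Tucker ranks (assumed)

U = rng.normal(size=(n_users, r_u))          # user latent factors
C = rng.normal(size=(n_categories, r_c))     # app-category latent factors
F = rng.normal(size=(n_feature_dims, r_f))   # projection of multi-view features
G = rng.normal(size=(r_u, r_c, r_f))         # core tensor of full-order interactions

def predict_rating(user_id, category_id, app_features):
    """Contract the core tensor with the three mode factors to score one app."""
    f = app_features @ F                     # project raw features to r_f dims
    return np.einsum("abc,a,b,c->", G, U[user_id], C[category_id], f)

def group_l1_penalty(W, groups, lam=0.1):
    """Group l1 (l2,1) penalty: sum of l2 norms of each view's weight block."""
    return lam * sum(np.linalg.norm(W[idx]) for idx in groups)

# Example: score one app from a random multi-view feature vector,
# then compute the penalty over two assumed views (dims 0-15 and 16-31).
x = rng.normal(size=n_feature_dims)
views = [np.arange(0, 16), np.arange(16, 32)]
print(predict_rating(user_id=3, category_id=2, app_features=x))
print(group_l1_penalty(F, views))
```

    In the paper the factors and core are learned from observed ratings; here they are random placeholders so the contraction and penalty can be run end to end.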

    Discrete Factorization Machines for Fast Feature-based Recommendation

    User and item side-information features are crucial for accurate recommendation. However, the large number of feature dimensions, often larger than 10^7, results in expensive storage and computational cost. This prohibits fast recommendation, especially on mobile applications where computational resources are very limited. In this paper, we develop a generic feature-based recommendation model, called Discrete Factorization Machine (DFM), for fast and accurate recommendation. DFM binarizes the real-valued model parameters (e.g., float32) of every feature embedding into binary codes (e.g., boolean), and thus supports efficient storage and fast user-item score computation. To avoid severe quantization loss from binarization, we propose a convergent updating rule that resolves the challenging discrete optimization of DFM. Through extensive experiments on two real-world datasets, we show that 1) DFM consistently outperforms state-of-the-art binarized recommendation models, and 2) DFM shows very competitive performance compared to its real-valued counterpart (FM), demonstrating that the quantization loss is minimized. This work was accepted by IJCAI 2018.
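    A minimal sketch of why binary codes make scoring cheap, under assumed names and sizes rather than the paper's actual code: if each feature embedding is a code in {-1, +1}^k stored as packed bits, the pairwise FM interaction reduces to XOR plus a popcount, since <b_i, b_j> = k - 2 * Hamming(b_i, b_j).

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, k = 1000, 64                     # k = binary code length (assumed)

codes = rng.integers(0, 2, size=(n_features, k), dtype=np.uint8)  # 0/1 bits
packed = np.packbits(codes, axis=1)          # 8 bytes/feature vs. 256 for float32
bias = rng.normal(size=n_features)           # first-order terms kept real-valued

def binary_inner(i, j):
    """<b_i, b_j> for codes in {-1, +1}^k via Hamming distance on packed bits."""
    hamming = int(np.unpackbits(packed[i] ^ packed[j]).sum())
    return k - 2 * hamming

def dfm_score(active_features):
    """FM-style score over the active (non-zero) features of one user-item pair."""
    score = bias[active_features].sum()
    for a in range(len(active_features)):
        for b in range(a + 1, len(active_features)):
            score += binary_inner(active_features[a], active_features[b])
    return score

# Example: score a user-item pair described by three active feature indices.
print(dfm_score([3, 17, 250]))
```

    The codes here are random placeholders; in DFM they are learned with the convergent discrete updating rule described in the abstract.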