14 research outputs found

    Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback

    Full text link
    Although the implicit feedback based recommendation problem (when only the user history is available but there are no ratings) is the most typical setting in real-world applications, it is much less researched than the explicit feedback case. State-of-the-art algorithms that are efficient in the explicit case cannot be straightforwardly transferred to the implicit case if scalability is to be maintained. There are few, if any, implicit feedback benchmark datasets, so new ideas are usually tested on explicit benchmarks. In this paper, we propose a generic context-aware implicit feedback recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization learning method that scales linearly with the number of non-zero elements in the tensor. The method also allows us to incorporate diverse context information into the model while maintaining its computational efficiency. In particular, we present two such context-aware implementation variants of iTALS. The first incorporates seasonality and can distinguish user behavior in different time intervals. The other views the user history as sequential information and can recognize usage patterns typical of certain groups of items, e.g., automatically telling apart product types or categories that are typically purchased repeatedly (collectibles, grocery goods) or once (household appliances). Experiments performed on three implicit datasets (two proprietary ones and an implicit variant of the Netflix dataset) show that integrating context-aware information into the state-of-the-art implicit recommender algorithm via our factorization framework improves recommendation quality significantly.
    Comment: Accepted for ECML/PKDD 2012, presented on 25th September 2012, Bristol, UK
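    The ALS-based tensor factorization idea behind iTALS can be illustrated with a minimal dense 3-way CP decomposition. This is a sketch only: the actual iTALS algorithm works on weighted implicit-feedback tensors and exploits sparsity to scale with the number of non-zeros, which this toy dense version does not, and all function names here are illustrative.

```python
import numpy as np

def khatri_rao(X, Y):
    # Column-wise Kronecker product: (I x R), (J x R) -> (I*J x R).
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def als_cp(T, rank, iters=200, seed=0):
    # Alternating least squares for a 3-way CP model:
    # T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r].
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        # Each factor is the least-squares solution with the others fixed.
        A = T.reshape(I, J * K) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = np.moveaxis(T, 1, 0).reshape(J, I * K) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = np.moveaxis(T, 2, 0).reshape(K, I * J) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

    In a context-aware setting the three modes would be users, items, and a context dimension such as the season; the paper's contribution is making these alternating updates efficient on implicit data.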

    CTR Prediction for DSP with Improved Cube Factorization Model from Historical Bidding Log

    No full text

    Towards scalable view-invariant gait recognition: Multilinear analysis for gait

    No full text
    Abstract. In this paper we introduce a novel approach for learning a view-invariant gait representation that does not require synthesizing particular views or any camera calibration. Given walking sequences captured from multiple views for multiple people, we fit a multilinear generative model using higher-order singular value decomposition, which decomposes view factors, body configuration factors, and gait-style factors. Gait-style is a view-invariant, time-invariant, and speed-invariant gait signature that can then be used in recognition. In the recognition phase, a new walking cycle of an unknown person in an unknown view is automatically aligned to the learned model, and an iterative procedure is then used to solve for both the gait-style parameter and the view. The proposed framework is scalable: a new person can be added to the already learned model even if only a single cycle from a single view is available.
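    The higher-order singular value decomposition used to fit such a multilinear model can be sketched in a few lines. This is a generic truncated HOSVD, not the authors' full gait pipeline, and the array shapes are illustrative:

```python
import numpy as np

def hosvd(T, ranks):
    # Truncated higher-order SVD: one SVD per mode-unfolding yields the
    # factor matrix for that mode; the core is T projected onto all factors.
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

    For the gait model the modes would correspond to views, body configurations, and gait styles; recognition then solves for the style coefficients of a new sequence against the learned factor matrices.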

    Facial expression analysis using nonlinear decomposable generative models

    No full text
    Abstract. We present a new framework to represent and analyze dynamic facial motions using a decomposable generative model. In this paper, we consider facial expressions that lie on a one-dimensional closed manifold, i.e., they start from some configuration and come back to the same configuration, while other sources of variability, such as different classes of expression and different people, all need to be parameterized. The learned model supports tasks such as facial expression recognition, person identification, and synthesis. We aim to learn a generative model that can generate different dynamic facial appearances for different people and for different expressions. Given a single image or a sequence of images, we can use the model to solve for the temporal embedding, expression type, and person identification parameters. As a result, we can directly infer the intensity of a facial expression, the expression type, and the person's identity from the visual input. The model can successfully be used to recognize expressions performed by people never seen during training. We show experimental results from applying the framework to simultaneous face and facial expression recognition. Sub-categories: 1.1 Novel algorithms, 1.6 Others: modeling facial expression

    Learning Modewise Independent Components from Tensor Data Using Multilinear Mixing Model

    No full text
    Abstract. Independent component analysis (ICA) is a popular unsupervised learning method. This paper extends it to multilinear modewise ICA (MMICA) for tensors and explores two architectures in learning and recognition. MMICA models tensor data as mixtures generated from modewise source matrices that encode statistically independent information. Its sources have more compact representations than the sources in ICA. We embed ICA into the multilinear principal component analysis framework to solve for each source matrix alternately over a few iterations. We then obtain mixing tensors through regularized inverses of the source matrices. Simulations on synthetic data show that MMICA can estimate hidden sources accurately from structured tensor data. Moreover, in face recognition experiments, it outperforms competing solutions with both architectures.
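    The multilinear mixing model itself is easy to state: an observation is the source tensor multiplied in each mode by that mode's mixing matrix, and sources are recovered by applying (regularized) inverses modewise. A toy sketch of the forward and inverse model follows, with made-up shapes and without the ICA estimation step that is MMICA's actual contribution:

```python
import numpy as np

def mode_multiply(T, A, mode):
    # Multiply tensor T along `mode` by matrix A (A's rows index the new mode).
    return np.moveaxis(np.tensordot(A, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def modewise_mix(S, mixings):
    # Forward model: X = S x_1 A1 x_2 A2 ... (one mixing matrix per mode).
    X = S
    for mode, A in enumerate(mixings):
        X = mode_multiply(X, A, mode)
    return X

def modewise_demix(X, mixings):
    # Recover the sources with modewise pseudo-inverses; MMICA instead has to
    # estimate the source matrices by embedding ICA in multilinear PCA.
    S = X
    for mode, A in enumerate(mixings):
        S = mode_multiply(S, np.linalg.pinv(A), mode)
    return S
```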

    Collective sampling and analysis of high order tensors for chatroom communications

    No full text
    Abstract. This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling and (ii) n-way data analysis techniques on multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that the statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show on the real data that collective analysis of 3-way data arrays (users x keywords x time), known as high-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
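    The two-sample Kolmogorov-Smirnov statistic behind the first contribution is simply the largest vertical gap between two empirical CDFs. A minimal sketch (the sample arrays below are invented, not the paper's chatroom data):

```python
import numpy as np

def ks_statistic(x, y):
    # Two-sample KS statistic: max distance between the empirical CDFs.
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side='right') / len(x)
    cdf_y = np.searchsorted(y, grid, side='right') / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))

# Illustrative stand-ins for message timestamps from one server vs. many.
rng = np.random.default_rng(0)
single_server = rng.exponential(scale=1.0, size=500)
collective = rng.exponential(scale=1.0, size=500)  # same distribution
gap = ks_statistic(single_server, collective)      # small for matching data
```

    A statistic below the critical value for the sample sizes supports the paper's claim that collectively sampled data is statistically indistinguishable from single-server data.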

    Clustering View-Segmented Documents via Tensor Modeling

    No full text
    We propose a clustering framework for view-segmented documents, i.e., relatively long documents made up of smaller fragments that can be provided according to a target set of views or aspects. The framework is designed to exploit a view-based document segmentation in a third-order tensor model, whose decomposition enables any standard document clustering algorithm to better reflect the multi-faceted nature of the documents. Experimental results on document collections featuring paragraph-based, metadata-based, or user-driven views show the significance of the proposed approach, highlighting performance improvements in the document clustering task.
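    The third-order model can be sketched by building a documents x terms x views count tensor from view-segmented documents. The documents, view names, and fragments below are invented for illustration; a decomposition of this tensor (e.g. HOSVD or CP) would then supply per-document features for any standard clustering algorithm:

```python
import numpy as np

# Hypothetical view-segmented documents: each maps a view name to a fragment.
docs = [
    {"intro": "tensor model for documents", "results": "clustering quality improves"},
    {"intro": "tensor decomposition method", "results": "clustering quality gains"},
]
views = ["intro", "results"]
vocab = sorted({w for d in docs for frag in d.values() for w in frag.split()})

# Third-order count tensor: documents x terms x views.
T = np.zeros((len(docs), len(vocab), len(views)))
for i, doc in enumerate(docs):
    for k, view in enumerate(views):
        for w in doc[view].split():
            T[i, vocab.index(w), k] += 1.0
```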