JNET: Learning User Representations via Joint Network Embedding and Topic Embedding
User representation learning is vital for capturing diverse user preferences, but
it is also challenging: user intents are latent and scattered across complex,
heterogeneous modalities of user-generated data, and thus not directly
measurable. Inspired by the concept of user schema in social psychology, we
take a new perspective to perform user representation learning by constructing
a shared latent space to capture the dependency among different modalities of
user-generated data. Both users and topics are embedded to the same space to
encode users' social connections and text content, to facilitate joint modeling
of different modalities, via a probabilistic generative framework. We evaluated
the proposed solution on large collections of Yelp reviews and StackOverflow
discussion posts, with their associated network structures. The proposed model
outperformed several state-of-the-art topic modeling based user models with
better predictive power in unseen documents, and state-of-the-art network
embedding based user models with improved link prediction quality in unseen
nodes. The learnt user representations also prove useful in content
recommendation, e.g., expert finding in StackOverflow.
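The shared-latent-space idea above can be illustrated with a minimal sketch: users and topics live as vectors in one space, so a user's topical interests and social links both reduce to inner products. All dimensions and variable names here are assumptions for illustration, not the JNET implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16                                   # shared latent dimension (assumed)
user_emb = rng.normal(size=(100, dim))     # one vector per user
topic_emb = rng.normal(size=(20, dim))     # one vector per topic, same space

def topic_affinity(u):
    """User u's affinity over topics: dot products in the shared space,
    normalized into a distribution."""
    scores = user_emb[u] @ topic_emb.T
    e = np.exp(scores - scores.max())
    return e / e.sum()

def link_score(u, v):
    """Social-link score between two users: similarity in the same space,
    which is what makes joint modeling of text and network possible."""
    return float(user_emb[u] @ user_emb[v])

aff = topic_affinity(0)
```

Because both quantities are computed in one space, training on text content and on network links updates the same user vectors, which is the dependency the paper exploits.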
NRPA: Neural Recommendation with Personalized Attention
Existing review-based recommendation methods usually use the same model to
learn the representations of all users/items from reviews posted by users
towards items. However, different users have different preferences and different
items have different characteristics. Thus, the same word or similar reviews
may have different informativeness for different users and items. In this paper
we propose a neural recommendation approach with personalized attention to
learn personalized representations of users and items from reviews. We use a
review encoder to learn representations of reviews from words, and a user/item
encoder to learn representations of users or items from reviews. We propose a
personalized attention model, and apply it to both review and user/item
encoders to select different important words and reviews for different
users/items. Experiments on five datasets validate that our approach can
effectively improve the performance of neural recommendation.

Comment: 4 pages, 4 figures
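The personalized attention described above can be sketched as follows: instead of one global attention query, each user has their own query vector, so the same words receive different weights for different users. Shapes and names are hypothetical, not the NRPA architecture itself.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def personalized_attention(word_vecs, user_query):
    """Aggregate word vectors into a review representation, weighting each
    word by its relevance to this particular user's query vector."""
    scores = word_vecs @ user_query    # one score per word, user-specific
    weights = softmax(scores)          # attention distribution over words
    return weights @ word_vecs         # personalized review representation

rng = np.random.default_rng(0)
words = rng.normal(size=(6, 8))        # 6 word vectors, dimension 8 (assumed)
user_q = rng.normal(size=8)            # per-user attention query (assumed)
rep = personalized_attention(words, user_q)
```

Swapping in a different `user_q` changes the weights, which is why the same review can yield different representations for different users.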
A Location-Sentiment-Aware Recommender System for Both Home-Town and Out-of-Town Users
Spatial item recommendation has become an important means to help people
discover interesting locations, especially when people pay a visit to
unfamiliar regions. Current research focuses on modelling individual and
collective geographical preferences for spatial item recommendation based on
users' check-in records, but it fails to explore the phenomenon of user
interest drift across geographical regions, i.e., users show different
interests when they travel to different regions. It also ignores the influence
of public comments on subsequent users' check-in behaviors. Specifically, it is
intuitive that users would refuse to check in to
a spatial item whose historical reviews seem negative overall, even though it
might fit their interests. Therefore, it is necessary to recommend the right
item to the right user at the right location. In this paper, we propose a
latent probabilistic generative model called LSARS to mimic the decision-making
process of users' check-in activities both in home-town and out-of-town
scenarios by adapting to user interest drift and crowd sentiments, which can
learn location-aware and sentiment-aware individual interests from the contents
of spatial items and user reviews. Due to the sparsity of user activities in
out-of-town regions, LSARS is further designed to incorporate the public
preferences learned from local users' check-in behaviors. Finally, we deploy
LSARS into two practical application scenes: spatial item recommendation and
target user discovery. Extensive experiments on two large-scale location-based
social networks (LBSNs) datasets show that LSARS achieves better performance
than existing state-of-the-art methods.

Comment: Accepted by KDD 201
Latent Dirichlet Markov Allocation for Sentiment Analysis
In recent years, probabilistic topic models have gained tremendous attention in data mining and natural language processing. In information retrieval for text mining, a variety of probabilistic topic models have been used to analyse the content of documents. A topic model is a generative model for documents: it specifies a probabilistic procedure by which documents can be generated. All topic models share the idea that documents are mixtures of topics, where a topic is a probability distribution over words. In this paper we describe the Latent Dirichlet Markov Allocation (LDMA) model, a new generative probabilistic topic model based on Latent Dirichlet Allocation (LDA) and the Hidden Markov Model (HMM), which emphasizes extracting multi-word topics from text data. LDMA is a four-level hierarchical Bayesian model in which topics are associated with documents, words are associated with topics, and topics can be represented by single- or multi-word terms. To evaluate the performance of LDMA, we report results on aspect detection in sentiment analysis, compared to the basic LDA model.
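The generative story shared by topic models, as described above, can be sketched directly: for each word in a document, draw a topic from the document's topic mixture, then draw a word from that topic's distribution over the vocabulary. This is the basic LDA sampling process, not LDMA's multi-word extension; sizes and hyperparameters are illustrative.

```python
import numpy as np

def generate_document(topic_word, doc_topic, n_words, rng):
    """Sample one document under the basic LDA generative story: each word
    first picks a topic z from the document's mixture, then a word w from
    that topic's distribution over the vocabulary."""
    words = []
    for _ in range(n_words):
        z = rng.choice(len(doc_topic), p=doc_topic)           # topic assignment
        w = rng.choice(topic_word.shape[1], p=topic_word[z])  # word from topic z
        words.append(int(w))
    return words

rng = np.random.default_rng(0)
K, V = 3, 10                                          # topics, vocabulary size
topic_word = rng.dirichlet(np.ones(V) * 0.1, size=K)  # K topic-word distributions
doc_topic = rng.dirichlet(np.ones(K) * 0.5)           # topic mixture for one document
doc = generate_document(topic_word, doc_topic, 20, rng)
```

LDMA's contribution is to chain consecutive topic assignments with an HMM so that multi-word terms can share a topic, whereas the sketch above draws each word's topic independently.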