Sequential Prediction of Social Media Popularity with Deep Temporal Context Networks
Popularity prediction has a profound impact on social media, since it
offers opportunities to reveal individual preferences and public attention in
evolving social systems. Previous research, although achieving promising
results, neglects one distinctive characteristic of social data, i.e.,
sequentiality. For example, the popularity of online content is generated over
time with sequential post streams of social media. To investigate the
sequential prediction of popularity, we propose a novel prediction framework
called Deep Temporal Context Networks (DTCN) that takes both temporal context
and temporal attention into account. Our DTCN contains three main
components, spanning embedding, learning, and prediction. With a joint embedding
network, we obtain a unified deep representation of multi-modal user-post data
in a common embedding space. Then, based on the embedded data sequence over
time, temporal context learning attempts to recurrently learn two adaptive
temporal contexts for sequential popularity. Finally, a novel temporal
attention is designed to predict new popularity (the popularity of a new
user-post pair) with temporal coherence across multiple time-scales.
Experiments on our released image dataset with about 600K Flickr photos
demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms,
with an average of 21.51% relative performance improvement in the popularity
prediction (Spearman Rank Correlation).
Comment: Accepted at IJCAI-17.
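The three-stage pipeline above can be made concrete with a small sketch. The PyTorch code below is a minimal illustration under assumed shapes and layer choices: a joint embedding that projects user and photo features into one space, an LSTM as the temporal-context learner, and a simple additive temporal attention over the history. None of the names or sizes (DTCNSketch, embed_dim, the attention form) come from the paper.

```python
# Minimal sketch of a DTCN-style pipeline; all dimensions and layer
# choices are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class DTCNSketch(nn.Module):
    def __init__(self, user_dim, photo_dim, embed_dim=128, hidden_dim=128):
        super().__init__()
        # Joint embedding: project multi-modal user/post features into a common space.
        self.user_proj = nn.Linear(user_dim, embed_dim)
        self.photo_proj = nn.Linear(photo_dim, embed_dim)
        # Temporal context learning over the embedded post sequence.
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Temporal attention: score each historical step against the new post.
        self.attn = nn.Linear(hidden_dim + embed_dim, 1)
        self.out = nn.Linear(hidden_dim + embed_dim, 1)

    def forward(self, user_hist, photo_hist, user_new, photo_new):
        # user_hist: (B, T, user_dim); photo_hist: (B, T, photo_dim)
        hist = torch.tanh(self.user_proj(user_hist) + self.photo_proj(photo_hist))
        query = torch.tanh(self.user_proj(user_new) + self.photo_proj(photo_new))
        ctx, _ = self.rnn(hist)                              # (B, T, hidden_dim)
        q = query.unsqueeze(1).expand(-1, ctx.size(1), -1)
        weights = self.attn(torch.cat([ctx, q], dim=-1)).softmax(dim=1)
        pooled = (weights * ctx).sum(dim=1)                  # attended temporal context
        return self.out(torch.cat([pooled, query], dim=-1)).squeeze(-1)

model = DTCNSketch(user_dim=10, photo_dim=20)
score = model(torch.randn(4, 8, 10), torch.randn(4, 8, 20),
              torch.randn(4, 10), torch.randn(4, 20))
print(score.shape)  # torch.Size([4]) -- one popularity score per new user-post pair
```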
Bilinear Graph Neural Network with Neighbor Interactions
Graph Neural Network (GNN) is a powerful model to learn representations and
make predictions on graph data. Existing efforts on GNN have largely defined
the graph convolution as a weighted sum of the features of the connected nodes
to form the representation of the target node. Nevertheless, the operation of
weighted sum assumes the neighbor nodes are independent of each other, and
ignores the possible interactions between them. When such interactions exist,
e.g., when the co-occurrence of two neighbor nodes is a strong signal of the
target node's characteristics, existing GNN models may fail to capture it.
In this work, we argue for the importance of modeling the interactions
between neighbor nodes in GNNs. We propose a new graph convolution operator,
which augments the weighted sum with pairwise interactions of the
representations of neighbor nodes. We term this framework the Bilinear Graph
Neural Network (BGNN), which improves GNN representation ability with bilinear
interactions between neighbor nodes. In particular, we specify two BGNN models
named BGCN and BGAT, based on the well-known GCN and GAT, respectively.
Empirical results on three public benchmarks of semi-supervised node
classification verify the effectiveness of BGNN -- BGCN (BGAT) outperforms GCN
(GAT) by 1.6% (1.5%) in classification accuracy. Code is available at:
https://github.com/zhuhm1996/bgnn.
Comment: Accepted by IJCAI 2020. SOLE copyright holder is IJCAI (International
Joint Conferences on Artificial Intelligence); all rights reserved.
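The core operator lends itself to a short sketch: alongside the usual weighted sum of neighbor features, aggregate the elementwise products of all neighbor pairs, which is computable in linear time via the identity sum_{i<j} x_i * x_j = 0.5 * ((sum_i x_i)^2 - sum_i x_i^2). The PyTorch snippet below illustrates that idea only; the mixing weight alpha, the mean normalizations, and all names are assumptions rather than the exact BGCN/BGAT layers.

```python
# Illustrative bilinear neighbor aggregation; not the exact BGCN/BGAT layer.
import torch

def bilinear_aggregate(h, neighbors, W_sum, W_bi, alpha=0.5):
    """h: (N, d) node features; neighbors: list of neighbor-index tensors."""
    out = []
    for idx in neighbors:
        x = h[idx] @ W_bi                       # transformed neighbor features
        s, sq = x.sum(0), (x * x).sum(0)
        n = len(idx)
        # sum over neighbor pairs of elementwise products, in linear time
        pair = 0.5 * (s * s - sq) / max(n * (n - 1) / 2, 1)
        lin = (h[idx] @ W_sum).mean(0)          # standard weighted-sum term
        out.append((1 - alpha) * lin + alpha * pair)
    return torch.stack(out)

h = torch.randn(5, 8)
W = torch.randn(8, 8)
nbrs = [torch.tensor([1, 2]), torch.tensor([0, 3, 4]), torch.tensor([0]),
        torch.tensor([1]), torch.tensor([1])]
print(bilinear_aggregate(h, nbrs, W, W.clone()).shape)  # torch.Size([5, 8])
```

For a single-neighbor node the pairwise term is exactly zero, so the operator gracefully reduces to the ordinary weighted sum.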
How to Retrain Recommender System? A Sequential Meta-Learning Method
Practical recommender systems need to be periodically retrained to refresh the
model with new interaction data. To pursue high model fidelity, it is usually
desirable to retrain the model on both historical and new data, since it can
account for both long-term and short-term user preference. However, a full
model retraining could be very time-consuming and memory-costly, especially
when the scale of historical data is large. In this work, we study the model
retraining mechanism for recommender systems, a topic of high practical value
that has been relatively little explored in the research community.
Our first belief is that retraining the model on historical data is
unnecessary, since the model has been trained on it before. Nevertheless,
normal training on new data only may easily cause overfitting and forgetting
issues, since the new data is of a smaller scale and contains less information
about long-term user preference. To address this dilemma, we propose a new
training method, aiming to abandon the historical data during retraining
through learning to transfer the past training experience. Specifically, we
design a neural network-based transfer component, which transforms the old
model to a new model that is tailored for future recommendations. To learn the
transfer component well, we optimize the "future performance" -- i.e., the
recommendation accuracy evaluated in the next time period. Our Sequential
Meta-Learning (SML) method offers a general training paradigm that is applicable
to any differentiable model. We demonstrate SML on matrix factorization and
conduct experiments on two real-world datasets. Empirical results show that SML
not only achieves significant speed-up, but also outperforms the full model
retraining in recommendation accuracy, validating the effectiveness of our
proposals. We release our code at: https://github.com/zyang1580/SML.
Comment: Appeared in SIGIR 2020.
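A toy sketch of this paradigm: fit a copy of the model on the new period's data only, then let a small transfer network combine the old and newly fitted parameters, meta-optimizing the transfer network against next-period ("future") loss. The gating MLP and every name below are illustrative stand-ins for the paper's neural transfer component.

```python
# Toy sequential meta-learning loop; the transfer net and losses are
# illustrative assumptions, not the paper's exact SML formulation.
import torch
import torch.nn as nn

class Transfer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, w_old, w_new):
        g = self.gate(torch.cat([w_old, w_new], dim=-1))
        return g * w_new + (1 - g) * w_old      # learned interpolation

def retrain_period(w_old, new_data_loss, future_loss, transfer, steps=50):
    # 1) Fit a fresh copy of the parameters on the new period's data only.
    w_new = w_old.clone().requires_grad_(True)
    opt = torch.optim.Adam([w_new], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        new_data_loss(w_new).backward()
        opt.step()
    # 2) Meta-train the transfer net on next-period ("future") performance.
    meta_opt = torch.optim.Adam(transfer.parameters(), lr=0.01)
    for _ in range(steps):
        meta_opt.zero_grad()
        future_loss(transfer(w_old, w_new.detach())).backward()
        meta_opt.step()
    return transfer(w_old, w_new.detach()).detach()

dim = 16
transfer = Transfer(dim)
target_new, target_fut = torch.randn(dim), torch.randn(dim)
w = retrain_period(torch.zeros(dim),
                   lambda w: ((w - target_new) ** 2).mean(),   # stand-in losses
                   lambda w: ((w - target_fut) ** 2).mean(),
                   transfer)
print(w.shape)  # torch.Size([16])
```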
HDIdx: High-Dimensional Indexing for Efficient Approximate Nearest Neighbor Search
Fast Nearest Neighbor (NN) search is a fundamental challenge in large-scale
data processing and analytics, particularly for analyzing multimedia contents
which are often of high dimensionality. Instead of exact NN search,
extensive research efforts have focused on approximate NN search
algorithms. In this work, we present "HDIdx", an efficient high-dimensional
indexing library for fast approximate NN search, which is open-source and
written in Python. It offers a family of state-of-the-art algorithms that
convert input high-dimensional vectors into compact binary codes, making
NN search very efficient and scalable, with very low space complexity
- …
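HDIdx's own API is not shown in this excerpt, so the numpy sketch below only illustrates the general scheme the abstract describes: compress high-dimensional vectors into compact binary codes (here via random-hyperplane hashing, one simple such method) and answer queries by Hamming distance on the codes instead of exact distance on the raw vectors.

```python
# Stand-in sketch of compact-binary-code ANN search; not HDIdx's API.
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 128, 64
planes = rng.standard_normal((dim, n_bits))      # random projection hyperplanes

def encode(x):
    """Map (n, dim) float vectors to (n, n_bits) binary codes."""
    return (x @ planes > 0).astype(np.uint8)

def search(query, codes, k=5):
    """Return indices of the k codes closest to the query in Hamming distance."""
    q = encode(query[None])[0]
    dist = (codes != q).sum(axis=1)              # Hamming distance per item
    return np.argsort(dist)[:k]

db = rng.standard_normal((10000, dim))
codes = encode(db)                               # 64 bits/vector vs. 128 floats
query = db[42] + 0.01 * rng.standard_normal(dim) # perturbed database item
print(search(query, codes))                      # index 42 should rank at or near the top
```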
