Representation Learning for Attributed Multiplex Heterogeneous Network
Network embedding (or graph embedding) has been widely used in many
real-world applications. However, existing methods mainly focus on networks
with single-typed nodes/edges and cannot scale well to handle large networks.
Many real-world networks consist of billions of nodes and edges of multiple
types, and each node is associated with different attributes. In this paper, we
formalize the problem of embedding learning for the Attributed Multiplex
Heterogeneous Network and propose a unified framework to address this problem.
The framework supports both transductive and inductive learning. We also
provide a theoretical analysis of the proposed framework, showing its
connection with previous work and proving its greater expressiveness. We
conduct systematic evaluations of the proposed framework on four challenging
datasets of different genres: Amazon, YouTube, Twitter, and Alibaba.
Experimental results
demonstrate that with the learned embeddings from the proposed framework, we
can achieve statistically significant improvements (e.g., 5.99-28.23% lift by
F1 scores; p<<0.01, t-test) over previous state-of-the-art methods for link
prediction. The framework has also been successfully deployed on the
recommendation system of a worldwide leading e-commerce company, Alibaba Group.
Results of the offline A/B tests on product recommendation further confirm the
effectiveness and efficiency of the framework in practice.
Comment: Accepted to KDD 2019. Website: https://sites.google.com/view/gatn
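The abstract leaves the architecture unspecified; as a rough, hypothetical sketch of what a multiplex embedding can look like (one view per edge type over a shared base), the following uses made-up names, dimensions, and an additive combination that are our assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, dim, edge_dim = 100, 16, 8
edge_types = ["click", "purchase"]  # hypothetical edge types

# Each node has one shared base embedding plus a small edge-type-specific
# embedding that is projected into the base space and added: a simplified
# sketch of keeping one view per edge type over a common backbone.
base = rng.normal(size=(n_nodes, dim))
type_emb = {t: rng.normal(size=(n_nodes, edge_dim)) for t in edge_types}
proj = {t: rng.normal(size=(edge_dim, dim)) for t in edge_types}

def embed(node, etype):
    """Embedding of `node` as seen under edge type `etype`."""
    return base[node] + type_emb[etype][node] @ proj[etype]

def link_score(u, v, etype):
    """Dot-product score used for link prediction on one edge type."""
    return float(embed(u, etype) @ embed(v, etype))
```

In a real model these parameters would be trained from type-specific neighborhoods and node attributes; here they are random, so only the shapes and the per-edge-type scoring interface are meaningful.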
Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction
Click-Through Rate prediction is an important task in recommender systems,
which aims to estimate the probability that a user will click on a given item.
Recently, many deep models have been proposed to learn low-order and high-order
feature interactions from original features. However, since useful interactions
are always sparse, it is difficult for a DNN to learn them effectively under a
large number of parameters. In real scenarios, artificial features are able to
improve the performance of deep models (such as Wide & Deep Learning), but
feature engineering is expensive and requires domain knowledge, making it
impractical in different scenarios. Therefore, it is necessary to augment
feature space automatically. In this paper, we propose a novel Feature
Generation by Convolutional Neural Network (FGCNN) model with two components:
Feature Generation and Deep Classifier. Feature Generation leverages the
strength of CNN to generate local patterns and recombine them to generate new
features. Deep Classifier adopts the structure of IPNN to learn interactions
from the augmented feature space. Experimental results on three large-scale
datasets show that FGCNN significantly outperforms nine state-of-the-art
models. Moreover, when some state-of-the-art models are applied as the Deep
Classifier, better performance is consistently achieved, demonstrating the
strong compatibility of the FGCNN model. This work explores a novel direction
for CTR prediction: reducing the learning difficulty of DNNs by automatically
identifying important features.
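As a toy illustration of the two-component idea, CNN-derived local patterns recombined into new features that augment the raw feature space, the following NumPy sketch uses made-up shapes and random weights; the real model learns these end-to-end and feeds the augmented space to an IPNN-style Deep Classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

n_fields, k = 10, 8             # raw feature fields, embedding size
n_filters, new_per_map = 4, 3   # hypothetical hyperparameters

x = rng.normal(size=(n_fields, k))  # embedded raw features

# Convolution over adjacent fields (height-3 kernel applied per embedding
# dimension) to capture local patterns, as in the Feature Generation step.
kernels = rng.normal(size=(n_filters, 3))
conv = np.stack([
    np.stack([kernels[f] @ x[i:i + 3] for i in range(n_fields - 2)])
    for f in range(n_filters)
])                              # (n_filters, n_fields - 2, k)
conv = np.maximum(conv, 0)      # ReLU

# Max-pooling over fields, then a dense "recombination" layer that mixes
# local patterns into new feature embeddings.
pooled = conv.max(axis=1)                       # (n_filters, k)
W = rng.normal(size=(new_per_map * n_filters, n_filters))
new_feats = np.tanh(W @ pooled)                 # (new_per_map * n_filters, k)

# Augmented feature space: raw fields plus generated features,
# which the paper's Deep Classifier then learns interactions over.
augmented = np.vstack([x, new_feats])
```

The point of the sketch is the data flow, not the exact layer choices: local convolution, pooling, recombination, then concatenation with the original fields.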
Template Adaptation for Face Verification and Identification
Face recognition performance evaluation has traditionally focused on
one-to-one verification, popularized by the Labeled Faces in the Wild dataset
for imagery and the YouTubeFaces dataset for videos. In contrast, the newly
released IJB-A face recognition dataset unifies evaluation of one-to-many face
identification with one-to-one face verification over templates, or sets of
imagery and videos for a subject. In this paper, we study the problem of
template adaptation, a form of transfer learning to the set of media in a
template. Extensive performance evaluations on IJB-A show a surprising result,
that perhaps the simplest method of template adaptation, combining deep
convolutional network features with template specific linear SVMs, outperforms
the state-of-the-art by a wide margin. We study the effects of template size,
negative set construction and classifier fusion on performance, then compare
template adaptation to convolutional networks with metric learning, 2D and 3D
alignment. Our unexpected conclusion is that these other methods, when combined
with template adaptation, all achieve nearly the same top performance on IJB-A
for template-based face verification and identification.
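The simple recipe described above, pooling precomputed deep features for a template and fitting a template-specific linear classifier against a fixed negative set, can be sketched as follows; the synthetic features, the negative-set construction, and the subgradient hinge-loss solver are our stand-ins for the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # dimensionality of the (assumed precomputed) deep face features

# A template is a set of media for one subject; the negative set is a large
# pool of features from other identities. Both are synthetic here.
template = rng.normal(size=(5, d)) + 1.0  # this subject's media features
negatives = rng.normal(size=(200, d))     # external negative features

X = np.vstack([template, negatives])
y = np.concatenate([np.ones(len(template)), -np.ones(len(negatives))])

# Template-specific linear SVM via subgradient descent on the hinge loss,
# a minimal stand-in for an off-the-shelf linear SVM solver.
w, b, lam, lr = np.zeros(d), 0.0, 1e-3, 0.1
for _ in range(300):
    mask = y * (X @ w + b) < 1.0  # margin-violating samples
    grad_w = lam * w
    if mask.any():
        grad_w -= (y[mask, None] * X[mask]).mean(axis=0)
        b += lr * y[mask].mean()
    w -= lr * grad_w

def verify(probe_media):
    """Score a probe template by its best single-media SVM response."""
    return float((probe_media @ w + b).max())
```

One SVM is trained per enrolled template, so verification reduces to evaluating the probe's media against that template's classifier; identification ranks the probe against every enrolled template's score.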