Learning Tree-based Deep Model for Recommender Systems
Model-based methods for recommender systems have been studied extensively in
recent years. In systems with a large corpus, however, the cost of evaluating
the learnt model over all user-item pairs is prohibitive, which makes
full-corpus retrieval extremely difficult. To overcome the calculation
barriers, models such as matrix factorization resort to inner product form
(i.e., model user-item preference as the inner product of user, item latent
factors) and indexes to facilitate efficient approximate k-nearest neighbor
searches. However, it remains challenging to incorporate more expressive
interaction forms between user and item features, e.g., interactions through
deep neural networks, because of the calculation cost.
In this paper, we focus on the problem of introducing arbitrary advanced
models to recommender systems with large corpus. We propose a novel tree-based
method which can provide logarithmic complexity w.r.t. corpus size even with
more expressive models such as deep neural networks. Our main idea is to
predict user interests from coarse to fine by traversing tree nodes in a
top-down fashion and making decisions for each user-node pair. We also show
that the tree structure can be jointly learnt towards better compatibility with
users' interest distribution and hence facilitate both training and prediction.
Experimental evaluations with two large-scale real-world datasets show that the
proposed method significantly outperforms traditional methods. Online A/B test
results on the Taobao display advertising platform also demonstrate the
effectiveness of the proposed method in production environments.
Comment: Accepted by KDD 201
Scene Matters: Model-based Deep Video Compression
Video compression has long been a popular research area, and many traditional
and deep video compression methods have been proposed. These methods typically
rely on signal prediction theory to enhance compression performance, designing
highly efficient intra- and inter-prediction strategies and compressing video
frames one by one. In this paper, we propose a novel
model-based video compression (MVC) framework that regards scenes as the
fundamental units for video sequences. Our proposed MVC directly models the
intensity variation of the entire video sequence in one scene, seeking
non-redundant representations instead of reducing redundancy through
spatio-temporal predictions. To achieve this, we employ implicit neural
representation as our basic modeling architecture. To improve the efficiency of
video modeling, we first propose context-related spatial positional embedding
and frequency domain supervision in spatial context enhancement. For temporal
correlation capturing, we design a scene flow constraint mechanism and a
temporal contrastive loss. Extensive experimental results demonstrate that our
method achieves up to a 20% bitrate reduction compared to the latest video
coding standard H.266 and is more efficient in decoding than existing video
coding strategies.
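The implicit-neural-representation modeling mentioned above treats each pixel of each frame as a query: a coordinate (t, x, y) is lifted into a positional embedding and fed to a small network that outputs the pixel intensity, so the whole scene is stored in the network weights. The sketch below shows only the Fourier-style positional encoding step, a common ingredient of such representations; the function name, frequency schedule, and shapes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def fourier_encode(coords, num_freqs=4):
    """Lift normalized coordinates in [0, 1] into sin/cos Fourier features.

    coords: array of shape (..., D), e.g. D=3 for (t, x, y).
    Returns an array of shape (..., D * num_freqs * 2); the high-frequency
    components let a small MLP fit sharp spatial detail.
    """
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi   # (F,)
    angles = coords[..., None] * freqs              # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)

# One (t, x, y) pixel query from a video sequence.
coords = np.array([[0.5, 0.25, 0.75]])
features = fourier_encode(coords, num_freqs=4)
print(features.shape)  # → (1, 24): 3 coords * 4 freqs * (sin + cos)
```

In a full implicit representation, these features would be passed through an MLP trained with a reconstruction loss over the scene's frames; decoding then amounts to a forward pass per pixel, which is what makes this family of methods fast to decode.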
Managed Bumblebees Outperform Honeybees in Increasing Peach Fruit Set in China: Different Limiting Processes with Different Pollinators
© 2015 Zhang et al. Open access under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).