A Multi-task Learning Approach for Improving Product Title Compression with User Search Log Data
It is a challenging and practical research problem to obtain effective
compression of lengthy product titles for E-commerce. This is particularly
important as more and more users browse mobile E-commerce apps and more
merchants make the original product titles redundant and lengthy for Search
Engine Optimization. Traditional text summarization approaches often incur
substantial preprocessing costs and fail to capture conversion rate, a key
concern in E-commerce. This paper proposes a novel multi-task learning
approach for improving product title compression with user search log data. In
particular, a pointer network-based sequence-to-sequence approach with an
attention mechanism is utilized for title compression as an extractive method,
and an attentive encoder-decoder approach is utilized for generating user search
queries. The encoding parameters (i.e., semantic embedding of original titles)
are shared among the two tasks and the attention distributions are jointly
optimized. An extensive set of experiments on both human-annotated data and an
online deployment demonstrates the advantage of the proposed approach in both
compression quality and online business value.
Comment: 8 pages, accepted at AAAI 201
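The core idea of this abstract is parameter sharing: one encoder serves both the title-compression and query-generation tasks, so supervision from search logs regularizes the compression model. A minimal numpy sketch of that sharing pattern is below; all shapes, names, and the mean-pooling encoder are illustrative assumptions, not the paper's actual pointer-network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 50, 8

# One embedding matrix shared by both tasks (the "shared encoding parameters").
shared_embedding = rng.normal(size=(vocab, dim))

def encode(title_ids):
    """Shared encoder: here simply the mean of title-word embeddings."""
    return shared_embedding[title_ids].mean(axis=0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Two task-specific output heads sit on top of the shared encoder.
w_compress = rng.normal(size=(dim, vocab))  # title-compression head
w_query = rng.normal(size=(dim, vocab))     # query-generation head

title = np.array([3, 17, 42, 5])            # hypothetical token ids
h = encode(title)
p_compress = softmax(h @ w_compress)
p_query = softmax(h @ w_query)

# Summing both task losses means gradients from query generation flow into
# the shared embedding, regularizing the compression encoder.
joint_loss = -np.log(p_compress[3]) - np.log(p_query[17])
```

In the paper's full model the two decoders also share attention distributions; the sketch only shows the simpler case of shared encoder parameters with a joint loss.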
Probing Product Description Generation via Posterior Distillation
In product description generation (PDG), user-cared aspects are critical for
recommendation systems: they not only improve the user experience but also
attract more clicks. High-quality customer reviews can be considered
an ideal source to mine user-cared aspects. However, in reality, a large number
of new products (known as long-tailed commodities) cannot gather sufficient
amount of customer reviews, which brings a big challenge in the product
description generation task. Existing works tend to generate the product
description solely based on item information, i.e., product attributes or title
words, which leads to tedious contents and cannot attract customers
effectively. To tackle this problem, we propose an adaptive posterior network
based on Transformer architecture that can utilize user-cared information from
customer reviews. Specifically, we first extend the self-attentive Transformer
encoder to encode product titles and attributes. Then, we apply an adaptive
posterior distillation module to utilize useful review information, which
integrates user-cared aspects to the generation process. Finally, we apply a
Transformer-based decoding phase with copy mechanism to automatically generate
the product description. In addition, we collect a large-scale Chinese product
description dataset to support our work and further research in this field.
Experimental results show that our model outperforms traditional generative
models in both automatic metrics and human evaluation.
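The copy mechanism this abstract mentions mixes two distributions at each decoding step: the decoder's generation distribution over the vocabulary and the attention distribution over source tokens (titles and attributes), scattered back onto vocabulary ids. A minimal numpy sketch, with all values and the fixed gate chosen purely for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
vocab = 20
src_ids = np.array([2, 7, 7, 11])   # hypothetical source token ids (title/attributes)

p_gen_vocab = softmax(rng.normal(size=vocab))   # decoder's generation distribution
attn = softmax(rng.normal(size=len(src_ids)))   # attention over source positions

# Scatter attention mass onto vocabulary ids; repeated ids accumulate.
p_copy = np.zeros(vocab)
np.add.at(p_copy, src_ids, attn)

# In a real model the gate is predicted from the decoder state; fixed here.
gate = 0.6
p_final = gate * p_gen_vocab + (1 - gate) * p_copy
```

Because both components are valid distributions, the convex mixture `p_final` is too; tokens appearing in the source (like id 7 above) can receive probability even if the generator assigns them little mass, which is the point of copying.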
Ask the GRU: Multi-Task Learning for Deep Text Recommendations
In a variety of application domains the content to be recommended to users is
associated with text. This includes research papers, movies with associated
plot summaries, news articles, blog posts, etc. Recommendation approaches based
on latent factor models can be extended naturally to leverage text by employing
an explicit mapping from text to factors. This enables recommendations for new,
unseen content, and may generalize better, since the factors for all items are
produced by a compactly-parametrized model. Previous work has used topic models
or averages of word embeddings for this mapping. In this paper we present a
method leveraging deep recurrent neural networks to encode the text sequence
into a latent vector, specifically gated recurrent units (GRUs) trained
end-to-end on the collaborative filtering task. For the task of scientific
paper recommendation, this yields models with significantly higher accuracy. In
cold-start scenarios, we outperform the previous state-of-the-art approaches,
all of which ignore word order. Performance is further improved by multi-task learning,
where the text encoder network is trained for a combination of content
recommendation and item metadata prediction. This regularizes the collaborative
filtering model, ameliorating the problem of sparsity of the observed rating
matrix.
Comment: 8 pages
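The method in this abstract encodes an item's text with a GRU and uses the final hidden state as the item's latent factor, scored against a user vector as in standard latent-factor collaborative filtering. A minimal from-scratch GRU sketch in numpy; dimensions, initialization, and token ids are all illustrative assumptions (the paper trains everything end-to-end, with bias terms and learned user factors):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
dim, vocab = 6, 30
E = rng.normal(scale=0.1, size=(vocab, dim))                 # word embeddings
Wz, Uz = (rng.normal(scale=0.1, size=(dim, dim)) for _ in range(2))
Wr, Ur = (rng.normal(scale=0.1, size=(dim, dim)) for _ in range(2))
Wh, Uh = (rng.normal(scale=0.1, size=(dim, dim)) for _ in range(2))

def gru_encode(token_ids):
    """Run a single-layer GRU over the token sequence; return final state."""
    h = np.zeros(dim)
    for t in token_ids:
        x = E[t]
        z = sigmoid(Wz @ x + Uz @ h)             # update gate
        r = sigmoid(Wr @ x + Ur @ h)             # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h)) # candidate state
        h = (1 - z) * h + z * h_tilde
    return h

# The final hidden state serves as the item's latent factor, so new (cold-start)
# items get a representation from their text alone.
item_vec = gru_encode([4, 9, 1, 22])
user_vec = rng.normal(size=dim)
score = user_vec @ item_vec                      # dot-product preference score
```

Because the encoder is a compact parametric map from text to factors, unseen items are scored the same way as training items, which is what enables the cold-start recommendations the abstract describes.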