A Transformer-based Embedding Model for Personalized Product Search
Product search is an important way for people to browse and purchase items on
E-commerce platforms. While customers tend to make choices based on their
personal tastes and preferences, analysis of commercial product search logs has
shown that personalization does not always improve product search quality. Most
existing product search techniques, however, conduct undifferentiated
personalization across search sessions. They either use a fixed coefficient to
control the influence of personalization or let personalization take effect all
the time with an attention mechanism. The only notable exception is the
recently proposed zero-attention model (ZAM) that can adaptively adjust the
effect of personalization by allowing the query to attend to a zero vector.
Nonetheless, in ZAM, personalization can be at most as important as
the query, and the representations of items are static across the collection
regardless of the items co-occurring in the user's historical purchases. Aware
of these limitations, we propose a transformer-based embedding model (TEM) for
personalized product search, which could dynamically control the influence of
personalization by encoding the sequence of query and user's purchase history
with a transformer architecture. Personalization could have a dominant impact
when necessary and interactions between items can be taken into consideration
when computing attention weights. Experimental results show that TEM
outperforms state-of-the-art personalized product retrieval models
significantly. Comment: In the proceedings of SIGIR 202
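The zero-attention idea referenced above can be illustrated with a small sketch: the query attends over the user's purchased-item embeddings plus an all-zero vector, so the total attention weight on personalization can shrink toward zero when the query alone suffices. The function name, dot-product scoring, and dimensions below are illustrative assumptions, not the exact formulation from ZAM or TEM.

```python
import numpy as np

def zero_attention(query, history):
    """Illustrative sketch of zero-attention: the query attends over the
    user's purchase-history embeddings plus a zero vector, so the weight
    assigned to personalization can go toward zero. Simplified; not the
    paper's exact scoring function."""
    # Candidate keys: history-item embeddings plus an all-zero vector.
    keys = np.vstack([history, np.zeros_like(query)])
    scores = keys @ query                      # dot-product attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over history + zero vector
    # Personalization vector: weighted sum of keys; the zero vector
    # contributes nothing, so its weight effectively mutes personalization.
    return weights @ keys, weights

rng = np.random.default_rng(0)
q = rng.normal(size=8)                         # hypothetical query embedding
hist = rng.normal(size=(3, 8))                 # three purchased-item embeddings
vec, w = zero_attention(q, hist)
```

TEM replaces this single attention step with a transformer encoder over the query-plus-history sequence, which additionally lets history items attend to each other.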
Deep Neural Network and Boosting Based Hybrid Quality Ranking for e-Commerce Product Search
In the age of information overload, customers are overwhelmed by the number of products available for sale. Search engines try to overcome this issue by filtering items relevant to users' queries. Traditional search engines rely on exact matches between terms in the query and product meta-data. Recently, deep learning-based approaches have attracted more attention by outperforming traditional methods in many circumstances. In this work, we leverage the power of embeddings to solve the challenging task of optimizing product search engines in e-commerce. This work proposes an e-commerce product search engine based on a similarity metric that works on top of query and product embeddings. Two pre-trained word embedding models were tested: the first represents a category of models that generate fixed embeddings, and the second a newer category of models that generate context-aware embeddings. Furthermore, a re-ranking step was performed by incorporating a list of quality indicators that reflect the utility of the product to the customer as inputs to well-known ranking methods. To prove the reliability of the approach, the Amazon reviews dataset was used for experimentation. The results demonstrated the effectiveness of context-aware embeddings in retrieving relevant products and of the quality indicators in ranking high-quality products.
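The two-stage pipeline described above can be sketched minimally: retrieve candidates by embedding similarity, then re-rank them using a quality signal. The linear blend with weight `alpha` is an assumption for illustration; the abstract describes feeding quality indicators into learned ranking methods rather than a fixed linear combination.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def retrieve_and_rerank(query_vec, product_vecs, quality, k=3, alpha=0.7):
    """Sketch of the two-stage approach: (1) retrieve top-k products by
    query-product embedding similarity, (2) re-rank the candidates by
    blending in a product-quality indicator. `alpha` and the linear blend
    are illustrative assumptions."""
    sims = np.array([cosine(query_vec, p) for p in product_vecs])
    top = np.argsort(-sims)[:k]                        # stage 1: similarity retrieval
    blended = alpha * sims[top] + (1 - alpha) * quality[top]
    return top[np.argsort(-blended)]                   # stage 2: quality-aware re-rank
```

In the paper's setting, `query_vec` and `product_vecs` would come from a pre-trained (fixed or context-aware) embedding model, and the quality indicators would be derived from signals such as review data.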
Neural Approaches to Feedback in Information Retrieval
Relevance feedback on search results indicates users' search intent and preferences. Extensive studies have shown that incorporating relevance feedback (RF) on the top k (usually 10) ranked results significantly improves re-ranking performance. However, most existing research on user feedback focuses on word-based retrieval models. Recently, neural retrieval models have shown their efficacy in capturing relevance matching in retrieval, but little research has been conducted on neural approaches to feedback. This leads us to study different aspects of feedback with neural approaches in this dissertation.
RF techniques are seldom used in real search scenarios since they can require significant manual effort to obtain explicit judgments for search results. However, with mobile and voice-based intelligent assistants becoming more popular nowadays, user feedback on result quality could potentially be collected during users' interactions with the assistants. We study both positive and negative RF to refine re-ranking performance. Positive feedback aims to find more relevant results given some known relevant results, while negative feedback targets identifying the first relevant result. In most cases, it is more beneficial to find the first relevant result than to find additional relevant results. However, negative feedback is much more challenging than positive feedback, since relevant results are usually similar while non-relevant results can vary considerably.
We focus on the tasks of text retrieval and product search to study the different aspects of incorporating feedback for ranking refinement with neural approaches. Our contributions are: (1) we show that iterative relevance feedback (IRF) is more effective than top-k RF on answer passages, and we further improve IRF with neural approaches; (2) we propose an effective RF technique based on neural models for product search; (3) we study how to refine re-ranking with negative feedback for conversational product search; (4) we leverage negative feedback in user responses to ask clarifying questions in open-domain conversational search. Our research improves retrieval performance by incorporating feedback in interactive retrieval and addresses multi-turn conversational information-seeking tasks with a focus on positive and negative feedback.
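As a point of reference for the positive/negative feedback idea studied above, the classic Rocchio update moves a query vector toward known relevant results and away from non-relevant ones. The dissertation studies neural feedback models, so this vector-space form is only an illustrative baseline; the parameter values are conventional defaults, not values from this work.

```python
import numpy as np

def rocchio(query_vec, pos, neg, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance-feedback update: a minimal stand-in for
    incorporating positive and negative feedback into ranking. The
    dissertation's neural approaches are far richer; this is illustrative."""
    q = alpha * query_vec
    if len(pos):
        q = q + beta * np.mean(pos, axis=0)   # move toward relevant results
    if len(neg):
        q = q - gamma * np.mean(neg, axis=0)  # move away from non-relevant results
    return q
```

In the negative-feedback setting described above, `pos` would be empty and the update would rely solely on pushing the query away from judged non-relevant results, which hints at why that case is harder: non-relevant results vary far more than relevant ones.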