MergeDTS: A Method for Effective Large-Scale Online Ranker Evaluation
Online ranker evaluation is one of the key challenges in information
retrieval. While the preferences of rankers can be inferred by interleaving
methods, the problem of how to effectively choose the ranker pair that
generates the interleaved list without degrading the user experience too much
is still challenging. On the one hand, if two rankers have not been compared
enough, the inferred preference can be noisy and inaccurate. On the other hand,
if two rankers are compared too many times, the interleaving process inevitably
hurts the user experience too much. This dilemma is known as the exploration
versus exploitation tradeoff. It is captured by the K-armed dueling bandit
problem, which is a variant of the K-armed bandit problem, where the feedback
comes in the form of pairwise preferences. Today's deployed search systems can
evaluate a large number of rankers concurrently, and scaling effectively in the
presence of numerous rankers is a critical aspect of K-armed dueling bandit
problems.
In this paper, we focus on solving the large-scale online ranker evaluation
problem under the so-called Condorcet assumption, where there exists an optimal
ranker that is preferred to all other rankers. We propose Merge Double Thompson
Sampling (MergeDTS), which first utilizes a divide-and-conquer strategy that
localizes the comparisons carried out by the algorithm to small batches of
rankers, and then employs Thompson Sampling (TS) to reduce the comparisons
between suboptimal rankers inside these small batches. The effectiveness
(regret) and efficiency (time complexity) of MergeDTS are extensively evaluated
using examples from the domain of online evaluation for web search. Our main
finding is that for large-scale Condorcet ranker evaluation problems, MergeDTS
outperforms the state-of-the-art dueling bandit algorithms. Comment: Accepted at TOIS
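The duel-selection idea described in the abstract can be illustrated with a minimal sketch of Double Thompson Sampling for a K-armed dueling bandit. This is a simplified, hypothetical illustration, not the paper's exact MergeDTS procedure: it omits the divide-and-conquer step that localizes comparisons to small batches of rankers, and the class name, prior, and Copeland-style scoring here are illustrative assumptions.

```python
import random

class DoubleTS:
    """Sketch of Double Thompson Sampling for a K-armed dueling bandit.

    wins[i][j] counts observed wins of arm i over arm j; a Beta posterior
    over each pairwise preference probability drives both duel choices.
    """

    def __init__(self, n_arms):
        self.n = n_arms
        # wins[i][j] = times arm i beat arm j, initialized to a Beta(1,1) prior.
        self.wins = [[1] * n_arms for _ in range(n_arms)]

    def _sample_pref(self, i, j):
        # Draw P(arm i beats arm j) from its Beta posterior.
        return random.betavariate(self.wins[i][j], self.wins[j][i])

    def select_duel(self):
        # First arm: maximize a sampled Copeland-style score, i.e. the number
        # of arms it is sampled to beat with probability > 1/2.
        theta = [[self._sample_pref(i, j) if i != j else 0.5
                  for j in range(self.n)] for i in range(self.n)]
        copeland = [sum(theta[i][j] > 0.5 for j in range(self.n) if j != i)
                    for i in range(self.n)]
        first = max(range(self.n), key=lambda i: copeland[i])
        # Second arm: with fresh samples, maximize the sampled chance of
        # beating the first arm (exploring plausible challengers).
        second = max((j for j in range(self.n) if j != first),
                     key=lambda j: self._sample_pref(j, first))
        return first, second

    def update(self, winner, loser):
        # Record the interleaving outcome of one duel.
        self.wins[winner][loser] += 1
```

In an online-evaluation loop, `select_duel` would pick the ranker pair to interleave, the observed click preference would decide the winner, and `update` would refine the posteriors, so comparisons between clearly suboptimal rankers die out over time.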
Feature Extraction and Duplicate Detection for Text Mining: A Survey
Text mining, also known as intelligent text analysis, is an important research area. The high dimensionality of the data makes it difficult to focus on the most appropriate information. Feature extraction is one of the key data-reduction techniques for discovering the most important features. Processing massive amounts of data stored in unstructured form is a challenging task, and several pre-processing methods and algorithms are needed to extract useful features from such data. The survey covers text summarization, classification, and clustering methods for discovering useful features, and also discovering query facets, which are multiple groups of words or phrases that explain and summarize the content covered by a query, thereby reducing the time taken by the user. When dealing with collections of text documents, it is also very important to filter out duplicate data, and once duplicates are deleted, it is recommended to replace the removed duplicates. Hence we also review the literature on duplicate detection and data fusion (removing and replacing duplicates). The survey presents existing text mining techniques to extract relevant features, detect duplicates, and replace duplicate data, providing fine-grained knowledge to the user.
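The duplicate-filtering step this survey reviews can be sketched with one common near-duplicate detection approach: word-shingling plus Jaccard similarity. This is a minimal illustrative example, not a method from the survey itself; the function names and the 0.8 threshold are assumptions chosen for the sketch.

```python
def shingles(text, k=3):
    """Return the set of k-word shingles of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def deduplicate(docs, threshold=0.8):
    """Keep the first document of each near-duplicate group.

    Simple O(n^2) pairwise scan; real systems use minhashing or LSH
    to scale to large collections.
    """
    kept = []
    for doc in docs:
        s = shingles(doc)
        if all(jaccard(s, shingles(existing)) < threshold for existing in kept):
            kept.append(doc)
    return kept
```

Here removal keeps one representative per duplicate group; the "replace" side of data fusion would then merge attributes from the discarded copies back into that representative.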