The problem of query performance prediction has been studied in the context of text retrieval and Web search. In this paper, we investigate it in an intranet environment. The collection used is a crawl of the dcs.gla.ac.uk domain, and the queries are logged from the domain's search engine, which is powered by the Terrier platform. We propose an automatic evaluation methodology that generates the mean average precision (MAP) of each query by cross-comparing the output of diverse search engines. We then measure the correlation of two pre-retrieval predictors with the MAP obtained by this methodology. Results show that the predictors are very effective for one- and two-term queries, which constitute the majority of real queries in the intranet environment.
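The core of such a study is correlating per-query predictor scores with per-query MAP. The following is a minimal sketch of that computation, assuming average inverse document frequency (avg IDF) as the pre-retrieval predictor and using hypothetical toy numbers; the paper's actual predictors and data are not reproduced here.

```python
import math

def avg_idf(query_terms, doc_freqs, num_docs):
    """Average IDF of the query's terms -- a common pre-retrieval
    predictor (this choice is an illustrative assumption)."""
    return sum(math.log(num_docs / doc_freqs[t]) for t in query_terms) / len(query_terms)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predictor scores and MAP values for five queries.
predictor_scores = [2.1, 3.4, 1.2, 4.0, 2.8]
map_values = [0.30, 0.45, 0.15, 0.55, 0.40]
r = pearson(predictor_scores, map_values)
```

In practice, rank correlation measures such as Spearman's rho or Kendall's tau are also commonly reported for predictor evaluation, since they are less sensitive to outliers than Pearson's coefficient.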