Selective Query Processing: a Risk-Sensitive Selection of System Configurations
In information retrieval systems, search parameters are optimized to ensure
high effectiveness based on a set of past searches and these optimized
parameters are then used as the system configuration for all subsequent
queries. A better approach, however, would be to adapt the parameters to fit
the query at hand. Selective query expansion is one such approach, in which
the system decides automatically whether or not to expand the query, resulting
in two possible system configurations. This approach was extended recently to
include many other parameters, leading to many possible system configurations
where the system automatically selects the best configuration on a per-query
basis. To determine which configurations a real-world system should use, we
developed a method in which a restricted number of candidate configurations is
pre-selected and then used in a meta-search engine that chooses the best
configuration for each query. We define a
risk-sensitive approach for configuration pre-selection that considers the
risk-reward trade-off between the number of configurations kept, and system
effectiveness. For final configuration selection, the decision is based on
query feature similarities. We find that a relatively small number of
configurations (20) selected by our risk-sensitive model is sufficient to
increase effectiveness by about 15% according(P@10, nDCG@10) when compared to
traditional grid search using a single configuration and by about 20% when
compared to learning to rank documents. Our risk-sensitive approach works for
both diversity- and ad hoc-oriented searches. Moreover, the similarity-based
selection method outperforms more sophisticated alternatives. Thus, we
demonstrate the feasibility of developing per-query information retrieval
systems, which will guide future research in this direction.
Comment: 30 pages, 5 figures, 8 tables; submitted to the ACM TOIS journal
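As a rough illustration of the similarity-based selection step described in the abstract, the sketch below picks, for a new query, the configuration that performed best on the most similar past query (cosine similarity over query features). The feature vectors, configuration names, and history are invented for illustration; the paper's actual query features and selection method may differ.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def select_configuration(query_features, history):
    """history: list of (past_query_features, best_config_for_that_query)."""
    _, config = max(history, key=lambda h: cosine(query_features, h[0]))
    return config

# Invented toy history: each past query is paired with the pre-selected
# configuration that scored best (e.g. by nDCG@10) on that query.
history = [
    ([1.0, 0.0, 0.5], "config_7"),  # e.g. a long, specific query
    ([0.1, 0.9, 0.2], "config_3"),  # e.g. a short, ambiguous query
]

print(select_configuration([0.9, 0.1, 0.4], history))  # → config_7
```

At retrieval time the meta-search engine would run only the configuration returned here, so the cost of keeping a small pre-selected pool (the 20 configurations mentioned above) stays low.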
Learning to Rank System Configurations
Information Retrieval (IR) systems rely heavily on a large number of parameters, such as the retrieval model or various query expansion parameters, whose values greatly influence the overall retrieval effectiveness. However, setting all these parameters individually can be a tedious task, since they can all affect one another and their optimal values may vary from query to query. We propose to tackle this problem by dealing with entire system configurations (i.e. a set of parameters representing an IR system) instead of single parameters, and to apply state-of-the-art Learning to Rank techniques to select the most appropriate configuration for a given query. The experiments we conducted on two TREC AdHoc collections show that this approach is feasible and significantly outperforms the traditional way to configure a system using grid search, as well as the top performing systems of the TREC tracks. We also present an analysis of the impact of different groups of parameters on retrieval effectiveness.
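The idea of ranking configurations rather than documents can be sketched, under strong simplifying assumptions, as a pointwise model: score each (query, configuration) feature vector with a linear model trained on past effectiveness labels (e.g. nDCG@10), then keep the top-scoring configuration for the query. The features, labels, and configuration names below are all invented; the paper applies real Learning to Rank techniques on TREC data rather than this toy regressor.

```python
def train_linear(samples, lr=0.2, epochs=5000):
    """Least-squares SGD over (feature_vector, effectiveness_label) pairs."""
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for x, y in samples:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def pick_config(w, candidates):
    """candidates: list of (config_id, query-config feature vector)."""
    return max(candidates,
               key=lambda c: sum(wi * xi for wi, xi in zip(w, c[1])))[0]

# Invented features: [query_length, uses_expansion, length * expansion].
# Labels mimic "expansion helps short queries but hurts long ones".
train = [
    ([1.0, 0.0, 0.0], 0.8), ([1.0, 1.0, 1.0], 0.3),  # long query
    ([0.5, 0.0, 0.0], 0.4), ([0.5, 1.0, 0.5], 0.6),  # short query
]
w = train_linear(train)

# For a new long-ish query (length 0.9), rank two candidate configurations.
best = pick_config(w, [("no_expansion", [0.9, 0.0, 0.0]),
                       ("expansion",    [0.9, 1.0, 0.9])])
print(best)  # → no_expansion
```

The interaction feature (length × expansion) is what lets even a linear scorer learn that the same parameter can help one query and hurt another, which is the core motivation for per-query configuration selection.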