    Index ordering by query-independent measures

    Conventional approaches to information retrieval search through all applicable entries in an inverted file for a particular collection in order to find the documents with the highest scores. For particularly large collections this can be extremely time consuming. A solution is to search only a limited portion of the collection at query time, speeding up the retrieval process while limiting the loss in retrieval efficacy (in terms of accuracy of results). We achieve this by first identifying the most "important" documents within the collection and sorting the documents within each inverted file list in order of this "importance". This limits the amount of information to be searched at query time by eliminating documents of lesser importance, which not only makes the search more efficient but also limits the loss in retrieval accuracy. Our experiments, carried out on the TREC Terabyte collection, report significant savings, in terms of the number of postings examined, without significant loss of effectiveness, using several measures of importance both in isolation and in combination. Our results point to several ways in which the computational cost of searching large collections of documents can be significantly reduced.
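
    To make the idea concrete, here is a minimal sketch of importance-ordered posting lists with query-time early termination. The unit term weight, the importance scores, and the per-list posting budget are illustrative assumptions, not the paper's exact measures.

    ```python
    from collections import defaultdict

    def build_index(docs, importance):
        """Inverted index whose posting lists are sorted by descending
        query-independent importance, so the most important documents
        sit at the head of every list."""
        index = defaultdict(list)
        for doc_id, terms in docs.items():
            for term in set(terms):
                index[term].append(doc_id)
        for postings in index.values():
            postings.sort(key=lambda d: importance[d], reverse=True)
        return index

    def search(index, query_terms, budget):
        """Score documents while examining at most `budget` postings
        per list; the low-importance tail of each list is never touched."""
        scores = defaultdict(float)
        for term in query_terms:
            for doc_id in index.get(term, [])[:budget]:
                scores[doc_id] += 1.0  # stand-in for a real term weight
        return sorted(scores.items(), key=lambda kv: -kv[1])

    docs = {1: ["fast", "web", "search"], 2: ["web", "search"], 3: ["search"]}
    importance = {1: 0.9, 2: 0.5, 3: 0.1}
    index = build_index(docs, importance)
    print(search(index, ["search"], budget=2))  # doc 3's posting is skipped
    ```

    Because each list is sorted by the static score, truncating it at the budget discards exactly the least important documents, which is why effectiveness can degrade gracefully as the budget shrinks.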

    An empirical analysis of pruning techniques performance, retrievability and bias

    Prior work on using retrievability measures in the evaluation of information retrieval (IR) systems has laid out the foundations for investigating the relation between retrieval performance and retrieval bias. While various factors influencing retrievability have been examined, showing how the retrieval model may influence bias, no prior work has examined the impact of the index (and how it is optimized) on retrieval bias. Intuitively, how documents are represented, and what terms they contain, will influence whether they are retrievable or not. In this paper, we investigate how the retrieval bias of a system changes as the inverted index is optimized for efficiency through static index pruning. In our analysis, we consider four pruning methods and examine how they affect performance and bias on the TREC GOV2 collection. Our results show that the relationship between these factors is varied and complex, and very much dependent on the pruning algorithm. We find that increased pruning yields relatively little change, or a slight decrease, in bias up to a point, followed by a dramatic increase. The increase in bias corresponds to a sharp decrease in early precision measures such as NDCG@10, and is also indicative of a large decrease in MAP. The findings suggest that the impact of pruning algorithms can be quite varied, but that retrieval bias could be used to guide the pruning process. Further work is required to determine precisely which documents are most affected and how this impacts performance.
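
    As a rough illustration, the sketch below pairs a term-centric pruning step (keep the top-k postings per term under some within-list score) with a Gini coefficient over per-document retrievability scores, one common bias summary. Both the scoring function and k are illustrative assumptions, not the paper's exact setup.

    ```python
    def prune_index(index, score, k):
        """Static pruning: keep only the k highest-scoring postings
        in each term's list."""
        return {
            term: sorted(postings, key=lambda d: score(term, d), reverse=True)[:k]
            for term, postings in index.items()
        }

    def gini(retrievability):
        """Gini coefficient over per-document retrievability scores:
        0 means every document is equally retrievable; values near 1
        mean retrievability is concentrated on a few documents."""
        vals = sorted(retrievability)
        n, total = len(vals), sum(vals)
        if total == 0:
            return 0.0
        return sum((2 * (i + 1) - n - 1) * v for i, v in enumerate(vals)) / (n * total)

    print(gini([5, 5, 5, 5]))   # 0.0  -- perfectly even
    print(gini([0, 0, 0, 20]))  # 0.75 -- heavily biased
    ```

    Recomputing the Gini coefficient after each pruning level is one way to trace the bias curve the paper describes: flat or slightly falling at first, then rising sharply.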

    Doctor of Philosophy

    With the steady increase in online shopping, more and more consumers are turning to product search engines and shopping sites such as Yahoo! Shopping, Google Product Search, and Bing Shopping as their first stop for purchasing goods online. These sites act as intermediaries between shoppers and merchants, driving the user experience by enabling faceted search, comparison of products based on their specifications, and ranking of products based on their attributes. The success of these systems relies heavily on the variety and quality of the products they present to users. In that sense, product catalogs are to online shopping what the Web index is to Web search; comprehensive product catalogs are therefore fundamental to the success of product search engines. Given the large number of products and categories, and the speed at which they are released to the market, constructing catalogs and keeping them up-to-date is a challenging task, calling for automated techniques that do not rely on human intervention. The main goal of this dissertation is to automatically construct catalogs for product search engines. To achieve this goal, the following problems must be addressed: (i) product synthesis, the creation of product instances that conform to the catalog schema; (ii) product discovery, the derivation of product instances for products whose schemata are not present in the catalog; and (iii) schema synthesis, the construction of schemata for new product categories. We propose an end-to-end framework that automates these tasks to a great extent. We present a detailed experimental evaluation using real data sets which shows that our framework is effective, scales to a large number of products and categories, and is resilient to the noise inherent in Web data.
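
    As a toy illustration of the product synthesis step, the sketch below maps raw offer attributes onto a catalog schema by fuzzy name matching. The schema, threshold, and similarity measure are all illustrative assumptions; the dissertation's framework is considerably richer.

    ```python
    from difflib import SequenceMatcher

    # Hypothetical schema for a single category, purely for illustration.
    CAMERA_SCHEMA = ["brand", "model", "megapixels", "optical zoom"]

    def synthesize(raw_offer, schema=CAMERA_SCHEMA, threshold=0.6):
        """Return a product instance conforming to the schema, keeping
        only attributes whose names match a schema field closely enough."""
        instance = {}
        for raw_name, value in raw_offer.items():
            sim = lambda f: SequenceMatcher(None, raw_name.lower(), f).ratio()
            best = max(schema, key=sim)
            if sim(best) >= threshold:
                instance[best] = value
        return instance

    print(synthesize({"Brand Name": "Acme", "Mega Pixels": "12", "Weight": "300g"}))
    # {'brand': 'Acme', 'megapixels': '12'} -- "Weight" has no schema field
    ```

    Product discovery and schema synthesis would then cluster the attributes left over by this matching step to propose instances and schemata for categories the catalog does not yet cover.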

    An efficient message passing algorithm for multi-target tracking

    We propose a new approach to multi-sensor multi-target tracking by constructing statistical models on graphs with continuous-valued nodes for target states and discrete-valued nodes for data-association hypotheses. These graphical representations lead to message-passing algorithms for the fusion of data across time, sensor, and target that are radically different from algorithms such as those found in state-of-the-art multiple hypothesis tracking (MHT) systems. Important differences include: (a) our message-passing algorithms explicitly compute different probabilities and estimates than MHT algorithms; (b) our algorithms propagate information from future data about past hypotheses via messages backward in time (rather than by extending track-hypothesis trees forward in time); and (c) the combinatorial complexity of the problem is manifested in a different way, one in which particle-like, approximated messages are propagated forward and backward in time (rather than hypotheses being enumerated and truncated over time). A side benefit of this structure is that it automatically provides smoothed target trajectories using future data. A major advantage is the potential for low-order polynomial (and in some cases linear) dependency on the length of the tracking interval N, in contrast with the exponential complexity in N of so-called N-scan algorithms. We provide experimental results that support this potential. As a result, we can afford to use longer tracking intervals, allowing us to incorporate out-of-sequence data seamlessly and to conduct track stitching when future data provide evidence that disambiguates tracks well into the past.
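
    The smoothing behavior the paper exploits, future observations revising beliefs about past states via backward messages, can be seen in a minimal forward-backward sketch on a chain. The toy two-state model and matrices below are assumptions for illustration only; the paper's graphs additionally carry discrete data-association nodes and particle-style approximate messages.

    ```python
    import numpy as np

    T = np.array([[0.9, 0.1],
                  [0.1, 0.9]])   # state transition probabilities
    E = np.array([[0.8, 0.2],
                  [0.2, 0.8]])   # observation likelihoods P(obs | state)

    def forward_backward(obs):
        """Smoothed state posteriors: forward messages carry past
        evidence, backward messages carry future evidence."""
        n, k = len(obs), T.shape[0]
        fwd = np.zeros((n, k))
        bwd = np.ones((n, k))
        fwd[0] = E[:, obs[0]] / k                 # uniform prior
        for t in range(1, n):                     # messages forward in time
            fwd[t] = (fwd[t - 1] @ T) * E[:, obs[t]]
        for t in range(n - 2, -1, -1):            # messages backward in time
            bwd[t] = T @ (E[:, obs[t + 1]] * bwd[t + 1])
        post = fwd * bwd
        return post / post.sum(axis=1, keepdims=True)

    print(forward_backward([0, 0, 1, 1]))  # later observations sharpen
                                           # the posteriors at early times
    ```

    This is the sense in which the approach "stitches" tracks: a backward message arriving from well in the future can disambiguate a hypothesis made many steps in the past, without enumerating hypothesis trees.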