Fisher's exact test explains a popular metric in information retrieval
Term frequency-inverse document frequency, or tf-idf for short, is a
numerical measure that is widely used in information retrieval to quantify the
importance of a term of interest in one out of many documents. While tf-idf was
originally proposed as a heuristic, much work has been devoted over the years
to placing it on a solid theoretical foundation. Following in this tradition,
we here advance the first justification for tf-idf that is grounded in
statistical hypothesis testing. More precisely, we first show that the
one-tailed version of Fisher's exact test, also known as the hypergeometric
test, corresponds well with a common tf-idf variant on selected real-data
information retrieval tasks. We then set forth a mathematical argument that
suggests the tf-idf variant approximates the negative logarithm of the
one-tailed Fisher's exact test P-value (i.e., a hypergeometric distribution
tail probability). The Fisher's exact test interpretation of this common tf-idf
variant furnishes the working statistician with a ready explanation of tf-idf's
long-established effectiveness.
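The claimed correspondence lends itself to a quick numerical check. Below is a minimal sketch of that comparison, assuming the one-tailed Fisher's exact test is applied to a 2x2-style setup pairing a term's count in one document against its count in the rest of the collection; the toy statistics and the particular tf-idf variant (raw term frequency times log inverse document frequency) are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch: compare a tf-idf-style score with the negative log of a
# one-tailed Fisher's exact (hypergeometric) test P-value for one term in
# one document. All counts below are toy, assumed values.
import math
from scipy.stats import hypergeom

doc_len = 300          # tokens in the document of interest (assumed)
coll_len = 1_000_000   # tokens in the whole collection (assumed)
tf = 12                # occurrences of the term in the document (assumed)
cf = 400               # occurrences of the term in the collection (assumed)

# One-tailed hypergeometric tail probability P(X >= tf): draw doc_len tokens
# from coll_len, of which cf are occurrences of the term.
p_value = hypergeom.sf(tf - 1, coll_len, cf, doc_len)
neg_log_p = -math.log(p_value)

# An illustrative tf-idf variant: tf * log(N / df).
num_docs = 5_000       # documents in the collection (assumed)
df = 350               # documents containing the term (assumed)
tfidf = tf * math.log(num_docs / df)

print(f"-log P-value: {neg_log_p:.2f}")
print(f"tf-idf score: {tfidf:.2f}")
```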
Efficient & Effective Selective Query Rewriting with Efficiency Predictions
To enhance effectiveness, a user's query can be rewritten internally by the search engine in many ways, for example by applying proximity, or by expanding the query with related terms. However, approaches that benefit effectiveness often have a negative impact on efficiency, which in turn harms user satisfaction if the query becomes excessively slow. In this paper, we propose a novel framework that uses the predicted execution time of various query rewritings to select between alternatives on a per-query basis, in a manner that ensures both effectiveness and efficiency. In particular, we propose predicting the execution time of ephemeral (e.g., proximity) posting lists generated from uni-gram inverted index posting lists, which are used to establish the permissible query rewriting alternatives that can execute within the allowed time. Experiments examining both the effectiveness and efficiency of the proposed approach demonstrate that a 49% decrease in mean response time (and a 62% decrease in 95th-percentile response time) can be attained without significantly hindering the effectiveness of the search engine.
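As an illustration of the per-query selection idea, the following is a minimal sketch that picks the most effective rewriting whose predicted execution time fits a response-time budget; the predictor, the rewriting alternatives and the 200 ms budget are hypothetical placeholders, not the paper's actual components.

```python
# Hedged sketch of selective query rewriting under a latency budget: choose
# the most effective rewriting predicted to execute within the allowed time.
from typing import Callable, List, Tuple

def select_rewriting(
    query: str,
    rewritings: List[Tuple[str, str]],        # (name, rewritten query), ordered most to least effective
    predict_time_ms: Callable[[str], float],  # learned efficiency predictor (assumed)
    budget_ms: float = 200.0,                 # per-query response-time budget (assumed)
) -> Tuple[str, str]:
    """Return the first (most effective) rewriting predicted to meet the budget."""
    for name, rewritten in rewritings:
        if predict_time_ms(rewritten) <= budget_ms:
            return name, rewritten
    # Fall back to the plain query if no rewriting is predicted to be fast enough.
    return "original", query

# Toy usage with a fake predictor that charges 30 ms per query token.
fake_predictor = lambda q: 30.0 * len(q.split())
choice = select_rewriting(
    "solar panel efficiency",
    [("expansion", "solar panel efficiency photovoltaic output module"),
     ("proximity", "#uw8(solar panel efficiency)"),
     ("plain", "solar panel efficiency")],
    fake_predictor,
)
print(choice)
```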
Bioconductor: open software development for computational biology and bioinformatics.
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
Probability models for information retrieval based on divergence from randomness
This thesis devises a novel methodology based on probability theory, suitable for the construction of term-weighting models of Information Retrieval. Our term-weighting functions are created within a general framework made up of three components. Each of the three components is built independently from the others. We obtain the term-weighting functions from the general model in a purely theoretical way, instantiating each component with different probability distribution forms.
The thesis begins by investigating the nature of the statistical inference involved in Information Retrieval. We explore the estimation problem underlying the process of sampling. De Finetti's theorem is used to show how to convert the frequentist approach into Bayesian inference, and we present and employ the derived estimation techniques in the context of Information Retrieval.
We initially pay great attention to the construction of the basic sample spaces of Information Retrieval. The notion of single or multiple sampling from different populations in the context of Information Retrieval is extensively discussed and used throughout the thesis. The language modelling approach and the standard probabilistic model are studied under the same foundational view and are experimentally compared to the divergence-from-randomness approach.
In revisiting the main information retrieval models in the literature, we show that even the language modelling approach can be exploited to assign term-frequency normalization to the models of divergence from randomness. We finally introduce a novel framework for query expansion. This framework is based on the models of divergence from randomness and can be applied to arbitrary models of IR, including divergence-based, language modelling and probabilistic models. We have conducted a very large number of experiments, and the results show that the framework generates highly effective Information Retrieval models.
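To make the three-component construction concrete, the following is a minimal sketch of one divergence-from-randomness-style instantiation (an InL2-like weight combining a randomness model, a Laplace after-effect and term-frequency normalisation); the collection statistics and the parameter c are assumed toy values, not results from the thesis.

```python
# Hedged sketch of the three-component DFR structure:
# (1) randomness model (information content), (2) first normalization
# (after-effect), (3) term-frequency normalization. Assumed toy values.
import math

def dfr_inl2_weight(tf, doc_len, avg_doc_len, num_docs, doc_freq, c=1.0):
    # Normalization 2: rescale tf to a standard document length.
    tfn = tf * math.log2(1.0 + c * avg_doc_len / doc_len)
    # Randomness model (inverse-document-frequency form): information content.
    inf1 = tfn * math.log2((num_docs + 1.0) / (doc_freq + 0.5))
    # First normalization (Laplace after-effect): gain of accepting the term.
    inf2 = 1.0 / (tfn + 1.0)
    return inf2 * inf1

# Toy statistics (assumed).
print(round(dfr_inl2_weight(tf=5, doc_len=250, avg_doc_len=300,
                            num_docs=100_000, doc_freq=1_200), 3))
```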
Analysing Timelines of National Histories across Wikipedia Editions: A Comparative Computational Approach
Portrayals of history are never complete, and each description inherently
exhibits a specific viewpoint and emphasis. In this paper, we aim to
automatically identify such differences by computing timelines and detecting
temporal focal points of written history across languages on Wikipedia. In
particular, we study articles related to the history of all UN member states
and compare them in 30 language editions. We develop a computational approach
that allows us to identify focal points quantitatively, and find that Wikipedia
narratives about national histories (i) are skewed towards more recent events
(recency bias) and (ii) are distributed unevenly across the continents with
significant focus on the history of European countries (Eurocentric bias). We
also establish that national historical timelines vary across language
editions, although average interlingual consensus is rather high. We hope that
this paper provides a starting point for a broader computational analysis of
written history on Wikipedia and elsewhere.
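As a toy illustration of the timeline idea, the sketch below extracts explicit year mentions from a history article and flags the most-mentioned decades as candidate temporal focal points; it is a hypothetical stand-in for, not a reproduction of, the paper's computational approach.

```python
# Hedged, illustrative sketch: build a decade histogram from year mentions in
# article text and report the most-mentioned decades as candidate focal points.
import re
from collections import Counter

def decade_histogram(text: str) -> Counter:
    # Match four-digit years from 1000-1999 and 2000-2029 (assumed range).
    years = [int(y) for y in re.findall(r"\b(1[0-9]{3}|20[0-2][0-9])\b", text)]
    return Counter((y // 10) * 10 for y in years)

def focal_points(hist: Counter, top_k: int = 3):
    return hist.most_common(top_k)

sample = ("The revolution of 1848 reshaped the country; after 1918 and again "
          "in 1945 its borders changed, and 1989 brought a new constitution.")
print(focal_points(decade_histogram(sample)))
```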