Improving Term Frequency Normalization for Multi-topical Documents, and Application to Language Modeling Approaches
Term frequency normalization is a serious issue because document lengths vary. Generally, documents become long for two different reasons: verbosity and multi-topicality. First, verbosity means that the same topic is repeatedly mentioned using terms related to that topic, so term frequencies are inflated relative to a well-summarized document. Second, multi-topicality means that a document discusses multiple topics broadly, rather than a single topic. Although these document characteristics should be handled differently, all previous term frequency normalization methods have ignored the distinction and used a simplified length-driven approach that decreases term frequency based only on document length, causing unreasonable penalization. To attack this problem, we propose a novel TF normalization method, a type of partially-axiomatic approach. We first formulate two formal constraints that a retrieval model should satisfy for documents exhibiting the verbosity and multi-topicality characteristics, respectively. Then we modify language modeling approaches to better satisfy these two constraints, and derive novel smoothing methods. Experimental results show that the proposed method significantly increases precision for keyword queries, and substantially improves MAP (Mean Average Precision) for verbose queries.
Comment: 8 pages, conference paper, published in ECIR '0
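The smoothing methods this abstract modifies build on standard query-likelihood retrieval. As a point of reference only (the paper's own verbosity/multi-topicality smoothing is not reproduced here), a minimal sketch of the Dirichlet-smoothed baseline that such work typically starts from; all names and values are illustrative:

```python
import math

def dirichlet_score(query_terms, doc_tf, doc_len, coll_prob, mu=2000.0):
    """Query-likelihood score with Dirichlet smoothing (a common baseline).

    doc_tf:    term -> raw count of the term in the document
    doc_len:   total number of term occurrences in the document
    coll_prob: term -> collection language-model probability p(t|C)
    mu:        Dirichlet prior; larger mu smooths more toward the collection
    """
    score = 0.0
    for t in query_terms:
        # Smoothed probability: interpolates document evidence with the
        # collection model, normalized by document length plus the prior.
        p = (doc_tf.get(t, 0) + mu * coll_prob[t]) / (doc_len + mu)
        score += math.log(p)
    return score
```

Note how the denominator penalizes by raw length alone: this is exactly the length-driven behavior the abstract argues treats verbose and multi-topical documents identically.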
Entity Query Feature Expansion Using Knowledge Base Links
Recent advances in automatic entity linking and knowledge base
construction have resulted in entity annotations for document and
query collections, for example annotations of entities from large
general-purpose knowledge bases such as Freebase and the Google
Knowledge Graph. Understanding how to leverage these entity
annotations of text to improve ad hoc document retrieval is an open
research area. Query expansion is a commonly used technique to
improve retrieval effectiveness. Most previous query expansion
approaches focus on text, mainly using unigram concepts. In this
paper, we propose a new technique, called entity query feature
expansion (EQFE), which enriches the query with features from
entities and their links to knowledge bases, including structured
attributes and text. We experiment using both explicit query entity
annotations and latent entities. We evaluate our technique on TREC
text collections automatically annotated with knowledge base entity
links, including the Google Freebase Annotations (FACC1) data.
We find that entity-based feature expansion yields significant
improvements in retrieval effectiveness over state-of-the-art text
expansion approaches.
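The core idea of enriching a query with terms drawn from linked knowledge-base entities can be sketched as follows. This is a toy illustration, not the paper's EQFE feature set: the `kb` mapping, weights, and function names are all assumptions made for the example:

```python
def expand_query(query_terms, entity_annotations, kb, top_k=5, weight=0.3):
    """Hypothetical sketch of entity-based query expansion.

    query_terms:        original query terms
    entity_annotations: entity IDs linked to the query
    kb:                 entity ID -> ranked list of attribute/text terms
                        (stands in for structured KB features)
    Returns a term -> weight map: original terms at weight 1.0, plus
    entity-derived expansion terms at a smaller interpolation weight.
    """
    expanded = {t: 1.0 for t in query_terms}
    for ent in entity_annotations:
        for term in kb.get(ent, [])[:top_k]:
            expanded[term] = expanded.get(term, 0.0) + weight
    return expanded
```

A retrieval model would then score documents against this weighted term map instead of the raw query, which is the general shape of feature-based expansion the abstract describes.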
Non-Compositional Term Dependence for Information Retrieval
Modelling term dependence in IR aims to identify co-occurring terms that are
too heavily dependent on each other to be treated as a bag of words, and to
adapt the indexing and ranking accordingly. Dependent terms are predominantly
identified using lexical frequency statistics, assuming that (a) if terms
co-occur often enough in some corpus, they are semantically dependent; and (b) the
more often they co-occur, the more semantically dependent they are. This
assumption is not always correct: the frequency of co-occurring terms can be
separate from the strength of their semantic dependence. For example, "red tape"
might be less frequent overall than "tape measure" in some corpus, but this does not
mean that "red"+"tape" are less dependent than "tape"+"measure". This is
especially the case for non-compositional phrases, i.e. phrases whose meaning
cannot be composed from the individual meanings of their terms (such as the
phrase "red tape" meaning bureaucracy). Motivated by this lack of distinction
between the frequency and the strength of term dependence in IR, we present a
principled approach for handling term dependence in queries, using both lexical
frequency and semantic evidence. We focus on non-compositional phrases,
extending a recent unsupervised model for their detection [21] to IR. Our
approach, integrated into ranking using Markov Random Fields [31], yields
effectiveness gains over competitive TREC baselines, showing that there is
still room for improvement in the very well-studied area of term dependence in
IR.
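A common way to operationalize non-compositionality, which the unsupervised model cited as [21] builds on in more sophisticated form, is to compare a phrase's own distributional vector with the composition of its terms' vectors: when the two diverge, the phrase's meaning is not recoverable from its parts. A toy sketch under that assumption (the additive composition, cosine test, and threshold are illustrative, not the cited model):

```python
import math

def is_non_compositional(phrase_vec, word_vecs, threshold=0.5):
    """Flag a phrase as non-compositional when its distributional vector
    has low cosine similarity to the sum of its component terms' vectors.

    phrase_vec: vector learned for the whole phrase (e.g. "red_tape")
    word_vecs:  list of vectors for the individual terms (e.g. "red", "tape")
    """
    # Additive composition: elementwise sum of the term vectors.
    composed = [sum(xs) for xs in zip(*word_vecs)]
    dot = sum(a * b for a, b in zip(phrase_vec, composed))
    na = math.sqrt(sum(a * a for a in phrase_vec))
    nb = math.sqrt(sum(b * b for b in composed))
    return dot / (na * nb) < threshold
```

Under this sketch, "red tape" (bureaucracy) would get a low similarity to "red"+"tape" and be indexed as a unit, while a compositional phrase like "tape measure" would not, regardless of which bigram is more frequent.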