13 research outputs found
Scalable DB+IR technology: processing Probabilistic Datalog with HySpirit
Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and an appealing conceptual framework for modelling Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules and the semantics associated with program evaluation. In this paper we report some of the key features of the HySpirit system required to scale the execution of PDatalog programs.
Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague matches of attributes such as age or price). Thirdly, handling large data sets raises scalability issues, and HySpirit therefore provides probabilistic relational indexes as well as parallel and distributed processing. The main contribution of this paper is a consolidated view of the methods by which the HySpirit system makes PDatalog applicable in real-scale applications involving a wide range of requirements typical for data (information) management and analysis.
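To make the flavour of probabilistic Datalog concrete, the combination of probabilities during rule evaluation can be sketched in a few lines of Python (an illustrative toy under an independence assumption; the relation name `term`, the query structure, and all numbers are our own, not HySpirit's API):

```python
# Probabilistic facts: term(Doc, Term) annotated with a probability.
term = {
    ("d1", "retrieval"): 0.8,
    ("d1", "logic"): 0.5,
    ("d2", "retrieval"): 0.6,
}

# Query facts: qterm(Term) with weights.
query_terms = {"retrieval": 1.0, "logic": 0.7}

def retrieve(doc):
    """Rule: retrieve(D) :- term(D, T) & qterm(T).
    Disjunction over ground instances, assuming independence:
    P(a OR b) = 1 - (1 - P(a)) * (1 - P(b))."""
    p_not = 1.0
    for (d, t), p in term.items():
        if d == doc and t in query_terms:
            p_not *= 1.0 - p * query_terms[t]
    return 1.0 - p_not

scores = {d: retrieve(d) for d in ("d1", "d2")}
```

A real engine must additionally track event expressions so that dependent derivations are not treated as independent, which is one reason scaling PDatalog is non-trivial.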
MapReduce for information retrieval evaluation: "Let's quickly test this on 12 TB of data"
We propose to use MapReduce to quickly test new retrieval approaches on a cluster of machines by sequentially scanning all documents. We present a small case study in which we use a cluster of 15 low-cost machines to search a web crawl of 0.5 billion pages, showing that sequential scanning is a viable approach to running large-scale information retrieval experiments with little effort. The code is available to other researchers at: http://mirex.sourceforge.net
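The map/reduce pattern the paper relies on can be sketched in plain Python (illustrative only; the actual MIREX code runs on a Hadoop cluster, and the documents and query below are made up):

```python
from collections import defaultdict

documents = {
    "d1": "mapreduce makes large scale experiments easy",
    "d2": "sequential scanning of a web crawl",
    "d3": "information retrieval experiments with mapreduce",
}
query = ["mapreduce", "experiments"]

def map_phase(doc_id, text):
    # Scan the document sequentially, emitting (doc_id, 1)
    # for every query-term occurrence.
    for token in text.split():
        if token in query:
            yield doc_id, 1

def reduce_phase(pairs):
    # Sum the emitted counts per document.
    scores = defaultdict(int)
    for doc_id, count in pairs:
        scores[doc_id] += count
    return dict(scores)

pairs = [p for doc_id, text in documents.items() for p in map_phase(doc_id, text)]
scores = reduce_phase(pairs)
```

The appeal of the approach is exactly this simplicity: no index is built, so a new scoring idea only requires changing the map function.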
A database approach to information retrieval: the remarkable relationship between language models and region models
In this report, we unify two quite distinct approaches to information retrieval: region models and language models. Region models were developed for structured document retrieval. They provide a well-defined behaviour as well as a simple query language that allows application developers to rapidly develop applications. Language models are particularly useful to reason about the ranking of search results, and for developing new ranking approaches. The unified model allows application developers to define complex language modeling approaches as logical queries on a textual database. We show a remarkable one-to-one relationship between region queries and the language models they represent for a wide variety of applications: simple ad-hoc search, cross-language retrieval, video retrieval, and web search.
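A unigram language model ranker of the kind such region queries can express might be sketched as follows (a standard Jelinek-Mercer smoothed formulation; the function names, toy documents, and smoothing parameter are our own choices, not the report's notation):

```python
import math
from collections import Counter

def lm_score(query, doc_tokens, collection_tokens, lam=0.5):
    """Jelinek-Mercer smoothed log P(q | d):
    P(t|d) = lam * tf(t,d)/|d| + (1 - lam) * cf(t)/|C|."""
    tf = Counter(doc_tokens)
    cf = Counter(collection_tokens)
    score = 0.0
    for t in query:
        p_doc = tf[t] / len(doc_tokens)
        p_col = cf[t] / len(collection_tokens)
        score += math.log(lam * p_doc + (1 - lam) * p_col)
    return score

docs = {"d1": "cross language retrieval".split(),
        "d2": "web search ranking".split()}
collection = [t for toks in docs.values() for t in toks]
ranked = sorted(docs, key=lambda d: lm_score(["retrieval"], docs[d], collection),
                reverse=True)
```

In the unified model, the term and collection statistics consumed here would be obtained as region queries against the textual database rather than ad hoc counting.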
Techniques for improving efficiency and scalability for the integration of information retrieval and databases
PhD thesis. This thesis addresses the integration of Information Retrieval (IR) and Databases (DB), with a particular focus on improving the efficiency and scalability of integrated IR and DB technology (IR+DB). The main purpose of this study is to develop efficient and scalable techniques for supporting integrated IR+DB technology, which is a popular approach today for handling complex queries over text and structured data.
Our specific interest in this thesis is how to efficiently handle queries over large-scale text and structured data. The work is based on a technology that integrates probability theory and relational algebra, in which retrievals over text and data are expressed as probabilistic logical programs such as probabilistic relational algebra (PRA) or probabilistic Datalog. To support efficient processing of probabilistic logical programs, we propose three optimization techniques covering both the logical and physical layers: scoring-driven query optimization using scoring expressions, query processing with a top-k-incorporated pipeline, and indexing with a relational inverted index.
Firstly, scoring expressions capture the scoring or probabilistic semantics implied by PRA expressions, so that an efficient query execution plan can be generated by a rule-based, scoring-driven optimizer. Secondly, to balance efficiency against effectiveness and so improve query response time, we study methods for incorporating top-k algorithms into the pipelined query execution engine of IR+DB systems. Thirdly, the proposed relational inverted index integrates the IR-style inverted index with the DB-style tuple-based index, and can be used to support efficient probability estimation and aggregation as well as conventional relational operations.
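The top-k idea can be illustrated with a bounded min-heap consuming a stream of scored tuples from an upstream pipeline operator (our own simplification, not the thesis's implementation):

```python
import heapq

def scored_tuples():
    # Stand-in for an upstream pipeline operator yielding
    # (score, tuple_id) pairs one at a time.
    yield from [(0.3, "t1"), (0.9, "t2"), (0.1, "t3"), (0.7, "t4"), (0.5, "t5")]

def top_k(stream, k):
    heap = []  # min-heap holding the k highest-scored tuples seen so far
    for score, tid in stream:
        if len(heap) < k:
            heapq.heappush(heap, (score, tid))
        elif score > heap[0][0]:
            # New tuple beats the current k-th best: evict the minimum.
            heapq.heapreplace(heap, (score, tid))
    return sorted(heap, reverse=True)

best = top_k(scored_tuples(), 2)
```

Because the operator never materialises more than k tuples, it composes naturally with pipelined execution; the engineering challenge addressed in the thesis is deciding when upstream operators may safely stop producing tuples.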
Experiments were carried out to investigate the performance of the proposed techniques. The results show that the efficiency and scalability of an IR+DB prototype are improved, and that the system can handle queries efficiently on considerably large data sets for a number of IR tasks.
A Probabilistic Framework for Information Modelling and Retrieval Based on User Annotations on Digital Objects
Annotations are a means to make critical remarks, to explain and
comment things, to add notes and give opinions, and to relate objects.
Nowadays, they can be found in digital libraries and collaboratories,
for example as a building block for scientific discussion on the one
hand or as private notes on the other. We further find them in product
reviews, scientific databases and many "Web 2.0" applications; even
well-established concepts like emails can be regarded as annotations
in a certain sense. Digital annotations can be (textual) comments,
markings (i.e. highlighted parts) and references to other documents
or document parts. Since annotations convey information which is
potentially important to satisfy a user's information need, this
thesis tries to answer the question of how to exploit annotations for
information retrieval. It gives a first answer to the question of whether
retrieval effectiveness can be improved with annotations.
A survey of the "annotation universe" reveals some facets of
annotations; for example, they can be content level annotations
(extending the content of the annotation object) or meta level ones
(saying something about the annotated object). Besides the annotations
themselves, other objects created during the process of annotation can
be interesting for retrieval, these being the annotated fragments.
These objects are integrated into an object-oriented model comprising
digital objects such as structured documents and annotations as well
as fragments. In this model, the different relationships among the
various objects are reflected. From this model, the basic data
structure for annotation-based retrieval, the structured annotation
hypertext, is derived.
In order to thoroughly exploit the information contained in structured
annotation hypertexts, a probabilistic, object-oriented logical
framework called POLAR is introduced. In POLAR, structured annotation
hypertexts can be modelled by means of probabilistic propositions and
four-valued logics. POLAR allows for specifying several relationships
among annotations and annotated (sub)parts or fragments. Queries can
be posed to extract the knowledge contained in structured annotation
hypertexts. POLAR supports annotation-based retrieval, i.e. document
and discussion search, by applying an augmentation strategy (knowledge
augmentation, propagating propositions from subcontexts like annotations,
or relevance augmentation, where retrieval status values are propagated)
in conjunction with probabilistic inference, where P(d -> q), the probability
that a document d implies a query q, is estimated.
POLAR's semantics is based on possible worlds and accessibility
relations. It is implemented on top of four-valued probabilistic Datalog.
POLAR's core retrieval functionality, knowledge augmentation with
probabilistic inference, is evaluated for discussion and document
search. The experiments show that all relevant POLAR objects, merged
annotation targets, fragments and content annotations, are able to
increase retrieval effectiveness when used as a context for discussion
or document search. Additional experiments reveal that we can determine
the polarity of annotations with an accuracy of around 80%.
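The knowledge-augmentation strategy described above can be sketched as follows (a hypothetical simplification in Python, not POLAR's actual syntax: propositions from an annotation subcontext are propagated into the document context with a damping factor before P(d -> q) is estimated; all names and numbers are our own):

```python
def augment(doc_terms, annotation_terms, factor=0.6):
    """Propagate annotation term probabilities into the annotated
    document's context, down-weighted by the augmentation factor;
    overlapping terms are combined by noisy-OR."""
    merged = dict(doc_terms)
    for t, p in annotation_terms.items():
        p_ann = factor * p
        if t in merged:
            merged[t] = 1 - (1 - merged[t]) * (1 - p_ann)
        else:
            merged[t] = p_ann
    return merged

def p_implies(context, query_terms):
    """Estimate P(d -> q) as the product of query-term probabilities
    in the context (independence assumption)."""
    p = 1.0
    for t in query_terms:
        p *= context.get(t, 0.0)
    return p

doc = {"annotation": 0.8}
ann = {"retrieval": 0.5, "annotation": 0.5}
ctx = augment(doc, ann)
```

The point of the sketch is that a document mentioning none of the query terms itself can still be retrieved through what its annotations say about it, which is the effect the evaluation measures.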
Techniques for organizational memory information systems
The KnowMore project aims at providing active support to humans working on knowledge-intensive tasks. To this end the knowledge available in the modeled business processes or their incarnations in specific workflows shall be used to improve information handling. We present a representation formalism for knowledge-intensive tasks and the specification of its object-oriented realization. An operational semantics is sketched by specifying the basic functionality of the Knowledge Agent which works on the knowledge intensive task representation.
The Knowledge Agent uses a meta-level description of all information sources available in the Organizational Memory. We discuss the main dimensions along which such a description scheme must be designed, namely information content, structure, and context. On top of relational database management systems, we realize deductive object-oriented modeling with a comfortable annotation facility. The concrete knowledge descriptions are obtained by configuring the generic formalism with ontologies which describe the required modeling dimensions.
To support access to documents, data, and formal knowledge in an Organizational Memory, an integrated domain ontology and thesaurus is proposed, which can be constructed semi-automatically by combining document-analysis and knowledge-engineering methods. Thereby the costs of up-front knowledge engineering and the need to consult domain experts can be considerably reduced. We present an automatic thesaurus generation tool and show how it can be applied to build and enhance an integrated ontology/thesaurus. A first evaluation shows that the proposed method does indeed facilitate knowledge acquisition and maintenance of an organizational memory.
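The document-analysis side of such semi-automatic thesaurus construction can be sketched as a simple co-occurrence analysis (a toy illustration under our own assumptions, not the KnowMore tool itself): term pairs that co-occur in enough documents are proposed as candidate thesaurus relations for an engineer to confirm.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: the set of index terms observed in each document.
documents = [
    {"workflow", "task", "agent"},
    {"workflow", "task", "ontology"},
    {"ontology", "thesaurus", "task"},
]

def candidate_relations(docs, min_support=2):
    pair_counts = Counter()
    for terms in docs:
        for a, b in combinations(sorted(terms), 2):
            pair_counts[(a, b)] += 1
    # Keep pairs that co-occur in at least `min_support` documents.
    return {pair for pair, n in pair_counts.items() if n >= min_support}

relations = candidate_relations(documents)
```

Only the statistical proposal step is automated here; deciding whether a candidate pair is a synonym, broader-term, or merely related relation remains a knowledge-engineering task, which is the division of labour the abstract describes.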
Probabilistic retrieval models - relationships, context-specific application, selection and implementation
PhD thesis. Retrieval models are the core components of information retrieval systems, guiding the document and query representations as well as the document ranking schemes. TF-IDF, the binary independence retrieval (BIR) model and language modelling (LM) are three of the most influential contemporary models due to their stability and performance. The BIR model and LM have probabilistic theory as their basis, whereas TF-IDF is viewed as a heuristic model whose theoretical justification has long fascinated researchers.
This thesis firstly investigates the parallel derivation of the BIR model, LM and the Poisson model with respect to event spaces, relevance assumptions and ranking rationales. It establishes a bridge between the BIR model and LM, and derives TF-IDF from the probabilistic framework.
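For reference, the heuristic TF-IDF scheme under investigation is commonly written as tf(t, d) * log(N / df(t)); a minimal sketch of one common variant, with toy data of our own:

```python
import math
from collections import Counter

def tf_idf(term, doc_tokens, all_docs):
    """tf * log(N / df): raw term frequency in the document, scaled by
    the log of inverse document frequency across the collection."""
    tf = Counter(doc_tokens)[term]
    df = sum(1 for d in all_docs if term in d)
    if tf == 0 or df == 0:
        return 0.0
    return tf * math.log(len(all_docs) / df)

docs = [["a", "b", "a"], ["b", "c"], ["c", "c", "d"]]
```

Many variants exist (log-scaled tf, add-one idf, length normalisation); the thesis's contribution is showing how such a formula can be recovered from a probabilistic framework rather than treated as a pure heuristic.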
The thesis then presents the probabilistic logical modelling of the retrieval models. Various ways of estimating and aggregating probabilities, and alternative implementations of non-probabilistic operators, are demonstrated. Typical models have been implemented.
The next contribution concerns the usage of context-specific frequencies, i.e., frequencies counted over particular element types or within different text scopes. The hypothesis is that these can help to rank the elements in structured document retrieval. The thesis applies context-specific frequencies to the term weighting schemes of these models, and the outcome is a generalised retrieval model with regard to both element and document ranking.
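The notion of context-specific frequencies can be illustrated as follows (our own toy example, not the thesis's notation): the same term's frequency is counted separately per element type, so a weighting scheme can treat a match in a title element differently from one in a paragraph.

```python
from collections import Counter

# Element stream for one structured document: (element_type, tokens).
elements = [
    ("title", ["probabilistic", "models"]),
    ("para", ["models", "of", "retrieval", "models"]),
]

def context_tf(term):
    """Term frequency per element type (the 'context')."""
    tf = Counter()
    for etype, tokens in elements:
        tf[etype] += tokens.count(term)
    return dict(tf)

freqs = context_tf("models")
```

A generalised weighting scheme can then combine these per-context counts with context-dependent weights instead of collapsing them into a single document-level frequency.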
The retrieval models behave differently on the same query set: for some queries one model performs better, while for others another model is superior. One idea for improving the overall performance of a retrieval system is therefore to choose, for each query, the model that is likely to perform best. This thesis proposes and empirically explores a model selection method based on the correlation between query features and query performance, which contributes to the methodology of dynamically choosing a model.
In summary, this thesis contributes a study of probabilistic models and their relationships, the probabilistic logical modelling of retrieval models, the usage and effect of context-specific frequencies in these models, and the selection of retrieval models.
POLIS: a probabilistic summarisation logic for structured documents
PhD thesis. As the availability of structured documents, formatted in markup languages such as SGML, RDF,
or XML, increases, retrieval systems increasingly focus on the retrieval of document-elements,
rather than entire documents. Additionally, abstraction layers in the form of formalised retrieval logics have allowed developers to include search facilities in numerous applications without needing detailed knowledge of retrieval models.
Although automatic document summarisation has been recognised as a useful tool for reducing
the workload of information system users, very few such abstraction layers have been developed
for the task of automatic document summarisation. This thesis describes the development
of an abstraction logic for summarisation, called POLIS, which provides users (such as developers
or knowledge engineers) with a high-level access to summarisation facilities. Furthermore,
POLIS allows users to exploit the hierarchical information provided by structured documents.
The development of POLIS is carried out step by step. We start by defining a series of probabilistic summarisation models, which assign weights to document elements at a user-selected level. These summarisation models are those accessible through POLIS. The formal definition of POLIS is performed in three steps: we first provide a syntax for POLIS, through which users and knowledge engineers interact with the logic; this is followed by a definition of the logic's semantics; finally, we provide details of an implementation of POLIS.
The final chapters of this dissertation are concerned with the evaluation of POLIS, which is conducted in two stages. Firstly, we evaluate the performance of the summarisation models by applying POLIS to two test collections: the DUC AQUAINT corpus and the INEX IEEE corpus. This is followed by application scenarios for POLIS, in which we discuss how POLIS can be used in specific IR tasks.
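A probabilistic summarisation model in the spirit described above can be sketched as follows (our own toy illustration, not POLIS syntax): document elements at a chosen level are weighted, here by the fraction of their tokens that are informative, and the top-weighted elements form the summary.

```python
STOPWORDS = {"the", "a", "of", "is"}

def element_weight(element_tokens):
    # Weight an element by its proportion of non-stopword tokens.
    informative = [t for t in element_tokens if t not in STOPWORDS]
    return len(informative) / len(element_tokens)

def summarise(elements, n=1):
    # Rank elements by weight and keep the top n as the summary.
    ranked = sorted(elements, key=lambda e: element_weight(elements[e]),
                    reverse=True)
    return ranked[:n]

sections = {
    "s1": ["the", "system", "ranks", "document", "elements"],
    "s2": ["the", "a", "of", "is", "the"],
}
```

What an abstraction logic like POLIS adds over such a hard-coded scorer is the ability to select the element level and the weighting model through a query language, and to exploit the document's hierarchical structure when doing so.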