30 research outputs found

    From Keyword Search to Exploration: How Result Visualization Aids Discovery on the Web

    A key to the Web's success is the power of search. The elegant way in which search results are returned is usually remarkably effective. However, for exploratory search, in which users need to learn, discover, and understand novel or complex topics, there is substantial room for improvement. Human-computer interaction researchers and web browser designers have developed novel strategies to improve Web search by enabling users to conveniently visualize, manipulate, and organize their Web search results. This monograph offers fresh ways to think about search-related cognitive processes and describes innovative design approaches to browsers and related tools. For instance, while keyword search presents users with results for specific information (e.g., what is the capital of Peru), other methods may let users see and explore the contexts of their requests for information (related or previous work, conflicting information), or the properties that associate groups of information assets (e.g., grouping legal decisions by lead attorney). We also consider both the traditional and novel ways in which these strategies have been evaluated. From our review of cognitive processes, browser design, and evaluations, we reflect on future opportunities and new paradigms for exploring and interacting with Web search results.
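
    As a concrete illustration of organising results by shared properties, the following sketch groups a handful of invented search results by a facet field (a hypothetical lead_attorney key); it shows faceted grouping in general and is not code from the monograph.

        # Minimal illustration (not from the monograph): grouping search results
        # by a shared property, e.g. legal decisions grouped by lead attorney,
        # so users can explore a result set rather than scan a flat ranked list.
        # The result records and field names are invented for this sketch.
        from collections import defaultdict

        results = [
            {"title": "Decision A", "lead_attorney": "Rivera"},
            {"title": "Decision B", "lead_attorney": "Chen"},
            {"title": "Decision C", "lead_attorney": "Rivera"},
        ]

        def group_by(items, facet):
            groups = defaultdict(list)
            for item in items:
                groups[item[facet]].append(item["title"])
            return dict(groups)

        print(group_by(results, "lead_attorney"))
        # {'Rivera': ['Decision A', 'Decision C'], 'Chen': ['Decision B']}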

    Learning collection fusion strategies

    Automatische Indexierung unter Einbeziehung semantischer Relationen : Ergebnisse des Retrievaltests zum MILOS II-Projekt

    Within MILOS II, the first MILOS project on the automatic indexing of title data was extended by a semantic component that incorporated thesaurus relations from the Schlagwortnormdatei (the German subject headings authority file). The retrieval test carried out to evaluate the project, and its results, are the focus of this text. In addition, an overview of previously conducted retrieval tests (mainly from the Anglo-American world) is given, and the fundamental questions that need to be considered when carrying out a retrieval test in practice are explained.
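
    As an illustration of what adding a semantic component to automatic indexing can look like, the sketch below expands automatically assigned index terms with one-hop broader and related terms from a toy thesaurus; the thesaurus content, relation names, and function are assumptions for illustration, not the MILOS II implementation.

        # Hedged sketch: expanding automatically assigned index terms with
        # thesaurus relations (broader/related), in the spirit of drawing on an
        # authority file such as the Schlagwortnormdatei. Toy data only.
        TOY_THESAURUS = {
            "retrieval test": {"broader": ["evaluation"], "related": ["recall", "precision"]},
            "indexing": {"broader": ["information organisation"], "related": ["subject headings"]},
        }

        def expand_index_terms(terms, thesaurus, relations=("broader", "related")):
            """Return the original index terms plus terms reachable via the
            selected thesaurus relations (one hop only)."""
            expanded = set(terms)
            for term in terms:
                for relation in relations:
                    expanded.update(thesaurus.get(term, {}).get(relation, []))
            return sorted(expanded)

        print(expand_index_terms(["retrieval test", "indexing"], TOY_THESAURUS))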

    Language Models and Smoothing Methods for Information Retrieval

    Najeeb A. Abdulmutalib. Abstract of the dissertation: Designing an effective retrieval model that can rank documents accurately for a given query has been a central problem in information retrieval for several decades. An optimal retrieval model is needed that is both effective and efficient and that can learn from feedback information over time. Language models are a new generation of retrieval models and have been applied over the last ten years to many different information retrieval problems. Compared with traditional models such as the vector space model, they can be more easily adapted to model non-traditional and complex retrieval problems, and empirically they tend to achieve comparable or better performance than the traditional models. Developing new language models is currently an active research area in information retrieval.

    In the first stage of this thesis we present a new language model based on an odds formula, which explicitly incorporates document length as a parameter. To address the problem of data sparsity, where there is rarely enough data to accurately estimate the parameters of a language model, smoothing offers a way to combine less specific but more accurate information with more specific but noisier data. We introduce a new smoothing method called exponential smoothing, which can be combined with most language models. We present experimental results for various language models and smoothing methods on a collection with large document length variation, and show that our new methods compare favourably with the best approaches known so far. We also discuss the effect of the collection on the retrieval function, investigating the performance of well-known models and comparing results obtained on two variant collections.

    In the second stage we extend the model from flat text retrieval to XML retrieval, since there is a need for content-oriented XML retrieval systems that can efficiently and effectively store, search, and retrieve information from XML document collections. Compared to traditional information retrieval, where whole documents are usually indexed and retrieved as single complete units, information retrieval from XML documents creates additional retrieval challenges: by exploiting the logical document structure, XML allows for more focussed retrieval that identifies elements rather than documents as answers to user queries.

    Finally, we show how smoothing plays a role very similar to that of the idf function: besides its obvious role, smoothing also improves the accuracy of the estimated language model. The within-document frequency and the collection frequency of a term actually influence the probability of relevance, which led us to a new class of smoothing functions based on numeric prediction, which we call empirical smoothing. Its retrieval quality outperforms that of other smoothing methods.
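
    The abstract does not spell out the odds-based model or the exponential and empirical smoothing functions, so the sketch below only illustrates the general mechanism it appeals to: a query-likelihood language model in which smoothing mixes sparse document statistics with collection statistics. Standard Jelinek-Mercer smoothing is used as a stand-in, and the toy documents and mixing weight are assumptions.

        # Query-likelihood language model with Jelinek-Mercer smoothing:
        #   P(t|d) = (1 - lam) * tf(t,d)/|d| + lam * cf(t)/|C|
        # Smoothing combines the sparse document model with the collection
        # model, which is the role the abstract ascribes to smoothing.
        import math
        from collections import Counter

        def lm_score(query_terms, doc_terms, collection_counts, collection_len, lam=0.5):
            tf = Counter(doc_terms)
            dlen = len(doc_terms)
            score = 0.0
            for t in query_terms:
                p_doc = tf[t] / dlen if dlen else 0.0
                p_coll = collection_counts.get(t, 0) / collection_len
                p = (1 - lam) * p_doc + lam * p_coll
                if p > 0:
                    score += math.log(p)
            return score

        # Toy collection with strongly varying document lengths
        docs = {
            "d1": "language models for retrieval".split(),
            "d2": ("smoothing methods for language models in information "
                   "retrieval with varying document length").split(),
        }
        coll = Counter(t for d in docs.values() for t in d)
        coll_len = sum(coll.values())

        query = "smoothing language models".split()
        ranking = sorted(docs, key=lambda d: lm_score(query, docs[d], coll, coll_len),
                         reverse=True)
        print(ranking)  # document ids ordered by smoothed query likelihood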

    A study of the kinematics of probabilities in information retrieval

    In Information Retrieval (IR), probabilistic modelling refers to the use of a model that ranks documents in decreasing order of their estimated probability of relevance to a user's information need expressed by a query. In an IR system based on a probabilistic model, the user is guided to examine first the documents that are most likely to be relevant to his need. If the system performs well, these documents should be at the top of the retrieved list. In mathematical terms, the problem consists of estimating the probability P(R | q,d), that is, the probability of relevance given a query q and a document d. This estimate should be computed for every document in the collection, and documents should then be ranked according to this measure. For this evaluation the system should make use of all the information available in the indexing term space.

    This thesis contains a study of the kinematics of probabilities in probabilistic IR. The aim is to gain better insight into the behaviour of the probabilistic models of IR currently in use and to propose new and more effective models by exploiting different kinematics of probabilities. The study is performed from both a theoretical and an experimental point of view.

    Theoretically, the thesis explores the use of the probability of a conditional, namely P(d → q), to estimate the conditional probability P(R | q,d). This is achieved by interpreting the term space in the context of the "possible worlds semantics". Previous approaches in this direction took as their basic assumption that "a document is a possible world". In this thesis a different approach is adopted, based on the assumption that "a term is a possible world". This approach enables the exploitation of term-term semantic relationships in the term space, estimated using an information-theoretic measure; this form of information is rarely used in IR at retrieval time. Two new models of IR are proposed, based on two different ways of estimating P(d → q) using a logical technique called imaging. The first model is called Retrieval by Logical Imaging; the second is called Retrieval by General Logical Imaging, being a generalisation of the first. The probability kinematics of these two models is compared with that of two other proposed models: the Retrieval by Joint Probability model and the Retrieval by Conditional Probability model. These last two models mimic the probability kinematics of the Vector Space model and of the Probabilistic Retrieval model.

    Experimentally, the retrieval effectiveness of the above four models is analysed and compared using five test collections of different sizes and characteristics. The results of this experimentation depend heavily on the choice of term weighting and term similarity measures adopted.

    The most important conclusion of this thesis is that, theoretically, a probability transfer that takes into account the semantic similarity between the probability donor and the probability recipient is more effective than a probability transfer that does not. In the context of IR, this is equivalent to saying that models that exploit the semantic similarity between terms in the term space at retrieval time are more effective than models that do not. Unfortunately, while the experimental investigation carried out using small test collections provides evidence supporting this conclusion, experiments performed using larger test collections do not provide as much supporting evidence (although they do not provide contrasting evidence either). The peculiar characteristics of the term space of different collections play an important role in shaping the effects that different probability kinematics have on the effectiveness of the retrieval process. This result suggests the necessity and usefulness of further investigation into more complex and optimised models of probabilistic IR, where probability kinematics follows non-classical approaches. The models proposed in this thesis are just two such approaches; others can be developed using recent results achieved in other fields, such as non-classical logics and belief revision theory.
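
    The probability-transfer step behind Retrieval by Logical Imaging can be sketched as follows: under the "a term is a possible world" reading, imaging on a document moves each term's prior probability to the most similar term occurring in that document, and P(d → q) is the total mass that lands on query terms. The priors and similarity values below are toy assumptions, not the estimates used in the thesis.

        # Illustrative sketch (not the thesis code) of probability transfer in
        # Retrieval by Logical Imaging: each term donates its prior probability
        # to the closest term occurring in the document; the score is the mass
        # that ends up on query terms.
        def imaging_score(query, doc, prior, sim):
            score = 0.0
            for t, p in prior.items():
                recipient = max(doc, key=lambda u: sim(t, u))  # imaging on doc
                if recipient in query:
                    score += p
            return score

        vocab_prior = {"bank": 0.4, "river": 0.3, "money": 0.2, "water": 0.1}
        toy_sim = {("river", "water"): 0.9, ("bank", "money"): 0.8}

        def sim(a, b):
            if a == b:
                return 1.0
            return toy_sim.get((a, b), toy_sim.get((b, a), 0.1))

        doc = {"river", "bank"}
        query = {"river"}
        # "river" keeps its mass and "water" transfers to "river": 0.3 + 0.1 = 0.4
        print(imaging_score(query, doc, vocab_prior, sim))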

    POLIS: a probabilistic summarisation logic for structured documents

    As the availability of structured documents, formatted in markup languages such as SGML, RDF, or XML, increases, retrieval systems increasingly focus on the retrieval of document elements rather than entire documents. Additionally, abstraction layers in the form of formalised retrieval logics have allowed developers to include search facilities in numerous applications without needing detailed knowledge of retrieval models. Although automatic document summarisation has been recognised as a useful tool for reducing the workload of information system users, very few such abstraction layers have been developed for the task of automatic document summarisation.

    This thesis describes the development of an abstraction logic for summarisation, called POLIS, which provides users (such as developers or knowledge engineers) with high-level access to summarisation facilities. Furthermore, POLIS allows users to exploit the hierarchical information provided by structured documents. The development of POLIS is carried out step by step. We start by defining a series of probabilistic summarisation models, which assign weights to document elements at a user-selected level; these summarisation models are those accessible through POLIS. The formal definition of POLIS is performed in three steps: we first provide a syntax for POLIS, through which users and knowledge engineers interact with the logic; this is followed by a definition of the logic's semantics; finally, we provide details of an implementation of POLIS.

    The final chapters of this dissertation are concerned with the evaluation of POLIS, which is conducted in two stages. First, we evaluate the performance of the summarisation models by applying POLIS to two test collections, the DUC AQUAINT corpus and the INEX IEEE corpus. This is followed by application scenarios for POLIS, in which we discuss how POLIS can be used in specific IR tasks.
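
    As a rough illustration of assigning weights to document elements at a user-selected level of a structured document, the sketch below scores XML elements with a chosen tag by the fraction of their tokens that match a set of topic terms; the sample XML, tag names, and the frequency-based weight are assumptions for illustration, not the POLIS summarisation models.

        # Hedged sketch: weighting elements of a structured document at a
        # user-selected level (here, "sec" elements of a toy XML article).
        import xml.etree.ElementTree as ET
        from collections import Counter

        DOC = """<article>
          <sec><p>retrieval of structured documents</p><p>summarisation logic</p></sec>
          <sec><p>evaluation with test collections</p></sec>
        </article>"""

        def element_weights(xml_text, level_tag, topic_terms):
            """Weight each element with tag `level_tag` by the fraction of its
            tokens that belong to `topic_terms` (a crude relevance surrogate)."""
            root = ET.fromstring(xml_text)
            weights = []
            for elem in root.iter(level_tag):
                tokens = " ".join(elem.itertext()).split()
                hits = sum(Counter(tokens)[t] for t in topic_terms)
                weights.append((hits / len(tokens) if tokens else 0.0, elem))
            return sorted(weights, key=lambda w: w[0], reverse=True)

        for weight, elem in element_weights(DOC, "sec", {"summarisation", "retrieval"}):
            print(round(weight, 2), ET.tostring(elem, encoding="unicode")[:60])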

    Semantic Approaches for Knowledge Discovery and Retrieval in Biomedicine
