
    Queensland University of Technology at TREC 2005

    The Information Retrieval and Web Intelligence (IR-WI) research group is a research team at the Faculty of Information Technology, QUT, Brisbane, Australia. The IR-WI group participated in the Terabyte and Robust tracks at TREC 2005, both for the first time. For the Robust track we applied our existing information retrieval system, originally designed for structured (XML) retrieval, to the domain of document retrieval. For the Terabyte track we experimented with an open source IR system, Zettair, and performed two types of experiments. First, we compared Zettair’s performance on a high-powered supercomputer with its performance on a distributed system across seven midrange personal computers. Second, we compared Zettair’s performance when a standard TREC title query is used with its performance on a natural language query and on a query expanded with synonyms. We compare the systems in terms of both efficiency and retrieval performance. Our results indicate that the distributed system is faster than the supercomputer while slightly decreasing retrieval performance, that natural language queries also slightly decrease retrieval performance, and that our query expansion technique significantly decreases performance.
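
    The abstract does not spell out how the synonym expansion was performed. The snippet below is a minimal illustrative sketch of synonym-based query expansion of the general kind compared in these experiments; it is not the group's actual technique or Zettair's interface, and the synonym table is a made-up placeholder.

```python
# Illustrative sketch of synonym-based query expansion (an assumption:
# not the paper's actual technique and not part of Zettair's interface).
def expand_query(query, synonyms):
    """Append known synonyms of each query term to the original query."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(synonyms.get(term, []))
    return " ".join(expanded)

# Toy example: a TREC-style title query expanded with a made-up synonym table.
print(expand_query("car emissions", {"car": ["automobile", "vehicle"]}))
```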

    Enhancing access to the Bibliome: the TREC 2004 Genomics Track

    BACKGROUND: The goal of the TREC Genomics Track is to improve information retrieval in the area of genomics by creating test collections that will allow researchers to improve and better understand failures of their systems. The 2004 track included an ad hoc retrieval task, simulating use of a search engine to obtain documents about biomedical topics. This paper describes the Genomics Track of the Text Retrieval Conference (TREC) 2004, a forum for evaluation of IR research systems, where retrieval in the genomics domain has recently begun to be assessed. RESULTS: A total of 27 research groups submitted 47 different runs. The most effective runs, as measured by the primary evaluation measure of mean average precision (MAP), used a combination of domain-specific and general techniques. The best MAP obtained by any run was 0.4075. Techniques that expanded queries with gene name lists as well as words from related articles had the best efficacy. However, many runs performed more poorly than a simple baseline run, indicating that careful selection of system features is essential. CONCLUSION: Various approaches to ad hoc retrieval provide a diversity of efficacy. The TREC Genomics Track and its test collection resources provide tools that allow improvement in information retrieval systems.
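
    Mean average precision (MAP), the primary measure cited above, is the mean over topics of each topic's average precision. The sketch below is a minimal illustration of that definition; the function names are assumptions and this is not the track's official evaluation software.

```python
# Minimal sketch of mean average precision (MAP); names are illustrative,
# not the track's official evaluation software.
def average_precision(ranked_doc_ids, relevant_doc_ids):
    """AP for one topic: mean of precision@k over the ranks k holding a relevant document."""
    hits, precision_sum = 0, 0.0
    for k, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_doc_ids:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant_doc_ids) if relevant_doc_ids else 0.0

def mean_average_precision(runs, qrels):
    """MAP over topics; runs maps topic -> ranked doc ids, qrels maps topic -> relevant doc ids."""
    return sum(average_precision(runs[t], qrels[t]) for t in runs) / len(runs)

# Toy example: relevant documents retrieved at ranks 1 and 3 give AP = (1/1 + 2/3) / 2 ≈ 0.83.
print(average_precision(["d1", "d7", "d3"], {"d1", "d3"}))
```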

    User Variability and IR System Evaluation

    Test collection design eliminates sources of user variability to make statistical comparisons among information retrieval (IR) systems more affordable. Does this choice unnecessarily limit generalizability of the outcomes to real usage scenarios? We explore two aspects of user variability with regard to evaluating the relative performance of IR systems, assessing effectiveness in the context of a subset of topics from three TREC collections, with the embodied information needs categorized against three levels of increasing task complexity. First, we explore the impact of widely differing queries that searchers construct for the same information need description. By executing those queries, we demonstrate that query formulation is critical to query effectiveness. The results also show that the range of scores characterizing effectiveness for a single system arising from these queries is comparable to or greater than the range of scores arising from variation among systems using only a single query per topic. Second, our experiments reveal that searchers display substantial individual variation in the numbers of documents and queries they anticipate needing to issue, and there are significant underlying differences in these numbers in line with increasing task complexity levels. Our conclusion is that test collection design would be improved by the use of multiple query variations per topic, and could be further improved by the use of metrics that are sensitive to the expected numbers of useful documents.
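
    As a hedged illustration of the comparison described above, the snippet below contrasts the spread of an effectiveness score across query variations on a single system with the spread across systems that each use one canonical query. All numbers are made-up placeholders, not results from the paper.

```python
# Made-up placeholder scores illustrating the comparison, not paper results.
def score_range(scores):
    """Spread of an effectiveness metric (e.g. average precision) over a set of runs."""
    return max(scores) - min(scores)

# One system run with several user-formulated query variations for the same topic.
single_system_query_variations = [0.12, 0.31, 0.08, 0.27, 0.19]

# Several systems each run with the single canonical query for that topic.
multiple_systems_single_query = [0.22, 0.25, 0.28, 0.24]

print("range across query variations:", score_range(single_system_query_variations))
print("range across systems:", score_range(multiple_systems_single_query))
```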

    Detection and management of redundancy for information retrieval

    The growth of the web, authoring software, and electronic publishing has led to the emergence of a new type of document collection that is decentralised, amorphous, dynamic, and anarchic. In such collections, redundancy is a significant issue. Documents can spread and propagate across such collections without any control or moderation. Redundancy can interfere with the information retrieval process, leading to decreased user amenity in accessing information from these collections, and thus must be effectively managed. The precise definition of redundancy varies with the application. We restrict ourselves to documents that are co-derivative: those that share a common heritage, and hence contain passages of common text. We explore document fingerprinting, a well-known technique for the detection of co-derivative document pairs. Our new lossless fingerprinting algorithm improves the effectiveness of a range of document fingerprinting approaches. We empirically show that our algorithm can be highly effective at discovering co-derivative document pairs in large collections. We study the occurrence and management of redundancy in a range of application domains. On the web, we find that document fingerprinting is able to identify widespread redundancy, and that this redundancy has a significant detrimental effect on the quality of search results. Based on user studies, we suggest that redundancy is most appropriately managed as a postprocessing step on the ranked list and explain how and why this should be done. In the genomic area of sequence homology search, we explain why the existing techniques for redundancy discovery are increasingly inefficient, and present a critique of the current approaches to redundancy management. We show how document fingerprinting with a modified version of our algorithm provides significant efficiency improvements, and propose a new approach to redundancy management based on wildcards. We demonstrate that our scheme provides the benefits of existing techniques but does not have their deficiencies. Redundancy in distributed information retrieval systems - where different parts of the collection are searched by autonomous servers - cannot be effectively managed using traditional fingerprinting techniques. We thus propose a new data structure, the grainy hash vector, for redundancy detection and management in this environment. We show in preliminary tests that the grainy hash vector is able to accurately detect a good proportion of redundant document pairs while maintaining low resource usage.
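
    The thesis's lossless fingerprinting algorithm and grainy hash vector are not detailed in the abstract. The sketch below shows generic shingle-based document fingerprinting, the well-known family of techniques it builds on; the shingle length, hash function, and selection modulus are illustrative assumptions.

```python
# Generic shingle-based fingerprinting (not the thesis's lossless algorithm
# or grainy hash vector); shingle length and selection modulus are assumptions.
import hashlib

def fingerprints(text, shingle_len=8, keep_mod=4):
    """Hash every word shingle; keep a deterministic subset of hashes as the fingerprint."""
    words = text.lower().split()
    shingles = (" ".join(words[i:i + shingle_len])
                for i in range(len(words) - shingle_len + 1))
    hashes = {int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16) for s in shingles}
    return {h for h in hashes if h % keep_mod == 0}

def resemblance(doc_a, doc_b):
    """Fingerprint overlap as a rough signal that two documents are co-derivative."""
    fa, fb = fingerprints(doc_a), fingerprints(doc_b)
    return len(fa & fb) / len(fa | fb) if (fa or fb) else 0.0
```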

    Identifying effective translations for cross-lingual Arabic-to-English user-generated speech search

    Cross Language Information Retrieval (CLIR) systems are a valuable tool to enable speakers of one language to search for content of interest expressed in a different language. A group for whom this is of particular interest is bilingual Arabic speakers who wish to search for English language content using information needs expressed in Arabic queries. A key challenge in CLIR is crossing the language barrier between the query and the documents. The most common approach to bridging this gap is automated query translation, which can be unreliable for vague or short queries. In this work, we examine the potential for improving CLIR effectiveness by predicting the translation effectiveness using Query Performance Prediction (QPP) techniques. We propose a novel QPP method to estimate the quality of translation for an Arabic-English Cross-lingual User-generated Speech Search (CLUGS) task. We present an empirical evaluation that demonstrates the quality of our method on alternative translation outputs extracted from an Arabic-to-English Machine Translation system developed for this task. Finally, we show how this framework can be integrated into CLUGS to find relevant translations for improved retrieval performance.
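
    As a rough sketch of the selection step described above (and not the paper's proposed QPP method), the snippet below scores several candidate English translations of a query with a simple pre-retrieval predictor, average inverse document frequency, and keeps the highest-scoring one. All names and statistics are placeholders.

```python
# Average-IDF stand-in for a query performance predictor; not the paper's QPP method.
import math

def avg_idf(query, doc_freq, num_docs):
    """Pre-retrieval predictor: mean inverse document frequency of the query terms."""
    terms = query.lower().split()
    if not terms:
        return 0.0
    return sum(math.log(num_docs / (1 + doc_freq.get(t, 0))) for t in terms) / len(terms)

def pick_translation(candidate_translations, doc_freq, num_docs):
    """Keep the candidate translation predicted to be the most effective query."""
    return max(candidate_translations, key=lambda q: avg_idf(q, doc_freq, num_docs))
```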

    Index ordering by query-independent measures

    There is an ever-increasing amount of data being produced from various data sources, and this data must be organised effectively if we hope to search through it. Traditional information retrieval approaches search through all available data in a particular collection in order to find the most suitable results; however, for particularly large collections this may be extremely time consuming. Our proposed solution to this problem is to search only a limited portion of the collection at query time, in order to speed up the retrieval process, while limiting the loss in retrieval efficacy (in terms of accuracy of results). We do this by first identifying the most “important” documents within the collection, and then sorting the documents in order of their importance. In this way we can choose to limit the amount of information to search through by eliminating the documents of lesser importance, which should not only make the search more efficient but should also limit any loss in retrieval accuracy. In this thesis we investigate various query-independent methods that may indicate the importance of a document in a collection. The more accurate the measure is at identifying important documents, the more effectively we can eliminate documents from the retrieval process, improving the query throughput of the system while providing a high level of accuracy in the returned results. The effectiveness of these approaches is evaluated using the datasets provided by the Terabyte track at the Text REtrieval Conference (TREC).
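
    A minimal sketch of the general idea, assuming a precomputed query-independent importance score per document (the thesis's specific measures are not named in the abstract): order the collection by that score and evaluate queries over only a top fraction of the ordering.

```python
# Placeholder importance scores and query scores; the thesis's actual
# query-independent measures are not specified in the abstract.
def order_by_importance(doc_ids, importance):
    """Sort document ids by a static, query-independent score, highest first."""
    return sorted(doc_ids, key=lambda d: importance[d], reverse=True)

def pruned_search(query_score, ordered_doc_ids, fraction=0.2, k=10):
    """Score only the top `fraction` of the static ordering and return the best k documents."""
    cutoff = max(1, int(len(ordered_doc_ids) * fraction))
    candidates = ordered_doc_ids[:cutoff]
    return sorted(candidates, key=query_score, reverse=True)[:k]

# Toy usage: importance could be, for example, an in-link count per document.
importance = {"d1": 3, "d2": 9, "d3": 1, "d4": 7, "d5": 5}
ordered = order_by_importance(list(importance), importance)
query_scores = {"d2": 0.4, "d4": 0.9, "d5": 0.1}
print(pruned_search(lambda d: query_scores.get(d, 0.0), ordered, fraction=0.6, k=2))
```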

    Multi-Stage Search Architectures for Streaming Documents

    The web is becoming more dynamic due to the increasing engagement and contribution of Internet users in the age of social media. A more dynamic web presents new challenges for web search, an important application of Information Retrieval (IR). A stream of new documents constantly flows into the web at a high rate, adding to the old content. In many cases, documents quickly lose their relevance. In these time-sensitive environments, finding relevant content in response to user queries requires a real-time search service: immediate availability of content for search and fast ranking, which in turn requires an optimized search architecture. These aspects of today's web are at odds with how academic IR researchers have traditionally viewed the web, as a collection of static documents. Moreover, search architectures have received little attention in the IR literature. Therefore, academic IR research, for the most part, does not provide a mechanism to efficiently handle a high-velocity stream of documents, nor does it facilitate real-time ranking. This dissertation addresses the aforementioned shortcomings. We present an efficient mechanism to index a stream of documents, thereby enabling immediate availability of content. Our indexer works entirely in main memory and provides a mechanism to control inverted list contiguity, thereby enabling faster retrieval. Additionally, we consider document ranking with a machine-learned model, dubbed "Learning to Rank" (LTR), and introduce a novel multi-stage search architecture that enables fast retrieval and allows for more design flexibility. The stages of our architecture include candidate generation (top k retrieval), feature extraction, and document re-ranking. We compare this architecture with a traditional monolithic architecture where candidate generation and feature extraction occur together. As we lay out our architecture, we present optimizations to each stage to facilitate low-latency ranking. These optimizations include a fast approximate top k retrieval algorithm, document vectors for feature extraction, architecture-conscious implementations of tree ensembles for LTR using predication and vectorization, and algorithms to train tree-based LTR models that are fast to evaluate. We also study the efficiency-effectiveness tradeoffs of these techniques, and empirically evaluate our end-to-end architecture on microblog document collections. We show that our techniques improve efficiency without degrading quality.
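
    The following schematic sketches the three-stage flow named above: candidate generation (top-k retrieval), feature extraction, and re-ranking with a learned model. Each stage is passed in as a plain function standing in for the dissertation's optimized implementations; nothing here reproduces the actual system.

```python
# Each stage is a plain function stand-in for the dissertation's optimized components.
def multi_stage_search(query, candidate_gen, extract_features, rerank_model, k=1000, final_k=10):
    # Stage 1: cheap candidate generation, e.g. (approximate) top-k retrieval.
    candidates = candidate_gen(query, k)

    # Stage 2: extract richer features for each (query, document) pair.
    features = [extract_features(query, doc) for doc in candidates]

    # Stage 3: score candidates with the learned ranking model and re-sort.
    scores = rerank_model(features)
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:final_k]]
```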

    Creating a Spanish-language scientific literature test collection for evaluating information retrieval systems

    Evaluating retrieval systems requires test collections composed of a corpus of documents, a set of information needs (topics), and relevance judgments. These make it possible to evaluate different strategies and systems, since they allow the nature of the results to be understood, compared with other results, and experiments to be reproduced under identical conditions. Building a collection requires substantial human effort, since it cannot be done entirely automatically. This work lays out guidelines for constructing a public-domain Spanish-language test collection from research articles in informatics and computer science. The first objective of this collection, intended for the evaluation of ad hoc retrieval, is to make available to the university community a corpus of semi-structured documents that enables the evaluation of different search strategies. Moreover, since information retrieval is a rapidly growing topic, we expect that its incorporation as an undergraduate subject in various degree programs will be considered in the coming years, and we therefore believe this corpus would be a useful teaching resource for laboratory exercises. A second objective is to collect and process as many scientific articles published in Spanish as possible and to create a larger collection that supports research on various aspects of information retrieval, such as information extraction, classification, question answering, and automatic summarization, among others. We present a methodology for selecting the documents, marking up their structure, and creating the topics and relevance judgments, together with a first trial on a small set of documents.
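
    The abstract describes topics and relevance judgments but not their file format. Assuming a common TREC-style qrels layout of one "topic_id iteration doc_id relevance" record per line (an assumption, not the collection's documented format), a small loader might look like this:

```python
# Assumed TREC-style qrels layout ("topic_id iteration doc_id relevance" per line);
# not the collection's documented format.
from collections import defaultdict

def load_qrels(path):
    """Map each topic id to the set of document ids judged relevant (relevance > 0)."""
    qrels = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip blank or malformed lines
            topic_id, _iteration, doc_id, relevance = parts
            if int(relevance) > 0:
                qrels[topic_id].add(doc_id)
    return qrels
```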
