
    Experiments in terabyte searching, genomic retrieval and novelty detection for TREC 2004

    In TREC 2004, Dublin City University took part in three tracks: Terabyte (in collaboration with University College Dublin), Genomics and Novelty. In this paper we discuss each track separately and present separate conclusions for each. In addition, we give a general description of a text retrieval engine that we developed over the past year to support our experiments in large-scale, distributed information retrieval, and which underlies all of the track experiments described in this document.

    A comparative analysis of 21 literature search engines

    With the increasing number of bibliographic search tools, scientists and health professionals must either make a subjective choice of tool(s) that might suit their needs or face the challenge of analyzing multiple features of a plethora of search programs. There is an urgent need for a thorough comparative analysis of the available bio-literature scanning tools from the user's perspective. We report the results of the first semi-quantitative comparison of 21 programs that can search published (partial or full-text) documents in the life sciences. The observations can help life science researchers and medical professionals make an informed selection among the programs, depending on their search objectives.
Some of the important findings are: 
1. Most of the hits obtained from Scopus, ReleMed, EBIMed, CiteXplore, and HighWire Press were relevant, i.e. these tools showed better precision than the other tools (precision and recall are illustrated in the sketch below, after this list). 
2. However, the largest numbers of relevant citations were retrieved by HighWire Press, Google Scholar, CiteXplore and PubMed Central (i.e. these tools had better recall). 
3. HighWire Press and CiteXplore seemed to offer a good balance of precision and recall. 
4. PubMed Central, PubMed and Scopus provided the most useful query systems. 
5. GoPubMed, BioAsk, EBIMed and ClusterMed could be the most useful among the tools that automatically process the retrieved citations for further scanning of bio-entities such as proteins, diseases, tissues, molecular interactions, etc. 
The authors suggest using PubMed, Scopus, Google Scholar and HighWire Press for better coverage, and GoPubMed to view the hits categorized by MeSH and Gene Ontology terms. The article is relevant to all life science subjects.
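
Precision and recall, as used in the comparison above, can be computed from the set of citations a tool retrieves and the set of citations judged relevant. The following minimal sketch is for illustration only; the citation identifiers and per-tool result sets are hypothetical and not taken from the study.

```python
# Minimal sketch of the precision/recall trade-off described above.
# The citation identifiers and per-tool result sets are hypothetical.

def precision(retrieved, relevant):
    """Fraction of retrieved citations that are relevant."""
    return len(set(retrieved) & set(relevant)) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of relevant citations that were retrieved."""
    return len(set(retrieved) & set(relevant)) / len(relevant) if relevant else 0.0

# Hypothetical judgements for one query: the citations that are actually relevant.
relevant_ids = {"pmid:101", "pmid:102", "pmid:103", "pmid:104"}

# A high-precision tool returns few hits, most of them relevant; a high-recall
# tool returns many hits and covers more of the relevant set.
high_precision_tool = ["pmid:101", "pmid:102", "pmid:999"]
high_recall_tool = ["pmid:101", "pmid:102", "pmid:103", "pmid:104",
                    "pmid:201", "pmid:202", "pmid:203", "pmid:204"]

for name, run in [("high-precision tool", high_precision_tool),
                  ("high-recall tool", high_recall_tool)]:
    print(f"{name}: P={precision(run, relevant_ids):.2f} "
          f"R={recall(run, relevant_ids):.2f}")
```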

    A test collection for evaluating retrieval of studies for inclusion in systematic reviews

    This paper introduces a test collection for evaluating the effectiveness of different methods used to retrieve research studies for inclusion in systematic reviews. Systematic reviews appraise and synthesise studies that meet specific inclusion criteria. Systematic reviews intended for a biomedical science audience use Boolean queries with many, often complex, search clauses to retrieve studies; these are then manually screened to determine eligibility for inclusion in the review. This process is expensive and time-consuming. The development of systems that improve retrieval effectiveness will have an immediate impact by reducing the complexity and resources required for this process. Our test collection consists of approximately 26 million research studies extracted from the freely available MEDLINE database, 94 review (query) topics extracted from Cochrane systematic reviews, and corresponding relevance assessments. Tasks for which the collection can be used for information retrieval system evaluation are described, and the use of the collection to evaluate common baselines within one such task is demonstrated. The test collection is available at https://github.com/ielab/SIGIR2017-PICO-Collection
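
A collection of this form (documents, topics and relevance assessments) lends itself to standard TREC-style evaluation. The sketch below is an assumption-laden illustration rather than part of the collection's tooling: it assumes the relevance assessments are packaged as a TREC qrels file and that a baseline run is stored in the usual six-column run format, and it computes per-topic recall, the metric that matters most for systematic reviews because missing an eligible study is costly. The file names qrels.txt and bm25_baseline.run are placeholders.

```python
# Sketch: per-topic recall from a TREC-style qrels file and a run file.
# File names and formats are assumptions about how the data might be packaged.
from collections import defaultdict

def load_qrels(path):
    """qrels line format: <topic> <iteration> <doc_id> <relevance>"""
    relevant = defaultdict(set)
    with open(path) as f:
        for line in f:
            topic, _, doc_id, rel = line.split()
            if int(rel) > 0:
                relevant[topic].add(doc_id)
    return relevant

def load_run(path):
    """run line format: <topic> Q0 <doc_id> <rank> <score> <tag>"""
    retrieved = defaultdict(list)
    with open(path) as f:
        for line in f:
            topic, _, doc_id, _, _, _ = line.split()
            retrieved[topic].append(doc_id)
    return retrieved

def per_topic_recall(qrels, run):
    """Fraction of relevant studies retrieved, per review topic."""
    return {topic: len(set(run.get(topic, [])) & rel_docs) / len(rel_docs)
            for topic, rel_docs in qrels.items() if rel_docs}

if __name__ == "__main__":
    qrels = load_qrels("qrels.txt")          # placeholder file name
    run = load_run("bm25_baseline.run")      # placeholder file name
    recalls = per_topic_recall(qrels, run)
    print(f"mean recall over {len(recalls)} topics: "
          f"{sum(recalls.values()) / len(recalls):.3f}")
```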

    PhenDisco: phenotype discovery system for the database of genotypes and phenotypes.

    The database of genotypes and phenotypes (dbGaP) developed by the National Center for Biotechnology Information (NCBI) is a resource that contains information on various genome-wide association studies (GWAS) and is currently available via NCBI's dbGaP Entrez interface. The database is an important resource, providing GWAS data that can be used for new exploratory research or cross-study validation by authorized users. However, finding studies relevant to a particular phenotype of interest is challenging, as phenotype information is presented in a non-standardized way. To address this issue, we developed PhenDisco (phenotype discoverer), a new information retrieval system for dbGaP. PhenDisco consists of two main components: (1) text processing tools that standardize phenotype variables and study metadata, and (2) information retrieval tools that support queries from users and return ranked results. In a preliminary comparison involving 18 search scenarios, PhenDisco showed promising performance for both unranked and ranked search comparisons with dbGaP's search engine Entrez. The system can be accessed at http://pfindr.net
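
The abstract does not describe the standardization component at the implementation level, but the general idea of mapping heterogeneous phenotype variable labels onto a single preferred term can be illustrated with a small hypothetical sketch. The synonym table and variable names below are invented for illustration and are not PhenDisco's actual mappings.

```python
# Hypothetical sketch of phenotype-variable standardization: map free-text
# variable labels from different studies onto one preferred term.
import re

# Toy synonym table; PhenDisco's real mappings are not described in the abstract.
SYNONYMS = {
    "type 2 diabetes": "type 2 diabetes mellitus",
    "t2d": "type 2 diabetes mellitus",
    "niddm": "type 2 diabetes mellitus",
    "bmi": "body mass index",
    "body-mass index": "body mass index",
}

def standardize(label):
    """Lower-case, strip stray punctuation, and map through the synonym table."""
    key = re.sub(r"[^a-z0-9 -]", "", label.lower()).strip()
    return SYNONYMS.get(key, key)

variables = ["T2D", "Type 2 Diabetes", "NIDDM", "Body-Mass Index"]
print({v: standardize(v) for v in variables})
# All three diabetes labels collapse onto "type 2 diabetes mellitus", so one
# query term can match studies that named the same phenotype differently.
```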

    Integrating Medical Ontology and Pseudo Relevance Feedback For Medical Document Retrieval

    Get PDF
    The purpose of this thesis is to improve the accuracy of locating relevant documents within a large collection of Electronic Medical Data (EMD). The goal of this research is to propose a new way of using a medical ontology so that patients have an easier and more reliable approach to understanding their diseases, and so that doctors can find and further improve possible methods of diagnosis and treatment. The empirical studies are based on a health-care dataset provided by CLEF. In this research, I used information retrieval to find relevant information within the large data sets provided by CLEF, and then used the ranking functionality of the Terrier platform to score and evaluate the matching documents in the collection. BM25 was used as the baseline weighting model to retrieve results, and a pseudo-relevance feedback weighting model was applied, drawing on patients' health histories and medical records, in order to obtain more accurate results. I then used the Unified Medical Language System (UMLS) to support indexing of queries when searching for health-related documents. UMLS links the system to health and biomedical terms and vocabularies and acts as a dictionary for patients by translating medical terms. In future work, I would like to use a medical ontology to create relationships between the medical documents and the retrieved results.
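
As an illustration of the retrieval pipeline described above, rather than the thesis's actual Terrier configuration, the following sketch scores documents with BM25 and then expands the query with terms drawn from the top-ranked documents, which is the basic idea behind pseudo-relevance feedback. The corpus, query and parameters are toy examples.

```python
# Illustrative BM25 scoring with a simple pseudo-relevance feedback step.
# Corpus, query and parameters are toy examples, not the thesis's CLEF setup.
import math
from collections import Counter

docs = {
    "d1": "patient history of asthma treated with inhaled steroids",
    "d2": "diagnosis and treatment of chronic asthma in adults",
    "d3": "retrieval of electronic medical records for cardiology patients",
}
tokenized = {d: text.split() for d, text in docs.items()}
N = len(docs)
avgdl = sum(len(t) for t in tokenized.values()) / N
df = Counter(term for toks in tokenized.values() for term in set(toks))

def bm25(query_terms, k1=1.2, b=0.75):
    """Rank all documents against the query terms with the BM25 formula."""
    scores = {}
    for doc_id, toks in tokenized.items():
        tf = Counter(toks)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
            score += idf * norm
        scores[doc_id] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def prf_expand(query_terms, first_pass, top_docs=1, new_terms=3):
    """Pseudo-relevance feedback: add frequent terms from the top-ranked docs."""
    feedback = Counter()
    for doc_id, _ in first_pass[:top_docs]:
        feedback.update(t for t in tokenized[doc_id] if t not in query_terms)
    return list(query_terms) + [t for t, _ in feedback.most_common(new_terms)]

query = ["asthma", "treatment"]
first_pass = bm25(query)
expanded = prf_expand(query, first_pass)
print("expanded query:", expanded)
print("re-ranked:", bm25(expanded))
```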