
    Modelling the usefulness of document collections for query expansion in patient search

    Dealing with medical terminology is a challenge when searching for patients based on the relevance of their medical records to a given query. Existing work used query expansion (QE) to extract expansion terms from different document collections to improve the query representation. However, the usefulness of particular document collections for QE was not measured and taken into account during retrieval. In this work, we investigate two automatic approaches that measure and leverage the usefulness of document collections when exploiting multiple collections to improve the query representation. These two approaches are based on resource selection and learning-to-rank techniques, respectively. We evaluate our approaches using the TREC Medical Records track’s test collection. Our results show the potential of the proposed approaches: they can effectively exploit 14 different document collections, including both domain-specific (e.g. MEDLINE abstracts) and generic (e.g. blogs and webpages) collections, and significantly outperform existing effective baselines, including the best systems participating in the TREC Medical Records track. Our analysis shows that the different collections are not equally useful for QE, and that our two approaches can automatically and effectively weight the usefulness of expansion terms extracted from the different collections. This is the author accepted manuscript. The final version is available from ACM via http://dx.doi.org/10.1145/2806416.280661
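    The core idea in this abstract, combining expansion terms from several collections while weighting each collection by its usefulness, can be sketched as follows. This is a minimal illustration only: the term scores, collection names, and weights below are hypothetical, and the paper's actual resource-selection and learning-to-rank models are not reproduced here.

    ```python
    from collections import defaultdict

    def combine_expansion_terms(per_collection_terms, collection_weights, k=5):
        """Merge candidate expansion terms from several document collections,
        scaling each term's score by its source collection's usefulness weight.

        per_collection_terms: {collection: {term: score}}
        collection_weights:   {collection: usefulness weight}
        Returns the top-k (term, combined_score) pairs.
        """
        combined = defaultdict(float)
        for coll, terms in per_collection_terms.items():
            w = collection_weights.get(coll, 0.0)
            for term, score in terms.items():
                combined[term] += w * score
        return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)[:k]

    # Hypothetical expansion-term scores from two collections.
    terms = {
        "medline": {"myocardial": 0.9, "infarction": 0.8, "heart": 0.4},
        "blogs":   {"heart": 0.7, "attack": 0.6, "myocardial": 0.2},
    }
    # Hypothetical usefulness weights, e.g. as a resource-selection
    # or learning-to-rank model might assign them.
    weights = {"medline": 0.8, "blogs": 0.3}
    top = combine_expansion_terms(terms, weights, k=3)
    ```

    A term supported by a highly weighted collection (here, MEDLINE) dominates the final ranking, which is the effect the paper attributes to measuring collection usefulness rather than treating all collections equally.
    
    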

    Can Word Embedding Help Term Mismatch Problem?–A Result Analysis on Clinical Retrieval Tasks

    Clinical Decision Support (CDS) systems assist doctors in making clinical decisions by searching for medical literature based on patients’ medical records. Past studies showed that correctly predicting a patient’s diagnosis can significantly increase the performance of such clinical retrieval systems. However, our studies showed that a large portion of relevant documents are still ranked very low due to the term mismatch problem. Unlike in other retrieval tasks, queries issued to this clinical retrieval system have already been expanded with the most informative terms for disease prediction. It is therefore a great challenge for traditional Pseudo Relevance Feedback (PRF) methods to incorporate new informative terms from the top-K pseudo-relevant documents. Consequently, in this paper we explore word embedding for obtaining further improvements, because the word vectors are trained on much larger collections and can identify words that are used in similar contexts. Our study used test collections from the CDS track in TREC 2015, with training on the 2014 data. Experimental results show that word embedding can significantly improve retrieval performance, and that the term mismatch problem can be largely resolved, particularly for low-ranked relevant documents. However, for highly ranked documents with less term mismatch, word embedding’s improvement can also be matched by a traditional language model.
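    The word-embedding approach in this abstract relies on the fact that words used in similar contexts have nearby vectors, so near neighbors of a query term become expansion candidates. A minimal sketch of that idea, using cosine similarity over toy 3-dimensional vectors (the vocabulary and vector values are invented for illustration; real embeddings would be trained on a large corpus, as the paper notes):

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def expand_query(query_terms, embeddings, top_n=2):
        """Return candidate expansion terms: vocabulary words whose embedding
        is closest to any query term's embedding, excluding the query terms
        themselves. Each candidate keeps its best similarity to the query."""
        candidates = {}
        for q in query_terms:
            if q not in embeddings:
                continue
            for word, vec in embeddings.items():
                if word in query_terms:
                    continue
                sim = cosine(embeddings[q], vec)
                candidates[word] = max(candidates.get(word, 0.0), sim)
        return sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    # Toy embeddings: the two breathing-related words sit near "dyspnea",
    # while the unrelated word points in a different direction.
    emb = {
        "dyspnea":    [0.9, 0.1, 0.0],
        "breathless": [0.85, 0.15, 0.05],
        "shortness":  [0.8, 0.2, 0.1],
        "invoice":    [0.0, 0.1, 0.95],
    }
    exp = expand_query(["dyspnea"], emb, top_n=2)
    ```

    Because the candidates come from embedding neighborhoods rather than from the top-K retrieved documents, this kind of expansion can supply terms that PRF misses when the pseudo-relevant documents themselves suffer from term mismatch.
    
    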