
    Utilizing Knowledge Bases In Information Retrieval For Clinical Decision Support And Precision Medicine

    Accurately answering queries that describe a clinical case and aim at finding articles in a collection of medical literature requires utilizing knowledge bases to capture the many explicit and latent aspects of such queries. Properly representing these aspects requires knowledge-based query understanding methods that identify the most important query concepts, as well as knowledge-based query reformulation methods that add new concepts to a query. In the tasks of Clinical Decision Support (CDS) and Precision Medicine (PM), the query and collection documents may have a complex structure with different components, such as disease and genetic variants, that should be transformed to enable effective information retrieval. In this work, we propose methods for representing domain-specific queries based on weighted concepts of different types, whether they exist in the query itself or are extracted from knowledge bases and top retrieved documents. In addition, we propose an optimization framework that unifies query analysis and expansion by jointly determining the importance weights for the query and expansion concepts depending on their type and source. We also propose a probabilistic model to reformulate the query given genetic information in the query and collection documents. We observe significant improvements in retrieval accuracy for our proposed methods over state-of-the-art baselines on the tasks of clinical decision support and precision medicine.
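The weighted-concept representation described in this abstract can be sketched as follows. All concept types, sources, and weight values here are hypothetical stand-ins for illustration, not the thesis's learned parameters.

```python
# Sketch: score a document against a query represented as weighted concepts.
# Concept weights depend on type (e.g. disease, gene) and source (query text,
# knowledge base, top retrieved documents); all names and weights below are
# illustrative assumptions, not values from the thesis.

# hypothetical per-(type, source) importance weights, of the kind the
# described optimization framework would determine jointly
TYPE_SOURCE_WEIGHTS = {
    ("disease", "query"): 1.0,
    ("gene", "query"): 0.75,
    ("disease", "knowledge_base"): 0.5,
    ("gene", "top_documents"): 0.3,
}

def score_document(doc_concepts, query_concepts):
    """Sum the importance weights of query/expansion concepts found in the doc."""
    score = 0.0
    for concept, ctype, source in query_concepts:
        if concept in doc_concepts:
            score += TYPE_SOURCE_WEIGHTS.get((ctype, source), 0.1)
    return score

query = [
    ("melanoma", "disease", "query"),
    ("BRAF", "gene", "query"),
    ("skin neoplasm", "disease", "knowledge_base"),  # expansion concept
]
doc = {"melanoma", "BRAF", "vemurafenib"}
print(score_document(doc, query))  # 1.0 + 0.75 = 1.75
```

Concepts absent from the document contribute nothing, so the same framework covers both original query concepts and expansion concepts drawn from knowledge bases or feedback documents.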

    Biomedical information extraction for matching patients to clinical trials

    Digital medical information has grown astonishingly over the last decades, driven by an unprecedented number of medical writers, which has led to a complete revolution in what and how much information is available to health professionals. The problem with this wave of information is that making a precise selection of the information retrieved from medical information repositories is exhausting and time-consuming for physicians. This is one of the biggest challenges for physicians in the new digital era: how to reduce the time spent finding the best-matching document for a patient (e.g. intervention articles, clinical trials, prescriptions). Precision Medicine (PM) 2017 is the track of the Text REtrieval Conference (TREC) that focuses on this type of challenge exclusively for oncology. Using a dataset with a large number of clinical trials, this track is a good real-life example of how information retrieval solutions can be used to solve these types of problems, and a very good starting point for applying information extraction and retrieval methods in a very complex domain. The purpose of this thesis is to improve a system designed by the NovaSearch team for the TREC PM 2017 Clinical Trials task, which ranked among the top five systems of 2017. The NovaSearch team also participated in the 2018 track and achieved a 15% increase in precision compared to 2017. Multiple IR techniques were used for information extraction and processing of data, including rank fusion, query expansion (e.g. pseudo-relevance feedback, MeSH term expansion) and experiments with Learning to Rank (LETOR) algorithms. Our goal is to retrieve the best possible set of trials for a given patient, using precise document filters to exclude unwanted clinical trials.
This work can open doors for searching and understanding the criteria to exclude or include trials, helping physicians even in the most complex and difficult information retrieval tasks.
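One common rank-fusion technique of the kind mentioned in this abstract is reciprocal rank fusion (RRF), which combines several ranked runs without needing comparable scores. The sketch below assumes hypothetical trial IDs; it is an illustration of RRF in general, not the NovaSearch system's exact configuration.

```python
# Sketch of reciprocal rank fusion (RRF): each document's fused score is
# the sum over runs of 1 / (k + rank), with k a smoothing constant.
from collections import defaultdict

def rrf(rankings, k=60):
    """Fuse several ranked lists of document IDs into one ranking."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical runs: a baseline retrieval and a query-expanded retrieval
bm25_run = ["NCT001", "NCT002", "NCT003"]
expanded_run = ["NCT002", "NCT001", "NCT004"]
print(rrf([bm25_run, expanded_run]))
```

Documents ranked highly by several runs rise to the top, which is why fusion tends to be robust when individual runs disagree.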

    The Ensemble MESH-Term Query Expansion Models Using Multiple LDA Topic Models and ANN Classifiers in Health Information Retrieval

    Information retrieval in the health field has several challenges. Health information terminology is difficult for consumers (laypeople) to understand. Formulating a query with professional terms is not easy for consumers because health-related terms are more familiar to health professionals. If health terms related to a query were automatically added, it would help consumers find relevant information. The proposed query expansion (QE) models show how to expand a query using MeSH (Medical Subject Headings) terms. The documents were represented by the MeSH terms included in the full-text articles (i.e. Bag-of-MeSH), and these MeSH terms were then used to generate LDA (Latent Dirichlet Allocation) topic models. A query and the top k retrieved documents were used to find MeSH terms as topic words related to the query. LDA topic words were filtered by 1) threshold values of topic probability (TP) and word probability (WP) or 2) an ANN (Artificial Neural Network) classifier. Threshold values were effective in an LDA model with a specific number of topics for increasing IR performance in terms of infAP (inferred Average Precision) and infNDCG (inferred Normalized Discounted Cumulative Gain), which are common IR metrics for large data collections with incomplete judgments. The top k words were chosen by a word score based on (TP × WP) and retrieved-document ranking in an LDA model with specific thresholds. The QE model with specific thresholds for TP and WP showed improved mean infAP and infNDCG scores in an LDA model compared with the baseline result. However, the threshold values optimized for a particular LDA model did not perform well in other LDA models with different numbers of topics. An ANN classifier was employed to overcome this dependence of the QE model on LDA thresholds by automatically categorizing MeSH terms (positive/negative/neutral) for QE. ANN classifiers were trained on word features related to the LDA model and the collection.
Two types of QE models using an LDA model and an ANN classifier were proposed: 1) Word Score Weighting (WSW), where the probability of being a positive/negative/neutral word was used to weight the original word score, and 2) Positive Word Selection (PWS), where positive words were identified by the ANN classifier. Forty WSW models showed better average mean infAP and infNDCG scores than the PWS models when the top 7 words were selected for QE. Both approaches based on a binary ANN classifier were statistically significantly effective in increasing infAP and infNDCG compared with the scores of the baseline run. A 3-class classifier performed worse than the binary classifier. The proposed ensemble QE models integrated multiple ANN classifiers with multiple LDA models. Ensemble QE models combined multiple WSW/PWS models and one or multiple classifiers. Multiple classifiers were more effective in selecting relevant words for QE than one classifier. In ensemble QE (WSW/PWS) models, the top k words added to the original queries were effective in increasing infAP and infNDCG scores. The ensemble QE model (WSW) using three classifiers showed statistically significant improvements in mean infAP and infNDCG scores for 30 queries when the top 3 words were added. The ensemble QE model (PWS) using four classifiers showed statistically significant improvements in mean infAP and infNDCG scores for 30 queries.
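The TP × WP scoring and threshold filtering described above can be sketched as follows. The topic labels, MeSH terms, probabilities, and threshold values are illustrative stand-ins, not the dissertation's fitted models.

```python
# Sketch: select MeSH expansion terms by the TP * WP word score, filtered
# by topic-probability (TP) and word-probability (WP) thresholds. All
# numbers and term names below are illustrative assumptions.

def select_expansion_terms(topic_probs, word_probs, tp_min=0.2, wp_min=0.05, top_k=3):
    """Return up to top_k MeSH terms scored by TP * WP, after thresholding."""
    scored = {}
    for topic, tp in topic_probs.items():
        if tp < tp_min:            # drop topics weakly related to the query
            continue
        for term, wp in word_probs[topic].items():
            if wp < wp_min:        # drop words weakly related to the topic
                continue
            scored[term] = max(scored.get(term, 0.0), tp * wp)
    return [t for t, _ in sorted(scored.items(), key=lambda x: -x[1])[:top_k]]

topic_probs = {"t0": 0.6, "t1": 0.3, "t2": 0.1}  # TP inferred for a query
word_probs = {                                   # WP per topic
    "t0": {"Neoplasms": 0.20, "Biopsy": 0.04},
    "t1": {"Melanoma": 0.30, "Neoplasms": 0.10},
    "t2": {"Skin": 0.50},
}
print(select_expansion_terms(topic_probs, word_probs))  # ['Neoplasms', 'Melanoma']
```

The ANN-based variants in the abstract replace the fixed thresholds with a classifier's positive/negative/neutral decision per term, which avoids re-tuning thresholds for each LDA topic count.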

    Neural Representations of Concepts and Texts for Biomedical Information Retrieval

    Information retrieval (IR) methods are an indispensable tool in the current landscape of exponentially increasing textual data, especially on the Web. A typical IR task involves fetching and ranking a set of documents (from a large corpus) in terms of relevance to a user's query, which is often expressed as a short phrase. IR methods are the backbone of modern search engines, where additional system-level aspects including fault tolerance, scale, user interfaces, and session maintenance are also addressed. In addition to fetching documents, modern search systems may also identify snippets within the documents that are potentially most relevant to the input query. Furthermore, current systems may also maintain preprocessed structured knowledge derived from textual data as so-called knowledge graphs, so certain types of queries that are posed as questions can be parsed as such; a response can then be an output of one or more named entities instead of a ranked list of documents (e.g., what diseases are associated with EGFR mutations?). This refined setup is often termed question answering (QA) in the IR and natural language processing (NLP) communities. In biomedicine and healthcare, specialized corpora are often at play, including research articles by scientists, clinical notes generated by healthcare professionals, consumer forums for specific conditions (e.g., cancer survivors network), and clinical trial protocols (e.g., www.clinicaltrials.gov). Biomedical IR is specialized in that the types of queries and the variations in the texts differ from those of general Web documents. For example, scientific articles are more formal with longer sentences, but clinical notes tend to have less grammatical conformity and are rife with abbreviations. There is also a mismatch between the vocabulary of consumers and the lingo of domain experts and professionals.
Queries are also different and can range from simple phrases (e.g., COVID-19 symptoms) to more complex implicitly fielded queries (e.g., chemotherapy regimens for stage IV lung cancer patients with ALK mutations). Hence, developing methods for different configurations (corpus, query type, user type) needs more deliberate attention in biomedical IR. Representations of documents and queries are at the core of IR methods, and retrieval methodology involves coming up with these representations and matching queries with documents based on them. Traditional IR systems follow the approach of keyword-based indexing of documents (the so-called inverted index) and matching query phrases against the document index. It is not difficult to see that this keyword-based matching ignores the semantics of texts (synonymy at the lexeme level and entailment at the phrase/clause/sentence levels), and this has led to dimensionality reduction methods such as latent semantic indexing that generally have scale-related concerns; such methods also do not address similarity at the sentence level. Since the resurgence of neural network methods in NLP, the IR field has also moved to incorporate advances in neural networks into current IR methods. This dissertation presents four specific methodological efforts toward improving biomedical IR. Neural methods always begin with dense embeddings for words and concepts to overcome the limitations of one-hot encoding in traditional NLP/IR. In the first effort, we present a new neural pre-training approach to jointly learn word and concept embeddings for downstream use in applications. In the second study, we present a joint neural model for two essential subtasks of information extraction (IE): named entity recognition (NER) and entity normalization (EN). Our method detects biomedical concept phrases in texts and links them to the corresponding semantic types and entity codes.
These first two studies provide essential tools to model textual representations as compositions of both surface forms (lexical units) and high-level concepts, with potential downstream use in QA. In the third effort, we present a document reranking model that can help surface documents that are likely to contain answers (e.g., factoids, lists) to a question in a QA task. The model is essentially a sentence-matching neural network that learns the relevance of a candidate answer sentence to the given question, parametrized with a bilinear map. In the fourth effort, we present another document reranking approach that is tailored for precision medicine use cases. It combines neural query-document matching and faceted text summarization. The main distinction of this effort from the previous ones is the pivot from a query manipulation setup to transforming candidate documents into pseudo-queries via neural text summarization. Overall, our contributions constitute nontrivial advances in biomedical IR using neural representations of concepts and texts.
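The bilinear matching idea from the third effort can be written compactly: the relevance of an answer sentence a to a question q is scored as qᵀWa, where W is a learned parameter matrix. The sketch below uses random stand-in embeddings and an untrained W purely to show the shape of the computation.

```python
# Sketch of bilinear question-answer matching: score = q^T W a, where q and a
# are dense sentence embeddings and W is a learned bilinear map. The vectors
# and matrix here are random placeholders, not trained model weights.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
q = rng.standard_normal(dim)         # question embedding (placeholder)
a = rng.standard_normal(dim)         # candidate answer-sentence embedding
W = rng.standard_normal((dim, dim))  # bilinear map (would be learned)

score = q @ W @ a                    # scalar relevance score
print(float(score))
```

In training, W would be optimized so that sentences containing the answer score higher than non-answer sentences for the same question; at inference, candidate sentences are ranked by this score.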

    Promoting understandability in consumer health information search

    Nowadays, in the area of Consumer Health Information Retrieval, techniques and methodologies are still far from being effective in answering complex health queries. One main challenge comes from the varying and limited medical-knowledge background of consumers; the language gap between non-expert consumers and complex medical resources confuses them. Returning health information that is not only topically relevant but also understandable is therefore a significant and practical challenge in this area. In this work, the main research goal is to study ways to promote understandability in Consumer Health Information Retrieval. To help reach this goal, two research questions are posed: (i) how to bridge the existing language gap; (ii) how to return more understandable documents. Two modules are designed, each answering one research question. In the first module, a Medical Concept Model is proposed for use in health query processing; this model integrates Natural Language Processing techniques into state-of-the-art Information Retrieval. Moreover, aiming to integrate syntactic and semantic information, word embedding models are explored as query expansion resources. The second module is designed to learn understandability from past data; a two-stage learning to rank model is proposed, with rank aggregation methods applied to single field-based ranking models. These proposed modules are assessed on the FIRE'2016 CHIS track data and the CLEF'2016-2018 eHealth IR data collections. Extensive experimental comparisons with state-of-the-art baselines on the considered data collections confirmed the effectiveness of the proposed approaches: regarding understandability relevance, the improvement is 11.5%, 9.3% and 16.3% in the RBP, uRBP and uRBPgr evaluation metrics, respectively; regarding topical relevance, the improvement is 7.8%, 16.4% and 7.6% in the P@10, NDCG@10 and MAP evaluation metrics, respectively.
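Rank aggregation over single field-based rankers, as used in the second module, can be illustrated with a simple Borda count. The field names, documents, and scoring scheme below are illustrative assumptions, not the thesis's exact aggregation method.

```python
# Sketch of rank aggregation over field-based rankers (e.g. rankings built
# from title, body, and summary fields) using a Borda count: each ranker
# awards (n - rank) points, and documents are sorted by total points.
from collections import defaultdict

def borda_aggregate(field_rankings):
    """Aggregate several per-field rankings into one consensus ranking."""
    points = defaultdict(int)
    for ranking in field_rankings.values():
        n = len(ranking)
        for rank, doc in enumerate(ranking):
            points[doc] += n - rank
    return sorted(points, key=points.get, reverse=True)

rankings = {
    "title":   ["d1", "d2", "d3"],
    "body":    ["d2", "d1", "d3"],
    "summary": ["d2", "d3", "d1"],
}
print(borda_aggregate(rankings))  # ['d2', 'd1', 'd3']
```

A document favored by most field rankers ends up first even if no single ranker is fully trusted, which is the appeal of aggregation over heterogeneous evidence such as topical and understandability signals.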

    Information filtering in high velocity text streams using limited memory - An event-driven approach to text stream analysis

    This dissertation is concerned with the processing of high-velocity text streams using event processing means. It comprises a scientific approach for combining the areas of information filtering and event processing. In order to process text streams with event-driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate. Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis was described using event-driven concepts. In a second step, a reference architecture was designed that describes the essential architectural components required for designing information filtering and text stream analysis systems in an event-driven manner. Further, a set of architectural patterns for building event-driven text analysis systems was derived to support the design and implementation of such systems. Subsequently, a prototype was built on these theoretical foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that corpora based on small sliding windows are similar to those based on larger windows and can thus be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams showed that event stream summary statistics can provide interesting insights into the characteristics of high-velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and evaluated using standard information retrieval performance measures.
The goal was to study the efficiency of traditional as well as new algorithms within the given context of high-velocity text stream data, in order to provide advice on which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.
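A limited-memory, event-driven filter of the kind studied here can be sketched as follows: each text event enters a bounded sliding window, term statistics are kept only for the window's contents, and the event is matched against a search profile with a simple threshold. Class, terms, and threshold are illustrative assumptions, not the dissertation's components.

```python
# Sketch: sliding-window filtering of a text stream with limited memory.
# Only the most recent `window_size` events are kept; term counts for
# evicted events are subtracted so memory stays bounded.
from collections import Counter, deque

class SlidingWindowFilter:
    """Matches incoming text events against a search profile."""

    def __init__(self, window_size, profile_terms, threshold):
        self.window_size = window_size
        self.window = deque()          # the bounded event window
        self.term_counts = Counter()   # term statistics over the window
        self.profile_terms = set(profile_terms)
        self.threshold = threshold

    def process(self, tokens):
        """Ingest one tokenized event; return True if it matches the profile."""
        if len(self.window) == self.window_size:
            evicted = self.window.popleft()          # limited memory:
            self.term_counts -= Counter(evicted)     # forget the oldest event
        self.window.append(tokens)
        self.term_counts += Counter(tokens)
        overlap = sum(1 for t in tokens if t in self.profile_terms)
        return overlap / max(len(tokens), 1) >= self.threshold

f = SlidingWindowFilter(window_size=2, profile_terms={"flu", "outbreak"}, threshold=0.5)
print(f.process(["flu", "outbreak", "reported"]))  # 2/3 >= 0.5 -> True
print(f.process(["weather", "sunny"]))             # 0/2 -> False
```

The window's term counts could also feed the term-weighting and threshold-adaptation components mentioned above, since they summarize the stream's recent vocabulary in constant space.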

    Simulation and Modeling for Improving Access to Care for Underserved Populations

    Indiana University-Purdue University Indianapolis (IUPUI)
    This research, through partnership with seven Community Health Centers (CHCs) in Indiana, constructed effective outpatient appointment scheduling systems by determining the care needs of CHC patients, designing an infrastructure for meaningful use of patient health records and clinic operational data, and developing prediction and simulation models for improving access to care for underserved populations. The aims of this study are 1) redesigning appointment scheduling templates based on patient characteristics, diagnoses, and clinic capacities in underserved populations; 2) utilizing predictive modeling to better understand the complexity of appointment adherence in underserved populations; and 3) developing simulation models with complex data to guide operational decision-making in community health centers. This research addresses its aims by applying a multi-method approach drawing on different disciplines, such as statistics, industrial engineering, computer science, health informatics, and the social sciences. First, a novel method was developed to use Electronic Health Record (EHR) data to better understand the appointment needs of the target populations based on their characteristics and reasons for seeking care, which helped simplify, improve, and redesign current appointment type and duration models. Second, comprehensive and informative predictive models were developed to better understand appointment non-adherence in community health centers. Logistic regression, a naïve Bayes classifier, and an artificial neural network identified factors contributing to patient no-shows. Predictors of appointment non-adherence might be used by outpatient clinics to design interventions that reduce overall clinic no-show rates. Third, a simulation model was developed to assess and simulate scheduling systems in CHCs, and the steps necessary to extract information for simulation modeling of scheduling systems in CHCs are described.
Agent-Based Models were built in AnyLogic to test different scheduling-method scenarios and to identify how these scenarios could impact clinic access performance. This research potentially improves the well-being of, and the quality and timeliness of care for, uninsured, underinsured, and underserved patients, and it helps clinics predict appointment no-shows and ensure that scheduling systems can properly meet these populations' care needs.
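A no-show predictor of the logistic-regression kind described above can be sketched as a sigmoid over weighted features. The feature names, coefficients, and intercept below are hypothetical illustrations, not the study's fitted model.

```python
# Sketch: logistic-regression-style no-show risk score.
# P(no-show) = sigmoid(intercept + sum(coefficient * feature)).
# All coefficients and feature names are hypothetical, not fitted values.
import math

COEFFS = {"prior_no_shows": 0.8, "lead_time_days": 0.05, "is_new_patient": 0.4}
INTERCEPT = -2.0

def no_show_probability(features):
    """Return the predicted probability that an appointment is missed."""
    z = INTERCEPT + sum(COEFFS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = no_show_probability({"prior_no_shows": 3, "lead_time_days": 14, "is_new_patient": 1})
print(round(p, 3))  # sigmoid(1.5) -> 0.818
```

Such per-appointment probabilities are what a scheduling simulation can consume, e.g. to decide how aggressively to overbook a given clinic session.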

    Secondary Analysis of Electronic Health Records

    Health Informatics; Ethics; Data Mining and Knowledge Discovery; Statistics for Life Sciences, Medicine, Health Science