
    Exploiting semantics for improving clinical information retrieval

    Clinical information retrieval (IR) presents several challenges, including terminology mismatch and granularity mismatch. One of the main objectives in clinical IR is to bridge the semantic gap between queries and documents and to go beyond keyword matching. To address these issues, we use semantic information to improve the performance of clinical IR systems by representing queries in an expressive and meaningful context. We propose two novel approaches to modeling medical query contexts. The first derives the query context from mined semantic-based association rules (ARs): the rules that cover the query are selected, and their concepts are weighted according to their semantic relatedness to the query concepts. The second builds a query domain ontology by extracting from UMLS ontologies all concepts that have a semantic relationship with the query concept(s); the query context then consists of these concepts, weighted according to their semantic relatedness to the query concept(s). The query context is exploited in query expansion and re-ranking of patient records to improve clinical retrieval performance. We evaluate this approach on the TREC Medical Records dataset. Results show that our proposed approach significantly improves retrieval performance compared to a classic keyword-based IR model.
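The weighting scheme described above can be sketched in a few lines, assuming the semantic-relatedness scores between query concepts and candidate context concepts have already been computed (the concepts, scores, and threshold below are invented placeholders, not values from the study):

```python
# Sketch of concept-weighted query expansion. In the study the candidate
# concepts and relatedness scores would come from mined association rules
# or UMLS relations; here they are hard-coded for illustration.

def expand_query(query_terms, context_scores, threshold=0.5):
    """Append context concepts whose relatedness to the query exceeds
    `threshold`, each carrying its relatedness score as its weight."""
    expansion = {t: 1.0 for t in query_terms}      # original terms keep full weight
    for concept, score in context_scores.items():
        if concept not in expansion and score >= threshold:
            expansion[concept] = score             # weighted expansion term
    return expansion

# Hypothetical example: a query about myocardial infarction.
query = ["myocardial", "infarction"]
context = {"heart attack": 0.9, "troponin": 0.7, "fracture": 0.1}
weighted = expand_query(query, context)
```

The weighted terms would then feed the retrieval model's scoring function during expansion and re-ranking.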

    Visual Analytics of Electronic Health Records with a focus on Acute Kidney Injury

    The increasing use of electronic platforms in healthcare has resulted in the generation of unprecedented amounts of data in recent years. The amount of data available to clinical researchers, physicians, and healthcare administrators continues to grow, creating an untapped resource with the potential to improve the healthcare system drastically. Despite the enthusiasm for adopting electronic health records (EHRs), some recent studies have shown that EHR-based systems hardly improve the ability of healthcare providers to make better decisions. One reason for this inefficacy is that these systems do not support human-data interaction in a manner that fits the needs of healthcare providers. Another is information overload, which often leads healthcare providers to misunderstand, misinterpret, ignore, or overlook vital data. Visual analytics (VA), a type of computational system that combines advanced analytics techniques with interactive visualizations, has the potential to reduce the complexity of EHR data by supporting analysis, synthesis, and other high-level activities while allowing users to become more involved in a discourse with the data. The purpose of this research is to demonstrate the use of sophisticated visual analytics systems to solve various EHR-related research problems. This dissertation includes a framework by which we identify gaps in existing EHR-based systems and conceptualize the data-driven activities and tasks of our proposed systems. Two novel VA systems (VISA_M3R3 and VALENCIA) and two studies are designed to bridge these gaps. VISA_M3R3 incorporates multiple regression, frequent itemset mining, and interactive visualization to assist users in identifying nephrotoxic medications.
Another proposed system, VALENCIA, brings a wide range of dimension reduction and cluster analysis techniques to the analysis of high-dimensional EHRs, integrates them seamlessly, and makes them accessible through interactive visualizations. The two studies develop prediction models that classify patients at risk of developing acute kidney injury (AKI) and identify AKI-associated medications and medication combinations from EHRs. Using healthcare administrative datasets stored at ICES-KDT (Kidney Dialysis and Transplantation program), London, Ontario, we demonstrate how the proposed systems and prediction models can be used to solve real-world problems.
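The reduce-then-cluster pattern that VALENCIA exposes interactively can be sketched in plain NumPy; the PCA/k-means pairing and the synthetic data below are illustrative stand-ins, since the actual system offers many reduction and clustering techniques to choose from:

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)                     # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # scores in the reduced space

def kmeans(X, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns one cluster label per row."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Synthetic "patients": two groups of 20, each with 50 features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(5, 1, (20, 50))])
Z = pca_reduce(X)        # 40 patients -> 2 coordinates for visualization
labels = kmeans(Z, k=2)  # cluster in the reduced space
```

In the real system the analyst would steer both stages (choice of technique, number of components, number of clusters) through the visual interface rather than fixed parameters.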

    Ontologies and Computational Methods for Traditional Chinese Medicine

    Traditional Chinese Medicine (TCM) has been used for thousands of years in China for health maintenance, disease prevention, and the treatment of health problems. Several published studies support the effectiveness of TCM treatments, and the global use of TCM is constantly increasing. In China, Western and Chinese medicine are practiced in parallel. During the past few decades, the use of information technology in medicine has increased rapidly. The development of information technology has opened up new possibilities for information storage and sharing, as well as communication and interaction between people. Along with the growing use of information technology, a wide variety of patient databases and other electronic sources of information have emerged. However, the information is fragmented and dispersed across databases whose terminology is inconsistent, which hampers information retrieval and the development of applications that use the data. The objective of this thesis is to examine the position of TCM today and to find out what changes and new opportunities modern information technology brings for different aspects of TCM. The study describes how ontologies and semantic tools can be utilized to collect existing knowledge and combine different databases. Different computational methods and TCM expert systems are also introduced. Finally, the most recent internationally significant projects in the field of TCM are discussed and future challenges are considered. Computational methods for TCM, such as diagnostic tools and expert systems, could be very useful in anticipating and preventing health problems and in confirming a physician's diagnosis. E-science and knowledge discovery offer new ways for knowledge sharing and cooperation. TCM expert systems can be used to generate diagnoses or automatic clinical alerts. In the future, a comprehensive and easily accessible online health service system could be developed and used to improve people's health and well-being.
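The terminology problem the thesis describes, in which the same concept appears under different names in different databases, is exactly what an ontology's synonym mapping solves. A toy sketch (the herb names and identifiers below are made up for illustration, not real TCM ontology IDs):

```python
# Map free-text term variants from heterogeneous databases onto one
# canonical ontology identifier per concept. Real systems would load
# these mappings from an ontology rather than a hard-coded dict.

SYNONYMS = {
    "ginseng": "TCM:0001",
    "ren shen": "TCM:0001",
    "panax ginseng": "TCM:0001",
    "astragalus": "TCM:0002",
    "huang qi": "TCM:0002",
}

def normalize(records):
    """Replace free-text herb names with canonical ontology IDs."""
    return [SYNONYMS.get(r.lower(), "TCM:UNKNOWN") for r in records]

# Records pulled from three hypothetical databases with mixed naming.
merged = normalize(["Ginseng", "ren shen", "Huang Qi", "licorice"])
```

Once records from different sources share identifiers, they can be merged and queried as one knowledge base.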

    Dynamic And Quantitative Radiomics Analysis In Interventional Radiology

    Interventional Radiology (IR) is a subspecialty of radiology that performs invasive procedures guided by diagnostic imaging for predictive and therapeutic purposes. The development of artificial intelligence (AI) has transformed the field of IR: researchers have created sophisticated models backed by machine learning algorithms and optimization methodologies for image registration, cellular structure detection, and computer-aided disease diagnosis and prognosis prediction. However, conventional experience-based visual evaluation in IR has drawbacks, owing to the inability of the human eye to detect tiny structural characteristics and to inter-radiologist heterogeneity. Radiomics, a technique that utilizes machine learning, offers a practical and quantifiable solution: it provides an automated pipeline for extracting and analyzing high-throughput computational imaging characteristics from radiological images, and it has been used to evaluate tumor heterogeneity that is difficult to detect by eye. Nevertheless, applying radiomics directly in IR remains demanding because of the heterogeneity and complexity of medical imaging data. Furthermore, recent radiomics studies are based on static images, while many clinical applications (such as detecting the occurrence and development of tumors and assessing patient response to chemotherapy and immunotherapy) are dynamic processes. Merely incorporating static features cannot comprehensively reflect the metabolic characteristics and dynamic processes of tumors or soft tissues. To address these issues, we propose a robust feature selection framework to manage high-dimensional, small-sample data. In addition, we explore and propose a descriptor, from the viewpoints of computer vision and physiology, that integrates static radiomics features with time-varying information on tumor dynamics.
The major contributions of this study are as follows. Firstly, we construct a result-driven feature selection framework that efficiently reduces the dimension of the original feature set. The framework integrates different feature selection techniques to ensure the distinctiveness, uniqueness, and generalization ability of the output feature set. In the task of classifying hepatocellular carcinoma (HCC) versus intrahepatic cholangiocarcinoma (ICC) in primary liver cancer, only three radiomics features (chosen by the proposed framework from more than 1,800) achieve an AUC of 0.83 on an independent dataset. We also analyze the features' patterns and contributions to the results, enhancing the clinical interpretability of radiomics biomarkers. Secondly, we explore and build a pulmonary perfusion descriptor based on 18F-FDG whole-body dynamic PET images. Our major novelties are: 1) a physiology- and computer-vision-interpretable descriptor construction framework that decomposes spatiotemporal information into three dimensions: grey levels, textures, and dynamics; 2) spatio-temporal comparison of the pulmonary descriptor within and between patients is feasible, making it a possible auxiliary diagnostic tool in pulmonary function assessment; 3) compared with traditional PET metabolic biomarker analysis, the proposed descriptor incorporates the image's temporal information, enabling a better understanding of time-varying mechanisms and the detection of visual perfusion abnormalities across patients; 4) the proposed descriptor eliminates the impact of vascular branching structure and the gravity effect by using time warping algorithms. Our experimental results show that the proposed framework and descriptor are promising tools for medical imaging analysis.
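As a loose illustration of the result-driven idea (not the dissertation's actual pipeline), one can score each feature by how well it separates the classes on its own and then greedily keep top-scoring features that are not redundant with ones already kept. The data, scores, and thresholds below are synthetic:

```python
import numpy as np

def auc_score(x, y):
    """Orientation-free univariate AUC of feature x for binary labels y."""
    pos, neg = x[y == 1], x[y == 0]
    wins = ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())
    return max(wins, 1 - wins)

def select_features(X, y, k=3, max_corr=0.9):
    """Greedily keep the k most separating, mutually non-redundant features."""
    order = np.argsort([-auc_score(X[:, j], y) for j in range(X.shape[1])])
    kept = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) < max_corr for i in kept):
            kept.append(j)
        if len(kept) == k:
            break
    return kept

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 25)                       # 50 synthetic patients
X = rng.normal(0, 1, (50, 20))                  # 20 synthetic features
X[:, 3] += 2 * y                                # plant one informative feature
X[:, 7] = X[:, 3] + rng.normal(0, 0.05, 50)     # and a redundant near-copy
picked = select_features(X, y, k=3)
```

The redundancy filter is what keeps the output set small and distinctive: the near-copy of the informative feature is rejected even though it scores almost identically.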

    UWOMJ Volume 18, Number 2, March 1948

    Schulich School of Medicine & Dentistry

    Analysis of consciousness for complete locked-in syndrome patients

    This thesis presents methods for detecting consciousness in patients with complete locked-in syndrome (CLIS). CLIS patients are unable to speak and have lost all muscle movement; their internal brain activity cannot be easily perceived from the outside, yet they are considered to be still conscious and cognitively active. Detecting the current state of consciousness of CLIS patients is non-trivial, and it is difficult to ascertain whether they are conscious at any given moment. It is therefore vital to develop alternative ways to re-establish communication with these patients during periods of awareness, and a possible platform is a brain-computer interface (BCI). Since consciousness is required to use a BCI correctly, this study proposes a modus operandi for analyzing not only intracranial electrocorticography (ECoG) signals, which have a greater signal-to-noise ratio (SNR) and higher signal amplitude, but also non-invasive electroencephalography (EEG) signals. Three time-domain analysis approaches (sample entropy, permutation entropy, and the Poincaré plot) are applied for feature extraction, chosen to remain robust against the disease-related reduction of brainwave frequency bands in CLIS patients, and cross-validated to improve the probability of correctly detecting conscious states. Because no 'ground truth' is available to serve as a teaching signal for correcting the outcomes, the unsupervised learning methods k-Means and DBSCAN were used to reveal different levels of consciousness for each participant, first in locked-in state (LIS) patients with an ALSFRS-R score of 0. The results of these different methods converge on specific periods of consciousness in CLIS/LIS patients, coinciding with periods during which the patients were recorded communicating with an experimenter. To assess methodological feasibility, the methods were also applied to patients with disorders of consciousness (DOC).
The results indicate that sample entropy might help detect awareness not only in CLIS/LIS patients but also in patients in a minimally conscious state (MCS) or with unresponsive wakefulness syndrome (UWS), and it showed good resolution both for ECoG signals recorded up to 24 hours a day and for EEG signals focused on one or two hours around the time of the experiment. The thesis focuses on results that are consistent across multiple channels, to avoid compensatory effects of brain injury. Unlike most techniques, which are designed to help clinicians diagnose and track patients' long-term disease progression or distinguish between disease types on clinical scales of consciousness, this investigation aims to develop a reliable BCI-based communication aid that would eventually give family members a method for short-term communication with CLIS patients in daily life; at the same time, this would keep patients' brains active, increasing their will to live and improving their quality of life (QOL).
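Sample entropy, the first of the three time-domain features named above, can be sketched as follows. This is a textbook-style implementation with illustrative parameters (m=2, r=0.2 of the signal's standard deviation), not the thesis's exact code:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: negative log ratio of template matches of length
    m+1 to matches of length m, within Chebyshev tolerance r. Lower
    values indicate a more regular (more predictable) signal."""
    x = np.asarray(x, dtype=float)
    r *= x.std()                                    # tolerance scaled to signal
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.abs(t[:, None] - t[None, :]).max(axis=2)   # Chebyshev distances
        return (d <= r).sum() - len(t)              # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # periodic -> low SampEn
noisy = rng.normal(size=400)                        # white noise -> high SampEn
```

A periodic signal repeats its templates and scores low; white noise rarely repeats and scores high, which is what makes the measure usable as a consciousness-related feature independent of frequency-band power.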

    Computational methods for physiological data

    Thesis (Ph.D.), Harvard-MIT Division of Health Sciences and Technology, 2009. The author is also affiliated with the MIT Dept. of Electrical Engineering and Computer Science. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 177-188).
    Large volumes of continuous waveform data are now collected in hospitals. These datasets provide an opportunity to advance medical care, by capturing rare or subtle phenomena associated with specific medical conditions, and by providing fresh insights into disease dynamics over long time scales. We describe how progress in medicine can be accelerated through the use of sophisticated computational methods for the structured analysis of large multi-patient, multi-signal datasets. We propose two new approaches, morphologic variability (MV) and physiological symbolic analysis, for the analysis of continuous long-term signals. MV studies subtle micro-level variations in the shape of physiological signals over long periods. These variations, often dismissed as noise, can contain important information about the state of the underlying system. Symbolic analysis studies the macro-level information in signals by abstracting them into symbolic sequences. Converting continuous waveforms into symbolic sequences facilitates the development of efficient algorithms to discover high-risk patterns and patients who are outliers in a population. We apply our methods to the clinical challenge of identifying patients at high risk of cardiovascular mortality (almost 30% of all deaths worldwide each year). When evaluated on ECG data from over 4,500 patients, high MV was strongly associated with both cardiovascular death and sudden cardiac death. MV was a better predictor of these events than other ECG-based metrics, and these results were independent of information in echocardiography, clinical characteristics, and biomarkers. Our symbolic analysis techniques also identified groups of patients exhibiting varying risk of adverse outcomes. One group, with a particular set of symbolic characteristics, showed a 23-fold increased risk of death in the months following a mild heart attack, while another exhibited a 5-fold increased risk of future heart attacks.
    By Zeeshan Hassan Syed, Ph.D.
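The waveform-to-symbols abstraction can be illustrated with a SAX-style discretization; the window size, alphabet, and breakpoints below are illustrative choices, not the thesis's actual symbol definitions:

```python
import numpy as np

def symbolize(signal, window=10, breakpoints=(-0.43, 0.43)):
    """Z-normalize a waveform, average it over fixed windows, and map
    each window mean to a letter: 'a' = low, 'b' = mid, 'c' = high."""
    x = (signal - signal.mean()) / signal.std()
    means = x[: len(x) // window * window].reshape(-1, window).mean(axis=1)
    return "".join("abc"[np.searchsorted(breakpoints, m)] for m in means)

# A toy "waveform": two periods of a sine wave, 200 samples.
t = np.linspace(0, 4 * np.pi, 200)
symbols = symbolize(np.sin(t))
```

Once signals are strings, scanning a population for shared high-risk motifs or for outlier patients reduces to cheap substring and frequency operations, which is what makes the approach tractable on long recordings.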

    Facilitating and Enhancing Biomedical Knowledge Translation: An in Silico Approach to Patient-centered Pharmacogenomic Outcomes Research

    Current research paradigms such as traditional randomized controlled trials mostly rely on relatively narrow efficacy data, which yields high internal validity but low external validity. Given this, and the need to address many complex real-world healthcare questions in short periods of time, alternative research designs and approaches should be considered in translational research. In silico modeling studies, along with longitudinal observational studies, are feasible means of addressing the slow pace of translational research. There is therefore a need for an approach that tests newly discovered genetic tests via an in silico enhanced translational research model (iS-TR) to conduct patient-centered outcomes research and comparative effectiveness research (PCOR CER) studies. In this dissertation, it was hypothesized that retrospective EMR analysis, followed by mathematical modeling and simulation-based prediction, could facilitate and accelerate the generation and translation of pharmacogenomic knowledge on the comparative effectiveness of anticoagulation treatment plans tailored to well-defined target populations, eventually decreasing overall adverse risk and improving individual and population outcomes. To test this hypothesis, a simulation modeling framework (iS-TR) was proposed that takes advantage of longitudinal electronic medical records (EMRs) to translate pharmacogenomic anticoagulation knowledge and conduct PCOR CER studies. The accuracy of the model was demonstrated by reproducing the outcomes of two major randomized clinical trials for individualizing warfarin dosing. A substantial hospital healthcare use case demonstrating the value of iS-TR in addressing real-world anticoagulation PCOR CER challenges was also presented.
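The general shape of such an in silico comparison can be sketched as a Monte Carlo simulation over virtual patients. All parameters below (event rates, the genotype-sensitivity distribution, arm names) are invented placeholders for illustration, not values or methods from the dissertation:

```python
import random

def simulate(strategy_risk, n=10_000, seed=42):
    """Fraction of simulated patients experiencing an adverse event
    under a dosing strategy with a given baseline per-patient risk."""
    rng = random.Random(seed)
    events = 0
    for _ in range(n):
        sensitivity = rng.gauss(1.0, 0.2)          # stand-in genotype effect
        p = min(1.0, max(0.0, strategy_risk * sensitivity))
        events += rng.random() < p                 # Bernoulli adverse event
    return events / n

standard = simulate(strategy_risk=0.08)   # hypothetical fixed-dose arm
tailored = simulate(strategy_risk=0.05)   # hypothetical genotype-guided arm
```

A real iS-TR run would replace the placeholder distributions with parameters estimated from longitudinal EMRs and validate against trial outcomes, as the dissertation describes.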