    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method consisting of layers that transform data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use.
    Comment: 27 pages, 2 figures, 10 tables
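
    As context for readers newer to the method, here is a minimal sketch of the layered non-linear transformation the abstract describes: each layer applies a linear map followed by a non-linearity, and stacked layers build up hierarchical representations. The shapes, random weights and toy input are illustrative assumptions, not taken from any reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # element-wise non-linearity applied after each linear map
    return np.maximum(0.0, x)

# Hypothetical input: a batch of 4 "signals", each with 16 features
# (e.g., summary features derived from an ECG segment).
x = rng.normal(size=(4, 16))

# Two hidden layers and a sigmoid output for a binary outcome.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
W3, b3 = rng.normal(size=(4, 1)), np.zeros(1)

h1 = relu(x @ W1 + b1)                      # first non-linear transformation
h2 = relu(h1 @ W2 + b2)                     # a more abstract representation
p = 1.0 / (1.0 + np.exp(-(h2 @ W3 + b3)))   # predicted probability per input

print(p.ravel())  # four probabilities, one per input
```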

    Natural Language Processing of Clinical Notes on Chronic Diseases: Systematic Review

    Novel approaches that complement and go beyond evidence-based medicine are required in the domain of chronic diseases, given the growing incidence of such conditions in the worldwide population. A promising avenue is the secondary use of electronic health records (EHRs), where patient data are analyzed to conduct clinical and translational research. Machine learning methods for processing EHRs are improving our understanding of patient clinical trajectories and chronic disease risk prediction, creating a unique opportunity to derive previously unknown clinical insights. However, a wealth of clinical histories remains locked behind clinical narratives in free-form text. Consequently, unlocking the full potential of EHR data is contingent on the development of natural language processing (NLP) methods that automatically transform clinical text into structured clinical data that can guide clinical decisions and potentially delay or prevent disease onset.
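
    To make that final point concrete, the sketch below shows the simplest possible transformation of free-text narrative into structured fields. The note text, field names and regular expressions are all hypothetical, and real clinical NLP pipelines use far richer methods (entity recognition, negation detection, context handling).

```python
import re

note = ("Patient with type 2 diabetes. BP 142/88 mmHg today. "
        "HbA1c 7.9% on 2024-03-01. Denies chest pain.")

structured = {}

# Extract a blood pressure reading into numeric fields.
m = re.search(r"BP\s+(\d{2,3})/(\d{2,3})\s*mmHg", note)
if m:
    structured["systolic_bp"] = int(m.group(1))
    structured["diastolic_bp"] = int(m.group(2))

# Extract a lab value.
m = re.search(r"HbA1c\s+(\d+(?:\.\d+)?)%", note)
if m:
    structured["hba1c_percent"] = float(m.group(1))

# Flag a condition mention (no negation handling in this toy version).
structured["has_t2dm_mention"] = bool(re.search(r"type 2 diabetes", note, re.I))

print(structured)
# {'systolic_bp': 142, 'diastolic_bp': 88, 'hba1c_percent': 7.9, 'has_t2dm_mention': True}
```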

    Doctor of Philosophy

    Health information technology (HIT), in conjunction with quality improvement (QI) methodologies, can promote higher quality care at lower costs. Unfortunately, most inpatient hospital settings have been slow to adopt HIT and QI methodologies. Successful adoption requires close attention to workflow: the sequence of tasks and processes, and the set of people or resources needed for those tasks, that are necessary to accomplish a given goal. Assessing the impact on workflow is an important component of determining whether a HIT implementation will be successful, but little research has been conducted on the impact of eMeasure (electronic performance measure) implementation on workflow. One solution to implementation challenges such as the lack of attention to workflow is an implementation toolkit: an assembly of instruments such as checklists, forms, and planning documents. We developed an initial eMeasure Implementation Toolkit for the heart failure (HF) eMeasure to allow QI and information technology (IT) professionals and their teams to assess the impact of implementation on workflow. During the development phase of the toolkit, we undertook a literature review to determine its components and conducted stakeholder interviews with HIT and QI key informants and subject matter experts (SMEs) at the US Department of Veterans Affairs (VA). Key informants provided a broad understanding of the context of workflow during eMeasure implementation. Using snowball sampling, we also interviewed additional SMEs recommended by the key informants, who suggested tools and provided information essential to the toolkit development. The second phase involved evaluation of the toolkit for relevance and clarity by experts in non-VA settings, who assessed the sections of the toolkit containing the tools via a survey. The final toolkit provides a distinct set of resources and tools, iteratively developed during the research and available to users in a single source document. The research methodology provided a strong, unified, overarching implementation framework in the form of the Promoting Action on Research Implementation in Health Services (PARIHS) model, combined with a sociotechnical model of HIT that strengthened the overall design of the study.

    Natural Language Processing in Electronic Health Records in Relation to Healthcare Decision-making: A Systematic Review

    Background: Natural Language Processing (NLP) is widely used to extract clinical insights from Electronic Health Records (EHRs). However, the lack of annotated data, automated tools, and other challenges hinder the full utilisation of NLP for EHRs. Various Machine Learning (ML), Deep Learning (DL) and NLP techniques are studied and compared to understand the limitations and opportunities in this space comprehensively. Methodology: After screening 261 articles from 11 databases, we included 127 papers for full-text review, covering seven categories of articles: 1) medical note classification, 2) clinical entity recognition, 3) text summarisation, 4) deep learning (DL) and transfer learning architecture, 5) information extraction, 6) medical language translation and 7) other NLP applications. This study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Result and Discussion: EHR was the most commonly used data type among the selected articles, and the datasets were primarily unstructured. Various ML and DL methods were used, with prediction or classification being the most common application. The most common use cases were International Classification of Diseases, Ninth Revision (ICD-9) classification, clinical note analysis, and named entity recognition (NER) for clinical descriptions and research on psychiatric disorders. Conclusion: We find that the adopted ML models were not adequately assessed. In addition, data imbalance is an important underlying problem for which better techniques are still needed. Future studies should address key limitations of existing work, primarily in identifying lupus nephritis, suicide attempts and perinatal self-harm, and in ICD-9 classification.
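
    A toy version of the review's most common use case, ICD-9 classification of clinical notes, with class weighting as one simple response to the data imbalance problem the authors raise. The notes and code labels below are invented examples, and scikit-learn is assumed as the library; this is a sketch of the task, not a method from any reviewed study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy notes with their ICD-9 codes: 428 (heart failure),
# 410 (myocardial infarction), 486 (pneumonia).
notes = [
    "shortness of breath, bilateral ankle oedema, raised BNP",
    "crushing substernal chest pain radiating to left arm",
    "productive cough, fever, consolidation on chest x-ray",
    "dyspnoea on exertion, orthopnoea, reduced ejection fraction",
    "fever, purulent sputum, infiltrate right lower lobe",
    "chest pain at rest, troponin elevated",
]
icd9 = ["428", "410", "486", "428", "486", "410"]

# class_weight="balanced" reweights classes inversely to their frequency,
# a simple mitigation when some codes are rare.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
clf.fit(notes, icd9)

print(clf.predict(["worsening orthopnoea and peripheral oedema"]))  # likely ['428']
```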

    Doctor of Philosophy

    Electronic Health Records (EHRs) provide a wealth of information for secondary uses. Methods are developed to improve the usefulness of free-text querying and text processing, and their advantages are demonstrated for clinical research, specifically cohort identification and enhancement. Cohort identification is a critical early step in clinical research; problems arise when too few patients are identified or the cohort is a nonrepresentative sample. Methods of improving query formation through query expansion are described, and the inclusion of free-text search alongside structured data search is investigated to determine the incremental improvement over structured data search alone. Query expansion using topic- and synonym-based expansion improved information retrieval performance; an ensemble method was not successful. Adding free-text search to structured data search increased cohort size in all cases, with dramatic increases in some, and improved the representation of patients in subpopulations that might otherwise have been underrepresented. We demonstrate clinical impact by showing that a serious clinical condition, scleroderma renal crisis, can be predicted by adding free-text search. A novel information extraction algorithm, Regular Expression Discovery for Extraction (REDEx), is developed and evaluated for cohort enrichment and is shown to accurately extract information from free-text clinical narratives. Temporal expressions as well as bodyweight-related measures are extracted, and these extracted values identify additional patients and additional measurement occurrences that were not identifiable through structured data alone. The REDEx algorithm transfers the burden of machine learning training from annotators to domain experts. In summary, we developed automated query expansion methods that greatly improve the performance of keyword-based information retrieval; NLP methods for unstructured data that greatly increase cohort size, identify a more complete population, and detect important clinical conditions that are often missed otherwise; and a novel machine learning algorithm, REDEx, that efficiently extracts clinical values from unstructured clinical text, adding information and observations beyond what is available in structured data alone.
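
    REDEx itself discovers regular expressions from annotated examples, and its learning procedure is not reproduced here. The sketch below only illustrates the kind of hand-written patterns it targets, applied to the two value types the dissertation mentions (bodyweight measures and temporal expressions); the clinical text is invented.

```python
import re

text = ("Weight: 84.2 kg on 03/15/2019. Prior weight 86 kg recorded "
        "on 01/02/2019 during clinic visit.")

# Bodyweight-related measures: a number (optionally decimal) followed by "kg".
weights = re.findall(r"[Ww]eight:?\s*(\d+(?:\.\d+)?)\s*kg", text)

# Temporal expressions: here, just MM/DD/YYYY dates.
dates = re.findall(r"\b(\d{2}/\d{2}/\d{4})\b", text)

print(weights)  # ['84.2', '86']
print(dates)    # ['03/15/2019', '01/02/2019']
```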

    A systematic review of natural language processing applied to radiology reports

    NLP has a significant role in advancing healthcare and has been found to be key in extracting structured information from radiology reports. Understanding recent developments in the application of NLP to radiology is therefore important, but recent reviews are limited. This study systematically assesses recent literature on NLP applied to radiology reports. Our automated literature search yields 4,799 results, which are narrowed down using automated filtering, metadata-enrichment steps and citation search combined with manual review. Our analysis is based on 21 variables, including radiology characteristics, NLP methodology, performance, study characteristics and clinical application characteristics. We present a comprehensive analysis of the 164 publications retrieved, each categorised into one of six clinical application categories. Deep learning use is increasing, but conventional machine learning approaches are still prevalent; deep learning remains challenged when data are scarce, and there is little evidence of adoption into clinical practice. Although 17% of studies report F1 scores greater than 0.85, it is hard to evaluate these approaches comparatively given that most use different datasets. Only 14 studies made their data available and 15 their code, with 10 externally validating their results. Automated understanding of the clinical narratives in radiology reports has the potential to enhance the healthcare process, but reproducibility and explainability of models are important if the domain is to move applications into clinical use. More could be done to share code, enabling validation of methods on different institutional data, and to reduce heterogeneity in the reporting of study properties, allowing inter-study comparisons. Our results provide researchers with a systematic synthesis of existing work to build on, identifying gaps and opportunities for collaboration and avoiding duplication.
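
    For reference, the F1 score cited above is the harmonic mean of precision and recall. Because it depends on each dataset's label distribution and difficulty, identical F1 values from different datasets are not directly comparable, which is the review's point. A small worked example with invented counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g., a report classifier with 170 true positives, 20 false positives
# and 30 false negatives: precision ~0.895, recall 0.85.
print(round(f1_score(tp=170, fp=20, fn=30), 3))  # 0.872
```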

    Birth defects surveillance : a manual for programme managers

    Second edition. Congenital anomalies, also known as birth defects, are structural or functional abnormalities, including metabolic disorders, which are present at birth. Congenital anomalies are a diverse group of disorders of prenatal origin, which can be caused by single-gene defects, chromosomal disorders, multifactorial inheritance, environmental teratogens or micronutrient malnutrition.
    This manual is intended to serve as a tool for the development, implementation and ongoing improvement of a congenital anomalies surveillance programme, particularly for countries with limited resources. The focus of the manual is on population-based and hospital-based surveillance programmes. Some countries might not find it feasible to begin with the development of a population-based programme; therefore, the manual covers the methodology needed for the development of both population-based and hospital-based surveillance programmes. Further, although many births in predominantly low- and middle-income countries (LMICs) occur outside of hospitals, some countries with limited resources might choose to start with a hospital-based surveillance programme and expand it later into one that is population-based. Any country wishing to expand its current hospital-based programme into a population-based programme, or to begin the initial development of a population-based system, should find this manual helpful in reaching its goal.
    This manual provides selected examples of congenital anomalies (see Appendix A). These anomalies are severe enough that many would probably be captured during the first few days following birth. While a number of the anomalies listed are external and easily identified by physical exam, others are internal and typically require more advanced diagnostic evaluations such as imaging. However, because of their severity and frequency, all these selected conditions have significant public health impact, and for some there is a potential for primary prevention. Nevertheless, these are just suggestions; countries might choose to monitor a subset of these conditions or add other congenital anomalies to meet their needs.
    WHO thanks the United States Centers for Disease Control and Prevention, especially the National Center on Birth Defects and Developmental Disabilities, for providing financial support for the publication of this manual as part of cooperative agreement 5 E11 DP002196, Global prevention of noncommunicable diseases and promotion of health. Supported in part by a contract from the Task Force for Global Health to the International Center on Birth Defects (ICBD) of the ICBDSR. We gratefully acknowledge and thank the United States Agency for International Development for providing financial support for this work.
    Suggested citation: Birth defects surveillance: a manual for programme managers, second edition. Geneva: World Health Organization; 2020. Licence: CC BY-NC-SA 3.0 IGO. ISBN 9789240015395 (electronic version); ISBN 9789240015401 (print version).
    Contents: Acknowledgements -- Financial support -- Abbreviations -- Objectives of the manual -- 1. Surveillance of congenital anomalies -- 2. Planning activities and tools -- 3. Approaches to surveillance -- 4. Diagnosing congenital anomalies -- 5. Congenital infectious syndromes -- 6. Coding and diagnosis -- 7. Primer on data quality in birth defects surveillance.

    Artificial Intelligence-Based Methods for Fusion of Electronic Health Records and Imaging Data

    Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these multimodal data sources contributes to a better understanding of human health and supports optimal personalized healthcare. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of these different data modalities to provide multimodal insights. To this end, in this scoping review we synthesize and analyze the literature that uses AI techniques to fuse multimodal medical data for different clinical applications. More specifically, we focus on studies that fused EHR with medical imaging data to develop AI methods for clinical applications. We present a comprehensive analysis of the fusion strategies used, the diseases and clinical outcomes for which multimodal fusion was applied, the ML algorithms used to perform multimodal fusion for each clinical application, and the available multimodal medical datasets. We followed the PRISMA-ScR guidelines and searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies, extracting data from 34 studies that fulfilled the inclusion criteria. In our analysis, a typical workflow was observed: feeding raw data, fusing the different data modalities by applying conventional ML or deep learning (DL) algorithms, and finally evaluating the multimodal fusion through clinical outcome predictions. Early fusion was the most used technique (22 out of 34 studies), and multimodal fusion models outperformed traditional single-modality models for the same task. Disease diagnosis and prediction were the most common clinical outcomes, reported in 20 and 10 studies, respectively.
    Comment: Accepted in Nature Scientific Reports. 20 pages
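
    A minimal sketch of early fusion, the strategy the review found most common: features from each modality are concatenated into a single vector before one model is trained. The random feature matrices and the scikit-learn classifier are stand-ins for illustration, not the method of any specific study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_patients = 100

# Stand-ins for real modality features.
ehr_features = rng.normal(size=(n_patients, 10))    # e.g., labs, vitals, age
image_features = rng.normal(size=(n_patients, 64))  # e.g., CNN embedding of a scan
labels = rng.integers(0, 2, size=n_patients)        # e.g., diagnosis yes/no

# Early fusion: concatenate modalities, then fit a single classifier.
fused = np.concatenate([ehr_features, image_features], axis=1)
model = LogisticRegression(max_iter=1000).fit(fused, labels)

print(model.predict_proba(fused[:3]))  # per-patient outcome probabilities
```

    Late fusion, by contrast, would train one model per modality and combine their predictions, trading interaction effects between modalities for robustness to missing ones.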

    Data Quality and Completeness in a Web Stroke Registry as the Basis for Data and Process Mining

    Electronic health records often contain missing values and errors, jeopardizing their effective exploitation. We illustrate the re-engineering process needed to improve the data quality of a web-based, multicentric stroke registry by proposing knowledge-based data entry support that helps users interpret data items homogeneously, and that prevents and detects treacherous errors. The re-engineering also improves stroke unit coordination and networking through ancillary tools for monitoring patient enrollments, calculating stroke care indicators, analyzing compliance with clinical practice guidelines, and entering stroke unit profiles. Finally, we report on some of the resulting analyses, such as the calculation of indicators for assessing the quality of stroke care, data mining for knowledge discovery, and process mining for comparing different processes of care delivery. The most important results of the re-engineering are an improved user experience with data entry and substantially better data quality, which guarantees the reliability of data analyses.
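
    A minimal sketch of the kind of knowledge-based data entry checks the abstract describes: range and consistency rules evaluated at entry time. The field names, the sample record and the choice of rules are assumptions for illustration (the NIHSS stroke severity score is genuinely defined on 0-42).

```python
from datetime import datetime

def validate_stroke_record(rec: dict) -> list[str]:
    """Return a list of rule violations for one registry record."""
    errors = []
    # Range rule: NIHSS score must lie in 0-42.
    if not 0 <= rec.get("nihss", 0) <= 42:
        errors.append("NIHSS out of range 0-42")
    # Consistency rule: symptom onset cannot follow hospital admission.
    onset = datetime.fromisoformat(rec["onset"])
    admission = datetime.fromisoformat(rec["admission"])
    if onset > admission:
        errors.append("onset later than admission")
    return errors

record = {"nihss": 55, "onset": "2023-05-01T14:00", "admission": "2023-05-01T12:30"}
print(validate_stroke_record(record))
# ['NIHSS out of range 0-42', 'onset later than admission']
```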

    Can point-of-care ultrasound improve the current community diagnostic pathway for acute dyspnoea and suspected heart failure in older people? A feasibility study of comparative accuracy and implementation

    Diagnosing heart failure (HF) is challenging in elderly, acutely dyspnoeic community patients with high frailty and multiple comorbidities, and currently available diagnostic tools do not permit a definitive diagnosis of HF at the point of care. There is growing evidence that novices can learn focused point-of-care ultrasound (POCUS) to increase the diagnostic accuracy of clinical examinations and improve immediate clinical management. Despite the abundance of data supporting POCUS by different users in different settings, there is a notable absence of attention to the contextual complexities that influence implementation. This limits generalisability and leaves uncertainty regarding how and where POCUS should be placed to maximise clinical and cost-effectiveness. This thesis examines whether nurse-led POCUS serves as a useful triage tool when added to the clinical examination of elderly patients with acute dyspnoea at risk of HF, and details a comprehensive approach to intervention development. An explanatory-sequential mixed-methods approach provided preliminary data regarding the feasibility, acceptability, accuracy, and clinical impact of POCUS in the proposed context. It concludes that, following bespoke training, community nurses can accurately and reliably detect left ventricular systolic dysfunction and signs of pulmonary congestion using POCUS in elderly, acutely dyspnoeic community patients with suspected HF. Adding POCUS improved the diagnostic accuracy of the assessment, reduced time-to-diagnosis, and could improve the triaging of echocardiography referrals without missing significant dysfunction. Despite the contextual challenges of the home setting, nurse-led POCUS was feasible in most patients and welcomed by nurses. Training and support were perceived as key determinants of implementation success, while training interruption was seen as a major barrier. These preliminary findings suggest that nurse-led POCUS as a triage tool has the potential to improve the current diagnostic pathway for elderly patients with suspected HF; they support further larger-scale research and inform refinements to research methods. POCUS has potential for more widespread clinical use, but exploration of contextual influences is pivotal in ensuring effective implementation in new contexts.
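
    As background for the comparative-accuracy question, the sketch below shows the standard 2x2 arithmetic used when index-test findings (here, nurse POCUS) are compared against a reference standard such as specialist echocardiography. The counts are invented and do not reproduce the thesis results.

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and overall accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # HF cases correctly flagged
        "specificity": tn / (tn + fp),   # non-HF correctly ruled out
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: 42 true positives, 6 false positives,
# 5 false negatives, 47 true negatives.
print(diagnostic_accuracy(tp=42, fp=6, fn=5, tn=47))
# sensitivity ~0.894, specificity ~0.887, accuracy 0.89
```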