
    A reporting and analysis framework for structured evaluation of COVID-19 clinical and imaging data

    The COVID-19 pandemic has worldwide individual and socioeconomic consequences. Chest computed tomography has been found to support diagnostics and disease monitoring. A standardized approach to generate, collect, analyze, and share clinical and imaging information at the highest possible quality is urgently needed. We developed systematic, computer-assisted, and context-guided electronic data capture on the FDA-approved mint Lesion™ software platform to enable cloud-based data collection and real-time analysis. The acquisition and annotation include radiological findings and radiomics performed directly on primary imaging data, together with information from the patient history and clinical data. As proof of concept, anonymized data of 283 patients with either suspected or confirmed SARS-CoV-2 infection from eight European medical centers were aggregated in data analysis dashboards. Aggregated data were compared to key findings of landmark research literature. This concept has been chosen for use in the national COVID-19 response of the radiological departments of all university hospitals in Germany.

    A lung cancer detection approach based on shape index and curvedness superpixel candidate selection

    Advisor: Lucas Ferrari de Oliveira. Dissertation (Master's), Universidade Federal do Paraná, Setor de Tecnologia, Programa de Pós-Graduação em Engenharia Elétrica. Defense: Curitiba, 29/08/2016. Includes references: f. 72-76. Area of concentration: Electronic systems.
Abstract: Cancer is one of the leading causes of death worldwide, and lung cancer is the most common type (excluding non-melanoma skin cancer). Its symptoms mostly appear in advanced stages, which makes treatment difficult. Computed tomography (CT) is used for diagnosis; a CT exam is composed of many slices that map a 3D region of interest. Although it provides great detail, analyzing the many slices is exhausting, which may negatively influence the specialist's diagnosis. The objective of this work is the development of lung segmentation and nodule detection methods for chest CT images. The images are segmented to separate the lung from other structures, and nodule detection based on superpixel methods is then applied. The Axes' Labeling technique preserved nodules at a mean rate of 93.53%, while the Monotone Chain Convex Hull method performed better, at 97.78%. For nodule detection, the Felzenszwalb and SLIC methods are employed to group nodule regions into superpixels. A nodule candidate selection based on shape index and curvedness is applied to reduce the number of superpixels, and the remaining candidates are classified with a Random Forest. The LIDC database was divided into two subsets: a development set composed of the CT scans of patients 0001 to 0600, and an untouched validation set composed of patients 0601 to 1012. On the validation set, the Felzenszwalb method achieved a sensitivity of 60.61% with 7.2 FP/scan. Keywords: Lung cancer. Nodule detection. Superpixel. Shape index.
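The shape-index/curvedness candidate filter described above can be sketched as follows. This is a minimal, hypothetical illustration assuming principal curvatures (k1 ≥ k2) have already been estimated for each superpixel; the thresholds are invented for illustration, not the dissertation's values.

```python
import math

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1]; values near +1 indicate a
    cap-like (nodule-like) surface. Assumes k1 >= k2."""
    if k1 == k2:  # umbilic point: the formula's limit is +/-1 (0 when flat)
        return 0.0 if k1 == 0 else math.copysign(1.0, k1)
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Overall curvature magnitude; flat regions give values near 0."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)

def select_candidates(superpixels, si_min=0.5, cv_min=0.05):
    """Keep only superpixels whose surface is cap/ridge-like and
    sufficiently curved, reducing the set passed to the classifier."""
    return [sp for sp in superpixels
            if shape_index(sp["k1"], sp["k2"]) >= si_min
            and curvedness(sp["k1"], sp["k2"]) >= cv_min]
```

In the dissertation the surviving candidates are then classified with a Random Forest; that stage is omitted here.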

    Deep Learning in Chest Radiography: From Report Labeling to Image Classification

    Chest X-ray (CXR) is the most common examination performed by a radiologist. Through CXR, radiologists must correctly and immediately diagnose a patient's thorax to avoid the progression of life-threatening diseases. Not only are certified radiologists hard to find, but stress, fatigue, and lack of experience also affect the quality of an examination. As a result, a technique to aid radiologists in reading CXRs, and a tool to help bridge the gap for communities without adequate access to radiological services, would yield a huge advantage for patients and patient care. This thesis considers one essential task, CXR image classification, with Deep Learning (DL) technologies from the following three aspects: understanding the intersection of CXR interpretation and DL; extracting multiple image labels from radiology reports to facilitate the training of DL classifiers; and developing CXR classifiers using DL. First, we explain the core concepts and categorize the existing data and literature for researchers entering this field, for ease of reference. Using CXRs and DL for medical image diagnosis is a relatively recent field of study because large, publicly available CXR datasets have not been around for very long. Second, we contribute to labeling large datasets with multi-label image annotations extracted from CXR reports. We describe the development of a DL-based report labeler named CXRlabeler, focusing on inductive sequential transfer learning. Lastly, we explain the design of three novel Convolutional Neural Network (CNN) classifiers, i.e., MultiViewModel, Xclassifier, and CovidXrayNet, for binary, multi-label, and multi-class image classification, respectively. This dissertation showcases significant progress in the field of automated CXR interpretation using DL; all source code used is publicly available. It provides methods and insights that can be applied to other medical image interpretation tasks.
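As a contrast to the learned CXRlabeler described above, the report-to-label task itself can be shown with a deliberately naive keyword matcher; the label names and keyword lists below are illustrative assumptions, not the thesis's vocabulary.

```python
# Toy keyword-based report labeler; a simplistic stand-in for a learned one.
LABEL_KEYWORDS = {
    "cardiomegaly": ["cardiomegaly", "enlarged heart"],
    "effusion": ["effusion"],
    "pneumothorax": ["pneumothorax"],
}

def label_report(report: str) -> dict:
    """Map a free-text radiology report to binary multi-label annotations."""
    text = report.lower()
    return {label: int(any(kw in text for kw in kws))
            for label, kws in LABEL_KEYWORDS.items()}
```

A matcher like this fails on negation ("no pneumothorax" still triggers the pneumothorax label), which is exactly why a labeler trained on report text, as in the thesis, is preferable.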

    Can GPT-4V(ision) Serve Medical Applications? Case Studies on GPT-4V for Multimodal Medical Diagnosis

    Driven by large foundation models, the development of artificial intelligence has witnessed tremendous progress lately, leading to a surge of general interest from the public. In this study, we aim to assess the performance of OpenAI's newest model, GPT-4V(ision), specifically in the realm of multimodal medical diagnosis. Our evaluation encompasses 17 human body systems, including Central Nervous System, Head and Neck, Cardiac, Chest, Hematology, Hepatobiliary, Gastrointestinal, Urogenital, Gynecology, Obstetrics, Breast, Musculoskeletal, Spine, Vascular, Oncology, Trauma, and Pediatrics, with images taken from 8 modalities used in daily clinical routine, i.e., X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Digital Subtraction Angiography (DSA), Mammography, Ultrasound, and Pathology. We probe GPT-4V's ability on multiple clinical tasks, with or without patient history provided, including imaging modality and anatomy recognition, disease diagnosis, report generation, and disease localisation. Our observations show that, while GPT-4V demonstrates proficiency in distinguishing between medical image modalities and anatomy, it faces significant challenges in disease diagnosis and in generating comprehensive reports. These findings underscore that, while large multimodal models have made significant advancements in computer vision and natural language processing, they remain far from effectively supporting real-world medical applications and clinical decision-making. All images used in this report can be found at https://github.com/chaoyi-wu/GPT-4V_Medical_Evaluation.
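Tabulating such a case-study evaluation comes down to per-task bookkeeping; the sketch below is an illustration of that aggregation, not the authors' evaluation harness.

```python
from collections import defaultdict

def accuracy_by_task(cases):
    """cases: iterable of (task, correct: bool) pairs from manual review.
    Returns per-task accuracy as a fraction in [0, 1]."""
    totals = defaultdict(lambda: [0, 0])  # task -> [n_correct, n_total]
    for task, correct in cases:
        totals[task][0] += int(correct)
        totals[task][1] += 1
    return {task: c / n for task, (c, n) in totals.items()}
```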

    Towards Interpretable Machine Learning in Medical Image Analysis

    Over the past few years, ML has demonstrated human-expert-level performance in many medical image analysis tasks. However, due to the black-box nature of classic deep ML models, translating these models from the bench to the bedside to support the corresponding stakeholders in the desired tasks brings substantial challenges. One solution is interpretable ML, which attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, interpretability is not a property of the ML model but an affordance, i.e., a relationship between algorithm and user. Thus, prototyping and user evaluations are critical to attaining solutions that afford interpretability. Following human-centered design principles in highly specialized, high-stakes domains such as medical image analysis is challenging due to limited access to end users. This dilemma is further exacerbated by the large knowledge imbalance between ML designers and end users. To overcome this predicament, we first define four levels of clinical evidence that can be used to justify interpretability when designing ML models. We argue that designing ML models with two of these levels of clinical evidence, 1) commonly used clinical evidence, such as clinical guidelines, and 2) clinical evidence developed iteratively with end users, is more likely to yield models that are indeed interpretable to end users. In this dissertation, we first address how to design interpretable ML in medical image analysis that affords interpretability with these two different levels of clinical evidence. We further strongly recommend formative user research as the first step of interpretable model design, to understand user needs and domain requirements. We also indicate the importance of empirical user evaluation in supporting transparent ML design choices and facilitating the adoption of human-centered design principles.
All these aspects increase the likelihood that the algorithms afford interpretability and enable stakeholders to capitalize on the benefits of interpretable ML. In detail, we first propose neural-symbolic reasoning to implement public clinical evidence in the designed models for various routinely performed clinical tasks. We utilize the routinely applied clinical taxonomy for abnormality classification in chest X-rays. We also establish a spleen injury grading system by strictly following clinical guidelines for symbolic reasoning over the detected and segmented salient clinical features. Then, we propose an entire interpretable pipeline for UM prognostication with cytopathology images. We first performed formative user research and found that pathologists believe cell composition is informative for UM prognostication; we therefore built a model to analyze cell composition directly. Finally, we conducted a comprehensive user study to assess the human factors of human-machine teaming with the designed model, e.g., whether the proposed model indeed affords interpretability to pathologists. The model designed through this human-centered process proved to be truly interpretable to pathologists for UM prognostication. All in all, this dissertation introduces a comprehensive human-centered design approach for interpretable ML solutions in medical image analysis that afford interpretability to end users.
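The guideline-driven symbolic-reasoning step can be illustrated with explicit rules applied to features a detection/segmentation model might output. The feature names and thresholds below are invented for illustration; they are not the dissertation's grading system and not clinical guidance.

```python
def grade_injury(laceration_depth_cm: float, hematoma_pct: float) -> int:
    """Map detected imaging features to an ordinal grade via explicit,
    human-readable rules (illustrative thresholds only)."""
    if laceration_depth_cm > 3.0 or hematoma_pct > 50.0:
        return 3
    if laceration_depth_cm > 1.0 or hematoma_pct > 10.0:
        return 2
    return 1
```

The point of such a design is that every predicted grade can be traced back to a named rule over detected features, which is what affords interpretability to clinicians.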

    Generating semantically enriched diagnostics for radiological images using machine learning

    Development of Computer Aided Diagnostic (CAD) tools to aid radiologists in pathology detection and decision making relies considerably on manually annotated images. With the advancement of deep learning techniques for CAD development, these expert annotations no longer need to be hand-crafted; however, deep learning algorithms require large amounts of data in order to generalise well. One way to access large volumes of expert-annotated data is through radiological exams consisting of images and reports. Using past radiological exams obtained from hospital archiving systems has many advantages: they are expert annotations available in large quantities, covering a population-representative variety of pathologies, and they provide additional context for pathology diagnoses, such as anatomical location and severity. Learning to auto-generate such reports from images presents many challenges, such as the difficulty of representing and generating long, unstructured textual information, accounting for spelling errors, repetition, and redundancy, and the inconsistency across different annotators. In this thesis, the problem of learning to automate disease detection from radiological exams is approached from three directions. Firstly, a report generation model is developed such that it is conditioned on radiological image features. Secondly, a number of approaches aimed at extracting diagnostic information from free-text reports are explored. Finally, an alternative to current state-of-the-art approaches to image latent space learning is developed that can be applied to accelerated image acquisition.
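The idea of conditioning report text on image-derived findings can be caricatured with a template lookup standing in for the learned decoder; the finding names and sentence templates below are invented examples.

```python
# Hypothetical per-finding templates; a learned decoder would replace this.
TEMPLATES = {
    "effusion": "There is a small pleural effusion on the {side}.",
    "normal": "The lungs are clear. No acute abnormality.",
}

def generate_report(findings, side="left"):
    """Concatenate one template sentence per detected finding."""
    if not findings:
        findings = ["normal"]
    return " ".join(TEMPLATES[f].format(side=side) for f in findings)
```

A trained model replaces the lookup with a decoder conditioned on image features, but the interface, findings in and report text out, is the same.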

    Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases

    Cardiothoracic and pulmonary diseases are a significant cause of mortality and morbidity worldwide. The COVID-19 pandemic has highlighted the lack of access to clinical care, the overburdened medical system, and the potential of artificial intelligence (AI) in improving medicine. A variety of diseases affect the cardiopulmonary system, including lung cancers, heart disease, and tuberculosis (TB), in addition to COVID-19-related diseases. Screening, diagnosis, and management of cardiopulmonary diseases have become difficult owing to the limited availability of diagnostic tools and experts, particularly in resource-limited regions. Early screening, accurate diagnosis, and staging of these diseases could play a crucial role in treatment and care, and potentially aid in reducing mortality. Radiographic imaging methods such as computed tomography (CT), chest X-rays (CXRs), and echo ultrasound (US) are widely used in screening and diagnosis. Research on image-based AI and machine learning (ML) methods can help in rapid assessment, serve as a surrogate for expert assessment, and reduce variability in human performance. In this Special Issue, "Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases", we have highlighted exemplary primary research studies and literature reviews focusing on novel AI/ML methods and their application in image-based screening, diagnosis, and clinical management of cardiopulmonary diseases. We hope that these articles will help establish the advancements in AI.