24 research outputs found

    Machine Learning in Medical Image Analysis

    Get PDF
    Machine learning is playing a pivotal role in medical image analysis. Many machine learning algorithms have been applied in medical imaging to solve classification, detection, and segmentation problems, and with the wide application of deep learning approaches in particular, the performance of medical image analysis has improved significantly. In this thesis, we investigate machine learning methods for two key challenges in medical image analysis: segmentation of medical images, and learning with weak supervision in the context of medical imaging. The first main contribution of the thesis is a series of novel approaches to image segmentation. First, we propose a framework based on multi-scale image patches and random forests to segment small vessel disease (SVD) lesions on computed tomography (CT) images. The framework was validated in terms of spatial similarity, estimated lesion volumes, and visual score ratings, and was compared with human experts; the results showed that it performs as well as human experts. Second, we propose a generic convolutional neural network (CNN) architecture, the DRINet, for medical image segmentation. The DRINet is robust across three different segmentation tasks: multi-class cerebrospinal fluid (CSF) segmentation on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class tumour segmentation on brain magnetic resonance (MR) images. Finally, we propose a CNN-based framework to segment acute ischemic lesions on diffusion-weighted (DW) MR images, where the lesions are highly variable in position, shape, and size; promising results were achieved on a large clinical dataset. The second main contribution of the thesis is two novel strategies for learning with weak supervision. First, we propose a strategy called context restoration to make use of images without annotations. Context restoration is a proxy learning task in which a CNN learns semantic features from images without using annotations; it was validated on classification, localization, and segmentation problems and was superior to existing strategies. Second, we propose a patch-based framework using multi-instance learning to distinguish normal from abnormal SVD on CT images, where only coarse-grained labels are available. Our framework was observed to work better than classic methods and clinical practice.
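    The context restoration strategy mentioned above lends itself to a short illustration. The following is a minimal sketch, not the thesis's implementation: pairs of patches are repeatedly swapped within an unlabelled image and a small CNN is trained to restore the original, so its encoder learns semantic features without annotations. The patch size, swap count, toy encoder-decoder, and random stand-in images are illustrative assumptions (Python/PyTorch).

        import torch
        import torch.nn as nn

        def corrupt_context(img, num_swaps=10, patch=8):
            """Randomly swap pairs of patches (possibly overlapping); illustrative corruption."""
            x = img.clone()
            _, h, w = x.shape
            for _ in range(num_swaps):
                y1, x1 = torch.randint(0, h - patch, (1,)).item(), torch.randint(0, w - patch, (1,)).item()
                y2, x2 = torch.randint(0, h - patch, (1,)).item(), torch.randint(0, w - patch, (1,)).item()
                p1 = x[:, y1:y1 + patch, x1:x1 + patch].clone()
                x[:, y1:y1 + patch, x1:x1 + patch] = x[:, y2:y2 + patch, x2:x2 + patch]
                x[:, y2:y2 + patch, x2:x2 + patch] = p1
            return x

        # Toy encoder-decoder; in practice the trained encoder would be reused for
        # downstream classification, localization, or segmentation tasks.
        encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        decoder = nn.Conv2d(32, 1, 3, padding=1)
        model = nn.Sequential(encoder, decoder)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        images = torch.rand(16, 1, 64, 64)                     # stand-in for unlabelled scans
        corrupted = torch.stack([corrupt_context(im) for im in images])
        loss = loss_fn(model(corrupted), images)               # restore the original context
        loss.backward()
        opt.step()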

    Iterative annotation to ease neural network training: Specialized machine learning in medical image analysis

    Get PDF
    Neural networks promise to bring robust, quantitative analysis to medical fields, but adoption is limited by the technicalities of training these networks. To address this translation gap between medical researchers and neural networks in the field of pathology, we have created an intuitive interface which utilizes the commonly used whole slide image (WSI) viewer, Aperio ImageScope (Leica Biosystems Imaging, Inc.), for the annotation and display of neural network predictions on WSIs. Leveraging this, we propose the use of a human-in-the-loop strategy to reduce the burden of WSI annotation. We track network performance improvements as a function of iteration and quantify the use of this pipeline for the segmentation of renal histologic findings on WSIs. More specifically, we present network performance when applied to segmentation of renal micro compartments, and demonstrate multi-class segmentation in human and mouse renal tissue slides. Finally, to show the adaptability of this technique to other medical imaging fields, we demonstrate its ability to iteratively segment human prostate glands from radiology imaging data.
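    To illustrate the kind of exchange the abstract describes between a network and Aperio ImageScope, the sketch below converts a binary prediction mask into a minimal version of ImageScope's XML annotation format (regions as lists of Vertex X/Y coordinates). This is an assumption-laden illustration, not the authors' code: real annotation files carry more attributes, and the mask, downsample factor, and colour value here are placeholders.

        import xml.etree.ElementTree as ET
        import cv2
        import numpy as np

        def mask_to_imagescope_xml(mask, out_path, downsample=1.0):
            """Write each contour of a binary mask as a Region in a minimal Aperio-style XML file."""
            contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            root = ET.Element("Annotations")
            ann = ET.SubElement(root, "Annotation", Id="1", LineColor="65280")
            regions = ET.SubElement(ann, "Regions")
            for i, cnt in enumerate(contours, start=1):
                region = ET.SubElement(regions, "Region", Id=str(i), Type="0")
                vertices = ET.SubElement(region, "Vertices")
                for x, y in cnt.squeeze(1):                      # contour points in mask coordinates
                    ET.SubElement(vertices, "Vertex",
                                  X=str(float(x) * downsample),  # scale back to WSI base resolution
                                  Y=str(float(y) * downsample))
            ET.ElementTree(root).write(out_path)

        # Example: a toy mask with one rectangular "prediction"
        mask = np.zeros((512, 512), dtype=np.uint8)
        mask[100:200, 150:300] = 1
        mask_to_imagescope_xml(mask, "prediction.xml", downsample=16.0)

    In a human-in-the-loop setup of this kind, the written annotations can be corrected in the viewer and fed back as training data for the next iteration.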

    Towards Interpretable Machine Learning in Medical Image Analysis

    Get PDF
    Over the past few years, ML has demonstrated human-expert-level performance in many medical image analysis tasks. However, due to the black-box nature of classic deep ML models, translating these models from the bench to the bedside to support the corresponding stakeholders brings substantial challenges. One solution is interpretable ML, which attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, interpretability is not a property of the ML model but an affordance, i.e., a relationship between algorithm and user. Thus, prototyping and user evaluations are critical to attaining solutions that afford interpretability. Following human-centered design principles in highly specialized, high-stakes domains such as medical image analysis is challenging due to limited access to end users, a dilemma further exacerbated by the large knowledge imbalance between ML designers and end users. To overcome this predicament, we first define four levels of clinical evidence that can be used to justify the interpretability of designed ML models. We argue that designing ML models with two of these levels, namely 1) commonly used clinical evidence, such as clinical guidelines, and 2) clinical evidence developed iteratively with end users, is more likely to yield models that are indeed interpretable to end users. In this dissertation, we first address how to design interpretable ML in medical image analysis that affords interpretability with these two levels of clinical evidence. We further recommend formative user research as the first step of interpretable model design, to understand user needs and domain requirements, and we highlight the importance of empirical user evaluation to support transparent ML design choices and to facilitate the adoption of human-centered design principles. Together, these aspects increase the likelihood that the algorithms afford interpretability and enable stakeholders to capitalize on the benefits of interpretable ML. In detail, we first propose neural symbolic reasoning to embed public clinical evidence in the designed models for various routinely performed clinical tasks. We utilize the routinely applied clinical taxonomy for abnormality classification in chest x-rays, and we establish a spleen injury grading system by strictly following the clinical guidelines for symbolic reasoning over the detected and segmented salient clinical features. We then propose an entire interpretable pipeline for UM prognostication with cytopathology images: formative user research found that pathologists believe cell composition is informative for UM prognostication, so we built a model that analyzes cell composition directly. Finally, we conducted a comprehensive user study to assess the human factors of human-machine teaming with the designed model, e.g., whether the proposed model indeed affords interpretability to pathologists; the model designed through this human-centered process was shown to be interpretable to pathologists for UM prognostication. All in all, this dissertation introduces a comprehensive human-centered design approach for interpretable ML solutions in medical image analysis that afford interpretability to end users.
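    To make the neural-symbolic pattern concrete, the sketch below pairs quantities assumed to come from upstream detection/segmentation models with a transparent rule layer that maps them to a grade. The features, thresholds, and grades are entirely hypothetical placeholders; they are not the dissertation's spleen injury grading criteria and are not clinical guidance.

        from dataclasses import dataclass

        @dataclass
        class SpleenFindings:
            """Quantities assumed to come from upstream detection/segmentation models."""
            laceration_depth_cm: float
            hematoma_fraction: float      # fraction of splenic surface involved
            active_bleeding: bool

        def grade_injury(f: SpleenFindings) -> int:
            """Hypothetical rule layer: thresholds are placeholders, not clinical guidance."""
            if f.active_bleeding:
                return 4
            if f.laceration_depth_cm > 3.0 or f.hematoma_fraction > 0.5:
                return 3
            if f.laceration_depth_cm > 1.0 or f.hematoma_fraction > 0.1:
                return 2
            return 1

        print(grade_injury(SpleenFindings(laceration_depth_cm=2.0,
                                          hematoma_fraction=0.05,
                                          active_bleeding=False)))   # -> 2

    The appeal of such a layer is that every grade can be traced back to named findings and explicit rules, which is what makes the reasoning inspectable by clinicians.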

    AutoML Systems For Medical Imaging

    Full text link
    The integration of machine learning in medical image analysis can greatly enhance the quality of healthcare provided by physicians. The combination of human expertise and computerized systems can result in improved diagnostic accuracy. An automated machine learning approach simplifies the creation of custom image recognition models by utilizing neural architecture search and transfer learning techniques. Medical imaging techniques are used to non-invasively create images of internal organs and body parts for diagnostic and procedural purposes. This article aims to highlight the potential applications, strategies, and techniques of AutoML in medical imaging through theoretical and empirical evidence.
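    As a minimal sketch of the kind of custom image recognition model such systems automate, the code below applies transfer learning with a pretrained backbone and a new task head; an AutoML system would additionally search over architectures and hyperparameters. The backbone choice, class count, and random stand-in data are assumptions, not part of the article (Python/PyTorch, torchvision >= 0.13 weights API).

        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 3                                   # assumed number of diagnostic classes

        # Transfer learning: pretrained backbone, new task-specific head.
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        for p in model.parameters():
            p.requires_grad = False                       # freeze the pretrained features
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One illustrative training step on random stand-in data.
        images = torch.rand(8, 3, 224, 224)
        labels = torch.randint(0, NUM_CLASSES, (8,))
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()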

    Computer aided diagnosis of cerebrovascular disease based on DSA image

    Get PDF
    In recent years, the incidence of cerebrovascular diseases in China has shown a significant upward trend, and these diseases have become a common threat to people's lives. Digital Subtraction Angiography (DSA) is the gold standard for the diagnosis of clinical cerebrovascular disease and the most direct way to examine brain lesions. At present, clinical research on DSA images faces two problems. First, a DSA acquisition is a real-time sequence with numerous frames, many of which carry little useful information, so human interpretation and annotation are time-consuming and labor-intensive. Second, the blood vessel structure in DSA images is so complicated that interpreting it demands considerable clinical skill. Automatic and effective computer-aided diagnosis algorithms for cerebrovascular diseases in DSA sequence images are currently lacking. Based on the above issues, the main work of this paper is as follows. 1. A multi-target detection algorithm based on Faster-RCNN is designed and applied to the analysis of brain DSA images. The algorithm divides DSA sequences into arterial, capillary, pre-venous, and sinus phases by identifying the main blood vessel structures in each frame, and on this basis we analyze the temporal relationships between the phases. 2. On the basis of DSA phase detection, a keyframe location algorithm based on single blood vessel structure detection is designed for moyamoya disease. First, the target detection model is applied to locate the internal carotid artery and the circle of Willis; then, five frames are extracted from the arterial phase as keyframes; finally, the ROI of the nidus is determined according to the position of the internal carotid artery. 3. A diagnostic method for cerebral arteriovenous malformation (AVM) is designed, which combines temporal features and radiomics features. Building on DSA phase detection, we propose a deep learning network to extract vascular temporal features from the DSA video; these temporal features are then combined with radiomics features of the static keyframes to establish an AVM diagnosis model. This method assists diagnosis without requiring human intervention and reduces the workload of clinicians. The model combining temporal and radiomics features is also applied to AVM staging, and the experimental results show that the classifier trained on the fused features has better diagnostic performance than models trained on either temporal or radiomics features alone. Based on these three parts, this paper establishes a cerebrovascular disease analysis framework based on radiomics and deep learning, with corresponding solutions for automatic DSA image reading, rapid diagnosis of moyamoya disease, and precise diagnosis of AVM. The proposed method has practical significance for assisting the diagnosis of cerebrovascular disease and reducing the burden on medical staff.
    Keywords: Digital Subtraction Angiography (DSA), radiomics analysis, arteriovenous malformations, moyamoya, Faster-RCNN, temporal features, fusion features
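    To make the feature fusion step concrete, here is a minimal sketch of combining per-case temporal (deep) features with radiomics features before a classifier. The feature dimensions, the random forest classifier, and the random stand-in data are assumptions for illustration, not the paper's diagnosis model.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_cases = 120
        temporal_feats = rng.normal(size=(n_cases, 64))    # e.g. from a network over the DSA sequence
        radiomics_feats = rng.normal(size=(n_cases, 100))  # e.g. extracted from a static keyframe ROI
        labels = rng.integers(0, 2, size=n_cases)          # stand-in diagnostic labels

        # Feature-level fusion: simple concatenation before the classifier.
        fused = np.concatenate([temporal_feats, radiomics_feats], axis=1)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, fused, labels, cv=5, scoring="roc_auc")
        print("cross-validated AUC:", scores.mean())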

    A Bayesian Updating Scheme for Pandemics: Estimating the Infection Dynamics of COVID-19

    Get PDF
    Epidemic models play a key role in understanding and responding to the emerging COVID-19 pandemic. Widely used compartmental models are static and of limited use for evaluating intervention strategies to combat the pandemic. Applying data assimilation techniques, we propose a Bayesian updating approach for estimating epidemiological parameters from observable information in order to assess the impacts of different intervention strategies. We adopt a concise renewal model and propose new parameters that disentangle the reduction of the instantaneous reproduction number R_t into mitigation and suppression factors, quantifying intervention impacts at a finer granularity. A data assimilation framework is developed to estimate these parameters, including constructing an observation function and developing a Bayesian updating scheme. A statistical analysis framework is built to quantify the impacts of intervention strategies by monitoring the evolution of the estimated parameters. We reveal the intervention impacts in European countries and Wuhan, and the resurgence risk in the United States.
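    As a simplified illustration of the renewal-model idea (not the paper's full data assimilation scheme with mitigation and suppression factors), the sketch below performs a grid-based Bayesian update of R_t from daily incidence, assuming a discretised serial-interval distribution and toy case counts.

        import numpy as np
        from scipy.stats import poisson

        # Assumed discretised serial-interval weights w_s (sum to 1) and toy daily incidence.
        w = np.array([0.1, 0.3, 0.3, 0.2, 0.1])
        incidence = np.array([5, 8, 12, 20, 30, 45, 60, 80, 95, 110])

        r_grid = np.linspace(0.1, 6.0, 600)               # candidate values of R_t
        posterior = np.ones_like(r_grid) / r_grid.size    # flat prior

        for t in range(len(w), len(incidence)):
            # Renewal model: expected new cases on day t = R_t * sum_s w_s * I_{t-s}
            infection_pressure = np.dot(w, incidence[t - len(w):t][::-1])
            likelihood = poisson.pmf(incidence[t], r_grid * infection_pressure)
            posterior = posterior * likelihood            # yesterday's posterior as today's prior
            posterior /= posterior.sum()                  # (assumes a slowly varying R_t)
            print(f"day {t}: posterior mean R_t = {np.dot(r_grid, posterior):.2f}")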

    Assessing emphysema in CT scans of the lungs: Using machine learning, crowdsourcing and visual similarity

    Get PDF