
    The Effectiveness of Transfer Learning Systems on Medical Images

    Deep neural networks have revolutionized the performance of many machine learning tasks such as medical image classification and segmentation. Current deep learning (DL) algorithms, specifically convolutional neural networks, are increasingly becoming the methodological choice for most medical image analysis. However, training these deep neural networks requires high computational resources and very large amounts of labeled data, which are often expensive and laborious to obtain. Meanwhile, recent studies have shown the transfer learning (TL) paradigm to be an attractive choice, offering promising solutions to the shortage of labeled medical images. Accordingly, TL enables us to leverage knowledge learned from related data to solve a new problem. The objective of this dissertation is to examine the effectiveness of TL systems on medical images. First, a comprehensive systematic literature review was performed to provide an up-to-date status of TL systems on medical images; specifically, we proposed a novel conceptual framework to organize the review. Second, a novel DL network was pretrained on natural images and used to evaluate the effectiveness of TL on a very large medical image dataset, specifically chest X-ray images. Lastly, domain adaptation using an autoencoder was evaluated on the medical image dataset, and the results confirmed the effectiveness of TL through fine-tuning strategies. We make several contributions to TL systems in medical image analysis: Firstly, we present a novel survey of TL on medical images and propose a new conceptual framework to organize the findings. Secondly, we propose a novel DL architecture to improve learned representations of medical images while mitigating the problem of vanishing gradients. Additionally, we identify the optimal cut-off layer (OCL) that provides the best model performance and find that the higher layers in the proposed deep model give a better feature representation of our medical image task. Finally, we analyze the effect of domain adaptation by fine-tuning an autoencoder on our medical images and provide theoretical contributions on the application of the transductive TL approach. The contributions herein reveal several research gaps to motivate future research and contribute to the body of literature in this active research area of TL systems on medical image analysis.
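
    As a minimal illustration of the fine-tuning strategy with a cut-off layer described above, the sketch below freezes every block of a pretrained network up to a chosen layer and retrains the rest on the medical task. It assumes an off-the-shelf torchvision ResNet-18 and a two-class chest X-ray head; the dissertation's own architecture, layer names, and optimal cut-off layer are not reproduced here.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_finetune_model(cutoff: str = "layer3", num_classes: int = 2) -> nn.Module:
        """Freeze every block up to and including `cutoff`; fine-tune the rest."""
        model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

        frozen = True
        for name, child in model.named_children():
            if frozen:
                for p in child.parameters():
                    p.requires_grad = False
            if name == cutoff:  # blocks after the cut-off layer stay trainable
                frozen = False

        # Replace the ImageNet classification head with a task-specific one
        # (e.g. normal vs. abnormal chest X-ray).
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        return model

    model = build_finetune_model(cutoff="layer3")
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-4
    )
    ```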

    Defending Against Adversarial Attacks On Medical Imaging Ai Systems

    Although deep learning systems trained on medical images have shown state-of-the-art performance in many clinical prediction tasks, recent studies demonstrate that these systems can be fooled by carefully crafted adversarial images. This has raised concerns about the practical deployment of deep learning based medical image classification systems. Although an array of defense techniques have been developed and proven effective in computer vision, defending against adversarial attacks on medical images remains largely uncharted territory due to their unique challenges: crafted adversarial noise added to a highly standardized medical image can turn it into a hard sample for the model to predict, and label scarcity limits adversarial generalizability. To tackle these challenges, we propose two defense methods: an unsupervised learning approach to detect those crafted hard samples, and a robust medical imaging AI framework based on an additional Semi-Supervised Adversarial Training (SSAT) module to enhance overall system robustness, followed by a new measure for assessing a system's adversarial risk. We systematically demonstrate the advantages of our methods over existing adversarial defense techniques under diverse real-world settings of adversarial attacks using benchmark X-ray and OCT imaging data sets.
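
    The SSAT module itself is not reproduced here, but the generic adversarial-training step that such defenses build on can be sketched briefly: craft a perturbed copy of each batch and update the model on a mix of clean and adversarial samples. The FGSM attack, the perturbation budget eps, and the equal clean/adversarial weighting below are illustrative assumptions, not the paper's actual formulation.

    ```python
    import torch
    import torch.nn.functional as F

    def fgsm_adversarial_step(model, x, y, optimizer, eps=2.0 / 255):
        """One training step on a 50/50 mix of clean and FGSM-perturbed samples."""
        # Craft an adversarial copy of the batch with the fast gradient sign method.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_adv = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss_adv, x_adv)
        x_adv = (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

        # Update the model on both clean and adversarial samples so it keeps its
        # clean accuracy while becoming robust to the perturbed images.
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```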

    Generating semantically enriched diagnostics for radiological images using machine learning

    Development of Computer Aided Diagnostic (CAD) tools to aid radiologists in pathology detection and decision making relies considerably on manually annotated images. With the advancement of deep learning techniques for CAD development, these expert annotations no longer need to be hand-crafted; however, deep learning algorithms require large amounts of data in order to generalise well. One way to access large volumes of expert-annotated data is through radiological exams consisting of images and reports. Using past radiological exams obtained from hospital archiving systems has many advantages: they are expert annotations available in large quantities, covering a population-representative variety of pathologies, and they provide additional context to pathology diagnoses, such as anatomical location and severity. Learning to auto-generate such reports from images presents many challenges, such as the difficulty of representing and generating long, unstructured textual information, accounting for spelling errors and repetition or redundancy, and the inconsistency across different annotators. In this thesis, the problem of learning to automate disease detection from radiological exams is approached from three directions. Firstly, a report generation model is developed such that it is conditioned on radiological image features. Secondly, a number of approaches are explored aimed at extracting diagnostic information from free-text reports. Finally, an alternative approach to image latent space learning is developed that departs from the current state of the art and can be applied to accelerated image acquisition.
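
    For the first direction, conditioning a report-generation model on radiological image features commonly follows an encoder-decoder pattern; a minimal sketch is shown below, in which an image-feature vector initialises an LSTM decoder that emits the report token by token. The feature dimension, vocabulary size, and LSTM choice are illustrative assumptions, not the thesis's actual model.

    ```python
    import torch
    import torch.nn as nn

    class ImageConditionedDecoder(nn.Module):
        def __init__(self, feat_dim: int = 512, vocab: int = 5000, hidden: int = 512):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.init_h = nn.Linear(feat_dim, hidden)  # image feature -> initial hidden state
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, image_feat: torch.Tensor, report_tokens: torch.Tensor) -> torch.Tensor:
            # image_feat: (batch, feat_dim); report_tokens: (batch, seq_len)
            h0 = self.init_h(image_feat).unsqueeze(0)       # (1, batch, hidden)
            c0 = torch.zeros_like(h0)
            x = self.embed(report_tokens)                   # (batch, seq_len, hidden)
            states, _ = self.lstm(x, (h0, c0))
            return self.out(states)                         # logits over the next tokens

    logits = ImageConditionedDecoder()(torch.randn(2, 512), torch.randint(0, 5000, (2, 40)))
    ```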

    Paying per-label attention for multi-label extraction from radiology reports

    Funding: This work is part of the Industrial Centre for AI Research in digital Diagnostics (iCAIRD) which is funded by Innovate UK on behalf of UK Research and Innovation (UKRI) [project number: 104690]. Training medical image analysis models requires large amounts of expertly annotated data which is time-consuming and expensive to obtain. Images are often accompanied by free-text radiology reports, which are a rich source of information. In this paper, we tackle the automated extraction of structured labels from head CT reports for imaging of suspected stroke patients, using deep learning. Firstly, we propose a set of 31 labels which correspond to radiographic findings (e.g. hyperdensity) and clinical impressions (e.g. haemorrhage) related to neurological abnormalities. Secondly, inspired by previous work, we extend existing state-of-the-art neural network models with a label-dependent attention mechanism. Using this mechanism and simple synthetic data augmentation, we are able to robustly extract many labels with a single model, classified according to the radiologist's reporting (positive, uncertain, negative). This approach can be used in further research to effectively extract many labels from medical text.
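
    A label-dependent attention mechanism of the kind described above can be sketched as follows: each label owns a learnable query vector that attends over the token encodings of the report, yielding one pooled representation and one (positive/uncertain/negative) prediction per label. The hidden size, encoder, and other details below are illustrative assumptions rather than the paper's exact model.

    ```python
    import torch
    import torch.nn as nn

    class PerLabelAttention(nn.Module):
        def __init__(self, hidden: int = 256, num_labels: int = 31, num_classes: int = 3):
            super().__init__()
            self.label_queries = nn.Parameter(torch.randn(num_labels, hidden))
            self.classifier = nn.Linear(hidden, num_classes)

        def forward(self, token_states: torch.Tensor) -> torch.Tensor:
            # token_states: (batch, seq_len, hidden) from any text encoder (e.g. a BiLSTM).
            scores = torch.einsum("lh,bth->blt", self.label_queries, token_states)
            attn = scores.softmax(dim=-1)                    # (batch, labels, seq_len)
            pooled = torch.einsum("blt,bth->blh", attn, token_states)
            return self.classifier(pooled)                   # (batch, labels, classes)

    logits = PerLabelAttention()(torch.randn(4, 120, 256))   # -> shape (4, 31, 3)
    ```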

    Assessing emphysema in CT scans of the lungs: Using machine learning, crowdsourcing and visual similarity


    Adversarial Machine Learning For Advanced Medical Imaging Systems

    Although deep neural networks (DNNs) have achieved significant advancement in various challenging tasks of computer vision, they are also known to be vulnerable to so-called adversarial attacks. With only imperceptibly small perturbations added to a clean image, adversarial samples can drastically change a model's prediction, resulting in a significant drop in DNN performance. This phenomenon poses a serious threat to security-critical applications of DNNs, such as medical imaging, autonomous driving, and surveillance systems. In this dissertation, we present adversarial machine learning approaches for natural image classification and advanced medical imaging systems. We start by describing our advanced medical imaging systems to tackle the major challenges of on-device deployment: automation, uncertainty, and resource constraints. This is followed by novel unsupervised and semi-supervised robust training schemes to enhance the adversarial robustness of these medical imaging systems. These methods are designed to tackle the unique challenges of defending against adversarial attacks on medical imaging systems and are sufficiently flexible to generalize to various medical imaging modalities and problems. We continue by developing a novel training scheme to enhance the adversarial robustness of general DNN-based natural image classification models. Based on a unique insight into the predictive behavior of DNNs, namely that they tend to misclassify adversarial samples into the most probable false classes, we propose a new loss function as a drop-in replacement for the cross-entropy loss to improve DNN's adversarial robustness. Specifically, it enlarges the probability gaps between the true class and false classes and prevents them from being melted by small perturbations. Finally, we conclude the dissertation by summarizing original contributions and discussing our future work that leverages DNN interpretability constraints on adversarial training to tackle the central machine learning problem of the generalization gap.
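
    A minimal sketch of the idea behind such a loss: keep the cross-entropy term, but add a hinge-style penalty on every false class whose probability comes within a margin of the true class, so the gap cannot be closed by small perturbations. The margin value and the exact penalty form below are illustrative assumptions, not the dissertation's actual formulation.

    ```python
    import torch
    import torch.nn.functional as F

    def probability_gap_loss(logits: torch.Tensor, targets: torch.Tensor, xi: float = 0.25):
        """Cross-entropy plus a margin penalty on false classes that get too close."""
        probs = logits.softmax(dim=-1)
        p_true = probs.gather(1, targets.unsqueeze(1))                    # (batch, 1)
        # Penalise any false class whose probability is within `xi` of the true class.
        penalty = (probs - p_true + xi).clamp(min=0)
        false_mask = torch.ones_like(probs).scatter(1, targets.unsqueeze(1), 0.0)
        return F.cross_entropy(logits, targets) + (penalty * false_mask).sum(dim=1).mean()

    loss = probability_gap_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)))
    ```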

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.