
    Novel Deep Learning Models for Medical Imaging Analysis

    Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation focuses on developing deep learning models for medical imaging analysis across different modalities and tasks, including detection, segmentation and classification. Imaging modalities studied in the dissertation include digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET) and computed tomography (CT), for various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes a different modality as an additional feature source; both the original and the synthetic images are used for feature generation. The proposed architecture is validated in the application of breast cancer diagnosis and shown to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical image synthesis with a more advanced deep learning architecture. A new architecture named deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and transferring features across modalities. Its applicability is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation and classification, the third phase of the research develops a multi-task deep learning model. Specifically, a feature transfer enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task.
The application of FT-MTL-Net to breast cancer detection, segmentation and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting conversion from mild cognitive impairment (MCI) to AD. It is validated in the application of predicting MCI patients' conversion to AD with 3D MRI images.
Dissertation/Thesis, Doctoral Dissertation, Industrial Engineering, 201
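The pixel-preserving residual idea described for RIED-Net can be illustrated with a toy sketch: an encoder-decoder stage whose output is the input plus a learned residual, so pixel-level detail passes straight through the skip connection. This is a minimal NumPy illustration; the shapes, layer names, and the `conv_like` stand-in are assumptions, not the dissertation's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_like(x, w):
    """Stand-in for a convolutional layer: a channel-mixing linear map + ReLU."""
    return np.maximum(x @ w, 0.0)

def residual_block(x, w):
    """Core residual idea: the block learns only a correction, so
    pixel-level information in x survives through the skip connection."""
    return x + conv_like(x, w)

# 'Encode' then 'decode' a toy image of 4 pixels with 3 channels.
x = rng.normal(size=(4, 3))
w_enc, w_dec = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
y = residual_block(residual_block(x, w_enc), w_dec)
print(y.shape)  # spatial resolution is preserved: (4, 3)
```

Note that with zero weights the block reduces to the identity, which is exactly what makes residual synthesis stable: the network starts from a copy of the input modality and learns only the cross-modality difference.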

    Promising Deep Semantic Nuclei Segmentation Models for Multi-Institutional Histopathology Images of Different Organs

    Nuclei segmentation in whole-slide imaging (WSI) plays a crucial role in the field of computational pathology. It is a fundamental task for different applications, such as cancer cell type classification, cancer grading, and cancer subtype classification. However, existing nuclei segmentation methods face many challenges that limit their performance, such as color variation in histopathological images, overlapping and clumped nuclei, and ambiguous boundaries between different cell nuclei. In this paper, we present promising deep semantic nuclei segmentation models for multi-institutional WSI images (i.e., collected from different scanners) of different organs. Specifically, we study the performance of pertinent deep learning-based models for nuclei segmentation in WSI images of different stains and various organs. We also propose a feasible deep learning nuclei segmentation model formed by combining robust deep learning architectures. A comprehensive comparative study with existing software and related methods, in terms of different evaluation metrics and the number of parameters of each model, emphasizes the efficacy of the proposed nuclei segmentation models.
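Comparative studies of nuclei segmentation models typically rely on overlap metrics between predicted and ground-truth masks; a standard one is the Dice coefficient. A minimal NumPy sketch for binary masks (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; eps avoids 0/0."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Identical masks score 1.0; a half-overlapping pair scores 2*1/(2+1).
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(a, b), 3))  # 0.667
```

For nuclei specifically, object-level variants (aggregated Jaccard, panoptic quality) are often reported alongside pixel-level Dice, since clumped nuclei can score well at the pixel level while individual instances are merged.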

    Deep Learning With Attention Mechanisms in Breast Ultrasound Image Segmentation and Classification

    Breast cancer is a great threat to women’s health. Breast ultrasound (BUS) imaging is commonly used in the early detection of breast cancer as a portable, valuable, and widely available diagnostic tool. Automated BUS image analysis can assist radiologists in making accurate and fast decisions. Generally, automated BUS image analysis includes BUS image segmentation and classification. BUS image segmentation automatically extracts tumor regions from a BUS image. BUS image classification automatically classifies breast tumors into benign or malignant categories. Multi-task learning accomplishes segmentation and classification simultaneously, which makes it more appealing and practical than either individual task alone. Deep neural networks have recently been employed to achieve better image segmentation and classification results than conventional approaches. In addition, attention mechanisms are applied to deep neural networks to make them focus on the important parts of the input and improve segmentation and classification performance. However, BUS image segmentation and classification are still challenging due to the lack of public training data and the high variability of tumors in shape, size, and location. In this dissertation, we introduce three deep learning architectures with attention mechanisms, each of which aims to address the drawbacks of its peers, and evaluate their performance in terms of segmentation and classification accuracy on two public BUS datasets. First, we propose a Multi-Scale Self-Attention Network (MSSA-Net) for BUS image segmentation that can be trained on small BUS image datasets. We design a multi-scale attention mechanism that explores relationships between pixels to improve the feature representation and achieve better segmentation accuracy.
Second, we propose a Multi-Task Learning Network with Context-Oriented Self-Attention (MTL-COSA) to segment tumors and classify them as benign or malignant automatically and simultaneously. We design a COSA attention mechanism that utilizes segmentation outputs to estimate the tumor boundary, which is treated as prior medical knowledge, to guide the network to learn contextual relationships that yield better feature representations and improve both segmentation and classification accuracy. Third, we propose a Regional-Attentive Multi-Task Learning framework (RMTL-Net) for simultaneous BUS image segmentation and classification. We design a regional attention mechanism that employs the segmentation output to guide the classifier to learn important category-sensitive information in three regions of BUS images and fuses them to achieve better classification accuracy. We conduct experiments on two public BUS image datasets to show the superiority of the proposed three methods over several state-of-the-art methods for BUS image segmentation, classification, and multi-task learning.
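The attention mechanisms described above share one computational core: each position's output is a softmax-weighted combination of value features, with weights derived from learned pairwise relations between positions. A minimal scaled dot-product self-attention over "pixels" in NumPy (an illustrative sketch of the general mechanism, not MSSA-Net/MTL-COSA/RMTL-Net code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Each position attends to all others: out = softmax(QK^T / sqrt(d)) V."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = softmax(scores, axis=-1)   # each row is a distribution over positions
    return weights @ v, weights

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 8))              # 5 'pixels', 8 features each
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
print(out.shape)                         # (5, 8); each attn row sums to 1
```

A "multi-scale" variant applies this at several feature-map resolutions and fuses the results; a "context-oriented" or "regional" variant restricts or reweights the attention map using a segmentation-derived region prior.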

    Multi-class Cervical Cancer Classification using Transfer Learning-based Optimized SE-ResNet152 model in Pap Smear Whole Slide Images

    Cervical cancer is among the main causes of death globally, even though it can be avoided and treated if the afflicted tissues are removed early. Cervical screening programs must be made accessible to everyone and run effectively, which is a difficult task that necessitates, among other things, identifying the population's most vulnerable members. Therefore, in this research we present an effective deep-learning method for classifying multi-class cervical cancer disease using Pap smear images. A transfer learning-based optimized SE-ResNet152 model is used for effective multi-class Pap smear image classification. Reliable, significant image features are accurately extracted by the proposed network model. The network's hyper-parameters are optimized using the Deer Hunting Optimization (DHO) algorithm. Five SIPaKMeD dataset categories and six CRIC dataset categories constitute the 11 classes of cervical cancer disease. A Pap smear image dataset with 8838 images and varied class distributions is used to evaluate the proposed method. The introduction of a cost-sensitive loss function throughout the classifier's learning process rectifies the dataset's imbalance. Compared to prior approaches to multi-class Pap smear image classification, 99.68% accuracy, 98.82% precision, 97.86% recall, and a 98.64% F1-score are achieved by the proposed method on the test set. For automated preliminary diagnosis of cervical cancer diseases, the proposed method produces better identification results in hospitals and cervical cancer clinics due to the positive classification results.
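A cost-sensitive loss counteracts class imbalance by making errors on rare classes cost more, commonly by weighting each class inversely to its frequency. A hedged NumPy sketch of class-weighted cross-entropy (the paper's exact weighting scheme may differ; the counts below are made up):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean of -w[y] * log p(y) over the batch: rare classes get
    larger w[y], so misclassifying them is penalized more."""
    eps = 1e-12
    w = class_weights[labels]
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(-w * np.log(picked + eps)))

# Inverse-frequency weights for an imbalanced 3-class toy set.
counts = np.array([80.0, 15.0, 5.0])
weights = counts.sum() / (len(counts) * counts)   # ≈ [0.417, 2.222, 6.667]
probs = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
print(round(weighted_cross_entropy(probs, labels, weights), 3))  # ≈ 2.464
```

With uniform weights this reduces to ordinary cross-entropy; the inverse-frequency weighting simply rescales each sample's gradient, so it plugs into any classifier trained by gradient descent.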

    Multi-Scale Attention-based Multiple Instance Learning for Classification of Multi-Gigapixel Histology Images

    Histology images with multi-gigapixel resolution yield rich information for cancer diagnosis and prognosis. Most of the time, only slide-level labels are available, because pixel-wise annotation is a labour-intensive task. In this paper, we propose a deep learning pipeline for classification in histology images. Using multiple instance learning, we attempt to predict the latent membrane protein 1 (LMP1) status of nasopharyngeal carcinoma (NPC) based on haematoxylin and eosin-stained (H&E) histology images. We utilise an attention mechanism with residual connections in our aggregation layers. In our 3-fold cross-validation experiment, we achieved an average accuracy, AUC and F1-score of 0.936, 0.995 and 0.862, respectively. This method also allows us to examine model interpretability by visualising attention scores. To the best of our knowledge, this is the first attempt to predict LMP1 status in NPC using deep learning.
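In attention-based multiple instance learning, patch-level embeddings are pooled into one slide-level representation with learned attention scores, and those same scores provide the interpretability map mentioned above. A small NumPy sketch of attention pooling in the style of Ilse et al. (illustrative, not the authors' implementation; shapes are assumed):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, w, v):
    """Slide embedding = sum_i a_i * h_i with a = softmax(w^T tanh(V h_i));
    a_i is the attention score of patch i (usable as a heatmap)."""
    scores = np.tanh(instances @ v) @ w      # one scalar per instance
    attn = softmax(scores)
    return attn @ instances, attn

rng = np.random.default_rng(2)
patches = rng.normal(size=(10, 16))          # 10 patch embeddings, dim 16
v, w = rng.normal(size=(16, 16)), rng.normal(size=16)
bag_embedding, attn = attention_mil_pool(patches, w, v)
print(bag_embedding.shape)                   # (16,); attention scores sum to 1
```

The slide-level classifier then operates only on `bag_embedding`, so training needs nothing beyond the slide-level label; gradients flow back through the attention weights to the patch encoder.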

    Studies on deep learning approach in breast lesions detection and cancer diagnosis in mammograms

    Breast cancer has accounted for the largest proportion of newly diagnosed cancers in women in recent years. Early diagnosis of breast cancer can improve treatment outcomes and reduce mortality. Mammography is convenient and reliable and is the most commonly used method for breast cancer screening. However, manual examination is limited by cost and by radiologists' experience, which introduces high false positive rates and missed examinations. Therefore, a high-performance computer-aided diagnosis (CAD) system is significant for lesion detection and cancer diagnosis. Traditional CAD systems for cancer diagnosis require a large number of manually selected features and retain a high false positive rate. Methods based on deep learning can automatically extract image features through the network, but their performance is limited by multicenter data biases, the complexity of lesion features, and the high cost of annotations. It is therefore necessary to propose a CAD system that improves lesion detection and cancer diagnosis while addressing the above problems. This thesis aims to utilize deep learning methods to improve CAD performance and the effectiveness of lesion detection and cancer diagnosis. Starting from the detection of multiple lesion types using deep learning methods, with full consideration of the characteristics of mammography, this thesis explores a microcalcification detection method based on multiscale feature fusion and a mass detection method based on multi-view enhancing. Then, a classification method based on multi-instance learning is developed, which integrates the detection results from the above methods, to realize precise lesion detection and cancer diagnosis in mammography.
For the detection of microcalcifications, a microcalcification detection network named MCDNet is proposed to overcome the problems of multicenter data biases, the low resolution of network inputs, and scale differences between microcalcifications. In MCDNet, Adaptive Image Adjustment mitigates the impact of multicenter biases and maximizes the effective input pixels. The proposed pyramid network with shortcut connections then ensures that the feature maps used for detection contain more precise localization and classification information about multiscale objects. Within this structure, a trainable Weighted Feature Fusion is proposed to improve detection performance for objects at both scales by learning the contribution of the feature maps at different stages. The experiments show that MCDNet outperforms other methods in robustness and precision. At an average of one false positive per image, the recall rates for benign and malignant microcalcifications are 96.8% and 98.9%, respectively. MCDNet can effectively help radiologists detect microcalcifications in clinical applications. For the detection of breast masses, a weakly supervised multi-view enhancing mass detection network named MVMDNet is proposed to address the lack of lesion-level labels. MVMDNet can be trained on image-level labeled datasets and extracts extra localization information by exploring the geometric relations between multi-view mammograms. In Multi-view Enhancing, Spatial Correlation Attention is proposed to extract corresponding location information between different views, while a Sigmoid Weighted Fusion module fuses diagnostic and auxiliary features to improve localization precision. A CAM-based Detection module is proposed to provide mass detections from the classification labels.
The results of experiments on both an in-house dataset and a public dataset, [email protected] and [email protected] (recall rate @ average number of false positives per image), demonstrate that MVMDNet achieves state-of-the-art performance among weakly supervised methods and has robust generalization ability to alleviate multicenter biases. In the study of cancer diagnosis, a breast cancer classification network named CancerDNet, based on multi-instance learning, is proposed. CancerDNet successfully addresses the complexity of lesion features in whole-image classification by utilizing the lesion detection results from the previous chapters. Whole Case Bag Learning is proposed to combine the features extracted from the four views, which works like a radiologist to classify each case. Low-capacity Instance Learning and High-capacity Instance Learning successfully integrate the detection of multiple lesion types into CancerDNet, so that the model can fully consider lesions with complex features in the classification task. CancerDNet achieves AUCs of 0.907 and 0.925 on the in-house and public datasets, respectively, which is better than current methods. These results show that CancerDNet achieves high-performance cancer diagnosis. Across the above three parts, this thesis fully considers the characteristics of mammograms and proposes deep learning-based methods for lesion detection and cancer diagnosis. The results of experiments on in-house and public datasets show that the proposed methods achieve the state of the art in microcalcification detection, mass detection, and case-level cancer classification, and have strong multicenter generalization ability. The results also prove that the proposed methods can effectively assist radiologists in making diagnoses while saving labor costs.
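Detection results of this kind are reported as recall at a fixed average number of false positives per image (FROC-style operating points). A hedged NumPy sketch of computing one such operating point from scored detections (illustrative, not the thesis's evaluation code; the toy numbers are made up):

```python
import numpy as np

def recall_at_fppi(scores, is_tp, n_images, n_lesions, target_fppi):
    """Sweep the score threshold from high to low; report recall at the
    lowest threshold whose false positives per image stay <= target_fppi."""
    order = np.argsort(scores)[::-1]         # highest-confidence first
    tp = np.cumsum(is_tp[order])             # cumulative true positives
    fp = np.cumsum(~is_tp[order])            # cumulative false positives
    ok = fp / n_images <= target_fppi
    return float(tp[ok].max() / n_lesions) if ok.any() else 0.0

# Toy example: 6 detections over 2 images; 4 ground-truth lesions in total.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
is_tp  = np.array([True, True, False, True, False, False])
print(recall_at_fppi(scores, is_tp, n_images=2, n_lesions=4, target_fppi=1.0))  # 0.75
```

Averaging this recall over several `target_fppi` values (e.g. 0.25, 0.5, 1, 2) gives the summary FROC score often used to compare detectors.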