
    Adversarial Convolutional Networks with Weak Domain-Transfer for Multi-sequence Cardiac MR Images Segmentation

    Analysis and modeling of the ventricles and myocardium are important in the diagnosis and treatment of heart diseases. Manual delineation of those tissues in cardiac MR (CMR) scans is laborious and time-consuming, and the ambiguity of the boundaries makes the segmentation task rather challenging. Furthermore, annotations on some modalities, such as Late Gadolinium Enhancement (LGE) MRI, are often not available. We propose an end-to-end segmentation framework based on a convolutional neural network (CNN) and adversarial learning. A dilated residual U-shaped network is used as a segmentor to generate the prediction mask; meanwhile, a CNN is utilized as a discriminator model to judge the segmentation quality. To leverage the available annotations across modalities per patient, a new loss function named weak domain-transfer loss is introduced to the pipeline. The proposed model is evaluated on the public dataset released by the challenge organizer in MICCAI 2019, which consists of 45 sets of multi-sequence CMR images. We demonstrate that the proposed adversarial pipeline outperforms baseline deep-learning methods. Comment: 9 pages, 4 figures, conference
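    The abstract describes a segmentor/discriminator pair trained with a supervised term, an adversarial term, and a weak domain-transfer term. The sketch below is a minimal PyTorch illustration of that general training pattern; the layer sizes, loss weights, and the simplified confidence-based stand-in for the domain-transfer term are assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch of adversarial segmentation training (not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Segmentor(nn.Module):
    """Stand-in for the dilated residual U-shaped segmentor."""
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)  # per-class logits

class Discriminator(nn.Module):
    """CNN that scores an (image, mask) pair as real annotation vs. prediction."""
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + n_classes, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, image, mask_probs):
        return self.net(torch.cat([image, mask_probs], dim=1))

def training_step(seg, disc, opt_s, opt_d, annotated, labels, unannotated,
                  lam_adv=0.01, lam_dt=0.1):
    bce = nn.BCEWithLogitsLoss()
    device = annotated.device
    # --- discriminator update: real annotated masks vs. predicted masks ---
    with torch.no_grad():
        fake = F.softmax(seg(annotated), dim=1)
    real = F.one_hot(labels, fake.shape[1]).permute(0, 3, 1, 2).float()
    ones = torch.ones(real.size(0), 1, device=device)
    zeros = torch.zeros(real.size(0), 1, device=device)
    d_loss = bce(disc(annotated, real), ones) + bce(disc(annotated, fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # --- segmentor update: supervised CE + adversarial term + transfer term ---
    pred_a = seg(annotated)
    seg_loss = F.cross_entropy(pred_a, labels)
    adv_loss = bce(disc(annotated, F.softmax(pred_a, dim=1)), ones)
    # Crude proxy for using the unannotated modality (e.g. LGE): encourage
    # confident predictions. The paper's weak domain-transfer loss differs.
    probs_u = F.softmax(seg(unannotated), dim=1)
    dt_loss = -(probs_u * probs_u.clamp_min(1e-8).log()).sum(dim=1).mean()
    s_loss = seg_loss + lam_adv * adv_loss + lam_dt * dt_loss
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()
```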

    U-Net Architecture for Liver Image Segmentation for Early Detection of Liver Cancer

    The liver is one of the human organs responsible for digesting, absorbing, and processing food, and it filters blood from the digestive tract before it is carried to other organs. The liver is highly susceptible to various diseases, one of which is liver cancer, so early detection or diagnosis of the liver is necessary. To address this problem, this study performs liver segmentation on liver images using a Convolutional Neural Network (CNN) with the U-Net architecture. The first step is data pre-processing, which applies the green channel, histogram equalization (HE), and contrast limited adaptive histogram equalization (CLAHE) techniques. Segmentation is then carried out with the proposed method. This study uses a liver dataset obtained from the Kaggle website. Using the CNN with the U-Net architecture, the results reach an accuracy of 97.62%, a sensitivity of 89.84%, a specificity of 98.37%, a Jaccard coefficient of 76.99%, and a dice similarity coefficient (DSC) of 87%. Based on these results, it can be concluded that the proposed method performs very well in segmenting liver images
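    The pre-processing pipeline named in the abstract (green channel, HE, CLAHE) maps directly onto standard OpenCV calls. The sketch below shows that chain; the clip limit, tile size, and file name are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch of the described pre-processing chain using OpenCV.
import cv2

def preprocess(path):
    image = cv2.imread(path)                       # BGR image from disk
    green = image[:, :, 1]                         # keep only the green channel
    he = cv2.equalizeHist(green)                   # global histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(he)                         # contrast-limited adaptive HE

# enhanced = preprocess("liver_slice.png")  # hypothetical file name
```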

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, for the benefit of all types of readers, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial models (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBM), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of state-of-the-art achievements
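    Most of the evaluation criteria listed in the abstract can be derived from the confusion-matrix counts of a binary prediction. The sketch below is a minimal illustration of those formulas for a binary segmentation mask; the function and variable names are assumptions, not code from the review.

```python
# Hedged sketch of the listed evaluation metrics for a binary mask.
import numpy as np

def segmentation_metrics(pred, target):
    """pred, target: binary numpy arrays of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    eps = 1e-8                                   # guard against empty masks
    sensitivity = tp / (tp + fn + eps)           # recall / true positive rate
    specificity = tn / (tn + fp + eps)
    precision   = tp / (tp + fp + eps)
    accuracy    = (tp + tn) / (tp + tn + fp + fn + eps)
    dice        = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard     = tp / (tp + fp + fn + eps)
    f1          = 2 * precision * sensitivity / (precision + sensitivity + eps)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy,
                dice=dice, jaccard=jaccard, f1=f1)
```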

    NerveFormer: A Cross-Sample Aggregation Network for Corneal Nerve Segmentation

    The segmentation of corneal nerves in corneal confocal microscopy (CCM) is of great importance to the quantification of clinical parameters in the diagnosis of eye-related diseases and systemic diseases. Existing works mainly use convolutional neural networks to improve segmentation accuracy, while further improvement is needed to mitigate nerve discontinuity and noise interference. In this paper, we propose a novel corneal nerve segmentation network, named NerveFormer, to resolve the above-mentioned limitations. The proposed NerveFormer includes a Deformable and External Attention Module (DEAM), which exploits the Transformer-based Deformable Attention (TDA) and External Attention (TEA) mechanisms. TDA is introduced to explore the local internal nerve features in a single CCM image, while TEA is proposed to model global external nerve features across different CCM images. Specifically, to efficiently fuse the internal and external nerve features, TDA obtains the query set required by TEA, thereby strengthening the characterization ability of TEA. The proposed model therefore aggregates the learned features from both single-sample and cross-sample perspectives, allowing for better extraction of corneal nerve features across the whole dataset. Experimental results on two public CCM datasets show that our proposed method achieves state-of-the-art performance, especially in terms of segmentation continuity and noise discrimination
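    The cross-sample idea rests on external attention: queries attend to a small learnable memory that is shared across all samples rather than to keys and values computed from the same image. The sketch below is a generic PyTorch external-attention block of that kind; the memory size and the double normalization follow common practice and are assumptions, not NerveFormer's exact DEAM.

```python
# Hedged sketch of a generic external-attention block (not NerveFormer's DEAM).
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    def __init__(self, dim, mem_size=64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)  # external key memory
        self.mv = nn.Linear(mem_size, dim, bias=False)  # external value memory

    def forward(self, queries):                  # queries: (batch, tokens, dim)
        attn = self.mk(queries)                  # (batch, tokens, mem_size)
        attn = attn.softmax(dim=1)               # normalize over tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-8)  # then over memory slots
        return self.mv(attn)                     # (batch, tokens, dim)

# x = torch.randn(2, 196, 128); y = ExternalAttention(128)(x)  # shape-preserving
```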

    Deep Visual Unsupervised Domain Adaptation for Classification Tasks: A Survey
