95 research outputs found

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical perspective on cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered efficient enough to achieve strong performance. To serve a broad audience, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. This study provides the basic framework of how such machine learning works on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders, restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the deep learning models successfully applied to different types of cancer. Considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of state-of-the-art achievements.
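The evaluation criteria listed above are straightforward to compute from binary predictions. As an illustrative sketch (not the Python code provided by the paper itself), the following pure-NumPy function derives sensitivity, specificity, precision, F1, Dice coefficient, and Jaccard index from confusion-matrix counts:

```python
import numpy as np

def binary_metrics(pred, target):
    """Common diagnostic evaluation metrics for binary masks/labels.

    pred, target: numpy arrays of 0/1 values with the same shape.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.sum(pred & target)      # true positives
    tn = np.sum(~pred & ~target)    # true negatives
    fp = np.sum(pred & ~target)     # false positives
    fn = np.sum(~pred & target)     # false negatives
    eps = 1e-9                      # avoid division by zero on empty masks
    sensitivity = tp / (tp + fn + eps)   # a.k.a. recall
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * sensitivity / (precision + sensitivity + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)  # equals F1 for binary masks
    jaccard = tp / (tp + fp + fn + eps)       # intersection over union
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1,
            "dice": dice, "jaccard": jaccard}

# toy example with flattened "masks"
p = np.array([1, 1, 0, 0, 1])
t = np.array([1, 0, 0, 1, 1])
m = binary_metrics(p, t)
```

Note that for binary masks the Dice coefficient and the F1 score coincide term by term, which is why both appear in the literature for the same quantity.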

    Deep learning for image-based liver analysis — A comprehensive review focusing on malignant lesions

    Deep learning-based methods, in particular convolutional neural networks and fully convolutional networks, are now widely used in the medical image analysis domain. This review focuses on deep learning analysis of focal liver lesions, with special interest in hepatocellular carcinoma and metastatic cancer, and of structures such as the parenchyma and the vascular system. We address several neural network architectures used for analyzing anatomical structures and lesions in the liver from imaging modalities such as computed tomography, magnetic resonance imaging, and ultrasound. Image analysis tasks such as segmentation, object detection, and classification for the liver, liver vessels, and liver lesions are discussed. Based on the qualitative search, 91 papers were selected for the survey, including journal publications and conference proceedings. The reviewed papers are grouped into eight categories based on the methodologies used. Comparing the evaluation metrics, hybrid models performed better for both liver and lesion segmentation tasks, ensemble classifiers performed better for vessel segmentation tasks, and combined approaches performed better for both lesion classification and detection tasks. Performance was measured by the Dice score for segmentation and by accuracy for classification and detection, the most commonly used metrics.
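Since the Dice score is the segmentation metric reported throughout the surveyed work, a minimal sketch of how it can be computed per anatomical structure from integer label maps may be useful. The label values and toy maps below are illustrative assumptions, not data from the review:

```python
import numpy as np

def per_class_dice(pred, target, labels):
    """Dice score for each anatomical label in integer label maps."""
    scores = {}
    for lab in labels:
        p = (pred == lab)
        t = (target == lab)
        denom = p.sum() + t.sum()
        # convention: score 1.0 when the label is absent from both maps
        scores[lab] = 2.0 * np.sum(p & t) / denom if denom else 1.0
    return scores

# toy 2D label maps: 0 = background, 1 = liver, 2 = lesion (assumed encoding)
pred = np.array([[1, 1, 0],
                 [1, 2, 0],
                 [0, 2, 0]])
target = np.array([[1, 1, 0],
                   [1, 2, 2],
                   [0, 0, 0]])
scores = per_class_dice(pred, target, labels=[1, 2])
```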

    Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations

    There is no denying how much machine learning and computer vision have grown in recent years. Their greatest advantages lie in their automation, suitability, and ability to generate astounding results in a matter of seconds in a reproducible manner, aided by the ubiquitous advancements in the computing capabilities of current graphics processing units and the highly efficient implementations of such techniques. Hence, in this paper, we survey the key studies published between 2014 and 2020, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic-vasculature structures. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and further partitioned if the amount of work that falls under a certain scheme is significant. Moreover, different datasets and challenges found in the literature and on websites, containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used extensively in the literature are also covered in our review, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel segmentation challenge and why their absence needs to be dealt with in an accelerated manner.
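To illustrate the unsupervised side of the taxonomy above, here is a minimal sketch (our own illustration, not a method from any surveyed study) of two-cluster 1D k-means on voxel intensities, a crude baseline for separating bright tissue from dark background:

```python
import numpy as np

def kmeans_threshold(intensities, iters=20):
    """Two-cluster 1D k-means on voxel intensities; returns a binary mask.

    A crude unsupervised baseline: the brighter cluster is taken as the
    'tissue of interest'. Real pipelines are far more sophisticated.
    """
    x = np.asarray(intensities, dtype=float)
    c_lo, c_hi = x.min(), x.max()  # initialise centroids at the extremes
    for _ in range(iters):
        # assign each voxel to its nearest centroid
        mask = np.abs(x - c_hi) < np.abs(x - c_lo)
        if mask.any():
            c_hi = x[mask].mean()
        if (~mask).any():
            c_lo = x[~mask].mean()
    return mask

# toy "slice": background intensities around 10, tissue around 100
img = np.array([10, 12, 9, 95, 100, 105, 11, 98])
mask = kmeans_threshold(img)
```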

    On Medical Image Segmentation and on Modeling Long Term Dependencies

    The delineation (segmentation) of malignant tumours in medical images is important for cancer diagnosis, the planning of targeted treatments, and the tracking of cancer progression and treatment response. However, although manual segmentation of medical images is accurate, it is time consuming, requires expert operators, and is often impractical with large datasets. This motivates the need for automated segmentation. Automated segmentation of tumours is particularly challenging, however, due to variability in tumour appearance, in image acquisition equipment and acquisition parameters, and across patients. Tumours vary in type, size, location, and quantity; the rest of the image varies due to anatomical differences between patients, prior surgery or ablative therapy, differences in contrast enhancement of tissues, and image artefacts. Furthermore, scanner acquisition protocols vary considerably between clinical sites, and image characteristics vary with the scanner model. Due to all of these variabilities, a segmentation model must be flexible enough to learn general features from the data. The advent of deep convolutional neural networks (CNNs) allowed for accurate and precise classification of highly variable images and, by extension, high-quality image segmentation. However, these models must be trained on enormous quantities of labeled data.
This constraint is particularly challenging in the context of medical image segmentation because the number of segmentations that can be produced is limited in practice by the need to employ expert operators to do such labeling. Furthermore, the variabilities of interest in medical images appear to follow a long-tail distribution, meaning that a particularly large amount of training data may be required to provide a sufficient sample of each type of variability to a CNN. This motivates the need to develop strategies for training these models with limited ground-truth segmentations available.
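One widely used strategy for stretching limited ground-truth segmentations (a generic illustration, not this thesis's specific contribution) is geometric augmentation, where every transform applied to an image is applied identically to its mask:

```python
import numpy as np

def augment_pair(image, mask):
    """Generate the 8 dihedral variants (rotations and flips) of an
    image/mask pair, a common way to stretch scarce labeled data.

    Any geometric transform applied to the image must be applied
    identically to its ground-truth mask, or the labels become invalid.
    """
    pairs = []
    for k in range(4):                      # 0, 90, 180, 270 degree rotations
        img_r = np.rot90(image, k)
        msk_r = np.rot90(mask, k)
        pairs.append((img_r, msk_r))
        pairs.append((np.fliplr(img_r), np.fliplr(msk_r)))
    return pairs

# toy image with a mask defined by a pointwise rule (values > 4)
image = np.arange(9).reshape(3, 3)
mask = (image > 4).astype(np.uint8)
augmented = augment_pair(image, mask)
```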

    Automated liver tissues delineation techniques: A systematic survey on machine learning current trends and future orientations

    Machine learning and computer vision techniques have grown rapidly in recent years due to their automation, suitability, and ability to generate astounding results. Hence, in this paper, we survey the key studies published between 2014 and 2022, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic-vasculature structures. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and they are further partitioned if the amount of work that falls under a certain scheme is significant. Moreover, different datasets and challenges found in the literature and on websites containing masks of the aforementioned tissues are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used extensively in the literature are also mentioned in our review, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel segmentation challenge and why their absence needs to be dealt with sooner rather than later. © 2022 The Author(s). This publication was made possible by an Award [GSRA6-2-0521-19034] from the Qatar National Research Fund (a member of Qatar Foundation). The contents herein are solely the responsibility of the authors. Open Access funding was provided by the Qatar National Library.

    Medical Image Segmentation by Deep Convolutional Neural Networks

    Medical image segmentation is a fundamental and critical step in medical image analysis. Due to the complexity and diversity of medical images, their segmentation continues to be a challenging problem. Recently, deep learning techniques, especially Convolutional Neural Networks (CNNs), have received extensive research attention and achieved great success in many vision tasks. Specifically, with the advent of Fully Convolutional Networks (FCNs), automatic medical image segmentation based on FCNs is a promising research field. This thesis focuses on two medical image segmentation tasks: lung segmentation in chest X-ray images and nuclei segmentation in histopathological images. For the lung segmentation task, we investigate several FCNs that have been successful in semantic and medical image segmentation, and we evaluate their performance on three publicly available chest X-ray image datasets. For the nuclei segmentation task, whose challenges are the difficulty of segmenting small, overlapping, and touching nuclei and the limited generalization to nuclei in different organs and tissue types, we propose a novel nuclei segmentation approach based on a two-stage learning framework and Deep Layer Aggregation (DLA). We convert the original binary segmentation task into a two-step task by adding nuclei-boundary prediction (3 classes) as an intermediate step. To solve this two-step task, we design a two-stage learning framework by stacking two U-Nets: the first stage estimates nuclei and their coarse boundaries, while the second stage outputs the final fine-grained segmentation map. Furthermore, we extend the U-Nets with DLA by iteratively merging features across different levels. We evaluate the proposed method on two diverse public nuclei datasets.
The experimental results show that our approach outperforms many standard segmentation architectures and recently proposed nuclei segmentation methods, and that it generalizes easily across different cell types in various organs.
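The intermediate 3-class (background/interior/boundary) representation described above can be derived from an ordinary binary mask. A minimal pure-NumPy sketch (our illustration, not the thesis code) marks any foreground pixel with a 4-connected background neighbour as boundary:

```python
import numpy as np

def to_three_class(mask):
    """Convert a binary nuclei mask into a 3-class map:
    0 = background, 1 = nucleus interior, 2 = nucleus boundary.

    A boundary pixel is a foreground pixel with at least one 4-connected
    background neighbour (a pure-NumPy erosion via zero-padding).
    """
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (m
                & padded[:-2, 1:-1] & padded[2:, 1:-1]   # up & down neighbours
                & padded[1:-1, :-2] & padded[1:-1, 2:])  # left & right
    out = np.zeros(mask.shape, dtype=np.uint8)
    out[m] = 2          # foreground defaults to boundary
    out[interior] = 1   # overwrite the true interior
    return out

# toy example: a single 3x3 "nucleus" on a 5x5 canvas
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
labels = to_three_class(mask)
```

In the full method this boundary class is predicted by the first U-Net rather than derived at test time; the sketch only shows how such training targets can be generated from binary ground truth.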

    The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis.

    Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The image analysis tasks tackled with deep learning include classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocyte and mitosis counting), and segmentation (e.g., nuclei and gland segmentation). The majority of recent machine learning methods in digital pathology integrate a pre- and/or post-processing stage with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown that integrating pre- and post-processing methods within a deep learning pipeline can further increase the model's performance compared to the network by itself. The aim of this review is to provide an overview of the methods used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
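As a concrete example of the kind of traditional post-processing step this review covers, the following sketch (illustrative, not taken from the review) removes small connected components, typically segmentation noise, from a binary mask using a breadth-first search:

```python
import numpy as np
from collections import deque

def remove_small_objects(mask, min_size):
    """Post-processing: drop 4-connected foreground components smaller
    than min_size pixels from a binary segmentation mask."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, queue = [], deque([(i, j)])
                seen[i, j] = True
                while queue:              # BFS over one component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:  # keep only large-enough components
                    for y, x in comp:
                        out[y, x] = True
    return out.astype(np.uint8)

mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],   # the lone pixel at (1, 3) is noise
                 [0, 0, 0, 0]], dtype=np.uint8)
cleaned = remove_small_objects(mask, min_size=2)
```

In practice this is usually done with an optimized library routine (e.g. morphological filtering), but the logic is the same: label components, then threshold on their size.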