19 research outputs found

    Liver Segmentation and Liver Cancer Detection Based on Deep Convolutional Neural Network: A Brief Bibliometric Survey

    Background: This study analyzes work on liver segmentation and liver cancer detection from the year 2012 to 2020, from the perspectives of machine learning, deep learning, and different image processing techniques, using several bibliometric analysis methods. Methods: Articles on the topic were obtained from Scopus, one of the most popular databases, for the years 2012 to 2020. The Scopus analyzer facilitates analysis of the database across categories such as documents by source, year, and country. Analysis was also performed with different units of analysis, such as co-authorship, co-occurrence, and citation analysis, using VOSviewer version 1.6.15. Results: A total of 518 articles on liver segmentation and liver cancer published between 2012 and 2020 were retrieved. The statistical and network analyses show that the most articles were published in 2020, with China the largest contributor, followed by the United States and India. Conclusions: Of the 518 articles retrieved from the Scopus database, English-language articles form the largest group. Statistical analysis was performed across parameters such as authors, documents, country, and affiliation, and network analysis of these parameters was also carried out. The analysis clearly indicates the potential of the topic and considerable scope for further research using advanced computer vision, deep learning, and machine learning algorithms.
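The co-authorship and co-occurrence units of analysis mentioned above reduce to counting how often items appear together across records. A minimal sketch, using hypothetical author lists rather than the actual Scopus data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author lists for a few records (not the actual Scopus records)
records = [
    ["Li", "Wang", "Kumar"],
    ["Li", "Wang"],
    ["Kumar", "Smith"],
]

# Count co-authorship links: each unordered pair of authors on the same
# record contributes one link, as in a VOSviewer co-authorship map
links = Counter()
for authors in records:
    for a, b in combinations(sorted(authors), 2):
        links[(a, b)] += 1

print(links[("Li", "Wang")])  # Li and Wang co-author two records -> 2
```

The same counting applies to keyword co-occurrence or citation links by swapping the author lists for keyword lists or reference lists.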

    Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations

    There is no denying how machine learning and computer vision have grown in recent years. Their greatest advantages lie in their automation, suitability, and ability to generate astounding results in a matter of seconds and in a reproducible manner, aided by the ubiquitous advances in the computing capabilities of current graphics processing units and by highly efficient implementations of these techniques. Hence, in this paper we survey the key studies published between 2014 and 2020, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature. We divide the surveyed studies by the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as supervised or unsupervised, and further partitioned when the number of works falling under a given scheme is significant. Moreover, the datasets and challenges found in the literature and on websites, containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used extensively in the literature are also reviewed, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel segmentation challenge and why this absence needs to be dealt with urgently. (41 pages, 4 figures, 13 equations, 1 table. A review paper on automated machine-learning-based liver tissue segmentation.)
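Among the overlap metrics such surveys review, the Dice similarity coefficient is one of the most widely reported for segmentation. A minimal sketch with toy binary masks (not drawn from any surveyed study):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat lists of 0/1)."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]  # predicted segmentation, flattened
truth = [1, 0, 0, 1, 1, 0]  # ground-truth segmentation, flattened
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

Dice rewards overlap relative to the combined size of both masks, which is why it is preferred over plain pixel accuracy when the structure of interest occupies a small fraction of the image.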

    On Medical Image Segmentation and on Modeling Long Term Dependencies

    The delineation (segmentation) of malignant tumours in medical images is important for cancer diagnosis, the planning of targeted treatments, and the tracking of cancer progression and treatment response. However, although manual segmentation of medical images is accurate, it is time consuming, requires expert operators, and is often impractical with large datasets. This motivates the need for automated segmentation. However, automated segmentation of tumours is particularly challenging due to variability in tumour appearance, image acquisition equipment and acquisition parameters, and variability across patients. Tumours vary in type, size, location, and quantity; the rest of the image varies due to anatomical differences between patients, prior surgery or ablative therapy, differences in contrast enhancement of tissues, and image artefacts. Furthermore, scanner acquisition protocols vary considerably between clinical sites and image characteristics vary according to the scanner model. Due to all of these variabilities, a segmentation model must be flexible enough to learn general features from the data.
The advent of deep convolutional neural networks (CNNs) allowed for accurate and precise classification of highly variable images and, by extension, high-quality image segmentation. However, these models must be trained on enormous quantities of labeled data. This constraint is particularly challenging in the context of medical image segmentation, because the number of segmentations that can be produced is limited in practice by the need to employ expert operators for such labeling. Furthermore, the variabilities of interest in medical images appear to follow a long-tailed distribution, meaning a particularly large amount of training data may be required to provide a CNN with a sufficient sample of each type of variability. This motivates the need to develop strategies for training these models with the limited ground-truth segmentations available.
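One common strategy for stretching limited ground-truth segmentations is geometric augmentation, which multiplies each labeled example while keeping the image and its mask aligned. The thesis does not specify its strategies; the following is a generic sketch (real pipelines use libraries such as torchvision or albumentations):

```python
# Images and masks represented as nested lists for illustration only
def horizontal_flip(image):
    return [row[::-1] for row in image]

def vertical_flip(image):
    return image[::-1]

def augment(image, mask):
    """Yield geometric variants of an image with its label mask kept aligned."""
    yield image, mask
    yield horizontal_flip(image), horizontal_flip(mask)
    yield vertical_flip(image), vertical_flip(mask)

img  = [[1, 2], [3, 4]]
mask = [[1, 0], [0, 1]]
variants = list(augment(img, mask))
print(len(variants))  # 3 aligned image/mask pairs from one labeled example
```

The key invariant is that every transform applied to the image is applied identically to the mask, so the expanded training set remains correctly labeled.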

    TPCNN: Two-path convolutional neural network for tumor and liver segmentation in CT images using a novel encoding approach

    Automatic liver and tumour segmentation in CT images is crucial in numerous clinical applications, such as postoperative assessment, surgical planning, and pathological diagnosis of hepatic diseases. However, a considerable number of difficulties remain due to the fuzzy boundary, irregular shapes, and complex tissues of the liver. In this paper, a simple but powerful strategy based on a cascade convolutional neural network is presented for liver and tumor segmentation that overcomes these challenges. First, the input image is normalized using the Z-score algorithm; the normalized image provides more information about the boundaries of the tumor and liver. In addition, the Local Direction of Gradient (LDOG), a novel encoding algorithm, is proposed to reveal key features inside the image. The proposed encoding is highly effective in recognizing the border of the liver, even in regions close to touching organs. A cascade CNN structure then extracts both local and semi-global features, using the original image and the two derived images as input. Rather than a complex deep CNN model with many hyperparameters, we employ a simple but effective model to decrease training and testing time. Our technique outperforms state-of-the-art works in terms of segmentation accuracy and efficiency.
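The Z-score normalization step can be sketched as follows; the intensity values below are illustrative, not taken from the paper:

```python
import math

def z_score(values):
    """Z-score normalization: shift to zero mean and scale to unit variance
    (population standard deviation)."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

# Hypothetical CT intensities (Hounsfield units) from a small patch
hu = [40.0, 60.0, 80.0, 100.0]
normalized = z_score(hu)
print(normalized)  # mean 0, unit variance
```

Normalizing intensities this way makes inputs from different scanners and acquisition protocols comparable before they reach the network.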

    Computer aided diagnosis system for breast cancer using deep learning.

    The recent rise of big data technology surrounding electronic systems and developed toolkits gave birth to new promise for Artificial Intelligence (AI). With the continuous use of data-centric systems and machines in our lives, such as social media, surveys, emails, and reports, there is no doubt that data has become the center of attention for scientists and has motivated them to provide more decision-making and operational support systems across multiple domains. With the recent breakthroughs in artificial intelligence, machine learning and deep learning models have achieved remarkable advances in computer vision, e-commerce, cybersecurity, and healthcare. In particular, numerous applications have provided efficient solutions to assist radiologists and doctors in medical imaging analysis, which remains the essential visual representation used to construct the final observation and diagnosis. Medical research in cancerology and oncology has recently been blended with the knowledge gained from computer engineering and data science experts. In this context, automatic assistance, commonly known as a Computer-Aided Diagnosis (CAD) system, has become a popular area of research and development in recent decades. As a result, CAD systems have been developed using multidisciplinary knowledge and expertise, and they have been used to analyze patient information to assist clinicians and practitioners in their decision-making. Treating and preventing cancer remains a crucial task that radiologists and oncologists face every day as they detect and investigate abnormal tumors. Therefore, a CAD system could be developed to provide decision support for many applications in cancer patient care, such as lesion detection, characterization, cancer staging, tumor assessment, recurrence, and prognosis prediction. Breast cancer is considered one of the most common types of cancer in females across the world.
It is also considered the leading cause of mortality among women, and its incidence has increased drastically every year. Early detection and diagnosis of abnormalities in screened breasts is acknowledged as the optimal way to assess the risk of developing breast cancer and thus reduce the increasing mortality rate. Accordingly, this dissertation proposes a new state-of-the-art CAD system for breast cancer diagnosis based on deep learning technology and cutting-edge computer vision techniques. Mammography screening is recognized as the most effective tool for detecting breast lesions early and reducing the mortality rate. It helps reveal abnormalities in the breast such as mass lesions, architectural distortion, and microcalcifications. With the number of patients screened daily continuously increasing, a second-reading tool or assistance system could improve the process of breast cancer diagnosis. Mammograms can be obtained using different modalities, such as an X-ray scanner or a Full-Field Digital Mammography (FFDM) system. The quality of the mammograms and the characteristics of the breast (i.e., density, size) and/or the tumors (i.e., location, size, shape) can affect the final diagnosis, so radiologists may miss lesions and consequently generate false detections and diagnoses. This work was therefore motivated to improve the reading of mammograms in order to increase the accuracy of these challenging tasks. The efforts presented in this work consist of the design and implementation of neural network models for a fully integrated CAD system dedicated to breast cancer diagnosis. The approach first automatically detects and identifies breast lesions from entire mammograms using a fusion-model methodology. The second step then focuses only on mass lesions: the proposed system segments the detected bounding boxes of the mass lesions to mask their background.
A new neural network architecture for mass segmentation was proposed and integrated with a new data enhancement and augmentation technique. Finally, a third stage uses a stacked ensemble of neural networks to classify and diagnose the pathology (i.e., malignant or benign), the Breast Imaging Reporting and Data System (BI-RADS) assessment score (i.e., from 2 to 6), and/or the shape (i.e., round, oval, lobulated, irregular) of the segmented breast lesions. A further contribution applied the first stage of the CAD system to a retrospective analysis and comparison of the model on prior mammograms from a private dataset, joining the learning of the detection and classification model with image-to-image mapping between prior and current screening views. Each step of the CAD system was evaluated and tested on public and private datasets, and the results were fairly compared with benchmark mammography datasets. The integrated framework was also tested for deployment and showcase. The performance of the CAD system for the detection and identification of breast masses reached an overall accuracy of 97%. The segmentation of breast masses was evaluated together with the previous stage, and the approach achieved an overall performance of 92%. Finally, the classification and diagnosis step that defines the outcome of the CAD system reached an overall pathology classification accuracy of 96%, a BI-RADS categorization accuracy of 93%, and a shape classification accuracy of 90%. The results given in this dissertation indicate that the proposed integrated framework may surpass current deep learning approaches when all the proposed automated steps are used. A limitation of the proposed work is the long training time of the different methods, due to the high computational cost of the developed neural networks, which have a huge number of trainable parameters.
Future work could take the methodologies in new directions by combining different mammography datasets and shortening the long training of deep learning models. Moreover, the CAD system could be upgraded with annotated datasets that cover more breast cancer lesion types, such as calcification and architectural distortion. The proposed framework was first developed to help detect and identify suspicious breast lesions in X-ray mammograms. Next, the work focused only on mass lesions and segmented the detected ROIs to remove the tumor's background and highlight the contours, texture, and shape of the lesions. Finally, the diagnostic decision was predicted to classify the pathology of the lesions and investigate other characteristics, such as the tumor's grading assessment and shape type. The dissertation presents a CAD system to assist doctors and experts in identifying the risk of breast cancer. Overall, the proposed CAD method incorporates advances in image processing, deep learning, and image-to-image translation for a biomedical application.
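The classification stage relies on combining the outputs of several networks. As a much simpler stand-in for the stacked ensemble described in the abstract, soft voting averages the class probabilities of the individual models (the probabilities below are hypothetical, not results from the dissertation):

```python
def soft_vote(probabilities):
    """Average per-class probabilities from several models.
    A simple alternative to a learned stacking meta-model."""
    n = len(probabilities)
    num_classes = len(probabilities[0])
    return [sum(p[i] for p in probabilities) / n for i in range(num_classes)]

# Hypothetical [malignant, benign] probabilities from three networks
models = [[0.9, 0.1], [0.7, 0.3], [0.8, 0.2]]
print(soft_vote(models))  # ≈ [0.8, 0.2], so the ensemble predicts malignant
```

A true stacked ensemble would instead feed these per-model probabilities into a second-level classifier trained to weigh each model's reliability.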

    Explainable AI and susceptibility to adversarial attacks in classification and segmentation of breast ultrasound images

    Ultrasound is a non-invasive imaging modality that can conveniently be used to classify suspicious breast nodules and potentially detect the onset of breast cancer. Recently, Convolutional Neural Network (CNN) techniques have shown promising results in classifying ultrasound images of the breast as benign or malignant. However, CNN inference acts as a black box, and as such its decision-making is not interpretable. Therefore, increasing effort has been dedicated to explaining this process, most notably through Gradient-weighted Class Activation Mapping (Grad-CAM) and other techniques that provide visual explanations of the inner workings of CNNs. In addition to interpretation, these methods provide clinically important information, such as identifying the location for biopsy or treatment. In this work, we analyze how practically undetectable adversarial attacks can be devised to dramatically alter these importance maps. Furthermore, we show that this change in the importance maps can occur with or without altering the classification result, rendering the attacks even harder to detect. As such, care must be taken when using these importance maps to shed light on the inner workings of deep learning. Finally, we utilize Multi-Task Learning (MTL) and propose a new network based on deep residual networks to improve the classification accuracy. Our sensitivity and specificity values are comparable to state-of-the-art results.
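Given a convolutional layer's activations and the gradients of a class score with respect to them (obtained from a network's forward and backward passes), the Grad-CAM importance map is a ReLU-rectified, gradient-weighted sum over channels. A minimal sketch with toy arrays standing in for real network outputs:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM importance map.
    activations, gradients: arrays of shape (channels, H, W), where gradients
    are d(class score)/d(activations)."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    return np.maximum(cam, 0.0)                       # ReLU keeps positive evidence only

# Toy activations and gradients (in practice produced by the CNN itself)
acts  = np.array([[[1.0, -1.0], [0.5, 2.0]],
                  [[0.0,  1.0], [1.0, 0.0]]])
grads = np.array([[[1.0,  1.0], [1.0, 1.0]],
                  [[-1.0, -1.0], [-1.0, -1.0]]])
print(grad_cam(acts, grads))
```

Because the map is assembled from gradients, small adversarial perturbations to the input that barely change the prediction can still redirect these weights, which is exactly the fragility the paper examines.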

    Machine Learning towards General Medical Image Segmentation

    The quality of patient care associated with diagnostic radiology is proportionate to a physician's workload. Segmentation is a fundamental, rate-limiting precursor to diagnostic and therapeutic procedures. Advances in machine learning aim to increase diagnostic efficiency by replacing single-application tools with generalized algorithms. We approached segmentation as a multitask shape-regression problem, simultaneously predicting coordinates on an object's contour while jointly capturing global shape information. Shape regression models the inherent point correlations to recover ambiguous boundaries not supported by clear edges or region homogeneity. Its capabilities were investigated using multi-output support vector regression (MSVR) on head and neck (HaN) CT images. Subsequently, we incorporated multiplane and multimodality spinal images and presented the first deep learning multiapplication framework for shape regression, the holistic multitask regression network (HMR-Net). The performance of MSVR and HMR-Net was comparable or superior to state-of-the-art algorithms. Multiapplication frameworks bridge technical knowledge gaps and increase workflow efficiency.

    Deep learning applications in the prostate cancer diagnostic pathway

    Prostate cancer (PCa) is the second most frequently diagnosed cancer in men worldwide and the fifth leading cause of cancer death in men, with an estimated 1.4 million new cases and 375,000 deaths in 2020. The risk factors most strongly associated with PCa are advancing age, family history, race, and mutations of the BRCA genes. Since these risk factors are not preventable, early and accurate diagnosis is a key objective of the PCa diagnostic pathway. In the UK, clinical guidelines recommend multiparametric magnetic resonance imaging (mpMRI) of the prostate for use by radiologists to detect, score, and stage lesions that may correspond to clinically significant PCa (CSPCa), prior to confirmatory biopsy and histopathological grading. Computer-aided diagnosis (CAD) of PCa using artificial intelligence algorithms holds currently unrealized potential to improve upon the diagnostic accuracy achievable by radiologist assessment of mpMRI, to improve reporting consistency between radiologists, and to reduce reporting time. In this thesis, we build and evaluate deep learning-based CAD systems for the PCa diagnostic pathway, addressing gaps identified in the literature. First, we introduce a novel patient-level classification framework, PCF, which uses a stacked ensemble of convolutional neural networks (CNNs) and support vector machines (SVMs) to assign each patient a probability of having CSPCa, using mpMRI and clinical features. Second, we introduce AutoProstate, a deep learning-powered framework for automated PCa assessment and reporting; AutoProstate uses biparametric MRI and clinical data to populate an automatic diagnostic report containing segmentations of the whole prostate, prostatic zones, and candidate CSPCa lesions, as well as several derived characteristics of clinical value.
Finally, as automatic segmentation algorithms have not yet reached the desired robustness for clinical use, we introduce interactive click-based segmentation applications for the whole prostate and prostatic lesions, with potential uses in diagnosis, active surveillance progression monitoring, and treatment planning
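An interactive click-based segmentation tool can be approximated by region growing from the clicked pixel; the sketch below is a generic illustration of that idea, not the thesis's actual algorithm:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a segment from a user click: include 4-connected pixels whose
    intensity is within `tol` of the clicked pixel's intensity."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    seed_val = image[sy][sx]
    mask = [[0] * w for _ in range(h)]
    mask[sy][sx] = 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(image[ny][nx] - seed_val) <= tol):
                mask[ny][nx] = 1
                queue.append((ny, nx))
    return mask

# Toy image: a dark structure (top-left) next to brighter tissue
image = [[10, 11, 50],
         [12, 13, 52],
         [55, 54, 53]]
print(region_grow(image, (0, 0), tol=5))  # [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
```

Interactive tools typically iterate this loop: the clinician adds positive or negative clicks, and the segment is re-grown or corrected until the contour is acceptable.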

    Advanced Imaging Analysis for Predicting Tumor Response and Improving Contour Delineation Uncertainty

    A dissertation submitted by Rebecca Nichole Mahon, MS, in partial fulfillment of the requirements for the degree of Doctor of Philosophy at Virginia Commonwealth University, 2018. Major Director: Dr. Elisabeth Weiss, Professor, Department of Radiation Oncology. Radiomics, an advanced form of imaging analysis, is a growing field of interest in medicine. Radiomics seeks to extract quantitative information from images using computer vision techniques to help improve treatment. Early prediction of treatment response is one way to improve overall patient care. This work explores the feasibility of building predictive models from radiomic texture features extracted from magnetic resonance (MR) and computed tomography (CT) images of lung cancer patients. First, repeatable primary tumor texture features from each imaging modality were identified to ensure a sufficient number of repeatable features existed for model development. A workflow was then developed to build models predicting overall survival and local control from single-modality and multi-modality radiomic features; the workflow was also applied to normal tissue contours as a control study. Multiple significant models were identified for the single-modality MR- and CT-based models, while the multi-modality models were promising, indicating that exploration with a larger cohort is warranted. Another way advances in imaging analysis can be leveraged is in improving the accuracy of contours. Unfortunately, the tumor can appear similar to normal tissue on medical images, creating high uncertainty in the tumor boundary.
As the entire defined target is treated, providing physicians with additional information when delineating the target volume can improve the accuracy of the contour and potentially reduce the amount of normal tissue included in it. Convolutional neural networks were developed and trained to identify the tumor's interface with normal tissue, with one network also trained to identify the tumor location. A mock tool was presented that uses the network output to provide the physician with the uncertainty in the predicted interface type and the probability that the contour delineation uncertainty exceeds 5 mm for the top three predictions.