4,136 research outputs found

    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey of recent neural network developments in computer-aided diagnosis, in medical image segmentation and edge detection for visual content analysis, and in medical image registration, including its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail through examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to resolve a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be extended to resolve further problems relevant to medical imaging. The concluding section compares many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
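As a minimal, hedged illustration of point (i), not drawn from the survey itself: the core operation of a convolutional layer with a fixed structure is a sliding-window correlation over the image, and with a hand-designed Sobel kernel that single operation already performs edge detection. All names and values below are our own.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' sliding-window cross-correlation, the operation that
    CNN 'convolution' layers actually compute (kernel is not flipped)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A fixed Sobel kernel: a hand-designed analogue of a learned edge filter.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image with a sharp vertical edge between columns 1 and 2.
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
edges = conv2d_valid(image, sobel_x)
print(edges)  # strong response wherever the window straddles the edge
```

A learned convolutional layer differs only in that the kernel weights are fitted from data rather than hand-designed.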

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of cancer diagnosis, covering the steps of diagnosis and the classification methods typically used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point checklist, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered efficient enough to obtain the best performance. Moreover, for the benefit of all readers, the basic evaluation criteria are also discussed. These include the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. This study outlines the basic framework of how such machine learning operates on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code so that interested readers can experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the deep learning models that have been applied successfully to different types of cancer. Given the length of the manuscript, we restrict the discussion to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch view of the state-of-the-art achievements.
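The paper's own Python code is not reproduced here; as a small stand-in, two of the evaluation criteria it lists, the Dice coefficient and the Jaccard index, can be sketched with NumPy as follows. The function names and the toy masks are our own, not the paper's.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 masks: the prediction hits 2 of the 3 ground-truth pixels.
pred   = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 2*2/(2+3) = 0.8
print(round(jaccard_index(pred, target), 3))     # 2/3 ≈ 0.667
```

The small `eps` term keeps both metrics defined when prediction and ground truth are empty, a common convention in segmentation evaluation.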

    Medical Image Segmentation by Deep Convolutional Neural Networks

    Medical image segmentation is a fundamental and critical step in medical image analysis. Due to the complexity and diversity of medical images, their segmentation remains a challenging problem. Recently, deep learning techniques, especially Convolutional Neural Networks (CNNs), have received extensive research attention and achieved great success in many vision tasks. In particular, with the advent of Fully Convolutional Networks (FCNs), automatic medical image segmentation based on FCNs has become a promising research field. This thesis focuses on two medical image segmentation tasks: lung segmentation in chest X-ray images and nuclei segmentation in histopathological images. For the lung segmentation task, we investigate several FCNs that have been successful in semantic and medical image segmentation, and we evaluate their performance on three publicly available chest X-ray image datasets. For the nuclei segmentation task, whose challenges are the difficulty of segmenting small, overlapping, and touching nuclei and the limited ability to generalize to nuclei in different organs and tissue types, we propose a novel nuclei segmentation approach based on a two-stage learning framework and Deep Layer Aggregation (DLA). We convert the original binary segmentation task into a two-step task by adding nuclei-boundary prediction (three classes) as an intermediate step. To solve this two-step task, we design a two-stage learning framework by stacking two U-Nets: the first stage estimates nuclei and their coarse boundaries, while the second stage outputs the final fine-grained segmentation map. Furthermore, we extend the U-Nets with DLA by iteratively merging features across different levels. We evaluate the proposed method on two diverse public nuclei datasets. The experimental results show that our approach outperforms many standard segmentation architectures and recently proposed nuclei segmentation methods, and generalizes easily across different cell types in various organs.
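The intermediate step described above, turning a binary nuclei mask into a three-class map (background, interior, boundary), can be sketched as follows. This is an illustrative reconstruction, not the thesis's code; in particular, the 4-neighbour boundary rule is our own simplification of whatever boundary extraction the authors use.

```python
import numpy as np

def to_three_class(mask):
    """Map a binary nuclei mask to three classes:
    0 = background, 1 = nucleus interior, 2 = nucleus boundary.
    A foreground pixel is 'boundary' if any 4-neighbour is background
    (pixels beyond the image border count as background)."""
    mask = np.asarray(mask, dtype=bool)
    padded = np.pad(mask, 1, constant_values=False)
    # 4-neighbours obtained as shifted views of the padded mask.
    up    = padded[:-2, 1:-1]
    down  = padded[2:, 1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    interior = mask & up & down & left & right
    out = np.zeros(mask.shape, dtype=np.uint8)
    out[mask] = 2        # all foreground starts as boundary
    out[interior] = 1    # fully surrounded pixels become interior
    return out

# Toy 5x5 mask with a single 3x3 nucleus.
mask = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 0, 0, 0, 0]])
print(to_three_class(mask))  # ring of 2s around a single interior 1
```

The value of this reformulation is that the network is explicitly supervised on the boundary class, which is exactly where touching nuclei are hardest to separate.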

    Hierarchical Cluster Analysis to Aid Diagnostic Image Data Visualization of MS and Other Medical Imaging Modalities

    Perceiving abnormal regions in images from different medical modalities plays a crucial role in diagnosis and subsequent treatment planning. Visually perceiving the extent and boundaries of abnormalities in medical images requires substantial experience. Consequently, manually drawn regions of interest (ROIs) outlining the boundaries of abnormalities suffer from the limitations of human perception, leading to inter-observer variability. As an alternative to hand-drawn ROIs, we propose the use of a computer-based segmentation algorithm to segment digital medical image data. Hierarchical Clustering-based Segmentation (HCS) is a generic unsupervised segmentation process that can be used to segment dissimilar regions in digital images. The HCS process generates a hierarchy of segmented images by partitioning an image into its constituent regions at hierarchical levels of allowable dissimilarity between its different regions. The hierarchy represents the continuous merging of similar, spatially adjacent, and/or disjoint regions as the allowable threshold of dissimilarity between regions, for merging, is gradually increased. This chapter discusses in detail, first, the implementation of the HCS process; second, how the HCS process is used to present multi-modal imaging data (MALDI and MRI) of a biological sample; third, how the process is used as a perception aid for X-ray mammogram readers; and finally, how it is used as an interpretation aid for the interpretation of Multi-parametric Magnetic Resonance Imaging (mpMRI) of the prostate.
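The merging idea behind HCS can be sketched in one dimension: start with every sample as its own region, then at each allowable dissimilarity threshold repeatedly merge the most similar pair of spatially adjacent regions. This toy sketch is our own simplification, not the chapter's implementation; real HCS works on 2-D regions and richer dissimilarity measures.

```python
import numpy as np

def hcs_1d(signal, thresholds):
    """Toy 1-D sketch of Hierarchical Clustering-based Segmentation.
    For each threshold (in increasing order), adjacent regions whose
    mean values differ by at most the threshold are merged, most
    similar pair first.  Returns one region list per threshold level."""
    regions = [[i] for i in range(len(signal))]
    hierarchy = []
    for t in thresholds:
        merged = True
        while merged:
            merged = False
            best, best_d = None, t
            # Find the most similar adjacent pair under the threshold.
            for i in range(len(regions) - 1):
                a = np.mean([signal[j] for j in regions[i]])
                b = np.mean([signal[j] for j in regions[i + 1]])
                d = abs(a - b)
                if d <= best_d:
                    best, best_d = i, d
            if best is not None:
                regions[best] = regions[best] + regions[best + 1]
                del regions[best + 1]
                merged = True
        hierarchy.append([r[:] for r in regions])
    return hierarchy

signal = [1, 1, 2, 8, 9, 9]
levels = hcs_1d(signal, thresholds=[1.0, 8.0])
print(levels[0])  # [[0, 1, 2], [3, 4, 5]] — two regions at low dissimilarity
print(levels[1])  # [[0, 1, 2, 3, 4, 5]] — one region once the threshold is high
```

The sequence of region lists is exactly the "hierarchy of segmented images" the chapter describes: each increase in the allowable dissimilarity coarsens the previous partition rather than recomputing it from scratch.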

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has grown significantly in recent years due to accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningfully separating 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits information obtained by detecting the edges inherent in the data: using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are then included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, together with the initial partition map, drive a multivariate refinement procedure that fuses groups with similar characteristics into the final output segmentation. Experimental comparisons with published, state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, to improve computational efficiency, we extend this methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
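The initial edge-based seeding step might be sketched as below. This is an illustrative simplification using a scalar gradient magnitude on a single-channel image, not the dissertation's vector gradient detector for multi-band data; the function name and threshold are our own.

```python
import numpy as np

def gradient_seed_map(image, grad_thresh):
    """Sketch of the initial partitioning step: pixels with gradient
    magnitude at or below the threshold are marked as edge-free seed
    pixels; high-gradient pixels are deferred for the later dynamic
    generation of segments."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude <= grad_thresh

# Toy single-channel image with a vertical intensity edge.
image = np.array([[0, 0, 0, 9, 9],
                  [0, 0, 0, 9, 9],
                  [0, 0, 0, 9, 9]])
seeds = gradient_seed_map(image, grad_thresh=0.5)
print(seeds.astype(int))  # 0s mark the high-gradient band along the edge
```

For multi-band imagery, a vector gradient operator would replace `np.hypot(gx, gy)` with a dissimilarity measure computed jointly over all channels, but the seed/defer split would work the same way.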