
    Invariant Scattering Transform for Medical Imaging

    The invariant scattering transform opens a new area of research that merges signal processing with deep learning for computer vision. Deep learning algorithms can now solve a variety of problems in the medical sector. Medical images are used to detect diseases such as brain cancer and tumors, Alzheimer's disease, breast cancer, Parkinson's disease, and many others. During the 2020 pandemic, machine learning and deep learning played a critical role in detecting COVID-19, spanning mutation analysis, prediction, diagnosis, and decision making. Medical images such as X-rays, magnetic resonance imaging (MRI), and CT scans are used for detecting these diseases. The scattering transform offers another deep-learning approach to medical imaging: it builds useful signal representations for image classification, and as a wavelet technique it is well suited to medical image classification problems. This research article discusses the scattering transform as an efficient system for medical image analysis, in which the scattered signal information is implemented in a deep convolutional network. A step-by-step case study is presented in this work.
    Comment: 11 pages, 8 figures and 1 table
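
    To make the hybrid design concrete, below is a minimal sketch of a scattering-plus-classifier model of the kind described above, using the open-source kymatio library. The image size, class count, and random inputs are illustrative assumptions rather than details taken from the paper.

```python
# A fixed (non-learned) 2D scattering transform as the front end of a small
# classifier; a minimal sketch, not the paper's exact architecture.
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

J, H, W = 2, 128, 128                     # 2 wavelet scales, 128x128 inputs (assumed)
scattering = Scattering2D(J=J, shape=(H, W))

# With J=2 and the default L=8 orientations, each image yields
# 1 + L*J + L^2 * J*(J-1)/2 = 81 coefficient maps of size (H/4, W/4).
n_coeffs = 1 + 8 * J + 8 ** 2 * J * (J - 1) // 2

classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(n_coeffs * (H // 2 ** J) * (W // 2 ** J), 2),  # binary: disease vs. normal
)

x = torch.randn(4, H, W)                  # stand-in for a batch of grayscale scans
features = scattering(x)                  # (4, 81, 32, 32); the transform has no trainable weights
logits = classifier(features)             # only the linear layer is learned
```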

    Computer-Aided Cancer Diagnosis and Grading via Sparse Directional Image Representations

    Prostate cancer and breast cancer are the second leading causes of cancer death among males and females, respectively. If not diagnosed, prostate and breast cancers can spread and metastasize to other organs and bones, making treatment impossible. Hence, early diagnosis of cancer is vital for patient survival. Histopathological evaluation of tissue is used for cancer diagnosis: tissue taken during a biopsy is stained with hematoxylin and eosin (H&E), and a pathologist then looks for abnormal changes in the tissue to diagnose and grade the cancer. This process can be time-consuming and subjective. A reliable, repeatable automatic cancer diagnosis method can greatly reduce the time required while producing more consistent results. The scope of this dissertation is developing computer vision and machine learning algorithms for automatic cancer diagnosis and grading with accuracy acceptable to expert pathologists. Automatic image classification relies on feature representation methods. In this dissertation we developed methods utilizing sparse directional multiscale transforms, specifically the shearlet transform, for medical image analysis. We designed these computer-vision-based algorithms and methods to work with H&E and MRI images. Traditional signal processing methods (e.g., the Fourier and wavelet transforms) are not suitable for detecting carcinoma cells because they lack directional sensitivity. The shearlet transform, however, has inherent directional sensitivity and a multiscale framework that enable it to detect different edges in tissue images. We developed techniques for extracting holistic and local texture features from histological and MRI images using histograms and co-occurrence statistics of shearlet coefficients, respectively. We then combined these features with color and morphological features using a multiple kernel learning (MKL) algorithm and employed support vector machines (SVM) with MKL to classify the medical images. We further investigated the impact of deep neural networks in representing medical images for cancer detection. The aforementioned engineered features have a few limitations: they lack generalizability because they are tailored to the specific texture and structure of the tissues; they are time-consuming and expensive, requiring preprocessing; and it is sometimes difficult to extract discriminative features from the images. Feature learning techniques, on the other hand, use multiple processing layers and learn feature representations directly from the data. To address these issues, we developed a deep neural network containing multiple convolution, max-pooling, and fully connected layers, trained on the red, green, and blue (RGB) images along with the magnitude and phase of the shearlet coefficients. We then developed a weighted decision fusion deep neural network that assigns weights to the output probabilities and updates those weights via backpropagation. The final decision is a weighted sum of the decisions from the RGB network and from the magnitude and phase shearlet networks. We used the trained networks for classification of benign and malignant H&E images and for Gleason grading.
    Our experimental results show that our proposed methods, based on both feature engineering and feature learning, outperform the state of the art and are near perfect (100%) for some databases in terms of classification accuracy, sensitivity, specificity, F1 score, and area under the curve (AUC), and are hence promising computer-based methods for cancer diagnosis and grading from images.
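
    The weighted decision fusion step lends itself to a short sketch. Below is a minimal PyTorch illustration in which three toy branches stand in for the RGB, shearlet-magnitude, and shearlet-phase networks, and a softmax-normalized weight vector, updated by backpropagation along with the rest of the model, mixes their output probabilities. The branch architectures and tensor sizes are placeholder assumptions.

```python
# Weighted decision fusion: learnable per-branch weights over the branches'
# class probabilities; a toy sketch, not the dissertation's exact networks.
import torch
import torch.nn as nn

class DecisionFusion(nn.Module):
    def __init__(self, branches, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(branches)
        # One scalar weight per branch; softmax keeps them positive, summing to 1.
        self.w = nn.Parameter(torch.zeros(len(branches)))

    def forward(self, inputs):             # inputs: one tensor per branch
        probs = torch.stack(
            [b(x).softmax(dim=1) for b, x in zip(self.branches, inputs)]
        )                                  # (n_branches, batch, n_classes)
        weights = self.w.softmax(dim=0).view(-1, 1, 1)
        return (weights * probs).sum(dim=0)  # weighted sum of branch decisions

def toy_branch():                          # placeholder for each CNN branch
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))

model = DecisionFusion([toy_branch() for _ in range(3)])
rgb, mag, phase = (torch.randn(4, 3, 32, 32) for _ in range(3))
fused = model([rgb, mag, phase])           # (4, 2) fused class probabilities
```

    Because the fusion weights start equal (softmax of zeros) and are trained jointly with the branches, gradient descent can shift influence toward whichever representation proves most discriminative.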

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
    Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st, 2017

    Mammography image classification using image processing and support vector machine


    A new convolutional neural network based on combination of circlets and wavelets for macular OCT classification

    Artificial intelligence (AI) algorithms, encompassing machine learning and deep learning, can assist ophthalmologists in the early detection of various ocular abnormalities through the analysis of retinal optical coherence tomography (OCT) images. Despite considerable progress in these algorithms, several limitations persist in medical imaging, where a lack of data is a common issue. Accordingly, specific image processing techniques, such as time–frequency transforms, can be employed in conjunction with AI algorithms to enhance diagnostic accuracy. This research investigates the influence of non-data-adaptive time–frequency transforms, specifically X-lets, on the classification of OCT B-scans. For this purpose, each B-scan was transformed using each considered X-let individually, and all the sub-bands were utilized as the input for a designed 2D convolutional neural network (CNN) to extract optimal features, which were subsequently fed to the classifiers. Evaluating per-class accuracy shows that the 2D Discrete Wavelet Transform (2D-DWT) yields superior outcomes for normal cases, whereas the circlet transform outperforms the other X-lets for abnormal cases characterized by circles in their retinal structure (due to the accumulation of fluid). As a result, we propose a novel transform named CircWave, formed by concatenating all sub-bands from the 2D-DWT and the circlet transform, with the objective of enhancing the per-class accuracy of normal and abnormal cases simultaneously. Our findings show that classification results based on the CircWave transform outperform those derived from the original images or any individual transform. Furthermore, Grad-CAM class activation visualization for B-scans reconstructed from CircWave sub-bands highlights a greater emphasis on circular formations in abnormal cases and on straight lines in normal cases, in contrast to the focus on irrelevant regions in the original B-scans. To assess the generalizability of our method, we applied it to another dataset obtained from a different imaging system and achieved promising accuracies of 94.5% and 90% for the first and second datasets, respectively, comparable with the results of previous studies. The proposed CNN based on CircWave sub-bands (i.e., CircWaveNet) not only produces superior outcomes but also offers more interpretable results, with a heightened focus on the features crucial to ophthalmologists.
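
    The sub-band-to-CNN pipeline can be sketched with the 2D-DWT half of the proposed CircWave input, since PyWavelets provides dwt2, while the circlet transform has no standard Python implementation; its sub-bands would simply be concatenated as additional channels in the same way. The network, image size, and class count below are illustrative assumptions.

```python
# Level-1 2D-DWT sub-bands of a B-scan stacked as CNN input channels;
# a minimal sketch of the design, not the paper's CircWaveNet itself.
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_subbands(bscan: np.ndarray) -> torch.Tensor:
    """Stack the four level-1 2D-DWT sub-bands (LL, LH, HL, HH) as channels."""
    cA, (cH, cV, cD) = pywt.dwt2(bscan, "haar")
    return torch.from_numpy(np.stack([cA, cH, cV, cD])).float()

cnn = nn.Sequential(                       # toy stand-in for the designed 2D CNN
    nn.Conv2d(4, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 4),                      # e.g. 4 OCT classes (assumed)
)

bscan = np.random.rand(256, 256)           # stand-in for a grayscale OCT B-scan
x = dwt_subbands(bscan).unsqueeze(0)       # (1, 4, 128, 128)
logits = cnn(x)
```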

    Medical Diagnosis with Multimodal Image Fusion Techniques

    Image fusion is an effective approach for drawing out all the significant information from source images, supporting experts in evaluation and quick decision making. Multimodal medical image fusion produces a composite fused image from various sources to improve quality and extract complementary information. It is extremely challenging to gather every piece of information needed using just one imaging method; therefore, images obtained from different modalities are fused, and additional clinical information can be gleaned through the fusion of several types of medical image pairings. This study's main aim is to present a thorough review of medical image fusion techniques, covering the steps in the fusion process, the levels of fusion, various imaging modalities with their pros and cons, and the major scientific difficulties encountered in the area of medical image fusion. This paper also summarizes the quality assessment metrics for fusion. The approaches used by the image fusion algorithms presently available in the literature are classified into four broad categories: (i) spatial fusion methods, (ii) multiscale decomposition based methods, (iii) neural network based methods, and (iv) fuzzy logic based methods. The benefits and pitfalls of the existing literature are explored and future insights are suggested. Moreover, this study is anticipated to create a solid platform for the development of better fusion techniques in medical applications.
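
    Of the four categories, the multiscale decomposition based methods admit a compact illustration. The sketch below fuses two registered source images by decomposing both with a 2D wavelet transform, averaging the approximation coefficients, keeping the detail coefficient of larger magnitude at each position (a common max-absolute rule), and reconstructing; the wavelet choice, level, and random inputs are assumptions for illustration.

```python
# Wavelet-domain fusion of two registered modalities; a minimal sketch of one
# multiscale decomposition based method, not a survey-endorsed algorithm.
import numpy as np
import pywt

def fuse_multiscale(img_a: np.ndarray, img_b: np.ndarray, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]                      # average the approximations
    for da, db in zip(ca[1:], cb[1:]):                 # per-level (H, V, D) details
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)     # keep the stronger edge response
            for a, b in zip(da, db)
        ))
    return pywt.waverec2(fused, wavelet)

mri = np.random.rand(256, 256)                         # stand-ins for registered
ct = np.random.rand(256, 256)                          # MRI and CT slices
fused = fuse_multiscale(mri, ct)
```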

    A customized VGG19 network with concatenation of deep and handcrafted features for brain tumor detection

    Brain tumor (BT) is a brain abnormality that arises due to various reasons. Unrecognized and untreated BT increases morbidity and mortality rates. Clinical-level assessment of BT is normally performed using bio-imaging techniques, and MRI-assisted brain screening is one of the universal techniques. The proposed work aims to develop a deep learning architecture (DLA) to support the automated detection of BT using two-dimensional MRI slices. This work proposes the following DLAs to detect BT: (i) implementing pre-trained DLAs, such as AlexNet, VGG16, VGG19, ResNet50 and ResNet101, with a deep-features-based SoftMax classifier; (ii) pre-trained DLAs with deep-features-based classification using decision tree (DT), k-nearest neighbor (KNN), SVM-linear and SVM-RBF classifiers; and (iii) a customized VGG19 network with serially fused deep features and handcrafted features to improve BT detection accuracy. The experimental investigation was executed separately using Flair, T2 and T1C modality MRI slices, and ten-fold cross validation was implemented to substantiate the performance of the proposed DLA. The results of this work confirm that the VGG19 with SVM-RBF attained the best classification accuracy, with Flair (>99%), T2 (>98%), T1C (>97%) and clinical images (>98%).
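
    The serial fusion of deep and handcrafted features that performed best can be sketched as follows: deep features from a pre-trained VGG19 are concatenated with a handcrafted descriptor, and the fused vector is classified by an SVM with an RBF kernel. The handcrafted descriptor here (a simple intensity histogram) and the toy data are placeholder assumptions, not the paper's feature set.

```python
# Serially fused VGG19 deep features + handcrafted features -> SVM-RBF;
# a minimal sketch under assumed inputs, not the customized network itself.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.classifier = nn.Sequential(*list(vgg.classifier)[:-1])  # drop final FC -> 4096-d output
vgg.eval()

def fused_features(slice_batch: torch.Tensor) -> np.ndarray:
    """Serially fuse VGG19 deep features with a handcrafted histogram descriptor."""
    with torch.no_grad():
        deep = vgg(slice_batch).numpy()                      # (B, 4096)
    hand = np.stack([
        np.histogram(img.numpy(), bins=64, range=(0, 1))[0]  # placeholder descriptor
        for img in slice_batch
    ])                                                       # (B, 64)
    return np.concatenate([deep, hand], axis=1)              # (B, 4160) fused vectors

slices = torch.rand(8, 3, 224, 224)        # stand-in for preprocessed MRI slices
labels = np.random.randint(0, 2, 8)        # tumor / no-tumor (toy labels)
clf = SVC(kernel="rbf").fit(fused_features(slices), labels)
```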