Combining deep and handcrafted image features for MRI brain scan classification
Progress in the areas of artificial intelligence, machine learning, and medical imaging technologies has allowed the development of the medical image processing field, with some astonishing results in the last two decades. These innovations enabled clinicians to view the human body in high-resolution or three-dimensional cross-sectional slices, which increased diagnostic accuracy and allowed patients to be examined in a non-invasive manner. The fundamental step for MRI brain scan classifiers is the extraction of meaningful features. As a result, many works have proposed different feature extraction methods to classify abnormal growths in brain MRI scans. More recently, the application of deep learning algorithms to medical imaging has led to impressive performance enhancements in classifying and diagnosing complicated pathologies such as brain tumors. In this study, a deep learning feature extraction algorithm is proposed to extract the relevant features from MRI brain scans. In parallel, handcrafted features are extracted using the modified grey level co-occurrence matrix (MGLCM) method. Subsequently, the extracted relevant features are combined with the handcrafted features to improve the classification of MRI brain scans, with SVM used as the classifier. The obtained results show that combining the deep learning approach with the handcrafted features extracted by MGLCM raises the classification accuracy of the SVM classifier to 99.30%.
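The fusion step described above can be sketched in a few lines. The snippet below computes plain GLCM texture descriptors (a stand-in for the paper's MGLCM variant, whose modifications are not reproduced here), concatenates them with placeholder deep features, and trains an SVM; the images, labels, and feature sizes are all synthetic and purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def glcm_features(img, levels=8):
    # Plain GLCM texture descriptors; the paper's MGLCM is a modified
    # variant whose details are not reproduced here.
    q = img.astype(int) * levels // 256                  # quantize grey levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # d=1, 0 deg
    glcm = glcm + glcm.T                                 # make symmetric
    glcm /= glcm.sum()                                   # normalize to probs
    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

# Synthetic stand-ins: 8-bit "scans", hypothetical CNN embeddings, labels.
images = rng.integers(0, 256, size=(40, 64, 64))
deep_feats = rng.normal(size=(40, 128))                  # placeholder deep features
labels = rng.integers(0, 2, size=40)                     # normal vs. abnormal

handcrafted = np.vstack([glcm_features(im) for im in images])
combined = np.hstack([deep_feats, handcrafted])          # feature-level fusion
clf = SVC(kernel="rbf").fit(combined, labels)
print(combined.shape)  # (40, 131): 128 deep + 3 handcrafted features
```

The key design point is that fusion here is simple concatenation before the classifier, so the SVM sees one joint feature vector per scan.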
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Deep learning assisted MRI guided attenuation correction in PET
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Positron emission tomography (PET) is a unique imaging modality that provides physiological
and functional details of tissue at the molecular level. However, the acquired PET images
have some limitations, such as attenuation. PET attenuation correction is an essential step to
obtain the full potential of PET quantification. With the wide use of hybrid PET/MR scanners,
magnetic resonance (MR) images are used to address the problem of PET attenuation correction.
MR image segmentation is a simple and robust approach to create pseudo computed
tomography (CT) images, which are used to generate attenuation coefficient maps to correct the
PET attenuation. Recently, deep learning has been proposed as a promising technique
to efficiently segment MR and other medical images.
In this research work, deep learning guided segmentation approaches have been proposed
to enhance the bone class segmentation of MR brain images in order to generate accurate
pseudo-CT images. The first approach introduces the combination of handcrafted features
with deep learning features to enrich the feature set. Multiresolution analysis techniques,
which generate multiscale and multidirectional coefficients of an image such as contourlet and
shearlet transforms, are applied and combined with deep convolutional neural network (CNN)
features. Different experiments have been conducted to investigate the number of selected
coefficients and the insertion location of the handcrafted features.
The second approach aims at reducing the segmentation algorithm's complexity while
maintaining the segmentation performance. An attention-based convolutional encoder-decoder
network has been proposed to adaptively recalibrate the deep network features. This attention-based
network consists of two different squeeze-and-excitation blocks that excite the features
spatially and channel-wise. The two blocks are combined sequentially to decrease the number
of network parameters and reduce the model complexity. The third approach focuses on the application of transfer learning across different MR sequences such as T1-weighted (T1-w) and T2-weighted (T2-w) images. A
pretrained model with T1-w MR sequences is fine-tuned to perform the segmentation of T2-w
images. Multiple fine-tuning approaches and experiments have been conducted to identify the best
fine-tuning mechanism able to build an efficient segmentation model for both T1-w and
T2-w segmentation. Clinical datasets of fifty patients with different conditions and diagnoses have been
used to carry out an objective evaluation measuring the segmentation performance of the results
obtained by the three proposed methods. The first and second approaches have been validated
against other studies in the literature that applied deep-network-based segmentation techniques to
perform MR-based attenuation correction for PET images. The proposed methods have shown
an enhancement in bone segmentation, with the Dice similarity coefficient (DSC) increasing
from 0.6179 to 0.6567 using an ensemble of CNNs, an improvement of 6.3%.
The proposed excitation-based CNN has reduced the model complexity by decreasing the
number of trainable parameters by more than 46%, so that fewer computing resources are required
to train the model. The proposed hybrid transfer learning method has shown its superiority in
building a multi-sequence (T1-w and T2-w) segmentation approach compared to the other applied
transfer learning methods, especially for the bone class, where the DSC increased from 0.3841
to 0.5393. Moreover, the hybrid transfer learning approach requires less computing time than
transfer learning using open and conservative fine-tuning.
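The evaluation metric quoted above, the Dice similarity coefficient, is straightforward to compute; a minimal sketch on toy masks (not the thesis data) follows.

```python
import numpy as np

def dice(pred, truth):
    # Dice similarity coefficient (DSC) between two binary masks:
    # 2|A intersect B| / (|A| + |B|).
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred = np.zeros((4, 4), int);  pred[1:3, 1:3] = 1    # 4 predicted pixels
truth = np.zeros((4, 4), int); truth[1:3, 1:4] = 1   # 6 ground-truth pixels
print(dice(pred, truth))                             # 2*4 / (4+6) = 0.8

# The relative gain reported above, 0.6179 -> 0.6567, is about 6.3%.
print(round((0.6567 - 0.6179) / 0.6179 * 100, 1))    # 6.3
```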
A customized VGG19 network with concatenation of deep and handcrafted features for brain tumor detection
Brain tumor (BT) is one of the brain abnormalities that arises for various reasons. Unrecognized and untreated BT increases morbidity and mortality rates. Clinical-level assessment of BT is normally performed using bio-imaging techniques, and MRI-assisted brain screening is one of the universal techniques. The proposed work aims to develop a deep learning architecture (DLA) to support the automated detection of BT using two-dimensional MRI slices. This work proposes the following DLAs to detect BT: (i) implementing pre-trained DLAs, such as AlexNet, VGG16, VGG19, ResNet50 and ResNet101, with a deep-features-based SoftMax classifier; (ii) pre-trained DLAs with deep-features-based classification using decision tree (DT), k-nearest neighbor (KNN), SVM-linear and SVM-RBF classifiers; and (iii) a customized VGG19 network with serially fused deep features and handcrafted features to improve the BT detection accuracy. The experimental investigation was executed separately on Flair, T2 and T1C modality MRI slices, and ten-fold cross-validation was implemented to substantiate the performance of the proposed DLAs. The results of this work confirm that VGG19 with SVM-RBF helped attain the best classification accuracy with Flair (>99%), T2 (>98%), T1C (>97%) and clinical images (>98%).
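The serial fusion and ten-fold cross-validation described above can be sketched as follows. The feature matrices here are random stand-ins (the study extracts deep features from a customized VGG19 and handcrafted features from the MRI slices), so only the plumbing is illustrative, not the accuracy.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical stand-ins for per-slice feature vectors.
deep = rng.normal(size=(100, 64))         # e.g. VGG19 deep features
handcrafted = rng.normal(size=(100, 16))  # e.g. texture/shape descriptors
X = np.hstack([deep, handcrafted])        # serial fusion = concatenation
y = rng.integers(0, 2, size=100)          # tumor vs. no tumor

# Ten-fold cross-validation of an SVM-RBF on the fused vectors.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv)
print(len(scores), scores.mean())         # 10 folds, mean accuracy
```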
Automatic Cancer Tissue Detection Using Multispectral Photoacoustic Imaging
Convolutional neural networks (CNNs) have become increasingly popular in recent years because of their ability to tackle complex learning problems such as object detection and object localization. They are used for a variety of tasks, such as detecting and localizing tissue abnormalities, with an accuracy that approaches human predictive performance in medical imaging. This success is primarily due to the ability of CNNs to extract discriminant features at multiple levels of abstraction.
Photoacoustic (PA) imaging is a promising new modality that is gaining significant clinical potential. The availability of a large dataset of three dimensional PA images of ex-vivo human prostate and thyroid specimens has facilitated this current study aimed at evaluating the efficacy of CNN for cancer diagnosis. In PA imaging, a short pulse of near-infrared laser light is sent into the tissue, but the image is created by focusing the ultrasound waves that are photoacoustically generated due to the absorption of light, thereby mapping the optical absorption in the tissue. By choosing multiple wavelengths of laser light, multispectral photoacoustic (MPA) images of the same tissue specimen can be obtained. The objective of this thesis is to implement deep learning architecture for cancer detection using the MPA image dataset.
In this study, we built and examined a fully automated deep learning framework that learns to detect and localize cancer regions in a given specimen entirely from its MPA image dataset. The dataset for this work consisted of samples with sizes ranging from 12 × 45 × 200 pixels to 64 × 64 × 200 pixels at five wavelengths, namely 760 nm, 800 nm, 850 nm, 930 nm, and 970 nm.
The proposed algorithms first extract features using convolutional kernels and then detect cancer tissue using the softmax function in the last layer of the network. The AUC was calculated to evaluate the performance of the cancer tissue detector, with very promising results. To the best of our knowledge, this is one of the first applications of a deep 3D CNN to a large MPA dataset for prostate and thyroid cancer detection.
While previous efforts using the same dataset involved decision making based on mathematically extracted image features, this work demonstrates that this process can be automated without any significant loss in accuracy. Another major contribution of this work is the demonstration that the prostate and thyroid datasets can be combined to produce improved results for cancer diagnosis.
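Two pieces of the pipeline above, the softmax output layer and the AUC evaluation, can be sketched in a few lines; the logits and labels below are invented for illustration, not taken from the MPA dataset.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def auc(scores, labels):
    # AUC as the probability that a random positive outranks a random
    # negative (rank formulation; ties ignored for this illustration).
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Toy network outputs: one logit pair [benign, cancer] per sample.
logits = np.array([[2.0, 1.0], [0.2, 1.5], [3.0, 0.5], [0.1, 2.2]])
p_cancer = softmax(logits)[:, 1]          # class-1 probability per sample
labels = np.array([0, 1, 0, 1])
print(auc(p_cancer, labels))              # 1.0: positives outrank negatives
```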
Lightweight 3D Convolutional Neural Network for Schizophrenia diagnosis using MRI Images and Ensemble Bagging Classifier
Structural alterations have been thoroughly investigated in the brain during
the early onset of schizophrenia (SCZ) with the development of neuroimaging
methods. The objective of the paper is the efficient classification of SCZ into two
classes, cognitive normal (CN) and SCZ, using magnetic resonance
imaging (MRI) images. This paper proposes a lightweight 3D convolutional neural
network (CNN) based framework for SCZ diagnosis using MRI images. In the
proposed model, the lightweight 3D CNN extracts both spatial and spectral
features simultaneously from 3D volumetric MRI scans, and classification is done
using an ensemble bagging classifier. The ensemble bagging classifier helps
prevent overfitting, reduces variance, and improves the model's accuracy.
The proposed algorithm is tested on datasets taken from three benchmark
databases available as open-source: MCICShare, COBRE, and fBRINPhase-II. These
datasets have undergone preprocessing steps to register all the MRI images to
the standard template and reduce artifacts. The model achieves the highest
accuracy of 92.22%, sensitivity of 94.44%, specificity of 90%, precision of 90.43%, recall
of 94.44%, F1-score of 92.39% and G-mean of 92.19% compared to current
state-of-the-art techniques. These performance metrics support the use of this
model to assist clinicians in the automatic, accurate diagnosis of SCZ.
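The bagging stage described above can be sketched with scikit-learn's `BaggingClassifier` over decision trees. The feature vectors below are synthetic stand-ins for what a lightweight 3D CNN might emit per MRI volume; the real model feeds CNN features, not random data.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical per-volume feature vectors, labelled CN (0) vs. SCZ (1).
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.8                      # inject a separable toy signal

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

# Bagging trains many trees on bootstrap resamples and averages their
# votes, which reduces variance and curbs overfitting, as the paper argues.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        random_state=0).fit(Xtr, ytr)
print(clf.score(Xte, yte))            # held-out accuracy on the toy data
```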