A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient at obtaining high performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be used successfully for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the deep learning models that have been successfully applied to different types of cancer. Given the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
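The evaluation criteria listed above are easy to make concrete. As a minimal plain-Python sketch (binary masks flattened to 0/1 lists; this is illustrative code, not taken from the reviewed paper), the Dice coefficient and Jaccard index can be computed as:

```python
def dice_coefficient(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def jaccard_index(pred, truth):
    """Jaccard index: |A∩B| / |A∪B| for binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# Example: pred and truth overlap on one of three foreground pixels.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
print(jaccard_index([1, 1, 0, 0], [1, 0, 1, 0]))     # 0.333...
```

The two metrics are related by Dice = 2J / (1 + J), so they rank segmentations identically; Dice simply weights the intersection more heavily.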
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and the major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
U-Net and its variants for medical image segmentation: theory and applications
U-net is an image segmentation technique developed primarily for medical
image analysis that can precisely segment images using a scarce amount of
training data. These traits provide U-net with a very high utility within the
medical imaging community and have resulted in extensive adoption of U-net as
the primary tool for segmentation tasks in medical imaging. The success of
U-net is evident in its widespread use in all major image modalities from CT
scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a
segmentation tool, there have been instances of the use of U-net in other
applications. As the potential of U-net is still increasing, in this review we
look at the various developments that have been made in the U-net architecture
and provide observations on recent trends. We examine the various innovations
that have been made in deep learning and discuss how these tools facilitate
U-net. Furthermore, we look at image modalities and application areas where
U-net has been applied.
Comment: 42 pages, in IEEE Access
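The contracting/expansive structure this review examines can be sketched numerically. The following plain-Python trace (assumed textbook defaults of 64 base channels, depth 4 and a 128x128 input; not taken from any paper in the review) shows how each encoder level doubles the channels and halves the spatial size, and how the decoder mirrors this so that skip connections meet feature maps of matching resolution:

```python
def unet_trace(size=128, in_ch=1, base=64, depth=4):
    """Trace (name, channels, spatial size) through a U-Net-style network."""
    shapes = []
    s = size
    for d in range(depth):                 # contracting path
        c = base * (2 ** d)                # conv block sets channel count
        shapes.append((f"enc{d}", c, s))
        s //= 2                            # 2x2 max pooling halves resolution
    shapes.append(("bottleneck", base * (2 ** depth), s))
    for d in reversed(range(depth)):       # expansive path
        s *= 2                             # transposed conv upsamples
        c = base * (2 ** d)                # convs after skip concat halve channels
        shapes.append((f"dec{d}", c, s))
    return shapes

for name, c, s in unet_trace():
    print(f"{name:>10}: {c:4d} channels at {s}x{s}")
```

Note that each `dec` level has the same resolution as the matching `enc` level, which is exactly what makes the concatenation-based skip connections possible, and the final output recovers the input resolution.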
Attention Mechanisms in Medical Image Segmentation: A Survey
Medical image segmentation plays an important role in computer-aided
diagnosis. Attention mechanisms that distinguish important parts from
irrelevant parts have been widely used in medical image segmentation tasks.
This paper systematically reviews the basic principles of attention mechanisms
and their applications in medical image segmentation. First, we review the
basic concepts and formulation of attention mechanisms. Second, we survey over
300 articles related to medical image segmentation and divide them into two
groups based on their attention mechanisms: non-Transformer attention and
Transformer attention. In each group, we deeply analyze the attention
mechanisms from three aspects based on the current literature, i.e., the
principle of the mechanism (what to use), implementation methods (how to use),
and application tasks (where to use). We also thoroughly analyze the
advantages and limitations of their applications to different tasks. Finally,
we summarize the current state of research and shortcomings in the field, and
discuss the potential challenges in the future, including task specificity,
robustness, standard evaluation, etc. We hope that this review can showcase the
overall research context of traditional and Transformer attention methods,
provide a clear reference for subsequent research, and inspire more advanced
attention research, not only in medical image segmentation, but also in other
image analysis scenarios.
Comment: Submitted to Medical Image Analysis; survey paper, 34 pages, over 300 references
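For readers new to the topic, the core operation behind the Transformer family surveyed here is scaled dot-product attention, softmax(QK^T / sqrt(d)) V. A minimal plain-Python sketch (lists of lists standing in for tensors; purely illustrative, not code from any surveyed method):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends over all keys,
    and the output is the attention-weighted sum of the values."""
    d = len(K[0])                          # key dimensionality
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]              # similarity of q to every key
        w = softmax(scores)                # attention weights sum to 1
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# The query matches the first key more closely, so the first value dominates.
print(attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]))
```

This is the "what to use" of Transformer attention in its simplest form; the non-Transformer group the survey describes replaces the query-key dot products with learned channel or spatial gating.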
Automatic segmentation of the human thigh muscles in magnetic resonance imaging
Advances in magnetic resonance imaging (MRI) and analysis techniques have improved
diagnosis and patient treatment pathways. Typically, image analysis requires substantial
technical and medical expertise, and MR images can suffer from artefacts, echo and
intensity inhomogeneity due to gradient pulse eddy currents and inherent effects of pulse
radiation on MRI radio frequency (RF) coils, which complicate the analysis. Processing
and analysing serial sections of MRI scans to measure tissue volume is an additional
challenge, as the shapes and the borders between neighbouring tissues change significantly
with anatomical location. Medical imaging solutions are needed to avoid laborious manual
segmentation of specified regions of interest (ROI) and operator errors.
The work set out in this thesis has addressed this challenge with a specific focus on
skeletal muscle segmentation of the thigh. The aim was to develop an MRI segmentation
framework for the quadriceps muscles, femur and bone marrow. Four contributions of
this research include: (1) the development of a semi-automatic segmentation framework
for a single transverse-plane image; (2) automatic segmentation of a single transverse-plane
image; (3) the automatic segmentation of multiple contiguous transverse-plane
images from a full MRI thigh scan; and (4) the use of deep learning for MRI thigh
quadriceps segmentation.
Novel image processing, statistical analysis and machine learning algorithms were developed
for all solutions, and they were compared against the current gold-standard manual
segmentation. Frameworks (1) and (3) require minimal input from the user to delineate
the muscle border. Overall, the frameworks in (1), (2) and (3) offer very good
output performance, with each framework's respective mean segmentation accuracy (by JSI)
and processing time of: (1) 0.95 and 17 sec; (2) 0.85 and 22 sec; and (3) 0.93 and 3 sec.
For the framework in (4), the ImageNet-trained model was customized by replacing the
fully-connected layers in its architecture with convolutional layers (hence the name Fully
Convolutional Network (FCN)), and the pre-trained model was transferred to the ROI
segmentation task. With the implementation of post-processing for image filtering and
morphology to the segmented ROI, we have successfully accomplished a new benchmark
for thigh MRI analysis. The mean accuracy and processing time with this framework
are 0.9502 (by JSI) and 0.117 sec per image, respectively.
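The post-processing step mentioned above (image filtering and morphology applied to the segmented ROI) can be illustrated with a minimal plain-Python sketch of morphological opening, i.e., erosion followed by dilation, which removes small spurious islands from a binary mask. The 3x3 structuring element here is an assumption for illustration, not the thesis's actual choice:

```python
def erode(mask):
    """Binary erosion with a 3x3 structuring element: a pixel survives
    only if its entire 8-neighbourhood is foreground."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """Binary dilation with a 3x3 structuring element: a pixel becomes
    foreground if any in-bounds neighbour is foreground."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def morphological_open(mask):
    """Opening = erosion then dilation: small specks vanish,
    larger coherent regions are restored to their original extent."""
    return dilate(erode(mask))
```

Applied to a mask containing a solid 3x3 block plus one isolated pixel, opening removes the isolated pixel while restoring the block, which is why it is a standard cleanup step after per-pixel segmentation.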