Scale-aware Super-resolution Network with Dual Affinity Learning for Lesion Segmentation from Medical Images
Convolutional Neural Networks (CNNs) have shown remarkable progress in
medical image segmentation. However, lesion segmentation remains challenging for
state-of-the-art CNN-based algorithms due to the variation in lesion scales and
shapes. On the one hand, tiny lesions are hard to delineate precisely from
medical images, which are often of low resolution. On the other hand,
segmenting large lesions requires large receptive fields, which
exacerbates the first challenge. In this paper, we present a scale-aware
super-resolution network to adaptively segment lesions of various sizes from
the low-resolution medical images. Our proposed network contains dual branches
to simultaneously conduct lesion mask super-resolution and lesion image
super-resolution. The image super-resolution branch will provide more detailed
features for the segmentation branch, i.e., the mask super-resolution branch,
for fine-grained segmentation. Meanwhile, we introduce scale-aware dilated
convolution blocks into the multi-task decoders to adaptively adjust the
receptive fields of the convolutional kernels according to the lesion sizes. To
guide the segmentation branch to learn from richer high-resolution features, we
propose a feature affinity module and a scale affinity module to enhance the
multi-task learning of the dual branches. On multiple challenging lesion
segmentation datasets, our proposed network achieved consistent improvements
compared to other state-of-the-art methods. Comment: Journal paper under review. 10 pages. The first two authors contributed equally.
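The abstract does not spell out how the scale-aware dilated convolution blocks are built, but the receptive-field arithmetic that motivates dilation is standard: each stride-1 layer widens the field by (kernel - 1) * dilation. A minimal sketch (the function name and layer configuration are illustrative, not from the paper):

```python
def receptive_field(layers):
    """Effective receptive field of a stack of stride-1 convolutions.

    layers: list of (kernel_size, dilation) pairs, one per layer.
    Each layer adds (kernel_size - 1) * dilation to the field.
    """
    rf = 1
    for kernel, dilation in layers:
        rf += (kernel - 1) * dilation
    return rf

# Three 3x3 convs with dilations 1, 2, 4 cover a 15-pixel field,
# while the same stack with dilation 1 everywhere covers only 7.
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # -> 15
```

This is why adjusting dilation rates per lesion size, as the paper proposes, changes the receptive field without adding parameters.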
U-Net and its variants for medical image segmentation: theory and applications
U-net is an image segmentation technique developed primarily for medical
image analysis that can precisely segment images using a scarce amount of
training data. These traits provide U-net with a very high utility within the
medical imaging community and have resulted in extensive adoption of U-net as
the primary tool for segmentation tasks in medical imaging. The success of
U-net is evident in its widespread use in all major image modalities from CT
scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a
segmentation tool, there have been instances of the use of U-net in other
applications. As the potential of U-net is still increasing, in this review we
look at the various developments that have been made in the U-net architecture
and provide observations on recent trends. We examine the various innovations
that have been made in deep learning and discuss how these tools facilitate
U-net. Furthermore, we look at image modalities and application areas where
U-net has been applied. Comment: 42 pages, in IEEE Access.
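As a concrete reminder of the architecture this review surveys: in the vanilla U-Net, feature channels double and spatial resolution halves at each encoder level, and the decoder mirrors this while concatenating skip connections. A small sketch of that shape bookkeeping (defaults follow the original 2015 design; the helper itself is hypothetical):

```python
def unet_feature_shapes(size=256, base=64, depth=4):
    """(channels, spatial size) at each encoder level of a vanilla U-Net:
    channels double and resolution halves at every 2x2 downsampling step."""
    return [(base * 2 ** d, size // 2 ** d) for d in range(depth + 1)]

print(unet_feature_shapes())
# [(64, 256), (128, 128), (256, 64), (512, 32), (1024, 16)]
```

Note that the input size must be divisible by 2**depth for the decoder's upsampled maps to align with the encoder's skip connections.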
Channel prior convolutional attention for medical image segmentation
Medical images often exhibit characteristics such as low contrast and
significant variation in organ shape. The generally insufficient adaptive
capability of existing attention mechanisms limits segmentation performance in
medical imaging. This paper proposes an efficient Channel Prior Convolutional
Attention (CPCA) method that dynamically distributes attention weights in both
the channel and spatial dimensions. A multi-scale depth-wise convolutional
module extracts spatial relationships effectively while preserving the channel
prior, so CPCA can focus on informative channels and important regions. Based
on CPCA, a segmentation network called CPCANet is proposed for medical image
segmentation and validated on two publicly available datasets. In comparisons
with state-of-the-art algorithms, CPCANet achieves improved segmentation
performance while requiring fewer computational resources. Our code is publicly
available at https://github.com/Cuthbert-Huang/CPCANet.
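The CPCA module itself combines a learned channel prior with multi-scale depth-wise convolutions; as a rough, dependency-free illustration of the channel-gating half only, a squeeze-and-excitation-style reweighting can be sketched (this is a generic stand-in, not the paper's module):

```python
import math

def channel_attention(feat):
    """Channel reweighting in the squeeze-and-excitation spirit: global
    average pool per channel, squash with a sigmoid, rescale the channel.

    feat: list of channels, each a 2D list (H x W) of floats.
    Real modules learn this mapping; here the gate is just sigmoid(mean).
    """
    out = []
    for ch in feat:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        weight = 1.0 / (1.0 + math.exp(-mean))  # sigmoid gate in (0, 1)
        out.append([[weight * v for v in row] for row in ch])
    return out
```

A channel whose activations average to zero is damped to half strength, while strongly active channels pass through with higher weight, which is the "focus on informative channels" behaviour described above.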
Deep Semantic Segmentation of Natural and Medical Images: A Review
The semantic image segmentation task consists of assigning each pixel of an
image to a class label. This task
is a part of the concept of scene understanding or better explaining the global
context of an image. In the medical image analysis domain, image segmentation
can be used for image-guided interventions, radiotherapy, or improved
radiological diagnostics. In this review, we categorize the leading deep
learning-based medical and non-medical image segmentation solutions into six
main groups of deep architectural, data synthesis-based, loss function-based,
sequenced models, weakly supervised, and multi-task methods and provide a
comprehensive review of the contributions in each of these groups. Further, for
each group, we analyze each variant of these groups and discuss the limitations
of the current approaches and present potential future research directions for
semantic image segmentation. Comment: 45 pages, 16 figures. Accepted for publication in Springer Artificial Intelligence Review.
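The per-pixel classification framing in the first sentence can be made concrete in a few lines (a generic sketch of the task, not tied to any method in the review):

```python
def segment(scores):
    """Semantic segmentation as per-pixel classification: for every pixel,
    pick the class with the highest score.

    scores: H x W x C nested lists of per-class scores.
    Returns an H x W label map.
    """
    return [[max(range(len(px)), key=px.__getitem__) for px in row]
            for row in scores]

# One row, two pixels, two classes: pixel 0 favours class 1, pixel 1 class 0.
print(segment([[[0.1, 0.9], [0.8, 0.2]]]))  # -> [[1, 0]]
```

Every deep segmentation model in the review's taxonomy ultimately produces such a score volume; the groups differ in how the scores are computed and supervised.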
A Review on Skin Disease Classification and Detection Using Deep Learning Techniques
Skin cancer ranks among the most dangerous cancers, and melanoma is its deadliest form. Melanoma is brought on by genetic faults or mutations in the skin, caused by unrepaired deoxyribonucleic acid (DNA) damage in skin cells. It is essential to detect skin cancer in its earliest phase, since it is far more curable then; skin cancer typically progresses to other regions of the body. Owing to the disease's increased frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because these disorders are so hazardous, scholars have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often utilised to detect skin cancer and to distinguish benign skin lesions from melanoma. This study provides an in-depth investigation of deep learning techniques for the early detection of melanoma, discusses traditional feature-extraction-based machine learning approaches for the segmentation and classification of skin lesions, and conducts comparison-oriented research to demonstrate the significance of various deep learning-based segmentation and classification approaches.
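One of the hand-crafted lesion characteristics mentioned above, asymmetry, can be illustrated with a toy score: compare a binary lesion mask against its mirror image. This is a hypothetical measure for illustration only; real ABCD-rule implementations align the lesion's principal axes first and are considerably more involved.

```python
def asymmetry_score(mask):
    """Fraction of lesion pixels that disagree with the left-right mirror
    of the mask. 0.0 means perfectly symmetric about the vertical axis.

    mask: 2D list of 0/1 values.
    """
    flipped = [row[::-1] for row in mask]
    diff = sum(a != b
               for ra, rb in zip(mask, flipped)
               for a, b in zip(ra, rb))
    area = sum(sum(row) for row in mask)
    return diff / (2 * area) if area else 0.0

print(asymmetry_score([[0, 1, 1, 0]]))  # symmetric  -> 0.0
print(asymmetry_score([[1, 1, 0, 0]]))  # one-sided  -> 1.0
```

Classical pipelines feed such features into a conventional classifier, whereas the deep learning approaches surveyed here learn discriminative features directly from the images.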
Medical Image Segmentation Review: The success of U-Net
Automatic medical image segmentation is a crucial topic in the medical domain
and successively a critical counterpart in the computer-aided diagnosis
paradigm. U-Net is the most widespread image segmentation architecture due to
its flexibility, optimized modular design, and success in all medical image
modalities. Over the years, the U-Net model achieved tremendous attention from
academic and industrial researchers. Several extensions of this network have
been proposed to address the scale and complexity created by medical tasks.
Addressing the deficiencies of the naive U-Net model is the foremost step for
vendors seeking the proper U-Net variant for their business. Having a
compendium of the different variants in one place makes it easier for builders
to identify the relevant research, and it helps ML researchers understand the
challenges posed by the biological tasks to which the model is applied. To
address this, we discuss the practical aspects of the U-Net model and suggest a
taxonomy to categorize each network variant. Moreover, to measure the
performance of these strategies in a clinical application, we propose fair
evaluations of some unique and famous designs on well-known datasets. We
provide a comprehensive implementation library with trained models for future
research. In addition, for ease of future studies, we created an online list of
U-Net papers with their possible official implementation. All information is
gathered in the https://github.com/NITR098/Awesome-U-Net repository. Comment: Submitted to the IEEE Transactions on Pattern Analysis and Machine Intelligence journal.
MFSNet: A Multi Focus Segmentation Network for Skin Lesion Segmentation
Segmentation is essential for medical image analysis to identify and localize
diseases, monitor morphological changes, and extract discriminative features
for further diagnosis. Skin cancer is one of the most common types of cancer
globally, and its early diagnosis is pivotal for the complete elimination of
malignant tumors from the body. This research develops an Artificial
Intelligence (AI) framework for supervised skin lesion segmentation employing
the deep learning approach. The proposed framework, called MFSNet (Multi-Focus
Segmentation Network), uses differently scaled feature maps for computing the
final segmentation mask using raw input RGB images of skin lesions. In doing
so, initially, the images are preprocessed to remove unwanted artifacts and
noises. The MFSNet employs the Res2Net backbone, a recently proposed
convolutional neural network (CNN), for obtaining deep features used in a
Parallel Partial Decoder (PPD) module to get a global map of the segmentation
mask. In different stages of the network, convolution features and multi-scale
maps are used in two boundary attention (BA) modules and two reverse attention
(RA) modules to generate the final segmentation output. MFSNet, when evaluated
on three publicly available datasets: , ISIC 2017, and HAM10000,
outperforms state-of-the-art methods, justifying the reliability of the
framework. The relevant codes for the proposed approach are accessible at
https://github.com/Rohit-Kundu/MFSNet.
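The reverse attention idea behind MFSNet's RA modules is simple to state: suppress features where the coarse prediction is already confident, so refinement concentrates on the uncertain boundary regions the global map missed. A dependency-free sketch (shapes and names are illustrative, not the paper's implementation):

```python
import math

def reverse_attention(features, coarse_logits):
    """Weight each feature value by (1 - sigmoid(coarse logit)), so
    later decoder stages attend to regions the coarse map is unsure of.

    features, coarse_logits: 2D lists of equal shape.
    """
    out = []
    for f_row, p_row in zip(features, coarse_logits):
        out.append([f * (1.0 - 1.0 / (1.0 + math.exp(-p)))
                    for f, p in zip(f_row, p_row)])
    return out
```

A confidently-foreground pixel (large positive logit) is weighted near zero, while an uncertain pixel (logit near zero) keeps half its feature strength, steering gradients toward lesion borders.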
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.
PWD-3DNet: A deep learning-based fully-automated segmentation of multiple structures on temporal bone CT scans
The temporal bone is a part of the lateral skull surface that contains organs responsible for hearing and balance. Mastering surgery of the temporal bone is challenging because of this complex and microscopic three-dimensional anatomy. Segmentation of intra-temporal anatomy based on computed tomography (CT) images is necessary for applications such as surgical training and rehearsal, amongst others. However, temporal bone segmentation is challenging due to the similar intensities and complicated anatomical relationships among critical structures, undetectable small structures on standard clinical CT, and the amount of time required for manual segmentation. This paper describes a single multi-class deep learning-based pipeline as the first fully automated algorithm for segmenting multiple temporal bone structures from CT volumes, including the sigmoid sinus, facial nerve, inner ear, malleus, incus, stapes, internal carotid artery and internal auditory canal. The proposed fully convolutional network, PWD-3DNet, is a patch-wise densely connected (PWD) three-dimensional (3D) network. The accuracy and speed of the proposed algorithm were shown to surpass current manual and semi-automated segmentation techniques. The experimental results yielded significantly high Dice similarity scores and low Hausdorff distances for all temporal bone structures, with an average of 86% and 0.755 millimeter (mm), respectively. We illustrated that overlapping the inference sub-volumes improves the segmentation performance. Moreover, we proposed augmentation layers by using samples with various transformations and image artefacts to increase the robustness of PWD-3DNet against image acquisition protocols, such as smoothing caused by soft tissue scanner settings and larger voxel sizes used for radiation reduction.
The proposed algorithm was tested on low-resolution CTs acquired by another center with different scanner parameters than those used to develop the algorithm, and it shows potential for application beyond the particular training data used in the study.
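The 86% average reported above is a Dice similarity coefficient; for reference, the metric itself is just an overlap ratio between predicted and ground-truth masks (flattened to binary lists here for brevity):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|), in [0, 1], higher is better.

    pred, truth: flat lists of 0/1 values of equal length.
    Both masks empty is treated as a perfect match (score 1.0).
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```

The paper pairs Dice with the Hausdorff distance because Dice measures volumetric overlap while Hausdorff captures worst-case boundary error; a segmentation can score well on one and poorly on the other.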