A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Multi-branch Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation
In this paper, we present an automated approach for segmenting multiple
sclerosis (MS) lesions from multi-modal brain magnetic resonance images. Our
method is based on a deep end-to-end 2D convolutional neural network (CNN) for
slice-based segmentation of 3D volumetric data. The proposed CNN includes a
multi-branch downsampling path, which enables the network to encode information
from multiple modalities separately. Multi-scale feature fusion blocks are
proposed to combine feature maps from different modalities at different stages
of the network. Then, multi-scale feature upsampling blocks are introduced to
upsize combined feature maps to leverage information from lesion shape and
location. We trained and tested the proposed model using orthogonal plane
orientations of each 3D modality to exploit the contextual information in all
directions. The proposed pipeline is evaluated on two different datasets: a
private dataset including 37 MS patients and a publicly available dataset known
as the ISBI 2015 longitudinal MS lesion segmentation challenge dataset,
consisting of 14 MS patients. Considering the ISBI challenge, at the time of
submission, our method was amongst the top performing solutions. On the private
dataset, using the same array of performance metrics as in the ISBI challenge,
the proposed approach shows substantial improvements in MS lesion segmentation
compared with other publicly available tools.
Comment: This paper has been accepted for publication in NeuroImage
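The multi-branch encoding idea above can be illustrated with a minimal numpy sketch. This is not the authors' network: average pooling stands in for the learned strided convolutions of the downsampling path, and channel stacking stands in for the multi-scale feature fusion blocks; all function names here are illustrative.

```python
import numpy as np

def downsample(x, factor=2):
    """Average-pool a 2D feature map by `factor` (stand-in for a strided conv)."""
    h = x.shape[0] // factor * factor
    w = x.shape[1] // factor * factor
    x = x[:h, :w]
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multi_branch_encode(modalities, depth=2):
    """Encode each modality in its own branch; fuse per scale by stacking.

    Returns one fused feature map per scale, coarser at each step.
    """
    branches = list(modalities)
    fused = []
    for _ in range(depth):
        branches = [downsample(b) for b in branches]
        # fusion block: combine modality features along a new channel axis
        fused.append(np.stack(branches, axis=0))
    return fused

# two toy "modalities" (e.g. FLAIR and T2 slices), 32x32
rng = np.random.default_rng(0)
flair, t2 = rng.random((32, 32)), rng.random((32, 32))
scales = multi_branch_encode([flair, t2], depth=2)
print([f.shape for f in scales])  # [(2, 16, 16), (2, 8, 8)]
```

The point of keeping branches separate until fusion is that each modality's encoder can specialize before the combined maps are upsampled for segmentation.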
Deep Unfolding Convolutional Dictionary Model for Multi-Contrast MRI Super-resolution and Reconstruction
Magnetic resonance imaging (MRI) tasks often involve multiple contrasts.
Recently, numerous deep learning-based multi-contrast MRI super-resolution (SR)
and reconstruction methods have been proposed to explore the complementary
information from the multi-contrast images. However, these methods either
construct parameter-sharing networks or manually design fusion rules, failing
to accurately model the correlations between multi-contrast images and lacking
interpretability. In this paper, we propose a multi-contrast
convolutional dictionary (MC-CDic) model guided by an optimization
algorithm with a well-designed data fidelity term. Specifically, we build an
observation model for the multi-contrast MR images that explicitly decomposes
them into common features and unique features. In this way, only
the useful information in the reference image can be transferred to the target
image, while the inconsistent information will be ignored. We employ the
proximal gradient algorithm to optimize the model and unroll the iterative
steps into a deep CDic model. In particular, the proximal operators are
replaced by learnable ResNets. In addition, multi-scale dictionaries are introduced to
further improve the model performance. We test our MC-CDic model on
multi-contrast MRI SR and reconstruction tasks. Experimental results
demonstrate the superior performance of the proposed MC-CDic model against
existing SOTA methods. Code is available at
https://github.com/lpcccc-cv/MC-CDic.
Comment: Accepted to IJCAI202
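The unrolling described above can be sketched in a few lines. This is a simplification under stated assumptions: a plain matrix dictionary stands in for the convolutional dictionary, and the l1 soft-thresholding proximal operator stands in for the learnable ResNet used in the deep model; each loop iteration corresponds to one stage of the unrolled network.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (replaced by a learned ResNet in the paper)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, D, n_iters=200, lam=0.05):
    """Proximal gradient (ISTA) for min_x 0.5*||y - Dx||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 50))          # toy overcomplete dictionary
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]     # sparse ground-truth code
y = D @ x_true                             # observed measurement
x_hat = unrolled_ista(y, D)
print(np.linalg.norm(D @ x_hat - y) < 0.5 * np.linalg.norm(y))
```

In the learned version, the step size, threshold, and proximal mapping are trained per stage rather than fixed, which is what gives the unrolled model its interpretability relative to a generic CNN.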
Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction
Multi-modal magnetic resonance imaging (MRI) plays a crucial role in
comprehensive disease diagnosis in clinical medicine. However, acquiring
certain modalities, such as T2-weighted images (T2WIs), is time-consuming and
prone to motion artifacts, which negatively impacts subsequent multi-modal
image analysis. To address this issue, we propose an end-to-end deep learning
framework that uses T1-weighted images (T1WIs) as auxiliary modalities to
expedite the acquisition of T2WIs. While image pre-processing can mitigate
misalignment, improper parameter selection leads to adverse pre-processing
effects, requiring iterative experimentation and adjustment. To overcome this
limitation, we employ Optimal Transport (OT) to synthesize T2WIs by
aligning T1WIs and performing cross-modal synthesis, effectively mitigating
spatial misalignment effects. Furthermore, we adopt an alternating iteration
framework between the reconstruction task and the cross-modal synthesis task to
optimize the final results. Then, we prove that the reconstructed T2WIs and the
synthetic T2WIs move closer on the T2 image manifold as the iterations
proceed, and further show that the improved reconstruction result
enhances the synthesis process, whereas the enhanced synthesis result improves
the reconstruction process. Finally, experimental results from FastMRI and
internal datasets confirm the effectiveness of our method, demonstrating
significant improvements in image reconstruction quality even at low sampling
rates.
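The paper's OT alignment involves learned components, but the core coupling computation can be sketched with the standard entropic (Sinkhorn) algorithm. This is a sketch, not the authors' method: 1-D uniform histograms stand in for image intensity distributions, and the regularization strength `eps` is an illustrative choice.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iters=200):
    """Entropic optimal transport between histograms a and b with cost C.

    Returns a coupling P whose row sums match a and column sums match b.
    """
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                  # scale columns toward marginal b
        u = a / (K @ v)                    # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

# toy 1-D "intensity profiles" standing in for T1/T2 slice statistics
n = 16
a = np.ones(n) / n
b = np.ones(n) / n
xs = np.linspace(0.0, 1.0, n)
C = (xs[:, None] - xs[None, :]) ** 2       # squared-distance ground cost
P = sinkhorn(a, b, C)
print(P.sum(axis=1))                       # rows match the source marginal a
```

The resulting coupling P says how much mass moves between locations, which is the quantity an OT-based alignment uses to warp one modality toward the other before cross-modal synthesis.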