EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers
Ultrasound (US) is the most widely used fetal imaging technique. However, US
images have a limited capture range and suffer from view-dependent artefacts
such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a
high-resolution volume can extend the field of view and remove image artefacts,
which is useful for retrospective analysis, including population-based studies.
However, such volume reconstructions require information about relative
transformations between probe positions from which the individual volumes were
acquired. In prenatal US scans, the fetus can move independently of the
mother, so external trackers such as electromagnetic or optical systems
cannot capture the relative motion between the probe and the moving fetus. We
provide a novel methodology for image-based tracking and volume reconstruction
by combining recent advances in deep learning and simultaneous localisation and
mapping (SLAM). Tracking semantics are established with a Residual 3D U-Net,
whose output is fed to the SLAM algorithm. As a proof of
concept, experiments are conducted on US volumes taken from a whole body fetal
phantom, and from the heads of real fetuses. For the fetal head segmentation,
we also introduce a novel weak annotation approach to minimise the required
manual effort for ground truth annotation. We evaluate our method
qualitatively, and quantitatively with respect to tissue discrimination
accuracy and tracking robustness.
Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis
(PIPPI), 201
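As a toy illustration of image-based tracking between frames (this is not the paper's Residual 3D U-Net + SLAM pipeline, just a minimal stand-in; the function name is hypothetical), the translation between two consecutive binary segmentation masks can be estimated from their centroids:

```python
import numpy as np

def mask_translation(prev_mask, curr_mask):
    """Toy image-based tracking: estimate the translation between two
    binary segmentation masks by comparing their centroids.
    A real pipeline, as in the paper, would feed richer tracking
    semantics from the segmentation network into a SLAM backend."""
    c_prev = np.array(np.nonzero(prev_mask)).mean(axis=1)  # centroid of previous mask
    c_curr = np.array(np.nonzero(curr_mask)).mean(axis=1)  # centroid of current mask
    return c_curr - c_prev  # per-axis displacement in voxels
```

A full SLAM system would additionally estimate rotation and fuse many such pairwise estimates into a globally consistent trajectory.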
V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation
Convolutional Neural Networks (CNNs) have been recently employed to solve
problems from both the computer vision and medical image analysis fields.
Despite their popularity, most approaches are only able to process 2D images
while most medical data used in clinical practice consists of 3D volumes. In
this work we propose an approach to 3D image segmentation based on a
volumetric, fully convolutional, neural network. Our CNN is trained end-to-end
on MRI volumes depicting prostate, and learns to predict segmentation for the
whole volume at once. We introduce a novel objective function, that we optimise
during training, based on Dice coefficient. In this way we can deal with
situations where there is a strong imbalance between the number of foreground
and background voxels. To cope with the limited number of annotated volumes
available for training, we augment the data by applying random non-linear
transformations and histogram matching. We show in our experimental evaluation
that our approach achieves good performance on challenging test data while
requiring only a fraction of the processing time needed by previous
methods.
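A minimal sketch of a Dice-based objective of the kind described, assuming soft predictions in [0, 1]; the function name is hypothetical, and the squared-sum denominator follows the common V-Net formulation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a volume: 1 - Dice overlap.
    Being a ratio, it is insensitive to the strong foreground/background
    voxel imbalance that plagues plain voxelwise losses."""
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred ** 2) + np.sum(target ** 2) + eps)
    return 1.0 - dice  # 0 for a perfect match, approaching 1 for no overlap
```

In a training framework this would be computed on the network's softmax/sigmoid output and differentiated automatically; `eps` guards against division by zero on empty volumes.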
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Real-Time Automatic Fetal Brain Extraction in Fetal MRI by Deep Learning
Brain segmentation is a fundamental first step in neuroimage analysis. In the
case of fetal MRI, it is particularly challenging and important due to the
arbitrary orientation of the fetus, organs that surround the fetal head, and
intermittent fetal motion. Several promising methods have been proposed but are
limited in their performance in challenging cases and in real-time
segmentation. We aimed to develop a fully automatic segmentation method that
independently segments sections of the fetal brain in 2D fetal MRI slices in
real-time. To this end, we developed and evaluated a deep fully convolutional
neural network based on 2D U-net and autocontext, and compared it to two
alternative fast methods: 1) a voxelwise fully convolutional network
and 2) a method based on SIFT features, a random forest and a conditional
random field. We trained the networks with manual brain masks on 250 stacks of
training images, and tested on 17 stacks of normal fetal brain images as well
as 18 stacks of extremely challenging cases based on extreme motion, noise, and
severely abnormal brain shape. Experimental results show that our U-net
approach outperformed the other methods and achieved average Dice metrics of
96.52% and 78.83% in the normal and challenging test sets, respectively. With
unprecedented performance and a test run time of about 1 second, our network
can be used to segment the fetal brain in real-time while fetal MRI slices are
being acquired. This can enable real-time motion tracking, motion detection,
and 3D reconstruction of fetal brain MRI.
Comment: This work has been submitted to ISBI 201
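The autocontext idea mentioned above, feeding a network's own prediction back as an extra input channel, can be sketched as follows; the `model` callable is a generic stand-in, not the paper's trained 2D U-net:

```python
import numpy as np

def autocontext_segment(image, model, n_iters=2):
    """Iterative autocontext inference: each pass sees the image plus
    the previous pass's probability map as a second input channel.
    `model` maps a (2, H, W) array to an (H, W) probability map."""
    context = np.full_like(image, 0.5)          # uninformative prior
    for _ in range(n_iters):
        context = model(np.stack([image, context]))  # refine using own output
    return context
```

In the cascaded setting each iteration typically uses a separately trained network, so later stages learn to correct the errors of earlier ones.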
Attention Gated Networks: Learning to Leverage Salient Regions in Medical Images
We propose a novel attention gate (AG) model for medical image analysis that
automatically learns to focus on target structures of varying shapes and sizes.
Models trained with AGs implicitly learn to suppress irrelevant regions in an
input image while highlighting salient features useful for a specific task.
This enables us to eliminate the necessity of using explicit external
tissue/organ localisation modules when using convolutional neural networks
(CNNs). AGs can be easily integrated into standard CNN models such as VGG or
U-Net architectures with minimal computational overhead while increasing the
model sensitivity and prediction accuracy. The proposed AG models are evaluated
on a variety of tasks, including medical image classification and segmentation.
For classification, we demonstrate the use case of AGs in scan plane detection
for fetal ultrasound screening. We show that the proposed attention mechanism
can provide efficient object localisation while improving the overall
prediction performance by reducing false positives. For segmentation, the
proposed architecture is evaluated on two large 3D CT abdominal datasets with
manual annotations for multiple organs. Experimental results show that AG
models consistently improve the prediction performance of the base
architectures across different datasets and training sizes while preserving
computational efficiency. Moreover, AGs guide the model activations to be
focused around salient regions, which provides better insights into how model
predictions are made. The source code for the proposed AG models is publicly
available.
Comment: Accepted for Medical Image Analysis (Special Issue on Medical Imaging
with Deep Learning). arXiv admin note: substantial text overlap with
arXiv:1804.03999, arXiv:1804.0533
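A minimal sketch of an additive attention gate in this spirit, with channel-wise linear maps standing in for the 1x1 convolutions; all names and shapes here are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (sketch).
    x:   skip-connection features, shape (C_x, H, W)
    g:   gating signal from a coarser layer, shape (C_g, H, W)
    W_x: (C_int, C_x), W_g: (C_int, C_g), psi: (C_int,)
    Channel-wise linear maps play the role of 1x1 convolutions."""
    q = np.maximum(
        np.einsum('ic,chw->ihw', W_x, x) + np.einsum('ic,chw->ihw', W_g, g),
        0.0)                                             # ReLU of summed projections
    alpha = sigmoid(np.einsum('c,chw->hw', psi, q))      # attention map in (0, 1)
    return x * alpha[None]                               # scale skip features
```

Multiplying the skip features by `alpha` suppresses responses in irrelevant regions before they are concatenated into the decoder, which is what removes the need for an explicit external localisation module.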