Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images
This paper investigates, using prior shape models and the concept of ball
scale (b-scale), ways of automatically recognizing objects in 3D images without
performing elaborate searches or optimization. That is, the goal is to place
the model in a single shot close to the right pose (position, orientation, and
scale) in a given image so that the model boundaries fall in the close vicinity
of object boundaries in the image. This is achieved via the following set of
key ideas: (a) A semi-automatic way of constructing a multi-object shape model
assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship
between objects in the training images and their intensity patterns captured in
b-scale images. (c) A hierarchical mechanism of positioning the model, in a
one-shot way, in a given image from a knowledge of the learnt pose relationship
and the b-scale image of the given image to be segmented. The evaluation
results on a set of 20 routine clinical abdominal female and male CT data sets
indicate the following: (1) Incorporating a large number of objects improves
the recognition accuracy dramatically. (2) The recognition algorithm can be
thought of as a hierarchical framework in which quick placement of the model
assembly constitutes coarse recognition and delineation itself constitutes the
finest level of recognition. (3) Scale yields useful information about the relationship
between the model assembly and any given image such that the recognition
results in a placement of the model close to the actual pose without doing any
elaborate searches or optimization. (4) Effective object recognition can make
delineation more accurate.
Comment: This paper was published and presented at SPIE Medical Imaging 201
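As an illustration of the one-shot placement idea above, the following sketch (not the authors' implementation) assumes the pose relationship is summarized by a single learned offset between a reference point derived from the b-scale image and an object's centroid; the function names and the threshold parameter are hypothetical.

```python
import numpy as np

def bscale_reference_point(bscale_img, threshold=10):
    """Hypothetical reference point from a b-scale image: centroid of voxels
    whose ball scale exceeds a threshold (large homogeneous regions)."""
    coords = np.argwhere(bscale_img > threshold)
    return coords.mean(axis=0)

def learn_pose_offset(train_bscale_imgs, train_object_centroids, threshold=10):
    """Learn the mean offset between the b-scale reference point and the
    object's centroid across training images (illustrative encoding of the
    learnt pose relationship)."""
    offsets = [np.asarray(c) - bscale_reference_point(b, threshold)
               for b, c in zip(train_bscale_imgs, train_object_centroids)]
    return np.mean(offsets, axis=0)

def place_model_one_shot(test_bscale_img, model_centroid, learned_offset, threshold=10):
    """One-shot, search-free placement: translation that moves the shape
    model's centroid to the pose predicted from the test b-scale image."""
    predicted = bscale_reference_point(test_bscale_img, threshold) + learned_offset
    return predicted - np.asarray(model_centroid)
```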
Fully automated convolutional neural network-based affine algorithm improves liver registration and lesion co-localization on hepatobiliary phase T1-weighted MR images.
Background: Liver alignment between series/exams is challenged by dynamic morphology or variability in patient positioning or motion. Image registration can improve image interpretation and lesion co-localization. We assessed the performance of a convolutional neural network algorithm to register cross-sectional liver imaging series and compared its performance to manual image registration.
Methods: Three hundred fourteen patients, including internal and external datasets, who underwent gadoxetate disodium-enhanced magnetic resonance imaging for clinical care from 2011 to 2018, were retrospectively selected. Automated registration was applied to all 2,663 within-patient series pairs derived from these datasets. Additionally, 100 within-patient series pairs from the internal dataset were independently manually registered by expert readers. Liver overlap, image correlation, and intra-observation distances for manual versus automated registrations were compared using paired t tests. The influence of patient demographics, imaging characteristics, and liver uptake function was evaluated using univariate and multivariate mixed models.
Results: Compared to manual registration, automated registration produced significantly lower intra-observation distance (p < 0.001) and higher liver overlap and image correlation (p < 0.001). Intra-exam automated registration achieved 0.88 mean liver overlap and 0.44 mean image correlation for the internal dataset and 0.91 and 0.41, respectively, for the external dataset. For inter-exam registration, mean overlap was 0.81 and image correlation 0.41. Older age, female sex, greater inter-series time interval, differing uptake, and greater voxel size differences independently reduced automated registration performance (p ≤ 0.020).
Conclusion: A fully automated algorithm accurately registered the liver within and between examinations, yielding better liver and focal observation co-localization compared to manual registration.
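The abstract does not define its evaluation metrics precisely; a minimal sketch, assuming "liver overlap" means the Dice coefficient of the liver masks and "image correlation" means the Pearson correlation of intensities inside the liver after registration (function names are illustrative):

```python
import numpy as np

def liver_overlap(mask_fixed, mask_moved):
    """Dice overlap between the fixed-series liver mask and the liver mask of
    the registered (moved) series (assumed definition of 'liver overlap')."""
    inter = np.logical_and(mask_fixed, mask_moved).sum()
    return 2.0 * inter / (mask_fixed.sum() + mask_moved.sum())

def image_correlation(img_fixed, img_moved, liver_mask):
    """Pearson correlation of voxel intensities within the liver after
    registration (assumed definition of 'image correlation')."""
    a = img_fixed[liver_mask].astype(float)
    b = img_moved[liver_mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
```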
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes
While deep convolutional neural networks (CNN) have been successfully applied
for 2D image analysis, it is still challenging to apply them to 3D anisotropic
volumes, especially when the within-slice resolution is much higher than the
between-slice resolution and when the amount of 3D volumes is relatively small.
On one hand, directly training a CNN with 3D convolution kernels suffers from a
lack of data and likely ends up with poor generalization, while limited GPU
memory constrains the model size and representational power. On the other hand,
applying a 2D CNN with generalizable features to individual 2D slices ignores
between-slice information, and coupling a 2D network with an LSTM to handle the
between-slice information is not optimal due to the difficulty of LSTM training. To overcome
the above challenges, we propose a 3D Anisotropic Hybrid Network (AH-Net) that
transfers convolutional features learned from 2D images to 3D anisotropic
volumes. Such a transfer inherits the desired strong generalization capability
for within-slice information while naturally exploiting between-slice
information for more effective modelling. The focal loss is further utilized
for more effective end-to-end learning. We experiment with the proposed 3D
AH-Net on two different medical image analysis tasks, namely lesion detection
from a Digital Breast Tomosynthesis volume, and liver and liver tumor
segmentation from a Computed Tomography volume, and obtain state-of-the-art
results.
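The focal loss mentioned above is the standard formulation of Lin et al., FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); a minimal NumPy sketch for a binary voxel-wise task (parameter values are the commonly used defaults, not necessarily those of AH-Net):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss, FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t),
    which down-weights easy voxels so training focuses on hard ones."""
    probs = np.clip(probs, eps, 1.0 - eps)
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```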
Liver segmentation using 3D CT scans.
Master of Science in Computer Science, University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.
An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation
Deep convolutional neural networks (CNNs) have shown excellent performance in
object recognition tasks and dense classification problems such as semantic
segmentation. However, training deep neural networks on large and sparse
datasets is still challenging and can require large amounts of computation and
memory. In this work, we address the task of performing semantic segmentation
on large data sets, such as three-dimensional medical images. We propose an
adaptive sampling scheme that uses a posteriori error maps, generated throughout
training, to focus sampling on difficult regions, resulting in improved
learning. Our contribution is threefold: 1) we give a detailed description of
the proposed sampling algorithm to speed up and improve learning performance on
large images; 2) we propose a deep dual-path CNN that captures information at fine
and coarse scales, resulting in a network with a large field of view and
high-resolution outputs; 3) we show that our method is able to attain new
state-of-the-art results on the VISCERAL Anatomy benchmark.
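A minimal sketch of the error-map-driven sampling idea, assuming the error map is a non-negative per-voxel score updated during training and that patch centers are drawn with probability proportional to it (the function and its temperature parameter are illustrative, not the paper's exact scheme):

```python
import numpy as np

def sample_patch_centers(error_map, n_samples, temperature=1.0, rng=None):
    """Draw patch-center voxels with probability proportional to the current
    a posteriori error map, so sampling concentrates on difficult regions."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.maximum(error_map, 0.0).ravel() ** temperature
    if weights.sum() == 0:
        probs = np.full(weights.size, 1.0 / weights.size)  # fall back to uniform
    else:
        probs = weights / weights.sum()
    flat_idx = rng.choice(weights.size, size=n_samples, p=probs)
    return np.stack(np.unravel_index(flat_idx, error_map.shape), axis=1)
```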
Detection-aided medical image segmentation using deep learning
A fully automatic technique for segmenting the liver and localizing its unhealthy tissues is a convenient tool for diagnosing hepatic diseases and for assessing the response to the corresponding treatments. In this thesis we propose a method to segment the liver and its lesions from Computed Tomography (CT) scans, as well as other anatomical structures and organs of the human body. We use Convolutional Neural Networks (CNNs), which have shown good results in a variety of tasks, including medical imaging. The network that segments the lesions consists of a cascaded architecture, which first focuses on the liver region in order to segment the lesion. Moreover, we train a detector to localize the lesions and keep only those pixels from the output of the segmentation network where a lesion is detected. The segmentation architecture is based on DRIU (Maninis, 2016), a Fully Convolutional Network (FCN) with side outputs that work on feature maps of different resolutions, to finally benefit from the multi-scale information learned by different stages of the network. Our pipeline is 2.5D, as the input of the network is a stack of consecutive slices of the CT scans.
We also study different methods to benefit from the liver segmentation in order to delineate the lesion. The main focus of this work is to use the detector to localize the lesions, as we demonstrate that it helps to remove false positives triggered by the segmentation network. The benefit of using a detector on top of the segmentation is that the detector acquires a more global insight into the healthiness of a liver tissue compared to the segmentation network, whose final output is pixel-wise and is not forced to take a global decision over a whole liver patch. We show experiments with the LiTS dataset for lesion and liver segmentation. In order to prove the generality of the segmentation network, we also segment several anatomical structures from the Visceral dataset.
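A minimal sketch of the detector gating step described above, assuming the detector outputs per-slice bounding boxes over liver regions and that segmentation output outside detected regions is discarded as a false positive (names and thresholds are illustrative):

```python
import numpy as np

def gate_lesion_segmentation(lesion_probs, liver_mask, detections, prob_threshold=0.5):
    """Keep lesion pixels only inside the liver and inside regions the detector
    flagged as unhealthy; everything else is treated as a false positive."""
    lesion = (lesion_probs > prob_threshold) & liver_mask
    keep = np.zeros_like(lesion, dtype=bool)
    for (y0, y1, x0, x1) in detections:  # detector boxes for this slice
        keep[y0:y1, x0:x1] = True
    return lesion & keep
```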
Anatomy-specific classification of medical images using deep convolutional nets
Automated classification of human anatomy is an important prerequisite for
many computer-aided diagnosis systems. The spatial complexity and variability
of anatomy throughout the human body makes classification difficult. "Deep
learning" methods such as convolutional networks (ConvNets) outperform other
state-of-the-art methods in image classification tasks. In this work, we
present a method for organ- or body-part-specific anatomical classification of
medical images acquired using computed tomography (CT) with ConvNets. We train
a ConvNet, using 4,298 separate axial 2D key-images to learn 5 anatomical
classes. Key-images were mined from a hospital PACS archive, using a set of
1,675 patients. We show that a data augmentation approach can help to enrich
the data set and improve classification performance. Using ConvNets and data
augmentation, we achieve an anatomy-specific classification error of 5.9% and an
average area-under-the-curve (AUC) value of 0.998 in testing. We
demonstrate that deep learning can be used to train very reliable and accurate
classifiers that could initialize further computer-aided diagnosis.
Comment: Presented at the 2015 IEEE International Symposium on Biomedical Imaging, April 16-19, 2015, New York Marriott at Brooklyn Bridge, NY, US
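A minimal sketch of label-preserving data augmentation for axial 2D key-images, assuming simple flips, small in-plane translations, and intensity jitter (the exact transformations used in the paper are not specified here, so these are illustrative):

```python
import numpy as np

def augment_key_image(img, rng=None):
    """Simple label-preserving augmentations for an axial 2D CT key-image:
    random horizontal flip, small in-plane translation, and intensity jitter."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.astype(float).copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)
    shift = rng.integers(-10, 11, size=2)            # translation in pixels
    out = np.roll(out, tuple(shift), axis=(0, 1))
    out = out + rng.normal(0.0, 0.02 * (out.std() + 1e-8), size=out.shape)
    return out
```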