24,013 research outputs found
Segmentation in 2D and 3D image using Tissue-Like P System
Membrane Computing is a biologically inspired computational model. Its devices are called P systems, and they perform computations by applying a finite set of rules in a synchronous, maximally parallel way. In this paper, we open a new research line: P systems are used in Computational Topology within the context of the Digital Image. We choose a variant of P systems, called tissue-like P systems, to obtain the segmentation of 2D and 3D images in a maximally parallel manner in a constant number of steps. Finally, we use software called Tissue Simulator to check these systems with some examples.
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 2017.
3D Convolutional Neural Networks for Brain Tumor Segmentation: A Comparison of Multi-resolution Architectures
This paper analyzes the use of 3D Convolutional Neural Networks for brain
tumor segmentation in MR images. We address the problem using three different
architectures that combine fine and coarse features to obtain the final
segmentation. We compare three different networks that use multi-resolution
features in terms of both design and performance, and we show that they
improve on their single-resolution counterparts.
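The fine/coarse feature combination these networks rely on can be illustrated outside any deep-learning framework: average-pool a volume to a coarse scale, upsample it back, and stack it with the fine-scale input as extra channels. A numpy-only sketch, not the paper's architectures; the function name is illustrative and even-sized volumes are assumed:

```python
import numpy as np

def fuse_multires(vol):
    """Pair each voxel's fine-scale intensity with a coarse-scale one:
    2x average-pool along every axis, nearest-neighbour upsample back,
    and stack the two as channels."""
    fine = np.asarray(vol, dtype=np.float32)
    d, h, w = fine.shape  # assumed even
    coarse = fine.reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))
    up = coarse.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)
    return np.stack([fine, up], axis=-1)  # shape (d, h, w, 2)
```

A downstream classifier then sees, at every voxel, both local detail and spatial context, which is the intuition behind the multi-resolution designs compared in the paper.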
EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers
Ultrasound (US) is the most widely used fetal imaging technique. However, US
images have limited capture range and suffer from view-dependent artefacts
such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a
high-resolution volume can extend the field of view and remove image artefacts,
which is useful for retrospective analysis including population based studies.
However, such volume reconstructions require information about relative
transformations between probe positions from which the individual volumes were
acquired. In prenatal US scans, the fetus can move independently from the
mother, making external trackers, such as electromagnetic or optical tracking
systems, unable to follow the motion between the probe and the moving fetus. We
provide a novel methodology for image-based tracking and volume reconstruction
by combining recent advances in deep learning and simultaneous localisation and
mapping (SLAM). Tracking semantics are established through the use of a
Residual 3D U-Net and the output is fed to the SLAM algorithm. As a proof of
concept, experiments are conducted on US volumes taken from a whole body fetal
phantom, and from the heads of real fetuses. For the fetal head segmentation,
we also introduce a novel weak annotation approach to minimise the required
manual effort for ground truth annotation. We evaluate our method
qualitatively, and quantitatively with respect to tissue discrimination
accuracy and tracking robustness.
Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis
(PIPPI), 201
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms is developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
To facilitate study of the neonatal cortex a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
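The EM machinery underlying such tissue classification can be sketched in a few lines for a two-class 1D intensity mixture. This is standard EM only; the dissertation's explicit correction for mislabelled partial volume voxels is not reproduced, and all names and the initialisation are illustrative:

```python
import numpy as np

def em_two_class(x, iters=50):
    """Minimal EM for a two-component 1D Gaussian mixture, the standard
    machinery behind EM-based tissue classification."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])            # crude initialisation
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])                    # mixing weights
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel
        d = (x[:, None] - mu) / sigma
        p = pi * np.exp(-0.5 * d ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return mu, sigma, pi
```

Partial-volume-aware variants extend exactly this loop with extra mixture components (or an explicit relabelling step) for voxels containing tissue mixtures.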
To facilitate study of cortical development, a cortical surface registration algorithm
for aligning the cortical surface is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Three-Dimensional GPU-Accelerated Active Contours for Automated Localization of Cells in Large Images
Cell segmentation in microscopy is a challenging problem, since cells are
often asymmetric and densely packed. This becomes particularly challenging for
extremely large images, since manual intervention and processing time can make
segmentation intractable. In this paper, we present an efficient and highly
parallel formulation for symmetric three-dimensional (3D) contour evolution
that extends previous work on fast two-dimensional active contours. We provide
a formulation for optimization on 3D images, as well as a strategy for
accelerating computation on consumer graphics hardware. The proposed software
takes advantage of Monte-Carlo sampling schemes in order to speed up
convergence and reduce thread divergence. Experimental results show that this
method provides superior performance for large 2D and 3D cell segmentation
tasks when compared to existing methods on large 3D brain images.
Template-Cut: A Pattern-Based Segmentation Paradigm
We present a scale-invariant, template-based segmentation paradigm that sets
up a graph and performs a graph cut to separate an object from the background.
Typically graph-based schemes distribute the nodes of the graph uniformly and
equidistantly on the image, and use a regularizer to bias the cut towards a
particular shape. The strategy of uniform and equidistant nodes does not allow
the cut to prefer more complex structures, especially when areas of the object
are indistinguishable from the background. We propose a solution by introducing
the concept of a "template shape" of the target object in which the nodes are
sampled non-uniformly and non-equidistantly on the image. We evaluate it on
2D images where the object's textures and backgrounds are similar, and large
areas of the object have the same gray level appearance as the background. We
also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning
purposes.
Comment: 8 pages, 6 figures, 3 tables, 6 equations, 51 references.
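The graph-cut machinery this paradigm builds on can be illustrated on a 1D signal: unary capacities tie each pixel to object/background intensity models, pairwise capacities penalise label changes between neighbours, and an s-t min cut yields the labelling. A small Edmonds-Karp max-flow sketch with uniform, equidistant nodes (i.e. the baseline scheme the paper improves on, not its template-shape sampling; all names are illustrative):

```python
from collections import deque

def min_cut_segment(x, mu_bg, mu_obj, lam):
    """Binary 1D segmentation by s-t min cut (Edmonds-Karp max-flow).
    Unary costs are squared distances to the two intensity models;
    `lam` is the pairwise penalty for cutting between neighbours."""
    n = len(x)
    s, t = n, n + 1
    cap = {u: {} for u in range(n + 2)}  # residual capacities

    def add(u, v, c):
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v][u] = cap[v].get(u, 0)     # reverse edge for residual flow

    for i, xi in enumerate(x):
        add(s, i, (xi - mu_bg) ** 2)     # paid if i is labelled background
        add(i, t, (xi - mu_obj) ** 2)    # paid if i is labelled object
    for i in range(n - 1):
        add(i, i + 1, lam)
        add(i + 1, i, lam)

    def bfs_path():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    if v == t:
                        path, w = [], t
                        while parent[w] is not None:
                            path.append((parent[w], w))
                            w = parent[w]
                        return path
                    q.append(v)
        return None

    while (path := bfs_path()) is not None:
        f = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= f
            cap[v][u] += f

    # pixels still reachable from s in the residual graph are "object"
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return [1 if i in seen else 0 for i in range(n)]
```

The template-cut idea replaces the uniform pixel grid here with nodes sampled along a template shape of the target object, so the cut can prefer that shape even where intensities alone are ambiguous.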
Automated detection of brain abnormalities in neonatal hypoxia ischemic injury from MR images.
We compared the efficacy of three automated brain injury detection methods, namely symmetry-integrated region growing (SIRG), hierarchical region splitting (HRS) and modified watershed segmentation (MWS), in human and animal magnetic resonance imaging (MRI) datasets for the detection of hypoxic ischemic injuries (HIIs). Diffusion-weighted imaging (DWI, 1.5T) data from neonatal arterial ischemic stroke (AIS) patients, as well as T2-weighted imaging (T2WI, 11.7T, 4.7T) at seven different time-points (1, 4, 7, 10, 17, 24 and 31 days post HII) in a rat-pup model of hypoxic ischemic injury, were used to assess the temporal efficacy of our computational approaches. Sensitivity, specificity, and similarity were used as performance metrics, quantified against manual ('gold standard') injury detection. Compared to the manual gold standard, automated injury localization by SIRG performed best in 62% of the data, versus 29% for HRS and 9% for MWS. For injury severity detection, SIRG performed best in 67% of cases, versus 33% for HRS. Prior information is required by HRS and MWS, but not by SIRG. However, SIRG is sensitive to parameter tuning, while HRS and MWS are not. Among these methods, SIRG performs best in detecting lesion volumes; HRS is the most robust, while MWS lags behind in both respects.
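Of the three approaches compared, region growing is the simplest to sketch. A toy 4-connected, intensity-tolerance grower follows; this is plain region growing, not SIRG's symmetry-integrated variant, and the function name and tolerance parameter are illustrative:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity
    differs from the seed value by at most `tol`."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and abs(float(img[rr, cc]) - seed_val) <= tol):
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask
```

The tolerance `tol` is the kind of parameter the comparison flags: results for growers like this depend strongly on its tuning, which is the sensitivity attributed to SIRG above.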
An open environment CT-US fusion for tissue segmentation during interventional guidance.
Therapeutic ultrasound (US) can be noninvasively focused to activate drugs, ablate tumors and deliver drugs beyond the blood brain barrier. However, well-controlled guidance of US therapy requires fusion with a navigational modality, such as magnetic resonance imaging (MRI) or X-ray computed tomography (CT). Here, we developed and validated tissue characterization using a fusion between US and CT. The performance of the CT/US fusion was quantified by the calibration error, target registration error and fiducial registration error. Met-1 tumors in the fat pads of 12 female FVB mice provided a model of developing breast cancer with which to evaluate CT-based tissue segmentation. Hounsfield units (HU) within the tumor and surrounding fat pad were quantified, validated with histology and segmented for parametric analysis (fat: -300 to 0 HU, protein-rich: 1 to 300 HU, and bone: HU > 300). Our open source CT/US fusion system differentiated soft tissue, bone and fat with a spatial accuracy of ∼1 mm. Region of interest (ROI) analysis of the tumor and surrounding fat pad using a 1 mm² ROI resulted in mean HU of 68±44 within the tumor and -97±52 within the fat pad adjacent to the tumor (p<0.005). The tumor area measured by CT and histology was correlated (r² = 0.92), while the area designated as fat decreased with increasing tumor size (r² = 0.51). Analysis of CT and histology images of the tumor and surrounding fat pad revealed an average percentage of fat of 65.3% vs. 75.2%, 36.5% vs. 48.4%, and 31.6% vs. 38.5% for tumors <75 mm³, 75-150 mm³ and >150 mm³, respectively. Further, CT mapped bone-soft tissue interfaces near the acoustic beam during real-time imaging. Combined CT/US is a feasible method for guiding interventions by tracking the acoustic focus within a pre-acquired CT image volume and characterizing tissues proximal to and surrounding the acoustic focus.
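The parametric segmentation quoted above is a direct windowing of Hounsfield units. A minimal numpy sketch using exactly the abstract's HU windows (the function name and integer label scheme are illustrative, not from the paper):

```python
import numpy as np

def classify_hu(hu):
    """Map CT values in Hounsfield units to tissue labels, using the
    windows from the abstract (fat: -300..0, protein-rich: 1..300,
    bone: >300). Labels: 0 = other, 1 = fat, 2 = protein-rich, 3 = bone."""
    hu = np.asarray(hu)
    labels = np.zeros(hu.shape, dtype=np.uint8)
    labels[(hu >= -300) & (hu <= 0)] = 1
    labels[(hu >= 1) & (hu <= 300)] = 2
    labels[hu > 300] = 3
    return labels
```

With these windows, the reported mean tumor value of 68 HU falls in the protein-rich class and the adjacent fat pad's -97 HU in the fat class, matching the abstract's characterization.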