A novel model-based 3D+time left ventricular segmentation technique
A common approach to model-based segmentation is to assume a top-down modelling strategy. However, this is not feasible for complex 3D+time structures such as the cardiac left ventricle due to increased training requirements, alignment difficulties and local minima in the resulting models. As our main contribution, we present an alternative bottom-up modelling approach. By combining the variation captured in multiple dimensionally-targeted models at segmentation-time, we create a scalable segmentation framework that does not suffer from the 'curse of dimensionality'. Our second contribution is a flexible contour coupling technique that allows our segmentation method to adapt to unseen contour configurations outside the training set. This is used to identify the endo- and epi-cardium contours of the left ventricle by coupling them at segmentation-time, instead of at model-time. We apply our approach to 33 3D+time MRI cardiac datasets and perform a comprehensive evaluation against several state-of-the-art works. Quantitative evaluation illustrates that our method requires significantly less training than state-of-the-art model-based methods, while maintaining or improving segmentation accuracy.
OSC-CO2: coattention and cosegmentation framework for plant state change with multiple features
Cosegmentation and coattention are extensions of traditional segmentation methods aimed at detecting a common object (or objects) in a group of images. Current cosegmentation and coattention methods are ineffective for objects, such as plants, that change their morphological state while being captured in different modalities and views. Object State Change using Coattention-Cosegmentation (OSC-CO2) is an end-to-end unsupervised deep-learning framework that enhances traditional segmentation techniques by processing, analyzing, selecting, and combining suitable segmentation results that may contain most of the target object's pixels, and then producing a final segmented image. The framework leverages coattention-based convolutional neural networks (CNNs) and cosegmentation-based dense Conditional Random Fields (CRFs) to address segmentation accuracy in high-dimensional plant imagery with evolving plant objects. The efficacy of OSC-CO2 is demonstrated using plant growth sequences imaged with infrared, visible, and fluorescence cameras in multiple views on a remote-sensing, high-throughput phenotyping platform, and is evaluated using the Jaccard index and precision measures. We also introduce CosegPP+, a structured dataset that provides quantitative information on the efficacy of our framework. Results show that OSC-CO2 outperformed state-of-the-art segmentation and cosegmentation methods, improving segmentation accuracy by 3% to 45%.
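The Jaccard index used for evaluation above is the intersection-over-union of the predicted and ground-truth object pixels. A minimal sketch, assuming masks are represented as sets of pixel indices (the representation is illustrative, not from the paper):

```python
def jaccard(pred, gt):
    """Jaccard index (intersection over union) of two pixel-index sets."""
    union = pred | gt
    return len(pred & gt) / len(union) if union else 1.0

# Toy example: 3 shared pixels out of 5 in the union
pred = {1, 2, 3, 4}
gt = {2, 3, 4, 5}
print(jaccard(pred, gt))  # 0.6
```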
The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge: A Multi-Institute Evaluation and Analysis Framework on a Standardized Dataset
Purpose: To organize a knee MRI segmentation challenge for characterizing the
semantic and clinical efficacy of automatic segmentation methods relevant for
monitoring osteoarthritis progression.
Methods: A dataset partition consisting of 3D knee MRI from 88 subjects at
two timepoints with ground-truth articular (femoral, tibial, patellar)
cartilage and meniscus segmentations was standardized. Challenge submissions
and a majority-vote ensemble were evaluated using Dice score, average symmetric
surface distance, volumetric overlap error, and coefficient of variation on a
hold-out test set. Similarities in network segmentations were evaluated using
pairwise Dice correlations. Articular cartilage thickness was computed per-scan
and longitudinally. Correlation between thickness error and segmentation
metrics was measured using Pearson's coefficient. Two empirical upper bounds
for ensemble performance were computed using combinations of model outputs that
consolidated true positives and true negatives.
Results: Six teams (T1-T6) submitted entries for the challenge. No
significant differences were observed across all segmentation metrics for all
tissues (p=1.0) among the four top-performing networks (T2, T3, T4, T6). Dice
correlations between network pairs were high (>0.85). Per-scan thickness errors
were negligible among T1-T4 (p=0.99) and longitudinal changes showed minimal
bias (<0.03mm). Low correlations (<0.41) were observed between segmentation
metrics and thickness error. The majority-vote ensemble was comparable to top
performing networks (p=1.0). Empirical upper bound performances were similar
for both combinations (p=1.0).
Conclusion: Diverse networks learned to segment the knee similarly where high
segmentation accuracy did not correlate to cartilage thickness accuracy. Voting
ensembles did not outperform individual networks but may help regularize
individual models.
Comment: Submitted to Radiology: Artificial Intelligence
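The Dice score and majority-vote ensembling described above can be sketched as follows; this is a minimal illustration on boolean masks, with toy data standing in for the 3D segmentations:

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two boolean masks (1.0 = perfect agreement)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def majority_vote(masks):
    """Keep voxels predicted positive by more than half of the models."""
    votes = np.sum(np.stack(masks), axis=0)
    return votes > len(masks) / 2.0

# Toy 1-D "volume" with three model outputs and a ground truth
m1 = np.array([1, 1, 0, 0], dtype=bool)
m2 = np.array([1, 0, 0, 1], dtype=bool)
m3 = np.array([1, 1, 0, 0], dtype=bool)
gt = np.array([1, 1, 0, 0], dtype=bool)

ensemble = majority_vote([m1, m2, m3])  # [True, True, False, False]
score = dice(ensemble, gt)              # 1.0
```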
Shallow vs deep learning architectures for white matter lesion segmentation in the early stages of multiple sclerosis
In this work, we present a comparison of a shallow and a deep learning
architecture for the automated segmentation of white matter lesions in MR
images of multiple sclerosis patients. In particular, we train and test both
methods on early stage disease patients, to verify their performance in
challenging conditions, more similar to a clinical setting than what is
typically provided in multiple sclerosis segmentation challenges. Furthermore,
we evaluate a prototype naive combination of the two methods, which refines the
final segmentation. All methods were trained on 32 patients, and the evaluation
was performed on a pure test set of 73 cases. Results show low lesion-wise
false positives (30%) for the deep learning architecture, whereas the shallow
architecture yields the best Dice coefficient (63%) and volume difference
(19%). Combining both shallow and deep architectures further improves the
lesion-wise metrics (69% and 26% lesion-wise true and false positive rate,
respectively).
Comment: Accepted to the MICCAI 2018 Brain Lesion (BrainLes) workshop
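The lesion-wise true and false positive rates reported above count whole lesions rather than voxels: a ground-truth lesion is detected if any predicted lesion overlaps it, and a predicted lesion with no ground-truth overlap is a false positive. A sketch under the assumption that lesions are given as sets of voxel indices (e.g. from connected-component labelling, which is omitted here):

```python
def lesion_wise_rates(pred_lesions, gt_lesions):
    """Lesion-wise TPR and FPR from lists of voxel-index sets.

    A GT lesion counts as a true positive if any predicted lesion
    overlaps it; a predicted lesion overlapping no GT lesion is a
    false positive.
    """
    tp = sum(1 for g in gt_lesions if any(g & p for p in pred_lesions))
    fp = sum(1 for p in pred_lesions if not any(p & g for g in gt_lesions))
    tpr = tp / len(gt_lesions) if gt_lesions else 1.0
    fpr = fp / len(pred_lesions) if pred_lesions else 0.0
    return tpr, fpr

# Two GT lesions; the first is hit, the second is missed,
# and one predicted lesion is spurious
pred = [{2, 3}, {20}]
gt = [{1, 2}, {10, 11}]
print(lesion_wise_rates(pred, gt))  # (0.5, 0.5)
```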
Lifting GIS Maps into Strong Geometric Context for Scene Understanding
Contextual information can have a substantial impact on the performance of
visual tasks such as semantic segmentation, object detection, and geometric
estimation. Data stored in Geographic Information Systems (GIS) offers a rich
source of contextual information that has been largely untapped by computer
vision. We propose to leverage such information for scene understanding by
combining GIS resources with large sets of unorganized photographs using
Structure from Motion (SfM) techniques. We present a pipeline to quickly
generate strong 3D geometric priors from 2D GIS data using SfM models aligned
with minimal user input. Given an image resectioned against this model, we
generate robust predictions of depth, surface normals, and semantic labels. We
show that the predicted geometry is substantially more accurate than that of
other single-image depth estimation methods. We then demonstrate the
utility of these contextual constraints for re-scoring pedestrian detections,
and use these GIS contextual features alongside object detection score maps to
improve a CRF-based semantic segmentation framework, boosting accuracy over
baseline models
RADNET: Radiologist Level Accuracy using Deep Learning for HEMORRHAGE detection in CT Scans
We describe a deep learning approach for automated brain hemorrhage detection
from computed tomography (CT) scans. Our model emulates the procedure followed
by radiologists to analyse a 3D CT scan in real-world. Similar to radiologists,
the model sifts through 2D cross-sectional slices while paying close attention
to potential hemorrhagic regions. Further, the model utilizes 3D context from
neighboring slices to improve predictions at each slice and subsequently,
aggregates the slice-level predictions to provide diagnosis at CT level. We
refer to our proposed approach as Recurrent Attention DenseNet (RADnet), as it
employs the original DenseNet architecture augmented with attention components
for slice-level predictions and a recurrent neural network layer for
incorporating 3D context. The real-world performance of RADnet has been
benchmarked against independent analysis performed by three senior radiologists
for 77 brain CTs. RADnet demonstrates 81.82% hemorrhage prediction accuracy at
CT level that is comparable to radiologists. Further, RADnet achieves higher
recall than two of the three radiologists.
Comment: Accepted at IEEE Symposium on Biomedical Imaging (ISBI) 2018 as a conference paper
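The abstract states that slice-level predictions are aggregated into a CT-level diagnosis but does not give the aggregation rule; one simple assumption is a max-pooling rule, flagging a scan when any slice exceeds a threshold:

```python
def ct_level_call(slice_probs, threshold=0.5):
    """Flag a scan as hemorrhage-positive if any slice exceeds the threshold.

    The max rule and the 0.5 threshold are illustrative assumptions,
    not the aggregation specified by the RADnet paper.
    """
    return max(slice_probs) >= threshold

print(ct_level_call([0.1, 0.2, 0.9, 0.3]))  # True
print(ct_level_call([0.1, 0.2, 0.3]))       # False
```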
ParseNet: Looking Wider to See Better
We present a technique for adding global context to deep convolutional
networks for semantic segmentation. The approach is simple, using the average
feature for a layer to augment the features at each location. In addition, we
study several idiosyncrasies of training, significantly increasing the
performance of baseline networks (e.g. from FCN). When we add our proposed
global feature, and a technique for learning normalization parameters, accuracy
increases consistently even over our improved versions of the baselines. Our
proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow
and PASCAL-Context with small additional computational cost over baselines, and
near current state-of-the-art performance on PASCAL VOC 2012 semantic
segmentation with a simple approach. Code is available at
https://github.com/weiliu89/caffe/tree/fcn.
Comment: ICLR 2016 submission
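The global-context augmentation described above (appending a layer's spatially averaged feature to the features at every location) can be sketched as below. This is a simplified illustration on a raw numpy array; ParseNet also L2-normalizes the branches before combining, which is omitted here:

```python
import numpy as np

def add_global_context(features):
    """Concatenate the spatial average of a feature map to every location.

    features: (C, H, W) array; returns (2C, H, W). Simplified sketch of
    global-pooling augmentation; normalization is intentionally omitted.
    """
    c, h, w = features.shape
    g = features.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1) global feature
    g = np.broadcast_to(g, (c, h, w))              # tile to every location
    return np.concatenate([features, g], axis=0)

feat = np.arange(32, dtype=float).reshape(2, 4, 4)
out = add_global_context(feat)
print(out.shape)  # (4, 4, 4)
```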