DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation
Automatic organ segmentation is an important yet challenging problem for
medical image analysis. The pancreas is an abdominal organ with very high
anatomical variability. This inhibits previous segmentation methods from
achieving high accuracies, especially compared to other organs such as the
liver, heart or kidneys. In this paper, we present a probabilistic bottom-up
approach for pancreas segmentation in abdominal computed tomography (CT) scans,
using multi-level deep convolutional networks (ConvNets). We propose and
evaluate several variations of deep ConvNets in the context of hierarchical,
coarse-to-fine classification on image patches and regions, i.e. superpixels.
We first present a dense labeling of local image patches via a patch-level
ConvNet and nearest neighbor fusion. Then we describe a regional
ConvNet that samples a set of bounding boxes around
each image superpixel at different scales of context in a "zoom-out" fashion.
Our ConvNets learn to assign class probabilities for each superpixel region of
being pancreas. Last, we study a stacked ConvNet leveraging
the joint space of CT intensities and the dense
probability maps. Both 3D Gaussian smoothing and 2D conditional random fields
are exploited as structured predictions for post-processing. We evaluate on CT
images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity
Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing.
Comment: To be presented at MICCAI 2015 - 18th International Conference on
Medical Image Computing and Computer-Assisted Intervention, Munich, Germany
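The Dice Similarity Coefficient reported above is the standard overlap metric for segmentation evaluation. A minimal sketch of how it is computed from binary masks (a toy 2D example; a real evaluation would compare full 3D CT label volumes):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: 2 overlapping voxels, 3 predicted, 3 true -> DSC = 4/6
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 0.666...
```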
Structured Light-Based 3D Reconstruction System for Plants.
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
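The leaf-detection figures above (recall 0.97, precision 0.89) follow from true-positive, false-positive, and false-negative counts. A minimal sketch, using hypothetical counts chosen to reproduce those values (not the paper's actual tallies):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: 89 correctly detected leaves, 11 false detections,
# 3 missed leaves (illustrative only).
p, r = precision_recall(tp=89, fp=11, fn=3)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.89, recall=0.97
```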
Enrichment of the NLST and NSCLC-Radiomics computed tomography collections with AI-derived annotations
Public imaging datasets are critical for the development and evaluation of
automated tools in cancer imaging. Unfortunately, many do not include
annotations or image-derived features, complicating their downstream analysis.
Artificial intelligence-based annotation tools have been shown to achieve
acceptable performance and thus can be used to automatically annotate large
datasets. As part of the effort to enrich public data available within NCI
Imaging Data Commons (IDC), here we introduce AI-generated annotations for two
collections of computed tomography images of the chest: NSCLC-Radiomics and
the National Lung Screening Trial. Using publicly available AI algorithms we
derived volumetric annotations of thoracic organs at risk, their corresponding
radiomics features, and slice-level annotations of anatomical landmarks and
regions. The resulting annotations are publicly available within IDC, where the
DICOM format is used to harmonize the data and achieve FAIR principles. The
annotations are accompanied by cloud-enabled notebooks demonstrating their use.
This study reinforces the need for large, publicly accessible curated datasets
and demonstrates how AI can be used to aid in cancer imaging.
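One of the simplest radiomics features derivable from the volumetric annotations described above is organ volume: voxel count scaled by the physical voxel size from the image spacing. A minimal sketch with a generic NumPy mask (the actual IDC annotations are distributed as DICOM objects; loading them is out of scope here):

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray,
                   spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary segmentation in millilitres:
    voxel count x voxel volume (mm^3), divided by 1000."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Toy example: a 10x10x10-voxel block at 1 x 1 x 2 mm spacing
# -> 1000 voxels x 2 mm^3 = 2000 mm^3 = 2 mL
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[10:20, 10:20, 10:20] = 1
print(mask_volume_ml(mask, (1.0, 1.0, 2.0)))  # 2.0
```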
Cohort-based T-SSIM Visual Computing for Radiation Therapy Prediction and Exploration
We describe a visual computing approach to radiation therapy (RT) planning,
based on spatial similarity within a patient cohort. In radiotherapy for head
and neck cancer treatment, dosage to organs at risk surrounding a tumor is a
large cause of treatment toxicity. Along with the availability of patient
repositories, this situation has led to clinician interest in understanding
and predicting RT outcomes based on previously treated similar patients. To
enable this type of analysis, we introduce a novel topology-based spatial
similarity measure, T-SSIM, and a predictive algorithm based on this similarity
measure. We couple the algorithm with a visual steering interface that
intertwines visual encodings for the spatial data and statistical results,
including a novel parallel-marker encoding that is spatially aware. We report
quantitative results on a cohort of 165 patients, as well as a qualitative
evaluation with domain experts in radiation oncology, data management,
biostatistics, and medical imaging, who are collaborating remotely.
Comment: IEEE VIS (SciVis) 201
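T-SSIM builds on the structural similarity index (SSIM). As a point of reference, a minimal sketch of plain single-window SSIM between two same-shaped arrays (e.g. 2D dose slices); this is the standard formulation, not the authors' topology-based T-SSIM variant:

```python
import numpy as np

def ssim_global(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Global (single-window) SSIM:
    ((2*mu_a*mu_b + c1) * (2*cov_ab + c2)) /
    ((mu_a^2 + mu_b^2 + c1) * (var_a + var_b + c2))."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

# An array compared with itself has SSIM exactly 1
rng = np.random.default_rng(0)
dose_a = rng.random((32, 32))
print(ssim_global(dose_a, dose_a))  # 1.0
```

In practice SSIM is usually computed over local sliding windows and averaged; the global form above only illustrates the formula's terms.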