Graph Refinement based Airway Extraction using Mean-Field Networks and Graph Neural Networks
Graph refinement, or the task of obtaining subgraphs of interest from over-complete graphs, has many varied applications. In this work, we extract trees or collections of sub-trees from image data by first deriving a graph-based representation of the volumetric data and then posing the tree extraction as a graph refinement task. We present two methods to perform graph refinement. First, we use mean-field approximation (MFA) to approximate the posterior density over the subgraphs, from which the optimal subgraph of interest can be estimated. Mean-field networks (MFNs) are used for inference, based on the interpretation that iterations of MFA can be seen as feed-forward operations in a neural network. This allows us to learn the model parameters using gradient descent. Second, we present a supervised learning approach using graph neural networks (GNNs), which can be seen as generalisations of MFNs. Subgraphs are obtained by training a GNN-based graph refinement model to directly predict edge probabilities. We discuss connections between the two classes of methods and compare them for the task of extracting airways from 3D, low-dose, chest CT data. We show that both the MFN and GNN models significantly improve on a baseline method, similar to a top-performing method in the EXACT'09 Challenge, and on a 3D U-Net based airway segmentation model, detecting more branches with fewer false positives.
Comment: Accepted for publication at Medical Image Analysis. 14 pages
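The MFN idea above, that each mean-field update can be unrolled as one layer of a feed-forward network over per-edge beliefs, can be sketched as follows. This is a toy illustration under assumed, simplified parametrisation (scalar pairwise weight, random adjacency), not the authors' actual model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mfn_refine(unary, pairwise, adjacency, n_iters=5):
    """Unrolled mean-field updates over candidate-edge beliefs.
    Each iteration acts like one feed-forward layer; `unary` and
    `pairwise` are hypothetical learnable potentials."""
    q = sigmoid(unary)                        # initial per-edge beliefs
    for _ in range(n_iters):                  # MFA iterations == network layers
        messages = adjacency @ (pairwise * q)  # influence of neighbouring edges
        q = sigmoid(unary + messages)         # mean-field update
    return q

# Toy over-complete graph with 4 candidate edges
rng = np.random.default_rng(0)
unary = rng.normal(size=4)
pairwise = 0.5
adjacency = (rng.random((4, 4)) > 0.5).astype(float)
np.fill_diagonal(adjacency, 0.0)              # an edge does not message itself
q = mfn_refine(unary, pairwise, adjacency)    # posterior-like edge probabilities
```

Because the updates are differentiable, the potentials could be trained by gradient descent, which is the key point of the MFN interpretation.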
Mean Field Network based Graph Refinement with application to Airway Tree Extraction
We present tree extraction in 3D images as a graph refinement task: obtaining a subgraph from an over-complete input graph. To this end, we formulate an approximate Bayesian inference framework on undirected graphs using mean field approximation (MFA). Mean field networks are used for inference, based on the interpretation that iterations of MFA can be seen as feed-forward operations in a neural network. This allows us to learn the model parameters from training data using the back-propagation algorithm. We demonstrate the usefulness of the model by extracting airway trees from 3D chest CT data. We first obtain probability images using a voxel classifier that distinguishes airways from background, and use Bayesian smoothing to model individual airway branches. This yields joint Gaussian density estimates of position, orientation and scale as node features of the input graph. Performance of the method is compared with two methods: the first uses probability images from a trained voxel classifier with region growing, which is similar to one of the best performing methods at the EXACT'09 airway challenge; the second is based on Bayesian smoothing of these probability images. Using centerline distance as the error measure, the presented method shows significant improvement over these two methods.
Comment: 10 pages. Preprint
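The Bayesian smoothing step described above, which turns noisy branch measurements into Gaussian state estimates, can be illustrated with a minimal forward-filter/backward-smoother over a 1-D trajectory. The real model jointly smooths position, orientation and scale; this toy reduction and its noise parameters (`q`, `r`) are assumptions for illustration only.

```python
import numpy as np

def kalman_smooth(obs, q=0.01, r=0.1):
    """Minimal scalar Kalman filter plus RTS backward pass.
    `q` is process noise, `r` observation noise (hypothetical values)."""
    n = len(obs)
    x_f = np.zeros(n)
    p_f = np.zeros(n)
    x, p = obs[0], 1.0
    x_f[0], p_f[0] = x, p
    for t in range(1, n):                 # forward (filtering) pass
        p_pred = p + q
        k = p_pred / (p_pred + r)         # Kalman gain
        x = x + k * (obs[t] - x)
        p = (1.0 - k) * p_pred
        x_f[t], p_f[t] = x, p
    x_s = x_f.copy()
    for t in range(n - 2, -1, -1):        # backward (smoothing) pass
        g = p_f[t] / (p_f[t] + q)         # RTS smoother gain
        x_s[t] = x_f[t] + g * (x_s[t + 1] - x_f[t])
    return x_s

# Noisy samples along a toy branch centerline
t = np.linspace(0.0, 1.0, 50)
obs = 2.0 * t + np.random.default_rng(3).normal(scale=0.1, size=50)
x_s = kalman_smooth(obs)                  # smoothed branch estimate
```

The smoothed posterior means (and, in the full version, covariances) would then serve as the Gaussian node features of the input graph.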
Extraction of Airways using Graph Neural Networks
We present the extraction of tree structures, such as airways, from image data as a graph refinement task. To this end, we propose a graph auto-encoder model that uses an encoder based on graph neural networks (GNNs) to learn embeddings from input node features and a decoder to predict connections between nodes. Performance of the GNN model is compared with mean-field networks in their ability to extract airways from 3D chest CT scans.
Comment: Extended abstract submitted to MIDL, 2018. 3 pages
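The encoder/decoder split described above can be sketched with a graph auto-encoder in the style of Kipf and Welling: a message-passing encoder produces node embeddings, and an inner-product decoder scores every node pair as a candidate edge. The neighbourhood-averaging encoder and all dimensions here are simplifying assumptions, not the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gnn_encoder(A, X, W1, W2):
    """Two-layer GNN encoder using simple neighbourhood averaging
    (a stand-in for the message passing used in the paper)."""
    A_hat = A + np.eye(A.shape[0])         # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    H = relu(D_inv @ A_hat @ X @ W1)       # first propagation layer
    return D_inv @ A_hat @ H @ W2          # node embeddings Z

def decode_edges(Z):
    """Inner-product decoder: edge probability for every node pair."""
    return sigmoid(Z @ Z.T)

# Toy over-complete graph: 6 nodes with 3-D features
rng = np.random.default_rng(1)
n, d = 6, 3
A = (rng.random((n, n)) > 0.6).astype(float)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric, no self-loops
X = rng.normal(size=(n, d))
W1 = rng.normal(size=(d, 8))
W2 = rng.normal(size=(8, 4))
P = decode_edges(gnn_encoder(A, X, W1, W2))  # predicted edge probabilities
```

Training would then fit `W1`, `W2` so that `P` matches reference airway connectivity; thresholding `P` yields the refined subgraph.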
Visual Saliency Based on Multiscale Deep Features
Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this CVPR 2015 paper, we discover that a high-quality visual saliency model can be trained with multiscale features extracted using a popular deep learning architecture, convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. To learn such saliency models, we introduce a neural network architecture with fully connected layers on top of CNNs responsible for extracting features at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation further boosts performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images with pixelwise saliency annotations. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on all public benchmarks, improving the F-measure by 5.0% and 13.2% respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7% and 35.1% respectively on these two datasets.
Comment: To appear in CVPR 2015
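The scoring head described above, fully connected layers over CNN features from three scales, can be sketched as follows. The CNN feature extractors themselves are assumed and replaced by placeholder vectors; the feature dimension and single-layer head are illustrative choices, not the paper's exact architecture.

```python
import numpy as np

def saliency_score(feats_region, feats_neigh, feats_image, W, b):
    """Score one image segment: concatenate features extracted at three
    scales (the segment, its neighbourhood, the whole image) and pass
    them through a fully connected layer with a sigmoid output."""
    x = np.concatenate([feats_region, feats_neigh, feats_image])
    z = W @ x + b
    return 1.0 / (1.0 + np.exp(-z))        # saliency probability

# Stand-ins for CNN features of one segment at the three scales
rng = np.random.default_rng(2)
f_region = rng.normal(size=64)
f_neigh = rng.normal(size=64)
f_image = rng.normal(size=64)
W = rng.normal(size=(1, 192)) * 0.05       # hypothetical learned weights
b = np.zeros(1)
s = saliency_score(f_region, f_neigh, f_image, W, b)
```

Scoring every segment this way, at several segmentation levels, gives the per-level saliency maps that the paper then refines and aggregates.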