Unsupervised Lesion Detection via Image Restoration with a Normative Prior
Unsupervised lesion detection is a challenging problem that requires
accurately estimating normative distributions of healthy anatomy and detecting
lesions as outliers without training examples. Recently, this problem has
received increased attention from the research community following the advances
in unsupervised learning with deep learning. Such advances allow the estimation
of high-dimensional distributions, such as normative distributions, with higher
accuracy than previous methods. The main approach of the recently proposed
methods is to learn a latent-variable model parameterized with networks to
approximate the normative distribution using example images showing healthy
anatomy, perform prior-projection, i.e. reconstruct the image with lesions
using the latent-variable model, and determine lesions based on the differences
between the reconstructed and original images. While being promising, the
prior-projection step often leads to a large number of false positives. In this
work, we approach unsupervised lesion detection as an image restoration problem
and propose a probabilistic model that uses a network-based prior as the
normative distribution and detects lesions pixel-wise using MAP estimation. The
probabilistic model penalizes large deviations between restored and original
images, reducing false positives in pixel-wise detections. Experiments with
gliomas and stroke lesions in brain MRI using publicly available datasets show
that the proposed approach outperforms the state-of-the-art unsupervised
methods by a substantial margin, +0.13 (AUC), for both glioma and stroke
detection. Extensive model analysis confirms the effectiveness of MAP-based
image restoration.
Comment: Extended version of 'Unsupervised Lesion Detection via Image Restoration with a Normative Prior' (MIDL 2019).
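The MAP restoration idea can be sketched as follows. This is a toy illustration, not the paper's implementation: the learned network-based normative prior is replaced by a simple Gaussian prior around a healthy reference image, and restoration is gradient ascent on log p(y|x) + log p(x), with the data term penalizing large deviations from the observed image.

```python
import numpy as np

def map_restore(y, prior_mean, lam=0.2, lr=0.1, steps=200):
    """Gradient ascent on log p(y|x) + log p(x).

    log p(y|x) ∝ -lam * ||y - x||^2     (penalizes large deviations)
    log p(x)   ∝ -||x - prior_mean||^2  (toy stand-in for the normative prior)
    """
    x = y.copy()
    for _ in range(steps):
        grad = -2.0 * lam * (x - y) - 2.0 * (x - prior_mean)
        x += lr * grad
    return x

# Lesions are then flagged pixel-wise from the restoration residual.
healthy = np.zeros((8, 8))        # normative (healthy) appearance
observed = healthy.copy()
observed[3:5, 3:5] = 5.0          # a bright synthetic "lesion"
restored = map_restore(observed, prior_mean=healthy)
residual = np.abs(observed - restored)
detection = residual > 1.0        # pixel-wise detection map
```

Because the restored image is pulled toward the normative prior only as far as the data term allows, healthy pixels produce near-zero residuals while lesion pixels remain far from their restoration, which is what keeps pixel-wise false positives low.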
Semantic labeling of places using information extracted from laser and vision sensor data
Indoor environments can typically be divided into places with different functionalities, such as corridors, kitchens, offices, or seminar rooms. The ability to learn such semantic categories from sensor data enables a mobile robot to extend its representation of the environment, facilitating interaction with humans. For example, natural language terms like corridor or room can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from range data and vision into a strong classifier. We present two main applications of this approach. First, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. Second, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation procedure. Finally, we show how to apply associative Markov networks (AMNs) together with AdaBoost for classifying complete geometric maps. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
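The boosting step can be sketched with decision stumps over hand-crafted features. This is a minimal numpy sketch: the paper's actual features come from laser range data and vision, whereas the two-feature toy data below (mean laser range and opening count per pose) is invented for illustration.

```python
import numpy as np

def stump_predict(X, f, thr, pol):
    # Weak classifier: sign of (feature - threshold), with polarity pol
    return np.where(pol * (X[:, f] - thr) >= 0, 1, -1)

def fit_stump(X, y, w):
    # Exhaustively pick the threshold stump minimizing weighted error
    best = (np.inf, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                err = w[stump_predict(X, f, thr, pol) != y].sum()
                if err < best[0]:
                    best = (err, f, thr, pol)
    return best

def adaboost_fit(X, y, rounds=5):
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        err, f, thr, pol = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * stump_predict(X, f, thr, pol))  # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, f, thr, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(a * stump_predict(X, f, t, p) for a, f, t, p in ensemble)
    return np.sign(score)

# Toy poses: [mean laser range, opening count]; +1 = corridor, -1 = room
X = np.array([[6.0, 2.0], [5.5, 2.0], [2.0, 5.0], [1.5, 6.0]])
y = np.array([1, 1, -1, -1])
model = adaboost_fit(X, y)
```

The weighted vote of stumps forms the strong classifier; in the paper's online setting, its per-pose outputs would then feed a hidden Markov model to smooth the classification along the robot's path.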
Graph Refinement based Airway Extraction using Mean-Field Networks and Graph Neural Networks
Graph refinement, or the task of obtaining subgraphs of interest from
over-complete graphs, can have many varied applications. In this work, we
extract trees or collections of sub-trees from image data by first deriving a
graph-based representation of the volumetric data and then posing the tree
extraction as a graph refinement task. We present two methods to perform graph
refinement. First, we use mean-field approximation (MFA) to approximate the
posterior density over the subgraphs from which the optimal subgraph of
interest can be estimated. Mean field networks (MFNs) are used for inference
based on the interpretation that iterations of MFA can be seen as feed-forward
operations in a neural network. This allows us to learn the model parameters
using gradient descent. Second, we present a supervised learning approach using
graph neural networks (GNNs) which can be seen as generalisations of MFNs.
Subgraphs are obtained by training a GNN-based graph refinement model to
directly predict edge probabilities. We discuss connections between the two
classes of methods and compare them for the task of extracting airways from 3D,
low-dose chest CT data. Both the MFN and GNN models show significant improvement over a baseline method that is similar to a top-performing method in the EXACT'09 Challenge and over a 3D U-Net based airway segmentation model, detecting more branches with fewer false positives.
Comment: Accepted for publication at Medical Image Analysis. 14 pages.
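The mean-field side can be illustrated with a minimal numpy sketch, in which each iteration updates edge-inclusion probabilities from unary scores and neighbour couplings, mirroring one feed-forward MFN layer. The unary scores, coupling value, and toy edge-adjacency below are invented for illustration and are not the paper's learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field_refine(unary, coupling, adjacency, iters=20):
    """Each iteration acts like one feed-forward 'layer' of an MFN:
    q[e] <- sigmoid(unary[e] + coupling * sum of q over adjacent edges)."""
    q = sigmoid(unary)
    for _ in range(iters):
        q = sigmoid(unary + coupling * (adjacency @ q))
    return q

# Four candidate edges in a chain: edge 2 has weak evidence on its own but
# neighbours the strongly supported edge 1; edge 3 has strong evidence
# against it.
unary = np.array([2.0, 2.0, -0.5, -3.0])
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
q = mean_field_refine(unary, coupling=1.5, adjacency=adjacency)
subgraph = q > 0.5   # refined subgraph keeps edges 0, 1, 2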
Unsupervised robust nonparametric learning of hidden community properties
We consider learning of fundamental properties of communities in large noisy
networks, in the prototypical situation where the nodes or users are split into
two classes according to a binary property, e.g., according to their opinions
or preferences on a topic. For learning these properties, we propose a
nonparametric, unsupervised, and scalable graph scan procedure that is, in
addition, robust against a class of powerful adversaries. In our setup, one of
the communities can fall under the influence of a knowledgeable adversarial
leader, who knows the full network structure, has unlimited computational
resources and can completely foresee our planned actions on the network. We
prove strong consistency of our results in this setup with minimal assumptions.
In particular, the learning procedure estimates the baseline activity of normal
users asymptotically correctly with probability 1; the only assumption being
the existence of a single implicit community of asymptotically negligible
logarithmic size. We provide experiments on real and synthetic data to
illustrate the performance of our method, including examples with adversaries.
Comment: Experiments with new types of adversaries added.
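The robust-baseline idea can be sketched as follows. This is a hedged stand-in, not the paper's exact scan procedure: median and MAD estimates replace the nonparametric statistics, and the toy activities are invented. The point it illustrates is that an adversarial community of small (here, logarithmic) size cannot move order statistics of the whole network, so the baseline activity of normal users is estimated robustly and candidate communities are scored against it.

```python
import numpy as np

def robust_baseline(activity):
    # The median resists a small adversarial community, unlike the mean
    return np.median(activity)

def scan_communities(activity, communities, threshold=2.0):
    base = robust_baseline(activity)
    # Median absolute deviation as a robust spread estimate
    mad = np.median(np.abs(activity - base))
    flagged = []
    for members in communities:
        score = abs(np.mean(activity[members]) - base) / mad
        if score > threshold:
            flagged.append(members)
    return flagged

# 18 normal users near baseline 1.0, 3 adversarially inflated users
activity = np.array([0.9, 1.1] * 9 + [5.0, 5.2, 4.8])
communities = [list(range(18)), [18, 19, 20]]
flagged = scan_communities(activity, communities)
```

Even if the adversarial leader knows this scoring rule in advance, inflating three out of twenty-one activities leaves the median and MAD essentially unchanged, which is the intuition behind robustness against a fully informed adversary.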
Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
We aim at segmenting small organs (e.g., the pancreas) from abdominal CT
scans. As the target often occupies a relatively small region in the input
image, deep neural networks can be easily confused by the complex and variable
background. To alleviate this, researchers proposed a coarse-to-fine approach,
which used prediction from the first (coarse) stage to indicate a smaller input
region for the second (fine) stage. Despite its effectiveness, this algorithm
dealt with the two stages individually, which prevented optimizing a global
energy function and limited its ability to incorporate multi-stage visual cues.
The missing contextual information led to unsatisfactory convergence across
iterations, and the fine stage sometimes produced even lower segmentation
accuracy than the coarse stage.
This paper presents a Recurrent Saliency Transformation Network. The key
innovation is a saliency transformation module, which repeatedly converts the
segmentation probability map from the previous iteration into spatial weights and
applies these weights to the current iteration. This brings two-fold
benefits. In training, it allows joint optimization over the deep networks
dealing with different input scales. In testing, it propagates multi-stage
visual information throughout iterations to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate
state-of-the-art accuracy, outperforming the previous best by an average of
over 2%. Much higher accuracies are also reported on several small organs in a
larger dataset that we collected. In addition, our approach enjoys better
convergence properties, making it more efficient and reliable in practice.
Comment: Accepted to CVPR 2018 (10 pages, 6 figures).
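The recurrence above can be sketched as follows. This is a hypothetical stand-in: the paper learns the saliency transformation inside a deep network, whereas here it is a fixed re-weighting and `segment` is a toy thresholding segmenter; the image and organ values are invented.

```python
import numpy as np

def saliency_transform(prob_map, floor=0.1):
    # Turn the previous iteration's probability map into spatial weights,
    # keeping a floor so background context is never fully suppressed
    return floor + (1.0 - floor) * prob_map

def segment(weighted_image):
    # Toy stand-in for the deep segmentation network
    return (weighted_image > 0.5).astype(float)

def recurrent_segment(image, iters=3):
    prob = np.ones_like(image)            # uniform saliency at iteration 0
    for _ in range(iters):
        weighted = image * saliency_transform(prob)
        prob = segment(weighted)          # this prediction feeds the next pass
    return prob

image = np.full((6, 6), 0.3)              # background
image[2:4, 2:4] = 0.9                     # small "organ" region
mask = recurrent_segment(image)
```

Because each iteration's prediction re-weights the input of the next, the segmenter progressively concentrates on the small target region, which is the mechanism that lets training optimize all scales jointly and lets testing propagate multi-stage cues.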