V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation
Convolutional Neural Networks (CNNs) have been recently employed to solve
problems from both the computer vision and medical image analysis fields.
Despite their popularity, most approaches are only able to process 2D images
while most medical data used in clinical practice consists of 3D volumes. In
this work we propose an approach to 3D image segmentation based on a
volumetric, fully convolutional neural network. Our CNN is trained end-to-end
on MRI volumes depicting the prostate, and learns to predict segmentation for
the whole volume at once. We introduce a novel objective function, optimised
during training, based on the Dice coefficient. This allows us to deal with
situations where there is a strong imbalance between the number of foreground
and background voxels. To cope with the limited number of annotated volumes
available for training, we augment the data by applying random non-linear
transformations and histogram matching. Our experimental evaluation shows
that our approach achieves good performance on challenging test data while
requiring only a fraction of the processing time needed by previous methods.
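
For readers unfamiliar with a Dice-based objective, here is a minimal PyTorch
sketch of a soft Dice loss in the spirit described above (illustrative only,
not the authors' implementation; the name soft_dice_loss and the smoothing
constant eps are assumptions):

    import torch

    def soft_dice_loss(pred, target, eps=1e-6):
        # pred: predicted foreground probabilities; target: binary ground truth.
        # Both shaped (N, D, H, W). Returns 1 - Dice, averaged over the batch.
        dims = (1, 2, 3)
        intersection = (pred * target).sum(dims)
        denom = (pred * pred).sum(dims) + (target * target).sum(dims)
        dice = (2.0 * intersection + eps) / (denom + eps)
        return 1.0 - dice.mean()

Because the loss is a ratio of overlap to total mass, it is insensitive to the
raw count of background voxels, which is what makes it suited to the strong
foreground/background imbalance mentioned above.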
Unsupervised Domain Adaptation with Similarity Learning
The objective of unsupervised domain adaptation is to leverage features from
a labeled source domain and learn a classifier for an unlabeled target domain,
with a similar but different data distribution. Most deep learning approaches
to domain adaptation consist of two steps: (i) learn features that preserve a
low risk on labeled samples (source domain) and (ii) make the features from
both domains as indistinguishable as possible, so that a classifier trained
on the source domain can also be applied to the target domain. In general,
the classifiers in step (i) consist of fully-connected layers applied directly
to the indistinguishable features learned in (ii). In this paper, we propose a
different way to perform classification, using similarity learning. The proposed
method learns a pairwise similarity function in which classification can be
performed by computing similarity between prototype representations of each
category. The domain-invariant features and the categorical prototype
representations are learned jointly, in an end-to-end fashion. At inference
time, images from the target domain are compared to the prototypes, and the
label associated with the best-matching prototype is output. The
approach is simple, scalable and effective. We show that our model achieves
state-of-the-art performance in different unsupervised domain adaptation
scenarios.
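
As an illustration of prototype-based classification, here is a minimal sketch
(not the paper's model; prototype_logits and the cosine similarity are
assumptions standing in for the learned pairwise similarity function):

    import torch
    import torch.nn.functional as F

    def prototype_logits(features, prototypes):
        # features: (N, d) embeddings of target-domain images;
        # prototypes: (C, d) learned per-category representations.
        # Returns (N, C) similarity scores.
        f = F.normalize(features, dim=1)
        p = F.normalize(prototypes, dim=1)
        return f @ p.t()

    # At inference, the predicted label is the best-matching prototype:
    # labels = prototype_logits(feats, protos).argmax(dim=1)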
Segmentation-Aware Convolutional Networks Using Local Attention Masks
We introduce an approach to integrate segmentation information within a
convolutional neural network (CNN). This counteracts the tendency of CNNs to
smooth information across regions and increases their spatial precision. To
obtain segmentation information, we set up a CNN to provide an embedding space
where region co-membership can be estimated based on Euclidean distance. We use
these embeddings to compute a local attention mask relative to every neuron
position. We incorporate such masks in CNNs and replace the convolution
operation with a "segmentation-aware" variant that allows a neuron to
selectively attend to inputs coming from its own region. We call the resulting
network a segmentation-aware CNN because it adapts its filters at each image
point according to local segmentation cues. We demonstrate the merit of our
method on two widely different dense prediction tasks that involve
classification (semantic segmentation) and regression (optical flow). Our
results show that in semantic segmentation we can match the performance of
DenseCRFs while being faster and simpler, and in optical flow we obtain clearly
sharper responses than networks that do not use local attention masks. In both
cases, segmentation-aware convolution yields systematic improvements over
strong baselines. Source code for this work is available online at
http://cs.cmu.edu/~aharley/segaware
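
A rough PyTorch sketch of the idea (a simplification, not the released code;
the L1 embedding distance, the exponential decay alpha, and the omission of
mask normalization are assumptions):

    import torch
    import torch.nn.functional as F

    def segmentation_aware_conv(x, emb, weight, k=3, alpha=1.0):
        # x: (N, C, H, W) input features; emb: (N, E, H, W) per-pixel embeddings;
        # weight: (C_out, C, k, k) convolution filters. Each neighbour in a
        # k x k window is down-weighted by exp(-alpha * ||e_center - e_nbr||_1),
        # so a neuron attends mostly to inputs from its own region.
        N, C, H, W = x.shape
        E = emb.shape[1]
        pad = k // 2
        patches = F.unfold(x, k, padding=pad).view(N, C, k * k, H * W)
        e_patches = F.unfold(emb, k, padding=pad).view(N, E, k * k, H * W)
        center = emb.view(N, E, 1, H * W)
        mask = torch.exp(-alpha * (e_patches - center).abs().sum(1))  # (N, k*k, H*W)
        patches = patches * mask.unsqueeze(1)
        out = weight.view(weight.shape[0], -1) @ patches.reshape(N, C * k * k, H * W)
        return out.view(N, -1, H, W)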
ADVISE: Symbolism and External Knowledge for Decoding Advertisements
In order to convey the most content in their limited space, advertisements
embed references to outside knowledge via symbolism. For example, a motorcycle
stands for adventure (a positive property the ad wants associated with the
product being sold), and a gun stands for danger (a negative property to
dissuade viewers from undesirable behaviors). We show how to use symbolic
references to better understand the meaning of an ad. We further show how
anchoring ad understanding in general-purpose object recognition and image
captioning improves results. We formulate the ad understanding task as matching
the ad image to human-generated statements that describe the action that the ad
prompts, and the rationale it provides for taking this action. Our proposed
method outperforms the state of the art on this task, and on an alternative
formulation of question-answering on ads. We show additional applications of
our learned representations for matching ads to slogans, and clustering ads
according to their topic, without extra training.
Comment: To appear, Proceedings of the European Conference on Computer Vision (ECCV).
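
To make the matching formulation concrete, here is a minimal sketch
(illustrative only; the embedding encoders, rank_statements, and the cosine
scoring are assumptions, not the paper's exact model):

    import torch
    import torch.nn.functional as F

    def rank_statements(image_emb, statement_embs):
        # image_emb: (d,) embedding of an ad image; statement_embs: (S, d)
        # embeddings of candidate action-reason statements. Returns statement
        # indices ranked by cosine similarity, best match first.
        img = F.normalize(image_emb, dim=0)
        stmts = F.normalize(statement_embs, dim=1)
        scores = stmts @ img
        return scores.argsort(descending=True)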