A Fast Learning Algorithm for Image Segmentation with Max-Pooling Convolutional Networks
We present a fast algorithm for training Max-Pooling Convolutional Networks to
segment images. This type of network yields record-breaking performance in a
variety of tasks, but is normally trained on a computationally expensive
patch-by-patch basis. Our new method processes each training image in a single
pass, which is vastly more efficient.
We validate the approach in different scenarios and report a 1500-fold
speed-up. In an application to automated steel defect detection and
segmentation, we obtain excellent performance with short training times.
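The efficiency gain rests on a standard property of convolutions: sliding a patch classifier over an image recomputes the same products many times, whereas one convolution over the full image produces every patch's output at once. A minimal NumPy sketch of this equivalence (the image, kernel, and pixel coordinates are illustrative, not the paper's):

```python
import numpy as np

def conv2d_valid(img, k):
    # Naive "valid" 2-D cross-correlation.
    H, W = img.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

# Single pass over the whole image: one output per pixel position.
full = conv2d_valid(img, k)

# Patch-by-patch: classify the pixel at (i, j) from its 3x3 neighbourhood.
i, j = 2, 5
patch = img[i:i + 3, j:j + 3]
per_patch = conv2d_valid(patch, k)[0, 0]

# Same value either way; the patch-wise route just recomputes shared work.
assert np.isclose(full[i, j], per_patch)
```

The single pass shares all intermediate computation between overlapping patches, which is the source of the reported speed-up.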
Factorised spatial representation learning: application in semi-supervised myocardial segmentation
The success and generalisation of deep learning algorithms heavily depend on
learning good feature representations. In medical imaging this entails
representing anatomical information, as well as properties related to the
specific imaging setting. Anatomical information is required to perform further
analysis, whereas imaging information is key to disentangle scanner variability
and potential artefacts. The ability to factorise these would allow for
training algorithms only on the relevant information according to the task. To
date, such factorisation has not been attempted. In this paper, we propose a
methodology of latent space factorisation relying on the cycle-consistency
principle. As an example application, we consider cardiac MR segmentation,
where we separate information related to the myocardium from other features
related to imaging and surrounding substructures. We demonstrate the proposed
method's utility in a semi-supervised setting: we use very few labelled images
together with many unlabelled images to train a myocardium segmentation neural
network. Specifically, we achieve comparable performance to fully supervised
networks using a fraction of labelled images in experiments on ACDC and a
dataset from Edinburgh Imaging Facility QMRI. Code will be made available at
https://github.com/agis85/spatial_factorisation
Comment: Accepted in MICCAI 201
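As a rough illustration of the factorisation idea (not the paper's actual architecture), one can think of two encoders splitting an image into an anatomy code and an imaging/modality code, with a decoder whose reconstruction of the input enforces cycle-consistency. A toy linear sketch, with all dimensions and maps invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, da, dm = 16, 8, 4  # image, anatomy-factor, modality-factor dims (illustrative)

Ea = rng.standard_normal((da, d)) * 0.1       # anatomy encoder (stand-in for a conv net)
Em = rng.standard_normal((dm, d)) * 0.1       # imaging/modality encoder
D = rng.standard_normal((d, da + dm)) * 0.1   # decoder over both factors

x = rng.standard_normal(d)                    # a "image" vector
z = np.concatenate([Ea @ x, Em @ x])          # factorised latent code

x_rec = D @ z                                 # decode back to image space
cycle_loss = np.abs(x - x_rec).mean()         # cycle-consistency: x -> factors -> x
```

A downstream task (e.g. segmentation) would then consume only the anatomy factor, while the cycle loss keeps both factors jointly faithful to the input.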
Automatic Liver Segmentation Using an Adversarial Image-to-Image Network
Automatic liver segmentation in 3D medical images is essential in many
clinical applications, such as pathological diagnosis of hepatic diseases,
surgical planning, and postoperative assessment. However, it remains a very
challenging task due to complex backgrounds, fuzzy boundaries, and the varied
appearance of the liver. In this paper, we propose an automatic and efficient
algorithm to segment liver from 3D CT volumes. A deep image-to-image network
(DI2IN) is first deployed to generate the liver segmentation, employing a
convolutional encoder-decoder architecture combined with multi-level feature
concatenation and deep supervision. Then an adversarial network is utilized
during the training process to discriminate the output of DI2IN from the ground truth,
which further boosts the performance of DI2IN. The proposed method is trained
on an annotated dataset of 1000 CT volumes with a variety of scanning
protocols (e.g., contrast and non-contrast, varying resolutions and positions)
and large variations in populations (e.g., ages and pathology). Our approach
outperforms the state-of-the-art solutions in terms of segmentation accuracy
and computing efficiency.
Comment: Accepted by MICCAI 201
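The training objective described above can be sketched as a composition of a voxel-wise segmentation loss and an adversarial term; the tensors, weighting, and loss choices below are illustrative stand-ins, not the paper's exact formulation:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    # Binary cross-entropy, numerically clipped.
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

rng = np.random.default_rng(0)

# Toy arrays standing in for a ground-truth mask and a segmenter's prediction.
gt = (rng.random(100) > 0.5).astype(float)
pred = np.clip(gt + 0.2 * rng.standard_normal(100), 0.0, 1.0)

# Stand-ins for discriminator scores on the prediction and the ground truth.
d_on_pred, d_on_gt = 0.4, 0.9

seg_loss = bce(pred, gt)                                   # voxel-wise supervision
d_loss = bce(np.array([d_on_gt]), np.array([1.0])) + \
         bce(np.array([d_on_pred]), np.array([0.0]))       # discriminator: real vs. segmenter output
g_adv = bce(np.array([d_on_pred]), np.array([1.0]))        # segmenter tries to fool the discriminator
total_segmenter_loss = seg_loss + 0.1 * g_adv              # adversarial weight is illustrative
```

Alternating updates on `d_loss` and `total_segmenter_loss` are what "boost" the segmenter: the adversarial term penalises outputs the discriminator can tell apart from ground truth.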
Measuring the Values for Time
Most economic models of time allocation ignore constraints on what people can actually do with their time. Economists have recently emphasized the importance of considering prior consumption commitments that constrain behavior. This research develops a new model of time valuation that uses time commitments to distinguish consumers' choice margins and the different values of time these imply. The model is estimated using a new survey that elicits revealed and stated preference data on household time allocation. The empirical results support the framework and find an increasing marginal opportunity cost of time as longer time blocks are used.
Quantum generative adversarial learning
Generative adversarial networks (GANs) represent a powerful tool for
classical machine learning: a generator tries to create statistics for data
that mimics those of a true data set, while a discriminator tries to
discriminate between the true and fake data. The learning process for generator
and discriminator can be thought of as an adversarial game, and under
reasonable assumptions, the game converges to the point where the generator
generates the same statistics as the true data and the discriminator is unable
to discriminate between the true and the generated data. This paper introduces
the notion of quantum generative adversarial networks (QuGANs), where the data
consists either of quantum states, or of classical data, and the generator and
discriminator are equipped with quantum information processors. We show that
the unique fixed point of the quantum adversarial game also occurs when the
generator produces the same statistics as the data. Since quantum systems are
intrinsically probabilistic, the proof of the quantum case is different from -
and simpler than - the classical case. We show that when the data consists of
samples of measurements made on high-dimensional spaces, quantum adversarial
networks may exhibit an exponential advantage over classical adversarial
networks.
Comment: 5 pages, 1 figure
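For reference, the classical adversarial game that this work quantises is the standard GAN minimax objective, whose unique fixed point (under the usual assumptions) is reached when the generator distribution matches the data distribution:

```latex
\min_G \max_D \; V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right],
```

with equilibrium at \(p_g = p_{\mathrm{data}}\) and \(D^*(x) = \tfrac{1}{2}\); the paper's contribution is showing the analogous fixed point for quantum states and quantum processors.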