Adversarial Training for Adverse Conditions: Robust Metric Localisation using Appearance Transfer
We present a method of improving visual place recognition and metric
localisation under very strong appearance change. We learn an invertible
generator that can transform the conditions of images, e.g. from day to
night or summer to winter. This image-transforming filter is explicitly
designed to aid and abet feature matching using a new loss based on SURF
detector and dense descriptor maps. A network is trained to output synthetic
images optimised for feature matching given only an input RGB image, and these
generated images are used to localise the robot against a previously built map
using traditional sparse matching approaches. We benchmark our results using
multiple traversals of the Oxford RobotCar Dataset over a year-long period,
using one traversal as a map and the other to localise. We show that this
method significantly improves place recognition and localisation under changing
and adverse conditions, while reducing the number of mapping runs needed to
successfully achieve reliable localisation.
Comment: Accepted at ICRA201
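The abstract above couples the condition-transforming generator to a loss that compares dense descriptor maps of the generated image and the map image. As an illustrative sketch only (the paper builds its loss from SURF-style detector and dense descriptor maps; the `dense_descriptor` below is a hypothetical patch-mean stand-in, not the actual descriptor), the comparison could look like:

```python
import numpy as np

def dense_descriptor(img, kernel=3):
    """Toy stand-in for a dense descriptor map: local patch means.
    (The paper uses SURF-style detector/descriptor maps instead.)"""
    h, w = img.shape
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            # Window centred on (i, j) in original coordinates.
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out

def feature_matching_loss(generated, target):
    """L1 distance between dense descriptor maps of the generated
    (condition-transformed) image and the map image."""
    return np.abs(dense_descriptor(generated) - dense_descriptor(target)).mean()

day = np.random.default_rng(0).random((16, 16))
night_like = day * 0.5  # crude stand-in for a condition change
print(feature_matching_loss(day, day))         # 0.0 for identical images
print(feature_matching_loss(night_like, day))  # > 0 for mismatched conditions
```

Training the generator to minimise such a loss pushes synthetic images toward ones whose sparse features match the map, which is what enables the traditional sparse-matching localisation described above.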
Using CycleGANs for effectively reducing image variability across OCT devices and improving retinal fluid segmentation
Optical coherence tomography (OCT) has become the most important imaging
modality in ophthalmology. A substantial amount of research has recently been
devoted to the development of machine learning (ML) models for the
identification and quantification of pathological features in OCT images. Among
the several sources of variability the ML models have to deal with, a major
factor is the acquisition device, which can limit the ML model's
generalizability. In this paper, we propose to reduce the image variability
across different OCT devices (Spectralis and Cirrus) by using CycleGAN, an
unsupervised unpaired image transformation algorithm. The usefulness of this
approach is evaluated in the setting of retinal fluid segmentation, namely
intraretinal cystoid fluid (IRC) and subretinal fluid (SRF). First, we train a
segmentation model on images acquired with a source OCT device. Then we
evaluate the model on (1) source, (2) target and (3) transformed versions of
the target OCT images. The presented transformation strategy shows an F1 score
of 0.4 (0.51) for IRC (SRF) segmentations. Compared with traditional
transformation approaches, this means an F1 score gain of 0.2 (0.12).
Comment: * Contributed equally (order was defined by flipping a coin).
Accepted for publication in the IEEE International Symposium on Biomedical
Imaging (ISBI) 2019
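The evaluation protocol above (train a segmenter on the source device, then compare F1 on source, raw target, and CycleGAN-transformed target images) rests on a pixel-wise F1 score. A toy sketch of that metric and the domain-gap effect it measures; the binary masks here are random stand-ins, not OCT segmentations:

```python
import numpy as np

def f1_score(pred, truth):
    """Pixel-wise F1 (equivalently, Dice) between binary segmentation masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

rng = np.random.default_rng(1)
truth = rng.random((64, 64)) > 0.7           # hypothetical fluid mask
pred_source = truth.copy()                   # model evaluated in-domain
noise = rng.random((64, 64)) > 0.9
pred_target = np.logical_xor(truth, noise)   # degraded on raw target images

print(f1_score(pred_source, truth))  # 1.0 in-domain
print(f1_score(pred_target, truth))  # lower, illustrating the domain gap
```

The paper's reported gain (e.g. 0.2 F1 for IRC) is exactly this metric recovering once target images are first translated into the source device's appearance.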
Generating Diffusion MRI scalar maps from T1 weighted images using generative adversarial networks
Diffusion magnetic resonance imaging (diffusion MRI) is a non-invasive
microstructure assessment technique. Scalar measures such as FA (fractional
anisotropy) and MD (mean diffusivity), which quantify microstructural tissue
properties, can be obtained using diffusion models and data processing
pipelines. However, it is costly and time consuming to collect high quality
diffusion data. Here, we therefore demonstrate how Generative Adversarial
Networks (GANs) can be used to generate synthetic diffusion scalar measures
from structural T1-weighted images in a single optimized step. Specifically, we
train the popular CycleGAN model to learn to map a T1 image to FA or MD, and
vice versa. As an application, we show that synthetic FA images can be used as
a target for non-linear registration, to correct for geometric distortions
common in diffusion MRI.
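The T1-to-FA mapping above is learned with CycleGAN, whose central constraint is cycle consistency: mapping T1 to FA and back should reconstruct the input. A minimal numeric sketch of that loss, with toy invertible linear functions standing in for the two deep generators (the real generators are CNNs trained jointly with adversarial discriminators):

```python
import numpy as np

# Toy linear "generators": G maps T1 -> FA, F maps FA -> T1.
# Chosen as exact inverses purely to illustrate the loss.
def G(t1):
    return 0.5 * t1 + 0.1

def F(fa):
    return 2.0 * (fa - 0.1)

def cycle_loss(x, forward, backward):
    """L1 cycle-consistency: backward(forward(x)) should reconstruct x."""
    return np.abs(backward(forward(x)) - x).mean()

t1 = np.linspace(0.0, 1.0, 100)  # synthetic T1 intensities
print(cycle_loss(t1, G, F))      # ~0: G and F are exact inverses here
```

In training, this loss is minimised in both directions (T1→FA→T1 and FA→T1→FA) alongside the adversarial losses, which is what lets the model learn the mapping from unpaired data.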
METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy
Novel multimodal imaging methods are capable of generating extensive, super-high-resolution datasets for preclinical research. Yet a massive lack of annotations prevents the broad use of deep learning to analyze such data. So far, existing generative models fail to mitigate this problem because of frequent labeling errors. In this paper, we introduce a novel generative method that leverages real anatomical information to generate realistic image-label pairs of tumours. We construct a dual-pathway generator, for the anatomical image and label, trained in a cycle-consistent setup, constrained by an independent, pretrained segmentor. The generated images yield significant quantitative improvement compared to existing methods. To validate the quality of synthesis, we train segmentation networks on a dataset augmented with the synthetic data, substantially improving the segmentation over the baseline.
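A distinctive element of the method above is the independent, pretrained segmentor used to constrain the generator: a generated image-label pair is penalised whenever the frozen segmentor's prediction on the generated image disagrees with the generated label. A toy sketch of that consistency term, with simple thresholding standing in for the pretrained (and in reality deep) segmentor:

```python
import numpy as np

def segmentor(img):
    """Stand-in for the pretrained, frozen segmentor: plain thresholding."""
    return (img > 0.5).astype(float)

def segmentor_consistency_loss(gen_img, gen_lbl):
    """Penalise generated image/label pairs the frozen segmentor disagrees with."""
    return np.abs(segmentor(gen_img) - gen_lbl).mean()

rng = np.random.default_rng(3)
img = rng.random((16, 16))
good_lbl = segmentor(img)    # label consistent with the generated image
bad_lbl = 1.0 - good_lbl     # maximally inconsistent label
print(segmentor_consistency_loss(img, good_lbl))  # 0.0
print(segmentor_consistency_loss(img, bad_lbl))   # 1.0
```

Because the segmentor is pretrained and held fixed, this term acts as an external check that the two generator pathways (image and label) stay anatomically aligned, which is what keeps the synthetic image-label pairs usable for augmentation.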
A Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts
Most existing zero-shot learning methods consider the problem as a visual
semantic embedding one. Given the demonstrated capability of Generative
Adversarial Networks(GANs) to generate images, we instead leverage GANs to
imagine unseen categories from text descriptions and hence recognize novel
classes with no examples being seen. Specifically, we propose a simple yet
effective generative model that takes as input noisy text descriptions about an
unseen class (e.g. Wikipedia articles) and generates synthesized visual features
for this class. With added pseudo data, zero-shot learning is naturally
converted to a traditional classification problem. Additionally, to preserve
the inter-class discrimination of the generated features, a visual pivot
regularization is proposed as an explicit supervision. Unlike previous methods
using complex engineered regularizers, our approach can suppress the noise well
without additional regularization. Empirically, we show that our method
consistently outperforms the state of the art on the largest available
benchmarks on Text-based Zero-shot Learning.
Comment: To appear in CVPR1
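The visual pivot regularization described above pulls the generated features for a class toward that class's "visual pivot" (e.g. the mean of real visual features, where such features are available). A minimal sketch on synthetic Gaussian features; the feature distributions and dimensionality here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def visual_pivot_loss(generated, class_pivot):
    """Squared distance between the mean of generated features for a class
    and that class's visual pivot."""
    return np.square(generated.mean(axis=0) - class_pivot).sum()

rng = np.random.default_rng(4)
real_feats = rng.normal(1.0, 0.1, size=(50, 8))  # hypothetical real features
pivot = real_feats.mean(axis=0)

gen_on_pivot = rng.normal(1.0, 0.1, size=(50, 8))   # well-placed generations
gen_off_pivot = rng.normal(3.0, 0.1, size=(50, 8))  # drifted generations
print(visual_pivot_loss(gen_on_pivot, pivot))   # small
print(visual_pivot_loss(gen_off_pivot, pivot))  # large
```

Minimising this term keeps each class's synthesized features clustered around a distinct centre, preserving inter-class discrimination without the complex engineered regularizers the abstract contrasts against.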