Generative Cooperative Net for Image Generation and Data Augmentation
How to build a good model for image generation given an abstract concept is a
fundamental problem in computer vision. In this paper, we explore a generative
model for the task of generating unseen images with desired features. We
propose the Generative Cooperative Net (GCN) for image generation. The idea is
similar to that of generative adversarial networks, except that the two networks
are trained to work cooperatively rather than adversarially. Our experiments on hand-written
digit generation and facial expression generation show that GCN's two
cooperative counterparts (the generator and the classifier) can work together
nicely and achieve promising results. We also discovered that such a generative
model can serve as a data-augmentation tool. Our experiment applying this
method to a recognition task shows that it is very effective compared to other
existing methods. It is easy to set up and could help generate a very large
synthesized dataset. Comment: 12 pages, 8 figures
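The cooperative training the abstract describes can be illustrated at a toy scale: unlike a GAN, where the generator and discriminator play a minimax game, here a generator and a classifier descend the same joint objective. The sketch below is not the authors' architecture; it is a minimal linear toy, assuming a joint loss of cross-entropy (classifier) plus squared reconstruction error (generator), with all names and shapes hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 10, 64                      # number of classes, image-vector dimension
W_g = rng.normal(0, 0.1, (k, d))   # generator: one-hot label -> image vector
W_c = rng.normal(0, 0.1, (d, k))   # classifier: image vector -> class logits

# toy "real" images: one fixed prototype per class
X_real = rng.normal(0, 1.0, (k, d))
Y = np.eye(k)                      # one-hot labels, one per class

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(W_g, W_c):
    G = Y @ W_g                    # generated images
    P = softmax(G @ W_c)           # classifier predictions on generated images
    ce = -np.log(P[np.arange(k), np.arange(k)]).mean()  # classification loss
    rec = ((G - X_real) ** 2).mean()                    # reconstruction loss
    return ce + rec

lr = 0.5
loss_before = joint_loss(W_g, W_c)
for _ in range(200):
    G = Y @ W_g
    P = softmax(G @ W_c)
    dlogits = (P - np.eye(k)) / k                        # d(ce)/d(logits)
    dG = dlogits @ W_c.T + 2 * (G - X_real) / (k * d)    # grad w.r.t. generated images
    # both players descend the SAME loss -- cooperative, not adversarial
    W_c -= lr * (G.T @ dlogits)
    W_g -= lr * (Y.T @ dG)
loss_after = joint_loss(W_g, W_c)
```

After training, `loss_after` is well below `loss_before`: the generator learns images the classifier labels correctly, while the classifier adapts to the generator's outputs, which is the cooperative dynamic the abstract contrasts with adversarial training.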
ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network
In recent years, there has been an increasing interest in image-based plant
phenotyping, applying state-of-the-art machine learning approaches to tackle
challenging problems, such as leaf segmentation (a multi-instance problem) and
counting. Most of these algorithms need labelled data to learn a model for the
task at hand. Despite the recent release of a few plant phenotyping datasets,
large annotated plant image datasets for the purpose of training deep learning
algorithms are lacking. One common approach to alleviate the lack of training
data is dataset augmentation. Herein, we propose an alternative solution to
dataset augmentation for plant phenotyping, creating artificial images of
plants using generative neural networks. We propose the Arabidopsis Rosette
Image Generator (through) Adversarial Network: a deep convolutional network
that is able to generate synthetic rosette-shaped plants, inspired by DCGAN (a
recent adversarial network model using convolutional layers). Specifically, we
trained the network using A1, A2, and A4 of the CVPPP 2017 LCC dataset,
containing Arabidopsis thaliana plants. We show that our model is able to
generate realistic 128x128 colour images of plants. We train our network
conditioning on leaf count, such that it is possible to generate plants with a
given number of leaves, suitable, among other uses, for training regression-based
models. We propose a new Ax dataset of artificial plant images, obtained by
our ARIGAN. We evaluate this new dataset using a state-of-the-art leaf counting
algorithm, showing that the testing error is reduced when Ax is used as part of
the training data. Comment: 8 pages, 6 figures, 1 table, ICCV CVPPP Workshop 201
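The conditioning mechanism the abstract relies on can be sketched at the level of shapes: a noise vector is concatenated with an embedding of the desired leaf count before being mapped to a 128x128 colour image. This is not ARIGAN's deconvolutional stack; it is a shape-level illustration with a single random projection standing in for the generator, and the embedding scheme is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)
z_dim, cond_dim, img_hw = 100, 16, 128   # noise dim, condition dim, image size

def leaf_count_embedding(n_leaves, dim=cond_dim):
    # hypothetical embedding: a fixed random lookup table indexed by leaf count
    table = np.random.default_rng(42).normal(0, 1, (32, dim))
    return table[n_leaves]

def generate(n_leaves):
    z = rng.normal(0, 1, z_dim)
    # conditioning by concatenation: noise + leaf-count embedding
    h = np.concatenate([z, leaf_count_embedding(n_leaves)])
    # stand-in for the convolutional generator: one random projection
    W = rng.normal(0, 0.01, (h.size, img_hw * img_hw * 3))
    img = np.tanh(h @ W)                 # tanh keeps pixels in [-1, 1], as in DCGAN
    return img.reshape(img_hw, img_hw, 3)

img = generate(n_leaves=7)
```

Because the leaf count enters as part of the generator's input, a trained model of this form can be asked for a plant with exactly seven (or any other) leaves, which is what makes the synthetic Ax images usable as labelled training data for a leaf-counting regressor.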
- …