ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network
In recent years, there has been an increasing interest in image-based plant
phenotyping, applying state-of-the-art machine learning approaches to tackle
challenging problems, such as leaf segmentation (a multi-instance problem) and
counting. Most of these algorithms need labelled data to learn a model for the
task at hand. Despite the recent release of a few plant phenotyping datasets,
large annotated plant image datasets for the purpose of training deep learning
algorithms are lacking. One common approach to alleviate the lack of training
data is dataset augmentation. Herein, we propose an alternative solution to
dataset augmentation for plant phenotyping, creating artificial images of
plants using generative neural networks. We propose the Arabidopsis Rosette
Image Generator (through) Adversarial Network: a deep convolutional network
that is able to generate synthetic rosette-shaped plants, inspired by DCGAN (a
recent adversarial network model using convolutional layers). Specifically, we
trained the network using A1, A2, and A4 of the CVPPP 2017 LCC dataset,
containing Arabidopsis thaliana plants. We show that our model is able to
generate realistic 128×128 colour images of plants. We train our network
conditioned on leaf count, so that it is possible to generate plants with a
given number of leaves, suitable, among other uses, for training
regression-based models. We propose a new Ax dataset of artificial plant
images, obtained by our ARIGAN. We evaluate this new dataset using a
state-of-the-art leaf counting algorithm, showing that the testing error is
reduced when Ax is used as part of the training data. Comment: 8 pages, 6 figures, 1 table, ICCV CVPPP Workshop 2017
Convolutional Neural Net-Based Cassava Storage Root Counting Using Real and Synthetic Images
© 2019 Atanbori, Montoya-P, Selvaraj, French and Pridmore. Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state-of-the-art in many image processing and analysis tasks, there are currently a limited number of Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low-cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images.
Our system first predicts age group ('young' and 'old' roots; pertinent to our image capture regime) in a given image, and then, based on this prediction, selects an appropriate model to predict the number of storage roots. We achieve 91% accuracy on predicting the ages of storage roots, and 86% and 71% overall percentage agreement on counting 'old' and 'young' storage roots, respectively. Thus we are able to demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.
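One plausible reading of the "overall percentage agreement" metric reported above is the fraction of images whose predicted count exactly matches the reference count, expressed as a percentage. A minimal sketch with hypothetical counts (the paper may define agreement differently):

```python
def percentage_agreement(predicted, reference):
    """Percentage of images where the predicted count exactly matches
    the reference count. One plausible reading of 'overall percentage
    agreement'; the original paper may weight disagreements differently."""
    matches = sum(p == r for p, r in zip(predicted, reference))
    return 100.0 * matches / len(reference)

# Hypothetical storage-root counts for five images
pred = [4, 6, 5, 3, 7]
true = [4, 6, 4, 3, 7]
print(percentage_agreement(pred, true))  # 80.0
```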
Harnessing the Power of AI based Image Generation Model DALLE 2 in Agricultural Settings
This study investigates the potential impact of artificial intelligence (AI)
on the enhancement of visualization processes in the agricultural sector, using
the advanced AI image generator, DALLE 2, developed by OpenAI. By
synergistically utilizing the natural language processing proficiency of
ChatGPT and the generative prowess of the DALLE 2 model, which employs a
Generative Adversarial Network (GAN) framework, our research offers an
innovative method to transform textual descriptors into realistic visual
content. Our rigorously assembled datasets include a broad spectrum of
agricultural elements, such as fruits, plants, and scenarios differentiating
crops from weeds, with AI-generated and original versions maintained for comparison. The
quality and accuracy of the AI-generated images were evaluated via established
metrics including mean squared error (MSE), peak signal-to-noise ratio (PSNR),
and feature similarity index (FSIM). The results underline the significant role
of the DALLE 2 model in enhancing visualization processes in agriculture,
aiding in more informed decision-making, and improving resource distribution.
The outcomes of this research highlight the imminent rise of an AI-led
transformation in the realm of precision agriculture. Comment: 22 pages, 13 figures, 2 tables
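The MSE and PSNR metrics used above to compare generated and original images have standard definitions, sketched below; FSIM is considerably more involved and is omitted. The example images are hypothetical.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the generated
    image is closer to the original."""
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

original = np.zeros((4, 4), dtype=np.uint8)
generated = np.full((4, 4), 10, dtype=np.uint8)
print(mse(original, generated))   # 100.0
print(psnr(original, generated))  # about 28.13 dB
```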
Predicting Plant Growth from Time-Series Data Using Deep Learning
Phenotyping involves the quantitative assessment of anatomical, biochemical, and physiological plant traits. Natural plant growth cycles can be extremely slow, hindering the experimental processes of phenotyping. Deep learning offers a great deal of support for automating and addressing key plant phenotyping research issues. Machine learning-based high-throughput phenotyping is a potential solution to the phenotyping bottleneck, promising to accelerate the experimental cycles within phenomic research. This research presents a study of deep networks’ potential to predict plants’ expected growth, by generating segmentation masks of root and shoot systems into the future. We adapt an existing generative adversarial predictive network to this new domain. The results show an efficient plant leaf and root segmentation network that provides predictive segmentation of what a leaf and root system will look like at a future time, based on time-series data of plant growth. We present benchmark results on two public datasets of Arabidopsis (A. thaliana) and Brassica rapa (Komatsuna) plants. The experimental results show strong performance and the capability of the proposed methods to match expert annotation. The proposed method is highly adaptable and trainable (via transfer learning/domain adaptation) on different plant species and mutations.
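As a crude illustration of the prediction task, the sketch below extrapolates a future segmentation mask from a time series by dilating the last observed mask. This dilation baseline is our assumption purely for illustration; the work above replaces such heuristics with a learned adversarial predictor.

```python
import numpy as np

def dilate(mask):
    """One step of 4-neighbour binary dilation, a crude proxy for growth."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def predict_future_mask(masks, steps=1):
    """Naive baseline: dilate the last observed mask a fixed number of
    steps. A learned predictor would instead infer the growth pattern
    from the whole time series."""
    out = masks[-1]
    for _ in range(steps):
        out = dilate(out)
    return out

seed = np.zeros((5, 5), dtype=bool)
seed[2, 2] = True
series = [seed, dilate(seed)]            # two observed frames of growth
future = predict_future_mask(series, 1)  # predicted next frame
print(int(future.sum()))  # 13 foreground pixels
```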
Inside Out: Transforming Images of Lab-Grown Plants for Machine Learning Applications in Agriculture
Machine learning tasks often require a significant amount of training data
for the resultant network to perform suitably for a given problem in any
domain. In agriculture, dataset sizes are further limited by phenotypical
differences between two plants of the same genotype, often as a result of
differing growing conditions. Synthetically-augmented datasets have shown
promise in improving existing models when real data is not available. In this
paper, we employ a contrastive unpaired translation (CUT) generative
adversarial network (GAN) and simple image processing techniques to translate
indoor plant images to appear as field images. While we train our network to
translate an image containing only a single plant, we show that our method is
easily extendable to produce multiple-plant field images. Furthermore, we use
our synthetic multi-plant images to train several YOLOv5 nano object detection
models to perform the task of plant detection and measure the accuracy of the
model on real field data images. Including training data generated by the
CUT-GAN leads to better plant detection performance compared to a network
trained solely on real data. Comment: 35 pages, 23 figures
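The extension from single-plant to multi-plant field images can be illustrated with simple compositing: translated plant crops are pasted onto a field background at chosen positions. This is a minimal sketch with hypothetical image sizes, not the authors' exact procedure.

```python
import numpy as np

def compose_field(background, plant, positions):
    """Paste copies of a single translated plant image onto a field
    background at the given (row, col) top-left positions, yielding a
    synthetic multi-plant image for detector training."""
    canvas = background.copy()
    h, w = plant.shape[:2]
    for r, c in positions:
        canvas[r:r + h, c:c + w] = plant
    return canvas

field = np.zeros((8, 8, 3), dtype=np.uint8)      # hypothetical field tile
plant = np.full((2, 2, 3), 255, dtype=np.uint8)  # hypothetical plant crop
multi = compose_field(field, plant, [(1, 1), (5, 4)])
print(int(multi.sum()))  # 2 plants * 4 px * 3 channels * 255 = 6120
```

In practice the pasted crops would also get per-plant scaling, rotation, and blending at the boundaries; only the compositing idea is taken from the abstract.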
AI Radar Sensor: Creating Radar Depth Sounder Images Based on Generative Adversarial Network
This work is licensed under a Creative Commons Attribution 4.0 International License. Significant resources have been spent collecting and storing large and heterogeneous radar datasets during expensive Arctic and Antarctic fieldwork. The vast majority of data available is unlabeled, and the labeling process is both time-consuming and expensive. One possible alternative to the labeling process is the use of synthetically generated data with artificial intelligence. Instead of labeling real images, we can generate synthetic data based on arbitrary labels. In this way, training data can be quickly augmented with additional images. In this research, we evaluated the performance of synthetically generated radar images based on modified cycle-consistent adversarial networks. We conducted several experiments to test the quality of the generated radar imagery. We also tested the quality of a state-of-the-art contour detection algorithm on synthetic data and different combinations of real and synthetic data. Our experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and training of deep neural networks. However, the synthetic images generated by GANs cannot be used solely for training a neural network (training on synthetic and testing on real) as they cannot simulate all of the radar characteristics, such as noise or Doppler effects. To the best of our knowledge, this is the first work in creating radar sounder imagery based on a generative adversarial network.
Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming
An effective perception system is a fundamental component for farming robots,
as it enables them to properly perceive the surrounding environment and to
carry out targeted operations. The most recent approaches make use of
state-of-the-art machine learning techniques to learn an effective model for
the target task. However, those methods need a large amount of labelled data
for training. A recent approach to deal with this issue is data augmentation
through Generative Adversarial Networks (GANs), where entire synthetic scenes
are added to the training data, thus enlarging and diversifying their
informative content. In this work, we propose an alternative solution with
respect to the common data augmentation techniques, applying it to the
fundamental problem of crop/weed segmentation in precision farming. Starting
from real images, we create semi-artificial samples by replacing the most
relevant object classes (i.e., crop and weeds) with their synthesized
counterparts. To do that, we employ a conditional GAN (cGAN), where the
generative model is trained by conditioning the shape of the generated object.
Moreover, in addition to RGB data, we take into account also near-infrared
(NIR) information, generating four channel multi-spectral synthetic images.
Quantitative experiments, carried out on three publicly available datasets,
show that (i) our model is capable of generating realistic multi-spectral
images of plants and (ii) the usage of such synthetic images in the training
process improves the segmentation performance of state-of-the-art semantic
segmentation Convolutional Networks. Comment: Submitted to Robotics and Autonomous Systems
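The replacement of crop and weed pixels with their synthesized counterparts can be sketched as per-pixel mask blending over four-channel (RGB + NIR) arrays. This is an illustrative sketch under assumed array shapes, not the authors' exact pipeline.

```python
import numpy as np

def replace_with_synthetic(real, synthetic, mask):
    """Build a semi-artificial training sample: inside the object mask
    (crop or weed) pixels come from the cGAN output, elsewhere the real
    scene is kept. Operates per pixel on 4-channel RGB+NIR arrays."""
    m = mask[..., None].astype(real.dtype)  # broadcast mask over channels
    return m * synthetic + (1 - m) * real

real = np.ones((4, 4, 4))    # hypothetical RGB+NIR scene, all ones
synth = np.zeros((4, 4, 4))  # hypothetical synthesized patch, all zeros
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1           # hypothetical crop/weed region
sample = replace_with_synthetic(real, synth, mask)
print(sample.sum())  # 64 pixel-channels minus 16 replaced = 48.0
```

The cGAN itself is conditioned on the object's shape (the mask); the blending step above only shows how its output is stitched back into the real scene.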