Deep Learning for Multi-task Plant Phenotyping
Plant phenotyping has continued to pose a challenge to computer vision for many years. There is a particular demand to accurately quantify images of crops, and the natural variability and structure of these plants presents unique difficulties. Recently, machine learning approaches have shown impressive results in many areas of computer vision, but these rely on large datasets that are at present not available for crops. We present a new dataset, called ACID, that provides hundreds of accurately annotated images of wheat spikes and spikelets, along with image-level class annotation. We then present a deep learning approach capable of accurately localising wheat spikes and spikelets, despite the varied nature of this dataset. As well as locating features, our network offers near-perfect counting accuracy for spikes (95.91%) and spikelets (99.66%). We also extend the network to perform simultaneous classification of images, demonstrating the power of multi-task deep architectures for plant phenotyping. We hope that our dataset will be useful to researchers in continued improvement of plant and crop phenotyping. With this in mind, alongside the dataset we will make all code and trained models available online.
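The multi-task structure described in this abstract can be illustrated with a minimal sketch: a shared convolutional trunk feeding both a dense heatmap head (for localising spikes/spikelets, from which counts can be derived) and an image-level classification head. This is not the authors' architecture; all layer sizes and names here are illustrative assumptions.

```python
# Hedged sketch (NOT the paper's network): a shared trunk with a
# localisation-heatmap head and an image-level classification head,
# mirroring the multi-task structure the abstract describes.
import torch
import torch.nn as nn

class MultiTaskPhenotypeNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Shared convolutional trunk (illustrative depth only).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Head 1: per-pixel heatmap for feature localisation.
        self.heatmap = nn.Conv2d(32, 1, 1)
        # Head 2: image-level class logits.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        f = self.trunk(x)
        return self.heatmap(f), self.classifier(f)

net = MultiTaskPhenotypeNet()
heat, cls = net(torch.randn(1, 3, 64, 64))
# Heatmap keeps the input resolution; classifier emits one logit per class.
```

Training such a model simply sums a localisation loss on the heatmap and a cross-entropy loss on the class logits, which is the usual way multi-task heads share a trunk.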
ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network
In recent years, there has been an increasing interest in image-based plant
phenotyping, applying state-of-the-art machine learning approaches to tackle
challenging problems, such as leaf segmentation (a multi-instance problem) and
counting. Most of these algorithms need labelled data to learn a model for the
task at hand. Despite the recent release of a few plant phenotyping datasets,
large annotated plant image datasets for the purpose of training deep learning
algorithms are lacking. One common approach to alleviate the lack of training
data is dataset augmentation. Herein, we propose an alternative solution to
dataset augmentation for plant phenotyping, creating artificial images of
plants using generative neural networks. We propose the Arabidopsis Rosette
Image Generator (through) Adversarial Network: a deep convolutional network
that is able to generate synthetic rosette-shaped plants, inspired by DCGAN (a
recent adversarial network model using convolutional layers). Specifically, we
trained the network using A1, A2, and A4 of the CVPPP 2017 LCC dataset,
containing Arabidopsis thaliana plants. We show that our model is able to
generate realistic 128x128 colour images of plants. We train our network
conditioning on leaf count, such that it is possible to generate plants with a
given number of leaves, suitable, among other uses, for training regression-based
models. We propose a new Ax dataset of artificial plant images, obtained by
our ARIGAN. We evaluate this new dataset using a state-of-the-art leaf counting
algorithm, showing that the testing error is reduced when Ax is used as part of
the training data.
Comment: 8 pages, 6 figures, 1 table, ICCV CVPPP Workshop 201
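The conditioning idea in this abstract, generating a plant with a requested leaf count, can be sketched as a DCGAN-style generator whose input concatenates a noise vector with an embedded leaf-count label. This is not the ARIGAN release; the layer sizes, embedding width, and `max_leaves` bound are illustrative assumptions.

```python
# Hedged sketch (NOT the ARIGAN code): a DCGAN-style generator conditioned
# on leaf count, upsampling a noise+condition vector to a 128x128 RGB image
# as the abstract describes.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, max_leaves=16):
        super().__init__()
        self.embed = nn.Embedding(max_leaves, 16)  # leaf-count condition
        self.net = nn.Sequential(
            # (z_dim+16) x 1 x 1 -> 128x128 image via strided transposed convs
            nn.ConvTranspose2d(z_dim + 16, 256, 4, 1, 0), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),           # 32x32
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),           # 64x64
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),            # 128x128
        )

    def forward(self, z, leaf_count):
        cond = self.embed(leaf_count)                      # (B, 16)
        x = torch.cat([z, cond], dim=1)[:, :, None, None]  # (B, z+16, 1, 1)
        return self.net(x)

g = ConditionalGenerator()
# Ask for two plants with 5 and 8 leaves respectively (labels, not guarantees).
img = g(torch.randn(2, 100), torch.tensor([5, 8]))
```

In a full conditional GAN the discriminator would receive the same leaf-count embedding, so that the adversarial game ties image content to the requested count.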
TasselNet: Counting maize tassels in the wild via local counts regression network
Accurately counting maize tassels is important for monitoring the growth
status of maize plants. This tedious task, however, is still mainly done by
manual efforts. In the context of modern plant phenotyping, automating this
task is required to meet the need of large-scale analysis of genotype and
phenotype. In recent years, computer vision technologies have experienced a
significant breakthrough due to the emergence of large-scale datasets and
increased computational resources. Naturally image-based approaches have also
received much attention in plant-related studies. Yet a fact is that most
image-based systems for plant phenotyping are deployed under controlled
laboratory environment. When transferring the application scenario to
unconstrained in-field conditions, intrinsic and extrinsic variations in the
wild pose great challenges for accurate counting of maize tassels, which goes
beyond the ability of conventional image processing techniques. This calls for
further robust computer vision approaches to address in-field variations. This
paper studies the in-field counting problem of maize tassels. To our knowledge,
this is the first time that a plant-related counting problem has been considered
using computer vision technologies in an unconstrained field-based environment.
Comment: 14 page
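The "local counts regression" idea named in this paper's title can be sketched without the network itself: a regressor predicts a count for every overlapping subwindow, and the global count is recovered by spreading each local count over its window, normalising by how often each pixel was covered, and summing the resulting density map. The function and variable names below are illustrative, not the authors' code.

```python
# Hedged sketch of merging overlapping local-count predictions into one
# global count (illustrative of the local-counts idea, not TasselNet itself).
import numpy as np

def merge_local_counts(local_counts, positions, win, image_shape):
    """local_counts[i] is the predicted count for the win x win window
    whose top-left corner is positions[i] (row, col)."""
    acc = np.zeros(image_shape)    # accumulated per-pixel count mass
    cover = np.zeros(image_shape)  # how many windows cover each pixel
    for c, (r, col) in zip(local_counts, positions):
        acc[r:r+win, col:col+win] += c / (win * win)  # spread count uniformly
        cover[r:r+win, col:col+win] += 1
    density = acc / np.maximum(cover, 1)  # average overlapping estimates
    return density.sum()

# Sanity check on a uniform scene: density 0.5 objects/pixel on an 8x8
# image, 4x4 windows with stride 2, so each window "predicts" 16 * 0.5 = 8.
positions = [(r, c) for r in (0, 2, 4) for c in (0, 2, 4)]
total = merge_local_counts([8.0] * 9, positions, 4, (8, 8))
print(total)  # 32.0, the true global count
```

Regressing local counts rather than exact object positions is what makes the approach tolerant of the occlusion and clutter of in-field imagery: each window only has to get its aggregate right.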
A Mixed Data-Based Deep Neural Network to Estimate Leaf Area Index in Wheat Breeding Trials
Remote and non-destructive estimation of leaf area index (LAI) has been a challenge in
the last few decades as the direct and indirect methods available are laborious and
time-consuming. The recent emergence of high-throughput plant phenotyping platforms has
increased the need to develop new phenotyping tools for better decision-making by breeders. In
this paper, a novel model based on artificial intelligence algorithms and nadir-view red green blue
(RGB) images taken from a terrestrial high-throughput phenotyping platform is presented. The
model mixes numerical data collected in a wheat breeding field and visual features extracted from
the images to make rapid and accurate LAI estimations. Model-based LAI estimations were
validated against LAI measurements determined non-destructively using an allometric
relationship obtained in this study. The model performance was also compared with LAI estimates
obtained by other classical indirect methods based on bottom-up hemispherical images and gap
fraction theory. Model-based LAI estimations were highly correlated with ground-truth LAI. The
model performance was slightly better than that of the hemispherical image-based method, which
tended to underestimate LAI. These results show the great potential of the developed model for
near real-time LAI estimation, which can be further improved in the future by increasing the
dataset used to train the model.
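The "mixed data" design this abstract describes, combining numerical field measurements with visual features from nadir RGB images, is commonly implemented as a two-branch network fused by concatenation. The sketch below is an assumption-laden illustration, not the paper's model; branch widths and the number of numeric inputs are arbitrary.

```python
# Hedged sketch (NOT the paper's model): a CNN branch for nadir RGB images
# and an MLP branch for tabular field data, fused into one LAI regressor.
import torch
import torch.nn as nn

class MixedDataLAINet(nn.Module):
    def __init__(self, n_numeric=4):
        super().__init__()
        # Image branch: small conv stack -> globally pooled feature vector.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Numeric branch: MLP over field measurements.
        self.numeric_branch = nn.Sequential(nn.Linear(n_numeric, 16), nn.ReLU())
        # Fusion head: concatenated features -> a single LAI estimate.
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, image, numeric):
        f = torch.cat([self.image_branch(image),
                       self.numeric_branch(numeric)], dim=1)
        return self.head(f)

model = MixedDataLAINet()
lai = model(torch.randn(2, 3, 64, 64), torch.randn(2, 4))  # one LAI per sample
```

Late fusion by concatenation keeps each modality's feature extractor simple and lets the head learn how much weight each source deserves, which is a standard choice when tabular covariates complement imagery.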