ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network
In recent years, there has been an increasing interest in image-based plant
phenotyping, applying state-of-the-art machine learning approaches to tackle
challenging problems, such as leaf segmentation (a multi-instance problem) and
counting. Most of these algorithms need labelled data to learn a model for the
task at hand. Despite the recent release of a few plant phenotyping datasets,
large annotated plant image datasets for the purpose of training deep learning
algorithms are lacking. One common approach to alleviate the lack of training
data is dataset augmentation. Herein, we propose an alternative solution to
dataset augmentation for plant phenotyping, creating artificial images of
plants using generative neural networks. We propose the Arabidopsis Rosette
Image Generator (through) Adversarial Network: a deep convolutional network
that is able to generate synthetic rosette-shaped plants, inspired by DCGAN (a
recent adversarial network model using convolutional layers). Specifically, we
trained the network using the A1, A2, and A4 subsets of the CVPPP 2017 LCC dataset, containing Arabidopsis thaliana plants. We show that our model is able to generate realistic 128x128 colour images of plants. We train our network conditioned on leaf count, so that it is possible to generate plants with a given number of leaves, suitable, among other uses, for training regression-based models. We propose a new Ax dataset of artificial plant images obtained with our ARIGAN. We evaluate this new dataset using a state-of-the-art leaf counting algorithm, showing that the testing error is reduced when Ax is used as part of the training data.
Comment: 8 pages, 6 figures, 1 table, ICCV CVPPP Workshop 2017
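The conditioning idea described above can be sketched in a few lines. This is a minimal, illustrative stand-in rather than the paper's implementation: a single random linear map replaces the DCGAN-style deconvolutional generator, and the latent size, leaf-count bound, and all names are assumptions. Only the mechanism, concatenating a one-hot leaf-count vector with the latent noise before generation, reflects the approach.

```python
import numpy as np

# Hypothetical sketch of leaf-count conditioning: a one-hot encoding of
# the desired leaf count is concatenated with the latent noise vector.
# The "generator" is a single random linear layer standing in for the
# DCGAN-style network; all sizes below are illustrative assumptions.

LATENT_DIM = 100   # noise vector size (assumed)
MAX_LEAVES = 20    # upper bound on leaf count (assumed)
IMG_SIDE = 128     # the paper generates 128x128 colour images

rng = np.random.default_rng(0)
W = rng.standard_normal((LATENT_DIM + MAX_LEAVES, IMG_SIDE * IMG_SIDE * 3)) * 0.01

def generate(leaf_count: int) -> np.ndarray:
    """Return one synthetic 128x128 RGB image conditioned on leaf_count."""
    z = rng.standard_normal(LATENT_DIM)
    onehot = np.zeros(MAX_LEAVES)
    onehot[leaf_count - 1] = 1.0
    x = np.concatenate([z, onehot]) @ W               # toy stand-in for the network
    return np.tanh(x).reshape(IMG_SIDE, IMG_SIDE, 3)  # values in [-1, 1]

img = generate(leaf_count=7)
```

In a real conditional GAN the same one-hot vector would also be shown to the discriminator, so the generator is penalised for producing plants with the wrong leaf count.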
Learning to Count Leaves in Rosette Plants
Counting the number of leaves in plants is important for plant phenotyping, since it can be used to assess plant growth stages. We propose a learning-based approach for counting leaves in rosette (model) plants. We relate image-based descriptors, learned in an unsupervised fashion, to leaf counts using a supervised regression model. To take advantage of the circular and coplanar arrangement of leaves, and to introduce scale and rotation invariance, we learn features in a log-polar representation. Image patches extracted in this log-polar domain are provided to K-means, which builds a codebook in an unsupervised manner. Feature codes are obtained by projecting patches onto the codebook using triangle encoding, introducing both sparsity and a purpose-designed representation. A global, per-plant image descriptor is obtained by pooling local features in specific regions of the image. Finally, we provide the global descriptors to a support vector regression framework to estimate the number of leaves in a plant. We evaluate our method on datasets of the Leaf Counting Challenge (LCC), containing images of Arabidopsis and tobacco plants. Experimental results show that on average we reduce the absolute counting error by 40% w.r.t. the winner of the 2014 edition of the challenge, a counting-via-segmentation method. When compared to state-of-the-art density-based approaches to counting, approximately 75% fewer counting errors are observed on Arabidopsis image data. Our findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image.
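The codebook-plus-regression pipeline above can be sketched end to end under simplifying assumptions: random vectors stand in for log-polar image patches, a few Lloyd iterations stand in for full K-means, and ordinary least squares replaces support vector regression. Sizes and names are illustrative, not from the paper.

```python
import numpy as np

# Sketch of the counting-by-regression pipeline: unsupervised codebook,
# triangle encoding, per-plant pooling, then a linear regressor.
# OLS stands in for SVR; the "patches" are synthetic random vectors.

rng = np.random.default_rng(1)
K, PATCH_DIM = 8, 16

def kmeans(data, k, iters=10):
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def triangle_encode(patches, centers):
    # f_k = max(0, mean_distance - distance_k): sparse, non-negative codes
    d = np.linalg.norm(patches[:, None] - centers[None], axis=2)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

def describe(patches, centers):
    # global per-plant descriptor: mean-pool the codes over all patches
    return triangle_encode(patches, centers).mean(axis=0)

# toy data: each "plant" yields patches whose scale grows with leaf count
plants = [rng.standard_normal((30, PATCH_DIM)) * (1 + 0.1 * c) for c in range(3, 13)]
counts = np.arange(3, 13, dtype=float)

codebook = kmeans(np.vstack(plants), K)
X = np.array([describe(p, codebook) for p in plants])
A = np.hstack([X, np.ones((len(X), 1))])         # add bias column
w, *_ = np.linalg.lstsq(A, counts, rcond=None)   # OLS in place of SVR
pred = A @ w
```

The key property preserved here is that training needs only a single scalar label per image (the leaf count), not per-leaf segmentation masks.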
A Rapidly Deployable Classification System using Visual Data for the Application of Precision Weed Management
In this work we demonstrate a rapidly deployable weed classification system
that uses visual data to enable autonomous precision weeding without making
prior assumptions about which weed species are present in a given field.
Previous work in this area relies on having prior knowledge of the weed species
present in the field. This assumption cannot always hold true for every field,
and thus limits the use of weed classification systems based on this
assumption. In this work, we obviate this assumption and introduce a rapidly
deployable approach able to operate on any field without any weed species
assumptions prior to deployment. We present a three stage pipeline for the
implementation of our weed classification system consisting of initial field
surveillance, offline processing and selective labelling, and automated
precision weeding. The key characteristic of our approach is the combination of
plant clustering and selective labelling, which enables our system to operate without prior knowledge of weed species. Testing on field data, we are able to label 12.3 times fewer images than with traditional full labelling, whilst reducing classification accuracy by only 14%.
Comment: 36 pages, 14 figures, published in Computers and Electronics in Agriculture, Vol. 14
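The clustering-plus-selective-labelling idea can be illustrated with a small sketch: feature vectors (standing in for per-plant image embeddings) are clustered without labels, only the point nearest each cluster centre is sent to a human annotator, and that label is propagated to the rest of the cluster. The two-cluster data, the labelling oracle, and all names are assumptions for illustration, not the paper's system.

```python
import numpy as np

# Sketch of selective labelling via clustering: label one representative
# per cluster, propagate to all cluster members. Toy data only.

rng = np.random.default_rng(2)

def two_means(data, iters=15):
    # farthest-point initialisation, then standard Lloyd iterations
    centers = np.stack([data[0], data[np.linalg.norm(data - data[0], axis=1).argmax()]])
    for _ in range(iters):
        d = np.linalg.norm(data[:, None] - centers[None], axis=2)
        lab = d.argmin(axis=1)
        for j in range(2):
            if (lab == j).any():
                centers[j] = data[lab == j].mean(axis=0)
    return centers, lab

# toy "field": two well-separated species in a 4-D feature space
crop = rng.standard_normal((50, 4)) + 5.0
weed = rng.standard_normal((50, 4)) - 5.0
X = np.vstack([crop, weed])
truth = np.array([0] * 50 + [1] * 50)   # stands in for the human oracle

centers, assign = two_means(X)

labels = np.empty(len(X), dtype=int)
queries = 0
for j in range(2):
    members = np.where(assign == j)[0]
    # query the human only for the point closest to the cluster centre
    medoid = members[np.linalg.norm(X[members] - centers[j], axis=1).argmin()]
    labels[members] = truth[medoid]
    queries += 1

accuracy = (labels == truth).mean()
```

Here 100 plants are labelled with only 2 human queries; the labelling saving the paper reports comes from the same propagation principle, applied to many clusters of real field imagery.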
Unsupervised image segmentation with neural networks
The segmentation of colour (RGB) images, distinguishing clusters of image points that represent, for example, background, leaves and flowers, is performed in a multi-dimensional environment. In a two-dimensional environment, clusters can be divided by lines; in a three-dimensional environment, by planes; and in an n-dimensional environment, by (n-1)-dimensional structures. Starting with the complete data set, a first neural network represents an (n-1)-dimensional structure that divides the data set into two subsets. Each subset is then divided again by an additional neural network: recursive partitioning. This results in a tree structure with a neural network at each branching point. Partitioning stops as soon as a partitioning criterion can no longer be fulfilled. After the unsupervised training, the neural system can be used for the segmentation of images.
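The recursive partitioning scheme can be sketched as follows. As an illustrative assumption, each node's separating hyperplane is taken to be the top principal direction of the data with a threshold at the mean projection, standing in for the small trained neural network of the original method; the stopping thresholds and all names are likewise assumed.

```python
import numpy as np

# Sketch of unsupervised recursive partitioning: each tree node holds a
# hyperplane (w, b) splitting its data in two; recursion stops when a
# subset is too small or too compact. PCA replaces the per-node network.

MIN_SIZE, MIN_SPREAD = 10, 0.5   # illustrative stopping criteria

def split_plane(data):
    # hyperplane normal = top principal direction; offset = mean projection
    centred = data - data.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centred.T))
    w = vecs[:, -1]
    return w, data.mean(axis=0) @ w

def build(data):
    if len(data) < MIN_SIZE or data.std() < MIN_SPREAD:
        return {"leaf": data.mean(axis=0)}        # cluster prototype
    w, b = split_plane(data)
    side = data @ w > b
    if side.all() or not side.any():              # degenerate split
        return {"leaf": data.mean(axis=0)}
    return {"w": w, "b": b, "lo": build(data[~side]), "hi": build(data[side])}

def segment(node, x):
    # walk the tree to a leaf; the path string serves as the cluster id
    path = ""
    while "leaf" not in node:
        hi = x @ node["w"] > node["b"]
        path += "1" if hi else "0"
        node = node["hi"] if hi else node["lo"]
    return path

rng = np.random.default_rng(4)
# toy RGB-like pixels: dark background vs bright foliage
background = rng.standard_normal((40, 3)) * 0.3
foliage = rng.standard_normal((40, 3)) * 0.3 + 10.0
tree = build(np.vstack([background, foliage]))
```

Once built, `segment` assigns every pixel a cluster id by descending the tree, which is the image-segmentation step the abstract describes.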
Image analysis and statistical modelling for measurement and quality assessment of ornamental horticulture crops in glasshouses
Image analysis for ornamental crops is discussed with examples from the bedding plant industry. Feed-forward artificial neural networks are used to segment top- and side-view images of three contrasting species of bedding plants. The segmented images provide objective measurements of leaf and flower cover, colour, uniformity and leaf canopy height. On each imaging occasion, each pack was scored for quality by an assessor panel, and it is shown that image analysis can explain 88.5%, 81.7% and 70.4% of the panel quality scores for the three species, respectively. Stereoscopy for crop height and uniformity is outlined briefly. The methods discussed here could be used for crop grading at marketing, or for monitoring and assessment of growing crops within a glasshouse during all stages of production.