DeepWheat: Estimating Phenotypic Traits from Crop Images with Deep Learning
In this paper, we investigate estimating emergence and biomass traits from
color images and elevation maps of wheat field plots. We employ a
state-of-the-art deconvolutional network for segmentation and convolutional
architectures, with residual and Inception-like layers, to estimate traits via
high-dimensional nonlinear regression. Evaluation was performed on two
different species of wheat, grown in field plots for an experimental plant
breeding study. Our framework achieves satisfactory performance with mean and
standard deviation of absolute difference of 1.05 and 1.40 counts for emergence
and 1.45 and 2.05 for biomass estimation. Our results for counting wheat plants
from field images are better than the accuracy reported for the similar, but
arguably less difficult, task of counting leaves from indoor images of rosette
plants. Our results for biomass estimation, even with a very small dataset,
improve upon all previously proposed approaches in the literature. Comment: WACV 2018 (Code repository:
https://github.com/p2irc/deepwheat_WACV-2018)
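As a concrete reading of the reported metric, the mean and standard deviation of the absolute difference between predicted and true counts can be computed as follows (the counts below are illustrative values, not the paper's data):

```python
import numpy as np

# Illustrative ground-truth and predicted emergence counts for five plots
# (made-up values; the paper's actual measurements are not reproduced here).
true_counts = np.array([52, 47, 60, 55, 49], dtype=float)
pred_counts = np.array([53, 45, 61, 57, 49], dtype=float)

# The paper reports the mean and standard deviation of the absolute difference.
abs_diff = np.abs(pred_counts - true_counts)
mean_abs_diff = abs_diff.mean()
std_abs_diff = abs_diff.std()
```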
ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network
In recent years, there has been an increasing interest in image-based plant
phenotyping, applying state-of-the-art machine learning approaches to tackle
challenging problems, such as leaf segmentation (a multi-instance problem) and
counting. Most of these algorithms need labelled data to learn a model for the
task at hand. Despite the recent release of a few plant phenotyping datasets,
large annotated plant image datasets for the purpose of training deep learning
algorithms are lacking. One common approach to alleviate the lack of training
data is dataset augmentation. Herein, we propose an alternative solution to
dataset augmentation for plant phenotyping, creating artificial images of
plants using generative neural networks. We propose the Arabidopsis Rosette
Image Generator (through) Adversarial Network: a deep convolutional network
that is able to generate synthetic rosette-shaped plants, inspired by DCGAN (a
recent adversarial network model using convolutional layers). Specifically, we
trained the network using A1, A2, and A4 of the CVPPP 2017 LCC dataset,
containing Arabidopsis thaliana plants. We show that our model is able to
generate realistic 128x128 colour images of plants. We train our network
conditioning on leaf count, such that it is possible to generate plants with a
given number of leaves, suitable, among other uses, for training regression-based
models. We propose a new Ax dataset of artificial plant images, obtained by
our ARIGAN. We evaluate this new dataset using a state-of-the-art leaf counting
algorithm, showing that the testing error is reduced when Ax is used as part of
the training data. Comment: 8 pages, 6 figures, 1 table, ICCV CVPPP Workshop 201
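The leaf-count conditioning can be sketched, framework-free, as assembling the generator input from a noise vector concatenated with a one-hot label; the dimensions and names below are illustrative assumptions, not ARIGAN's exact architecture:

```python
import numpy as np

def make_generator_input(leaf_count, num_classes=10, noise_dim=100, seed=0):
    """Assemble a conditional-GAN generator input: a random latent noise
    vector concatenated with a one-hot encoding of the desired leaf count.
    Sizes and names are illustrative, not ARIGAN's exact choices."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(noise_dim)   # latent noise
    cond = np.zeros(num_classes)         # one-hot leaf-count label
    cond[leaf_count] = 1.0
    return np.concatenate([z, cond])     # generator sees noise + label
```

Conditioning this way lets a single trained generator be asked for a plant with any given leaf count, which is what makes the synthetic Ax images usable for training leaf-count regression models.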
Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection
Selective weeding is one of the key challenges in the field of agriculture
robotics. To accomplish this task, a farm robot should be able to accurately
detect plants and to distinguish between crop and weeds. Most of the
promising state-of-the-art approaches make use of appearance-based models
trained on large annotated datasets. Unfortunately, creating large agricultural
datasets with pixel-level annotations is an extremely time-consuming task,
which in practice limits the usage of data-driven techniques. In this paper, we
address this problem by proposing a novel and effective approach that aims to
dramatically reduce the human intervention needed to train the detection and
classification algorithms. The idea is to procedurally generate large synthetic
training datasets randomizing the key features of the target environment (i.e.,
crop and weed species, type of soil, light conditions). More specifically, by
tuning these model parameters, and exploiting a few real-world textures, it is
possible to render a large amount of realistic views of an artificial
agricultural scenario with no effort. The generated data can be directly used
to train the model or to supplement real-world images. We validate the proposed
methodology by using as testbed a modern deep learning based image segmentation
architecture. We compare the classification results obtained using both real
and synthetic images as training data. The reported results confirm the
effectiveness and the potential of our approach. Comment: To appear in IEEE/RSJ IROS 201
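The randomization step can be sketched as drawing a fresh set of scene parameters for each rendered view; the parameter names and value ranges below are illustrative assumptions, not the paper's actual generative model:

```python
import random

def sample_scene_params(rng: random.Random) -> dict:
    """Sample the key features of one synthetic field scene: crop and weed
    species, soil type, and light conditions. All names and ranges here
    are illustrative assumptions."""
    return {
        "crop_species": rng.choice(["sugar_beet", "maize"]),
        "weed_species": rng.choice(["chamomile", "thistle"]),
        "weed_density_per_m2": rng.uniform(0.0, 30.0),
        "soil_type": rng.choice(["dry", "wet", "stony"]),
        "sun_elevation_deg": rng.uniform(15.0, 75.0),
    }

# Each rendered view gets its own randomized parameter set.
params = [sample_scene_params(random.Random(i)) for i in range(3)]
```

Because the renderer knows exactly where each crop and weed was placed, pixel-level labels come for free with every synthetic image, which is the point of the approach.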
DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via Fully Convolutional Networks for Solar Panels
The impact of soiling on solar panels is an important and well-studied
problem in the renewable energy sector. In this paper, we present the first
convolutional neural network (CNN) based approach for solar panel soiling and
defect analysis. Our approach takes an RGB image of a solar panel and
environmental factors as inputs to predict power loss, soiling localization,
and soiling type. In computer vision, localization is a complex task which
typically requires manually labeled training data such as bounding boxes or
segmentation masks. Our proposed approach consists of four specialized stages
that completely avoid localization ground truth and only need panel images
with power loss labels for training. The regions of impact obtained from
the predicted localization masks are classified into soiling types using
webly supervised learning. To improve the localization capabilities of CNNs, we
introduce a novel bi-directional input-aware fusion (BiDIAF) block that
reinforces the input at different levels of CNN to learn input-specific feature
maps. Our empirical study shows that BiDIAF improves the power loss prediction
accuracy by about 3% and localization accuracy by about 4%. Our end-to-end
model yields a further improvement of about 24% on localization when learned in a
weakly supervised manner. Our approach is generalizable and showed promising
results on web-crawled solar panel images. Our system has a frame rate of 22
fps (including all steps) on an NVIDIA TitanX GPU. Additionally, we collected a
first-of-its-kind dataset for solar panel image analysis consisting of 45,000+
images. Comment: Accepted for publication at WACV 201
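The step that turns a predicted localization mask into a region of impact can be sketched as simple thresholding; the threshold value and toy mask below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def impacted_area_fraction(soiling_mask, threshold=0.5):
    """Fraction of panel pixels whose predicted soiling probability
    exceeds a threshold (0.5 is an illustrative choice)."""
    mask = np.asarray(soiling_mask, dtype=float)
    return float((mask >= threshold).mean())

# Toy 2x4 probability mask: three of the eight pixels exceed the threshold.
toy_mask = np.array([[0.9, 0.8, 0.1, 0.0],
                     [0.6, 0.2, 0.3, 0.1]])
fraction = impacted_area_fraction(toy_mask)  # 3/8 = 0.375
```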
Plant Classification Based on Leaf Edges and Leaf Morphological Veins Using Wavelet Convolutional Neural Network
The leaf is a plant organ that contains chlorophyll and captures energy from sunlight for photosynthesis. A complete leaf is composed of three parts: the midrib, the stalk, and the leaf blade. One way to identify the type of plant is to look at the shape of the leaf edges. The shape, color, and texture of a plant's leaf margins may influence its leaf veins, and this vein morphology carries information useful for plant classification when shape, color, and texture are not distinctive. Humans, however, may fail to exploit this feature because they tend to identify plants solely by leaf shape rather than by leaf margins and veins. This research uses the wavelet method to denoise the images in the dataset and a Convolutional Neural Network to classify them. The accuracy obtained with the Wavelet Convolutional Neural Network method is 97.13%.
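As a minimal sketch of the wavelet denoising step, a one-level Haar transform with soft-thresholding of the detail coefficients can be written in a few lines; the abstract does not specify the exact wavelet family or thresholding rule, so both are illustrative assumptions:

```python
import numpy as np

def haar_denoise_1d(signal, threshold):
    """One-level Haar wavelet denoising: forward transform, soft-threshold
    the detail (high-pass) coefficients, inverse transform. The wavelet
    family and thresholding rule are illustrative choices."""
    x = np.asarray(signal, dtype=float)          # length must be even
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass: pairwise sums
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass: pairwise differences
    # Soft-thresholding shrinks small (noise-like) detail coefficients to zero.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out
```

With a zero threshold the transform reconstructs the input exactly; increasing the threshold suppresses high-frequency detail, which is the denoising effect applied to the images before classification.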