2,545 research outputs found
DeepWheat: Estimating Phenotypic Traits from Crop Images with Deep Learning
In this paper, we investigate estimating emergence and biomass traits from
color images and elevation maps of wheat field plots. We employ a
state-of-the-art deconvolutional network for segmentation and convolutional
architectures, with residual and Inception-like layers, to estimate traits via
high-dimensional nonlinear regression. Evaluation was performed on two
different species of wheat, grown in field plots for an experimental plant
breeding study. Our framework achieves satisfactory performance with mean and
standard deviation of absolute difference of 1.05 and 1.40 counts for emergence
and 1.45 and 2.05 for biomass estimation. Our results for counting wheat plants
from field images are better than the accuracy reported for the similar, but
arguably less difficult, task of counting leaves from indoor images of rosette
plants. Our results for biomass estimation, even with a very small dataset,
improve upon all previously proposed approaches in the literature.
Comment: WACV 2018 (Code repository: https://github.com/p2irc/deepwheat_WACV-2018)
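The evaluation metric reported above, the mean and standard deviation of the absolute difference between predicted and ground-truth counts, can be sketched as follows (the counts here are hypothetical examples, not the paper's data):

```python
import numpy as np

def abs_diff_stats(predicted, actual):
    """Mean and standard deviation of the absolute difference
    between predicted and ground-truth counts."""
    diff = np.abs(np.asarray(predicted, dtype=float)
                  - np.asarray(actual, dtype=float))
    return diff.mean(), diff.std()

# Hypothetical emergence counts for five field plots.
pred = [52, 48, 61, 55, 49]
true = [51, 50, 60, 57, 49]
mean_ad, std_ad = abs_diff_stats(pred, true)
```

A low mean with a low standard deviation indicates the counting error is both small and consistent across plots, which is why the paper reports both numbers.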
DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via Fully Convolutional Networks for Solar Panels
The impact of soiling on solar panels is an important and well-studied
problem in the renewable energy sector. In this paper, we present the first
convolutional neural network (CNN) based approach for solar panel soiling and
defect analysis. Our approach takes an RGB image of a solar panel and
environmental factors as inputs to predict power loss, soiling localization,
and soiling type. In computer vision, localization is a complex task which
typically requires manually labeled training data such as bounding boxes or
segmentation masks. Our proposed approach consists of four specialized stages,
which completely avoid localization ground truth and only need panel images
with power loss labels for training. The regions of impact obtained from
the predicted localization masks are classified into soiling types using
webly supervised learning. To improve the localization capabilities of CNNs, we
introduce a novel bi-directional input-aware fusion (BiDIAF) block that
reinforces the input at different levels of CNN to learn input-specific feature
maps. Our empirical study shows that BiDIAF improves the power loss prediction
accuracy by about 3% and localization accuracy by about 4%. Our end-to-end
model yields further improvement of about 24% on localization when learned in a
weakly supervised manner. Our approach is generalizable and showed promising
results on web crawled solar panel images. Our system has a frame rate of 22
fps (including all steps) on an NVIDIA TitanX GPU. Additionally, we collected a
first-of-its-kind dataset for solar panel image analysis consisting of 45,000+
images.
Comment: Accepted for publication at WACV 201
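The core idea behind the input-aware fusion block described above, reinforcing intermediate feature maps with the input so later layers can learn input-specific features, can be sketched roughly as resizing the input and concatenating it channel-wise. This is only a broad illustration with hypothetical shapes; the actual BiDIAF design is detailed in the paper:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an H x W x C image by an integer factor."""
    h, w, c = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def input_aware_fuse(features, rgb, factor):
    """Concatenate a resized copy of the input image onto an
    intermediate feature map, so subsequent layers see the raw
    input alongside learned features."""
    return np.concatenate([features, downsample(rgb, factor)], axis=-1)

feat = np.zeros((16, 16, 32))     # hypothetical mid-level feature map
rgb = np.random.rand(64, 64, 3)   # hypothetical 64x64 panel image
fused = input_aware_fuse(feat, rgb, 4)   # shape (16, 16, 35)
```

In a real network the fusion would happen inside the CNN at several levels, with learned layers after each concatenation.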
Real-time Semantic Segmentation of Crop and Weed for Precision Agriculture Robots Leveraging Background Knowledge in CNNs
Precision farming robots, which aim to reduce the amount of herbicide that
needs to be applied in the field, must be able to identify crops and weeds in
real time to trigger weeding actions. In this paper, we
address the problem of CNN-based semantic segmentation of crop fields
separating sugar beet plants, weeds, and background solely based on RGB data.
We propose a CNN that exploits existing vegetation indexes and provides a
classification in real time. Furthermore, it can be effectively re-trained for
previously unseen fields with a comparatively small amount of training data. We
implemented and thoroughly evaluated our system on a real agricultural robot
operating in different fields in Germany and Switzerland. The results show that
our system generalizes well, can operate at around 20 Hz, and is suitable for
online operation in the fields.
Comment: Accepted for publication at IEEE International Conference on Robotics and Automation 2018 (ICRA 2018)
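The abstract above mentions exploiting existing vegetation indexes computed from RGB data. One widely used RGB-only index is Excess Green (ExG = 2g - r - b on chromaticity-normalized channels); the sketch below shows how such an index separates plant-like from background pixels, using a made-up patch (the paper's exact choice of indexes and preprocessing may differ):

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index (ExG = 2g - r - b) on chromaticity-
    normalized channels; vegetation pixels score high."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

# Hypothetical 2x2 patch: green (plant-like) pixels on the left,
# gray (soil-like) pixels on the right.
patch = np.array([[[20, 200, 20], [100, 100, 100]],
                  [[10, 180, 30], [90, 95, 100]]], dtype=np.uint8)
exg = excess_green(patch)
```

Feeding such index maps to a CNN as extra input channels hands the network domain knowledge it would otherwise have to learn from scratch, which helps when training data is scarce.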
- …