Flowers, leaves or both? How to obtain suitable images for automated plant identification
Background: Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Here, we explore which perspectives, and which combinations of them, contain more characteristic information and therefore allow for higher identification accuracy. Results: We developed an image-capturing scheme to create observations of flowering plants. Each observation comprises five in-situ images of the same individual from predefined perspectives (entire plant, flower frontal and lateral views, leaf top and back side views). We collected a completely balanced dataset comprising 100 observations for each of 101 species, with an emphasis on groups of conspecific and visually similar species, including twelve Poaceae species. We used this dataset to train convolutional neural networks and determined the prediction accuracy for each single perspective and their combinations via score-level fusion. Top-1 accuracies ranged between 77% (entire plant) and 97% (fusion of all perspectives) when averaged across species. Among single perspectives, the flower frontal view achieved the highest accuracy (88%). Fusing the flower frontal, flower lateral and leaf top views yields the most reasonable compromise between acquisition effort and accuracy (96%). The perspective achieving the highest accuracy was species dependent. Conclusions: We argue that image databases of herbaceous plants would benefit from multi-organ observations, comprising at least the frontal and lateral perspectives of flowers and the leaf top view.
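The score-level fusion step can be sketched as follows. The abstract does not state the fusion rule, so this minimal example assumes the common mean rule over per-perspective softmax scores; all class scores below are hypothetical.

```python
import numpy as np

def fuse_scores(per_view_scores):
    """Score-level fusion: average the per-class softmax scores produced
    by the per-perspective CNNs, then take the argmax as the prediction."""
    stacked = np.stack(per_view_scores)  # (n_views, n_classes)
    fused = stacked.mean(axis=0)         # mean-rule fusion
    return int(np.argmax(fused))

# hypothetical softmax outputs over 3 species from three perspectives
flower_frontal = np.array([0.7, 0.2, 0.1])
flower_lateral = np.array([0.5, 0.4, 0.1])
leaf_top       = np.array([0.6, 0.3, 0.1])

pred = fuse_scores([flower_frontal, flower_lateral, leaf_top])
print(pred)  # → 0
```

Other fusion rules (max, product, learned weights) drop into the same structure by replacing the `mean` reduction.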
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, such as computer vision (CV), speech
recognition, and natural language processing. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as it relates to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL systems.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.
Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection
Selective weeding is one of the key challenges in the field of agriculture
robotics. To accomplish this task, a farm robot should be able to accurately
detect plants and to distinguish between crops and weeds. Most of the
promising state-of-the-art approaches make use of appearance-based models
trained on large annotated datasets. Unfortunately, creating large agricultural
datasets with pixel-level annotations is an extremely time-consuming task,
which in practice penalizes the usage of data-driven techniques. In this paper, we address
this problem by proposing a novel and effective approach that aims to
dramatically minimize the human intervention needed to train the detection and
classification algorithms. The idea is to procedurally generate large synthetic
training datasets randomizing the key features of the target environment (i.e.,
crop and weed species, type of soil, light conditions). More specifically, by
tuning these model parameters, and exploiting a few real-world textures, it is
possible to render a large number of realistic views of an artificial
agricultural scenario with no effort. The generated data can be directly used
to train the model or to supplement real-world images. We validate the proposed
methodology by using as testbed a modern deep learning based image segmentation
architecture. We compare the classification results obtained using both real
and synthetic images as training data. The reported results confirm the
effectiveness and the potential of our approach.
Comment: To appear in IEEE/RSJ IROS 201
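The procedural randomization described above might look like the following sketch; the parameter names, value ranges, and species lists are hypothetical illustrations, not the paper's actual model parameters.

```python
import random

# hypothetical parameter spaces for the synthetic scene generator
SOIL_TEXTURES = ["dry_clay", "wet_loam", "sandy"]
SPECIES = ["crop_a", "weed_b", "weed_c"]

def sample_scene_params(rng):
    """Randomize the key features of the target environment
    (crop/weed species, type of soil, lighting) for one synthetic scene."""
    return {
        "soil": rng.choice(SOIL_TEXTURES),
        "plants": [
            {"species": rng.choice(SPECIES),
             "x": rng.uniform(0.0, 1.0),   # normalized field coordinates
             "y": rng.uniform(0.0, 1.0),
             "scale": rng.uniform(0.5, 1.5)}
            for _ in range(rng.randint(5, 20))
        ],
        "sun_elevation_deg": rng.uniform(15.0, 75.0),
    }

rng = random.Random(0)
dataset_params = [sample_scene_params(rng) for _ in range(1000)]
# each parameter set would drive a renderer to emit one image plus its
# pixel-level labels, since the generator knows where every plant is
```

Because labels come for free from the generator, scaling the dataset is just a matter of drawing more parameter sets.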
Interpretable Deep Learning applied to Plant Stress Phenotyping
Explainable deep learning models that can be applied to practical real-world
scenarios and, in turn, can consistently, rapidly and accurately identify
specific and minute traits in applicable fields of the biological sciences
are scarce. Here we consider one such real-world example,
viz., accurate identification, classification and quantification of biotic and
abiotic stresses in crop research and production. Up until now, this has been
predominantly done manually by visual inspection and requires specialized
training. However, such techniques are hindered by subjectivity resulting from
inter- and intra-rater cognitive variability. Here, we demonstrate the ability
of a machine learning framework to identify and classify a diverse set of
foliar stresses in the soybean plant with remarkable accuracy. We also present
an explanation mechanism using gradient-weighted class activation mapping that
isolates the visual symptoms used by the model to make predictions. This
unsupervised identification of unique visual symptoms for each stress provides
a quantitative measure of stress severity, allowing for identification,
classification and quantification in one framework. The learnt model appears to
be agnostic to species and makes good predictions for other (non-soybean)
species, demonstrating its transfer learning ability.
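The gradient-weighted class activation mapping step can be sketched framework-agnostically. This assumes the feature maps of the last convolutional layer, and the gradients of the class score with respect to them, have already been extracted; the array shapes are illustrative.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation mapping: weight each feature map
    by the spatial mean of its gradient w.r.t. the class score, sum the
    weighted maps, and apply ReLU to keep only positive evidence."""
    # feature_maps, gradients: (K, H, W)
    weights = gradients.mean(axis=(1, 2))              # (K,) importance per map
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W) weighted sum
    return np.maximum(cam, 0)                          # ReLU

# hypothetical activations/gradients for 64 maps of size 7x7
feature_maps = np.random.rand(64, 7, 7)
gradients = np.random.rand(64, 7, 7)
heatmap = grad_cam(feature_maps, gradients)  # upsample to overlay on the image
```

The resulting heatmap is what localizes the visual stress symptoms driving each prediction.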
Lemon Classification Using Deep Learning
Abstract: Background: Vegetable agriculture is very important to continued human existence and remains a key driver of many economies worldwide, especially in underdeveloped and developing economies. Objectives: Given the increasing demand for food and cash crops, driven by the growing world population and the challenges imposed by climate change, there is an urgent need to increase plant production while reducing costs. Methods: In this paper, a lemon classification approach is presented with a dataset that contains approximately 2,000 images belonging to 3 species at a few developmental stages. Convolutional Neural Networks (CNNs), a deep learning technique extensively applied to image recognition, were used for this task. Results: We found that CNN-driven lemon classification applications, when designed properly and used in farming automation, have the potential to enhance crop harvest and improve output and productivity. The trained model achieved an accuracy of 99.48% on a held-out test set, demonstrating the feasibility of this approach.
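The reported held-out Top-1 accuracy is computed as in this minimal sketch; the scores and labels below are hypothetical, not the paper's data.

```python
import numpy as np

def top1_accuracy(scores, labels):
    """Held-out Top-1 accuracy: the fraction of test images whose
    highest-scoring class matches the ground-truth label."""
    return float((scores.argmax(axis=1) == labels).mean())

# hypothetical class scores for 4 test images over 3 lemon classes
scores = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.3, 0.5],
                   [0.6, 0.3, 0.1]])
labels = np.array([0, 1, 2, 1])
print(top1_accuracy(scores, labels))  # → 0.75
```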
Programmable Spectrometry -- Per-pixel Classification of Materials using Learned Spectral Filters
Many materials have distinct spectral profiles. This facilitates estimation
of the material composition of a scene at each pixel by first acquiring its
hyperspectral image, and subsequently filtering it using a bank of spectral
profiles. This process is inherently wasteful since only a set of linear
projections of the acquired measurements contribute to the classification task.
We propose a novel programmable camera that is capable of producing images of a
scene with an arbitrary spectral filter. We use this camera to optically
implement the spectral filtering of the scene's hyperspectral image with the
bank of spectral profiles needed to perform per-pixel material classification.
This provides gains both in terms of acquisition speed, since only the
relevant measurements are acquired, and in signal-to-noise ratio, since
we invariably avoid narrowband filters that are light inefficient. Given
training data, we use a range of classical and modern techniques including SVMs
and neural networks to identify the bank of spectral profiles that facilitate
material classification. We verify the method in simulations on standard
datasets as well as real data using a lab prototype of the camera.
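The core idea, filtering a hyperspectral image with a bank of spectral profiles and reading the per-pixel class off the filter responses, can be sketched as follows. The simulation stands in for the camera's optical implementation, and all data are synthetic.

```python
import numpy as np

def filtered_measurements(hsi_cube, filter_bank):
    """Simulate imaging through a bank of spectral filters: each filter
    yields one image whose pixel values are the inner product of that
    pixel's spectrum with the filter's spectral profile."""
    # hsi_cube: (H, W, L) hyperspectral image; filter_bank: (F, L)
    return np.tensordot(hsi_cube, filter_bank, axes=([2], [1]))  # (H, W, F)

def per_pixel_classes(measurements):
    """With one discriminative filter per material, the per-pixel label
    is simply the index of the strongest filter response."""
    return measurements.argmax(axis=2)  # (H, W)

rng = np.random.default_rng(0)
cube = rng.random((32, 32, 100))   # synthetic scene, 100 spectral bands
filters = rng.random((5, 100))     # 5 learned spectral profiles
labels = per_pixel_classes(filtered_measurements(cube, filters))
```

Only F filtered images are acquired instead of the full L-band cube, which is where the speed gain comes from.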
TasselNet: Counting maize tassels in the wild via local counts regression network
Accurately counting maize tassels is important for monitoring the growth
status of maize plants. This tedious task, however, is still mainly done by
manual efforts. In the context of modern plant phenotyping, automating this
task is required to meet the need of large-scale analysis of genotype and
phenotype. In recent years, computer vision technologies have experienced a
significant breakthrough due to the emergence of large-scale datasets and
increased computational resources. Naturally, image-based approaches have also
received much attention in plant-related studies. Yet most
image-based systems for plant phenotyping are deployed under controlled
laboratory environments. When transferring the application scenario to
unconstrained in-field conditions, intrinsic and extrinsic variations in the
wild pose great challenges for accurate counting of maize tassels, which goes
beyond the ability of conventional image processing techniques. This calls for
further robust computer vision approaches to address in-field variations. This
paper studies the in-field counting problem of maize tassels. To our knowledge,
this is the first time that a plant-related counting problem is considered
using computer vision technologies under unconstrained field-based environments.
Comment: 14 pages
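A local counts regression scheme of this kind predicts a count for each (possibly overlapping) image patch and merges the predictions into a normalized count map whose sum is the global estimate. The merging step below is a hedged sketch of that idea, not TasselNet's exact formulation.

```python
import numpy as np

def merge_local_counts(local_counts, patch, stride, image_hw):
    """Redistribute per-patch count predictions onto the image grid and
    normalize by how many patches cover each pixel, so the global tassel
    count is recovered as the sum of the resulting count map."""
    H, W = image_hw
    count_map = np.zeros((H, W))
    coverage = np.zeros((H, W))
    idx = 0
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            # spread the patch's predicted count uniformly over its area
            count_map[y:y+patch, x:x+patch] += local_counts[idx] / patch**2
            coverage[y:y+patch, x:x+patch] += 1
            idx += 1
    return count_map / np.maximum(coverage, 1)

# four non-overlapping 4x4 patches on an 8x8 image, one tassel each
cm = merge_local_counts(np.array([1.0, 1.0, 1.0, 1.0]), 4, 4, (8, 8))
print(round(cm.sum()))  # → 4
```

Predicting local counts rather than exact locations is what makes the regression robust to the heavy occlusion found in field imagery.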