Simultaneous Corn and Soybean Yield Prediction from Remote Sensing Data Using Deep Transfer Learning
Large-scale crop yield estimation is, in part, made possible due to the
availability of remote sensing data allowing for the continuous monitoring of
crops throughout their growth cycle. Having this information allows
stakeholders the ability to make real-time decisions to maximize yield
potential. Although various models exist that predict yield from remote sensing
data, none estimates yield for multiple crops simultaneously. A model that
predicts the yield of multiple crops and concurrently considers the
interactions between those yields may lead to more accurate predictions.
We propose a new convolutional neural
network model called YieldNet which utilizes a novel deep learning framework
that uses transfer learning between corn and soybean yield predictions by
sharing the weights of the backbone feature extractor. Additionally, to
consider the multi-target response variable, we propose a new loss function. We
conduct our experiment using data from 1,132 counties for corn and 1,076
counties for soybean across the United States. Numerical results demonstrate
that our proposed method accurately predicts corn and soybean yield from one to
four months before the harvest, with MAE values of 8.74% and 8.70% of the
average yield, respectively, and is competitive with other state-of-the-art
approaches.
Comment: 14 pages, 8 figures, 7 tables
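The abstract does not give YieldNet's architecture or loss in detail, so the following is only an illustrative numpy sketch of the two ideas it names: a backbone feature extractor shared between crops, and a loss over a multi-target (corn, soybean) response. All parameter names and the weighted-MAE form of the loss are assumptions, not the paper's specification.

```python
import numpy as np

# Illustrative sketch, NOT YieldNet: a shared backbone feeding two
# crop-specific heads, trained against a multi-target loss.
rng = np.random.default_rng(0)
W_shared = rng.normal(size=(16, 8))   # backbone weights shared by both crops
w_corn = rng.normal(size=8)           # corn yield head
w_soy = rng.normal(size=8)            # soybean yield head

def predict(x):
    """Shared features feed two crop-specific linear heads."""
    feats = np.tanh(x @ W_shared)     # the shared feature extractor
    return feats @ w_corn, feats @ w_soy

def multi_target_mae(y_corn, y_soy, p_corn, p_soy, alpha=0.5):
    """A generic multi-target loss: weighted sum of per-crop MAEs."""
    return (alpha * np.mean(np.abs(y_corn - p_corn))
            + (1 - alpha) * np.mean(np.abs(y_soy - p_soy)))

x = rng.normal(size=(4, 16))          # 4 counties, 16 toy remote-sensing features
p_corn, p_soy = predict(x)
loss = multi_target_mae(np.ones(4), np.ones(4), p_corn, p_soy)
print(round(float(loss), 4))
```

Sharing `W_shared` is what lets gradient updates from one crop's loss term transfer to the other, which is the mechanism the abstract credits for the joint gains.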
Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D Camera-Based Algorithm and Deep Learning Synergy
With the advent of agriculture 3.0 and 4.0, researchers are increasingly
focusing on the development of innovative smart farming and precision
agriculture technologies by introducing automation and robotics into the
agricultural processes. Autonomous agricultural field machines have been
gaining significant attention from farmers and industries to reduce costs,
human workload, and required resources. Nevertheless, achieving sufficient
autonomous navigation capabilities requires the simultaneous cooperation of
different processes: localization, mapping, and path planning are just some of
the steps that aim to provide the machine with the right set of skills to
operate in semi-structured and unstructured environments. In this context, this
study presents a low-cost local motion planner for autonomous navigation in
vineyards based only on an RGB-D camera, low range hardware, and a dual layer
control algorithm. The first algorithm exploits the disparity map and its depth
representation to generate a proportional control for the robotic platform.
Concurrently, a second back-up algorithm, based on representation learning and
resilient to illumination variations, can take control of the machine in case
of a momentary failure of the first block. Moreover, due to the dual
nature of the system, after initial training of the deep learning model with an
initial dataset, the strict synergy between the two algorithms opens the
possibility of exploiting new automatically labeled data, coming from the
field, to extend the existing model knowledge. The machine learning algorithm
has been trained and tested, using transfer learning, with images acquired
during different field surveys in northern Italy and then optimized
for on-device inference with model pruning and quantization. Finally, the
overall system has been validated with a customized robot platform in the
relevant environment.
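The abstract only states that the first algorithm turns the disparity map's depth representation into a proportional control command, without giving the formula. The sketch below shows one plausible minimal form of that idea (compare mean free space on the two halves of the depth image); the function and its gain are assumptions for illustration, not the paper's controller.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): a proportional steering
# command derived from an RGB-D depth map by comparing free space per side.
def proportional_steering(depth, gain=1.0):
    """Positive output steers right, negative steers left.

    `depth` is an HxW array of metric depths; a larger mean depth on one
    side suggests more open space (e.g. the vineyard inter-row corridor).
    """
    h, w = depth.shape
    left = depth[:, : w // 2].mean()
    right = depth[:, w // 2 :].mean()
    # Normalized difference keeps the command within [-gain, gain].
    return gain * (right - left) / (right + left + 1e-9)

depth = np.ones((4, 8))
depth[:, 4:] = 3.0                 # open space on the right half
cmd = proportional_steering(depth)
print(cmd > 0)                     # command steers toward the open side
```

A controller of this shape needs no learned model, which is consistent with the abstract's framing of the learning-based block as a resilient back-up rather than the primary planner.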
Tile2Vec: Unsupervised representation learning for spatially distributed data
Geospatial analysis lacks methods like the word vector representations and
pre-trained networks that significantly boost performance across a wide range
of natural language and computer vision tasks. To fill this gap, we introduce
Tile2Vec, an unsupervised representation learning algorithm that extends the
distributional hypothesis from natural language -- words appearing in similar
contexts tend to have similar meanings -- to spatially distributed data. We
demonstrate empirically that Tile2Vec learns semantically meaningful
representations on three datasets. Our learned representations significantly
improve performance in downstream classification tasks and, similar to word
vectors, visual analogies can be obtained via simple arithmetic in the latent
space.
Comment: 8 pages, 4 figures in main text; 9 pages, 11 figures in appendix
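Tile2Vec's spatial analogue of the distributional hypothesis is commonly trained with a triplet objective: pull an anchor tile's embedding toward a geographically nearby tile and push it from a distant one. The margin value and toy 2-D embeddings below are illustrative assumptions; the abstract itself does not state the loss.

```python
import numpy as np

# Sketch of a triplet loss over tile embeddings: neighbors in space
# should be neighbors in the latent space, distant tiles should not.
def triplet_loss(z_anchor, z_near, z_far, margin=1.0):
    d_near = np.linalg.norm(z_anchor - z_near)   # distance to spatial neighbor
    d_far = np.linalg.norm(z_anchor - z_far)     # distance to distant tile
    return max(0.0, d_near - d_far + margin)

z_a = np.array([0.0, 0.0])
z_n = np.array([0.1, 0.0])   # neighboring tile: embedded close
z_f = np.array([5.0, 0.0])   # distant tile: embedded far
print(triplet_loss(z_a, z_n, z_f))  # margin already satisfied -> 0.0
```

Because embeddings trained this way live in a metric space, the "visual analogies via simple arithmetic" mentioned in the abstract amount to vector addition and subtraction on these `z` vectors, as with word vectors.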
Common Practices and Taxonomy in Deep Multi-view Fusion for Remote Sensing Applications
The advances in remote sensing technologies have boosted applications for
Earth observation. These technologies provide multiple observations or views
with different levels of information. They might contain static or temporal
views with different levels of resolution, in addition to having different
types and amounts of noise due to sensor calibration or deterioration. A great
variety of deep learning models have been applied to fuse the information from
these multiple views, known as deep multi-view or multi-modal fusion learning.
However, the approaches in the literature vary greatly since different
terminology is used to refer to similar concepts or different illustrations are
given to similar techniques. This article gathers works on multi-view fusion
for Earth observation by focusing on the common practices and approaches used
in the literature. We summarize and structure insights from several different
publications concentrating on unifying points and ideas. In this manuscript, we
provide a harmonized terminology while at the same time mentioning the various
alternative terms that are used in literature. The topics covered by the works
reviewed focus on supervised learning with the use of neural network models. We
hope this review, with a long list of recent references, can support future
research and lead to a unified advance in the area.
Comment: appendix with additional tables. Preprint submitted to journal
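Two of the recurring fusion strategies such taxonomies distinguish are input-level ("early") fusion, which stacks raw views before any processing, and feature-level fusion, which encodes each view separately and merges the representations. The numpy sketch below contrasts the two; the view names, dimensions, and projection matrices are purely illustrative.

```python
import numpy as np

# Toy contrast of two fusion strategies for multi-view Earth observation.
rng = np.random.default_rng(1)
optical = rng.normal(size=(4, 10))   # e.g. an optical view, 4 samples
radar = rng.normal(size=(4, 6))      # e.g. a SAR view of the same samples

def early_fusion(a, b):
    """Input-level fusion: concatenate raw views before any encoder."""
    return np.concatenate([a, b], axis=1)

def feature_fusion(a, b, proj_a, proj_b):
    """Feature-level fusion: encode each view, then merge the features."""
    return np.tanh(a @ proj_a) + np.tanh(b @ proj_b)

fused_early = early_fusion(optical, radar)
fused_feat = feature_fusion(optical, radar,
                            rng.normal(size=(10, 8)),
                            rng.normal(size=(6, 8)))
print(fused_early.shape, fused_feat.shape)  # (4, 16) (4, 8)
```

Feature-level fusion is the variant that tolerates the per-view differences the abstract lists (resolution, noise from sensor calibration or deterioration), since each view gets its own encoder before merging.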
A systematic review of the use of Deep Learning in Satellite Imagery for Agriculture
Agricultural research is essential for increasing food production to meet the
requirements of a growing population in the coming decades. Recently,
satellite technology has been improving rapidly, and deep learning has seen
much success in generic computer vision tasks and many application areas,
which presents an important opportunity to improve the analysis of
agricultural land.
Here we present a systematic review of 150 studies to find the current uses of
deep learning on satellite imagery for agricultural research. Although we
identify 5 categories of agricultural monitoring tasks, the majority of the
research interest is in crop segmentation and yield prediction. We found that,
when used, modern deep learning methods consistently outperformed traditional
machine learning across most tasks; the only exception was that Long Short-Term
Memory (LSTM) Recurrent Neural Networks did not consistently outperform Random
Forests (RF) for yield prediction. The reviewed studies have largely adopted
methodologies from generic computer vision, except for one major omission:
benchmark datasets are not utilised to evaluate models across studies, making
it difficult to compare results. Additionally, some studies have specifically
utilised the extra spectral resolution available in satellite imagery, but
other divergent properties of satellite images - such as the hugely different
scales of spatial patterns - are not being taken advantage of in the reviewed
studies.
Comment: 25 pages, 2 figures and lots of large tables. Supplementary materials
section included here in the main pdf