Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D Camera-Based Algorithm and Deep Learning Synergy
With the advent of agriculture 3.0 and 4.0, researchers are increasingly
focusing on the development of innovative smart farming and precision
agriculture technologies by introducing automation and robotics into the
agricultural processes. Autonomous agricultural field machines have been
gaining significant attention from farmers and industries to reduce costs,
human workload, and required resources. Nevertheless, achieving sufficient
autonomous navigation capabilities requires the simultaneous cooperation of
different processes: localization, mapping, and path planning are just some of
the steps that equip the machine with the skills needed to operate in
semi-structured and unstructured environments. In this context, this
study presents a low-cost local motion planner for autonomous navigation in
vineyards based only on an RGB-D camera, low-end hardware, and a dual-layer
control algorithm. The first algorithm exploits the disparity map and its depth
representation to generate a proportional control for the robotic platform.
Concurrently, a second back-up algorithm, based on representation learning and
resilient to illumination variations, can take control of the machine in the
event of a momentary failure of the first block. Moreover, due to the dual
nature of the system, after the deep learning model is trained on an initial
dataset, the close synergy between the two algorithms makes it possible to
exploit new, automatically labeled data coming from the field to extend the
existing model knowledge. The machine learning algorithm
has been trained and tested, using transfer learning, on images acquired
during different field surveys in northern Italy, and then optimized
for on-device inference with model pruning and quantization. Finally, the
overall system has been validated with a customized robot platform in the
relevant environment.
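The control code itself is not published with the abstract; as a rough sketch of the first layer's idea, a proportional command can be derived from the left/right imbalance of free depth in an RGB-D frame. All names, gains, and the exact control law below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def proportional_steering(depth, k=1.0, max_cmd=1.0):
    """Toy proportional controller: steer toward the side of the
    depth image with more free space (larger mean depth).
    depth: 2-D array of metric depths from an RGB-D camera."""
    h, w = depth.shape
    left = np.nanmean(depth[:, : w // 2])
    right = np.nanmean(depth[:, w // 2 :])
    # Normalized left/right imbalance in [-1, 1]; positive -> turn right
    error = (right - left) / max(left + right, 1e-6)
    return float(np.clip(k * error, -max_cmd, max_cmd))

# A corridor with more free space on the right yields a positive command
depth = np.ones((4, 8))
depth[:, 4:] = 3.0  # right half of the view is farther away
cmd = proportional_steering(depth)
```

In practice the gain `k` and saturation would be tuned on the platform, and invalid depth pixels masked as NaN before averaging.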
Proximal sensing mapping method to generate field maps in vineyards
[EN] An innovative methodology to generate vegetative vigor maps in vineyards (Vitis vinifera L.) has been developed
and pre-validated. The architecture proposed implements a Global Positioning System (GPS) receiver and a computer vision
unit comprising a monocular charge-coupled device (CCD) camera equipped with an 8-mm lens and a pass-band near-infrared
(NIR) filter. Both sensors are mounted on a medium-size conventional agricultural tractor. The synchronization of
perception (camera) and localization (GPS) sensors allowed the creation of globally-referenced regular grids, denominated
universal grids, whose cells were filled with the estimated vegetative vigor of the monitored vines. Vine vigor was quantified
as the relative percentage of vegetation automatically estimated by the onboard algorithm through the images captured with the
camera. Validation tests compared spatial differences in vine vigor with yield differentials along the rows. The positive
correlation between vigor and yield variations showed the potential of proximal sensing and the advantages of acquiring top
view images from conventional vehicles. Sáiz Rubio, V.; Rovira Más, F. (2013). Proximal sensing mapping method to generate field maps in vineyards. Agricultural Engineering International: CIGR Journal. 15(2):47-59. http://hdl.handle.net/10251/102750
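The abstract does not include the grid-building algorithm; a minimal sketch of the "universal grid" idea, averaging per-cell vegetative vigor from georeferenced samples, might look like the following (function and parameter names are hypothetical):

```python
import numpy as np

def vigor_grid(samples, cell_size, origin=(0.0, 0.0), shape=(10, 10)):
    """Toy globally referenced grid: average per-cell vegetative vigor (%)
    from (easting, northing, vigor) samples produced by synchronizing a
    camera with a GPS receiver. Empty cells are NaN."""
    total = np.zeros(shape)
    count = np.zeros(shape)
    for e, n, vigor in samples:
        i = int((n - origin[1]) // cell_size)  # row index from northing
        j = int((e - origin[0]) // cell_size)  # column index from easting
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            total[i, j] += vigor
            count[i, j] += 1
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Two samples fall in cell (0, 0), one in cell (0, 3)
grid = vigor_grid([(0.5, 0.5, 40.0), (0.6, 0.4, 60.0), (3.2, 0.1, 80.0)],
                  cell_size=1.0)
```

The real system works in a global coordinate frame and estimates vigor from NIR imagery; this sketch only shows the cell-assignment and averaging step.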
Grapevine yield prediction using image analysis - improving the estimation of non-visible bunches
Yield forecast is an issue of utmost importance for the entire grape and wine sector. There are several methods for vineyard yield estimation; the ones based on estimating yield components are the most commonly used in commercial vineyards. Those methods are generally destructive and very labor intensive, and can provide inaccurate results as they are based on the assessment of a small sample of bunches. Recently, several attempts have been made to apply image analysis technologies for bunch and/or berry recognition in digital images. Nonetheless, the effectiveness of image analysis in predicting yield is strongly dependent on grape bunch visibility, which in turn depends on canopy density at the fruiting zone and on bunch number, density, and dimensions. In this work, data on bunch occlusion obtained in a field experiment are presented. This work is set up within the framework of a research project aimed at developing an unmanned ground vehicle to scout vineyards for non-intrusive estimation of canopy features and grape yield. The objective is to evaluate the use of explanatory variables to estimate the fraction of non-visible bunches (bunches occluded by leaves). In the future, this estimation can potentially improve the accuracy of a computer vision algorithm used by the robot to estimate total yield.
In two vineyard plots with the Encruzado (white) and Syrah (red) varieties, several canopy segments of 1-meter length were photographed with an RGB camera against a blue background, close to the harvest date. From these images, canopy gaps (porosity) and bunch region-of-interest (ROI) files were computed in order to estimate the corresponding projected area. Vines were then defoliated at the fruiting zone in two steps, and new images were obtained before each step.
Overall, the area of bunches occluded by leaves reached mean values between 67% and 73%, with Syrah presenting the larger variation. A polynomial regression was fitted between canopy porosity (independent variable) and the percentage of bunches not occluded by leaves, which showed significant R² values of 0.83 and 0.82 for the Encruzado and Syrah varieties, respectively.
Our results show that the fraction of non-visible bunches can be estimated indirectly using canopy porosity as an explanatory variable, a trait that can be automatically obtained in the future using a laser range finder deployed on the mobile platform.
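The study's field measurements are not reproduced in the abstract; the fitting procedure it describes — a polynomial of canopy porosity predicting the visible-bunch percentage, evaluated by R² — can be sketched with synthetic stand-in values:

```python
import numpy as np

# Synthetic stand-in data (the study's field values are not published
# in the abstract): canopy porosity (%) vs. visible-bunch percentage.
porosity = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
visible = np.array([10.0, 18.0, 30.0, 38.0, 44.0, 52.0])

# Second-degree polynomial fit of visible-bunch fraction on porosity
coeffs = np.polyfit(porosity, visible, deg=2)
predicted = np.polyval(coeffs, porosity)

# Coefficient of determination R^2
ss_res = np.sum((visible - predicted) ** 2)
ss_tot = np.sum((visible - np.mean(visible)) ** 2)
r2 = 1.0 - ss_res / ss_tot

# The non-visible (occluded) fraction follows directly from the model,
# e.g. at a hypothetical porosity of 12%
occluded = 100.0 - np.polyval(coeffs, 12.0)
```

The polynomial degree here is an assumption; the abstract only states that a polynomial regression was used.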
Enhancing Navigation Benchmarking and Perception Data Generation for Row-based Crops in Simulation
Service robotics is increasingly enhancing precision agriculture, enabling many
automated processes based on efficient autonomous navigation solutions.
However, data generation and in-field validation campaigns hinder the progress
of large-scale autonomous platforms. Simulated environments and deep visual
perception are spreading as successful tools to speed up the development of
robust navigation with low-cost RGB-D cameras. In this context, the
contribution of this work is twofold: a synthetic dataset to train deep
semantic segmentation networks together with a collection of virtual scenarios
for a fast evaluation of navigation algorithms. Moreover, an automatic
parametric approach is developed to explore different field geometries and
features. The simulation framework and the dataset have been evaluated by
training a deep segmentation network on different crops and benchmarking the
resulting navigation.
Comment: Accepted at the 14th European Conference on Precision Agriculture (ECPA) 202
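The dataset and benchmark code are not reproduced here; as an illustrative sketch of how a segmentation output can feed navigation, the free-space centroid in the lower half of a crop mask yields a normalized lateral offset for a row-following controller (all names are assumptions, not the paper's API):

```python
import numpy as np

def row_center_offset(mask):
    """Toy row-following cue from a binary segmentation mask
    (1 = crop, 0 = traversable ground): locate the free-space
    centroid in the lower half of the image and return its
    horizontal offset from the image center, scaled to [-1, 1]."""
    h, w = mask.shape
    lower = mask[h // 2 :, :]
    free = 1 - lower
    cols = free.sum(axis=0)  # free pixels per column
    if cols.sum() == 0:
        return 0.0  # no traversable ground detected
    centroid = float((cols * np.arange(w)).sum() / cols.sum())
    return (centroid - (w - 1) / 2) / ((w - 1) / 2)

# Crop everywhere except a corridor slightly right of center
mask = np.ones((4, 9), dtype=int)
mask[:, 4:8] = 0
offset = row_center_offset(mask)
```

A positive offset would steer the platform right, toward the corridor center; a real pipeline would smooth this signal over time.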
Close-Range Photogrammetry Applied to Agroforestry Engineering (Fotogrametría de rango cercano aplicada a la Ingeniería Agroforestal)
Thesis by compendium of publications. [EN] Since the late twentieth century, geotechnologies
have been applied in different research lines in Agroforestry Engineering aimed at advancing
the modeling of biophysical parameters in order to improve productivity. In this study, low-cost,
close-range photogrammetry has been used in different agroforestry scenarios to address gaps
identified in previous results and to improve the procedures and technology hitherto practiced
in this field. Photogrammetry offers the advantage of being a non-destructive and non-invasive
technique that never alters the physical properties of the studied element, providing rigor
and completeness to the captured information.
In this PhD dissertation, the following contributions are presented divided into three
research papers:
• A methodological proposal to acquire georeferenced multispectral data of high
spatial resolution using a low-cost manned aerial platform, to monitor and
sustainably manage extensive areas of crops.
Vicarious calibration is presented as the radiometric calibration method for the
multispectral sensor carried on a paraglider, with low-cost surfaces used as
control coverages.
• The development of a method able to determine crop productivity under field
conditions by combining close-range photogrammetry and computer vision,
providing constant operational improvement and proactive management in
crop monitoring.
An innovative methodology in the sector is proposed, ensuring flexibility and
simplicity in data collection through non-invasive technologies, automation in
processing, and quality results at low associated cost.
• A low-cost, efficient, and accurate methodology to obtain Digital Height Models
of vegetation cover intended for forestry inventories, by integrating public
LiDAR data into photogrammetric point clouds coming from low-cost flights.
This methodology combines the ability of LiDAR to register ground points in
areas with high vegetation density with the better spatial, radiometric, and
temporal resolution of photogrammetry for the top of the vegetation cover.
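As a small illustration of the third contribution, a vegetation height model is obtained by subtracting a LiDAR-derived terrain model from a photogrammetric surface model (toy 2×2 grids below; a real pipeline must first co-register the two datasets):

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Digital height model of vegetation cover: subtract the
    LiDAR-derived terrain (DTM) from the photogrammetric surface
    model (DSM). Negative residuals (noise below ground) are
    clipped to zero."""
    chm = dsm - dtm
    return np.clip(chm, 0.0, None)

dsm = np.array([[105.0, 112.0],
                [100.2,  99.8]])   # photogrammetric surface elevations (m)
dtm = np.array([[100.0, 100.0],
                [100.0, 100.0]])   # LiDAR ground elevations (m)
chm = canopy_height_model(dsm, dtm)
```

The clipping step reflects that points measured below the LiDAR ground surface are treated as noise rather than negative vegetation height.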
A Review of the Challenges of Using Deep Learning Algorithms to Support Decision-Making in Agricultural Activities
Deep Learning has been successfully applied to image recognition, speech recognition,
and natural language processing in recent years. Therefore, there has been an incentive to apply
it in other fields as well. Agriculture is one of the most important domains in which the
application of deep learning still needs to be explored, as it has a direct impact on human well-being.
In particular, there is a need to explore how deep learning models can be used as a tool for optimal
planting, land use, yield improvement, production/disease/pest control, and other activities. The
vast amount of data received from sensors in smart farms makes it possible to use deep learning as a
model for decision-making in this field. In agriculture, no two environments are exactly alike, which
makes testing, validating, and successfully implementing such technologies much more complex
than in most other industries. This paper reviews some recent scientific developments in the field of
deep learning that have been applied to agriculture, and highlights some challenges and potential
solutions using deep learning algorithms in agriculture. The results in this paper indicate that by
employing new methods from deep learning, higher performance in terms of accuracy and lower
inference time can be achieved, and the models can be made useful in real-world applications.
Finally, some opportunities for future research in this area are suggested.
This work is supported by the R&D Project BioDAgro—Sistema operacional inteligente de
informação e suporte à decisão em AgroBiodiversidade, project PD20-00011, promoted by Fundação
La Caixa and Fundação para a Ciência e a Tecnologia, taking place at the C-MAST Centre for
Mechanical and Aerospace Sciences and Technology, Department of Electromechanical Engineering
of the University of Beira Interior, Covilhã, Portugal.
Advancements in Multi-temporal Remote Sensing Data Analysis Techniques for Precision Agriculture
The abstract is in the attachment.
Augmented Perception for Agricultural Robots Navigation
[EN] Producing food in a sustainable way is becoming very challenging today due to the lack of skilled labor, the unaffordable cost of labor when available, and the limited returns for growers as a result of the low produce prices demanded by big supermarket chains, in contrast to the ever-increasing costs of inputs such as fuel, chemicals, seeds, or water. Robotics emerges as a technological advance that can counterbalance some of these challenges, mainly in industrialized countries. However, the deployment of autonomous machines in open environments exposed to uncertainty and harsh ambient conditions poses a significant challenge to reliability and safety. Consequently, a deep parametrization of the working environment in real time is necessary to achieve autonomous navigation. This article proposes a navigation strategy for guiding a robot along vineyard rows for field monitoring. Given that global positioning cannot be guaranteed permanently in any vineyard, the strategy is based on local perception and results from fusing three complementary technologies: 3D vision, lidar, and ultrasonics. Several perception-based navigation algorithms were developed between 2015 and 2019. After their comparison in real environments and conditions, results showed that the augmented perception derived from combining these three technologies provides a consistent basis for outlining the intelligent behavior of agricultural robots operating within orchards. This work was supported by the European Union Research and Innovation Programs under Grant N. 737669 and Grant N. 610953. The associate editor coordinating the review of this article and approving it for publication was Dr. Oleg Sergiyenko. Rovira Más, F.; Sáiz Rubio, V.; Cuenca-Cuenca, A. (2021). Augmented Perception for Agricultural Robots Navigation. IEEE Sensors Journal. 21(10):11712-11727. https://doi.org/10.1109/JSEN.2020.3016081
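The fusion scheme itself is not detailed in the abstract; one common way to combine redundant range estimates from stereo vision, lidar, and ultrasonics is a confidence-weighted average. The weights and distances below are illustrative, not values from the article:

```python
def fuse_ranges(readings):
    """Confidence-weighted fusion of redundant range estimates
    (e.g. 3D vision, lidar, ultrasonic) into a single distance.
    readings: list of (distance_m, weight) pairs; sensors with no
    valid measurement are passed with weight 0."""
    total_w = sum(w for _, w in readings)
    if total_w == 0:
        return None  # no sensor produced a valid measurement
    return sum(d * w for d, w in readings) / total_w

# Lidar trusted most, stereo vision next, ultrasonic least
distance = fuse_ranges([(2.10, 0.5),   # lidar
                        (2.00, 0.3),   # stereo vision
                        (2.40, 0.2)])  # ultrasonic
```

Because the three modalities fail under different conditions (lighting, dust, surface texture), zeroing the weight of a faulted sensor lets the remaining two keep the estimate consistent.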