17 research outputs found
Recovery, modelling and recreation, using photogrammetry, of the industrial olive-oil heritage of the Hacienda de Quinto
Cultural heritage can be said to comprise all those manifestations and elements of human activity to which a fundamental value is attached, since they form part of the history and identity of a region. Citizens are now becoming increasingly aware of the importance of preserving elements of historical and cultural character, as this allows them to know their past as a society and, at the same time, lets future generations learn the legacy of their ancestors.
Photogrammetry, a science that only a few years ago was in the hands of a small number of specifically trained professionals, has become an inexpensive and useful tool within the reach of any user with an ordinary, non-professional camera. This has been made possible by advances in computing hardware and by a series of photogrammetric software packages that produce 3D models from a point cloud derived from a set of overlapping photographs.
On these premises, the main objective of this Final Degree Project (TFG), entitled “Recuperación, modelado y recreación utilizando fotogrametría, del Patrimonio Oleícola Industrial de la Hacienda de Quinto”, is to recover, using photogrammetry, a truncated-cone mill (molino troncocónico) and a beam-and-quintal press (prensa de viga y quintal) located at the Hacienda de Quinto, in the municipality of Dos Hermanas.
To achieve this objective, the photogrammetric software Agisoft PhotoScan is used. This software is based on a technique called SfM (Structure from Motion), which produces 3D models from a collection of overlapping photographs (the overlap is generally at least 60%). The advantage of this tool is that no calibrated camera is needed to obtain the models: a consumer camera or even a mobile phone is sufficient.
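As an aside, a workflow of this kind can also be scripted. The sketch below is a minimal illustration assuming the Agisoft Metashape Python module (the current name of the PhotoScan scripting API), a hypothetical photos/ folder and default-quality settings; it is not the exact procedure followed in the TFG, and call names vary slightly between PhotoScan/Metashape versions.

```python
# Minimal SfM sketch assuming the Agisoft Metashape Python module
# (successor of the PhotoScan API); paths and settings are illustrative.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Load the set of overlapping photographs (>= 60% overlap recommended).
chunk.addPhotos(glob.glob("photos/*.JPG"))

# Feature matching and camera alignment (Structure from Motion).
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()

# Densify the sparse cloud and build a textured mesh.
chunk.buildDepthMaps(downscale=2)
chunk.buildPointCloud()   # buildDenseCloud() in older PhotoScan/Metashape versions
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

chunk.exportModel(path="mill_model.obj")
doc.save(path="project.psx")
```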
Furthermore, since the use of SfM in photogrammetry is fairly recent, this TFG also analyses a series of parameters, such as working time, working procedure, advantages and disadvantages, and operational recommendations, with the aim of assessing the established workflow, its performance and the results obtained.
Regarding the materials used, two cameras were employed: a 24-megapixel Nikon D3300 and an 8-megapixel Fujifilm FinePix. Two cameras were chosen in order to compare the results and to determine whether the choice of camera is a factor to be taken into account. In addition, an HP ENVY laptop (16 GB of RAM), a virtual server (32 GB of RAM) and other auxiliary equipment were used.
Finally, the results obtained have been published in digital format using various mechanisms and tools.
Universidad de Sevilla. Grado en Ingeniería Agrícola
A Mixed Data-Based Deep Neural Network to Estimate Leaf Area Index in Wheat Breeding Trials
Remote and non-destructive estimation of leaf area index (LAI) has been a challenge in
the last few decades as the direct and indirect methods available are laborious and
time-consuming. The recent emergence of high-throughput plant phenotyping platforms has
increased the need to develop new phenotyping tools for better decision-making by breeders. In
this paper, a novel model based on artificial intelligence algorithms and nadir-view red-green-blue (RGB) images taken from a terrestrial high-throughput phenotyping platform is presented. The
model mixes numerical data collected in a wheat breeding field and visual features extracted from
the images to make rapid and accurate LAI estimations. Model-based LAI estimations were
validated against LAI measurements determined non-destructively using an allometric
relationship obtained in this study. The model performance was also compared with LAI estimates
obtained by other classical indirect methods based on bottom-up hemispherical images and gap
fraction theory. Model-based LAI estimations were highly correlated with ground-truth LAI. The
model performance was slightly better than that of the hemispherical image-based method, which
tended to underestimate LAI. These results show the great potential of the developed model for
near real-time LAI estimation, which can be further improved in the future by increasing the
dataset used to train the model.
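As an illustration of how image and numerical inputs can be fused in a single network, the sketch below defines a generic two-branch (multi-input) regression model in Keras; the input shapes, layer widths and covariates are invented for the example and are not the architecture reported in the paper.

```python
# Generic multi-input (image + numerical data) regression sketch in Keras.
# Input shapes and layer sizes are illustrative, not those of the paper.
from tensorflow.keras import layers, Model

# Image branch: convolutional feature extractor for nadir-view RGB images.
img_in = layers.Input(shape=(128, 128, 3), name="rgb_image")
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Numerical branch: plot-level covariates recorded in the field (hypothetical size).
num_in = layers.Input(shape=(4,), name="numerical_data")
y = layers.Dense(16, activation="relu")(num_in)

# Fuse both branches and regress a single LAI value.
z = layers.concatenate([x, y])
z = layers.Dense(64, activation="relu")(z)
lai_out = layers.Dense(1, name="lai")(z)

model = Model(inputs=[img_in, num_in], outputs=lai_out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```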
Leaf area index estimations by deep learning models using RGB images and data fusion in maize
The leaf area index (LAI) is a biophysical crop parameter of great interest for agronomists
and plant breeders. Direct methods for measuring LAI are normally destructive, while indirect methods are either costly or require long pre- and post-processing times. In this study,
a novel deep learning-based (DL) model was developed using RGB nadir-view images
taken from a high-throughput plant phenotyping platform for LAI estimation of maize.
The study took place in a commercial maize breeding trial during two consecutive growing seasons. Ground-truth LAI values were obtained non-destructively using an allometric
relationship that was derived to calculate the leaf area of individual leaves from their main
leaf dimensions (length and maximum width). Three convolutional neural network (CNN)-
based DL model approaches were proposed using RGB images as input. One of the models
tested is a classification model trained with a set of RGB images tagged with previously measured LAI values (classes). The second model provides LAI estimates from CNN-based linear regression, and the third one uses a combination of RGB images and numerical data as input to the CNN-based model (multi-input model). The results obtained from
the three approaches were compared against ground-truth data and LAI estimations from a
classic indirect method based on nadir-view image analysis and gap fraction theory. All DL
approaches outperformed the classic indirect method. The multi-input model showed the
least error and explained the highest proportion of the observed LAI variance. This work
represents a major advance for LAI estimation in maize breeding plots as compared to previous methods, in terms of processing time and equipment costs.
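For context on the ground-truth step, allometric leaf-area models of this kind typically take the form LA = k × length × maximum width. The snippet below is a generic illustration with a placeholder coefficient; it is not the relationship actually fitted in the study.

```python
# Generic allometric leaf-area sketch: LA = k * length * max_width.
# The coefficient below is a placeholder, NOT the value fitted in the paper.
K_PLACEHOLDER = 0.75  # shape coefficient commonly reported for maize leaves

def leaf_area(length_cm: float, max_width_cm: float, k: float = K_PLACEHOLDER) -> float:
    """Estimate the area (cm^2) of one leaf from its length and maximum width."""
    return k * length_cm * max_width_cm

def plant_leaf_area(leaves: list[tuple[float, float]]) -> float:
    """Sum the estimated areas of all leaves of a plant."""
    return sum(leaf_area(length, width) for length, width in leaves)

def lai(total_leaf_area_cm2: float, ground_area_cm2: float) -> float:
    """LAI is total leaf area divided by the ground area the plant occupies."""
    return total_leaf_area_cm2 / ground_area_cm2

if __name__ == "__main__":
    leaves = [(80.0, 9.5), (72.0, 8.8), (65.0, 8.0)]      # (length, width) in cm
    print(round(lai(plant_leaf_area(leaves), 70 * 25), 2))  # one plant per 70 x 25 cm cell
```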
Deep learning techniques for estimation of the yield and size of citrus fruits using a UAV
Accurate and early estimation of citrus yields is important for both producers and agricultural cooperatives to be
competitive and make informed decisions when selling their products. Yield estimation is key for predicting
stock volumes, avoiding stock shortages and planning harvesting operations. Visual yield estimations have traditionally been employed, resulting in inaccurate and misleading information. The main goal of this study was to
develop an automated image processing methodology to detect, count and estimate the size of citrus fruits on
individual trees using deep learning techniques. During 3 consecutive annual campaigns, a total of 20 trees from
a commercial citrus grove were monitored using images captured from an unmanned aerial vehicle (UAV). These
trees were harvested manually, and fruit sizes were measured. A Faster R-CNN Deep Learning model was trained
using a custom dataset to detect oranges in the obtained images. An average standard error (SE) of 6.59 % was
obtained between visual counting and the model’s fruit detection. Using the detected fruits, fruit size estimation
was also performed. The promising results obtained indicate that this size estimation method can be employed
for size discrimination prior to harvest. A model based on Long Short-term Memory (LSTM) was trained for yield
estimation per tree and for a total yield estimation. The actual and estimated yields per tree were compared,
resulting in an approximate error of SE = 4.53 % and a standard deviation of SD = 0.97 kg. The actual total
yield, the estimated total yield and the total yield estimated by an expert technician were compared. The error in
the estimation by the technician was SE = 13.74 %, while the errors in the model were SE = 7.22 % and SD = 4083.58 kg. These promising results demonstrate the potential of the present technique to provide yield estimates for citrus fruits or even other types of fruit.
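To make the detection step concrete, the sketch below shows how a Faster R-CNN detector can be loaded and run on a UAV image crop with torchvision; the image path, score threshold and the suggestion of a two-class fine-tuned model are illustrative assumptions, not the authors' trained model.

```python
# Faster R-CNN inference sketch using torchvision; the image file and
# score threshold are illustrative assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained backbone; for fruit detection it would be fine-tuned on a
# custom labelled dataset (e.g., two classes: background and orange).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("uav_tree_crop.jpg").convert("RGB")  # hypothetical image
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

score_threshold = 0.5
kept = predictions["scores"] >= score_threshold
boxes = predictions["boxes"][kept]
print(f"Detected fruits: {len(boxes)}")
```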
Development and assessment of AI models based on deep learning algorithms to determine agronomic traits in fruit tree orchards and field crops
The scientific community has claimed that fifty percent more food will be needed by 2050 for a world population of close to 10 billion, a looming global crisis that raises the question of whether the global food production system is prepared for these changes. To face these challenges, new higher-yielding crop varieties with resistance or tolerance to a wide spectrum of environmental stresses (e.g., drought) are desirable. At the same time, crop management practices in current farming systems also need to be improved. Agricultural stakeholders consider that these new challenges can be addressed, at least partly, by the adoption of new technologies, especially those related to remote sensing and data management (e.g., big data and artificial intelligence). Advanced sensors and algorithms may provide more accurate and consistent predictions of plant status and quality than the human eye. Implementing autonomous techniques in production systems may provide accurate field data to breeders and growers, potentially increasing yield and quality through well-considered management choices.
In this thesis, the main objective established was the development and assessment of AI models based on deep learning algorithms to determine agronomic traits in fruit tree orchards and field crops, providing support to breeders and growers to meet the above-mentioned challenges. Three papers published in scientific journals (Q1) were included as the main part of the research.
In Chapter 4, Faster R-CNN, a pre-trained model widely used for object detection (e.g., fruits), was trained on a custom dataset labelled to detect oranges in images acquired from UAV (Unmanned Aerial Vehicle) flights. The evaluation of the model in terms of accuracy showed an average standard error (SE) of 6.59 % between visual counting and the model's fruit detection. The detected fruits were converted to a binary mask using color thresholding to perform fruit size estimations. The promising results obtained indicate that this size estimation method can be employed for size discrimination prior to harvest. In addition, a model based on Long Short-Term Memory (LSTM) was trained for yield estimation of individual trees and for orchard yield estimation. The actual and estimated yields per tree were compared, resulting in an approximate standard error of SE = 4.53 % and a standard deviation of SD = 0.97 kg. The actual orchard yield and the orchard yield estimated by the model and by a trained technician were compared. The error in the estimation made by the technician was SE = 13.74 %, while the model errors were SE = 7.22 % and SD = 4083.58 kg.
In Chapter 5, a Region-Convolutional Neural Network was trained to detect and count the number of apple fruits on individual trees located on the orthomosaic built from RGB images taken from a UAV. The results obtained with the proposed approach were compared with the apple counts made in situ by an agrotechnician, and an R2 value of 0.86 was obtained (MAE: 10.35 and RMSE: 13.56). As only part of each tree's fruits was visible in the top-view images, linear regression was used to estimate the total number of apples on each tree. An R2 value of 0.80 (MAE: 128.56 and RMSE: 130.56) was achieved. With the number of fruits detected and the tree position coordinates, two shapefiles were generated using a Python script implemented in Google Colab. The point shapefile layer was used to display two yield maps: one with the number of fruits per tree and another with the total number of fruits per tree row.
Finally, in Chapter 6, a novel model based on artificial intelligence algorithms and nadir-view red-green-blue (RGB) images acquired with a terrestrial High-Throughput Field Phenotyping Platform (HTFPP) is presented. The model mixes numerical data collected in a wheat breeding field and visual features extracted from the images to make rapid and accurate leaf area index (LAI) estimations. Model-based LAI estimations were validated against LAI measurements determined non-destructively using an allometric relationship obtained in this study. The model performance was also compared with LAI estimates obtained by another classical indirect method based on bottom-up digital hemispherical photographs (DHPs), which performs LAI estimations based on gap fraction theory. Model-based LAI estimations were highly correlated with ground-truth LAI. The model performance was slightly better than that of the hemispherical image-based method, which tended to underestimate LAI.
The results obtained in the three crops showed great potential in terms of estimating yield, fruit size and LAI. These results allow us to affirm that fruit growers and plant breeders can benefit from the implementation of these technologies in their commercial and experimental fields to maximize outputs via optimized orchard and breeding cycle management.
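As an illustration of the extrapolation step described for Chapter 5, the sketch below fits a simple linear regression from counts of fruits visible in the top-view imagery to hand-counted totals per tree; the numbers are invented for the example and are not the thesis data.

```python
# Sketch of the visible-count -> total-count extrapolation with a linear
# regression; the counts below are invented, not the thesis data.
import numpy as np
from sklearn.linear_model import LinearRegression

visible_counts = np.array([[35], [52], [48], [60], [41]])   # detected in orthomosaic
total_counts = np.array([140, 210, 195, 240, 165])          # counted in situ

reg = LinearRegression().fit(visible_counts, total_counts)
print(f"R^2 on the toy data: {reg.score(visible_counts, total_counts):.2f}")

# Estimate the total number of apples on a new tree from its visible count.
print(reg.predict(np.array([[55]])))
```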
Intelligent Fruit Yield Estimation for Orchards Using Deep Learning Based Semantic Segmentation Techniques—A Review
Article number 684328
Smart farming employs intelligent systems across every domain of agriculture to obtain sustainable economic growth with the available resources using advanced technologies. Deep Learning (DL) is a sophisticated artificial neural network architecture that provides state-of-the-art results in smart farming applications. One of the main tasks in this domain is yield estimation. Manual yield estimation faces many hurdles: it is labor-intensive, time-consuming and produces imprecise results. These issues motivate the development of intelligent fruit yield estimation systems that offer more benefits to farmers when deciding on harvesting, marketing, etc. Semantic segmentation combined with DL gives promising results in fruit detection and localization by performing pixel-based prediction. This paper reviews the literature employing various techniques for fruit yield estimation using DL-based semantic segmentation architectures. It also discusses the challenging issues that arise during intelligent fruit yield estimation, such as sampling, collection, annotation and data augmentation, fruit detection, and counting. Results show that fruit yield estimation employing DL-based semantic segmentation techniques performs better than earlier techniques because of the human-like cognition incorporated into the architecture. Future directions, such as customizing DL architectures for smartphone applications to predict yield and developing more comprehensive models that handle challenging situations such as occlusion, overlapping and illumination variation, are also discussed.
Ministerio de Economía y Competitividad (España). CEI-15-AGR278, US-126367
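As a minimal illustration of the pixel-based prediction these architectures perform, the sketch below runs a torchvision DeepLabV3 model and derives a per-pixel class mask; a real fruit-yield pipeline would fine-tune such a network on annotated fruit masks, which is only noted in the comments here.

```python
# Pixel-wise semantic segmentation sketch with torchvision's DeepLabV3.
# The pretrained weights cover generic classes; fruit segmentation would
# require fine-tuning on annotated fruit masks (not shown here).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

image = Image.open("orchard_image.jpg").convert("RGB")  # hypothetical image
with torch.no_grad():
    logits = model(to_tensor(image).unsqueeze(0))["out"]  # [1, C, H, W]

mask = logits.argmax(dim=1).squeeze(0)  # per-pixel class index
print(mask.shape, mask.unique())
```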
A Cloud-Based Environment for Generating Yield Estimation Maps From Apple Orchards Using UAV Imagery and a Deep Learning Technique
Farmers require accurate yield estimates, since they are key to predicting the volume of
stock needed at supermarkets and to organizing harvesting operations. In many cases,
the yield is visually estimated by the crop producer, but this approach is not accurate or
time-efficient. This study presents a rapid sensing and yield estimation scheme using off-the-shelf aerial imagery and deep learning. A Region-Convolutional Neural Network was
trained to detect and count the number of apple fruit on individual trees located on the
orthomosaic built from images taken by the unmanned aerial vehicle (UAV). The results
obtained with the proposed approach were compared with apple counts made in situ by
an agrotechnician, and an R2 value of 0.86 was acquired (MAE: 10.35 and RMSE: 13.56).
As only parts of the tree fruits were visible in the top-view images, linear regression was
used to estimate the number of total apples on each tree. An R2 value of 0.80 (MAE:
128.56 and RMSE: 130.56) was obtained. With the number of fruits detected and the tree coordinates, two shapefiles were generated using a Python script in Google Colab. With this information, two yield maps were displayed: one with information per tree and another with information per tree row. We are confident that these results will help to maximize the crop producers' outputs via optimized orchard management.
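The shapefile step is straightforward to script; the sketch below shows one way it could be done with geopandas in a Colab-style script, using invented tree coordinates, counts and CRS rather than the study's data.

```python
# Sketch of exporting per-tree fruit counts to point shapefiles with
# geopandas; coordinates, counts and the CRS are illustrative.
import geopandas as gpd
from shapely.geometry import Point

coords = [(350100.5, 4570200.3), (350104.2, 4570200.1), (350100.6, 4570196.0)]
data = {"tree_id": [1, 2, 3], "row": [1, 1, 2], "fruits": [142, 168, 151]}

gdf = gpd.GeoDataFrame(
    data,
    geometry=[Point(xy) for xy in coords],
    crs="EPSG:25830",  # example projected CRS (ETRS89 / UTM zone 30N)
)
gdf.to_file("apple_yield_per_tree.shp")

# Aggregate to one record per tree row for the second yield map.
per_row = gdf[["row", "fruits", "geometry"]].dissolve(by="row", aggfunc="sum")
per_row["geometry"] = per_row.geometry.centroid  # one point per row
per_row.to_file("apple_yield_per_row.shp")
```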
Multilayer Data and Artificial Intelligence for the Delineation of Homogeneous Management Zones in Maize Cultivation
Variable rate application (VRA) is a crucial tool in precision agriculture, utilizing platforms such as Google Earth Engine (GEE) to access vast satellite image datasets and employ machine learning (ML) techniques for data processing. This research investigates the feasibility of implementing supervised ML models (random forest (RF), support vector machine (SVM), gradient boosting trees (GBT), and classification and regression trees (CART)) and unsupervised k-means clustering in GEE to generate accurate management zones (MZs). By leveraging Sentinel-2 satellite imagery and yield monitor data, these models calculate vegetation indices to monitor crop health and reveal hidden patterns. The achieved classification accuracy values (0.67 to 0.99) highlight the potential of GEE and ML models for creating precise MZs, enabling subsequent VRA implementation. This leads to enhanced farm profitability, improved natural resource efficiency, and reduced environmental impact.
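As a minimal, hedged illustration of the unsupervised branch of such a workflow, the snippet below uses the Earth Engine Python API to compute NDVI from a Sentinel-2 composite and cluster it with k-means; the field geometry, dates and number of zones are placeholders, and the supervised models (RF, SVM, GBT, CART) would be trained analogously with ee.Classifier.

```python
# Management-zone sketch with the Earth Engine Python API: Sentinel-2 NDVI
# clustered by k-means. Field geometry, dates and zone count are placeholders.
import ee

ee.Initialize()

field = ee.Geometry.Rectangle([-5.95, 37.35, -5.94, 37.36])  # hypothetical field

# Median Sentinel-2 surface-reflectance composite over the season.
s2 = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(field)
    .filterDate("2023-05-01", "2023-09-01")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .median()
)

# NDVI as a simple crop-vigour indicator.
ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")

# Unsupervised k-means clustering into a chosen number of management zones.
training = ndvi.sample(region=field, scale=10, numPixels=2000)
clusterer = ee.Clusterer.wekaKMeans(3).train(training)
zones = ndvi.cluster(clusterer)

print(zones.getInfo()["bands"])
```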