
    Drone and sensor technology for sustainable weed management: a review

    Get PDF
    Weeds are among the most impactful biotic factors in agriculture, causing significant yield losses worldwide. Integrated Weed Management, coupled with the use of Unmanned Aerial Vehicles (drones), enables Site-Specific Weed Management, a highly efficient methodology that is also beneficial to the environment. Weed patches in a cultivated field can be identified by combining image acquisition by drones with further processing by machine learning techniques. Specific algorithms can be trained to manage weed removal by Autonomous Weeding Robot systems via herbicide spraying or mechanical procedures. However, scientific and technical understanding of the specific goals and of the available technology is necessary to advance rapidly in this field. In this review, we provide an overview of precision weed control with a focus on the potential and practical use of the most advanced sensors available on the market. Much effort is still needed to fully understand weed population dynamics and weed-crop competition so as to implement this approach in real agricultural contexts.

    Semantic Segmentation of Weeds and Crops in Multispectral Images by Using a Convolutional Neural Network Based on U-Net

    Get PDF
    A first step in automating weed removal in precision agriculture is the semantic segmentation of crops, weeds, and soil. Deep learning techniques based on convolutional neural networks are successfully applied today, and U-Net is one of the most popular network architectures for semantic segmentation problems. In this article, variants of the U-Net architecture that aggregate residual and recurrent blocks were evaluated to improve its performance. For training and testing, a publicly available dataset of 60 multispectral images with unbalanced pixel classes was used, so augmentation and class-balancing techniques were applied. Experimental results show a slight increase in quality metrics compared to the classic U-Net architecture.
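    As a minimal illustration of the kind of architecture evaluated here, the sketch below builds a small U-Net whose convolutional blocks aggregate a residual connection. It assumes PyTorch and a 4-band multispectral input; the channel counts and depth are illustrative, not the article's exact configuration.

```python
# Minimal residual U-Net sketch (PyTorch). Channel counts, depth, and the
# 4-band input are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (residual aggregation)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1)  # match channels for the sum
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x) + self.skip(x))

class ResUNet(nn.Module):
    """U-Net encoder-decoder with residual blocks; classes: soil, crop, weed."""
    def __init__(self, in_ch=4, n_classes=3):
        super().__init__()
        self.enc1 = ResBlock(in_ch, 32)
        self.enc2 = ResBlock(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ResBlock(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = ResBlock(128, 64)   # 64 upsampled + 64 skip channels
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = ResBlock(64, 32)    # 32 upsampled + 32 skip channels
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

if __name__ == "__main__":
    logits = ResUNet()(torch.randn(1, 4, 128, 128))  # one 4-band 128x128 tile
    print(logits.shape)  # torch.Size([1, 3, 128, 128])
```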

    Early corn stand count of different cropping systems using UAV-imagery and deep learning

    Get PDF
    Optimum plant stand density and uniformity are vital to maximizing corn (Zea mays L.) yield potential. Stand density can be assessed shortly after seedlings begin to emerge, allowing for timely replant decisions. Conventional methods for evaluating an early plant stand rely on manual measurement and visual observation, which are time consuming, subjective because of the small sampling areas used, and unable to capture field-scale spatial variability. This study evaluated the feasibility of an unmanned aerial vehicle (UAV)-based imaging system for estimating early corn stand count in three cropping systems (CS) with different tillage and crop rotation practices. A UAV equipped with an on-board RGB camera was used to collect imagery of corn seedlings (~14 days after planting) in each CS: minimum-till corn-soybean rotation (MTCS), no-till corn-soybean rotation (NTCS), and no-till corn-corn rotation with cover crop implementation (NTCC). An image processing workflow based on a deep learning (DL) model, U-Net, was developed for plant segmentation and stand count estimation. Results showed that the DL model performed best in segmenting seedlings in MTCS, followed by NTCS and NTCC. Similarly, accuracy for stand count estimation was highest in MTCS (R2 = 0.95), followed by NTCS (0.94) and NTCC (0.92). Differences by CS were related to the amount and distribution of soil surface residue cover, with increasing residue generally reducing the performance of the proposed method. Thus, UAV imagery and DL modeling are feasible for estimating early corn stand count, though performance is influenced by soil and crop management practices.
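    The counting step that follows segmentation can be illustrated with a short sketch: given a binary seedling mask (e.g., thresholded U-Net output), seedlings are counted as connected components above a minimum area, which suppresses small residue fragments. The area threshold below is an assumed placeholder, not a value from the study.

```python
# Sketch of stand counting from a binary seedling mask (e.g., U-Net output).
# The min_area_px threshold is an illustrative assumption for rejecting
# small residue/noise blobs, not a value from the study.
import numpy as np
from scipy import ndimage

def stand_count(mask: np.ndarray, min_area_px: int = 50) -> int:
    """Count seedlings as connected components larger than min_area_px."""
    labeled, n = ndimage.label(mask > 0)  # 4-connectivity by default in 2D
    areas = ndimage.sum(mask > 0, labeled, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area_px))

if __name__ == "__main__":
    mask = np.zeros((100, 100), dtype=np.uint8)
    mask[10:20, 10:20] = 1    # one seedling-sized blob (100 px)
    mask[50:52, 50:52] = 1    # one noise blob (4 px), filtered out
    print(stand_count(mask))  # 1
```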

    Quantifying corn emergence using UAV imagery and machine learning

    Get PDF
    Corn (Zea mays L.) is one of the most important crops in the United States for animal feed, ethanol production, and human consumption. To maximize final corn yield, one critical factor is improving corn emergence uniformity both temporally (emergence date) and spatially (plant spacing). Conventionally, emergence uniformity is assessed through visual observation by farmers at selected small plots taken to represent the whole field, but this is limited by the time and labor required. With advances in unmanned aerial vehicle (UAV)-based imaging technology and image processing techniques powered by machine learning (ML) and deep learning (DL), a more automatic, non-subjective, precise, and accurate field-scale assessment of emergence uniformity becomes possible. Previous studies demonstrated successful assessment of crop emergence uniformity using UAV imagery, specifically at fields with a simple soil background. No research has investigated the feasibility of UAV imagery for corn emergence assessment in conservation agriculture fields that are covered with cover crops or residues to improve soil health and sustainability. The overall goal of this research was to develop a fast and accurate method for the assessment of corn emergence using UAV imagery and ML and DL techniques. This information is essential for early and in-season decision making in corn production as well as for agronomy research. The research comprised three main studies: Study 1, quantifying corn emergence date using UAV imagery and an ML model; Study 2, estimating corn stand count in different cropping systems (CS) using UAV images and DL; and Study 3, estimating and mapping corn emergence under different planting depths. Two case studies extended Study 3 to field-scale applications by relating emergence uniformity derived from the developed method to planting depth treatments and estimating final yield. For all studies, the primary imagery data were collected using a consumer-grade UAV equipped with a red-green-blue (RGB) camera at a flight height of approximately 10 m above ground level. The imagery had a ground sampling distance (GSD) of 0.55-3.00 mm pixel-1, sufficient to detect small seedlings. In addition, a UAV multispectral camera was used to capture corn plants at early growth stages (V4, V6, and V7) in the case studies to extract plant reflectance (vegetation indices, VIs) as indicators of plant growth variation. Random forest (RF) ML models were used to classify corn emergence date as days after emergence (DAE) at the time of assessment and to estimate yield. The DL models U-Net and ResNet18 were used, respectively, to segment corn seedlings from UAV images and to estimate emergence parameters, including plant density, average DAE (DAEmean), and plant spacing standard deviation (PSstd). Results from Study 1 indicated that individual corn plant quantification using UAV imagery and an RF ML model achieved moderate classification accuracies of 0.20-0.49, which increased to 0.55-0.88 when DAE classification was relaxed to a 3-day window. In Study 2, the precision of image segmentation by the U-Net model was ≥ 0.81 for all CS, resulting in high accuracies in estimating plant density (R2 ≥ 0.92; RMSE ≤ 0.48 plants m-1).
The ResNet18 model in Study 3 estimated emergence parameters with high accuracies (0.97, 0.95, and 0.73 for plant density, DAEmean, and PSstd, respectively). Case studies showed that crop emergence maps and field evaluations indicated the expected trend of decreasing plant density and DAEmean with increasing planting depth, and the opposite trend for PSstd. However, mixed trends were found for emergence parameters among planting depths at different replications and across the N-S direction of the fields. For yield estimation, emergence data alone showed no relation with final yield (R2 = 0.01, RMSE = 720 kg ha-1). The combination of VIs from all growth stages estimated yield with an R2 of only 0.34 and an RMSE of 560 kg ha-1. In summary, this research demonstrated the success of UAV imagery and ML/DL techniques in assessing and mapping corn emergence in fields practicing all or some components of conservation agriculture. The findings provide insights for future agronomic and breeding studies by enabling field-scale evaluations of crop emergence as affected by treatments and management, and by relating emergence assessment to final yield. In addition, these emergence evaluations may be useful to commercial companies needing to justify new precision planting technologies by relating them to crop performance. For commercial crop production, more comprehensive emergence maps (in terms of temporal and spatial uniformity) will help in making better replanting or early management decisions. Further enhancement of the methods, such as validation studies in different locations and years and the development of interactive frameworks, will establish a more automatic, robust, precise, accurate, and 'ready-to-use' approach for estimating and mapping crop emergence uniformity.
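    As a minimal illustration of the Study 1 setup, the sketch below trains a random forest to classify DAE from per-plant image features and scores both exact-day accuracy and the relaxed 3-day window (taken here as a prediction within ±1 day). The features and data are synthetic placeholders, not the dissertation's dataset.

```python
# Sketch of classifying days-after-emergence (DAE) with a random forest,
# assuming per-plant image features have already been extracted. Features
# and labels here are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))      # e.g., plant area, canopy cover, VIs
y = rng.integers(0, 10, size=600)  # DAE label, 0-9 days (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

exact = np.mean(pred == y_te)               # exact-day accuracy
window = np.mean(np.abs(pred - y_te) <= 1)  # +/- 1 day = 3-day window
print(f"exact-day: {exact:.2f}, 3-day window: {window:.2f}")
```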

    A Review on Deep Learning in UAV Remote Sensing

    Full text link
    Deep Neural Networks (DNNs) learn representations from data with impressive capability and have brought important breakthroughs in processing images, time series, natural language, audio, video, and more. In the remote sensing field, surveys and literature reviews specifically covering applications of DNN algorithms have been conducted to summarize the volume of information produced in its subfields. Recently, Unmanned Aerial Vehicle (UAV)-based applications have dominated aerial sensing research. However, a literature review combining both the "deep learning" and "UAV remote sensing" themes had not yet been conducted. The motivation for our work was to present a comprehensive review of the fundamentals of Deep Learning (DL) applied to UAV-based imagery. We focus mainly on describing the classification and regression techniques used in recent applications with UAV-acquired data. To that end, a total of 232 papers published in international scientific journal databases were examined. We gathered the published material and evaluated its characteristics regarding application, sensor, and technique used. We discuss how DL presents promising results and its potential for processing tasks associated with UAV-based image data. Lastly, we project future perspectives, commenting on prominent DL paths to be explored in the UAV remote sensing field. Our review is a reader-friendly introduction to and summary of the state of the art in UAV-based image applications with DNN algorithms across diverse subfields of remote sensing, grouped into environmental, urban, and agricultural contexts.

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Get PDF
    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, to reduce biodiversity loss, and to preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agricultural practices. Earth observation data and in situ and proxy remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing results, among other topics.

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Get PDF
    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detection must run in real time to allow vehicles to actuate and avoid collisions. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve obstacle detection and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and to fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform runs the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared to the state-of-the-art object detector Faster R-CNN in an agricultural use case, DeepAnomaly detects humans better and at longer ranges (45-90 m), with a smaller memory footprint and 7.3 times faster processing. Its low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar, and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, plus GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents several scientific contributions to state-of-the-art perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures for multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms demonstrated in an end-to-end real-time detection system. The contributions of this thesis address and solve critical issues in camera-based perception systems, which are essential to making autonomous vehicles in agriculture a reality.
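    The fusion step can be illustrated with a minimal sketch: each sensor's detections are converted by an inverse sensor model into occupancy probabilities and accumulated in a log-odds grid. The grid size and per-sensor probabilities below are illustrative assumptions, not TractorEYE's actual sensor models.

```python
# Sketch of fusing detections from several sensors into an occupancy grid
# via inverse sensor models and log-odds updates. Grid size and the
# per-sensor hit/miss probabilities are illustrative assumptions.
import numpy as np

def logodds(p: float) -> float:
    return np.log(p / (1.0 - p))

GRID = np.zeros((100, 100))  # log-odds, 0 = unknown (p = 0.5)

def update(cell, p_occ):
    """Bayesian update of one cell with an inverse-sensor-model probability."""
    GRID[cell] += logodds(p_occ)

# Each sensor reports (cell, p_occupied) from its own inverse sensor model.
camera_detections = [((40, 52), 0.9), ((41, 52), 0.8)]  # e.g., human detected
lidar_detections  = [((40, 52), 0.7), ((10, 10), 0.3)]  # p < 0.5 = free evidence

for cell, p in camera_detections + lidar_detections:
    update(cell, p)

prob = 1.0 - 1.0 / (1.0 + np.exp(GRID))  # log-odds back to probabilities
print(f"fused p(occupied) at (40, 52): {prob[40, 52]:.2f}")
```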

    Artificial Neural Networks in Agriculture

    Get PDF
    Modern agriculture must combine high production efficiency with high quality of the products obtained. This applies to both crop and livestock production. To meet these requirements, advanced methods of data analysis are increasingly used, including methods derived from artificial intelligence. Artificial neural networks (ANNs) are among the most popular tools of this kind. They are widely used in solving various classification and prediction tasks, and for some time also in the broadly defined field of agriculture, where they can form part of precision farming and decision support systems. Artificial neural networks can replace classical methods of modelling many problems and are one of the main alternatives to classical mathematical models. Their spectrum of applications is very wide: for a long time now, researchers from all over the world have been using these tools to support agricultural production, making it more efficient and helping to provide the highest-quality products possible.

    Multi-Projective Camera-Calibration, Modeling, and Integration in Mobile-Mapping Systems

    Get PDF
    Optical systems are vital parts of many modern systems, such as mobile mapping systems, autonomous cars, unmanned aerial vehicles (UAVs), and game consoles. Multi-camera systems (MCS) are commonly employed for precise mapping, including aerial and close-range applications. In the first part of this thesis, a simple and practical calibration model and calibration scheme for multi-projective cameras (MPCs) are presented. The calibration scheme is enabled by implementing a camera test field equipped with customized coded targets as FGI's camera calibration room. The first hypothesis was that a test field is necessary to calibrate an MPC. Two commercially available MPCs with 6 and 36 cameras were successfully calibrated in FGI's calibration room. The calibration results suggest that the proposed model estimates the parameters of the MPCs with high geometric accuracy and reveals their internal structure. In the second part, the applicability of an MPC calibrated with the proposed approach was investigated in a mobile mapping system (MMS). The second hypothesis was that a system calibration is necessary to achieve high geometric accuracy in a multi-camera MMS. The MPC model was updated to include mounting parameters with respect to the GNSS and IMU, and a system calibration scheme for an MMS was proposed. The results showed that the proposed system calibration approach produced accurate results by direct georeferencing of multi-images in an MMS. Geometric assessments suggested that centimeter-level accuracy is achievable with the proposed approach. A novel correspondence map for MPCs is demonstrated that helps to create metric panoramas. In the third part, the problem of real-time trajectory estimation of a UAV equipped with a projective camera was studied. The main objective of this part was to address the problem of real-time monocular simultaneous localization and mapping (SLAM) of a UAV. An angular framework was introduced to address the gimbal-lock singularity. The results suggest that the proposed solution is an effective and rigorous monocular SLAM for aerial cases where the object is near-planar. In the last part, the problem of tree species classification using a UAV equipped with hyperspectral and RGB cameras was studied. The objective of this study was to investigate different aspects of precise tree species classification by employing state-of-the-art methods. A 3D convolutional neural network (3D-CNN) and a multilayer perceptron (MLP) were proposed and compared. Both classifiers were highly successful, with the 3D-CNN superior in performance. The classification results were the most accurate published in comparison with other works.
    [Translated from Finnish] Optical imaging devices play a central role in modern computer-vision-based systems such as autonomous cars, unmanned aerial vehicles (UAVs), and game consoles. Such applications typically employ multi-camera systems. The first part of the dissertation develops a simple and practical mathematical model and calibration method for multi-camera systems. Coded targets are artificial images that can be printed, for example, on A4 sheets of paper and measured automatically by computer algorithms. The mathematical model is determined using a 3D camera calibration room in which the developed coded targets are installed. Two commercial multi-camera systems, consisting of 6 and 36 individual cameras, were successfully calibrated with the proposed method. The results showed that the method produced accurate estimates of the geometric parameters of the multi-camera systems and that the estimated parameters corresponded well to the internal structure of the cameras. The second part investigated the use of a multi-camera system calibrated with the proposed method for measurements in a mobile mapping system (MMS). The goal was to develop and study mapping measurements of high geometric accuracy. The multi-camera model was extended with parameters relating to the positioning and attitude sensors of the navigation equipment (GNSS/IMU), and a system calibration method was proposed for the mobile mapping system. With the calibrated system, centimeter-level accuracy was achieved in direct georeferencing measurements. A correspondence map for multi-images was also presented, enabling the creation of metric panoramas from the multi-camera system's images. The third part studied real-time trajectory estimation of a UAV using a single-camera approach. The main objective was to develop a real-time monocular simultaneous localization and mapping (SLAM) method. A matching method based on multi-resolution image pyramids and propagating rectangular regions was proposed, which lowered the cost of matching while preserving matching accuracy. A new angular system was implemented to handle the gimbal-lock situation. The results showed that the proposed solution was effective and accurate in situations where the object is nearly planar, and performance evaluation showed that the method met the time and accuracy targets set for real-time UAV trajectory estimation. The last part studied tree species classification using a UAV equipped with hyperspectral and RGB cameras. The goal was to investigate the use of new machine learning methods for accurate tree species classification and to compare the performance of hyperspectral and RGB data. A 3D convolutional neural network (3D-CNN) and a multilayer perceptron (MLP) were compared. Both classifiers produced good classification accuracy, but the 3D-CNN produced the most accurate results, better than previously published results on similar data. The combination of hyperspectral and RGB data yielded the best accuracy, but the RGB camera alone also produced accurate results and is an affordable and efficient data source for many classification applications.
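    The pose chaining behind direct georeferencing in such a system can be sketched briefly: each camera's calibrated mounting transform is composed with the GNSS/IMU body pose to place every image directly in the world frame. The transforms below are placeholder values, not calibration results.

```python
# Sketch of direct georeferencing for a multi-camera rig: chain each
# camera's calibrated mounting transform with the GNSS/IMU body pose.
# The 4x4 transforms below are placeholder values, not calibration results.
import numpy as np

def make_T(R: np.ndarray, t) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Body pose in the world frame, from GNSS/IMU (placeholder values).
T_world_body = make_T(np.eye(3), [500000.0, 6700000.0, 120.0])

# Mounting parameters per camera, from system calibration (placeholders).
T_body_cam = {
    "cam0": make_T(np.eye(3), [0.10, 0.00, -0.25]),
    "cam1": make_T(np.eye(3), [-0.10, 0.00, -0.25]),
}

for name, T_bc in T_body_cam.items():
    T_world_cam = T_world_body @ T_bc  # compose: world <- body <- camera
    print(name, "center in world frame:", T_world_cam[:3, 3])
```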