
    Analysis of RGB and Multispectral Cameras on an Unmanned Aerial Vehicle for Maize Classification Using Machine Learning

    This study investigated crop and soil classification using the Random Forest machine learning algorithm applied to red-green-blue (RGB) and multispectral sensor imagery acquired by an unmanned aerial vehicle (UAV). The study area covered two 10 x 10 m subsets of a maize-sown agricultural parcel near Koška. The highest overall accuracy in both subsets was obtained with the combination of the red-edge (RE) band, the near-infrared (NIR) band, and the normalized difference vegetation index (NDVI), reaching 99.8% and 91.8%, respectively. The analysis showed that the RGB camera achieved sufficient accuracy and is an acceptable solution for soil and vegetation classification. Additionally, the multispectral camera and spectral analysis allowed a more detailed assessment, primarily of spectrally similar areas. This procedure thus provides a basis for both crop density calculation and weed detection using unmanned aerial vehicles. To ensure the effectiveness of crop classification in practical applications, the weed areas currently contained in the vegetation class need to be further separated into distinct crop and weed classes.
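
    As an illustration only: a minimal, hypothetical sketch of the kind of classification described above, using NumPy and scikit-learn's RandomForestClassifier. The synthetic band arrays, placeholder labels, and the 70/30 split are assumptions for the example, not details from the study.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    def ndvi(nir, red):
        # Normalized difference vegetation index, guarded against division by zero
        return (nir - red) / np.maximum(nir + red, 1e-6)

    # Hypothetical band rasters (rows x cols) and per-pixel labels (0 = soil, 1 = vegetation)
    rows, cols = 200, 200
    rng = np.random.default_rng(0)
    red = rng.random((rows, cols)).astype(np.float32)
    red_edge = rng.random((rows, cols)).astype(np.float32)
    nir = rng.random((rows, cols)).astype(np.float32)
    labels = (nir > red).astype(np.int8)  # placeholder labels for the sketch

    # Stack RE, NIR and NDVI into an (n_pixels, 3) feature matrix
    features = np.stack([red_edge, nir, ndvi(nir, red)], axis=-1).reshape(-1, 3)
    y = labels.reshape(-1)

    X_train, X_test, y_train, y_test = train_test_split(
        features, y, test_size=0.3, random_state=42, stratify=y)
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print("Overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))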

    Cork oak woodland land-cover types classification: a comparison between UAV sensed imagery and field survey

    This work assesses the use of aerial imagery for vegetation cover characterization in cork oak woodlands. The study was conducted in a cork oak woodland in central Portugal during the summer of 2017. Two supervised classification methods, pixel-based and object-based image analysis (OBIA), were tested using a high spatial resolution image mosaic. Images were captured by an unmanned aerial vehicle (UAV) equipped with a red, green, blue (RGB) camera. Four different vegetation covers were distinguished: cork oak, shrubs, grass and other (bare soil and tree shadow). Results were compared with field data obtained by the point-intercept (PI) method. The comparison reveals the reliability of aerial imagery classification methods in cork oak woodlands. Results show that cork oak was accurately classified at a level of 82.7% with the pixel-based method and 79.5% with OBIA. 96.7% of shrubs were identified by OBIA, whereas there was an overestimation of 21.7% with the pixel-based approach. Grass presents an overestimation of 22.7% with OBIA and 12.0% with the pixel-based method. Limitations arise from using only spectral information in the visible range. Thus, further research with the use of additional bands (vegetation indices or height information) could result in better land-cover type classification.
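
    A hedged sketch of how per-class cover fractions from a pixel-based classification might be compared against point-intercept field estimates; the class labels, the synthetic classified raster, and the field percentages are placeholders, not data from the study.

    import numpy as np

    CLASSES = {0: "cork oak", 1: "shrubs", 2: "grass", 3: "other"}

    def cover_fractions(classified, class_ids):
        # Percentage of pixels assigned to each class in a classified raster
        total = classified.size
        return {cid: 100.0 * np.count_nonzero(classified == cid) / total for cid in class_ids}

    # Hypothetical classified raster (output of a supervised pixel-based classifier)
    rng = np.random.default_rng(1)
    classified = rng.integers(0, 4, size=(500, 500))

    image_cover = cover_fractions(classified, CLASSES)
    field_cover = {0: 30.0, 1: 25.0, 2: 35.0, 3: 10.0}  # placeholder point-intercept estimates

    for cid, name in CLASSES.items():
        diff = image_cover[cid] - field_cover[cid]
        print(f"{name:>9}: image {image_cover[cid]:5.1f}%  field {field_cover[cid]:5.1f}%  diff {diff:+5.1f}%")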

    Coupling UAV and satellite data for tree species identification to map the distribution of Caspian poplar

    Context: Mapping the distribution of species, especially endemic and endangered tree species, is a vital step in the effective planning and execution of conservation programs and monitoring efforts. This task gains even more significance as it directly contributes to forest conservation by highlighting the importance of species diversity. Objectives: Our study objective was to assess the detection accuracy of a specific tree species using different remote sensing sources and approaches. Methods: Initially, individual trees were identified and classified using a canopy height model derived from UAV data. Next, we carried out the classification of satellite data within Google Earth Engine. Lastly, we scaled the UAV-RGB dataset to match the spatial resolution of Sentinel-2, which was then employed to train random forest models using the multispectral data from Sentinel-2. Results: For the UAV data, we achieved overall accuracies of 56% for automatically delineated tree crowns and 83% for manually delineated ones. Regarding the second approach using Sentinel-2 data, the classification in the Noor forest yielded an overall accuracy of 74% and a Kappa coefficient of 0.57, while in the Safrabasteh forest the accuracy was 80% with a Kappa of 0.61. In the third approach, our findings indicate an improvement over the second approach, with the overall accuracy and Kappa coefficient of the classification rising to 82% and 0.68, respectively. Conclusions: This study found that, depending on the purpose and the available resources, satellite and UAV data can be successfully used to identify a specific tree species.
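
    A minimal sketch of the third approach as described: a fine-resolution UAV class map is aggregated to a coarser satellite grid (here by block majority vote, assuming a binary class map) and used to train a Random Forest on Sentinel-2 band values. The block size, band count, and all arrays are synthetic assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    BLOCK = 20  # e.g. 0.5 m UAV pixels aggregated to a 10 m Sentinel-2 grid (assumed ratio)
    rng = np.random.default_rng(2)

    # Hypothetical UAV class map (0 = other, 1 = target tree species) at fine resolution
    uav_classes = rng.integers(0, 2, size=(400, 400))

    # Majority vote within each BLOCK x BLOCK window (binary case: mean > 0.5)
    h, w = uav_classes.shape
    blocks = uav_classes.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).transpose(0, 2, 1, 3)
    labels = (blocks.reshape(h // BLOCK, w // BLOCK, -1).mean(axis=-1) > 0.5).astype(np.int8)

    # Hypothetical Sentinel-2 bands at the coarse grid (n_bands x rows x cols)
    s2 = rng.random((10, h // BLOCK, w // BLOCK)).astype(np.float32)
    X = s2.reshape(10, -1).T
    y = labels.reshape(-1)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
    pred = clf.predict(X_test)
    print("OA:", accuracy_score(y_test, pred), "Kappa:", cohen_kappa_score(y_test, pred))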

    Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review

    Disease diagnosis is one of the major tasks for increasing food production in agriculture. Although precision agriculture (PA) takes less time and provides a more precise application of agricultural activities, the detection of disease using an Unmanned Aerial System (UAS) is a challenging task. Several Unmanned Aerial Vehicles (UAVs) and sensors have been used for this purpose. The UAV platforms and their peripherals have their own limitations in accurately diagnosing plant diseases. Several types of image processing software are available for vignetting correction and orthorectification. The training and validation of datasets are important characteristics of data analysis. Currently, different algorithms and architectures of machine learning models are used to classify and detect plant diseases. These models help in image segmentation and feature extraction to interpret results. Researchers also use the values of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), the Crop Water Stress Index (CWSI), etc., acquired from different multispectral and hyperspectral sensors, as inputs to statistical models to deliver results. There are still various obstacles to the automatic detection of plant diseases, as imaging sensors are limited by their spectral bandwidth, resolution, image background noise, etc. The future of crop health monitoring using UAVs should include a gimbal carrying multiple sensors, large datasets for training and validation, the development of site-specific irradiance systems, and so on. This review briefly highlights the advantages of automatic detection of plant diseases for growers.
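
    For illustration, a short sketch of two indices of the kind mentioned above, using invented values; the CWSI form shown is one common empirical formulation and exact definitions vary between studies and sensors.

    def ndvi(nir: float, red: float) -> float:
        # Normalized Difference Vegetation Index
        return (nir - red) / (nir + red)

    def cwsi(t_canopy: float, t_wet: float, t_dry: float) -> float:
        # Empirical Crop Water Stress Index: 0 = well watered, 1 = fully stressed
        return (t_canopy - t_wet) / (t_dry - t_wet)

    print(f"NDVI = {ndvi(nir=0.45, red=0.08):.2f}")  # healthy canopy gives a high NDVI
    print(f"CWSI = {cwsi(t_canopy=29.0, t_wet=25.0, t_dry=35.0):.2f}")  # moderate stress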

    Using automated vegetation cover estimation from close-range photogrammetric point clouds to compare vegetation location properties in mountain terrain

    In this paper we present a low-cost approach to mapping vegetation cover by means of high-resolution close-range terrestrial photogrammetry. A total of 249 clusters of nine 1 m2 plots each, arranged in a 3 × 3 grid, were set up on 18 summits in Mediterranean mountain regions and in the Alps to capture images for photogrammetric processing and in-situ vegetation cover estimates. This was done with a hand-held pole-mounted digital single-lens reflex (DSLR) camera. Low-growing vegetation was automatically segmented using high-resolution point clouds. For classifying vegetation we used a two-step semi-supervised Random Forest approach. First, we applied an expert-based rule set using the Excess Green index (ExG) to predefine non-vegetation and vegetation points. Second, we applied a Random Forest classifier to further enhance the classification of vegetation points using selected topographic parameters (elevation, slope, aspect, roughness, potential solar irradiation) and additional vegetation indices (Excess Green Minus Excess Red (ExGR) and the vegetation index VEG). For ground cover estimation the photogrammetric point clouds were meshed using Screened Poisson Reconstruction. The relative influence of the topographic parameters on the vegetation cover was determined with linear mixed-effects models (LMMs). Analysis of the LMMs revealed a high impact of elevation, aspect, solar irradiation, and standard deviation of slope. The presented approach goes beyond vegetation cover values based on conventional orthoimages and in-situ vegetation cover estimates from field surveys in that it is able to differentiate complete 3D surface areas, including overhangs, and can distinguish between vegetation-covered and other surfaces in an automated manner. The results of the Random Forest classification confirmed it as suitable for vegetation classification, but the relative feature importance values indicate that the classifier did not leverage the potential of the included topographic parameters. In contrast, our application of LMMs utilized the topographic parameters and was able to reveal dependencies in the two biomes, such as elevation and aspect, which were able to explain between 87% and 92.5% of variance
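
    A hedged sketch of the two-step idea described above: points are pre-labelled with an Excess Green (ExG) rule, and a Random Forest that also sees topographic parameters classifies the remaining points. The thresholds, the synthetic point attributes, and the reduced feature set are assumptions for the example.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def excess_green(r, g, b):
        # ExG = 2g - r - b on chromatic coordinates normalized by R + G + B
        total = np.maximum(r + g + b, 1e-6)
        rn, gn, bn = r / total, g / total, b / total
        return 2 * gn - rn - bn

    rng = np.random.default_rng(3)
    n = 10_000
    # Hypothetical point cloud attributes: color plus per-point topographic parameters
    r, g, b = rng.random(n), rng.random(n), rng.random(n)
    elevation, slope, roughness = rng.random(n), rng.random(n), rng.random(n)

    exg = excess_green(r, g, b)

    # Step 1: expert rule pre-labels only the confident points (illustrative thresholds)
    veg_seed = exg > 0.10
    nonveg_seed = exg < -0.05
    seed_mask = veg_seed | nonveg_seed
    seed_labels = veg_seed[seed_mask].astype(int)

    # Step 2: train on the seed points and predict the remaining, ambiguous points
    features = np.column_stack([exg, elevation, slope, roughness])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features[seed_mask], seed_labels)
    predicted = clf.predict(features[~seed_mask])
    print("ambiguous points classified as vegetation:", int(predicted.sum()))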

    Coffee crop coefficient prediction as a function of biophysical variables identified from RGB UAS images

    Because of the different Brazilian climatic conditions and the different plant conditions, such as the stage of development and even the variety, wide variation may exist in the crop coefficient (Kc) values, both spatially and temporally. Thus, the objective of this study was to develop a methodology to determine the short-term Kc using biophysical parameters of coffee plants detected in images obtained by an Unmanned Aircraft System (UAS). The study was conducted in a Travessia variety coffee plantation. A UAS equipped with a digital camera was used. The images were collected in the field and processed in Agisoft PhotoScan software. The data extracted from the images were used to calculate the biophysical parameters: leaf area index (LAI), leaf area (LA) and Kc. GeoDA software was used for mapping and spatial analysis. The pseudo-significance test was applied with p < 0.05 to validate the statistics. Moran's index (I) was 0.228 for June and 0.286 for May. Estimates of Kc values in June varied between 0.963 and 1.005. In May, the Kc value was 1.05 for 32 blocks. With this study, a methodology was developed that enables the estimation of Kc using remotely generated biophysical crop data.
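
    The study used GeoDA for the spatial analysis; as an illustrative equivalent, the sketch below computes global Moran's I for per-block Kc estimates with libpysal and esda, using synthetic coordinates and values rather than the study's data.

    import numpy as np
    from libpysal.weights import KNN
    from esda.moran import Moran

    rng = np.random.default_rng(4)
    n_blocks = 64
    coords = rng.random((n_blocks, 2)) * 100.0   # hypothetical block centroids (m)
    kc = 0.95 + 0.05 * rng.random(n_blocks)      # hypothetical Kc per block

    w = KNN.from_array(coords, k=8)              # k-nearest-neighbour spatial weights
    w.transform = "r"                            # row-standardize, as is conventional
    mi = Moran(kc, w, permutations=999)
    print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.3f}")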

    Hypertemporal Imaging Capability of UAS Improves Photogrammetric Tree Canopy Models

    Small uncrewed aerial systems (UASs) generate imagery that can provide detailed information regarding condition and change if the products are reproducible through time. Densified point clouds form the basic information for digital surface models and orthorectified mosaics, so variable dense point reconstruction will introduce uncertainty. Eucalyptus trees typically have sparse and discontinuous canopies with pendulous leaves that present a difficult target for photogrammetry software. We examine how spectral band, season, solar azimuth, elevation, and some processing settings impact the completeness and reproducibility of dense point clouds for shrub swamp and Eucalyptus forest canopy. At the study site near solar noon, selecting the near-infrared (NIR) camera increased the projected tree canopy fourfold, and dense point features more than 2 m above ground increased sixfold compared to the red spectral band. NIR imagery improved projected and total dense features two- and threefold, respectively, compared to the default green band imagery. The lowest solar elevation captured (25°) consistently improved canopy feature reconstruction in all spectral bands. Although low solar elevations are typically avoided for radiometric reasons, we demonstrate that these conditions improve the detection and reconstruction of complex tree canopy features in natural Eucalyptus forests. Combining imagery sets captured at different solar elevations improved the reproducibility of dense point clouds between seasons. The total number of dense point cloud features reconstructed increased by almost 10 million points (20%) when NIR imagery from solar noon and low solar elevation captures was combined. It is possible to use agricultural multispectral camera rigs to reconstruct Eucalyptus tree canopy and shrub swamp by combining imagery and selecting appropriate spectral bands for processing.
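
    A hypothetical sketch of one way to compare canopy point reconstruction between two dense point clouds using laspy, counting points more than 2 m above a crude ground reference; the file names and the percentile-based ground proxy are assumptions for illustration, not the authors' workflow.

    import laspy
    import numpy as np

    def canopy_point_count(path: str, height_threshold: float = 2.0) -> int:
        las = laspy.read(path)
        z = np.asarray(las.z)
        ground = np.percentile(z, 1)  # crude ground reference in place of a proper DTM
        return int(np.count_nonzero(z - ground > height_threshold))

    noon = canopy_point_count("dense_nir_solar_noon.las")            # hypothetical file
    combined = canopy_point_count("dense_nir_noon_plus_low_sun.las") # hypothetical file
    print(f"canopy points: noon only {noon:,}, combined {combined:,} "
          f"({100.0 * (combined - noon) / max(noon, 1):+.1f}%)")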