
    Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings with heights of up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% when counting the whole plant population in a large sequence of images.
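    The counting step (2) lends itself to a compact illustration. The following is a minimal sketch, not the paper's calibrated algorithm: given a binary vegetation mask produced by the projection step, plants along a row are counted as well-separated peaks in the column-wise projection histogram. The threshold and peak-spacing values are assumptions chosen for the example.

```python
# Minimal sketch of projection-histogram counting on a binary vegetation mask.
import numpy as np
from scipy.signal import find_peaks

def count_plants_by_projection(mask: np.ndarray,
                               min_peak_height: float = 0.2,
                               min_peak_distance: int = 20) -> int:
    """mask: 2-D binary array (1 = vegetation pixel) after orthographic projection."""
    # Column-wise projection: number of vegetation pixels per image column.
    projection = mask.sum(axis=0).astype(float)
    if projection.max() == 0:
        return 0
    projection /= projection.max()          # normalise to [0, 1]
    # Each sufficiently tall, well-separated peak is treated as one plant.
    peaks, _ = find_peaks(projection,
                          height=min_peak_height,
                          distance=min_peak_distance)
    return len(peaks)

# Example with a synthetic mask containing three plant-like blobs.
mask = np.zeros((100, 300), dtype=np.uint8)
for centre in (50, 150, 250):
    mask[30:80, centre - 10:centre + 10] = 1
print(count_plants_by_projection(mask))      # -> 3
```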

    High-Throughput System for the Early Quantification of Major Architectural Traits in Olive Breeding Trials Using UAV Images and OBIA Techniques

    The need for olive farm modernization has encouraged research into more efficient crop management strategies through cross-breeding programs that release new olive cultivars more suitable for mechanization and use in intensive orchards, with high-quality production and resistance to biotic and abiotic stresses. The advancement of breeding programs is hampered by the lack of efficient phenotyping methods to quickly and accurately acquire crop traits such as morphological attributes (tree vigor and vegetative growth habits), which are key to identifying desirable genotypes as early as possible. In this context, a UAV-based high-throughput system for olive breeding program applications was developed to extract tree traits in large-scale phenotyping studies under field conditions. The system consisted of UAV-flight configurations, in terms of flight altitude and image overlaps, and a novel, automatic, and accurate object-based image analysis (OBIA) algorithm based on point clouds, which was evaluated in two experimental trials in the framework of a table olive breeding program, with the aim of determining the earliest date at which tree architectural traits can be suitably quantified. Two training systems (intensive and hedgerow) were evaluated at two very early stages of tree growth: 15 and 27 months after planting. Digital Terrain Models (DTMs) were automatically and accurately generated by the algorithm, and every olive tree was identified, independently of the training system and tree age. The architectural traits, especially tree height and crown area, were estimated with high accuracy in the second flight campaign, i.e., 27 months after planting. Differences in the quality of 3D crown reconstruction were found for the growth patterns derived from each training system. These key phenotyping traits could be used in several olive breeding programs, as well as to address some agronomical goals. In addition, the system is cost- and time-optimized, so the requested architectural traits can be provided on the same day as the UAV flights. This high-throughput system may solve the current bottleneck of plant phenotyping, "linking genotype and phenotype," considered a major challenge for crop research in the 21st century, and bring forward the crucial time of decision making for breeders.
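    As an illustration of how tree height and crown area can be derived once photogrammetric surface models are available, here is a rough sketch, not the authors' OBIA implementation: height comes from a canopy height model (DSM minus DTM) and crown area from the count of canopy pixels. The 0.3 m canopy threshold and the 3 cm ground sampling distance are assumptions for the example.

```python
# Sketch: tree height and crown area from a DSM/DTM raster tile around one tree.
import numpy as np

def tree_traits(dsm: np.ndarray, dtm: np.ndarray, gsd_m: float = 0.03,
                canopy_threshold_m: float = 0.3):
    """Return (tree height in m, crown area in m^2) for one tree's raster tile."""
    chm = dsm - dtm                      # canopy height model
    canopy = chm > canopy_threshold_m    # pixels belonging to the crown
    height = float(chm[canopy].max()) if canopy.any() else 0.0
    crown_area = canopy.sum() * gsd_m ** 2
    return height, crown_area

# Toy tile: flat terrain at 100 m elevation with a 1.2 m tall crown in the centre.
dtm = np.full((50, 50), 100.0)
dsm = dtm.copy()
dsm[20:30, 20:30] += 1.2
print(tree_traits(dsm, dtm))             # -> roughly (1.2, 0.09)
```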

    Unmanned Aerial Vehicles (UAVs) in environmental biology: A Review

    Acquiring information about the environment is a key step in every study in the field of environmental biology, at levels ranging from an individual species to the community and biome. However, obtaining information about the environment is frequently difficult because of, for example, phenological timing, the spatial distribution of a species, or the limited accessibility of a particular area for field survey. Moreover, remote sensing technology, which enables observation of the Earth’s surface and is currently very common in environmental research, has many limitations, such as insufficient spatial, spectral and temporal resolution and a high cost of data acquisition. Since the 1990s, researchers have been exploring the potential of different types of unmanned aerial vehicles (UAVs) for monitoring Earth’s surface. The present study reviews recent scientific literature dealing with the use of UAVs in environmental biology. Amongst numerous papers, short communications and conference abstracts, we selected 110 original studies of how UAVs can be used in environmental biology and which organisms can be studied in this manner. Most of these studies concerned the use of UAVs to measure vegetation parameters such as crown height, volume and number of individuals (14 studies) and to quantify the spatio-temporal dynamics of vegetation changes (12 studies). UAVs were also frequently applied to count birds and mammals, especially those living in the water. The analytical part of the study is divided into the following sections: (1) detecting, assessing and predicting threats to vegetation, (2) measuring the biophysical parameters of vegetation, (3) quantifying the dynamics of changes in plants and habitats and (4) population and behaviour studies of animals. Finally, we synthesise this information to show, amongst other things, the advances in environmental biology made possible by UAV applications. Considering that 33% of the studies found and included in this review were published in 2017 and 2018, the number and variety of applications of UAVs in environmental biology are expected to increase in the future.

    Methodology for the Automatic Inventory of Olive Groves at the Plot and Polygon Level

    The aim of this study was to develop and validate a methodology for carrying out olive grove inventories based on open data sources and automatic photogrammetric and satellite image analysis techniques. To do so, tools and protocols were developed that make it possible to automate the capture of images of different characteristics and origins, enable the use of open data sources, and integrate and add metadata to them. These can then be used to develop and validate algorithms that improve the characterization of olive grove surfaces at the plot and cadastral polygon scales. With the proposed system, an inventory of the Andalusian olive grove was automatically carried out at the level of cadastral polygons and provinces, accounting for a total of 1,519,438 hectares and 171,980,593 olive trees. These data were contrasted with various official statistical sources, ensuring their reliability and even identifying some inconsistencies or errors in those sources. Likewise, the capacity of Sentinel-2 satellite images to estimate the canopy cover fraction (FCC) at the cadastral polygon, parcel and 10 × 10 m pixel level was demonstrated and quantified, as was the opportunity to carry out inventories with temporal resolutions of up to approximately 5 days.
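    A per-pixel canopy cover estimate from Sentinel-2 can be sketched very simply; the snippet below is only an illustration of the idea, not the study's calibrated model. It scales NDVI linearly between assumed bare-soil and full-canopy endpoints, which are placeholder values here.

```python
# Rough sketch: per-pixel fractional canopy cover from Sentinel-2 red/NIR reflectance.
import numpy as np

def fractional_canopy_cover(red: np.ndarray, nir: np.ndarray,
                            ndvi_soil: float = 0.15,
                            ndvi_canopy: float = 0.80) -> np.ndarray:
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    fcc = (ndvi - ndvi_soil) / (ndvi_canopy - ndvi_soil)
    return np.clip(fcc, 0.0, 1.0)        # fraction in [0, 1] per 10 x 10 m pixel

red = np.array([[0.12, 0.05]])
nir = np.array([[0.18, 0.45]])
print(fractional_canopy_cover(red, nir))  # sparse cover vs. dense cover
```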

    Detection of Tree Crowns in Very High Spatial Resolution Images

    The requirements for advanced knowledge of forest resources have led researchers to develop efficient methods to provide detailed information about trees. Since 1999, orbital remote sensing has been providing very high resolution (VHR) image data. The new generation of satellites allows individual tree crowns to be visually identified. The increase in spatial resolution has also had a profound effect on image processing techniques and has motivated the development of new object-based procedures to extract information. Tree crown detection has become a major area of research in image analysis, considering the complex nature of trees in an uncontrolled environment. This chapter is subdivided into two parts. Part I offers an overview of the state of the art in computer detection of individual tree crowns in VHR images. Part II presents a new hybrid approach developed by the authors that integrates geometrical-optical modeling (GOM), marked point processes (MPP), and template matching (TM) to individually detect tree crowns in VHR images. The method is presented for two different applications: isolated tree detection in an urban environment and automatic tree counting in orchards, with an average performance rate of 82% for tree detection and above 90% for tree counting in orchards.
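    Of the three components named above, template matching is the easiest to illustrate. The sketch below shows only that component, under simplifying assumptions (a synthetic circular crown template and a greedy suppression of nearby responses); the GOM and MPP parts of the hybrid method are not reproduced.

```python
# Simplified template-matching sketch for crown-like blobs in a grayscale VHR image.
import cv2
import numpy as np

def detect_crowns(gray: np.ndarray, radius: int = 15, score_thr: float = 0.6):
    """gray: 2-D uint8 image. Returns a list of (row, col) crown centres."""
    # Synthetic bright-disc template standing in for a sunlit crown.
    template = np.zeros((2 * radius + 1, 2 * radius + 1), np.uint8)
    cv2.circle(template, (radius, radius), radius, 255, -1)
    response = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    candidates = np.argwhere(response >= score_thr)      # (row, col) of strong matches
    detections = []
    for r, c in candidates:                               # greedy suppression of neighbours
        if all((r - dr) ** 2 + (c - dc) ** 2 > radius ** 2 for dr, dc in detections):
            detections.append((r, c))
    return [(r + radius, c + radius) for r, c in detections]
```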

    Olive Tree Biovolume from UAV Multi-Resolution Image Segmentation with Mask R-CNN

    Olive tree growing is an important economic activity in many countries, mostly in the Mediterranean Basin, Argentina, Chile, Australia, and California. Although recent intensification techniques organize olive groves in hedgerows, most olive groves are rainfed and the trees are scattered (as in Spain and Italy, which account for 50% of the world’s olive oil production). Accurate measurement of tree biovolume is a first step toward monitoring tree performance in olive production and health. In this work, we use one of the most accurate deep learning instance segmentation methods (Mask R-CNN) and unmanned aerial vehicle (UAV) images for olive tree crown and shadow segmentation (OTCS) to further estimate the biovolume of individual trees. We evaluated our approach on images with different spectral bands (red, green, blue, and near infrared) and vegetation indices (normalized difference vegetation index, NDVI, and green normalized difference vegetation index, GNDVI). The performance of red-green-blue (RGB) images was assessed at two spatial resolutions, 3 cm/pixel and 13 cm/pixel, while NDVI and GNDVI images were assessed only at 13 cm/pixel. All trained Mask R-CNN-based models showed high performance in tree crown segmentation, particularly when using the fusion of all datasets in GNDVI and NDVI (F1-measure from 95% to 98%). Comparison of our estimated biovolume with ground truth measurements in a subset of trees showed an average accuracy of 82%. Our results support the use of the NDVI and GNDVI spectral indices for accurate estimation of the biovolume of scattered trees, such as olive trees, in UAV images.
    Funding: Russian Foundation for Basic Research (RFBR) 19-01-00215 and 20-07-00370; European Research Council (ERC), European Commission 647038; Spanish Government RYC-2015-18136; Consejeria de Economia, Conocimiento y Universidad de la Junta de Andalucia P18-RT-1927; DETECTOR A-RNM-256-UGR18; European Research and Development Funds (ERDF)
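    A minimal sketch of the inference side of such a pipeline is given below. It assumes a Mask R-CNN already fine-tuned for olive crown segmentation (a COCO-pretrained torchvision model is only a stand-in and would not recognise olive trees out of the box), and the biovolume formula (crown area times a height derived from shadow length and sun elevation) is a simplified assumption, not the paper's exact model.

```python
# Sketch: Mask R-CNN inference plus a simple crown-area-times-height biovolume proxy.
import math
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def crown_masks(image_chw: torch.Tensor, score_thr: float = 0.7):
    """image_chw: float tensor, shape (3, H, W), values in [0, 1]."""
    with torch.no_grad():
        out = model([image_chw])[0]
    keep = out["scores"] > score_thr
    return out["masks"][keep, 0] > 0.5        # boolean masks, one per detected instance

def biovolume_m3(crown_mask: torch.Tensor, shadow_len_m: float,
                 sun_elev_deg: float, gsd_m: float = 0.13) -> float:
    crown_area = crown_mask.sum().item() * gsd_m ** 2          # m^2
    height = shadow_len_m * math.tan(math.radians(sun_elev_deg))  # m, from the shadow
    return crown_area * height
```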

    Counting dense objects in remote sensing images

    Estimating an accurate count of objects of interest in a given image is a challenging yet important task. Significant efforts have been made to address this problem and great progress has been achieved, yet counting ground objects in remote sensing images has barely been studied. In this paper, we are interested in counting dense objects in remote sensing images. Compared with object counting in natural scenes, this task is challenging due to the following factors: large scale variation, complex cluttered backgrounds and arbitrary orientations. More importantly, the scarcity of data severely limits the development of research in this field. To address these issues, we first construct a large-scale object counting dataset based on remote sensing images, which contains four kinds of objects: buildings, crowded ships in harbors, large vehicles and small vehicles in parking lots. We then benchmark the dataset by designing a novel neural network that generates a density map of an input image. The proposed network consists of three parts, namely a convolution block attention module (CBAM), a scale pyramid module (SPM) and a deformable convolution module (DCM). Experiments on the proposed dataset and comparisons with state-of-the-art methods demonstrate the challenging nature of the proposed dataset and the superiority and effectiveness of our method.
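    The density-map formulation itself is simple to show: each annotated object centre is spread with a Gaussian kernel, and the count is the integral (sum) of the map. The sketch below uses a fixed kernel width as an assumption and does not reproduce the paper's network.

```python
# Minimal sketch of density-map counting from point annotations.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma: float = 4.0) -> np.ndarray:
    """points: iterable of (row, col) object centres; shape: (H, W)."""
    dmap = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        dmap[int(r), int(c)] += 1.0
    return gaussian_filter(dmap, sigma)   # smoothing approximately preserves the total

points = [(10, 10), (10, 40), (40, 25)]
dmap = density_map(points, (64, 64))
print(round(float(dmap.sum())))           # -> 3, the object count
```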

    Semi-Automatic Method for Early Detection of Xylella fastidiosa in Olive Trees Using UAV Multispectral Imagery and Geostatistical-Discriminant Analysis

    Xylella fastidiosa subsp. pauca (Xfp) is one of the most dangerous plant pathogens in the world. Identified in 2013 in olive trees in south-eastern Italy, it is spreading across the Mediterranean countries. The bacterium is transmitted by sap-feeding insects and causes rapid wilting in olive trees. This paper explores the use of an unmanned aerial vehicle (UAV) in combination with a multispectral radiometer for early detection of infection. The study was carried out in three olive groves in the Apulia region (Italy) and involved four drone flights from 2017 to 2019. To classify the Xfp severity level in olive trees at an early stage, a combined method of geostatistics and discriminant analysis was implemented. Cross-validation of the non-parametric classification method gave an overall accuracy of 0.69 and a mean error rate of 0.31; for the early detection class, the accuracy was 0.77 and the misclassification probability 0.23. The results are promising and encourage the application of UAV technology for the early detection of Xfp infection.
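    For readers unfamiliar with discriminant analysis in this setting, here is a deliberately simplified sketch: per-tree multispectral features classified into severity levels with ordinary linear discriminant analysis. The paper combines geostatistics (spatially filtered variables) with discriminant analysis; that spatial step is omitted here, and the feature names and toy values are assumptions.

```python
# Simplified sketch: severity-level classification of olive trees with plain LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy per-tree features: [NDVI, NDRE, green reflectance];
# labels: 0 = healthy, 1 = early infection, 2 = severe symptoms.
X = np.array([[0.85, 0.45, 0.08],
              [0.82, 0.43, 0.09],
              [0.70, 0.35, 0.11],
              [0.68, 0.33, 0.12],
              [0.50, 0.22, 0.15],
              [0.47, 0.20, 0.16]])
y = np.array([0, 0, 1, 1, 2, 2])

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([[0.69, 0.34, 0.11]]))   # most likely the early-infection class
```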

    Use of Remote Imagery and Object-based Image Methods to Count Plants in an Open-field Container Nursery

    In general, the nursery industry lacks an automated inventory control system. Object-based image analysis (OBIA) software and aerial images could be used to count plants in nurseries. The objectives of this research were: 1) to evaluate the effect of unmanned aerial vehicle (UAV) flight altitude and plant canopy separation of container-grown plants on count accuracy using aerial images, and 2) to evaluate the effect of plant canopy shape, presence of flowers, and plant status (living or dead) on counting accuracy of container-grown plants using remote sensing images. Images were analyzed using Feature Analyst® (FA) and an algorithm trained using MATLAB®. Total count error, false positives and unidentified plants were recorded from output images using FA; only total count error was reported for the MATLAB algorithm. For objective 1, images were taken at 6, 12 and 22 m above the ground using a UAV. Plants were placed on black fabric and gravel, and spaced as follows: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. In general, when both methods were considered, total count error was smaller [ranging from -5 (undercount) to 4 (overcount)] when plants were fully separated, with the exception of images taken at 22 m. FA showed a smaller total count error (-2) than MATLAB (-5) when plants were placed on black fabric rather than on gravel. For objective 2, the plan was to continue using the UAV; however, due to the unexpected disruption of GPS-based navigation by heightened solar flare activity in 2013, a boom lift that could provide images on a more reliable basis was used instead. When images obtained using the boom lift were analyzed using FA, there was no difference in the measured variables when an algorithm trained with an image displaying a regular or irregular plant canopy shape was applied to images displaying both plant canopy shapes, even though the canopy of 'Sea Green' juniper is less compact than that of 'Plumosa Compacta'. There was a significant difference in all measured variables between images of flowering and non-flowering plants when non-flowering 'samples' were used to train the counting algorithm and analyzed with FA. No dead plants were counted as living and vice versa when data were analyzed using FA. When the algorithm trained in MATLAB was applied, there was no significant difference in total count errors when plant canopy shape and presence of flowers were evaluated. Based on the combined results of these separate experiments, the FA and MATLAB algorithms appear to be fairly robust when used to count container-grown plants in images taken at the heights specified.
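    Neither the Feature Analyst nor the MATLAB workflow from the study is reproduced below; the sketch only illustrates the underlying idea of counting well-separated container plants by thresholding a simple greenness index and labelling connected components. The threshold and minimum blob size are illustrative assumptions.

```python
# Sketch: counting separated container plants from an RGB aerial image.
import numpy as np
from scipy import ndimage

def count_plants(rgb: np.ndarray, green_excess_thr: float = 20.0,
                 min_pixels: int = 50) -> int:
    """rgb: (H, W, 3) uint8 aerial image with plants on a dark background."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    vegetation = (2.0 * g - r - b) > green_excess_thr   # excess-green threshold
    labels, n = ndimage.label(vegetation)                # connected components
    sizes = ndimage.sum(vegetation, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_pixels))              # ignore tiny noise blobs
```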