4 research outputs found

    Automatic monitoring of maize seedling growth using unmanned aerial vehicle-based RGB imagery

    Accurate and rapid monitoring of maize seedling growth is critical for early breeding decisions, field management, and yield improvement. However, the number and uniformity of seedlings are conventionally determined by manual evaluation, which is inefficient and unreliable. In this study, we propose an automatic assessment method for maize seedling growth using unmanned aerial vehicle (UAV) RGB imagery. Firstly, high-resolution images of maize at the early and late seedling stages (before and after the third leaf) were acquired using the UAV RGB system. Secondly, the maize seedling center detection index (MCDI) was constructed, significantly enhancing the color contrast between young and old leaves and facilitating the segmentation of maize seedling centers. Weed noise was then removed by morphological processing and a dual-threshold method, and maize seedlings were extracted using the connected component labeling algorithm. Finally, the emergence rate, canopy coverage, and seedling uniformity in the field at the seedling stage were calculated and analyzed in combination with the number of seedlings. The results revealed that our approach showed good performance for maize seedling counting, with an average R² greater than 0.99 and an F1 score greater than 98.5%. The estimation accuracies at the third leaf stage (V3) for the mean emergence rate and the mean seedling uniformity were 66.98% and 15.89%, respectively. The estimation accuracies at the sixth leaf stage (V6) for the mean seedling canopy coverage and the mean seedling uniformity were 32.21% and 8.20%, respectively. Our approach provides automatic, per-plot monitoring of maize growth during the early growth stages and demonstrates promising performance for precision agriculture in seedling management.
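
    The abstract describes a step-by-step pipeline: a color index (MCDI) to highlight seedling centers, thresholding, morphological cleanup with a dual threshold to suppress weed noise, connected-component labeling to count plants, and plot-level statistics. A minimal Python/OpenCV sketch of such a pipeline is given below. The MCDI formula is not stated in the abstract, so a generic Excess Green index stands in for it; the area thresholds, kernel size, and function name are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def count_seedlings(rgb_path, min_area=40, max_area=5000):
    """Sketch of an index -> threshold -> cleanup -> labeling pipeline."""
    bgr = cv2.imread(rgb_path).astype(np.float32)
    b, g, r = cv2.split(bgr)

    # Stand-in greenness index (Excess Green); the paper's MCDI is not
    # given in the abstract, so this is only a placeholder.
    exg = 2.0 * g - r - b
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu threshold separates vegetation from the soil background.
    _, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening removes small specks, mimicking the paper's
    # morphological noise-removal step.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Connected-component labeling; a dual area threshold discards blobs that
    # are too small (residual weeds/noise) or too large (merged canopy).
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]            # skip the background label
    seedlings = int(np.sum((areas >= min_area) & (areas <= max_area)))

    # Canopy coverage: fraction of vegetation pixels in the plot image.
    coverage = float(np.count_nonzero(mask)) / mask.size
    return seedlings, coverage
```

    Emergence rate would then follow from dividing the seedling count by the number of seeds sown in the plot, and seedling uniformity from the spread of per-plant areas or spacings, as in the abstract's final step.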

    Capsule Networks for Object Detection in UAV Imagery

    Recent advances in Convolutional Neural Networks (CNNs) have attracted great attention in remote sensing due to their high capability to model the high-level semantic content of Remote Sensing (RS) images. However, CNNs do not explicitly retain the relative position of objects in an image and, thus, the effectiveness of the obtained features is limited for complex object detection problems. To address this problem, in this paper we introduce Capsule Networks (CapsNets) for object detection in Unmanned Aerial Vehicle-acquired images. Unlike CNNs, CapsNets extract and exploit information about objects' relative positions across several layers, which enables parsing crowded scenes with overlapping objects. Experimental results obtained on two datasets for car and solar panel detection show that CapsNets provide object detection accuracies similar to state-of-the-art deep models at significantly reduced computational time. This is because CapsNets emphasize dynamic routing instead of depth.
    EC/H2020/759764/EU/Accurate and Scalable Processing of Big Data in Earth Observation/BigEart
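
    The "dynamic routing" that the abstract credits for CapsNets' efficiency is the routing-by-agreement procedure of Sabour et al. (2017). A minimal NumPy sketch of that routing step is shown below; the paper's detection architecture, prediction-vector computation, and hyperparameters are not described in the abstract, so the shapes, iteration count, and function names here are assumptions for illustration only.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule non-linearity: keeps the vector's orientation, maps its norm into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement between a lower and an upper capsule layer.

    u_hat: prediction vectors of shape (n_lower, n_upper, dim_upper), i.e. each
    lower-level capsule's prediction of each upper-level capsule's output.
    """
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                       # routing logits

    for _ in range(n_iters):
        # Coupling coefficients: softmax of the logits over the upper capsules.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions, squashed to give the upper capsule outputs.
        s = np.einsum('ij,ijk->jk', c, u_hat)
        v = squash(s)
        # Raise the logit wherever a prediction agrees with the current output.
        b = b + np.einsum('ijk,jk->ij', u_hat, v)

    return v                                               # (n_upper, dim_upper)
```

    The agreement update is what lets the network keep track of objects' relative poses without simply stacking more layers, which is the property the abstract relies on for crowded scenes with overlapping objects.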

    An investigation of change in drone practices in broadacre farming environments

    The application of drones in broadacre farming is influenced by novel and emergent factors. Drone technology is subject to legal, financial, social, and technical constraints that affect the Agri-tech sector. This research showed that emerging improvements to drone technology influence the analysis of precision data, resulting in disparate and inconsistently flawed Agri-tech outputs. The novelty of this thesis is that it examines changes in drone technology through the lens of entropic decay, considering how an organisation's resources can be planned and controlled to minimise harmful effects through systems change. The rapid advances in drone technology have outpaced the systematic approaches that precision agriculture insists are the backbone of reliable ongoing decision-making. Different models and brands capture data from different heights, at different times of day, and with flights at differing velocities (a simple ground-sampling comparison, sketched after this abstract, makes the mismatch concrete). Drone data are in a state of decay: no longer directly comparable to past years' harvest and crop data, they are now mixed into a blended environment of brand-specific variations in height, image resolution, air speed, and optics. This thesis investigates the rapid emergence of image-capture technology in drones and the corresponding shift away from the established measurements and comparisons used in precision agriculture. New capabilities are applied in an ad hoc manner as different features are rushed to market, while existing practices are subtly changed to suit individual technology capabilities. The result is a loose collection of technically superior drone imagery with a corresponding mismatch of year-to-year agricultural data. The challenge is to distinguish uniformly accepted technological advances from market-driven changes that demonstrate entropic decay. The goal of this research is to identify best-practice approaches for UAV deployment in broadacre farming. This study investigated the benefits of a range of characteristics to optimise data collection technologies. It identified widespread discrepancies demonstrating a broadening decay in precision agriculture and productivity. The pace of drone development has diverged so far from mainstream agricultural practice that yearly crop data, once a reliable reference, no longer share statistically comparable metrics. Whilst farmers have relied for decades on satellite data captured with the same optics, time of day, and flight paths, the innovations that drive increasingly capable drones are also highly problematic, since they render each past year's crop metrics outdated in terms of sophistication, detail, and accuracy. In five years, the standardised height for recording crop data has changed four times. New innovations, coupled with new rules and regulations, have altered the once reliable practice of recording crop data. In addition, the cost of entry in adopting new drone technology varies enough that agriculturalists are acquiring multiple UAV platforms with variable camera and sensor settings and vastly different approaches to flight records, data management, and recorded indices. Unless this problem is addressed, the true benefits of optimisation through machine learning cannot improve harvest outcomes for broadacre farming.
The key findings of this research reveal a complex, constantly morphing environment that seeks to build digital trust and reliability in an evolving global market, in the face of rapidly changing technology, regulations, standards, networks, and knowledge. The once reliable discipline of precision agriculture is now a fractured melting pot of "first to market" innovations and highly competitive sellers. The future of drone technology is destined for further uncertainty as it struggles to establish a level of maturity that can return broadacre farming to consistent global outcomes.
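
The comparability problem described above can be made concrete with the standard photogrammetric ground sample distance (GSD) relation, which ties the ground footprint of a pixel to flight height, sensor size, and optics. The sketch below uses hypothetical camera and flight figures, not values from the thesis; only the formula itself is standard.

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm, flight_height_m, image_width_px):
    """Ground sample distance in cm/pixel from standard photogrammetric geometry."""
    return (sensor_width_mm * flight_height_m * 100.0) / (focal_length_mm * image_width_px)

# Two hypothetical surveys of the same paddock in consecutive seasons,
# flown with different platforms, heights, and optics.
gsd_a = ground_sample_distance(13.2, 8.8, 120, 5472)   # larger sensor, 120 m flight
gsd_b = ground_sample_distance(6.3, 4.5, 60, 4000)     # smaller sensor, 60 m flight

print(f"survey A: {gsd_a:.2f} cm/px")   # ~3.3 cm/px
print(f"survey B: {gsd_b:.2f} cm/px")   # ~2.1 cm/px
```

Pixel-based indices and per-plant statistics captured at such different resolutions, heights, and times of day are not directly comparable from one season to the next without explicit resampling or harmonisation, which is the decay the thesis describes.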

    Hyperspectral Image Classification for Remote Sensing

    This thesis is focused on deep learning-based, pixel-wise classification of hyperspectral images (HSI) in remote sensing. Although the many spectral bands of an HSI provide a valuable source of features, dimensionality reduction is often performed as a pre-processing step to reduce the correlation between bands. Most deep learning-based classification algorithms use unsupervised dimensionality reduction methods such as principal component analysis (PCA). However, in order to take advantage of class-discriminatory information in the dimensionality reduction step as well as the power of deep neural networks, we propose a new method that combines a supervised dimensionality reduction technique, principal component discriminant analysis (PCDA), with deep learning. Furthermore, to overcome the common problem of inadequate labeled samples in remote sensing HSI classification, we propose a spectral perturbation method that augments the number of training samples and improves the classification results. Since combining spatial and spectral information can dramatically improve HSI classification, we also propose a new spectral-spatial feature vector in which the neighbors of a target pixel contribute to the spatial information according to their proximity to the dominant edges. To obtain such a proximity measure, we propose a method to compute the distance transform image of the input HSI. We then improve the spatial feature vector by adding extended multi-attribute profile (EMAP) features to it. Classification accuracies demonstrate the effectiveness of our proposed method in generating a powerful, expressive spectral-spatial feature vector.
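
    Two of the abstract's components lend themselves to a brief sketch: the spectral perturbation used to augment scarce labeled samples, and the edge-proximity weighting of a pixel's neighbors in the spectral-spatial feature. The Python sketch below is a loose interpretation of both, assuming a simple per-band Gaussian perturbation and a distance-transform weight; the thesis's exact perturbation model, proximity measure, and parameters are not given in the abstract.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def augment_spectra(X, y, n_copies=3, noise_scale=0.01, rng=None):
    """Spectral-perturbation augmentation (assumed form): replicate each labeled
    spectrum with small per-band random perturbations scaled by the band's spread.
    X: (n_samples, n_bands) spectra, y: (n_samples,) labels."""
    rng = np.random.default_rng() if rng is None else rng
    band_std = X.std(axis=0, keepdims=True)
    X_aug, y_aug = [X], [y]
    for _ in range(n_copies):
        X_aug.append(X + rng.normal(0.0, noise_scale, size=X.shape) * band_std)
        y_aug.append(y)
    return np.vstack(X_aug), np.concatenate(y_aug)

def edge_weighted_spatial_feature(cube, edge_mask, row, col, window=5):
    """Spatial feature for one pixel: a weighted mean spectrum of its window, where
    each neighbor's weight is its distance to the nearest dominant edge (from the
    distance transform of the edge mask), so pixels hugging an edge contribute less.
    cube: (H, W, bands) HSI; edge_mask: boolean (H, W) marking dominant edges."""
    dist = distance_transform_edt(~edge_mask)   # in practice, compute once per image
    h = window // 2
    patch = cube[row - h:row + h + 1, col - h:col + h + 1, :]
    w = dist[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    w /= w.sum() + 1e-8
    return np.einsum('ij,ijk->k', w, patch)
```

    The resulting spatial vector would then be concatenated with the (PCDA-reduced) spectral bands and the EMAP features to form the full spectral-spatial feature the abstract describes.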