
    TRANSFER LEARNING PERFORMANCE FOR REMOTE PASTURELAND TRAIT ESTIMATION IN REAL-TIME FARM MONITORING

    In precision agriculture, knowledge of pastureland forage biomass and moisture content prior to ensiling enables pastoralists to improve silage production. Traditional trait estimation methods relied on hand-crafted vegetation indices, manual measurements, or even destructive sampling; remote sensing coupled with state-of-the-art deep learning can instead exploit a broader spectrum of data, but generally requires large volumes of labelled data, which are lacking in this domain. This work investigates the performance of a range of deep learning algorithms on a small biomass and moisture estimation dataset collected with a compact remote sensing system designed to operate in real time. Our results show that applying transfer learning to Inception ResNet improved the minimum mean absolute percentage error from 45.58% with a basic CNN to 28.07% for biomass, and from 29.33% to 8.03% for moisture content. Models trained from scratch and models optimised for mobile remote sensing applications (MobileNet) failed to produce the same level of improvement.
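    The error metric used in both abstracts, mean absolute percentage error (MAPE), is straightforward to compute. The sketch below is illustrative only, not the papers' implementation; the function name and sample values are assumptions.

```python
def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, expressed as a percentage.

    Assumes no ground-truth value is zero (each error is divided
    by the corresponding true value).
    """
    errors = [abs((t - p) / t) for t, p in zip(y_true, y_pred)]
    return 100.0 * sum(errors) / len(errors)

# Example: predictions off by 10% on each sample give a MAPE of 10.0
print(mape([100.0, 200.0], [90.0, 220.0]))
```

    Lower is better; a MAPE of 8.03% means predictions deviate from the ground-truth moisture values by about 8% on average.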

    Just-in-Time Biomass Yield Estimation with Multi-modal Data and Variable Patch Training Size

    The just-in-time estimation of farmland traits such as biomass yield can aid considerably in the optimisation of agricultural processes. However, data in domains such as precision farming are notoriously expensive to collect, and deep learning driven modelling approaches must maximise performance while acknowledging this reality. In this paper we present a study in which a platform was deployed to collect data from a heterogeneous collection of sensor types, including visual, NIR, and LiDAR sources, to estimate key pastureland traits. In addition to introducing the study itself, we address two key research questions. The first is the trade-off between multi-modal modelling and a more basic image-driven methodology; the second is the investigation of patch-size variability in the image-processing backbone. This second question relates to the fact that individual images of vegetation, and in particular grassland, are texturally rich but can be uniform, enabling subdivision into patches; however, there may be a trade-off between patch size and the number of patches generated. Our modelling used a number of CNN architectural variations built on top of Inception ResNet V2, MobileNet, and shallower custom networks. Using minimum Mean Absolute Percentage Error (MAPE) on the validation set as our metric, we demonstrate a strongest performance of 28.23% MAPE with a hybrid model. A deeper analysis showed that working with fewer but larger patches performs as well as or better than the alternative for truly deep models, hence consuming fewer resources during training.
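    The patch-size trade-off described above can be sketched with a few lines of NumPy; the function name and image shape are illustrative assumptions, not the paper's implementation. Splitting an image into non-overlapping square patches makes the trade-off explicit: doubling the patch side quarters the number of patches produced.

```python
import numpy as np

def extract_patches(img, size):
    """Split an image of shape (H, W, C) into non-overlapping size x size patches.

    Edge regions that do not fill a whole patch are discarded,
    so larger patches yield fewer training samples per image.
    """
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

img = np.zeros((64, 64, 3))           # stand-in for a grassland image
print(len(extract_patches(img, 16)))  # 16 patches of 16x16
print(len(extract_patches(img, 32)))  # 4 patches of 32x32
```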