
    AURORA: autonomous real-time on-board video analytics

    In this paper, we describe the design and implementation of a small, lightweight, low-cost and power-efficient payload system for use in unmanned aerial vehicles (UAVs). The primary application of the payload system is real-time autonomous object detection and tracking in videos taken from a UAV camera. The implemented detection and tracking algorithms utilise Recursive Density Estimation (RDE) and Evolving Local Means (ELM) clustering to detect and track moving objects. Furthermore, experiments are presented which demonstrate that the introduced system is able to detect moving objects through on-board processing and start tracking them in real time, while sending only the important data to a control station located on the ground.
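
    The abstract does not give the exact on-board formulation, so the following is only a minimal numpy sketch of recursive density estimation in its commonly published form: a running mean and a running mean of squared norms are updated recursively, and a low density value flags a sample far from everything seen so far (a candidate moving object). The feature choice and threshold are illustrative assumptions.

    import numpy as np

    class RecursiveDensityEstimator:
        # Recursive density estimation (RDE) over a stream of feature vectors.
        # Only a running mean and a running mean of squared norms are kept, so the
        # per-sample cost stays constant no matter how many frames have been seen.
        def __init__(self, dim):
            self.k = 0                      # number of samples seen so far
            self.mean = np.zeros(dim)       # recursive mean of the samples
            self.msq = 0.0                  # recursive mean of squared norms

        def update(self, x):
            x = np.asarray(x, dtype=float)
            self.k += 1
            w = 1.0 / self.k
            self.mean = (1.0 - w) * self.mean + w * x
            self.msq = (1.0 - w) * self.msq + w * float(x @ x)
            # Density lies in (0, 1]; it equals 1 when x coincides with the mean.
            return 1.0 / (1.0 + float(x @ x) - 2.0 * float(x @ self.mean) + self.msq)

    # Illustrative use: per-pixel frame-difference magnitude as the feature; a
    # sample whose density falls below an assumed threshold is treated as a
    # candidate moving-object pixel and handed to the tracker.
    rde = RecursiveDensityEstimator(dim=1)
    for diff in [0.02, 0.01, 0.03, 0.9]:    # synthetic frame-difference values
        print(diff, rde.update(np.array([diff])))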

    A Neural Network Method for Classification of Sunlit and Shaded Components of Wheat Canopies in the Field Using High-Resolution Hyperspectral Imagery

    (1) Background: Information-rich hyperspectral sensing, together with robust image analysis, is providing new research pathways in plant phenotyping. This combination facilitates the acquisition of spectral signatures of individual plant organs as well as providing detailed information about the physiological status of plants. Despite the advances in hyperspectral technology in field-based plant phenotyping, little is known about the characteristic spectral signatures of shaded and sunlit components in wheat canopies. Non-imaging hyperspectral sensors cannot provide spatial information; thus, they are not able to distinguish the spectral reflectance differences between canopy components. On the other hand, the rapid development of high-resolution imaging spectroscopy sensors opens new opportunities to investigate the reflectance spectra of individual plant organs, which leads to an understanding of canopy biophysical and chemical characteristics. (2) Method: This study reports the development of a computer vision pipeline to analyze ground-acquired imaging spectrometry with high spatial and spectral resolutions for plant phenotyping. The work focuses on the critical steps in the image analysis pipeline, from pre-processing to the classification of hyperspectral images. In this paper, two convolutional neural networks (CNNs) are employed to automatically map wheat canopy components in shaded and sunlit regions and to determine their specific spectral signatures. The first method uses pixel vectors of the full spectral features as inputs to the CNN model, and the second method integrates the dimension reduction technique known as linear discriminant analysis (LDA) with the CNN to increase feature discrimination and improve computational efficiency. (3) Results: The proposed technique alleviates the limitations and lack of separability inherent in existing pre-defined hyperspectral classification methods. It optimizes the use of hyperspectral imaging and ensures that the data provide information about the spectral characteristics of the targeted plant organs, rather than the background. We demonstrated that high-resolution hyperspectral imagery along with the proposed CNN model can be powerful tools for characterizing sunlit and shaded components of wheat canopies in the field. The presented method will provide significant advances in the determination and relevance of spectral properties of shaded and sunlit canopy components under natural light conditions.
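
    As a rough illustration of the second method (LDA dimension reduction followed by a CNN on per-pixel spectra), the sketch below combines scikit-learn's LinearDiscriminantAnalysis with a small 1D convolutional network in PyTorch. The band count, class count, network shape and synthetic data are assumptions for illustration, not the architecture or data used in the study.

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Synthetic stand-in data: N labelled pixel spectra with B bands and C classes
    # (e.g. sunlit/shaded leaf, sunlit/shaded ear); real inputs would come from the
    # pre-processed hyperspectral cube.
    N, B, C = 2000, 240, 4
    rng = np.random.default_rng(0)
    X = rng.random((N, B)).astype(np.float32)
    y = rng.integers(0, C, size=N)

    # LDA projects each spectrum onto at most C - 1 discriminant axes before the CNN.
    lda = LinearDiscriminantAnalysis(n_components=C - 1)
    X_lda = lda.fit_transform(X, y).astype(np.float32)

    class PixelCNN(nn.Module):
        # Small 1D CNN applied along the (reduced) spectral axis of each pixel.
        def __init__(self, n_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(16, n_classes),
            )

        def forward(self, x):                # x: (batch, n_features)
            return self.net(x.unsqueeze(1))  # add a channel dimension

    model = PixelCNN(C)
    logits = model(torch.from_numpy(X_lda))  # (N, C) class scores per pixel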

    A nested hierarchy of dynamically evolving clouds for big data structuring and searching

    The need to analyse big data streams and prescribe actions pro-actively is pervasive in nearly every industry. As the growth of unstructured data increases, using analytical systems to assimilate and interpret images and videos, as well as structured data, is essential. In this paper, we propose a novel approach to transform image datasets into higher-level constructs that can be analysed in a computationally efficient, reliable and extremely fast manner. The proposed approach provides high visual quality results by matching a query image against data clouds with a hierarchical, dynamically nested, evolving structure. The results illustrate that the introduced approach can be an effective yet computationally efficient way to analyse and manipulate stored images, which has become the centre of attention of many professional fields and institutional sectors over the last few years.
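
    The abstract does not spell out the rules by which the clouds are formed and nested, so the sketch below is only a generic evolving-clustering illustration under assumed rules: an image feature vector joins the nearest cloud if it lies within an assumed radius, otherwise it seeds a new cloud, and each top-level cloud holds its own nested layer built the same way.

    import numpy as np

    class EvolvingCloud:
        # A data cloud summarised by its recursively updated mean; each cloud may
        # hold a nested layer of child clouds, giving the two-layer hierarchy.
        def __init__(self, x):
            self.mean = np.asarray(x, dtype=float).copy()
            self.n = 1
            self.children = []

        def absorb(self, x):
            self.n += 1
            self.mean += (np.asarray(x, dtype=float) - self.mean) / self.n

    def add_sample(clouds, x, radius):
        # Assign x to the nearest existing cloud if it lies within `radius`,
        # otherwise start a new cloud (the structure evolves with the data).
        if clouds:
            dists = [np.linalg.norm(c.mean - x) for c in clouds]
            i = int(np.argmin(dists))
            if dists[i] <= radius:
                clouds[i].absorb(x)
                return clouds[i]
        clouds.append(EvolvingCloud(x))
        return clouds[-1]

    # Build a two-layer structure: coarse clouds at the top, finer clouds inside.
    rng = np.random.default_rng(1)
    top_layer = []
    for x in rng.random((500, 8)):       # synthetic 8-dimensional image features
        coarse = add_sample(top_layer, x, radius=0.8)
        add_sample(coarse.children, x, radius=0.4)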

    Look-a-like: a fast content-based image retrieval approach using hierarchically nested, dynamically evolving image clouds and recursive local data density

    The need to find related images from big data streams is shared by many professionals, such as architects, engineers, designers and journalists, as well as ordinary people. Users need to quickly find relevant images from data streams generated in a variety of domains. The challenges in image retrieval are widely recognised, and the research aiming to address them has made content-based image retrieval (CBIR) a 'hot' research area. In this paper, we propose a novel, computationally efficient approach which provides high visual quality results based on the use of local recursive density estimation (RDE) between a given query image of interest and data clouds/clusters which have a hierarchical, dynamically nested, evolving structure. The proposed approach makes use of a combination of multiple features. The results on a data set of 65,000 images organised in two layers of a hierarchy demonstrate its computational efficiency. Moreover, the proposed Look-a-like approach is self-evolving, updating itself by adding new images obtained through crawling and from the queries made.
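
    As a sketch of how a query can be ranked against pre-built data clouds using a local recursive density estimate, the snippet below scores each cloud from two stored summaries (its mean vector and mean squared feature norm), so no stored image needs to be revisited at query time. The cloud summaries and features are illustrative assumptions; the two-layer traversal and multi-feature combination described above are not reproduced here.

    import numpy as np

    def local_density(query, cloud_mean, cloud_msq):
        # Local RDE-style density of `query` with respect to one cloud, computed
        # from the cloud's stored mean vector and mean squared norm only.
        q = np.asarray(query, dtype=float)
        return 1.0 / (1.0 + float(q @ q) - 2.0 * float(q @ cloud_mean) + cloud_msq)

    def rank_clouds(query, clouds):
        # Higher density = the query sits deeper inside that cloud; the best
        # clouds are searched first, then compared against the images they hold.
        scores = [(local_density(query, c["mean"], c["msq"]), name)
                  for name, c in clouds.items()]
        return sorted(scores, reverse=True)

    # Toy example with two pre-computed cloud summaries (values are illustrative).
    clouds = {
        "sunsets": {"mean": np.array([0.9, 0.1, 0.1]), "msq": 0.85},
        "forests": {"mean": np.array([0.1, 0.8, 0.2]), "msq": 0.72},
    }
    print(rank_clouds(np.array([0.85, 0.15, 0.1]), clouds))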

    DeepCount: In-Field Automatic Quantification of Wheat Spikes Using Simple Linear Iterative Clustering and Deep Convolutional Neural Networks

    Crop yield is an essential measure for breeders, researchers and farmers and may be calculated from the number of ears/m2, grains per ear and thousand-grain weight. Manual wheat ear counting, required in breeding programmes to evaluate crop yield potential, is labour intensive and expensive; thus, the development of a real-time wheat head counting system would be a significant advancement. In this paper, we propose a computationally efficient system called DeepCount to automatically identify and count the number of wheat spikes in digital images taken under natural field conditions. The proposed method tackles wheat spike quantification by segmenting an image into superpixels using Simple Linear Iterative Clustering (SLIC), deriving canopy-relevant features, and then constructing a rational feature model fed into a deep convolutional neural network (CNN) for semantic segmentation of wheat spikes. As the method is based on a deep learning model, it replaces the hand-engineered features required for traditional machine learning methods with more efficient algorithms. The method is tested on digital images taken directly in the field at different stages of ear emergence/maturity (using visually different wheat varieties), with different canopy complexities (achieved through varying nitrogen inputs), and at different heights above the canopy under varying environmental conditions. In addition, the proposed technique is compared with a wheat ear counting method based on a previously developed edge detection technique and morphological analysis. The proposed approach is validated with image-based ear counting and ground-based measurements. The results demonstrate that the DeepCount technique has a high level of robustness regardless of variables such as growth stage and weather conditions, hence demonstrating the feasibility of the approach in real scenarios. The system is a leap towards portable, smartphone-assisted wheat ear counting systems, reduces the labour involved and is suitable for high-throughput analysis. It may also be adapted to work on RGB images acquired from UAVs.
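
    A condensed sketch of the front half of such a pipeline is shown below, using scikit-image's SLIC implementation to produce superpixels and a placeholder rule in place of the trained CNN (the rational feature model and the network itself are not reproduced here). Superpixels classified as spike are merged into connected regions, whose count stands in for the ear count.

    import numpy as np
    from skimage import data, measure
    from skimage.segmentation import slic

    image = data.astronaut()                  # placeholder for an RGB canopy image
    segments = slic(image, n_segments=400, compactness=10, start_label=1)

    def classify_superpixel(mean_rgb):
        # Placeholder for the trained CNN: here "spike" simply means a bright
        # superpixel; the real model classifies patches around each superpixel.
        return mean_rgb.mean() > 150

    # Build a binary mask from superpixels classified as wheat spikes.
    spike_mask = np.zeros(segments.shape, dtype=bool)
    for label in np.unique(segments):
        region = segments == label
        if classify_superpixel(image[region].mean(axis=0)):
            spike_mask |= region

    # Connected positive regions approximate individual ears.
    n_ears = measure.label(spike_mask).max()
    print("estimated ear count:", n_ears)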

    Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping

    Background: Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when they are applied to images acquired in dynamic field environments. Results: In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with state-of-the-art and other learning methods on digital images. All methods are compared and evaluated under different environmental conditions using the following criteria: (1) comparison with ground-truth images, (2) variation over the course of a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance along the full life cycle of a wheat canopy. Conclusion: The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need to adjust a threshold for each digital image. The proposed method is also an ideal candidate for processing a time series of phenotypic information acquired in the field throughout crop growth. Moreover, the introduced method has the advantage that it is not limited to growth measurements but can also be applied to other tasks, such as identifying weeds, diseases and stress.
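
    The abstract does not list the exact features or learner, so the sketch below is a generic multi-feature pixel classifier in the same spirit: several colour-derived features per pixel (normalised chromatic coordinates plus the excess-green index) feed a scikit-learn random forest trained on labelled pixels, and the trained model then segments new images without a per-image threshold. All data and settings are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(rgb_image):
        # Stack several colour-derived features per pixel: normalised r, g, b
        # chromatic coordinates and the excess-green index ExG = 2g - r - b.
        img = rgb_image.astype(float)
        total = img.sum(axis=2, keepdims=True) + 1e-6
        r, g, b = np.moveaxis(img / total, 2, 0)
        exg = 2 * g - r - b
        return np.stack([r, g, b, exg], axis=2).reshape(-1, 4)

    # Toy training data: a small image with per-pixel labels, vegetation (1) vs
    # background (0); a real workflow would use hand-labelled field image patches.
    rng = np.random.default_rng(2)
    train_img = (rng.random((32, 32, 3)) * 255).astype(np.uint8)
    train_labels = rng.integers(0, 2, size=32 * 32)

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(pixel_features(train_img), train_labels)

    # Segment a new image: per-pixel prediction, no threshold tuned per image.
    new_img = (rng.random((32, 32, 3)) * 255).astype(np.uint8)
    veg_mask = clf.predict(pixel_features(new_img)).reshape(32, 32)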

    Automated method to determine two critical growth stages of wheat: heading and flowering

    Recording growth stage information is an important aspect of precision agriculture, crop breeding and phenotyping. In practice, crop growth stage is still primarily monitored by eye, which is not only laborious and time-consuming, but also subjective and error-prone. The application of computer vision to digital images offers a high-throughput and non-invasive alternative to manual observations, and its use in agriculture and high-throughput phenotyping is increasing. This paper presents an automated computer-vision method to detect wheat heading and flowering stages in digital images. The bag-of-visual-words technique is used to identify the growth stage during heading and flowering within digital images. The scale-invariant feature transform (SIFT) is used for low-level feature extraction; subsequently, locality-constrained linear coding and spatial pyramid matching are applied in the mid-level representation stage. Finally, support vector machine classification is used to train and test the data samples. The method outperformed existing algorithms, yielding accuracies of 95.24%, 97.79% and 99.59% at early, medium and late stages of heading, respectively, and 85.45% for flowering detection. The results also illustrate that the proposed method is robust enough to handle complex environmental changes (illumination, occlusion). Although the proposed method is applied only to identifying growth stages in wheat, there is potential for application to other crops and categorization concepts, such as disease classification.
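
    A compressed sketch of a bag-of-visual-words pipeline of this kind is given below, using OpenCV SIFT descriptors, a k-means codebook and a linear SVM. For brevity it uses plain hard-assignment histograms instead of the locality-constrained linear coding and spatial pyramid matching described above, and the images, labels and codebook size are synthetic assumptions.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def synthetic_plot(seed):
        # Stand-in for a field photograph: random bright blobs on a dark
        # background, so that SIFT has structure to respond to.
        rng = np.random.default_rng(seed)
        img = np.zeros((128, 128), np.uint8)
        for _ in range(25):
            x, y, r = rng.integers(10, 118), rng.integers(10, 118), rng.integers(3, 8)
            cv2.circle(img, (int(x), int(y)), int(r), 255, -1)
        return img

    def sift_descriptors(gray):
        _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
        return desc if desc is not None else np.zeros((1, 128), np.float32)

    images = [synthetic_plot(s) for s in range(6)]
    labels = [1, 0, 1, 0, 1, 0]          # hypothetical: heading present / absent

    # 1. low-level SIFT features  2. k-means codebook  3. per-image histogram  4. SVM
    all_desc = [sift_descriptors(im) for im in images]
    stacked = np.vstack(all_desc)
    codebook = KMeans(n_clusters=min(32, len(stacked)), random_state=0).fit(stacked)

    def bovw_histogram(desc):
        # Hard assignment of each descriptor to its nearest visual word (the paper
        # instead uses locality-constrained linear coding plus spatial pyramids).
        words = codebook.predict(desc)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    X = np.array([bovw_histogram(d) for d in all_desc])
    clf = LinearSVC().fit(X, labels)
    print(clf.predict([bovw_histogram(sift_descriptors(synthetic_plot(99)))]))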

    Field Scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring

    Current approaches to field phenotyping are laborious or permit the use of only a few sensors at a time. In an effort to overcome this, a fully automated robotic field phenotyping platform has been established, with a dedicated sensor array mounted on fixed rails that may be accurately positioned in three dimensions, to facilitate continual and high-throughput monitoring of crop performance. The employed sensors comprise high-resolution visible, chlorophyll fluorescence and thermal infrared cameras, two hyperspectral imagers and dual 3D laser scanners. The sensor array facilitates specific growth measurements and identification of key growth stages with dense temporal and spectral resolution. Together, these produce a detailed description of canopy development across the crop's entire lifecycle, with a high degree of accuracy and reproducibility.

    Time-intensive geoelectrical monitoring under winter wheat

    Several studies have explored the potential of electrical resistivity tomography to monitor changes in soil moisture associated with the root water uptake of different crops. Such studies usually rely on a limited set of below-ground measurements throughout the growing season and are often unable to capture a complete picture of the dynamics of the processes. With the development of high-throughput phenotyping platforms, we now have the capability to collect more frequent above-ground measurements, such as canopy cover, enabling comparison with below-ground data. In this study, hourly DC resistivity data were collected under the Field Scanalyzer platform at Rothamsted Research with different winter wheat varieties and nitrogen treatments in 2018 and 2019. Results from both years demonstrate the importance of applying a temperature correction when interpreting hourly electrical conductivity (EC) data. Crops which received larger amounts of nitrogen showed larger canopy cover and more rapid changes in EC, especially during large rainfall events. The varieties showed contrasting heights, although this does not appear to have influenced EC dynamics. The daily cyclic component of the EC signal was extracted by decomposing the time series, and a shift in this daily component was observed during the growth season. For crops with appreciable differences in canopy cover, high-frequency DC resistivity monitoring was able to distinguish the different below-ground behaviors. The results also highlight how coarse temporal sampling may affect the interpretation of resistivity data from crop monitoring studies.
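
    Two of the processing steps mentioned above lend themselves to a short illustration: correcting hourly EC values to a reference temperature and extracting the daily cyclic component of the series. The sketch below uses the common ratio-model correction to 25 degrees C with a 2% per degree coefficient and statsmodels' seasonal decomposition with a 24-hour period; the coefficient, the decomposition settings and the synthetic series are assumptions, not values taken from the study.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    def ec_at_25(ec, temp_c, coeff=0.02):
        # Ratio-model temperature correction: EC_25 = EC_T / (1 + coeff * (T - 25)).
        # A coefficient of 0.02 per degree C is a commonly used value, assumed here.
        return ec / (1.0 + coeff * (temp_c - 25.0))

    # Synthetic hourly series standing in for field data: a daily cycle plus noise.
    idx = pd.date_range("2019-05-01", periods=24 * 14, freq="h")
    hours = np.arange(len(idx))
    temp = 15 + 8 * np.sin(2 * np.pi * hours / 24)        # soil temperature, deg C
    noise = np.random.default_rng(4).normal(0, 0.3, len(idx))
    ec_raw = 20 + 2 * np.sin(2 * np.pi * hours / 24) + noise

    ec_corrected = pd.Series(ec_at_25(ec_raw, temp), index=idx)

    # Decompose the corrected series to isolate its daily cyclic component.
    result = seasonal_decompose(ec_corrected, model="additive", period=24)
    daily_cycle = result.seasonal        # repeating 24-hour pattern
    trend = result.trend                 # slower drift over the growing season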

    Functional QTL mapping and genomic prediction of canopy height in wheat measured using a robotic field phenotyping platform

    Genetic studies increasingly rely on high-throughput phenotyping, but the resulting longitudinal data pose analytical challenges. We used canopy height data from an automated field phenotyping platform to compare several approaches to scanning for quantitative trait loci (QTLs) and performing genomic prediction in a wheat recombinant inbred line mapping population based on up to 26 sampled time points (TPs). We detected four persistent QTLs (i.e. expressed for most of the growing season), with both empirical and simulation analyses demonstrating the superior statistical power of detecting such QTLs through functional mapping approaches compared with conventional individual TP analyses. In contrast, even very simple individual TP approaches (e.g. interval mapping) had superior detection power for transient QTLs (i.e. expressed during very short periods). Using spline-smoothed phenotypic data resulted in improved genomic predictive abilities (5–8% higher than individual TP prediction), while the effect of including significant QTLs in prediction models was relatively minor (<1–4% improvement). Finally, although QTL detection power and predictive ability generally increased with the number of TPs analysed, gains beyond five or 10 TPs chosen based on phenological information had little practical significance. These results will inform the development of an integrated, semi-automated analytical pipeline, which will be more broadly applicable to similar data sets in wheat and other crops.
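
    As a toy illustration of the smoothing step and of the single-time-point scan that functional approaches improve upon, the sketch below fits a smoothing spline to each line's canopy-height series and then regresses the smoothed trait at one time point on a biallelic marker. It is a simplified stand-in (plain marker regression rather than interval mapping or genomic prediction) and all data are synthetic.

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.stats import linregress

    rng = np.random.default_rng(5)
    n_lines, n_tp = 100, 26
    days = np.linspace(0, 150, n_tp)      # days after sowing at each time point

    # Synthetic 0/1 marker genotypes for a RIL population and height trajectories:
    # a logistic growth curve whose asymptote depends slightly on the marker.
    geno = rng.integers(0, 2, size=n_lines)
    height = np.array([
        (70 + 5 * g) / (1 + np.exp(-(days - 80) / 12)) + rng.normal(0, 3, n_tp)
        for g in geno
    ])

    # Spline-smooth each line's trajectory before analysis.
    smooth = np.array([
        UnivariateSpline(days, h, s=n_tp * 9.0)(days) for h in height
    ])

    # Single-TP scan at one time point: regress smoothed height on the marker.
    tp = 20
    res = linregress(geno.astype(float), smooth[:, tp])
    print(f"marker effect at TP {tp}: {res.slope:.2f} cm, p = {res.pvalue:.3g}")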