84,193 research outputs found

    DeepWheat: Estimating Phenotypic Traits from Crop Images with Deep Learning

    Full text link
    In this paper, we investigate estimating emergence and biomass traits from color images and elevation maps of wheat field plots. We employ a state-of-the-art deconvolutional network for segmentation and convolutional architectures, with residual and Inception-like layers, to estimate traits via high-dimensional nonlinear regression. Evaluation was performed on two different species of wheat, grown in field plots for an experimental plant breeding study. Our framework achieves satisfactory performance, with a mean and standard deviation of absolute difference of 1.05 and 1.40 counts for emergence and 1.45 and 2.05 for biomass estimation. Our results for counting wheat plants from field images are better than the accuracy reported for the similar, but arguably less difficult, task of counting leaves from indoor images of rosette plants. Our results for biomass estimation, even with a very small dataset, improve upon all previously proposed approaches in the literature. Comment: WACV 2018 (Code repository: https://github.com/p2irc/deepwheat_WACV-2018)

    Unsupervised domain adaptation and super resolution on drone images for autonomous dry herbage biomass estimation

    Get PDF
    Herbage mass yield and composition estimation is an important tool for dairy farmers to ensure an adequate supply of high-quality herbage for grazing and, subsequently, milk production. By accurately estimating herbage mass and composition, targeted nitrogen fertiliser application strategies can be deployed to improve localised regions of a herbage field, effectively reducing the negative impacts of over-fertilization on biodiversity and the environment. In this context, deep learning algorithms offer a tempting alternative to the usual means of sward composition estimation, which involves the destructive process of cutting a sample from the herbage field and sorting all plant species in the herbage by hand. The process is labour-intensive and time-consuming and so is not utilised by farmers. Deep learning has been successfully applied in this context on images collected by high-resolution cameras on the ground. Moving the deep learning solution to drone imaging, however, has the potential to further improve the herbage mass yield and composition estimation task by extending ground-level estimation to the large surfaces occupied by fields/paddocks. Drone images come at the cost of lower-resolution views of the fields taken from a high altitude, and require further herbage ground-truth collection from the large surfaces covered by drone images. This paper proposes to transfer knowledge learned on ground-level images to raw drone images in an unsupervised manner. To do so, we use unpaired image style translation to enhance the resolution of drone images by a factor of eight and modify them to appear closer to their ground-level counterparts. We then ... (www.github.com/PaulAlbert31/Clover_SSL). Comment: 11 pages, 5 figures. Accepted at the Agriculture-Vision CVPR 2022 Workshop

    Structured Light-Based 3D Reconstruction System for Plants.

    Get PDF
    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
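Once the stereo pairs are registered into a single point cloud, measurements such as plant height and internode distance reduce to simple geometry. A minimal sketch of that last step, assuming a cloud in metres with z vertical; the paper's own feature-measurement pipeline is more involved:

```python
import numpy as np

def plant_height(points: np.ndarray) -> float:
    """Plant height from a registered point cloud (N x 3, metres):
    the extent of the cloud along the vertical (z) axis."""
    z = points[:, 2]
    return float(z.max() - z.min())

def internode_distances(node_z: np.ndarray) -> np.ndarray:
    """Distances between consecutive stem nodes, given the nodes'
    heights along the stem (node detection is assumed already done)."""
    return np.diff(np.sort(node_z))
```

Real clouds would first need ground-plane removal and outlier filtering before the vertical extent is a meaningful height.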

    Vision-based weed identification with farm robots

    Get PDF
    Robots in agriculture offer new opportunities for real-time weed identification and quick removal operations. Weed identification and control remains one of the most challenging tasks in agriculture, particularly in organic agriculture practices. Considering environmental impacts and food quality, the excessive use of chemicals in agriculture for controlling weeds and diseases is decreasing. The cost of herbicides and their field application must be optimized. As an alternative, a smart weed identification technique followed by mechanical and thermal weed control can fulfil organic farmers’ expectations. The smart identification technique works on the concept of ‘shape matching’ and ‘active shape modeling’ of plant and weed leaves. The automated weed detection and control system consists of three major components: i) an eXcite multispectral camera, ii) the LTI image processing library, and iii) the Hortibot robotic vehicle. The components are combined in a Linux environment on the PC associated with the eXcite camera. Laboratory experiments on active shape matching have shown promising results, which will be further developed into an automated weed detection system. The Hortibot robot will carry the camera unit at the front end and the mechanical weed remover at the rear end. The system will be upgraded for intensive commercial applications in maize and other row crops.

    A Measurement System for On-line Estimation of Weed Coverage

    Get PDF
    This paper describes two different solutions for the estimation of weed coverage. Both measuring systems discriminate weed from ground by means of the color difference between them, and can be used for on-line control of tractor sprayers in order to reduce weedkiller use. The solutions differ with respect to the sensor type: one is based on a digital camera and a computer that analyzes the images and determines the weed amount, while the other, simpler solution makes use of two photodetectors and an analog processing system. The camera-based solution provides an uncertainty of a few percent, while the photodetector-based one, though extremely cheap, has an uncertainty of about 5% and suffers from changes in light conditions, which can alter the estimation.
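The abstract does not specify how the color difference is computed; a common choice for separating green vegetation from soil is the excess-green index, sketched below. The 0.1 threshold is an assumption, not a value from the paper:

```python
import numpy as np

def weed_coverage(rgb: np.ndarray, thresh: float = 0.1) -> float:
    """Estimate the fraction of an image covered by vegetation.

    rgb: H x W x 3 array of floats in [0, 1].
    Computes the excess-green index ExG = 2g - r - b on
    chromaticity-normalised channels, then thresholds it.
    """
    total = rgb.sum(axis=2) + 1e-8           # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b                    # high for green vegetation
    return float((exg > thresh).mean())      # covered-pixel fraction
```

Mapped to a sprayer on/off decision per image region, a coverage fraction like this is what would drive the on-line control loop.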

    Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection

    Full text link
    Selective weeding is one of the key challenges in the field of agricultural robotics. To accomplish this task, a farm robot should be able to accurately detect plants and to distinguish between crops and weeds. Most of the promising state-of-the-art approaches make use of appearance-based models trained on large annotated datasets. Unfortunately, creating large agricultural datasets with pixel-level annotations is an extremely time-consuming task, which penalizes the usage of data-driven techniques. In this paper, we address this problem by proposing a novel and effective approach that aims to dramatically minimize the human intervention needed to train the detection and classification algorithms. The idea is to procedurally generate large synthetic training datasets by randomizing the key features of the target environment (i.e., crop and weed species, type of soil, light conditions). More specifically, by tuning these model parameters, and exploiting a few real-world textures, it is possible to render a large amount of realistic views of an artificial agricultural scenario with no effort. The generated data can be directly used to train the model or to supplement real-world images. We validate the proposed methodology by using as a testbed a modern deep-learning-based image segmentation architecture. We compare the classification results obtained using both real and synthetic images as training data. The reported results confirm the effectiveness and the potential of our approach. Comment: To appear in IEEE/RSJ IROS 2017
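The randomization step the abstract describes amounts to sampling scene parameters before each render. A minimal sketch of such a sampler; the species names, soil textures, and ranges below are placeholders, not values from the paper:

```python
import random

# Hypothetical parameter lists; the paper's actual species, soil
# textures and ranges are not given in the abstract.
CROP_SPECIES = ["sugar_beet", "sunflower"]
WEED_SPECIES = ["chamomile", "knotweed"]
SOIL_TEXTURES = ["dry_clay", "wet_loam", "sandy"]

def sample_scene(rng: random.Random) -> dict:
    """Draw one randomized scene description: the parameters a
    renderer would need to produce a labelled synthetic image."""
    return {
        "crop": rng.choice(CROP_SPECIES),
        "soil": rng.choice(SOIL_TEXTURES),
        "sun_elevation_deg": rng.uniform(15.0, 90.0),
        "weeds": [rng.choice(WEED_SPECIES)
                  for _ in range(rng.randint(0, 30))],
    }

# Fixed seeds make the generated dataset reproducible.
scenes = [sample_scene(random.Random(seed)) for seed in range(1000)]
```

Because every sampled scene is rendered from a known description, pixel-level labels come for free, which is the point of the approach.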

    DIRECT ESTIMATION OF ABOVEGROUND FOREST PRODUCTIVITY THROUGH HYPERSPECTRAL REMOTE SENSING OF CANOPY NITROGEN

    Get PDF
    The concentration of nitrogen in foliage has been related to rates of net photosynthesis across a wide range of plant species and functional groups and thus represents a simple and biologically meaningful link between terrestrial cycles of carbon and nitrogen. Although foliar N is used by ecosystem models to predict rates of leaf‐level photosynthesis, it has rarely been examined as a direct scalar to stand‐level carbon gain. Establishment of such relationships would greatly simplify the nature of forest C and N linkages, enhancing our ability to derive estimates of forest productivity at landscape to regional scales. Here, we report on a highly predictive relationship between whole‐canopy nitrogen concentration and aboveground forest productivity in diverse forested stands of varying age and species composition across the 360 000‐ha White Mountain National Forest, New Hampshire, USA. We also demonstrate that hyperspectral remote sensing can be used to estimate foliar N concentration, and hence forest production, across a large number of contiguous images. Together these data suggest that canopy‐level N concentration is an important correlate of productivity in these forested systems, and that imaging spectrometry of canopy N can provide direct estimates of forest productivity across large landscapes.
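The "direct scalar" relationship the abstract describes is, at its simplest, a single linear regression of stand productivity on whole-canopy N. A toy illustration with made-up numbers; no values from the study are reproduced here:

```python
import numpy as np

# Synthetic (canopy %N, productivity) pairs used purely for
# illustration; the White Mountain plot data are not in the abstract.
canopy_n = np.array([0.9, 1.1, 1.4, 1.6, 1.9, 2.2])   # whole-canopy N (%)
anpp = np.array([2.1, 2.6, 3.4, 3.9, 4.8, 5.5])       # aboveground production

# "Direct scalar": one linear map from canopy N to productivity.
slope, intercept = np.polyfit(canopy_n, anpp, 1)
predicted = slope * canopy_n + intercept
ss_res = ((anpp - predicted) ** 2).sum()
ss_tot = ((anpp - anpp.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                             # goodness of fit
```

With canopy N itself estimated from hyperspectral imagery, the same linear map would turn each image pixel into a productivity estimate.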