
    Global Wheat Head Detection 2021: an improved dataset for benchmarking wheat head detection methods

    The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4700 RGB images acquired from various acquisition platforms across 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, several avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabelled, and complemented with 1722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than the GWHD_2020 version.
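    Detection datasets of this kind pair each RGB image with a set of wheat-head bounding boxes. A minimal sketch of parsing such annotations into a per-image structure, assuming a hypothetical CSV schema (`image_id, x_min, y_min, width, height`) for illustration, not the official GWHD format:

    ```python
    import csv
    from collections import defaultdict

    def load_annotations(csv_path):
        """Parse a GWHD-style CSV of bounding boxes into a per-image dict.

        Assumes rows of the form: image_id, x_min, y_min, width, height
        (hypothetical schema, for illustration only).
        """
        boxes = defaultdict(list)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                boxes[row["image_id"]].append(
                    (float(row["x_min"]), float(row["y_min"]),
                     float(row["width"]), float(row["height"]))
                )
        return boxes
    ```

    Grouping boxes by image up front makes per-image head counts (a common benchmark statistic) a simple `len()` over each entry.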

    Analyzing Changes in Maize Leaves Orientation due to GxExM Using an Automatic Method from RGB Images

    The sowing pattern has an important impact on light interception efficiency in maize by determining the spatial distribution of leaves within the canopy. Leaf orientation is an important architectural trait determining light interception by maize canopies. Previous studies have indicated how maize genotypes may adapt leaf orientation to avoid mutual shading with neighboring plants as a plastic response to intraspecific competition. The goal of the present study is 2-fold: first, to propose and validate an automatic algorithm (Automatic Leaf Azimuth Estimation from Midrib detection [ALAEM]) based on leaf midrib detection in vertical red-green-blue (RGB) images to describe leaf orientation at the canopy level; and second, to describe genotypic and environmental differences in leaf orientation in a panel of 5 maize hybrids sown at 2 densities (6 and 12 plants.m−2) and 2 row spacings (0.4 and 0.8 m) at 2 sites in southern France. The ALAEM algorithm was validated against in situ annotations of leaf orientation, showing satisfactory agreement (root mean square error [RMSE] = 0.1, R2 = 0.35) in the proportion of leaves oriented perpendicular to the row direction across sowing patterns, genotypes, and sites. The results from ALAEM made it possible to identify significant differences in leaf orientation associated with intraspecific competition. In both experiments, a progressive increase in the proportion of leaves oriented perpendicular to the row was observed as the rectangularity of the sowing pattern increased from 1 (6 plants.m−2, 0.4 m row spacing) to 8 (12 plants.m−2, 0.8 m row spacing). Significant differences among the 5 cultivars were found, with 2 hybrids systematically exhibiting a more plastic behavior: a significantly higher proportion of leaves oriented perpendicular to the row to avoid overlapping with neighboring plants at high rectangularity. Differences in leaf orientation were also found between experiments under a square sowing pattern (6 plants.m−2, 0.4 m row spacing), indicating a possible contribution of illumination conditions inducing a preferential east-west orientation when intraspecific competition is low.
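    The key summary statistic here, the proportion of leaves oriented perpendicular to the row, can be sketched from estimated midrib azimuths. A midrib is an undirected line, so azimuths are folded into [0, 90] degrees relative to the row direction; the folding and the tolerance threshold below are illustrative assumptions, not the ALAEM definition:

    ```python
    def perpendicular_fraction(leaf_azimuths_deg, row_azimuth_deg, tol_deg=30.0):
        """Fraction of leaves oriented roughly perpendicular to the row.

        Each azimuth is folded into the undirected angle [0, 90] degrees
        relative to the row; a leaf counts as "perpendicular" when that
        angle is within `tol_deg` of 90. Both the folding convention and
        the 30-degree tolerance are assumptions made for illustration.
        """
        if not leaf_azimuths_deg:
            return 0.0
        count = 0
        for az in leaf_azimuths_deg:
            delta = abs(az - row_azimuth_deg) % 180.0
            folded = min(delta, 180.0 - delta)  # undirected angle in [0, 90]
            if folded >= 90.0 - tol_deg:
                count += 1
        return count / len(leaf_azimuths_deg)
    ```

    With this convention, a canopy whose leaves all lie along the row gives 0.0 and one whose leaves all cross the row gives 1.0, so the statistic tracks the plastic reorientation described above.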

    VegAnn, Vegetation Annotation of multi-crop RGB images acquired under diverse conditions for segmentation

    Applying deep learning to images of cropping systems provides new knowledge and insights in research and commercial applications. Semantic segmentation, or pixel-wise classification, of RGB images acquired at ground level into vegetation and background is a critical step in the estimation of several canopy traits. Current state-of-the-art methodologies based on convolutional neural networks (CNNs) are trained on datasets acquired under controlled or indoor environments. These models are unable to generalize to real-world images and hence need to be fine-tuned on new labelled datasets. This motivated the creation of the VegAnn (Vegetation Annotation) dataset, a collection of 3775 multi-crop RGB images acquired at different phenological stages using different systems and platforms under diverse illumination conditions. We anticipate that VegAnn will help improve segmentation algorithm performance, facilitate benchmarking, and promote large-scale crop vegetation segmentation research.
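    Before CNNs, vegetation/background segmentation was commonly done with color-index thresholding; a classical baseline of this kind, the Excess Green index (ExG = 2g − r − b on chromatic coordinates), is sketched below. The 0.1 threshold is an illustrative choice, not a value tuned on VegAnn:

    ```python
    import numpy as np

    def excess_green_mask(rgb, threshold=0.1):
        """Binary vegetation mask from the Excess Green index.

        `rgb` is an HxWx3 float array in [0, 1]. Pixels are first converted
        to chromatic coordinates (each channel divided by the channel sum),
        then ExG = 2g - r - b is thresholded. The 0.1 threshold is an
        illustrative assumption.
        """
        total = rgb.sum(axis=2)
        total[total == 0] = 1e-6            # avoid division by zero on black pixels
        chrom = rgb / total[..., None]      # chromatic coordinates r, g, b
        exg = 2 * chrom[..., 1] - chrom[..., 0] - chrom[..., 2]
        return exg > threshold
    ```

    Such index-based baselines are exactly what learned segmenters trained on diverse datasets like VegAnn aim to outperform under field illumination, soil, and residue variability.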