    Generative-Model-Based Data Labeling for Deep Network Regression: Application to Seed Maturity Estimation from UAV Multispectral Images

    Field seed maturity monitoring is essential to optimize the farming process and guarantee yield quality through high germination rates. Remote sensing of parsley fields through UAV multispectral imagery allows uniform scanning and better capture of crop information than traditional field sampling analyzed in the laboratory, which covers only localized sub-sections of the field and is time-consuming to process. The limited availability of seed maturity labels is a drawback for applying deep learning methods, which have shown tremendous potential in estimating agronomic parameters, maturity in particular, because such methods require large labeled datasets. In this paper, we propose parametric and non-parametric generative-model-based weak labeling approaches to overcome the lack of maturity labels and make maturity estimation by deep network regression possible, assisting growers in harvest decision-making. We present the data acquisition protocol and the performance evaluation of the generative models and neural network architectures. Convolutional and recurrent neural networks were trained on the generated labels and evaluated against ground-truth maturity labels to assess the quality of the maturity quantification. The results show that the semi-supervised approaches improve on the generative models alone, with a root-mean-squared error of 0.0770 for the long short-term memory network trained on kernel-density-estimation-generated labels. Generative-model-based data labeling can unlock new possibilities for remote sensing applications where data collection is complex; in our use case, it yields better-performing models for parsley maturity estimation from UAV multispectral imagery.
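
    To make the weak-labeling idea concrete, here is a minimal sketch, not the paper's implementation: a kernel density estimate is fitted to a vegetation-index distribution, its CDF is used as a pseudo maturity label in [0, 1], and a small LSTM is regressed onto those labels. The index, the CDF-as-maturity mapping, and all network and training settings are illustrative assumptions.

    # Hedged sketch: KDE-generated weak labels + LSTM regression.
    # The synthetic data, CDF-based labeling, and hyperparameters are assumptions.
    import numpy as np
    from scipy.stats import gaussian_kde
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)

    # Synthetic per-plot vegetation-index time series (NDVI-like), T = 5 dates.
    T, n_plots = 5, 200
    index_series = np.sort(rng.uniform(0.2, 0.9, size=(n_plots, T)), axis=1)

    # Non-parametric generative model: KDE over final-date index values.
    kde = gaussian_kde(index_series[:, -1])

    def weak_label(x, grid=np.linspace(0.0, 1.0, 512)):
        """Pseudo maturity in [0, 1]: KDE CDF evaluated at x (an assumption)."""
        pdf = kde(grid)
        cdf = np.cumsum(pdf) / pdf.sum()
        return np.interp(x, grid, cdf)

    y_weak = weak_label(index_series[:, -1])

    # Small LSTM regressor trained on the generated labels.
    class MaturityLSTM(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):  # x: (batch, T, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1]).squeeze(-1)

    model = MaturityLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    X = torch.tensor(index_series, dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(y_weak, dtype=torch.float32)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    print(f"training RMSE on weak labels: {loss.sqrt().item():.4f}")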

    Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images

    In recent years, weeds have been responsible for most agricultural yield losses. To deal with this threat, farmers resort to spraying fields uniformly with herbicides. This method not only requires huge quantities of herbicide but also impacts the environment and human health. One way to reduce the cost and environmental impact is to allocate the right dose of herbicide to the right place at the right time (precision agriculture). Nowadays, unmanned aerial vehicles (UAVs) are becoming an interesting acquisition system for weed localization and management thanks to their ability to image an entire agricultural field at very high spatial resolution and at low cost. However, despite significant advances in UAV acquisition systems, automatic weed detection remains a challenging problem because of the strong similarity between weeds and crops. Recently, deep learning approaches have shown impressive results on complex classification problems; however, they need a certain amount of training data, and creating large agricultural datasets with pixel-level expert annotations is an extremely time-consuming task. In this paper, we propose a novel, fully automatic learning method using convolutional neural networks (CNNs) with unsupervised training-dataset collection for weed detection in UAV images. The proposed method comprises three main phases. First, we automatically detect the crop rows and use them to identify inter-row weeds. Second, these inter-row weeds are used to constitute the training dataset. Finally, we train CNNs on this dataset to build a model able to detect crops and weeds in the images. The results obtained are comparable to those of traditional supervised labeling, with accuracy differences of 1.5% in the spinach field and 6% in the bean field.
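
    As an illustration of the row-based labeling phase, the following is a minimal sketch built from common ingredients (an excess-green vegetation index and Hough-line row detection): vegetation falling outside the detected row band is kept as weed training samples. The index choice, thresholds, and line-detection parameters are assumptions, not the paper's exact pipeline.

    # Hedged sketch: unsupervised weed pseudo-labels from detected crop rows.
    import cv2
    import numpy as np

    def weak_weed_mask(bgr):
        b, g, r = [bgr[..., i].astype(np.float32) for i in range(3)]
        exg = 2 * g - r - b  # excess-green vegetation index
        veg = (exg > exg.mean() + exg.std()).astype(np.uint8) * 255

        # Detect crop rows as long, roughly parallel lines in the vegetation mask.
        lines = cv2.HoughLinesP(veg, 1, np.pi / 180, threshold=80,
                                minLineLength=veg.shape[0] // 2, maxLineGap=30)
        rows = np.zeros_like(veg)
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                cv2.line(rows, (x1, y1), (x2, y2), 255, thickness=25)

        # Vegetation outside the row band = candidate inter-row weeds.
        weeds = cv2.bitwise_and(veg, cv2.bitwise_not(rows))
        return veg, weeds  # crop/weed pseudo-labels to train a CNN on

    img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in image
    veg, weeds = weak_weed_mask(img)
    print("weed pixels:", int((weeds > 0).sum()))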

    Transformer Neural Network for Weed and Crop Classification of High Resolution UAV Images

    Monitoring crops and weeds is a major challenge in agriculture and food production today. Weeds compete directly with crops for moisture, nutrients, and sunlight, and therefore have a significant negative impact on crop yield if not sufficiently controlled. Weed detection and mapping is an essential step in weed control. Many existing research studies recognize the importance of remote sensing systems and machine learning algorithms in weed management. Deep learning approaches have shown good performance in many agriculture-related remote sensing tasks, such as plant classification and disease detection. However, despite their success, these approaches still face many challenges, such as high computation cost, the need for large labelled datasets, and difficult crop and weed discrimination (in the growing phase, weeds and crops share many similar attributes, such as color, texture, and shape). This paper aims to show that attention-based deep networks are a promising approach to the aforementioned problems, in the context of weed and crop recognition with a drone system. The specific objective of this study was to investigate vision transformers (ViT) and apply them to plant classification in unmanned aerial vehicle (UAV) images. Data were collected using a high-resolution camera mounted on a UAV, which was deployed in beet, parsley, and spinach fields. The acquired data were augmented to build a larger dataset; since ViT requires large sample sets for good performance, we also adopted a transfer learning strategy. Experiments were set up to assess the effect of training and validation dataset size, as well as the effect of increasing the test set while reducing the training set. The results show that with a small labeled training dataset, the ViT models outperform state-of-the-art models such as EfficientNet and ResNet. The results of this study are promising and show the potential of ViT to be applied to a wide range of remote sensing image analysis tasks.
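
    The transfer-learning setup described above can be sketched as follows: an ImageNet-pretrained ViT has its classification head replaced and fine-tuned on a small labeled set. This is a minimal sketch, not the study's configuration; the three-class head, frozen backbone, input size, and optimizer settings are all illustrative assumptions.

    # Hedged sketch: fine-tuning a pretrained ViT on a small plant dataset.
    import torch
    import torch.nn as nn
    from torchvision.models import vit_b_16, ViT_B_16_Weights

    model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
    model.heads.head = nn.Linear(model.heads.head.in_features, 3)  # new classifier

    # Freeze the backbone; train only the new head on the small labeled set.
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith("heads")

    opt = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=3e-4)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(4, 3, 224, 224)  # stand-in for augmented UAV patches
    y = torch.randint(0, 3, (4,))    # hypothetical class labels
    loss = criterion(model(x), y)
    loss.backward()
    opt.step()
    print("batch loss:", loss.item())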

    Tracking System Using Camshift and Feature Points

    Publication in the conference proceedings of EUSIPCO, Florence, Italy, 2006.
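
    No abstract is available for this entry. As background on the technique named in the title, CamShift is available directly in OpenCV; the following is a minimal color-histogram tracking loop, not the paper's method, and it omits the feature-point component. The video file name and initial target window are placeholders.

    # Hedged sketch: CamShift tracking of a color histogram with OpenCV.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("video.avi")  # placeholder input file
    ok, frame = cap.read()
    x, y, w, h = 300, 200, 100, 100      # assumed initial target window
    roi = frame[y:y + h, x:x + w]

    # Hue histogram of the target, used as the tracking model.
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        box, window = cv2.CamShift(backproj, window, term)  # adapts size/angle
        pts = np.int32(cv2.boxPoints(box))
        cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
        cv2.imshow("CamShift", frame)
        if cv2.waitKey(30) == 27:  # Esc to quit
            break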

    A joint snake and atlas-based segmentation of plantar foot thermal images

    International audience.
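
    No abstract is available for this entry. As background on the snake half of the title, an active contour can be run with scikit-image; the following minimal sketch refines a circular initialization on a synthetic foot-like "thermal" blob. The atlas prior that the title pairs with the snake is not reproduced, and all parameters are illustrative.

    # Hedged sketch: active-contour (snake) segmentation with scikit-image.
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    # Synthetic "thermal" image: a warm elliptical blob on a cooler background.
    rr, cc = np.mgrid[0:200, 0:120]
    img = np.exp(-(((rr - 100) / 60.0) ** 2 + ((cc - 60) / 30.0) ** 2))

    # Circular snake initialization around the expected region
    # (in practice this could come from a registered atlas).
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([100 + 80 * np.sin(s), 60 + 45 * np.cos(s)])

    snake = active_contour(gaussian(img, sigma=3), init,
                           alpha=0.015, beta=10.0, gamma=0.001)
    print("contour points:", snake.shape)  # (200, 2) refined boundary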

    Indoor Pedestrian Localization With a Smartphone: A Comparison of Inertial and Vision-Based Methods

    International audience.
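
    No abstract is available for this entry. As background on the inertial side of such a comparison, step detection from smartphone accelerometer magnitude is a standard pedestrian-dead-reckoning building block; the following minimal sketch uses peak picking. The sampling rate, thresholds, stride length, and synthetic signal are assumptions, not the paper's pipeline.

    # Hedged sketch: step detection from accelerometer magnitude via peak picking.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0  # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    # Synthetic walking signal: ~2 steps/s plus gravity and sensor noise.
    acc_mag = 9.81 + 1.5 * np.sin(2 * np.pi * 2.0 * t) \
        + 0.2 * np.random.randn(t.size)

    # Steps = peaks above gravity, at least 0.3 s apart.
    peaks, _ = find_peaks(acc_mag, height=10.3, distance=int(0.3 * fs))
    step_length = 0.7  # assumed constant stride (m)
    print(f"steps: {peaks.size}, distance walked ~ {peaks.size * step_length:.1f} m")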