The Power of Transfer Learning in Agricultural Applications: AgriNet
Advances in deep learning and transfer learning have paved the way for various automated classification tasks in agriculture, including the detection of plant diseases, pests, weeds, and plant species. However, agricultural automation still faces challenges such as the limited size of datasets and the absence of plant-domain-specific pretrained models. Domain-specific pretrained models have shown state-of-the-art performance in various computer vision tasks, including face recognition and medical imaging diagnosis. In this paper, we propose the AgriNet dataset, a collection of 160k agricultural images from more than 19 geographical locations, captured with several imaging devices, and covering more than 423 classes of plant species and diseases. We also introduce AgriNet models, a set of pretrained models based on five ImageNet architectures: VGG16, VGG19, Inception-v3, InceptionResNet-v2, and Xception. AgriNet-VGG19 achieved the highest classification accuracy of 94% and the highest F1-score of 92%. Additionally, all proposed models were found to accurately classify the 423 classes of plant species, diseases, pests, and weeds, with a minimum accuracy of 87% for the Inception-v3 model. Finally, experiments evaluating the superiority of AgriNet models over ImageNet models were conducted on two external datasets: a pest and plant disease dataset from Bangladesh and a plant disease dataset from Kashmir.
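The accuracy and F1-score figures reported above come from a multiclass confusion matrix; a minimal sketch of how such metrics are computed (the toy 3-class matrix below is illustrative, not data from the paper):

```python
def per_class_prf(conf, k):
    """Precision, recall and F1 for class k of a confusion matrix.

    conf[i][j] = number of samples with true class i predicted as class j.
    """
    tp = conf[k][k]
    fp = sum(conf[i][k] for i in range(len(conf))) - tp
    fn = sum(conf[k]) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def macro_f1(conf):
    """Unweighted mean of per-class F1 scores."""
    return sum(per_class_prf(conf, k)[2] for k in range(len(conf))) / len(conf)

def accuracy(conf):
    """Fraction of samples on the diagonal of the confusion matrix."""
    total = sum(sum(row) for row in conf)
    return sum(conf[k][k] for k in range(len(conf))) / total

# toy 3-class confusion matrix (rows = true class, columns = predicted class)
conf = [[8, 1, 1],
        [0, 9, 1],
        [1, 0, 9]]
acc = accuracy(conf)   # 26/30
mf1 = macro_f1(conf)
```

Macro-averaging weights each of the 423 classes equally, so rare plant species count as much as common ones.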
Weed recognition using deep learning techniques on class-imbalanced imagery
Context: Most weed species can adversely impact agricultural productivity by competing for nutrients required by high-value crops. Manual weeding is not practical for large cropping areas. Many studies have been undertaken to develop automatic weed management systems for agricultural crops. In this process, one of the major tasks is to recognise the weeds from images. However, weed recognition is a challenging task. This is because weed and crop plants can be similar in colour, texture, and shape, similarities that can be exacerbated by imaging, geographic, or weather conditions when the images are recorded. Advanced machine learning techniques can be used to recognise weeds from imagery.
Aims: In this paper, we have investigated five state-of-the-art deep neural networks, namely VGG16, ResNet-50, Inception-V3, Inception-ResNet-v2 and MobileNetV2, and evaluated their performance for weed recognition.
Methods: We have used several experimental settings and multiple dataset combinations. In particular, we constructed a large weed-crop dataset by combining several smaller datasets, mitigating class imbalance by data augmentation, and using this dataset in benchmarking the deep neural networks. We investigated the use of transfer learning techniques by preserving the pre-trained weights for extracting the features and fine-tuning them using the images of crop and weed datasets.
Key results: We found that VGG16 performed better than others on small-scale datasets, while ResNet-50 performed better than other deep networks on the large combined dataset.
Conclusions: This research shows that data augmentation and fine-tuning techniques improve the performance of deep learning models for classifying crop and weed images.
Implications: This research evaluates the performance of several deep learning models, offers directions for using the most appropriate models, and highlights the need for a large-scale benchmark weed dataset.
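One simple way to mitigate class imbalance by augmentation, as described above, is to generate augmented images for minority classes until every class matches the largest one; a minimal sketch (class names and counts are illustrative, not from the study):

```python
def augmentation_plan(class_counts):
    """Number of augmented images to generate per class so that every
    class reaches the size of the largest class in the dataset."""
    target = max(class_counts.values())
    return {cls: target - n for cls, n in class_counts.items()}

# illustrative per-class image counts for a crop/weed dataset
counts = {"chickweed": 120, "fat_hen": 450, "maize": 450, "cleavers": 300}
plan = augmentation_plan(counts)
# plan["chickweed"] == 330: the rarest class needs the most augmentation
```

In practice each planned image would be produced by a random transform (flip, rotation, crop, colour jitter) of an existing image from that class.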
Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets
Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools to automate and homogenize field measurements are of vital importance to accelerate their development. However, the development of these tools requires the generation of semantic segmentation datasets, which is a complex, time-consuming and not easily affordable task.
In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we have generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of real field images that were manually annotated. The second dataset is composed of images of plots where only one species is present at a time, and the third dataset was synthetically generated from images of individual plants mimicking the distribution of real field images.
Second, we have proposed a semantic segmentation architecture by extending a PSPNet architecture with an auxiliary classification loss to aid model convergence. Our results show that network performance increases when supplementing the real field image dataset with the other types of datasets, without increasing the manual annotation effort. More specifically, using the real field dataset alone obtains a Dice-Sørensen Coefficient (DSC) score of 25.32. This performance increases when this dataset is combined with the single-species dataset (DSC=47.97) or the synthetic dataset (DSC=45.20). As for the proposed model, an ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC=45.96) compared to the proposed architecture (DSC=47.97).
The proposed method shows better performance than the current state of the art. In addition, the use of the proposed single-species or synthetic datasets can roughly double the performance of the algorithm compared to using real datasets alone, without additional manual annotation effort. We would like to thank BASF technicians Rainer Oberst, Gerd Kraemer, Hikal Gad, Javier Romero and Juan Manuel Contreras, as well as Amaia Ortiz-Barredo from Neiker, for their support in the design of the experiments and the generation of the datasets used in this work. This work was partially supported by the Basque Government through ELKARTEK project BASQNET (ref. K-2021/00014).
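The Dice-Sørensen Coefficient used above, and the idea of adding an auxiliary classification loss to the main segmentation loss, can be sketched as follows (the auxiliary weight and the toy masks are illustrative assumptions, not values from the paper):

```python
def dice_coefficient(pred, target):
    """Dice-Sørensen Coefficient between two binary masks, given as flat
    sequences of 0/1 pixels: DSC = 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * intersection / total if total else 1.0

def combined_loss(seg_loss, aux_cls_loss, aux_weight=0.4):
    """Total training loss: main segmentation loss plus a weighted
    auxiliary classification loss to aid convergence (PSPNet-style)."""
    return seg_loss + aux_weight * aux_cls_loss

# toy 6-pixel masks: 2 pixels agree out of 3 foreground pixels in each mask
pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
dsc = dice_coefficient(pred, target)  # 2*2 / (3+3) = 0.666...
```

At inference time the auxiliary classification branch is discarded; it only shapes the gradients during training.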
Deep Convolutional Neural Network Architecture for Plant Seedling Classification
Weed control is essential in agriculture since weeds reduce yields, increase production cost, impede harvesting, and degrade product quality. As a result, it is critical to recognize weeds early in their vegetation cycle to avoid negative impacts on crop growth. Earlier methods used classical machine learning to identify crops and weed species, but they had issues with weed detection efficiency at early growth stages. The current work proposes a deep learning method that provides accurate results for precise weed recognition. Two different deep convolutional neural networks have been used for our classification framework, namely EfficientNet-B2 and EfficientNet-B4. The plant seedlings dataset is utilized to investigate the proposed work. The evaluation metrics used were average accuracy, precision, recall, and F1-score. The findings demonstrate that the proposed approach is capable of differentiating between the 12 species of the plant seedlings dataset, which contains 3 crops and 9 weeds. The average classification accuracy and F1-score are 99.00% for our EfficientNet-B4 model and 97.00% for EfficientNet-B2. In addition, the performance of the proposed EfficientNet-B4 model is compared to that of existing models on the plant seedlings dataset, and the results show that it is superior. We intend to detect diseases in the identified plant species in our future research.
Global Wheat Head Detection (GWHD) dataset: a large and diverse dataset of high resolution RGB labelled images to develop and benchmark wheat head detection methods
Detection of wheat heads is an important task that allows the estimation of pertinent traits, including head population density and head characteristics such as sanitary state, size, maturity stage, and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery. They are based on computer vision and machine learning and are generally calibrated and validated on limited datasets. However, variability in observational conditions, genotypic differences, development stages, and head orientation represents a challenge in computer vision. Further, possible blurring due to motion or wind and overlap between heads in dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse, and well-labelled dataset, the Global Wheat Head Detection (GWHD) dataset. It contains 4,700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world, at different growth stages and with a wide range of genotypes. Guidelines for image acquisition, for associating minimum metadata to respect FAIR principles, and for consistent head labelling are proposed for developing new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/ and is aimed at developing and benchmarking methods for wheat head detection.
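Detection methods benchmarked on datasets like GWHD are typically scored by the Intersection over Union (IoU) between predicted and ground-truth bounding boxes; a minimal sketch (the box coordinates below are illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # width and height of the overlap region (zero if the boxes are disjoint)
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

pred = (10, 10, 50, 50)   # predicted wheat head box
gt   = (20, 20, 60, 60)   # ground-truth box
score = iou(pred, gt)     # 900 / 2300
```

A prediction usually counts as a correct detection when its IoU with a ground-truth head exceeds a threshold such as 0.5.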
Development of a process-based model with high applicability for hydroponically grown sweet pepper using deep learning methodology
Ph.D. dissertation, Seoul National University Graduate School, Department of Agriculture, Forestry and Bioresources, August 2022. Advisor: Jung Eek Son. Many agricultural challenges are entangled in a complex interaction between crops and the environment. As a simplifying tool, crop modeling is a process of abstracting and interpreting agricultural phenomena. Understanding based on this interpretation can play a role in supporting academic and social decisions in agriculture. Process-based crop models have solved such challenges for decades to enhance the productivity and quality of crop production; the remaining objectives have led to demand for crop models handling multidirectional analyses with multidimensional information. As a possible milestone toward this goal, deep learning algorithms have been introduced to complicated tasks in agriculture. However, these algorithms could not replace existing crop models because of research fragmentation and the low accessibility of crop models. This study established a developmental protocol for a process-based crop model with deep learning methodology. The Literature Review introduces deep learning and crop modeling and explains why this protocol is necessary despite numerous deep learning applications in agriculture. Base studies were conducted with several greenhouse datasets in Chapters 1 and 2: transfer learning and the U-Net structure were utilized to construct an infrastructure for the deep learning application, and HyperOpt, a Bayesian optimization method, was tested to calibrate crop models so that existing crop models could be compared with the developed model. Finally, the process-based crop model built entirely from deep neural networks, DeepCrop, was developed with an attention mechanism and multitask decoders for hydroponic sweet peppers (Capsicum annuum var. annuum) in Chapter 3. The methodology for data integrity showed adequate accuracy, so it was applied to the data in all chapters. HyperOpt was able to calibrate food and feed crop models for sweet peppers.
Therefore, the compared models in the final chapter were optimized using HyperOpt. DeepCrop was trained to simulate several growth factors from environment data. The trained DeepCrop was evaluated on unseen data and showed the highest modeling efficiency (EF=0.76) and the lowest normalized root mean squared error (NRMSE=0.18) among the compared models. With its high adaptability, DeepCrop can be used for studies of various scales and purposes. Since all methods adequately solved the given tasks and underlay the DeepCrop development, the established protocol can serve as a high-throughput route to enhancing the accessibility of crop models, helping to unify crop modeling studies.
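The modeling efficiency (EF) and normalized RMSE used to evaluate DeepCrop are standard goodness-of-fit measures; a minimal sketch (the observed/simulated values are illustrative, and NRMSE is normalized here by the observed mean, one common convention):

```python
def modeling_efficiency(obs, sim):
    """Nash-Sutcliffe modeling efficiency: 1 - SSE / total variance of obs.
    EF = 1 is a perfect fit; EF <= 0 is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / sst

def nrmse(obs, sim):
    """Root mean squared error normalized by the observed mean."""
    mean_obs = sum(obs) / len(obs)
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)
    return mse ** 0.5 / mean_obs

# illustrative observed vs. simulated growth values
obs = [2.0, 4.0, 6.0, 8.0]
sim = [2.5, 3.5, 6.5, 7.5]
ef = modeling_efficiency(obs, sim)  # 1 - 1.0/20.0 = 0.95
err = nrmse(obs, sim)               # 0.5 / 5.0 = 0.1
```

Reporting both metrics is informative because EF penalizes failure to track variability while NRMSE expresses the typical error relative to the magnitude of the data.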
Semantic Segmentation based deep learning approaches for weed detection
The global increase in herbicide use to control weeds has led to issues such as the evolution of herbicide-resistant weeds and off-target herbicide movement. Precision agriculture advocates Site-Specific Weed Management (SSWM) to achieve precise application of the right amount of herbicide and reduce off-target herbicide movement. Recent advancements in Deep Learning (DL) have opened possibilities for adaptive and accurate weed recognition for field-based SSWM applications with traditional and emerging spraying equipment; however, challenges exist in identifying the DL model structure and training the model appropriately for accurate and rapid application across varying crop/weed growth stages and environments. In our study, an encoder-decoder based DL architecture was proposed that performs pixel-wise Semantic Segmentation (SS) classification of crop, soil, and weed patches in the field. The objective of this study was to develop a robust weed detection algorithm using DL techniques that can accurately and reliably locate weed infestations in low-altitude Unmanned Aerial Vehicle (UAV) imagery with acceptable application speed. Two encoder-decoder based SS models, LinkNet and UNet, were developed using transfer learning techniques. We applied measures such as backpropagation optimization and refinement of the training dataset to address the class-imbalance problem, a common issue in developing weed detection models. The LinkNet model with ResNet18 as the encoder and the 'Focal loss' loss function achieved the highest mean and class-wise Intersection over Union scores across class categories when predicting on an unseen dataset. The developed state-of-the-art model did not require a large amount of training data, and the techniques used to develop it provide a promising approach that performs better than existing SS-based weed detection models.
The proposed model could be used for weed detection on aerial imagery from UAVs and to perform real-time SSWM applications.
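The 'Focal loss' used to train the LinkNet model down-weights well-classified examples so that the class-imbalance problem noted above hurts training less; a minimal sketch of the binary case (the α and γ values are common defaults, not necessarily those used in the study):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive class, y: true label (0 or 1).
    FL = -alpha_t * (1 - p_t)**gamma * log(p_t), where p_t is the probability
    assigned to the true class. gamma > 0 shrinks the loss of easy examples.
    """
    p_t = p if y == 1 else 1 - p
    alpha_t = alpha if y == 1 else 1 - alpha
    return -alpha_t * (1 - p_t) ** gamma * math.log(p_t)

# an easy, confident prediction contributes far less loss than a hard one:
easy = focal_loss(0.95, 1)  # well-classified positive pixel
hard = focal_loss(0.10, 1)  # badly misclassified positive pixel
```

In semantic segmentation the loss is averaged over all pixels, so with plain cross-entropy the abundant soil pixels would dominate; the (1 - p_t)^γ factor lets the rare weed pixels drive the gradient instead.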
Advisor: Yeyin Sh
Sustainable Agriculture and Advances of Remote Sensing (Volume 1)
Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, to reduce biodiversity loss, and to preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agricultural practices. Earth observation data and in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing results, among other topics.
Artificial Neural Networks in Agriculture
Modern agriculture needs high production efficiency combined with high quality of the obtained products. This applies to both crop and livestock production. To meet these requirements, advanced methods of data analysis are increasingly used, including those derived from artificial intelligence. Artificial neural networks (ANNs) are one of the most popular tools of this kind. They are widely used in solving various classification and prediction tasks, and for some time now also in the broadly defined field of agriculture. They can form part of precision farming and decision support systems. Artificial neural networks can replace classical methods of modelling many issues and are one of the main alternatives to classical mathematical models. The spectrum of applications of artificial neural networks is very wide. For a long time now, researchers from all over the world have been using these tools to support agricultural production, making it more efficient and providing the highest-quality products possible.