213 research outputs found

    The Power of Transfer Learning in Agricultural Applications: AgriNet

    Full text link
    Advances in deep learning and transfer learning have paved the way for various automated classification tasks in agriculture, including plant disease, pest, weed, and plant species detection. However, agricultural automation still faces various challenges, such as the limited size of datasets and the absence of plant-domain-specific pretrained models. Domain-specific pretrained models have shown state-of-the-art performance in various computer vision tasks, including face recognition and medical imaging diagnosis. In this paper, we propose the AgriNet dataset, a collection of 160k agricultural images from more than 19 geographical locations, several image-capturing devices, and more than 423 classes of plant species and diseases. We also introduce the AgriNet models, a set of pretrained models based on five ImageNet architectures: VGG16, VGG19, Inception-v3, InceptionResNet-v2, and Xception. AgriNet-VGG19 achieved the highest classification accuracy of 94% and the highest F1-score of 92%. Additionally, all proposed models were found to accurately classify the 423 classes of plant species, diseases, pests, and weeds, with a minimum accuracy of 87% for the Inception-v3 model. Finally, experiments to evaluate the superiority of the AgriNet models over ImageNet models were conducted on two external datasets: a pest and plant disease dataset from Bangladesh and a plant disease dataset from Kashmir.
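
    The workflow described above, taking an ImageNet-pretrained backbone such as VGG19 and re-training it on plant imagery, can be sketched as follows. This is an illustrative sketch only, not the AgriNet authors' code; the directory path, batch size, and epoch counts are assumptions.

    import tensorflow as tf

    NUM_CLASSES = 423          # plant species / disease / pest / weed classes, as in AgriNet
    IMG_SIZE = (224, 224)

    # Images organised in per-class subdirectories (hypothetical path).
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "agri_images/train", image_size=IMG_SIZE, batch_size=32)

    # ImageNet-pretrained VGG19 backbone, first used as a frozen feature extractor.
    base = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                       input_shape=IMG_SIZE + (3,))
    base.trainable = False

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.vgg19.preprocess_input(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)

    # Second stage: unfreeze the backbone and fine-tune end to end at a lower learning rate.
    base.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)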

    Weed recognition using deep learning techniques on class-imbalanced imagery

    Get PDF
    Context: Most weed species can adversely impact agricultural productivity by competing for nutrients required by high-value crops. Manual weeding is not practical for large cropping areas. Many studies have been undertaken to develop automatic weed management systems for agricultural crops. In this process, one of the major tasks is to recognise weeds from images. However, weed recognition is a challenging task because weed and crop plants can be similar in colour, texture and shape, and these similarities can be exacerbated by the imaging, geographic, or weather conditions under which the images are recorded. Advanced machine learning techniques can be used to recognise weeds from imagery. Aims: In this paper, we have investigated five state-of-the-art deep neural networks, namely VGG16, ResNet-50, Inception-V3, Inception-ResNet-v2 and MobileNetV2, and evaluated their performance for weed recognition. Methods: We have used several experimental settings and multiple dataset combinations. In particular, we constructed a large weed-crop dataset by combining several smaller datasets, mitigated class imbalance by data augmentation, and used this dataset to benchmark the deep neural networks. We investigated the use of transfer learning techniques by preserving the pre-trained weights for extracting the features and fine-tuning them using the images of crop and weed datasets. Key results: We found that VGG16 performed better than the others on small-scale datasets, while ResNet-50 performed better than the other deep networks on the large combined dataset. Conclusions: This research shows that data augmentation and fine-tuning techniques improve the performance of deep learning models for classifying crop and weed images. Implications: This research evaluates the performance of several deep learning models, offers directions for choosing the most appropriate models, and highlights the need for a large-scale benchmark weed dataset.
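
    As an illustration of the methods above (not the authors' code), the sketch below fine-tunes an ImageNet-pretrained ResNet-50 as a frozen feature extractor while countering class imbalance with augmentation layers and inverse-frequency class weights; the folder layout and hyperparameters are assumptions.

    import numpy as np
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "weeds/train", image_size=(224, 224), batch_size=32)
    num_classes = len(train_ds.class_names)

    # Class weights inversely proportional to class frequency; one way to counteract
    # imbalance (the paper itself balances classes by augmentation).
    labels = np.concatenate([y.numpy() for _, y in train_ds])
    counts = np.bincount(labels, minlength=num_classes)
    class_weight = {i: len(labels) / (num_classes * c) for i, c in enumerate(counts)}

    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1),
    ])

    base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False   # keep the pre-trained weights as a fixed feature extractor

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = augment(inputs)
    x = tf.keras.applications.resnet50.preprocess_input(x)
    x = base(x, training=False)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5, class_weight=class_weight)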

    Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets

    Get PDF
    Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools to automate and homogenize field measurements are of vital importance to accelerate their development. However, the development of these tools requires the generation of semantic segmentation datasets, which is a complex, time-consuming and not easily affordable task. In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we have generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of real field images that were manually annotated. The second dataset is composed of images of plots where only one species is present at a time, and the third dataset was synthetically generated from images of individual plants, mimicking the distribution of real field images. Second, we have proposed a semantic segmentation architecture that extends a PSPNet architecture with an auxiliary classification loss to aid model convergence. Our results show that network performance increases when the real field image dataset is supplemented with the other types of datasets, without increasing the manual annotation effort. More specifically, using the real field dataset alone obtains a Dice-Sørensen Coefficient (DSC) score of 25.32. This performance increases when this dataset is combined with the single-species dataset (DSC = 47.97) or the synthetic dataset (DSC = 45.20). As for the proposed model, an ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC = 45.96) compared to the full architecture (DSC = 47.97). The proposed method shows better performance than the current state of the art. In addition, the use of the proposed single-species or synthetic datasets can double the performance of the algorithm compared to using only real datasets, without additional manual annotation effort. We would like to thank BASF technicians Rainer Oberst, Gerd Kraemer, Hikal Gad, Javier Romero and Juan Manuel Contreras, as well as Amaia Ortiz-Barredo from Neiker, for their support in the design of the experiments and the generation of the datasets used in this work. This work was partially supported by the Basque Government through the ELKARTEK project BASQNET (ref K-2021/00014).
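
    Two pieces of the setup described above can be written compactly: the Dice-Sørensen Coefficient used for evaluation, and a total loss that adds an auxiliary image-level classification term to the main per-pixel segmentation term. This is a minimal sketch, not the paper's implementation, and the auxiliary weight of 0.4 is an assumed value, not one reported by the authors.

    import tensorflow as tf

    def dice_coefficient(y_true, y_pred, smooth=1e-6):
        """Dice-Sørensen Coefficient for binary masks with values in [0, 1]."""
        y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
        y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
        intersection = tf.reduce_sum(y_true * y_pred)
        return (2.0 * intersection + smooth) / (
            tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

    def total_loss(seg_true, seg_pred, cls_true, cls_pred, aux_weight=0.4):
        """Main per-pixel segmentation loss plus an auxiliary image-level
        classification loss encouraging the encoder to learn which species
        are present in the image, which aids convergence.
        seg_pred: (B, H, W, C) softmax probabilities; seg_true: (B, H, W) labels.
        cls_pred / cls_true: (B, num_species) presence probabilities / indicators."""
        seg_loss = tf.keras.losses.sparse_categorical_crossentropy(seg_true, seg_pred)
        cls_loss = tf.keras.losses.binary_crossentropy(cls_true, cls_pred)
        return tf.reduce_mean(seg_loss) + aux_weight * tf.reduce_mean(cls_loss)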

    Deep Convolutional Neural Network Architecture for Plant Seedling Classification

    Get PDF
    Weed control is essential in agriculture since weeds reduce yields, increase production cost, impede harvesting, and degrade product quality. As a result, it is critical to recognize weeds early in their vegetation cycle to avoid negative impacts on crop growth. Earlier traditional methods used machine learning to distinguish crops from weed species, but they had issues with weed detection efficiency at early growth stages. The current work proposes a deep learning method that provides accurate results for precise weed recognition. Two different deep convolutional neural networks have been used in our classification framework, namely EfficientNet-B2 and EfficientNet-B4. The plant seedlings dataset is utilized to investigate the proposed work. Average accuracy, precision, recall, and F1-score were used as evaluation metrics. The findings demonstrate that the proposed approach is capable of differentiating between the 12 species of the plant seedlings dataset, which contains 3 crops and 9 weeds. The average classification accuracy and F1-score are 99.00% for the EfficientNet-B4 model and 97.00% for EfficientNet-B2. In addition, the performance of the proposed EfficientNet-B4 model is compared with that of existing models on the plant seedlings dataset, and the results show that the proposed EfficientNet-B4 model has superior performance. We intend to detect diseases in the identified plant species in our future research.
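
    A minimal sketch of such a classifier (assuming TensorFlow/Keras and a "seedlings/{train,val}" folder layout, neither of which is specified by the paper): an ImageNet-pretrained EfficientNet-B4 backbone with a 12-way softmax head, evaluated with per-class precision, recall, and F1-score.

    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import classification_report

    IMG_SIZE = (380, 380)   # native input resolution of EfficientNet-B4
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "seedlings/train", image_size=IMG_SIZE, batch_size=16)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "seedlings/val", image_size=IMG_SIZE, batch_size=16, shuffle=False)

    # Keras EfficientNet expects raw pixel values in [0, 255]; preprocessing is built in.
    base = tf.keras.applications.EfficientNetB4(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(12, activation="softmax"),   # 3 crops + 9 weeds
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=5)

    # Per-class precision, recall and F1-score on the held-out split.
    y_true = np.concatenate([y.numpy() for _, y in val_ds])
    y_pred = np.argmax(model.predict(val_ds), axis=1)
    print(classification_report(y_true, y_pred, target_names=val_ds.class_names))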

    Global Wheat Head Detection (GWHD) dataset: a large and diverse dataset of high resolution RGB labelled images to develop and benchmark wheat head detection methods

    Get PDF
    Detection of wheat heads is an important task that allows the estimation of pertinent traits, including head population density and head characteristics such as sanitary state, size, maturity stage and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery. They are based on computer vision and machine learning and are generally calibrated and validated on limited datasets. However, variability in observational conditions, genotypic differences, development stages and head orientation represents a challenge for computer vision. Further, possible blurring due to motion or wind and overlap between heads in dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse and well-labelled dataset, the Global Wheat Head Detection (GWHD) dataset. It contains 4,700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world at different growth stages and with a wide range of genotypes. Guidelines for image acquisition, associating minimum metadata to respect FAIR principles, and consistent head labelling methods are proposed for developing new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/ and is aimed at developing and benchmarking methods for wheat head detection. Comment: 16 pages, 7 figures, dataset paper
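
    A common starting point for benchmarking detectors on a dataset like GWHD is to fine-tune an off-the-shelf model for a single "wheat head" class. The sketch below (PyTorch/torchvision assumed; not code from the GWHD paper) shows this pattern; loading of the images and box annotations is left abstract, since the exact annotation format is not reproduced here.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # COCO-pretrained Faster R-CNN; replace its classification head with
    # 2 classes: background + wheat head.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    model.train()

    # `data_loader` is assumed to yield (images, targets): a list of image tensors and a
    # list of dicts with "boxes" (N x 4, xyxy) and "labels" (N,), as torchvision expects.
    def train_one_epoch(model, data_loader, optimizer):
        for images, targets in data_loader:
            loss_dict = model(images, targets)    # dict of detection losses in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()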

    Development of a highly applicable process-based model for hydroponically grown sweet pepper using deep learning methodology

    Get PDF
    Ph.D. thesis -- Seoul National University Graduate School: Department of Agriculture, Forestry and Bioresources, College of Agricultural and Life Sciences, 2022. 8. Jung Eek Son. Many agricultural challenges are entangled in a complex interaction between crops and the environment. As a simplifying tool, crop modeling is a process of abstracting and interpreting agricultural phenomena. Understanding based on this interpretation can support academic and social decisions in agriculture. Process-based crop models have addressed these challenges for decades to enhance the productivity and quality of crop production; the remaining objectives have created demand for crop models that can handle multidirectional analyses with multidimensional information. As a possible milestone toward this goal, deep learning algorithms have been introduced for complicated tasks in agriculture. However, these algorithms have not replaced existing crop models because of research fragmentation and the low accessibility of crop models. This study established a developmental protocol for a process-based crop model built with deep learning methodology. The Literature Review introduces deep learning and crop modeling and explains why this protocol is necessary despite the numerous deep learning applications in agriculture. Base studies were conducted with several greenhouse datasets in Chapters 1 and 2: transfer learning and a U-Net structure were utilized to construct an infrastructure for the deep learning application, and HyperOpt, a Bayesian optimization method, was tested for calibrating existing crop models (experimentally, the WOFOST model) so that they could be compared with the developed model. Finally, in Chapter 3, a process-based crop model built entirely from deep neural networks, DeepCrop, was developed with an attention mechanism and multitask decoders for hydroponic sweet peppers (Capsicum annuum var. annuum). The methodology for data integrity showed adequate accuracy, so it was applied to the data in all chapters. HyperOpt was able to calibrate food and feed crop models for sweet peppers; therefore, the models compared in the final chapter were optimized using HyperOpt. DeepCrop was trained to simulate several growth factors from environment data. The trained DeepCrop was evaluated on unseen data and showed the highest modeling efficiency (EF = 0.76) and the lowest normalized root mean squared error (NRMSE = 0.18) among the compared models. Given its high adaptability, DeepCrop can be used for studies on various scales and for various purposes. Since all methods adequately solved the given tasks and underpinned the DeepCrop development, the established protocol can enhance the accessibility of crop models and help unify crop modeling studies.
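
    Two reusable pieces of this protocol can be sketched in a few lines: calibrating a crop model's parameters with HyperOpt's Bayesian (TPE) search, and scoring simulations with modeling efficiency (EF) and normalized RMSE. This is an illustrative sketch, not the thesis code; the toy growth function, parameter names, and bounds are placeholders for a real process-based model such as those calibrated in Chapters 1 and 2.

    import numpy as np
    from hyperopt import fmin, tpe, hp

    observed = np.array([1.2, 2.9, 5.1, 7.8, 11.0, 14.6])   # placeholder growth observations

    def nrmse(obs, sim):
        """Normalized root mean squared error; lower is better."""
        return np.sqrt(np.mean((obs - sim) ** 2)) / np.mean(obs)

    def modeling_efficiency(obs, sim):
        """EF = 1 - SSE / SST (Nash-Sutcliffe form); 1 is a perfect fit."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

    def run_crop_model(rue, sla):
        """Stand-in for a process-based crop model run; a real model would be called here."""
        days = np.arange(len(observed), dtype=float)
        return rue * days * (1.0 + sla * days)               # toy growth curve, illustration only

    def objective(params):
        sim = run_crop_model(**params)
        return nrmse(observed, sim)                          # HyperOpt minimizes this value

    space = {
        "rue": hp.uniform("rue", 0.5, 4.0),                  # example parameter and bounds
        "sla": hp.uniform("sla", 0.0, 0.2),                  # example parameter and bounds
    }
    best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=200)
    print(best, modeling_efficiency(observed, run_crop_model(**best)))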

    Semantic Segmentation based deep learning approaches for weed detection

    Get PDF
    The global increase in herbicide use to control weeds has led to issues such as the evolution of herbicide-resistant weeds and off-target herbicide movement. Precision agriculture advocates Site-Specific Weed Management (SSWM) to apply the right amount of herbicide precisely and to reduce off-target herbicide movement. Recent advancements in Deep Learning (DL) have opened possibilities for adaptive and accurate weed recognition for field-based SSWM applications with traditional and emerging spraying equipment; however, challenges exist in identifying the DL model structure and training the model appropriately for accurate and rapid application over varying crop/weed growth stages and environments. In our study, an encoder-decoder based DL architecture was proposed that performs pixel-wise Semantic Segmentation (SS) classification of crop, soil, and weed patches in the field. The objective of this study was to develop a robust weed detection algorithm using DL techniques that can accurately and reliably locate weed infestations in low-altitude Unmanned Aerial Vehicle (UAV) imagery at an acceptable application speed. Two encoder-decoder based SS models, LinkNet and UNet, were developed using transfer learning techniques. We applied measures such as backpropagation optimization and refinement of the training dataset to address the class-imbalance problem, a common issue in developing weed detection models. It was found that the LinkNet model with a ResNet18 encoder and the focal loss function achieved the highest mean and class-wise Intersection over Union scores across class categories when making predictions on an unseen dataset. The developed state-of-the-art model did not require a large amount of training data, and the techniques used to develop the model in our study provide a promising approach that performs better than existing SS-based weed detection models. The proposed model offers a forward-looking approach to weed detection on UAV aerial imagery for real-time SSWM applications. Advisor: Yeyin Sh
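
    The focal loss mentioned above down-weights pixels the network already classifies well (mostly soil) so that the rarer crop and weed pixels contribute more to the gradient. Below is a minimal PyTorch sketch of a multi-class focal loss for per-pixel predictions; it is not the authors' implementation, and gamma = 2.0 and the alpha weights are common defaults rather than values from the study. The logits would come from a segmentation network such as a LinkNet with a ResNet18 encoder.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0, alpha=None):
        """Multi-class focal loss for per-pixel predictions.
        logits:  (B, C, H, W) raw network outputs
        targets: (B, H, W)    integer class labels (e.g. 0 = soil, 1 = crop, 2 = weed)
        """
        ce = F.cross_entropy(logits, targets, reduction="none")   # -log p_t, shape (B, H, W)
        pt = torch.exp(-ce)                                       # probability of the true class
        focal = (1.0 - pt) ** gamma * ce                          # down-weight easy pixels
        if alpha is not None:                                     # optional per-class weights
            focal = alpha.to(logits.device)[targets] * focal
        return focal.mean()

    # Example: 3-class crop/soil/weed maps with class weights favouring rare classes.
    logits = torch.randn(2, 3, 64, 64, requires_grad=True)
    targets = torch.randint(0, 3, (2, 64, 64))
    loss = focal_loss(logits, targets, alpha=torch.tensor([0.2, 0.3, 0.5]))
    loss.backward()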

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Get PDF
    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, to reduce biodiversity loss, and to preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.

    Artificial Neural Networks in Agriculture

    Get PDF
    Modern agriculture needs to combine high production efficiency with a high quality of the products obtained. This applies to both crop and livestock production. To meet these requirements, advanced methods of data analysis are increasingly used, including those derived from artificial intelligence. Artificial neural networks (ANNs) are one of the most popular tools of this kind. They are widely used in solving various classification and prediction tasks, and for some time now also in the broadly defined field of agriculture. They can form part of precision farming and decision support systems. Artificial neural networks can replace classical methods of modelling many issues and are one of the main alternatives to classical mathematical models. The spectrum of applications of artificial neural networks is very wide. For a long time now, researchers from all over the world have been using these tools to support agricultural production, making it more efficient and providing the highest-quality products possible.