33 research outputs found

    HYPERPARAMETER SELECTION IN ALEXNET CNN FOR SOYBEAN DISEASE IMAGE CLASSIFICATION

    Soybean is a popular food crop. Yields in Indonesia are estimated at between 1.2 and 1.5 tons per hectare, still far below the yield potential of 2 to 2.5 tons per hectare. Disease in soybean plants is one cause of these low yields, so farmers must be able to recognize the diseases attacking their crop in order to identify the disease and choose the appropriate treatment. Laboratory-based disease diagnosis is still inefficient and takes a long time. Computer vision and deep learning can now be used to recognize predictive information about objects regardless of where an object is positioned in the image. The Convolutional Neural Network (CNN) is currently the most widely used deep learning technique. This study uses the AlexNet CNN architecture with hyperparameter tuning to classify images of soybean diseases; hyperparameter tuning strongly affects model performance. The dataset consists of 1,500 images of diseased soybean leaves in three classes: caterpillar, Diabrotica speciosa, and healthy. Hyperparameter tuning of the AlexNet CNN with a batch size of 12, a dropout of 0.2, and the Adam optimizer produced the best results, with an accuracy of 84%, a precision of 81.95%, a recall of 80.66%, and an F1-score of 80.96%.
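
As a concrete illustration (not the authors' code), the sketch below builds an AlexNet-style classifier in tf.keras and compiles it with the best-performing hyperparameters reported above (batch size 12, dropout 0.2, Adam optimizer); the layer sizes follow the classic AlexNet design and the 227 x 227 input shape is an assumption.

```python
# Minimal sketch: AlexNet-style classifier configured with the abstract's reported
# best hyperparameters (batch size 12, dropout 0.2, Adam) for 3 soybean-leaf classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3   # caterpillar, Diabrotica speciosa, healthy
BATCH_SIZE = 12   # reported best batch size
DROPOUT = 0.2     # reported best dropout rate

def build_alexnet(input_shape=(227, 227, 3)):
    """AlexNet-like architecture; layer sizes follow the classic design (assumed)."""
    return models.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(DROPOUT),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(DROPOUT),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_alexnet()
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds.batch(BATCH_SIZE), validation_data=val_ds.batch(BATCH_SIZE))
```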

    Quantifying soybean phenotypes using UAV imagery and machine learning, deep learning methods

    Crop breeding programs aim to introduce new cultivars with improved traits to help solve the food crisis. Food production needs to roughly double its current growth rate to feed the growing population by 2050. Soybean is one of the major grain crops in the world, and the US alone contributes around 35 percent of world soybean production. To increase soybean production, breeders still rely on conventional breeding strategies, which are mainly a 'trial and error' process. These constraints limit the expected progress of crop breeding programs. The goal was to quantify the soybean phenotypes of plant lodging and pubescence color using UAV-based imagery and advanced machine learning. Plant lodging and pubescence color are two of the most important phenotypes for soybean breeding programs. They are conventionally evaluated visually by breeders, which is time-consuming and subject to human error. This study therefore investigated the potential of unmanned aerial vehicle (UAV)-based imagery and machine learning for assessing lodging conditions, and of deep learning for assessing pubescence color, in soybean breeding lines. A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery of 1,266 four-row plots in a soybean breeding field at the reproductive stage. Soybean lodging and pubescence scores were visually assessed by experienced breeders. Lodging scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging, while pubescence color scores were grouped into three classes, i.e., gray, tawny, and segregation. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging scores of each breeding line. Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated for classifying soybean lodging classes. Five data pre-processing methods were used to treat the imbalanced dataset and improve classification accuracy. Results indicate that the pre-processing method SMOTE-ENN consistently performs well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), lowest misclassification, and higher F1-score and Kappa coefficient. This suggests that Synthetic Minority Over-sampling-Edited Nearest Neighbor (SMOTE-ENN) may be an excellent pre-processing method for handling unbalanced datasets in classification tasks. Furthermore, an overall accuracy of 96 percent was obtained using the SMOTE-ENN dataset and the ANN classifier. To classify soybean pubescence color, seven pre-trained deep learning models, i.e., DenseNet121, DenseNet169, DenseNet201, ResNet50, InceptionResNet-V2, Inception-V3, and EfficientNet, were used, and images of each plot were fed into the models. Data were augmented using two rotation and two scaling factors to enlarge the dataset. Among the seven pre-trained deep learning models, the ResNet50 and DenseNet121 classifiers showed a higher overall accuracy of 88 percent, along with higher precision, recall, and F1-score for all three classes of pubescence color.
In conclusion, the developed UAV-based high-throughput phenotyping system can gather image features to estimate crucial soybean phenotypes and classify them, which will help breeders assess phenotypic variation in breeding trials. The RGB imagery-based classification could also be a cost-effective choice for breeders and associated researchers in identifying superior genotypes in plant breeding programs.
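
As an illustration of this kind of pipeline (an assumed sketch, not the study's code), the snippet below resamples an imbalanced training set with imbalanced-learn's SMOTE-ENN and compares the four classifier families named above; the feature matrix X and lodging labels y are synthetic placeholders standing in for the twelve extracted image features and the breeder-assigned scores.

```python
# Minimal sketch: balance an imbalanced lodging dataset with SMOTE-ENN, then compare
# the four classifiers evaluated in the study (XGBoost, RF, KNN, ANN).
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score, cohen_kappa_score
from xgboost import XGBClassifier

# Placeholder data: 1,266 plots, 12 image features, 4 lodging classes (0-3).
rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=1266)
X = rng.normal(size=(1266, 12)) + y[:, None]   # class-dependent shift keeps the toy data learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)  # resample the training set only

classifiers = {
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_bal, y_bal)
    pred = clf.predict(X_te)
    print(name,
          accuracy_score(y_te, pred),
          f1_score(y_te, pred, average="macro"),
          cohen_kappa_score(y_te, pred))
```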

    Olive-Fruit Variety Classification by Means of Image Processing and Convolutional Neural Networks

    The automation of classification and grading of horticultural products according to different features is a major challenge in the food industry. Focusing on the olive sector, which boasts a huge range of cultivars, a methodology for olive-fruit variety classification is proposed, approaching it as an image classification problem. To that end, 2,800 fruits belonging to seven different olive varieties were photographed. After processing these initial captures by means of image processing techniques, the resulting set of images of individual fruits was used to train, and subsequently to externally validate, implementations of six different Convolutional Neural Network architectures, in order to compute the classifiers with which to perform the variety categorization of the fruits. Remarkable hit rates were obtained when testing the classifiers on the corresponding external validation sets, with a top accuracy of 95.91% yielded by the Inception-ResnetV2 architecture. The results suggest that the proposed methodology, once integrated into industrial conveyor belts, promises to be an advanced solution for postharvest olive-fruit processing and classification.
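
A minimal sketch of this kind of transfer-learning setup (assumed details, not the authors' exact configuration): fine-tune an ImageNet-pretrained Inception-ResNetV2 backbone to classify single-fruit images into the seven varieties. The head layers, input size, and training schedule below are illustrative.

```python
# Minimal sketch: Inception-ResNetV2 backbone with a small classification head
# for 7 olive-fruit varieties; not the paper's exact configuration.
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras import layers, models

NUM_VARIETIES = 7

base = InceptionResNetV2(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # start with the backbone frozen; optionally unfreeze later for fine-tuning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_VARIETIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# train_ds / val_ds would come from the segmented single-fruit images:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```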

    A comprehensive review of crop yield prediction using machine learning approaches with special emphasis on palm oil yield prediction

    An early and reliable estimation of crop yield is essential for quantitative and financial evaluation at the field level, for determining strategic plans for agricultural commodities, for import-export policies, and for doubling farmers' incomes. Crop yield prediction using machine learning algorithms is one of the challenging issues in the agricultural sector. Given this growing significance of crop yield prediction, this article provides an exhaustive review of the use of machine learning algorithms to predict crop yield, with special emphasis on palm oil yield prediction. Initially, the current status of palm oil yield around the world is presented, along with a brief overview of widely used features and prediction algorithms. Then, a critical evaluation of state-of-the-art machine learning-based crop yield prediction, machine learning applications in the palm oil industry, and a comparative analysis of related studies are presented. Subsequently, a detailed study of the advantages and difficulties of machine learning-based crop yield prediction and an identification of current and future challenges for the agricultural industry are presented. Potential solutions are additionally prescribed to alleviate existing problems in crop yield prediction. Since one of the major objectives of this study is to explore the future perspectives of machine learning-based palm oil yield prediction, areas including the application of remote sensing, plant growth and disease recognition, mapping and tree counting, and optimum features and algorithms are broadly discussed. Finally, a prospective architecture for machine learning-based palm oil yield prediction is proposed based on the critical evaluation of existing related studies. This technology will fulfill its promise by addressing new research challenges in the analysis and development of crop yield prediction.

    Artificial intelligence and image processing applications for high-throughput phenotyping

    Doctor of Philosophy, Department of Computer Science, Mitchell L. Neilsen. The areas of Computer Vision and Scientific Computing have witnessed rapid growth in the last decade, with the fields of industrial robotics, automotive and healthcare acting as the primary vehicles for research and advancement. However, related research in other fields, such as agriculture, remains understudied. This dissertation explores the application of Computer Vision and Scientific Computing in an agricultural domain known as High-throughput Phenotyping (HTP). HTP is the assessment of complex seed traits such as growth, development, tolerance, resistance, ecology, and yield, and the measurement of parameters that form more complex traits. The dissertation makes the following contributions. The first contribution is the development of algorithms to estimate morphometric traits such as length, width, area, and seed kernel count using 3-D graphics and static image processing, and the extension of existing algorithms for the same. The second contribution is the development of lightweight frameworks to aid in synthetic image dataset creation and image cropping for deep neural networks in HTP. Deep neural networks require a plethora of training data to yield results of the highest quality, but no such training datasets are readily available for HTP research, especially on seed kernels. The proposed synthetic image generation framework helps generate a profusion of training data at will, so that neural networks can be trained from a meager sample of seed kernels. Besides requiring large quantities of data, deep neural networks require inputs of a certain size, and not all available data are in that size. The proposed image cropper helps to resize images without introducing any distortion, thereby making image data fit for consumption. The third contribution is the design and analysis of supervised and self-supervised neural network architectures trained on synthetic images to perform the tasks of seed kernel classification, counting and morphometry. For supervised image classification, the state-of-the-art neural network models VGG-16, VGG-19 and ResNet-101 are investigated. A Simple framework for Contrastive Learning of visual Representations (SimCLR) [137], Momentum Contrast (MoCo) [55] and Bootstrap Your Own Latent (BYOL) [123] are leveraged for self-supervised image classification. The instance segmentation deep neural network models Mask R-CNN and YOLO are utilized to perform the tasks of seed kernel classification, segmentation and counting. The results demonstrate the feasibility of deep neural networks for their respective tasks of classification and instance segmentation. In addition to estimating seed kernel count from static images, algorithms that aid in seed kernel counting from videos are proposed and analyzed. One proposed algorithm creates a slit image which can be analyzed to estimate seed count; once the slit image is created, the video is no longer required, significantly lowering the computational resources needed for the estimation. The fourth contribution is the development of an end-to-end, automated image capture system for single seed kernel analysis. In addition to estimating length and width from 2-D images, the proposed system estimates the volume of a seed kernel from 2-D images using the technique of volume sculpting.
The relative standard deviation of the results produced by the proposed technique is lower (better) than that of the results produced by volumetric estimation using the ellipsoid slicing technique. The fifth contribution is the development of image processing algorithms that provide feature enhancements to mobile applications to improve on-site phenotyping capabilities. Algorithms for two features of high value, namely leaf angle estimation and fractional plant cover estimation, are developed. The leaf angle estimation feature estimates the angle between stem and leaf in images captured with mobile phone cameras, whereas fractional plant cover estimation helps determine companion plants, i.e., plants that are able to co-exist and mutually benefit. The proposed techniques, frameworks and findings lay a solid foundation for future Computer Vision and Scientific Computing research in the domain of agriculture. The contributions are significant since the dissertation not only proposes techniques, but also develops low-cost, end-to-end frameworks to leverage the proposed techniques in a scalable fashion.
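
One way to read the slit-image idea (an assumed sketch, not the dissertation's code): extract a fixed pixel column from every video frame, stack those columns into a single image, and then count seed kernels in that image with simple thresholding and connected components. The threshold values and the assumption that kernels appear brighter than the background are illustrative.

```python
# Minimal sketch: build a slit image from a seed-drop video, then count kernels
# in that single image instead of processing the whole video.
import cv2
import numpy as np

def build_slit_image(video_path, slit_x=None):
    """Concatenate one column per frame; seeds crossing the slit leave a streak."""
    cap = cv2.VideoCapture(video_path)
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        x = gray.shape[1] // 2 if slit_x is None else slit_x
        columns.append(gray[:, x])
    cap.release()
    return np.stack(columns, axis=1)  # shape: height x num_frames

def count_seeds(slit_image, thresh=60, min_area=20):
    """Threshold the slit image (bright kernels assumed) and count connected blobs."""
    _, binary = cv2.threshold(slit_image, thresh, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # label 0 is the background; ignore tiny blobs caused by noise
    return sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area)

# slit = build_slit_image("seed_drop.mp4")   # hypothetical input video
# print(count_seeds(slit))
```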

    Agricultural Diversification

    Agricultural diversification can occur in many forms (e.g., genetic variety, species, structural) and can be created temporally and at different spatial scales (e.g., within crop, within field, and landscape level). Crop diversification is the practice of growing more than one crop species within a farming area in the form of rotations (two or more crops on the same field in different years), multiple crops (more than one crop in the same season on the same field) or intercropping (at least two crops simultaneously on the same field). Various cropping strategies and management practices, such as diversification of cropping systems by crop rotation, conservation tillage, and the use of cover crops, have been promoted to enhance crop productivity and ecosystem services. However, the opportunities and means differ among regions, and the actual effects of diversification on cropping system sustainability still need more investigation. This Special Issue covers the state of the art and recent progress in different aspects of agricultural diversification to increase the sustainability and resilience of a wide range of cropping systems (grassland, horticultural crops, fruit trees) in a scenario of environmental challenges due to climate change: crop production and quality; the impact of crop diversification on soil quality and biodiversity; and the environmental impact and delivery of ecosystem services by crop diversification.

    Line-based deep learning method for tree branch detection from digital images

    The final publication is available at Elsevier via https://doi.org/10.1016/j.jag.2022.102759. © 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license. Preventive maintenance of power lines, including cutting and pruning of tree branches, is essential to avoid interruptions in the energy supply. Automatic methods can support this risky task and also reduce the time it consumes. Here, we propose a method in which the orientation and the grasping positions of tree branches are estimated. The proposed method first predicts the straight line representing the tree branch extension using a convolutional neural network (CNN). Second, a Hough transform is applied to estimate the direction and position of the line. Finally, we estimate the grip point as the pixel with the highest probability of belonging to the line. We generated a dataset based on internet searches and annotated 1,868 images covering challenging scenarios with different tree branch shapes, capture devices, and environmental conditions. Ten-fold cross-validation was adopted, using 90% of the data for training and 10% for testing. We also assessed the method under corruptions (Gaussian and shot noise) with different severity levels. The experimental analysis showed the effectiveness of the proposed method, which reports an F1-score of 96.78% and outperforms the state-of-the-art Deep Hough Transform (DHT) and Fully Convolutional Line Parsing (F-Clip) methods. This research was funded by CNPq (p: 433783/2018-4, 310517/2020-6, 314902/2018-0, 304052/2019-1 and 303559/2019-5), FUNDECT (p: 59/300.066/2015, 071/2015) and CAPES PrInt (p: 88881.311850/2018-01). The authors acknowledge the support of the UFMS (Federal University of Mato Grosso do Sul) and CAPES (Finance Code 001). This research was also partially supported by the Emerging Interdisciplinary Project of Central University of Finance and Economics.
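
To make the last two stages concrete, here is a minimal sketch (an assumed post-processing step, not the authors' implementation): apply a standard Hough transform to a CNN-predicted per-pixel line-probability map to recover the branch line's direction and position, then take the on-line pixel with the highest predicted probability as the grip point. The thresholds and the hypothetical model call are illustrative.

```python
# Minimal sketch: Hough-transform post-processing of a CNN line-probability map,
# followed by grip-point selection along the recovered line.
import cv2
import numpy as np

def estimate_branch_line(prob_map, prob_thresh=0.5, hough_thresh=80):
    """prob_map: float array in [0, 1] (H x W) output by a line-detection CNN."""
    binary = (prob_map > prob_thresh).astype(np.uint8) * 255
    lines = cv2.HoughLines(binary, 1, np.pi / 180, hough_thresh)  # (rho, theta) form
    if lines is None:
        return None, None
    rho, theta = lines[0][0]  # strongest line
    return rho, theta

def grip_point(prob_map, rho, theta, tol=1.0):
    """Grip point = pixel within `tol` of the line with the highest predicted probability."""
    h, w = prob_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho)
    masked = np.where(dist <= tol, prob_map, -np.inf)
    y, x = np.unravel_index(np.argmax(masked), masked.shape)
    return int(x), int(y)

# prob = model.predict(image)[..., 0]        # hypothetical CNN output
# rho, theta = estimate_branch_line(prob)
# print(grip_point(prob, rho, theta))
```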

    Just-in-time Pastureland Trait Estimation for Silage Optimization, under Limited Data Constraints

    To ensure that pasture-based farming meets production and environmental targets for a growing population under increasing resource constraints, producers need to know pastureland traits. Current proximal pastureland trait prediction methods largely rely on vegetation indices to determine biomass and moisture content. The development of new techniques relies on the challenging task of collecting labelled pastureland data, leading to small datasets. Classical computer vision has already been applied to weed identification and recognition of fruit blemishes using morphological features, but machine learning algorithms can parameterise models without the provision of explicit features, and deep learning can extract even more abstract knowledge, although this is typically assumed to require very large datasets. This work hypothesises that, through the advantages of state-of-the-art deep learning systems, pastureland crop traits can be accurately assessed in a just-in-time fashion, based on data retrieved from an inexpensive sensor platform, under the constraint of limited amounts of labelled data. However, the challenges in achieving this overall goal are great, and for applications such as just-in-time yield and moisture estimation for farm machinery, this work must bring together systems development, knowledge of good pastureland practice, and techniques for handling low-volume datasets in a machine learning context. Given these challenges, this thesis makes a number of contributions. The first is a comprehensive literature review relating pastureland traits to ruminant nutrient requirements and exploring trait estimation methods, from contact to remote sensing, including details of vegetation indices and the sensors and techniques required to use them. The second major contribution is a high-level specification of a platform for collecting and labelling pastureland data. This includes the collection of four-channel Blue, Green, Red and NIR (VISNIR) images, narrowband data, height and temperature differential, using inexpensive proximal sensors, and provides a basis for holistic data analysis. Physical data platforms built around this specification were created to collect and label pastureland data, involving computer scientists, agricultural, mechanical and electronic engineers, and biologists from academia and industry, working with farmers. Using the developed platform and a set of protocols for data collection, a further contribution of this work was the collection of a multi-sensor, multimodal dataset of pastureland properties. This was made up of four-channel image data, height data, thermal data, Global Positioning System (GPS) and hyperspectral data, and is available and labelled with biomass (kg/ha) and percentage dry matter, ready for use in deep learning. The most notable contribution of this work, however, was a systematic investigation of various machine learning methods applied to the collected data in order to maximise model performance under the constraints indicated above. The initial set of models focused on the collected hyperspectral datasets, but due to their relative complexity in real-time deployment, the focus shifted to models that could best leverage image data.
The main body of these models centred on image processing methods and, in particular, the use of the so-called Inception ResNet and MobileNet models to predict fresh biomass and percentage dry matter, enhancing performance using data fusion, transfer learning and multi-task learning. Images were subdivided to augment the dataset, using two different patch sizes, resulting in around 10,000 small patches of size 156 x 156 pixels and around 5,000 large patches of size 240 x 240 pixels. Five-fold cross-validation was used in all analyses. Prediction accuracy was compared to that of older mechanisms, albeit using hyperspectral data collected with no provision made for lighting, humidity or temperature. The labelled hyperspectral data did not produce accurate results when used to calculate the Normalized Difference Vegetation Index (NDVI), or to train neural network (NN), 1D Convolutional Neural Network (CNN) or Long Short-Term Memory (LSTM) models. Potential reasons for this are discussed, including issues around the use of highly sensitive devices in uncontrolled environments. The most accurate prediction came from a multi-modal hybrid model that concatenated the output of an Inception ResNet based model run on RGB data with ImageNet pre-trained weights, the output of a residual network trained on NIR data, and LiDAR height data before the fully connected layers; using the small-patch dataset, it achieved a minimum validation MAPE of 28.23% for fresh biomass and 11.43% for dryness. However, a very similar prediction accuracy resulted from a model that omitted NIR data, thus requiring fewer sensors and training resources and making it more sustainable. Although NIR and temperature differential data were collected and used for analysis, neither improved prediction accuracy, with the Inception ResNet model's minimum validation MAPE rising to 39.42% when NIR data was added. When both NIR data and temperature differential were added to a multi-task learning Inception ResNet model, it yielded a minimum validation MAPE of 33.32%. As more labelled data are collected, the models can be further trained, enabling sensors on mowers to collect data and give timely trait information to farmers. This technology is also transferable to other crops. Overall, this work should provide a valuable contribution to the smart agriculture research space.
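
To make the fusion layout concrete, here is a minimal sketch (assumed layer sizes and heads, not the thesis's exact model) of a multi-modal network that concatenates an ImageNet-pretrained RGB backbone, a NIR image branch, and a scalar LiDAR height input before fully connected layers, regressing fresh biomass and dry-matter percentage as two tasks. Only the 156-pixel patch size is taken from the abstract; everything else is illustrative.

```python
# Minimal sketch: multi-modal fusion of RGB, NIR and LiDAR height for two regression targets.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2, ResNet50

PATCH = 156  # small-patch size reported in the abstract

rgb_in = layers.Input((PATCH, PATCH, 3), name="rgb")
nir_in = layers.Input((PATCH, PATCH, 1), name="nir")
height_in = layers.Input((1,), name="lidar_height")

# RGB branch with ImageNet pre-trained weights; NIR branch trained from scratch.
rgb_backbone = InceptionResNetV2(weights="imagenet", include_top=False,
                                 input_shape=(PATCH, PATCH, 3))
rgb_feat = layers.GlobalAveragePooling2D()(rgb_backbone(rgb_in))

nir_backbone = ResNet50(weights=None, include_top=False, input_shape=(PATCH, PATCH, 1))
nir_feat = layers.GlobalAveragePooling2D()(nir_backbone(nir_in))

# Concatenate all modalities before the fully connected layers.
fused = layers.Concatenate()([rgb_feat, nir_feat, height_in])
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dense(64, activation="relu")(x)
biomass = layers.Dense(1, name="fresh_biomass")(x)       # kg/ha
dry_matter = layers.Dense(1, name="dry_matter_pct")(x)   # %

model = Model([rgb_in, nir_in, height_in], [biomass, dry_matter])
model.compile(optimizer="adam", loss="mae", metrics=["mape"])
```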