
    Development of a mobile application for identification of grapevine (Vitis vinifera L.) cultivars via deep learning

    Acknowledgements: The authors would like to express their gratitude to the Teaching Experiment Farm of Ningxia University for their kind help. This study was supported by the Key R&D Projects of Ningxia Hui Autonomous Region (Grant No. 2019BBF02013).

    A Mixed Data-Based Deep Neural Network to Estimate Leaf Area Index in Wheat Breeding Trials

    Remote and non-destructive estimation of leaf area index (LAI) has been a challenge in recent decades, as the available direct and indirect methods are laborious and time-consuming. The recent emergence of high-throughput plant phenotyping platforms has increased the need to develop new phenotyping tools for better decision-making by breeders. In this paper, a novel model based on artificial intelligence algorithms and nadir-view red-green-blue (RGB) images taken from a terrestrial high-throughput phenotyping platform is presented. The model mixes numerical data collected in a wheat breeding field with visual features extracted from the images to make rapid and accurate LAI estimations. Model-based LAI estimations were validated against LAI measurements determined non-destructively using an allometric relationship obtained in this study. The model performance was also compared with LAI estimates obtained by other classical indirect methods based on bottom-up hemispherical images and gap fraction theory. Model-based LAI estimations were highly correlated with ground-truth LAI. The model performance was slightly better than that of the hemispherical image-based method, which tended to underestimate LAI. These results show the great potential of the developed model for near real-time LAI estimation, which can be further improved in the future by enlarging the dataset used to train the model.
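
    As a rough illustration of the mixed-data idea described above, the sketch below builds a two-branch Keras network that merges numerical field covariates with CNN features from a nadir-view RGB image before a LAI regression head. All shapes, layer sizes, and names are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a mixed-input network: one branch for numerical
# field data, one small CNN branch for nadir-view RGB images, merged
# before the LAI regression output. Shapes and layer sizes are
# illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers, Model

num_in = layers.Input(shape=(8,), name="numeric")        # assumed field covariates
x1 = layers.Dense(32, activation="relu")(num_in)

img_in = layers.Input(shape=(128, 128, 3), name="rgb")   # assumed RGB patch size
x2 = layers.Conv2D(16, 3, activation="relu")(img_in)
x2 = layers.MaxPooling2D()(x2)
x2 = layers.Conv2D(32, 3, activation="relu")(x2)
x2 = layers.GlobalAveragePooling2D()(x2)

merged = layers.concatenate([x1, x2])                    # mix the two data sources
out = layers.Dense(1, name="lai")(merged)                # LAI regression head

model = Model(inputs=[num_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```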

    Deep learning networks for olive cultivar identification: a comprehensive analysis of convolutional neural networks

    Deep learning networks, more specifically convolutional neural networks, have shown notable distinction when it comes to computer vision problems. Their versatility spans various domains, where they are applied to tasks such as classification and regression, contingent primarily on the availability of a representative dataset. This work explores the feasibility of employing this approach in the domain of agriculture, particularly within the context of olive growing. The objective is to enhance and facilitate cultivar identification techniques by using images of olive tree leaves. To achieve this, a comparative analysis involving ten distinct convolutional networks (VGG16, VGG19, ResNet50, ResNet152-V2, Inception V3, Inception ResNetV2, Xception, MobileNet, MobileNetV2, EfficientNetB7) was conducted, all initiated with transfer learning as a common starting point. The impact of adjusting network hyperparameters and structural elements was also explored. For the training and evaluation of the networks, a dedicated dataset was created and made available, consisting of approximately 4,200 images from the four most representative cultivar categories of the region. The findings of this study provide compelling evidence that the majority of the examined methods offer a robust foundation for cultivar identification, ensuring a high level of accuracy. Notably, the first nine methods consistently attain accuracy rates surpassing 95%, with the top methods (ResNet50, EfficientNetB7) achieving an impressive 98% accuracy. In practical terms, out of approximately 2,016 images, 1,976 were accurately classified. These results signify a substantial advancement in olive cultivar identification through computer vision techniques.

    This work was carried out under the project "OleaChain: Competências para a sustentabilidade e inovação da cadeia de valor do olival tradicional no Norte Interior de Portugal" (NORTE-06-3559-FSE-000188), an operation to hire highly qualified human resources, funded by NORTE 2020 through the European Social Fund (ESF), and was supported by international funds STEP, HORIZON-WIDERA-2021-ACCESS-03-01, n. 101078933. The authors are grateful to the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020, DOI: 10.54499/UIDB/05757/2020, and UIDP/05757/2020) and SusTEC (LA/P/0007/2021, DOI: 10.54499/LA/P/0007/2020).
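
    As an illustration of the common transfer-learning starting point the study describes, the following sketch adapts one of the ten compared networks (MobileNetV2) to a four-class leaf-image task. The input size, classification head, and training settings are assumptions, not the paper's exact configuration.

```python
# Sketch of transfer learning for cultivar identification: a frozen
# ImageNet backbone with a new 4-class softmax head (the dataset has
# four cultivar categories). Hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                        # start from frozen ImageNet features

inputs = layers.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(4, activation="softmax")(x)   # four cultivar classes

model = Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

    A typical follow-up step would unfreeze the top backbone layers and continue training at a lower learning rate, which is one way the "hyperparameters and structural elements" mentioned above can be explored.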

    Volatile-olfactory profiles of cv. Arbequina olive oils extracted without/with olive leaves addition and their discrimination using an electronic nose

    Oils from cv. Arbequina were industrially extracted together with olive leaves of cv. Arbequina or Santulhana (1%, w/w), and their olfactory and volatile profiles were compared to those of oils extracted without leaf addition (control). The leaf incorporation resulted in green fruity oils with fresh herbs and cabbage olfactory notes, while control oils showed a ripe fruity sensation with banana, apple, and dry hay grass notes. In all oils, total volatile contents varied from 57.5 to 65.5 mg/kg (internal standard equivalents), with aldehydes being the most abundant class, followed by esters, hydrocarbons, and alcohols. No differences in the number of volatiles were observed. The incorporation of cv. Arbequina or Santulhana leaves significantly reduced the total content of alcohols and esters (minus 37–56% and 10–13%, respectively). In contrast, cv. Arbequina leaves did not influence the total content of aldehydes or hydrocarbons, while cv. Santulhana leaves promoted a significant increase (plus 49% and 10%, respectively). Thus, a leaf-cultivar dependency was observed, tentatively attributed to enzymatic differences related to the lipoxygenase pathway. Olfactory or volatile profiles allowed the successful unsupervised differentiation of the three types of studied cv. Arbequina oils. Finally, a lab-made electronic nose was applied to allow the non-destructive discrimination of cv. Arbequina oils extracted with or without the incorporation of olive leaves (100% and 99 ± 5% correct classifications for the leave-one-out and repeated K-fold cross-validation variants), making it a practical tool for ensuring label correctness if future commercialization is envisaged. Moreover, this finding also strengthened the conclusion that olive oils extracted with or without olive leaf incorporation possess quite different olfactory patterns, which also depend on the cultivar of the olive leaves.

    The authors are grateful to the Foundation for Science and Technology (FCT, Portugal) for financial support by national funds FCT/MCTES to the CIMO (UIDB/00690/2020), CEB (UIDB/04469/2020), and REQUIMTE-LAQV (UIDB/50006/2020) units, and to the Associate Laboratories for Green Chemistry-LAQV (UIDB/50006/2020) and SusTEC (LA/P/0007/2020), as well as to the BioTecNorte operation (NORTE-01-0145-FEDER-000004) funded by the European Regional Development Fund under the scope of Norte2020 (Programa Operacional Regional do Norte). Ítala M.G. Marx also acknowledges the PhD research grant (SFRH/BD/137283/2018) provided by FCT. Nuno Rodrigues thanks the national funding by FCT, Foundation for Science and Technology, P.I., through the institutional scientific employment program contract.
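
    For readers curious about the two reported validation variants, the sketch below scores a classifier on hypothetical electronic-nose data with both leave-one-out and repeated K-fold cross-validation. Only the validation schemes come from the abstract; the classifier choice (linear discriminant analysis) and the file names are assumptions.

```python
# Sketch of the two cross-validation variants used to report the
# e-nose classification rates. Data files and classifier are assumed.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, RepeatedKFold, cross_val_score

X = np.load("enose_signals.npy")    # hypothetical sensor-response matrix
y = np.load("oil_labels.npy")       # hypothetical labels: 0 = no leaves, 1 = leaves added

clf = LinearDiscriminantAnalysis()

loo = cross_val_score(clf, X, y, cv=LeaveOneOut())
rkf = cross_val_score(clf, X, y, cv=RepeatedKFold(n_splits=5, n_repeats=10))

print(f"leave-one-out accuracy:  {loo.mean():.1%}")
print(f"repeated K-fold accuracy: {rkf.mean():.1%} ± {rkf.std():.1%}")
```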

    On the efficacy of handcrafted and deep features for seed image classification

    Computer vision techniques have become important in agriculture and plant sciences due to their wide variety of applications. In particular, the analysis of seeds can provide meaningful information on their evolution, the history of agriculture, the domestication of plants, and diets in ancient times. This work proposes an exhaustive comparison of several different types of features in the context of multiclass seed classification, leveraging two public plant seed datasets whose images are classified by family or species. In detail, we studied possible optimisations of five traditional machine learning classifiers trained with seven different categories of handcrafted features. We also fine-tuned several well-known convolutional neural networks (CNNs) and the recently proposed SeedNet to determine whether, and to what extent, using their deep features may be advantageous over handcrafted features. The experimental results demonstrated that CNN features are appropriate to the task and representative of the multiclass scenario. In particular, SeedNet achieved a mean F-measure of at least 96%. Nevertheless, in several cases the handcrafted features performed well enough to be considered a valid alternative. In detail, we found that the Ensemble strategy combined with all the handcrafted features can achieve a mean F-measure of at least 90.93% in considerably less time. We consider the obtained results an excellent preliminary step towards realising an automatic seed recognition and classification framework.
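
    As a sketch of the handcrafted-feature pipeline the abstract evaluates, the snippet below concatenates several hypothetical feature categories and scores an ensemble classifier by mean F-measure. The feature files and the specific ensemble (a random forest) are illustrative assumptions; the abstract does not fix them.

```python
# Sketch of "all handcrafted features + Ensemble": stack precomputed
# feature blocks per seed image, then cross-validate an ensemble
# classifier with macro-averaged F-measure. All inputs are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# hypothetical precomputed feature blocks (e.g. colour, texture, shape)
color = np.load("features_color.npy")
texture = np.load("features_texture.npy")
shape = np.load("features_shape.npy")
y = np.load("seed_species.npy")

X = np.hstack([color, texture, shape])   # concatenate the feature categories

clf = RandomForestClassifier(n_estimators=300, random_state=0)
f1 = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
print(f"mean F-measure: {f1.mean():.2%}")
```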

    Grapevine yield estimation using image analysis for the variety Arinto

    Master's in Viticulture and Oenology Engineering (Double Degree) / Instituto Superior de Agronomia, Universidade de Lisboa / Faculdade de Ciências, Universidade do Porto

    Yield estimation can cause difficulties in the vineyard and winery if it is done inaccurately: by following wrong procedures, using non-representative sampling, or through human error. Moreover, the traditional yield estimation methods are time-consuming and destructive, because they require someone to go into the vineyard to count the yield components and to remove inflorescences or bunches in order to count and weigh the flowers and berries. To avoid these problems and the errors that can occur along the way, the development and application of new and innovative techniques to estimate yield through the analysis of RGB images taken under field conditions are under study by several research groups. In our research work, we studied counting the yield components in images taken throughout the growing season. Furthermore, we studied two different algorithms that, starting from surveys of canopy porosity and/or visible bunch area, can help estimate the yield. The most promising yield estimation based on counting the yield components through image analysis was found at the phenological stage of four leaves out, which showed a mean absolute percent error (MA%E) of 32 ± 2% and a correlation coefficient (r(Obs,Est)) between observed and estimated shoots of 0.62. The two algorithms used different models: one for estimating the area of the bunches covered by leaves, and one for estimating the weight of the bunches per linear canopy metre. When the bunch area without leaf occlusion was estimated, an average percentage of occlusion generated by bunches on other bunches of 8%, 6% and 12%, respectively at pea size, veraison and maturation, was used to estimate the total bunch area. When the total bunch area per linear canopy metre was estimated, the two models to estimate the grape weight were used. Finally, to estimate the weight at harvest, growth factors of 6.6 and 1.7 were used at pea size and veraison, respectively. The first algorithm showed an MA%E, between the estimated and observed yield values, of -33.59%, -9.24% and -11.25%, while the second algorithm showed an MA%E of -6.81%, -1.35% and 0.01%, respectively at pea size, veraison and maturation.
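
    To make the algorithms' arithmetic concrete, the sketch below corrects visible bunch area for the reported bunch-on-bunch occlusion, converts area to weight per linear canopy metre, and projects to harvest with the stage growth factors. The occlusion rates and growth factors are the abstract's values; the area-to-weight coefficient and the unit growth factor at maturation are hypothetical placeholders.

```python
# Worked sketch of image-based yield estimation per linear canopy metre,
# following the steps described in the abstract. The grams_per_cm2
# coefficient is a hypothetical placeholder for the area-to-weight model.
OCCLUSION = {"pea_size": 0.08, "veraison": 0.06, "maturation": 0.12}
GROWTH = {"pea_size": 6.6, "veraison": 1.7, "maturation": 1.0}  # 1.0 at maturation is assumed

def estimate_yield(visible_area_cm2, stage, grams_per_cm2=0.9):
    """Estimate harvest weight (g) per linear canopy metre from visible bunch area."""
    total_area = visible_area_cm2 / (1 - OCCLUSION[stage])  # undo bunch-on-bunch occlusion
    weight_now = total_area * grams_per_cm2                 # area-to-weight model (assumed)
    return weight_now * GROWTH[stage]                       # project to harvest weight

def ma_pct_e(observed, estimated):
    """Mean absolute percent error between observed and estimated yields."""
    n = len(observed)
    return 100 * sum(abs((e - o) / o) for o, e in zip(observed, estimated)) / n

# example: 350 cm^2 of visible bunch area observed at veraison
print(f"{estimate_yield(350.0, 'veraison'):.0f} g/m at harvest")
```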