
    Looking behind occlusions: A study on amodal segmentation for robust on-tree apple fruit size estimation

    The detection and sizing of fruits with computer vision methods is of interest because it provides relevant information to improve the management of orchard farming. However, the presence of partially occluded fruits limits the performance of existing methods, making reliable fruit sizing a challenging task. While previous fruit segmentation works limit segmentation to the visible region of fruits (known as modal segmentation), in this work we propose an amodal segmentation algorithm to predict the complete shape, which includes its visible and occluded regions. To do so, an end-to-end convolutional neural network (CNN) for simultaneous modal and amodal instance segmentation was implemented. The predicted amodal masks were used to estimate the fruit diameters in pixels. Modal masks were used to identify the visible region and measure the distance between the apples and the camera using the depth image. Finally, the fruit diameters in millimetres (mm) were computed by applying the pinhole camera model. The method was developed with a Fuji apple dataset consisting of 3925 RGB-D images acquired at different growth stages with a total of 15,335 annotated apples, and was subsequently tested in a case study to measure the diameter of Elstar apples at different growth stages. Fruit detection results showed an F1-score of 0.86 and the fruit diameter results reported a mean absolute error (MAE) of 4.5 mm and R2 = 0.80 irrespective of fruit visibility. Besides the diameter estimation, modal and amodal masks were used to automatically determine the percentage of visibility of measured apples. This feature was used as a confidence value, improving the diameter estimation to MAE = 2.93 mm and R2 = 0.91 when limiting the size estimation to fruits detected with a visibility higher than 60%. The main advantages of the present methodology are its robustness for measuring partially occluded fruits and the capability to determine the visibility percentage. 
The main limitation is that depth images were generated by means of photogrammetry methods, which limits the efficiency of data acquisition. To overcome this limitation, future works should consider the use of commercial RGB-D sensors. The code and the dataset used to evaluate the method have been made publicly available at https://github.com/GRAP-UdL-AT/Amodal_Fruit_Sizing. This work was partly funded by the Departament de Recerca i Universitats de la Generalitat de Catalunya (grant 2021 LLAV 00088), the Spanish Ministry of Science, Innovation and Universities (grants RTI2018-094222-B-I00 [PAgFRUIT project], PID2021-126648OB-I00 [PAgPROTECT project] and PID2020-117142GB-I00 [DeeLight project] by MCIN/AEI/10.13039/501100011033 and by “ERDF, a way of making Europe”, by the European Union). The work of Jordi Gené Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU. We would also like to thank Nufri (especially Santiago Salamero and Oriol Morreres) for their support during data acquisition, and Pieter van Dalfsen and Dirk de Hoog from Wageningen University & Research for additional data collection used in the case study.
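
The final sizing step described above applies the pinhole camera model to convert a mask diameter in pixels into millimetres using the camera-to-fruit distance. A minimal sketch of that conversion; the function name and focal length value are illustrative assumptions, not taken from the paper's published code:

```python
# Hedged sketch: pinhole-model fruit sizing, assuming a known focal length
# in pixels. Variable names are hypothetical, not the authors' API.

def fruit_diameter_mm(diameter_px, depth_mm, focal_px):
    """Convert an apple's (amodal) mask diameter in pixels to millimetres.

    diameter_px: diameter of the predicted amodal mask, in pixels
    depth_mm:    camera-to-fruit distance from the depth image, in mm
    focal_px:    camera focal length expressed in pixels
    """
    # Pinhole model: real_size / depth = image_size / focal_length
    return diameter_px * depth_mm / focal_px

# Example: a 100 px wide apple at 1.5 m with a 2000 px focal length
print(fruit_diameter_mm(100, 1500, 2000))  # 75.0 (mm)
```

The visibility percentage described in the abstract would then act as a confidence gate: diameters are reported only for fruits whose modal/amodal mask ratio exceeds a chosen threshold (e.g. 60%).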

    CHORUS Deliverable 3.4: Vision Document

    The goal of the CHORUS Vision Document is to create a high level vision on audio-visual search engines in order to give guidance to the future R&D work in this area and to highlight trends and challenges in this domain. The vision of CHORUS is strongly connected to the CHORUS Roadmap Document (D2.3). A concise document integrating the outcomes of the two deliverables will be prepared for the end of the project (NEM Summit)

    Image Analysis and Machine Learning in Agricultural Research

    Agricultural research has been a focus for academia and industry to improve human well-being. Given the challenges of water scarcity, global warming, and increased prices of fertilizer and fossil fuel, improving the efficiency of agricultural research has become even more critical. Data collection by humans presents several challenges, including: 1) subjectivity and poor reproducibility of visual evaluation, 2) safety when dealing with highly toxic chemicals or severe weather events, 3) unavoidable mistakes, and 4) low efficiency and speed. Image analysis and machine learning are more versatile and advantageous in evaluating different plant characteristics, and this could help with agricultural data collection. In the first chapter, information related to different types of imaging (e.g., RGB, multi/hyperspectral, and thermal imaging) was explored in detail for its advantages in different agricultural applications. The process of image analysis demonstrated how target features were extracted for analysis, including shape, edge, texture, and color. After acquiring feature information, machine learning can be used to automatically detect or predict features of interest such as disease severity. In the second chapter, case studies of different agricultural applications were demonstrated, including: 1) leaf damage symptoms, 2) stress evaluation, 3) plant growth evaluation, 4) stand/insect counting, and 5) evaluation of produce quality. The case studies showed that the use of image analysis is often more advantageous than visual rating. Advantages of image analysis include increased objectivity, speed, and more reproducible, reliable results. In the third chapter, machine learning was explored using romaine lettuce images from RD4AG to automatically grade for bolting and compactness (two of the important parameters for lettuce quality).
Although the accuracies were 68.4% and 66.6%, respectively, a much larger database and many improvements are needed to increase the model accuracy and reliability. With the advancement in cameras, computers with high computing power, and the development of different algorithms, image analysis and machine learning have the potential to replace part of the labor and improve the current data collection procedures in agricultural research. Advisor: Gary L. Hei

    Effect of curing conditions and harvesting stage of maturity on Ethiopian onion bulb drying properties

    The study was conducted to investigate the impact of curing conditions and harvesting stages on the drying quality of onion bulbs. The onion bulbs (Bombay Red cultivar) were harvested at three harvesting stages (early, optimum, and late maturity) and cured at three different temperatures (30, 40 and 50 °C) and relative humidities (30, 50 and 70%). The results revealed that curing temperature, RH, and maturity stage had significant effects on all measured attributes except total soluble solids

    Fruit detection and 3D location using instance segmentation neural networks and structure-from-motion photogrammetry

    The development of remote fruit detection systems able to identify and 3D-locate fruits provides opportunities to improve the efficiency of agriculture management. Most current fruit detection systems are based on 2D image analysis. Although the use of 3D sensors is emerging, precise 3D fruit location is still a pending issue. This work presents a new methodology for fruit detection and 3D location consisting of: (1) 2D fruit detection and segmentation using the Mask R-CNN instance segmentation neural network; (2) 3D point cloud generation of detected apples using structure-from-motion (SfM) photogrammetry; (3) projection of 2D image detections onto 3D space; (4) false positive removal using a trained support vector machine. This methodology was tested on 11 Fuji apple trees containing a total of 1455 apples. Results showed that, by combining instance segmentation with SfM, the system performance increased from an F1-score of 0.816 (2D fruit detection) to 0.881 (3D fruit detection and location) with respect to the total amount of fruits. The main advantages of this methodology are the reduced number of false positives and the higher detection rate, while the main disadvantage is the high processing time required for SfM, which makes it presently unsuitable for real-time work. From these results, it can be concluded that the combination of instance segmentation and SfM provides high-performance fruit detection with high 3D data precision. The dataset has been made publicly available and an interactive visualization of fruit detection results is accessible at http://www.grap.udl.cat/documents/photogrammetry_fruit_detection.html.
Primary data associated with the article: http://hdl.handle.net/10459.1/68505. This work was partly funded by the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya (grant 2017 SGR646), the Spanish Ministry of Economy and Competitiveness (project AGL2013-48297-C2-2-R) and the Spanish Ministry of Science, Innovation and Universities (project RTI2018-094222-B-I00). Part of the work was also developed within the framework of the project TEC2016-75976-R, financed by the Spanish Ministry of Economy, Industry and Competitiveness and the European Regional Development Fund (ERDF). The Spanish Ministry of Education is thanked for Mr. J. Gené’s pre-doctoral fellowship (FPU15/03355). We would also like to thank Nufri (especially Santiago Salamero and Oriol Morreres) and Vicens Maquinària Agrícola S.A. for their support during data acquisition, and Ernesto Membrillo and Roberto Maturino for their support in dataset labelling
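
Step (3) of the pipeline, projecting 2D image detections onto the 3D SfM point cloud, can be thought of in reverse: 3D points are projected through the camera intrinsics and kept when they land inside a detection mask. A hedged sketch of that idea; the intrinsic matrix, mask, and function names are made-up toy examples, not the paper's implementation:

```python
import numpy as np

# Hedged sketch: associate 3D SfM points with a 2D detection mask by
# projecting them with a pinhole intrinsic matrix K. Toy data throughout.

def project_points(points_3d, K):
    """Project Nx3 camera-frame points with intrinsics K -> Nx2 pixel coords."""
    p = (K @ points_3d.T).T          # homogeneous image coordinates
    return p[:, :2] / p[:, 2:3]      # perspective divide by depth

def points_in_mask(points_3d, K, mask):
    """Return the subset of 3D points whose projection lands on the mask."""
    uv = np.round(project_points(points_3d, K)).astype(int)
    h, w = mask.shape
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    keep[keep] &= mask[uv[keep, 1], uv[keep, 0]]   # row = v, column = u
    return points_3d[keep]

# Toy example: identity intrinsics, a 4x4 mask with one "apple" pixel at (1, 1)
K = np.eye(3)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
pts = np.array([[1.0, 1.0, 1.0], [3.0, 3.0, 1.0]])
print(points_in_mask(pts, K, mask))   # only the first point hits the mask
```

Grouping the surviving points per detection is what yields one 3D fruit location per mask; the paper's SVM stage then filters false positives among those groups.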

    Current status and future trends of mechanized fruit thinning devices and sensor technology

    This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in field and industrial environments. Research should be committed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although fruits are frequently occluded by leaves regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose in smaller unstructured orchards. Concerning sensing methods for on-tree fruit detection, the major challenge is to overcome the problem of fruit occlusion by leaves and branches. Hence, non-traditional sensors capable of providing some type of differentiation should be investigated. This work was developed as part of the +Pêssego project, whose purpose is to promote the innovation and development of peach tree culture in the region of Beira Interior, Portugal. This project was financed by a national rural development and support program, PRODER.
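
The time-of-flight principle mentioned for ultrasonic, NIR, and LIDAR sensors reduces to one relation: distance is the wave speed times the round-trip time, halved because the pulse travels out and back. A minimal sketch with illustrative values (not from the review):

```python
# Hedged sketch of the time-of-flight distance relation used by ultrasonic
# and optical ranging sensors. Constants and values are illustrative.

def tof_distance_m(round_trip_s, wave_speed_m_s):
    """Distance = wave speed * round-trip time / 2 (out-and-back path)."""
    return wave_speed_m_s * round_trip_s / 2.0

SPEED_OF_SOUND = 343.0           # m/s in air at ~20 °C (ultrasonic sensing)
SPEED_OF_LIGHT = 299_792_458.0   # m/s (NIR / LIDAR sensing)

# An ultrasonic echo returning after 10 ms corresponds to about 1.7 m
print(tof_distance_m(0.010, SPEED_OF_SOUND))  # 1.715
```

The contrast in wave speeds illustrates the review's cost remark: light covers orchard-scale distances in nanoseconds, so optical time-of-flight needs far faster (and costlier) electronics than ultrasonic ranging.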

    A survey of image-based computational learning techniques for frost detection in plants

    Frost damage is one of the major concerns for crop growers as it can impact the growth of plants and hence, yields. Early detection of frost can help farmers mitigate its impact. In the past, frost detection was a manual or visual process. Image-based techniques are increasingly being used to understand frost development in plants and to automatically assess the damage resulting from frost. This research presents a comprehensive survey of the state-of-the-art methods applied to detect and analyse frost stress in plants. We identify three broad computational learning approaches, i.e., statistical, traditional machine learning, and deep learning, applied to images to detect and analyse frost in plants. We propose a novel taxonomy to classify the existing studies based on several attributes; it has been developed to capture the major characteristics of a significant body of published research. In this survey, we profile 80 relevant papers based on the proposed taxonomy. We thoroughly analyse and discuss the techniques used in the various approaches, i.e., data acquisition, data preparation, feature extraction, computational learning, and evaluation. We summarise the current challenges and discuss opportunities for future research and development in this area, including in-field advanced artificial intelligence systems for real-time frost monitoring

    Computer Vision System for Non-Destructive and Contactless Evaluation of Quality Traits in Fresh Rocket Leaves (Diplotaxis Tenuifolia L.)

    The doctoral thesis focused on the analysis of non-destructive technologies available for the quality control of agri-food products along the whole supply chain. In particular, the thesis concerns the application of a computer vision system to evaluate the quality of fresh-cut rocket leaves. The thesis is structured in three parts (introduction, experimental applications and conclusions) and five chapters, the first and second focused on non-destructive technologies and, in particular, on computer vision systems for monitoring the quality of agri-food products. The third, fourth, and fifth chapters aim to assess rocket leaves based on the estimation of quality parameters, considering different aspects: (i) the variability due to different agricultural practices, (ii) the senescence of packed and unpacked products, and (iii) the development and exploitation of the advantages of new models simpler than the machine learning used in the previous experiments. The research work of this doctoral thesis was carried out by the University of Foggia, the Institute of Sciences of Food Production (ISPA) and the Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing (STIIMA) of the National Research Council (CNR). It was conducted within the Project SUS&LOW (Sustaining Low-impact Practices in Horticulture through Non-destructive Approach to Provide More Information on Fresh Produce History & Quality), funded by MUR-PRIN 2017, and aimed at sustaining quality of production and of the environment using low-input agricultural practices and non-destructive quality evaluation

    Automatic plant features recognition using stereo vision for crop monitoring

    Machine vision and robotic technologies have the potential to accurately monitor plant parameters which reflect plant stress and water requirements, for use in farm management decisions. However, autonomous identification of individual plant leaves on a growing plant under natural conditions is a challenging task for vision-guided agricultural robots, due to the complexity of data relating to various stages of growth and ambient environmental conditions. There are numerous machine vision studies concerned with describing the shape of leaves that are individually presented to a camera. The purpose of these studies is to identify plant species, or the autonomous detection of multiple leaves from small seedlings under greenhouse conditions. Machine vision-based detection of individual leaves, and of the challenges presented by overlapping leaves on a developed plant canopy, using depth perception under natural outdoor conditions is yet to be reported. Stereo vision has recently emerged for use in a variety of agricultural applications and is expected to provide an accurate method for plant segmentation and identification which can benefit from depth properties and robustness. This thesis presents a plant leaf extraction algorithm using a stereo vision sensor. The algorithm is used for multiple-leaf segmentation and the separation of overlapping leaves using a combination of image features, specifically colour, shape and depth. The separation between connected and overlapping leaves relies on measuring the discontinuity in the depth gradient of the disparity maps. Two techniques have been developed to implement this task, based on global and local measurement. A geometrical plane can be extracted from each segmented leaf and used to parameterise a 3D model of the plant image and to measure the inclination angle of each individual leaf.
The stem and branch segmentation and counting method was developed based on the vesselness measure and Hough transform technique. Furthermore, a method for reconstructing the segmented parts of hibiscus plants is presented and a 2.5D model is generated for the plant. Experimental tests were conducted with two different selected plants: cotton of different sizes, and hibiscus, in an outdoor environment under varying light conditions. The proposed algorithm was evaluated using 272 cotton and hibiscus plant images. The results show an observed enhancement in leaf detection when utilising depth features, where many leaves in various positions and shapes (single, touching and overlapping) were detected successfully. Depth properties were more effective in separating between occluded and overlapping leaves with a high separation rate of 84% and these can be detected automatically without adding any artificial tags on the leaf boundaries. The results exhibit an acceptable segmentation rate of 78% for individual plant leaves thereby differentiating the leaves from their complex backgrounds and from each other. The results present almost identical performance for both species under various lighting and environmental conditions. For the stem and branch detection algorithm, experimental tests were conducted on 64 colour images of both species under different environmental conditions. The results show higher stem and branch segmentation rates for hibiscus indoor images (82%) compared to hibiscus outdoor images (49.5%) and cotton images (21%). The segmentation and counting of plant features could provide accurate estimation about plant growth parameters which can be beneficial for many agricultural tasks and applications
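
The core separation cue described above, a discontinuity in the depth gradient where one leaf passes in front of another, can be sketched in a few lines. The toy depth map and threshold below are assumptions for demonstration, not the thesis' data or parameter values:

```python
import numpy as np

# Hedged sketch: mark pixels where the depth (disparity-derived) gradient
# magnitude jumps, i.e., candidate boundaries between overlapping leaves.

def depth_discontinuity_mask(depth, threshold):
    """True where the local depth gradient magnitude exceeds the threshold."""
    gy, gx = np.gradient(depth.astype(float))   # per-axis finite differences
    return np.hypot(gx, gy) > threshold

# Toy depth map (mm): two flat "leaves", one 100 mm behind the other
depth = np.full((5, 6), 500.0)
depth[:, 3:] = 600.0                            # right leaf further away
edges = depth_discontinuity_mask(depth, threshold=20.0)
print(edges)   # True only along the column where the two leaves meet
```

Within a leaf the depth varies smoothly, so the gradient stays below the threshold; the step between occluding and occluded leaves produces the spike that the thesis' global and local measurements exploit.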

    Simple identification tools in FishBase

    Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters like fin ray meristics. Soon pictures and drawings were added as a further help, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy
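
A computerized dichotomous key like those described above is essentially a binary decision tree: each node asks one character question and branches until a species is reached. A minimal sketch; the questions and species names are invented placeholders, not FishBase content:

```python
# Hedged sketch: a dichotomous key as a nested (question, yes, no) tree.
# All characters and species below are fabricated examples.

KEY = ("Does the fish have an adipose fin?",
       ("Are there more than 10 dorsal fin rays?", "species A", "species B"),
       "species C")

def identify(node, answers):
    """Walk the key; answers maps each question to True (yes) or False (no)."""
    if isinstance(node, str):              # leaf node: a species name
        return node
    question, yes_branch, no_branch = node
    return identify(yes_branch if answers[question] else no_branch, answers)

print(identify(KEY, {
    "Does the fish have an adipose fin?": True,
    "Are there more than 10 dorsal fin rays?": False,
}))  # species B
```

Restricting candidates by country, area, or taxonomic group, as FishBase allows, amounts to pruning the set of reachable leaves before the walk begins.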