
    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but a fully robust system for plants is still lacking. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without destructively cutting away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
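    The leaf-detection figures above come from standard detection metrics. A minimal sketch of how such a precision/recall pair is computed (the match counts below are hypothetical, chosen only to illustrate how a 0.89/0.97 pair can arise):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Compute detection precision and recall from match counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: 89 leaves correctly detected, 11 spurious
# detections, 3 real leaves missed.
p, r = precision_recall(89, 11, 3)
print(round(p, 2), round(r, 2))  # 0.89 0.97
```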

    3D Scanning System for Automatic High-Resolution Plant Phenotyping

    Thin leaves, fine stems, self-occlusion, and non-rigid, slowly changing structures make plants difficult subjects for three-dimensional (3D) scanning and reconstruction -- two critical steps in automated visual phenotyping. Many current solutions, such as laser scanning, structured light and multi-view stereo, can struggle to acquire usable 3D models because of limitations in scanning resolution and calibration accuracy. In response, we have developed a fast, low-cost 3D scanning platform that images plants on a rotating stage with two tilting DSLR cameras centred on the plant. It uses new methods of camera calibration and background removal to achieve high-accuracy 3D reconstruction. We assessed the system's accuracy using a 3D visual hull reconstruction algorithm applied to two plastic models of dicotyledonous plants, two sorghum plants and two wheat plants across different sets of tilt angles. Scan times ranged from 3 minutes (72 images using 2 tilt angles) to 30 minutes (360 images using 10 tilt angles). The leaf lengths, widths, areas and perimeters of the plastic models were measured manually and compared to measurements from the scanning system: results were within 3-4% of each other. The 3D reconstructions obtained with the scanning system show excellent geometric agreement with all six plant specimens, even plants with thin leaves and fine stems. Comment: 8 pages, DICTA 201
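    Visual hull reconstruction keeps only those voxels whose projection falls inside every silhouette. A toy sketch under strong simplifying assumptions (orthographic projections onto axis-aligned planes; the grid size, masks and projection helpers are invented for illustration, not the paper's calibrated setup):

```python
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """Carve a voxel grid: keep voxels whose projection lies inside
    every binary silhouette mask."""
    keep = np.ones(len(grid), dtype=bool)
    for mask, project in zip(silhouettes, projections):
        uv = project(grid)                     # (N, 2) pixel coordinates
        keep &= mask[uv[:, 0], uv[:, 1]] > 0   # inside this silhouette?
    return grid[keep]

# Toy example: a 4x4x4 grid carved by two orthographic views of a
# 2x2x2 cube occupying the grid's corner.
xs = np.arange(4)
grid = np.array(np.meshgrid(xs, xs, xs, indexing="ij")).reshape(3, -1).T

front = np.zeros((4, 4), dtype=np.uint8)   # image plane: (x, y)
front[:2, :2] = 1
side = np.zeros((4, 4), dtype=np.uint8)    # image plane: (x, z)
side[:2, :2] = 1

hull = visual_hull(
    [front, side],
    [lambda g: g[:, :2], lambda g: g[:, [0, 2]]],
    grid,
)
print(len(hull))  # 8 voxels survive the carving
```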

    Flexible system of multiple RGB-D sensors for measuring and classifying fruits in agri-food Industry

    The productivity of the agri-food sector faces continuous and growing challenges that make the use of innovative technologies a priority for maintaining and even improving competitiveness. In this context, this paper presents the foundations and validation of a flexible, portable system capable of obtaining 3D measurements and classifying objects based on color and depth images taken from multiple Kinect v1 sensors. The developed system is applied to the selection and classification of fruits, a common activity in the agri-food industry. By integrating the depth information obtained from multiple sensors, the system obtains complete and accurate information about the environment; it is capable of self-location and self-calibration of the sensors, after which it detects, classifies and measures fruits in real time. Unlike other systems that use a specific set-up or require prior calibration, it does not require a predetermined positioning of the sensors, so it can be adapted to different scenarios. The characterization process considers classification of fruits, estimation of their volume and a count of the items of each kind of fruit. A requirement of the system is that each sensor must partially share its field of view with at least one other sensor. The sensors localize themselves by estimating the rotation and translation matrices that transform the coordinate system of one sensor into that of another. To achieve this, the Iterative Closest Point (ICP) algorithm is used and subsequently validated with a six-degree-of-freedom KUKA robotic arm. A method based on the Kalman filter is also implemented to estimate the movement of objects. A relevant contribution of this work is the detailed analysis and propagation of the errors that affect both the proposed methods and the hardware.
To determine the performance of the proposed system, the passage of different types of fruit on a conveyor belt is emulated by a mobile robot carrying a surface on which the fruits were placed. Both the perimeter and volume are measured, and the fruits are classified according to type. The system was able to distinguish and classify 95% of the fruits and to estimate their volume with 85% accuracy in the worst cases (fruits whose shape is not symmetrical) and 94% accuracy in the best cases (fruits whose shape is more symmetrical), showing that the proposed approach can become a useful tool in the agri-food industry. This project has been supported by the National Commission for Science and Technology Research of Chile (Conicyt) under FONDECYT grant 1140575 and the Advanced Center of Electrical and Electronic Engineering - AC3E (CONICYT/FB0008).
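    ICP, used above for sensor self-localization, alternates closest-point matching with a closed-form rigid alignment of the matched pairs. A sketch of that inner alignment step (the Kabsch/SVD solution; the point sets below are synthetic, and the full ICP loop around it is omitted):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    for known point correspondences -- the step ICP repeats after
    re-matching each point to its current closest neighbour."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)   # cross-covariance SVD
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known 90-degree rotation about z plus a shift.
rng = np.random.default_rng(0)
pts = rng.random((20, 3))
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```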

    Image-based food classification and volume estimation for dietary assessment: a review.

    A daily dietary assessment method named 24-hour dietary recall has commonly been used in nutritional epidemiology studies to capture detailed information about the food eaten by participants, in order to understand their dietary behaviour. However, in this self-reporting technique, the food types and portion sizes reported depend highly on the user's subjective judgement, which may lead to biased and inaccurate dietary analysis results. As a result, a variety of vision-based dietary assessment approaches have been proposed recently. While these methods show promise in tackling issues in nutritional epidemiology studies, several challenges and forthcoming opportunities, detailed in this study, still exist. This study provides an overview of the computing algorithms, mathematical models and methodologies used in the field of image-based dietary assessment. It also provides a comprehensive comparison of state-of-the-art approaches to food recognition and volume/weight estimation in terms of processing speed, model accuracy, efficiency and constraints, followed by a discussion of deep learning methods and their efficacy in dietary assessment. After a comprehensive exploration, we found that integrated dietary assessment systems combining different approaches could be the solution to tackling the challenges of accurate dietary intake assessment.

    A Systematic Literature Review With Bibliometric Meta-Analysis Of Deep Learning And 3D Reconstruction Methods In Image Based Food Volume Estimation Using Scopus, Web Of Science And IEEE Database

    Purpose- Estimation of food portions is necessary in image-based dietary monitoring techniques. The purpose of this systematic survey is to identify peer-reviewed literature on image-based food volume estimation methods in the Scopus, Web of Science and IEEE databases. It further provides a bibliometric analysis of image-based food volume estimation methods using 3D reconstruction and deep learning techniques. Design/methodology/approach- The Scopus, Web of Science and IEEE citation databases are used to gather the data. Using advanced keyword search and the PRISMA approach, relevant papers were extracted, selected and analyzed. The bibliographic data of the articles published in journals over the past twenty years were extracted. A deeper analysis was performed using bibliometric indicators and applications with Microsoft Excel and VOSviewer. A comparative analysis of the most cited works in deep learning and 3D reconstruction methods is performed. Findings- This review summarizes the results from the extracted literature and traces research directions in food volume estimation methods. The bibliometric analysis and PRISMA search results suggest a broader taxonomy of image-based methods for estimating food volume in dietary management systems and projects. Deep learning and 3D reconstruction methods show better estimation accuracy than other approaches. The work also discusses the importance of diverse and robust image datasets for training accurate learning models for food volume estimation. Practical implications- The bibliometric analysis and systematic review give researchers, dieticians and practitioners insights into research trends in the estimation of food portions and their accuracy. The work also discusses the challenges of building a food volume estimator using deep learning and opens new research directions.
Originality/value- This study presents an overview of research on food volume estimation methods using deep learning and 3D reconstruction, covering works from 1995 to 2020. The findings present the five popular methods that have been used in image-based food volume estimation and show the research trends associated with the emerging 3D reconstruction and deep learning methodologies. Additionally, the work emphasizes the challenges in the use of these approaches and the need to develop more diverse benchmark image datasets for food volume estimation, including raw food and cooked food in all states, served in different containers.

    Visibility-Aware Pixelwise View Selection for Multi-View Stereo Matching

    The performance of PatchMatch-based multi-view stereo algorithms depends heavily on the source views selected for computing matching costs. Instead of modeling the visibility of different views, most existing approaches handle occlusions in an ad-hoc manner. To address this issue, we propose a novel visibility-guided pixelwise view selection scheme. It progressively refines the set of source views used for each pixel in the reference view, based on visibility information provided by already validated solutions. In addition, the Artificial Multi-Bee Colony (AMBC) algorithm is employed to search for optimal solutions for different pixels in parallel. Inter-colony communication is performed both within the same image and among different images. Fitness rewards are added to validated and propagated solutions, effectively enforcing the smoothness of neighboring pixels and allowing better handling of textureless areas. Experimental results on the DTU dataset show our method achieves state-of-the-art performance among non-learning-based methods and retrieves more details in occluded and low-textured regions. Comment: 8 pages
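    Matching costs of the kind PatchMatch methods aggregate are commonly based on normalized cross-correlation (NCC) between a reference patch and its reprojection in a source view. A minimal sketch of such a cost plus a crude threshold-based view selection (the patches, threshold and helper names are illustrative, not the paper's visibility-guided scheme):

```python
import numpy as np

def ncc_cost(patch_ref, patch_src, eps=1e-8):
    """Matching cost as 1 - normalized cross-correlation; a low cost
    means the two patches look photometrically consistent."""
    a = patch_ref - patch_ref.mean()
    b = patch_src - patch_src.mean()
    ncc = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps)
    return 1.0 - ncc

def select_views(costs, threshold=0.5):
    """Keep only source views whose cost stays under a threshold --
    a crude per-pixel selection that drops likely-occluded views."""
    return [i for i, c in enumerate(costs) if c < threshold]

patch = np.arange(25, dtype=float).reshape(5, 5)
good = patch * 2.0 + 3.0        # affine intensity change: NCC is invariant
occluded = np.full((5, 5), 7.0) # a flat occluder hides the patch here

costs = [ncc_cost(patch, good), ncc_cost(patch, occluded)]
print(select_views(costs))  # [0] -> only the unoccluded view is kept
```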

    Assessment of grape cluster yield components based on 3D descriptors using stereo vision

    NOTICE: this is the author's version of a work that was accepted for publication in Food Control. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Food Control, Volume 50, April 2015, Pages 273–282, DOI 10.1016/j.foodcont.2014.09.004.
Wine quality depends mostly on the features of the grapes it is made from. Cluster and berry morphology are key factors in determining grape and wine quality. However, current practices for grapevine quality estimation require time-consuming destructive analysis or largely subjective judgment by experts. The purpose of this paper is to propose a three-dimensional computer vision approach to assessing grape yield components based on new 3D descriptors. To achieve this, a partial three-dimensional model of the grapevine cluster is first extracted using stereo vision. A number of grapevine quality components are then predicted using SVM models based on the new 3D descriptors. Experiments confirm that this approach is capable of predicting the main cluster yield components related to quality, such as cluster compactness and berry size (R2 > 0.80, p < 0.05). In addition, other yield components (cluster volume, total berry weight and number of berries) were also estimated using SVM models, obtaining prediction R2 of 0.82, 0.83 and 0.71, respectively.
This work has been partially funded by the Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria de España (INIA - Spanish National Institute for Agriculture and Food Research and Technology) through research project RTA2012-00062-C04-02, with support from European FEDER funds and the UPV-SP20120276 and AGL2011-23673 projects. Ivorra Martínez, E.; Sánchez Salmerón, AJ.; Camarasa Baixauli, JG.; Diago, M.; Tardaguila, J. (2015). Assessment of grape cluster yield components based on 3D descriptors using stereo vision. Food Control. 50:273-282. https://doi.org/10.1016/j.foodcont.2014.09.004
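    The R2 values reported above measure how much of a yield component's variance the descriptor-based model explains. A sketch with ordinary least squares standing in for the paper's SVM models (the descriptors, coefficients and noise below are synthetic, chosen only to show how R2 is computed):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - residual / total variance."""
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Synthetic stand-in: three 3D descriptors (imagine compactness, mean
# berry radius, hull volume) linearly related to cluster weight + noise.
rng = np.random.default_rng(42)
X = rng.random((100, 3))
weight = 120 * X[:, 0] + 40 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 4, 100)

# Ordinary least squares (with intercept) as a simple regressor.
A = np.column_stack([X, np.ones(100)])
coef, *_ = np.linalg.lstsq(A, weight, rcond=None)
r2 = r_squared(weight, A @ coef)
print(round(r2, 2))  # close to 1 for this nearly-linear toy data
```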

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available