22 research outputs found

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without destructively cutting away any parts of the plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than 13 mm of error for plant size, leaf size and internode distance.
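
    To illustrate the core relation a calibrated stereo pair exploits, the sketch below converts disparity to metric depth via Z = f·B/d. This is a minimal Python illustration, not the authors' pipeline; the focal length, baseline and disparity values are assumed for the example.

        import numpy as np

        def depth_from_disparity(disparity, focal_px, baseline_m):
            """Convert a disparity map (pixels) from a rectified stereo pair
            into metric depth via Z = f * B / d. Zero or negative disparities
            are marked invalid (NaN)."""
            d = np.asarray(disparity, dtype=float)
            z = np.full_like(d, np.nan)
            valid = d > 0
            z[valid] = focal_px * baseline_m / d[valid]
            return z

        # Illustrative values only: a 3x3 disparity patch, a 1200 px focal
        # length and a 60 mm baseline (not taken from the paper).
        disp = np.array([[24.0, 25.0,  0.0],
                         [24.5, 25.5, 26.0],
                         [23.0,  0.0, 25.0]])
        print(depth_from_disparity(disp, focal_px=1200.0, baseline_m=0.06))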

    3D Scanning System for Automatic High-Resolution Plant Phenotyping

    Thin leaves, fine stems, self-occlusion, and non-rigid, slowly changing structures make plants difficult subjects for three-dimensional (3D) scanning and reconstruction -- two critical steps in automated visual phenotyping. Many current solutions, such as laser scanning, structured light and multiview stereo, can struggle to acquire usable 3D models because of limitations in scanning resolution and calibration accuracy. In response, we have developed a fast, low-cost 3D scanning platform that images plants on a rotating stage with two tilting DSLR cameras centred on the plant. It uses new methods of camera calibration and background removal to achieve high-accuracy 3D reconstruction. We assessed the system's accuracy using a 3D visual hull reconstruction algorithm applied to two plastic models of dicotyledonous plants, two sorghum plants and two wheat plants across different sets of tilt angles. Scan times ranged from 3 minutes (72 images using 2 tilt angles) to 30 minutes (360 images using 10 tilt angles). The leaf lengths, widths, areas and perimeters of the plastic models were measured manually and compared to measurements from the scanning system: the results were within 3-4% of each other. The 3D reconstructions obtained with the scanning system show excellent geometric agreement for all six plant specimens, even plants with thin leaves and fine stems. (8 pages; presented at DICTA 2017.)
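
    As a rough illustration of the silhouette-based visual hull reconstruction the evaluation relies on, the Python sketch below carves a voxel grid by keeping only voxel centres that project inside every binary silhouette. The 3x4 camera matrices and masks are assumed inputs; this is a textbook version, not the authors' implementation.

        import numpy as np

        def visual_hull(grid_pts, projections, masks):
            """Keep the voxel centres whose projection falls inside every
            silhouette. grid_pts: (N,3) voxel centres; projections: list of
            3x4 camera matrices; masks: list of binary silhouette images.
            Points are assumed to lie in front of every camera."""
            homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])  # (N,4)
            inside = np.ones(len(grid_pts), dtype=bool)
            for P, mask in zip(projections, masks):
                uvw = homog @ P.T                       # (N,3) projective coords
                u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
                v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
                h, w = mask.shape
                ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
                hit = np.zeros(len(grid_pts), dtype=bool)
                hit[ok] = mask[v[ok], u[ok]] > 0
                inside &= hit                           # carve away misses
            return grid_pts[inside]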

    Development of software tools to control an artificial vision-based phenotyping system

    Computer vision systems make it possible to automate the process of obtaining phenotypic features in plants, capturing large amounts of data quickly and at a low associated cost. In this work we present a flexible software tool for phenotyping analysis based on computer vision. The tool provides full control over the experiment parameters, such as experiment duration, the hours of the night-time and daytime periods, and the use of different cameras with their image-acquisition times. The system has been programmed in C++, which made it possible to integrate and run image-processing algorithms from libraries such as OpenCV and MIL. This work was carried out within the MICINN projects BFU-2013-45148-R and ViSel-TR (TIN2012-39279) and was presented at the II Simposio Nacional de Ingeniería Hortícola, held in Almería on 10-12 February 2016.
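
    The experiment control described above amounts to a timed capture loop with day/night-dependent camera selection. A minimal sketch follows (in Python for brevity, although the tool itself is written in C++); all parameter names and values are illustrative, not the tool's actual configuration.

        import datetime as dt
        import time

        # Hypothetical experiment parameters (illustrative names/values only).
        EXPERIMENT_DAYS = 14
        DAY_START, DAY_END = dt.time(8, 0), dt.time(20, 0)  # daytime window
        CAPTURE_EVERY_S = 3600                              # one image per hour

        def is_daytime(now):
            return DAY_START <= now.time() < DAY_END

        def capture(camera):
            """Stub standing in for the actual image-acquisition call."""
            print(f"{dt.datetime.now().isoformat()} captured with {camera}")

        def run_experiment():
            end = dt.datetime.now() + dt.timedelta(days=EXPERIMENT_DAYS)
            while dt.datetime.now() < end:
                # e.g. visible-light camera by day, NIR camera by night
                capture(camera="rgb" if is_daytime(dt.datetime.now()) else "nir")
                time.sleep(CAPTURE_EVERY_S)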

    Development of a software tool to control an artificial vision-based phenotyping system

    Computer vision systems make it possible to automate the process of obtaining phenotypic features in plants, capturing large amounts of data quickly and at a low associated cost. In this work we present a flexible software tool for phenotyping analysis based on computer vision. The tool provides full control over the experiment parameters, such as experiment duration, the hours of the night-time and daytime periods, and the use of different cameras with their image-acquisition times. The system has been programmed in C++, which made it possible to integrate and run image-processing algorithms from libraries such as OpenCV and MIL.

    Dense 3D Facial Reconstruction from a Single Depth Image in Unconstrained Environment

    With the increasing demands of virtual reality applications such as 3D films, virtual human-machine interaction and virtual agents, the analysis of 3D human faces is considered more and more important as a fundamental step for those tasks. Thanks to the information provided by the additional dimension, 3D facial reconstruction enables such tasks to be achieved with higher accuracy than approaches based on 2D facial analysis. The denser the 3D facial model, the more information it can provide. However, most existing dense 3D facial reconstruction methods require complicated processing and a high system cost. To this end, this paper presents a novel method that simplifies the process of dense 3D facial reconstruction by employing only one frame of depth data obtained with an off-the-shelf RGB-D sensor. The experiments showed competitive results on real-world data.
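
    A single depth frame can be back-projected into a 3D point cloud with the pinhole model, which is the usual starting point for depth-based facial reconstruction. The Python sketch below shows that step under assumed intrinsics; it is not the paper's method.

        import numpy as np

        def depth_to_points(depth_m, fx, fy, cx, cy):
            """Back-project a depth image (metres) into an (N,3) point cloud
            using the pinhole model: X = (u-cx)Z/fx, Y = (v-cy)Z/fy, Z = depth.
            Pixels with zero depth are treated as invalid and dropped."""
            h, w = depth_m.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = depth_m
            valid = z > 0
            x = (u[valid] - cx) * z[valid] / fx
            y = (v[valid] - cy) * z[valid] / fy
            return np.column_stack([x, y, z[valid]])

        # Illustrative intrinsics for a VGA-resolution depth sensor
        # (assumed values, not calibration data from the paper).
        cloud = depth_to_points(np.full((480, 640), 0.8), 570.0, 570.0, 320.0, 240.0)
        print(cloud.shape)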

    Terrestrial laser scanning to reconstruct branch architecture from harvested branches

    Quantifying whole-branch architecture is critical to understanding tree function; for example, branch surface area controls woody gas exchange. Yet, owing to measurement difficulty, the architecture of small-diameter branches (e.g. <10 cm ø) is either modelled, subsampled or ignored. Methods using Terrestrial Laser Scanning (TLS) are now widely applied to analyse tree- and plot-level architecture; however, resolving small-diameter branches in situ remains a challenge. Currently, it is suggested that accurate reconstruction of small-diameter branches can only be achieved by harvest and measurement under controlled conditions. Here we present a new TLS workflow for rapid and accurate reconstruction of complete branch architecture from harvested branches. The workflow sets out scan configuration, post-processing (including a novel reflectance filter) and fitting of Quantitative Structure Models (QSM) to reconstruct topologically coherent branch models. This is demonstrated on 595 branches (scanned indoors to negate the impact of wind) and compared with 65 branches that were manually measured (i.e. with measuring tape and callipers). Comparison of a suite of morphological and topological traits reveals good agreement between TLS-derived metrics and manual measurements, with RMSE (%RMSE) for total branch length = 0.7 m (10%), volume = 0.09 litres (43%), surface area = 0.04 m² (26%) and number of tips = 6.4 (35%). Scanning was faster and invariant to branch size, whereas manual measurement required significantly more personnel time. We recommend measuring a subsample of tip widths to constrain the QSM taper function, as the TLS workflow tends to overestimate tip width. The workflow presented here allows rapid characterisation of branch architecture from harvested branches. Increasing the number of branches analysed (e.g. many branches from a single tree, or branches from many species globally) could allow a comprehensive analysis of the "missing link" between the leaves and larger-diameter branches.
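
    The agreement figures above can be reproduced on paired measurements with a few lines of code. The Python sketch below computes RMSE and %RMSE, taking %RMSE as RMSE divided by the mean manual value (one common definition; the paper's exact normalisation is an assumption here). The sample numbers are illustrative, not the study's data.

        import numpy as np

        def rmse_report(tls, manual):
            """RMSE and %RMSE (RMSE as a share of the mean manual value)
            for paired TLS-derived and manually measured branch traits."""
            tls, manual = np.asarray(tls, float), np.asarray(manual, float)
            rmse = np.sqrt(np.mean((tls - manual) ** 2))
            return rmse, 100.0 * rmse / manual.mean()

        # Illustrative total branch lengths in metres (not the paper's data).
        rmse, pct = rmse_report([7.1, 6.4, 8.0], [7.0, 6.6, 7.7])
        print(f"RMSE = {rmse:.2f} m ({pct:.0f}%)")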

    Development of an Autonomous Indoor Phenotyping Robot

    To fully understand phenotype and genotype-by-environment interactions and thereby improve crop performance, a large amount of phenotypic data is needed. Studying plants of a given strain under multiple environments can greatly help to reveal these interactions. To collect the labor-intensive data required for experiments in this area, an indoor rover has been developed that can accurately and autonomously move between and inside growth chambers. The system uses mecanum wheels, magnetic tape guidance, a Universal Robots UR10 robot manipulator and a Microsoft Kinect v2 3D sensor to position various sensors in this constrained environment. Integration of the motor controllers, robot arm and Kinect sensor was achieved in a customized C++ program. Detecting and segmenting plants in a multi-plant environment is a challenging task that can be aided by integrating depth data into these algorithms. Image-processing functions were implemented to filter the depth image to minimize noise and remove undesired surfaces, reducing the memory requirement and allowing the plant to be reconstructed at a higher resolution in real time. Three-dimensional meshes representing plants inside the chamber were reconstructed using the Kinect SDK's KinectFusion. After transforming user-selected points from camera coordinates to robot-arm coordinates, the robot arm is used in conjunction with the rover to probe desired leaves, simulating the future use of sensors such as a fluorimeter and a Raman spectrometer. This paper shows the system architecture and some preliminary results, as tested on a life-sized growth chamber mock-up. A comparison between using raw camera-coordinate data and using KinectFusion data is presented. The results suggest that the KinectFusion pose estimation is fairly accurate, decreasing accuracy by only a few millimeters at distances of roughly 0.8 m.
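
    The depth-filtering step described above can be approximated by a simple working-range threshold that zeroes out background surfaces and out-of-range noise before reconstruction. The Python sketch below shows the idea; the range limits are assumed values, not the system's.

        import numpy as np

        def filter_depth(depth_mm, near=500, far=1500):
            """Keep only depths inside the working range around the plant and
            zero out everything else (background walls, floor, sensor noise).
            Range limits in millimetres are illustrative, not the system's
            actual values; further smoothing could follow this step."""
            d = np.asarray(depth_mm, dtype=float).copy()
            d[(d < near) | (d > far)] = 0.0
            return d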

    3D Maize Plant Reconstruction Based on Georeferenced Overlapping LiDAR Point Clouds

    3D crop reconstruction with high temporal resolution and non-destructive measuring technologies can support the automation of plant phenotyping processes, and the availability of such 3D data can give valuable information about plant development and the interaction of the plant genotype with the environment. This article presents a new methodology for georeferenced 3D reconstruction of maize plant structure. For this purpose, a total station, an IMU and several 2D LiDARs with different orientations were mounted on an autonomous vehicle. The multistep methodology presented, based on the application of the ICP algorithm for point cloud fusion, made it possible to overlap the georeferenced point clouds. The overlapping showed that the aerial points (corresponding mainly to plant parts) were reduced to 1.5%-9% of the total registered data; the remainder were redundant or ground points. Including different LiDAR viewpoints of the scene yields a more realistic representation of the surroundings, incorporating new useful information but also noise. Georeferenced 3D maize plant reconstruction at different growth stages, combined with the accuracy of the total station, could be highly useful for precision agriculture at the level of the individual crop plant.
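
    At the heart of the point cloud fusion is the ICP algorithm, each iteration of which matches points to their nearest neighbours and solves for the best rigid alignment. The Python sketch below shows one such iteration using the SVD-based (Kabsch) solution; it is a textbook illustration, not the authors' implementation.

        import numpy as np

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t aligning the paired
            points src to dst, via the SVD-based Kabsch solution."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:        # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = cd - R @ cs
            return R, t

        def icp_step(src, dst):
            """One ICP iteration: pair each source point with its nearest
            destination point, then apply the best rigid alignment."""
            d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            matches = dst[d2.argmin(axis=1)]
            R, t = best_rigid_transform(src, matches)
            return src @ R.T + t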

    Development of a Mobile Robotic Phenotyping System for Growth Chamber-based Studies of Genotype x Environment Interactions

    To better understand phenotype and genotype-by-environment interactions and thereby improve crop performance, large amounts of phenotypic data are needed. Studying plants of a given strain under multiple environments can greatly help to reveal these interactions. To collect the labor-intensive data required for experiments in this area, a mecanum-wheeled, magnetic-tape-following indoor rover has been developed to accurately and autonomously move between and inside growth chambers. Integration of the motor controllers, a robot arm and a Microsoft Kinect v2 3D sensor was achieved in a customized C++ program. Detecting and segmenting plants in a multi-plant environment is a challenging task that can be aided by integrating depth data into these algorithms. Image-processing functions were implemented to filter the depth image to minimize noise and remove undesired surfaces, reducing the memory requirement and allowing the plant to be reconstructed at a higher resolution in real time. Three-dimensional meshes representing plants inside the chamber were reconstructed using the Kinect SDK's KinectFusion. After transforming user-selected points from camera coordinates to robot-arm coordinates, the robot arm is used in conjunction with the rover to probe desired leaves, simulating the future use of sensors such as a fluorimeter and a Raman spectrometer. This paper reports the system architecture and some preliminary results.
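
    The camera-to-arm step described above is a single rigid transform applied to each user-selected point. The Python sketch below shows it with a 4x4 homogeneous matrix; the matrix and the point are illustrative values, not calibration results from the system.

        import numpy as np

        def camera_to_arm(point_cam, T_cam_to_arm):
            """Map a point from camera coordinates to robot-arm coordinates
            with a 4x4 homogeneous transform (e.g. from a hand-eye
            calibration; the matrix used below is purely illustrative)."""
            p = np.append(point_cam, 1.0)
            return (T_cam_to_arm @ p)[:3]

        # Illustrative transform: 90-degree rotation about Z plus an offset.
        T = np.array([[0, -1, 0,  0.20],
                      [1,  0, 0, -0.10],
                      [0,  0, 1,  0.35],
                      [0,  0, 0,  1.0]])
        leaf_cam = np.array([0.05, 0.12, 0.80])   # metres, camera frame
        print(camera_to_arm(leaf_cam, T))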