39 research outputs found

    Structured Light-Based 3D Reconstruction System for Plants.

    Get PDF
    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants, created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
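
The leaf detection figures quoted above follow the standard precision and recall definitions, which a small sketch makes concrete (the counts below are illustrative only, not taken from the paper):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP): how many detections were real leaves.
    Recall = TP / (TP + FN): how many real leaves were detected."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts: 89 correct leaf detections, 11 false alarms,
# 3 missed leaves.
p, r = precision_recall(tp=89, fp=11, fn=3)
```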

    Tree Structure Retrieval for Apple Trees from 3D Pointcloud

    Full text link
    3D reconstruction is a challenging problem and has been an important research topic in the areas of remote sensing and computer vision for many years. Existing 3D reconstruction approaches are not suitable for orchard applications due to complicated tree structures. Current tree reconstruction has included models specific to trees of a certain density, but the impact of varying Leaf Area Index (LAI) on model performance has not been studied. To better manage an apple orchard, this thesis proposes methods for evaluating an apple canopy density mapping system as an input for a variable-rate sprayer for both trellis-structured (2D) and standalone (3D) apple orchards using a 2D LiDAR (Light Detection and Ranging). The canopy density mapping system has been validated for robustness and repeatability with multiple scans. The consistency of the whole row during multiple passes has a correlation of R^2 = 0.97. The proposed system will support decision-making for a variable-rate sprayer. To further study individual tree structure, this thesis proposes a novel and fast approach to reconstruct and analyse 3D trees from LiDAR over a range of Leaf Area Index (LAI) values, enabling morphology analysis of height, branch length and branch angles for real and simulated apple trees. After using Principal Component Analysis (PCA) to extract the trunk points, an improved Mean Shift algorithm, the Adapted Mean Shift (AMS), is introduced to classify different branch clusters and extract the branch nodes. A full evaluation workflow of tree parameters including trunk and branches is introduced for morphology analysis to investigate the accuracy of the approach over different LAI values. Tree height, branch length, and branch angles were analysed and compared to the ground truth for trees with a range of LAI values. When the LAI is smaller than 0.1, the accuracy for height and length is greater than 90% and the accuracy for the angles is around 80%.
When the LAI is greater than 0.1, the branch accuracy reduces to 40%. This analysis of tree reconstruction performance with respect to LAI values, together with the combination of efficient and accurate structure reconstruction, opens the possibility of improving orchard management and botanical studies on a large scale. To improve the accuracy of traditional tree structure analysis, a deep learning approach is introduced to pre-process and classify unbalanced, inhomogeneous, and noisy point cloud data. TreeNet is inspired by 3D U-Net, adding classes and median filters to segment trunk, branch, and leaf parts. TreeNet outperformed 3D U-Net and SVM in terms of Kappa, Matthews Correlation Coefficient (MCC), and F1-score in segmentation. The combined TreeNet-AMS method also showed improvement in tree structure analysis over the traditional AMS method mentioned above. Following on from this research, efficient tree structure analysis of tree height, trunk length, branch position, and branch length could be conducted. Since tree morphology is known to be closely relevant to thinning, spraying and yield, the proposed work will largely benefit related studies in agriculture and forestry.
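
The Adapted Mean Shift (AMS) step builds on ordinary mean-shift clustering. A toy flat-kernel sketch in pure Python (illustrative only; the thesis's AMS adds adaptations for branch geometry that are not shown here):

```python
import math

def mean_shift(points, bandwidth, iterations=20):
    """Toy flat-kernel mean shift: each query point repeatedly moves to
    the mean of all sample points within `bandwidth`, drifting toward a
    local density mode; points sharing a mode form one cluster."""
    modes = [list(p) for p in points]
    for _ in range(iterations):
        for i, m in enumerate(modes):
            near = [p for p in points if math.dist(m, p) <= bandwidth]
            modes[i] = [sum(c) / len(near) for c in zip(*near)]
    return modes

# Two well-separated blobs of points collapse onto two modes.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
modes = mean_shift(pts, bandwidth=1.0)
```

Points whose converged modes coincide (up to a small tolerance) would then be assigned to the same branch cluster.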

    A method for measuring banana pseudo-stem phenotypic parameters based on handheld mobile LiDAR and IMU fusion

    Get PDF
    Diameter and height are crucial morphological parameters of banana pseudo-stems, serving as indicators of the plant’s growth status. Currently, in densely cultivated banana plantations, there is a lack of applicable research methods for the scalable measurement of phenotypic parameters such as diameter and height of banana pseudo-stems. This paper introduces a handheld mobile LiDAR and Inertial Measurement Unit (IMU)-fused laser scanning system designed for measuring phenotypic parameters of banana pseudo-stems within banana orchards. To address the challenges posed by dense canopy cover in banana orchards, a distance-weighted feature extraction method is proposed. This method, coupled with LiDAR-IMU integration, constructs a three-dimensional point cloud map of the banana plantation area. To overcome difficulties in segmenting individual banana plants in complex environments, a combined segmentation approach is proposed, involving Euclidean clustering, K-means clustering, and threshold segmentation. A sliding window recognition method is presented to determine the connection points between pseudo-stems and leaves, mitigating issues caused by crown closure and heavy leaf overlap. Experimental results in banana orchards demonstrate that, compared with manual measurements, the mean absolute errors and relative errors for banana pseudo-stem diameter and height are 0.2127 cm (4.06%) and 3.52 cm (1.91%), respectively. These findings indicate that the proposed method is suitable for scalable measurements of banana pseudo-stem diameter and height in complex, obscured environments, providing a rapid and accurate inter-orchard measurement approach for banana plantation managers.
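
The Euclidean clustering stage of the combined segmentation can be sketched as breadth-first region growing over a distance threshold (a minimal illustrative version; the paper's pipeline layers K-means clustering and threshold segmentation on top of this):

```python
import math
from collections import deque

def euclidean_cluster(points, tol):
    """Greedy Euclidean clustering: a point joins a cluster if it lies
    within `tol` of any point already in it, found via BFS region
    growing from an arbitrary seed."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= tol]
            for j in near:
                unvisited.discard(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(sorted(cluster))
    return clusters

# Two pseudo-stem cross-sections about 1 m apart; a 10 cm tolerance
# separates them into two clusters.
pts = [(0.00, 0.0), (0.05, 0.0), (0.10, 0.0),
       (1.00, 0.0), (1.05, 0.0)]
found = euclidean_cluster(pts, tol=0.1)
```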

    Instance-based learning of affordances

    Get PDF
    The discovery of possible interactions with objects is a vital part of an exploration task for robots. An important subset of these possible interactions are affordances. Affordances describe what a specific object can afford to a specific agent, based on the capabilities of the agent and the properties of the object in relation to the agent. For example, a chair affords a human to be sat upon if the sitting area of the chair is approximately knee-high. In this work, an instance-based learning approach is taken to discover these affordances solely through different visual representations of point cloud data of an object. The point clouds are acquired with a Microsoft Kinect sensor. Different representations are tested and evaluated against a set of point cloud data of various objects found in a living room environment.
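
The instance-based idea reduces to a nearest-neighbour lookup over stored (feature vector, affordance) pairs. A hypothetical sketch, where two-element descriptors `[height_m, support_area_m2]` stand in for the thesis's richer point cloud representations:

```python
import math

# Hypothetical stored instances: a feature vector summarising an
# object's point cloud, paired with the affordance observed for it.
instances = [
    ([0.45, 0.20], "sit-on"),    # chair-like
    ([0.48, 0.25], "sit-on"),
    ([0.75, 0.50], "place-on"),  # table-like
    ([0.80, 0.60], "place-on"),
]

def predict_affordance(features):
    """1-nearest-neighbour: return the affordance of the most similar
    stored instance."""
    return min(instances,
               key=lambda inst: math.dist(inst[0], features))[1]

label = predict_affordance([0.47, 0.22])
```

A knee-high object with a small support surface lands closest to the chair-like instances, so it is predicted to afford sitting.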

    Mapping and Real-Time Navigation With Application to Small UAS Urgent Landing

    Full text link
    Small Unmanned Aircraft Systems (sUAS) operating in low-altitude airspace require flight near buildings and over people. Robust urgent landing capabilities including landing site selection are needed. However, conventional fixed-wing emergency landing sites such as open fields and empty roadways are rare in cities. This motivates our work to uniquely consider unoccupied flat rooftops as possible nearby landing sites. We propose novel methods to identify flat rooftop buildings, isolate their flat surfaces, and find touchdown points that maximize distance to obstacles. We model flat rooftop surfaces as polygons that capture their boundaries and possible obstructions on them. This thesis offers five specific contributions to support urgent rooftop landing. First, the Polylidar algorithm is developed, which enables efficient non-convex polygon extraction, with interior holes, from 2D point sets. A key insight of this work is a novel boundary-following method that avoids the computationally expensive geometric unions of triangles used by other approaches. Results from real-world and synthetic benchmarks show comparable accuracy and more than four times speedup compared to other state-of-the-art methods. Second, we extend polygon extraction from 2D to 3D data, where polygons represent flat surfaces and interior holes represent obstacles. Our Polylidar3D algorithm transforms point clouds into a triangular mesh where dominant plane normals are identified and used to parallelize and regularize planar segmentation and polygon extraction. The result is a versatile and extremely fast algorithm for non-convex polygon extraction of 3D data. Third, we propose a framework for classifying roof shape (e.g., flat) within a city. We process satellite images, airborne LiDAR point clouds, and building outlines to generate both a satellite and depth image of each building.
    Convolutional neural networks are trained for each modality to extract high-level features, which are sent to a random forest classifier for roof shape prediction. This research contributes the largest multi-city annotated dataset with over 4,500 rooftops used to train and test models. Our results show flat-like rooftops are identified with > 90% precision and recall. Fourth, we integrate Polylidar3D and our roof shape prediction model to extract flat rooftop surfaces from archived data sources. We uniquely identify optimal touchdown points for all landing sites. We model risk as an innovative combination of landing site and path risk metrics and conduct a multi-objective Pareto front analysis for sUAS urgent landing in cities. Our proposed emergency planning framework guarantees that a risk-optimal landing site and flight plan is selected. Fifth, we verify a chosen rooftop landing site during a real-time vertical approach with on-board LiDAR and camera sensors. Our method contributes an innovative fusion of semantic segmentation using neural networks with computational geometry that is robust to individual sensor and method failure. We construct a high-fidelity simulated city in the Unreal game engine with a statistically accurate representation of rooftop obstacles. We show our method leads to greater than 4% improvement in accuracy for landing site identification compared to using LiDAR only. This work has broad impact for the safety of sUAS in cities as well as Urban Air Mobility (UAM). Our methods identify thousands of additional rooftop landing sites in cities which can provide safe landing zones in the event of emergencies. However, the maps we create are limited by the availability, accuracy, and resolution of archived data. Methods for quantifying data uncertainty or performing real-time map updates from a fleet of sUAS are left for future work.
    PhD, Robotics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/170026/1/jdcasta_1.pd
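
The dominant-plane-normal idea behind planar segmentation can be illustrated with a much simpler flatness test on individual mesh triangles (an illustrative stand-in, not the actual Polylidar3D algorithm):

```python
def cross(u, v):
    """3D cross product."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def is_flat(tri, cos_tol=0.95):
    """A mesh triangle counts as part of a 'flat' (near-horizontal)
    surface if its unit normal is within ~18 degrees of vertical."""
    a, b, c = tri
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = cross(u, v)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return abs(n[2]) / norm >= cos_tol

roof = ((0, 0, 10), (1, 0, 10), (0, 1, 10))  # horizontal triangle
wall = ((0, 0, 0), (0, 1, 0), (0, 0, 1))     # vertical triangle
```

Grouping adjacent triangles that pass this test, then tracing their shared boundary, is the intuition behind extracting a flat rooftop polygon with its obstacle holes.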

    Automatic 3D model creation with velocity-based surface deformations

    Get PDF
    The virtual worlds of Computer Graphics are populated by geometric objects, called models. Researchers have addressed the problem of synthesizing models automatically. Traditional modeling approaches often require a user to guide the synthesis process and to look after the geometry being synthesized, but user attention is expensive, and reducing user interaction is therefore desirable. I present a scheme for the automatic creation of geometry by deforming surfaces. My scheme includes a novel surface representation; it is an explicit representation consisting of points and edges, but it is not a traditional polygonal mesh. The novel surface representation is paired with a resampling policy to control the surface density and its evolution during deformation. The surface deforms with velocities assigned to its points through a set of deformation operators. Deformation operators avoid the manual computation and assignment of velocities; the operators allow a user to interactively assign velocities with minimal effort. Additionally, Petri nets are used to automatically deform a surface by mimicking a user assigning deformation operators. Furthermore, I present an algorithm to translate from the novel surface representation to a polygonal mesh. I demonstrate the utility of my model generation scheme with a gallery of models created automatically. The scheme's surface representation and resampling policy enable a surface to deform without requiring a user to control the deformation; self-intersections and hole creation are automatically prevented. The generated models show that my scheme is well suited to create organic-like models, whose surfaces have smooth transitions between surface features, but can also produce other kinds of models. My scheme allows a user to automatically generate varied instances of richly detailed models with minimal user interaction.
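
The core of velocity-based deformation, advancing each surface point along its assigned velocity, can be sketched as explicit Euler integration (a minimal illustration; the scheme's resampling policy and deformation operators are not modelled here):

```python
def deform(points, velocities, dt, steps):
    """Advance every surface point along its assigned velocity with
    explicit Euler steps: p <- p + v * dt, repeated `steps` times."""
    pts = [list(p) for p in points]
    for _ in range(steps):
        for p, v in zip(pts, velocities):
            for i in range(len(p)):
                p[i] += v[i] * dt
    return pts

# Inflate a square outward: each corner's velocity points away from
# the origin, so the square grows by 50% after 5 steps of dt = 0.1.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
vels = [(x, y) for x, y in square]
bigger = deform(square, vels, dt=0.1, steps=5)
```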

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Get PDF
    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles.
The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated by the release of the multi-modal obstacle dataset, FieldSAFE.
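
The occupancy grid mapping step can be sketched as accumulating detections into discrete cells, so repeated hits across scans build confidence in a mapped obstacle (a minimal 2D illustration, not the thesis's implementation):

```python
from collections import Counter

def occupancy_grid(detections, cell_size):
    """Bin (x, y) obstacle detections into grid cells; the per-cell
    count grows as repeated scans confirm the same obstacle."""
    grid = Counter()
    for x, y in detections:
        grid[(int(x // cell_size), int(y // cell_size))] += 1
    return grid

# Three scans agree on an obstacle near (2.1, 0.4); one stray hit.
hits = [(2.1, 0.4), (2.2, 0.45), (2.15, 0.38), (7.0, 7.0)]
grid = occupancy_grid(hits, cell_size=0.5)
```

Thresholding the per-cell counts then separates persistent obstacles from spurious single-scan detections along the vehicle path.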

    Visual analytics of multidimensional time-dependent trails: with applications in shape tracking

    Get PDF
    Much of the data collected for both scientific and non-scientific purposes shares similar characteristics: it changes over time and has many different properties. For example, consider the trajectory of an airplane travelling from one location to another. Not only does the airplane itself move over time, but its heading, height and speed are changing at the same time. During this research, we investigated different ways to collect and visualize data with these characteristics. One practical application is an automated milking device which needs to be able to determine the position of a cow's teats. By visualizing all data which is generated during the tracking process, we can gain insight into the workings of the tracking system and identify possibilities for improvement, which should lead to better recognition of the teats by the machine. Another important result of the research is a method which can be used to efficiently process a large amount of trajectory data and visualize it in a simplified manner. This has led to a system which can be used to show the movement of all airplanes around the world for a period of multiple weeks.
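
One standard way to visualize trajectory data "in a simplified manner" is line simplification such as Ramer-Douglas-Peucker (named here as an illustration; the thesis's actual pipeline may use a different method):

```python
import math

def rdp(trail, eps):
    """Ramer-Douglas-Peucker: keep only the endpoints if every point
    lies within `eps` of the chord between them; otherwise split at
    the farthest point and recurse on both halves."""
    if len(trail) < 3:
        return list(trail)
    (x1, y1), (x2, y2) = trail[0], trail[-1]
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each point to the endpoint chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / length
             for x, y in trail]
    k = max(range(len(trail)), key=dists.__getitem__)
    if dists[k] < eps:
        return [trail[0], trail[-1]]
    return rdp(trail[:k + 1], eps)[:-1] + rdp(trail[k:], eps)

# A mostly-straight track with one real excursion at x = 3.
track = [(0, 0), (1, 0.05), (2, -0.04), (3, 2.0), (4, 0.0)]
simplified = rdp(track, eps=0.5)
```

Small jitter along the straight segments is dropped while the genuine excursion survives, which is exactly what makes multi-week, world-scale airplane trails renderable.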