
    Mitigating RGB-D camera errors for robust ultrasonic inspections using a force-torque sensor

    Robot-based phased array ultrasonic testing is widely used for precise defect detection, particularly in complex geometries and varied materials. Compact robots with miniature arms can inspect constrained areas, but payload limitations restrict sensor choice. RGB-D cameras, being small and lightweight, capture RGB colour and depth data, producing colourised 3D point clouds for scene representation. These point clouds are used to estimate surface normals for aligning the ultrasound transducer on complex surfaces. However, sole reliance on RGB-D cameras can introduce inaccuracies that affect ultrasonic beam direction and test results. This paper investigates the impact of transducer pose and RGB-D camera limitations on ultrasonic inspections and proposes a novel method that uses a force-torque sensor to mitigate errors caused by the camera's inaccurately estimated normals. The force-torque sensor, integrated into the robot end effector, provides tactile feedback to the controller, enabling joint-angle adjustments that correct errors in the estimated normal. Experimental results show the successful application of ultrasound transducers using this method, even under significant misalignment. Adjustments took approximately 4 seconds to correct deviations of 12.55°, with an additional 4 seconds to ensure the probe was parallel to the surface, enhancing ultrasonic inspection accuracy in complex, constrained environments.
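    The correction described above lends itself to a simple proportional scheme: when the probe face is flush with the surface, the contact force acts along the tool axis and the bending torques vanish. The following is a minimal sketch of that idea, not the paper's implementation; read_ft() and rotate_tool() are hypothetical stand-ins for the actual sensor and robot interfaces.

```python
# A minimal sketch, not the paper's implementation: a proportional compliance
# loop that squares an ultrasonic probe against a surface using force-torque
# feedback. read_ft() and rotate_tool() are hypothetical stand-ins for the
# actual sensor and robot interfaces.
import time

TORQUE_TOL = 0.02  # N*m of residual bending torque accepted as "parallel"
GAIN = 0.5         # rad per N*m, proportional gain for orientation correction
MAX_STEPS = 200

def align_probe(read_ft, rotate_tool, dt=0.02):
    """Nudge the tool orientation until contact torques about x/y vanish.

    When the probe face is flush with the surface, the contact force acts
    through the tool axis and the bending torques (tx, ty) drop to ~zero.
    """
    for _ in range(MAX_STEPS):
        fx, fy, fz, tx, ty, tz = read_ft()       # forces (N), torques (N*m)
        if abs(tx) < TORQUE_TOL and abs(ty) < TORQUE_TOL:
            return True                          # probe parallel to surface
        # Rotate about each axis in the direction that cancels its torque.
        rotate_tool(rx=-GAIN * tx, ry=-GAIN * ty)
        time.sleep(dt)
    return False                                 # failed to converge
```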

    Single-pass inline pipeline 3D reconstruction using depth camera array

    A novel inline inspection (ILI) approach using a depth camera array (DCA) is introduced to create high-fidelity, dense 3D pipeline models. A new camera calibration method registers the color and depth information of the cameras into a unified pipe model. By incorporating the calibration outcomes into a robust camera motion estimation approach, dense and complete 3D pipe surface reconstruction is achieved using only the inline image data collected by a self-powered ILI rover in a single pass through a straight pipeline. Laboratory experiments demonstrate one-millimeter geometrical accuracy and 0.1-pixel photometric accuracy. On a reconstructed model of a longer pipeline, the proposed method generates a dense 3D surface reconstruction with millimeter-level accuracy and less than 0.5% distance error. The achieved performance highlights its potential as a useful tool for efficient inline, non-destructive evaluation of pipeline assets.
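    The reconstruction recipe above combines per-camera calibration, frame-to-frame motion estimation, and point cloud accumulation. A minimal sketch of that general pattern, assuming the Open3D library rather than the authors' code (the paper's calibration and robust motion estimator are more involved), might look like this:

```python
# A minimal sketch of the general recipe, assuming the Open3D library rather
# than the authors' code: convert calibrated RGB-D frames to point clouds,
# estimate frame-to-frame motion with ICP (standing in for the paper's robust
# motion estimator), and accumulate a single dense model.
import numpy as np
import open3d as o3d

def fuse_frames(frames, intrinsic):
    """frames: list of (color, depth) o3d.geometry.Image pairs in capture order.
    intrinsic: o3d.camera.PinholeCameraIntrinsic from the calibration step."""
    model = o3d.geometry.PointCloud()
    prev, pose = None, np.eye(4)
    for color, depth in frames:
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, convert_rgb_to_intensity=False)
        pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
        pcd.estimate_normals()  # point-to-plane ICP needs target normals
        if prev is not None:
            reg = o3d.pipelines.registration.registration_icp(
                prev, pcd, 0.01, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            # reg.transformation maps the previous frame into the current one;
            # chain its inverse to track this frame's pose in the world frame.
            pose = pose @ np.linalg.inv(reg.transformation)
        world = o3d.geometry.PointCloud(pcd)  # copy before transforming
        world.transform(pose)
        model += world
        prev = pcd
    return model.voxel_down_sample(voxel_size=0.001)  # ~1 mm grid
```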

    Polylidar3D -- Fast Polygon Extraction from 3D Data

    Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs, making low-dimensional representations of flat surfaces, such as polygons, desirable. We present Polylidar3D, a non-convex polygon extraction algorithm which takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment, with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of input data abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D's versatility and speed with real-world datasets, including aerial LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
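    To make two of the back-end stages concrete, here is an illustrative sketch of dominant plane normal estimation and planar segment masking on an organized point cloud, written in plain NumPy. It is a simplification for intuition only; the actual library operates on half-edge triangular meshes and additionally performs mesh smoothing and polygon extraction.

```python
# An illustrative NumPy sketch, not the library's implementation, of two
# back-end stages on an organized point cloud (H x W x 3): dominant plane
# normal estimation by voting, and planar segment masking by angular
# tolerance.
import numpy as np

def organized_normals(points):
    """Per-pixel surface normals via cross products of neighbor differences."""
    dx = points[:, 2:, :] - points[:, :-2, :]   # horizontal central difference
    dy = points[2:, :, :] - points[:-2, :, :]   # vertical central difference
    n = np.cross(dx[1:-1], dy[:, 1:-1])
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n                                    # shape (H-2, W-2, 3)

def dominant_normal(normals):
    """Crude mode estimate: snap normals to a coarse grid and vote."""
    flat = normals.reshape(-1, 3)
    bins = np.round(flat * 4).astype(int)       # ~0.25-wide cells on the sphere
    keys, counts = np.unique(bins, axis=0, return_counts=True)
    members = np.all(bins == keys[np.argmax(counts)], axis=1)
    d = flat[members].mean(axis=0)
    return d / np.linalg.norm(d)

def planar_mask(normals, plane_normal, max_angle_deg=10.0):
    """Mask of pixels whose normal lies within max_angle_deg of plane_normal."""
    return normals @ plane_normal > np.cos(np.radians(max_angle_deg))
```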

    Mapping and Real-Time Navigation With Application to Small UAS Urgent Landing

    Small Unmanned Aircraft Systems (sUAS) operating in low-altitude airspace require flight near buildings and over people, so robust urgent landing capabilities, including landing site selection, are needed. However, conventional fixed-wing emergency landing sites such as open fields and empty roadways are rare in cities. This motivates our work to uniquely consider unoccupied flat rooftops as possible nearby landing sites. We propose novel methods to identify flat-roofed buildings, isolate their flat surfaces, and find touchdown points that maximize distance to obstacles. We model flat rooftop surfaces as polygons that capture their boundaries and possible obstructions. This thesis offers five specific contributions to support urgent rooftop landing.

    First, the Polylidar algorithm is developed, which enables efficient non-convex polygon extraction with interior holes from 2D point sets. A key insight of this work is a novel boundary-following method that contrasts with computationally expensive geometric unions of triangles. Results from real-world and synthetic benchmarks show comparable accuracy and more than four times speedup compared to other state-of-the-art methods.

    Second, we extend polygon extraction from 2D to 3D data, where polygons represent flat surfaces and interior holes represent obstacles. Our Polylidar3D algorithm transforms point clouds into a triangular mesh where dominant plane normals are identified and used to parallelize and regularize planar segmentation and polygon extraction. The result is a versatile and extremely fast algorithm for non-convex polygon extraction from 3D data.

    Third, we propose a framework for classifying roof shape (e.g., flat) within a city. We process satellite images, airborne LiDAR point clouds, and building outlines to generate both a satellite and a depth image of each building. Convolutional neural networks are trained for each modality to extract high-level features, which are sent to a random forest classifier for roof shape prediction. This research contributes the largest multi-city annotated dataset, with over 4,500 rooftops, used to train and test the models. Our results show flat-like rooftops are identified with > 90% precision and recall.

    Fourth, we integrate Polylidar3D and our roof shape prediction model to extract flat rooftop surfaces from archived data sources, and we uniquely identify optimal touchdown points for all landing sites. We model risk as an innovative combination of landing site and path risk metrics and conduct a multi-objective Pareto front analysis for sUAS urgent landing in cities. Our proposed emergency planning framework guarantees that a risk-optimal landing site and flight plan are selected.

    Fifth, we verify a chosen rooftop landing site during real-time vertical approach with on-board LiDAR and camera sensors. Our method contributes an innovative fusion of semantic segmentation using neural networks with computational geometry that is robust to individual sensor and method failure. We construct a high-fidelity simulated city in the Unreal game engine with a statistically accurate representation of rooftop obstacles. We show our method leads to a greater than 4% improvement in accuracy for landing site identification compared to using LiDAR only.

    This work has broad impact for the safety of sUAS in cities as well as for Urban Air Mobility (UAM). Our methods identify thousands of additional rooftop landing sites in cities, which can provide safe landing zones in the event of emergencies.
    However, the maps we create are limited by the availability, accuracy, and resolution of archived data. Methods for quantifying data uncertainty or performing real-time map updates from a fleet of sUAS are left for future work.

    PhD, Robotics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/170026/1/jdcasta_1.pd
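    One contribution above, finding touchdown points that maximize distance to obstacles on a rooftop polygon with interior holes, can be illustrated with a brute-force clearance search. The sketch below assumes the shapely library and is not the thesis code; a more principled alternative is a pole-of-inaccessibility solver such as shapely.ops.polylabel.

```python
# A minimal sketch, assuming the shapely library (not the thesis code), of
# choosing a touchdown point that maximizes clearance on a flat-roof polygon
# whose interior holes represent obstacles, via brute-force interior sampling.
import numpy as np
from shapely.geometry import Point, Polygon

def touchdown_point(roof, grid_step=0.25):
    """Return (point, clearance) for the interior sample farthest from any
    boundary. Polygon.boundary includes interior rings (obstacle holes), so
    distance to it measures true clearance."""
    minx, miny, maxx, maxy = roof.bounds
    best, best_d = None, -1.0
    for x in np.arange(minx, maxx, grid_step):
        for y in np.arange(miny, maxy, grid_step):
            p = Point(x, y)
            if roof.contains(p):
                d = roof.boundary.distance(p)
                if d > best_d:
                    best, best_d = p, d
    return best, best_d

# Hypothetical example: 10 m x 10 m roof with a 2 m x 2 m obstruction.
roof = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)],
               holes=[[(4, 4), (6, 4), (6, 6), (4, 6)]])
pt, clearance = touchdown_point(roof)  # lands away from edges and the hole
```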