59 research outputs found

    The influence of the spatial distribution of 2D features on pose estimation for a visual pipe mapping sensor

    This paper considers factors which influence the visual motion estimation of a sensor system designed for visually mapping the internal surface of pipework using omnidirectional lenses. In particular, a systematic investigation of the error caused by a non-uniform 2D spatial distribution of features on the resultant estimate of camera pose is presented. Non-uniformity is known to cause problems and is commonly mitigated using techniques such as bucketing; however, a rigorous analysis of this problem has not been carried out in the literature. The pipe's inner surface tends to be uniform and texture-poor, driving the need to understand and quantify the feature matching process. A simulation environment is described in which the investigation was conducted in a controlled manner. Pose error and uncertainty are considered as a function of the number of correspondences and the feature coverage pattern, in the form of contiguous and equiangular coverage around a circular image acquired by a fisheye lens. It is established that beyond 16 feature matches between the images, coverage is the most influential variable, with the equiangular coverage pattern leading to a greater rate of reduction in pose error with increasing coverage. The application of the results of the simulation to a real-world dataset is also provided.
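    As an illustrative sketch (not the paper's simulator), the snippet below generates the two coverage patterns the abstract describes -- a single contiguous arc versus the same total angular extent split into evenly spaced (equiangular) sectors -- as feature bearing angles on a circular fisheye image. The function name, sector count and random sampling are assumptions for illustration only.

```python
import numpy as np

def feature_angles(n_features, coverage_frac, pattern, n_sectors=8, rng=None):
    """Sample feature bearing angles on a circular (fisheye) image.

    coverage_frac: fraction of the full 360 deg that contains features.
    pattern: 'contiguous' -> one arc of that angular extent;
             'equiangular' -> the same total extent split into n_sectors
             arcs spaced evenly around the circle.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 2 * np.pi * coverage_frac
    if pattern == "contiguous":
        starts, widths = np.array([0.0]), np.array([total])
    else:  # equiangular
        starts = np.linspace(0.0, 2 * np.pi, n_sectors, endpoint=False)
        widths = np.full(n_sectors, total / n_sectors)
    # assign each feature to an arc, then sample uniformly within it
    arc = rng.integers(0, len(starts), size=n_features)
    return starts[arc] + rng.uniform(0.0, 1.0, n_features) * widths[arc]

# Example: 16 matches covering 25% of the image, under both patterns
ang_contig = feature_angles(16, 0.25, "contiguous")
ang_equi = feature_angles(16, 0.25, "equiangular")
```

    In a full simulation, such bearings would be paired with synthetic 3-D pipe geometry and passed to a pose solver (e.g. a PnP routine) so that pose error could be measured against ground truth.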

    Error Model of Misalignment Error in a Radial 3D Scanner

    A radial 3D structured-light scanner was developed from a laser projector and a wide field of view machine vision camera to inspect two to four inch diameter pipes, primarily in the nuclear industry. To identify the nature and the spatial extent of defective regions, the system constructs a surface point cloud. A dominant source of error in the system is caused by manufacturing tolerances, which lead to misalignment between the laser projector and the camera. This causes a triangulation error, reducing the accuracy of the result. In this paper, an error model of the misalignment between the laser and the image plane is derived. For a given target distance, we derive an almost linear relationship between the angular error in degrees and the error in reported radius (distance from the probe to the surface) in mm, and find that to meet the target 0.1 mm accuracy on a 4 inch pipe, the misalignment needs to be controlled to less than 0.05 degrees. Future work will consider a post-manufacturing calibration routine to compensate for this misalignment.
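    A purely illustrative toy model of this sensitivity (not the error model derived in the paper) is sketched below: the camera is assumed to sit on the pipe axis, the laser plane is nominally perpendicular to the axis at an assumed 25 mm baseline, and a small tilt of that plane biases the triangulated radius. The near-linear growth of the error with tilt angle can then be inspected numerically.

```python
import numpy as np

def reported_radius(true_r, baseline, tilt_deg):
    """Toy model: camera on the pipe axis at z=0, laser plane nominally
    perpendicular to the axis at z=baseline.  Tilting the laser plane by
    tilt_deg moves the ring point on the wall along z, but the (uncalibrated)
    triangulation still assumes z=baseline, so the recovered radius is biased."""
    tilt = np.radians(tilt_deg)
    z_actual = baseline + true_r * np.tan(tilt)   # where the ring really hits the wall
    theta = np.arctan2(true_r, z_actual)          # viewing angle the camera measures
    return baseline * np.tan(theta)               # radius reported under the nominal model

r_true, b = 50.8, 25.0   # 4 inch pipe radius [mm]; 25 mm baseline is an assumption
for tilt in (0.01, 0.05, 0.1, 0.5):
    err = reported_radius(r_true, b, tilt) - r_true
    print(f"{tilt:5.2f} deg misalignment -> {err:+.3f} mm radius error")
```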

    A new probe concept for internal pipework inspection

    The interior visual inspection of nuclear pipework is a critical inspection activity required to ensure the continued safe, reliable operation of plant and thus avoid costly outages. Typically, the video output from a manually deployed probe is viewed by an operator online with the task of identifying and estimating the location of surface defects such as cracks, corrosion and pitting. However, it is very difficult to estimate the nature and spatial extent of defects from the often disorientating small field-of-view video of a relatively large structure. This work describes a new visual inspection system incorporating a wide field of view machine vision camera and additional sensors, designed for inspecting 3 - 6 inch diameter pipes. The output of the system is a photorealistic model of the internal surface of the pipework. The generation of this model relies upon a core component of the system: image feature extraction, which is used to estimate the camera location. This paper considers the accuracy of this estimation as a function of the number and configuration of the extracted image features.

    Development of a novel probe for remote visual inspection of pipework

    The interior visual inspection of pipework is a critical inspection activity required to ensure the continued safe, reliable operation of plant and thus avoid costly outages. Typically, the video output from a manually deployed probe is viewed by an operator with the task of identifying and estimating the location of surface defects such as cracks, corrosion and pitting. However, it is very difficult to estimate the nature and spatial extent of defects from the often disorientating small field-of-view video of a relatively large structure. This paper describes the development of a new visual inspection system designed for inspecting 3 - 6 inch diameter pipes. The system uses a high-resolution camera and a structure-from-motion (SfM) algorithm to compute the trajectory of the probe through the pipe. In addition, a laser profiler is used to measure the inner surface of the pipe and generate a meshed point cloud. The camera images are projected onto the mesh, and the final output of the system is a photorealistic 3-D model of the internal surface of the pipework.
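    The projection of camera images onto the laser-derived mesh can be illustrated with a minimal sketch, assuming a pinhole camera with intrinsics K and an SfM pose (R, t); this is a generic texturing step under those assumptions, not the system's actual implementation.

```python
import numpy as np

def colour_vertices(vertices, image, K, R, t):
    """Project 3-D mesh vertices into a camera with pose (R, t) -- e.g. as
    estimated by SfM -- and sample the image colour at each projection.
    vertices: (N,3) world coords; image: HxWx3; K: 3x3 pinhole intrinsics."""
    cam = (R @ vertices.T + t.reshape(3, 1)).T        # world -> camera frame
    colours = np.zeros((len(vertices), 3), dtype=image.dtype)
    valid = cam[:, 2] > 1e-6                          # keep points in front of the camera
    uv = K @ cam[valid].T
    uv = (uv[:2] / uv[2]).T                           # perspective divide -> pixel coords
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # discard projections outside the image
    idx = np.flatnonzero(valid)[inside]
    colours[idx] = image[v[inside], u[inside]]
    return colours, idx
```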

    Capturing 3D textured inner pipe surfaces for sewer inspection

    Inspection robots equipped with TV camera technology are commonly used to detect defects in sewer systems. Currently, these defects are predominantly identified by human assessors, a process that is not only time-consuming and costly but also susceptible to errors. Furthermore, existing systems primarily offer only information from 2D imaging for damage assessment, limiting the accurate identification of certain types of damage due to the absence of 3D information. Thus, the solid quantification and characterisation of damage, which is needed to evaluate remediation measures and the associated costs, is limited by the sensing technology. In this paper, we introduce an innovative system designed for acquiring multimodal image data using a camera measuring head capable of capturing both colour and 3D images with high accuracy and temporal availability based on the single-shot principle. This sensor head, affixed to a carriage, continuously captures the sewer's inner wall during transit. The collected data serves as the basis for an AI-based automatic analysis of pipe damage as part of the further assessment and monitoring of sewers. Moreover, this paper focuses on the fundamental considerations behind the design of the multimodal measuring head and elaborates on some application-specific implementation details. These include data pre-processing, 3D reconstruction, registration of texture and depth images, as well as 2D-3D registration and 3D image fusion.
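    In generic form, the 2D-3D registration step mentioned above corresponds to the standard RGB-D registration of a depth image into the colour camera's frame. A minimal sketch under that assumption is given below; the intrinsics K_d and K_c and the depth-to-colour extrinsics (R, t) are placeholders, not the authors' calibration.

```python
import numpy as np

def register_depth_to_colour(depth, K_d, K_c, R, t):
    """Map each valid depth pixel into the colour camera so that colour and
    3-D data share a common pixel grid (a standard RGB-D registration step).
    depth: HxW in metres; K_d, K_c: intrinsics; (R, t): depth -> colour extrinsics."""
    h, w = depth.shape
    v, u = np.indices((h, w))
    z = depth.ravel()
    ok = z > 0                                     # ignore missing depth
    # back-project valid depth pixels to 3-D points in the depth-camera frame
    pix = np.stack([u.ravel()[ok], v.ravel()[ok], np.ones(ok.sum())])
    pts_d = np.linalg.inv(K_d) @ (pix * z[ok])
    # transform into the colour-camera frame and project with its intrinsics
    pts_c = R @ pts_d + t.reshape(3, 1)
    uv = K_c @ pts_c
    uv = uv[:2] / uv[2]
    return uv.T, pts_c.T                           # colour-image coords and 3-D points
```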

    Simultaneous localization and mapping for inspection robots in water and sewer pipe networks: a review

    At the present time, water and sewer pipe networks are predominantly inspected manually. In the near future, smart cities will perform intelligent autonomous monitoring of buried pipe networks using teams of small robots. These robots, equipped with all necessary computational facilities and sensors (optical, acoustic, inertial, thermal, pressure and others), will be able to inspect pipes whilst navigating, self-localising and communicating information about the pipe condition and faults such as leaks or blockages to human operators for monitoring and decision support. The predominantly manual inspection of pipe networks will be replaced with teams of autonomous inspection robots that can operate for long periods of time over a large spatial scale. Reliable autonomous navigation and reporting of faults at this scale requires effective localization and mapping, which is the estimation of the robot's position and of a map of its surrounding environment. This survey presents an overview of state-of-the-art works on robot simultaneous localization and mapping (SLAM) with a focus on water and sewer pipe networks. It considers various aspects of the SLAM problem in pipes, from the motivation, to the water industry requirements, modern SLAM methods, map types and sensors suited to pipes. Future challenges such as robustness for long-term robot operation in pipes are discussed, including how prior knowledge, e.g. from geographic information systems (GIS), can be used to build map estimates and improve multi-robot SLAM in the pipe environment.

    Measuring the interior of in-use sewage pipes using 3D vision

    Sewage pipes may be renovated using tailored linings. However, the interior diameter of the pipes must be measured prior to renovation. This paper investigates the use of 3D vision sensors for measuring the interior diameter of sewage pipes, removing the need for human entry into the pipes. The 3D sensors reside in a waterproof box that is lowered into the well. A RANSAC-based method is used for cylinder estimation from the acquired point clouds of the pipe, and the diameter of these cylinders is used as a measure of the interior pipe diameter. The method is tested in 74 real-world sewage pipes with diameters between 150 and 1100 mm. The diameter of 68 pipes is measured within a tolerance of ±20 mm, whereas 8 pipes are above this tolerance. It was found that the faulty estimates can be detected in the field using a combination of human-in-the-loop qualitative and quantitative data-driven measures.
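    The RANSAC-based estimation can be illustrated with a simplified sketch that assumes the pipe axis is already known, reducing the cylinder fit to a RANSAC circle fit on the cross-section. This is a stand-in for the method in the paper, not its implementation; the tolerance and iteration count are arbitrary.

```python
import numpy as np

def ransac_circle_diameter(points_2d, n_iter=500, tol=0.005, rng=None):
    """Estimate a pipe's interior diameter from a point-cloud cross-section.
    points_2d: (N,2) points projected onto the plane perpendicular to the
    (assumed known) pipe axis, in metres.  Returns the best circle's diameter."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best = 0, None
    for _ in range(n_iter):
        # three sampled points define a candidate circle
        p1, p2, p3 = points_2d[rng.choice(len(points_2d), 3, replace=False)]
        # solve |c - p1|^2 = |c - p2|^2 = |c - p3|^2 for the centre c
        A = 2 * np.array([p2 - p1, p3 - p1])
        b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        if abs(np.linalg.det(A)) < 1e-12:          # nearly collinear sample
            continue
        c = np.linalg.solve(A, b)
        r = np.linalg.norm(p1 - c)
        residuals = np.abs(np.linalg.norm(points_2d - c, axis=1) - r)
        inliers = np.count_nonzero(residuals < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, (c, r)
    return 2 * best[1] if best else None
```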

    Image-based 3-D reconstruction of constrained environments

    Nuclear power plays an important role in the United Kingdom's electricity generation infrastructure, providing a reliable baseload of low-carbon electricity. The Advanced Gas-cooled Reactor (AGR) design makes up approximately 50% of the existing fleet; however, many of the operating reactors have exceeded their original design lifetimes. To ensure safe reactor operation, engineers perform periodic in-core visual inspections of reactor components to monitor the structural health of the core as it ages. However, the inspection mechanisms currently deployed provide limited structural information about the fuel channel or defects. This thesis investigates the suitability of image-based 3-D reconstruction techniques to acquire 3-D structural geometry and so enable improved diagnostic and prognostic abilities for inspection engineers. The application of image-based 3-D reconstruction to in-core inspection footage highlights significant challenges, most predominantly that the image saliency proves insufficient for general reconstruction frameworks. The contribution of the thesis is threefold. Firstly, a novel semi-dense matching scheme is presented which exploits sparse and dense image correspondence in combination with a novel intra-image region strength approach to improve the stability of the correspondence between images; this yields a 138.53% increase in correct feature matches over similar state-of-the-art image matching paradigms. Secondly, a bespoke incremental Structure-from-Motion (SfM) framework, the Constrained Homogeneous SfM (CH-SfM), is introduced, which is able to derive structure from deficient feature spaces and constrained environments. Thirdly, the CH-SfM framework is applied to remote visual inspection footage gathered within AGR fuel channels, outperforming other state-of-the-art reconstruction approaches and extracting representative 3-D structural geometry from orientational scans and fully circumferential reconstructions. This is demonstrated on in-core and laboratory footage, achieving an approximate 3-D point density of 2.785 - 23.8025 NX/cm² for real in-core inspection footage and high-quality laboratory footage respectively. The demonstrated novelties have applicability to other constrained or feature-poor environments, with future work looking to produce fully dense, photo-realistic 3-D reconstructions.
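    As a generic stand-in for the idea of combining sparse and dense image correspondence (not the thesis's CH-SfM pipeline or its intra-image region strength measure), the sketch below pairs ORB feature matches with subsampled dense optical-flow correspondences using OpenCV.

```python
import cv2
import numpy as np

def sparse_and_dense_matches(img1, img2, top_n=200):
    """Combine sparse ORB matches with dense Farneback optical-flow
    correspondences between two greyscale frames -- a generic illustration
    of mixing sparse and dense correspondence for feature-poor footage."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    sparse = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:top_n]
    sparse_pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in sparse]

    # dense flow gives a correspondence for every pixel; subsample on a grid
    flow = cv2.calcOpticalFlowFarneback(img1, img2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.mgrid[0:img1.shape[0]:20, 0:img1.shape[1]:20]
    dense_pairs = [((x, y), (x + flow[y, x, 0], y + flow[y, x, 1]))
                   for y, x in zip(ys.ravel(), xs.ravel())]
    return sparse_pairs, dense_pairs
```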

    Fast Cylinder and Plane Extraction from Depth Cameras for Visual Odometry

    This paper presents CAPE, a method to extract planes and cylinder segments from organized point clouds, which processes 640x480 depth images on a single CPU core at an average of 300 Hz by operating on a grid of planar cells. Compared to state-of-the-art plane extraction, the latency of CAPE is more consistent and 4-10 times lower, depending on the scene, and we demonstrate empirically that applying CAPE to visual odometry can improve trajectory estimation on scenes made of cylindrical surfaces (e.g. tunnels), whereas using a plane extraction approach that is not curve-aware deteriorates performance on these scenes. To use these geometric primitives in visual odometry, we propose extending a probabilistic RGB-D odometry framework based on points, lines and planes to cylinder primitives. Following this framework, CAPE runs on fused depth maps, and the parameters of cylinders are modelled probabilistically to account for uncertainty and to weight the pose optimization residuals accordingly. (Accepted to IROS 2018.)
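    The grid-of-planar-cells idea can be sketched as follows: split an organised point cloud into fixed-size cells and fit a plane per cell by PCA, keeping the cells whose residual variance is small. This is an illustration of the cell-based principle only, not the CAPE implementation; the cell size and threshold are arbitrary.

```python
import numpy as np

def planar_cells(points, cell=20, max_var=1e-4):
    """Classify cells of an organised point cloud (H x W x 3, metres) as
    planar or not by fitting a plane per cell with PCA."""
    H, W, _ = points.shape
    normals = np.zeros((H // cell, W // cell, 3))
    planar = np.zeros((H // cell, W // cell), dtype=bool)
    for i in range(H // cell):
        for j in range(W // cell):
            p = points[i*cell:(i+1)*cell, j*cell:(j+1)*cell].reshape(-1, 3)
            p = p[np.isfinite(p).all(axis=1) & (p[:, 2] > 0)]   # drop invalid depth
            if len(p) < cell * cell // 2:                        # too few valid points
                continue
            centred = p - p.mean(axis=0)
            # smallest-eigenvalue eigenvector of the covariance is the plane normal
            w, v = np.linalg.eigh(centred.T @ centred / len(p))
            planar[i, j] = w[0] < max_var    # small out-of-plane variance => planar cell
            normals[i, j] = v[:, 0]
    return planar, normals
```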