12 research outputs found

    MODELLING ERRORS IN X-RAY FLUOROSCOPIC IMAGING SYSTEMS USING PHOTOGRAMMETRIC BUNDLE ADJUSTMENT WITH A DATA-DRIVEN SELF-CALIBRATION APPROACH

    X-ray imaging is a fundamental tool of routine clinical diagnosis. Fluoroscopic imaging can further acquire X-ray images at video frame rates, thus enabling non-invasive in-vivo motion studies of joints, the gastrointestinal tract, etc. For both the qualitative and quantitative analysis of static and dynamic X-ray images, the data should be free of systematic biases. Besides precise fabrication of hardware, software-based calibration solutions are commonly used for modelling the distortions. In this primary research study, a robust photogrammetric bundle adjustment was used to model the projective geometry of two fluoroscopic X-ray imaging systems. However, instead of relying on an expert photogrammetrist's knowledge and judgement to decide on a parametric model for describing the systematic errors, a self-tuning data-driven approach is used to model the complex non-linear distortion profile of the sensors. Quality control from the experiment showed that 0.06 mm to 0.09 mm 3D reconstruction accuracy was achievable post-calibration using merely 15 X-ray images. As part of the bundle adjustment, the location of the virtual fluoroscopic system relative to the target field can also be spatially resected with an RMSE between 3.10 mm and 3.31 mm.
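The reported accuracies are 3D reconstruction RMSE values computed over check points after the self-calibrating bundle adjustment. A minimal sketch of that quality-control metric (function name and data layout are hypothetical, not from the paper):

```python
import math

def rmse_3d(estimated, reference):
    """Pooled per-axis RMSE between reconstructed and reference 3D check
    points, each given as a sequence of (x, y, z) tuples."""
    assert len(estimated) == len(reference) and estimated
    sq = 0.0
    for (xe, ye, ze), (xr, yr, zr) in zip(estimated, reference):
        sq += (xe - xr) ** 2 + (ye - yr) ** 2 + (ze - zr) ** 2
    return math.sqrt(sq / (3 * len(estimated)))
```

A reported figure such as 0.06 mm would be this statistic evaluated over the surveyed check points.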

    A review of the use of terrestrial laser scanning application for change detection and deformation monitoring of structures

    Change detection and deformation monitoring is an active area of research within engineering surveying, as well as overlapping fields such as structural and civil engineering. The application of Terrestrial Laser Scanning (TLS) techniques for change detection and deformation monitoring of concrete structures has increased over the years, as illustrated by past studies. This paper presents a review of the literature on TLS application in the monitoring of structures and discusses registration and georeferencing of TLS point cloud data as a critical issue in the process chain of accurate deformation analysis. Past TLS research has shown trends in addressing issues such as accurate registration and georeferencing of the scans and the need for a stable reference frame, TLS error modelling and reduction, point cloud processing techniques for deformation analysis, scanner calibration, and assessing the potential of TLS in detecting sub-centimetre and millimetre deformations. However, several issues are still open to investigation as far as TLS is concerned in change detection and deformation monitoring studies, such as a rigorous and efficient workflow for point cloud processing in change detection and deformation analysis, incorporation of measurement geometry in deformation measurements of high-rise structures, design of data acquisition and quality assessment for precise measurements, and modelling the environmental effects on the performance of laser scanning. Even though some studies have attempted to address these issues, gaps remain as information is still limited. Some methods reviewed in the case studies have been applied in landslide monitoring and appear promising for monitoring structures in engineering surveying. Hence, a three-stage process model for deformation analysis is proposed. Furthermore, with technological advancements, new TLS instruments with better accuracy are being developed, necessitating further research for precise measurements in the monitoring of structures.
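One recurring step in the reviewed deformation-analysis workflows is testing per-point epoch-to-epoch distances against the combined measurement and registration noise. A minimal sketch of such a significance test (the k-sigma rule and names are illustrative assumptions, not a method from the review):

```python
def detect_deformation(distances, noise_sigma, k=3.0):
    """Flag points whose epoch-to-epoch distance exceeds k times the
    combined measurement/registration noise (simple global test).
    distances and noise_sigma are in the same units (e.g. metres)."""
    threshold = k * noise_sigma
    return [d > threshold for d in distances]
```

Only points flagged True would be treated as candidate deformations; the rest are indistinguishable from noise.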

    RANSAC approach for automated registration of terrestrial laser scans using linear features

    The registration of terrestrial laser scans (TLS) targets the problem of how to combine several laser scans in order to obtain better information about features than could be obtained from a single scan. The main goal of the registration process is to estimate the parameters that describe the geometric variation between the origins of datasets collected from different locations; scale, shift, and rotation parameters are usually used to describe such variation. This paper presents a framework for the registration of overlapping terrestrial laser scans by establishing an automatic matching strategy that uses 3D linear features. More specifically, invariant separation characteristics between 3D linear features extracted from the laser scans are used to establish hypothesized conjugate linear features between the scans. These candidate matches are then used to georeference the scans relative to a common reference frame. The registration workflow follows the well-known RANdom SAmple Consensus (RANSAC) method for determining the registration parameters, whereas the Iterative Closest Projected Point (ICPP) procedure is utilized to determine the most probable solution for the transformation parameters from several candidate solutions. The experimental results demonstrate that the proposed methodology can be used for the automatic registration of terrestrial laser scans using linear features.
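The matching strategy relies on separations between line pairs being invariant to rotation and translation, so agreeing separations across two scans suggest candidate correspondences. A minimal sketch of that hypothesis-generation step (the data layout and tolerance are illustrative assumptions; the RANSAC estimation and ICPP refinement are not shown):

```python
import itertools

def candidate_matches(sep_a, sep_b, tol=0.05):
    """Hypothesize conjugate linear features between two scans by comparing
    rigid-motion-invariant separations between line pairs.
    sep_a[i][j]: separation between lines i and j in scan A (likewise sep_b).
    Returns pairs of tentative correspondences ((i, k), (j, l))."""
    matches = []
    for i, j in itertools.combinations(range(len(sep_a)), 2):
        for k, l in itertools.combinations(range(len(sep_b)), 2):
            if abs(sep_a[i][j] - sep_b[k][l]) <= tol:
                matches.append(((i, k), (j, l)))
    return matches
```

A RANSAC loop would then repeatedly sample a minimal set of these hypotheses, estimate the transformation, and keep the solution with the largest consensus.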

    A novel quality control procedure for the evaluation of laser scanning data segmentation

    Over the past few years, laser scanning systems have been acknowledged as the leading tools for the collection of high-density 3D point clouds over physical surfaces for many different applications. However, no interpretation or scene classification is performed during the acquisition of these datasets. Consequently, the collected data must be processed to extract the required information. The segmentation procedure is usually considered the fundamental step in information extraction from laser scanning data. So far, various approaches have been developed for the segmentation of 3D laser scanning data. However, none of them is exempt from possible anomalies due to disregarding the internal characteristics of laser scanning data, improper selection of the segmentation thresholds, or other problems during the segmentation procedure. Therefore, quality control procedures are required to evaluate the segmentation outcome and report the frequency of instances of expected problems. A few quality control techniques have been proposed for the evaluation of laser scanning segmentation. These approaches usually require reference data and user intervention for the assessment of segmentation results. In order to resolve these problems, a new quality control procedure is introduced in this paper. This procedure makes hypotheses regarding potential problems that might take place in the segmentation process, detects instances of such problems, quantifies their frequency, and suggests possible actions to remedy them. The feasibility of the proposed approach is verified through quantitative evaluation of planar and linear/cylindrical segmentation outcomes from two recently developed parameter-domain and spatial-domain segmentation techniques.
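A quality check that needs no reference data can, for instance, test whether each planar segment's fit residuals stay within the sensor noise; segments that fail are candidate instances of segmentation problems. A minimal sketch of such a check (the plane-fit test and thresholding rule are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def plane_fit_rms(points):
    """RMS of orthogonal residuals to the best-fit plane through the points
    (total least squares via SVD of the centred coordinates)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # right-singular vector of the smallest singular value = plane normal
    normal = np.linalg.svd(centered)[2][-1]
    return float(np.sqrt(np.mean((centered @ normal) ** 2)))

def flag_bad_segments(segments, noise_sigma, k=3.0):
    """Flag segments whose plane-fit RMS exceeds k times the sensor noise,
    i.e. candidate non-planar or wrongly merged segments."""
    return [plane_fit_rms(seg) > k * noise_sigma for seg in segments]
```

Counting the flagged segments gives the kind of problem-frequency report the abstract describes.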

    Performance of parameter-domain and spatial-domain pole-like feature segmentation using single and multiple terrestrial laser scans

    Terrestrial laser scanning (TLS) systems have been established as a leading tool for the acquisition of high-density three-dimensional point clouds from physical objects. The point clouds collected by these systems can be utilized for a wide spectrum of object extraction, modelling, and monitoring applications. Pole-like features are among the most important objects that can be extracted from TLS data, especially data acquired in urban areas and industrial sites. However, these features cannot be completely extracted and modelled using a single TLS scan due to significant local point density variations and occlusions caused by other objects. Therefore, multiple TLS scans from different perspectives should be integrated through a registration procedure to provide complete coverage of the pole-like features in a scene. To date, different segmentation approaches have been proposed for the extraction of pole-like features from either single or multiple-registered TLS scans. These approaches do not consider the internal characteristics of a TLS point cloud (local point density variations and noise level in the data) and usually suffer from computational inefficiency. To overcome these problems, two recently developed PCA-based parameter-domain and spatial-domain approaches for the segmentation of pole-like features are introduced in this paper, and their performance in extracting pole-like features from single and multiple-registered TLS scans is investigated. The alignment of the utilized TLS scans is implemented using an Iterative Closest Projected Point (ICPP) registration procedure. Qualitative and quantitative evaluation of the extracted pole-like features from single and multiple-registered TLS scans, using both of the proposed segmentation approaches, is conducted to verify that more complete pole-like features are extracted using multiple-registered TLS scans.
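PCA-based detection of pole-like (linear) structures typically rates a point neighbourhood by how strongly the first eigenvalue of its covariance dominates. A minimal sketch of such a linearity measure (the exact dimensionality features used in the paper may differ):

```python
import numpy as np

def linearity(points):
    """PCA linearity measure (l1 - l2) / l1 from the eigenvalues l1 >= l2 >= l3
    of the neighbourhood covariance; values near 1 indicate a line/pole-like
    neighbourhood, values near 0 a planar or volumetric one."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov((pts - pts.mean(axis=0)).T)
    l1, l2, _ = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return (l1 - l2) / l1
```

Thresholding this measure per neighbourhood is one common way to separate pole-like candidates from the rest of the cloud.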

    Toward an Automatic Calibration of Dual Fluoroscopy Imaging Systems

    High-speed dual fluoroscopy (DF) imaging provides a novel, in-vivo solution to quantify the six-degree-of-freedom skeletal kinematics of humans and animals with sub-millimetre accuracy and high temporal resolution. A rigorous geometric calibration of DF system parameters is essential to ensure precise bony rotation and translation measurements. One way to achieve the system calibration is by performing a bundle adjustment with self-calibration. A bundle adjustment-based system calibration was recently achieved for the first time and has been shown to be robust, precise, and straightforward. Nevertheless, due to the inherent absence of colour/semantic information in DF images, a significant amount of user input is needed to prepare the image observations for the bundle adjustment. This paper introduces a semi-automated methodology to minimise the amount of user input required to process calibration images and thereby facilitate the calibration task. The methodology is optimized for processing images acquired over a custom-made calibration frame with radio-opaque spherical targets. Canny edge detection is used to find distinct structural components of the calibration images. Edge linking is applied to cluster the edge pixels into unique groups. Principal components analysis is utilized to automatically detect the calibration targets from the groups and to filter out possible outliers. Ellipse fitting is utilized to achieve the spatial measurements as well as to perform quality analysis over the detected targets. Single photo resection is used together with a template matching procedure to establish the image-to-object point correspondence and to simplify target identification. The proposed methodology provided 56,254 identified targets from 411 images, which were used to run a second bundle adjustment-based DF system calibration. Compared to a previous fully manual procedure, the proposed methodology significantly reduced the amount of user input needed for processing the calibration images. In addition, the bundle adjustment calibration reported a 50% improvement in terms of image observation residuals.
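The pipeline ends with fitting a conic to each detected target's edge pixels to measure its image coordinates. As a simplified stand-in for the ellipse fit, a minimal algebraic (Kasa) circle fit illustrates the idea (the actual work fits ellipses, and the function name is hypothetical):

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) circle fit to edge pixels: solves the linear system
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in least squares.
    Returns the centre (cx, cy) and radius r."""
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    b = np.asarray(xs, dtype=float) ** 2 + np.asarray(ys, dtype=float) ** 2
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx ** 2 + cy ** 2)
    return cx, cy, r
```

The fitted centre serves as the target's image observation, and the fit residuals can double as a per-target quality measure, much as the abstract describes for the ellipse fit.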

    A ROBUST REGISTRATION ALGORITHM FOR POINT CLOUDS FROM UAV IMAGES FOR CHANGE DETECTION

    Landslides are among the major threats to urban landscapes and man-made infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs enables the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. Traditional methods for point-cloud-based landslide monitoring rely on a variation of the Iterative Closest Point (ICP) registration procedure to align reconstructed surfaces from different epochs to a common reference frame. However, ICP-based registration can sometimes fail or may not provide sufficient accuracy; for example, the registration of point clouds from different epochs might converge to a local minimum due to a lack of geometric variability within the data, and manual interaction is required to exclude non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous estimation of all registration parameters, including the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs, via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch, using the images captured in that epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera, which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (i.e., May 2014 and May 2015). Note that due to the coarse accuracy of the on-board GPS receiver (e.g., +/- 5-10 m), the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances, signifying detected changes, varying from 20 cm to 4 m were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances there should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.
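Once the epochs are co-registered, change is quantified by distances between the two surfaces. A brute-force nearest-neighbour distance is a crude stand-in that conveys the idea (names are hypothetical; real workflows use true point-to-surface normal distances and spatial indexing for speed):

```python
import math

def nearest_distances(cloud_a, cloud_b):
    """For each point in cloud_a, the Euclidean distance to its closest
    point in cloud_b (O(n*m) stand-in for a normal-distance computation)."""
    return [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]

def mean_offset(cloud_a, cloud_b):
    """Average inter-epoch distance; over stationary patches this should
    approach zero and thus serves as a registration quality check."""
    ds = nearest_distances(cloud_a, cloud_b)
    return sum(ds) / len(ds)
```

Evaluating `mean_offset` only over known non-active patches mirrors the quality-control step that yielded the approximately 4 cm figure.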

    BUNDLE ADJUSTMENT-BASED STABILITY ANALYSIS METHOD WITH A CASE STUDY OF A DUAL FLUOROSCOPY IMAGING SYSTEM

    A fundamental task in photogrammetry is the temporal stability analysis of a camera or imaging system's calibration parameters. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system, and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature, each with different methodological bases, advantages, and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.
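The stability metric compares how the same object space maps through two temporal calibration sets. A minimal sketch of that idea using an idealized pinhole projection and an image-space RMSE (a simplification of the collinearity-based method; names and parameters are hypothetical):

```python
import math

def project(point, f, cx, cy):
    """Idealized pinhole projection of a camera-frame point (X, Y, Z) with
    focal length f and principal point (cx, cy); distortions are omitted."""
    X, Y, Z = point
    return (f * X / Z + cx, f * Y / Z + cy)

def stability_rmse(points, calib_a, calib_b):
    """Image-space RMSE when the same object points are projected through
    two temporal calibration sets; small values indicate a stable system."""
    sq = 0.0
    for p in points:
        xa, ya = project(p, *calib_a)
        xb, yb = project(p, *calib_b)
        sq += (xa - xb) ** 2 + (ya - yb) ** 2
    return math.sqrt(sq / (2 * len(points)))
```

The paper's method instead differences full collinearity-model reconstructions and reports the discrepancy as a 3D coordinate RMSE, but the comparison principle is the same.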