8 research outputs found

    Total least squares registration of 3D surfaces

    Get PDF
    Co-registration of point clouds of partially scanned objects is the first step of the 3D modeling workflow. The aim of co-registration is to merge the overlapping point clouds by estimating the spatial transformation parameters. In the computer vision and photogrammetry domains, one of the most popular methods is the ICP (Iterative Closest Point) algorithm and its variants. 3D Least Squares (LS) matching methods exist as well (Gruen and Akca, 2005). The co-registration methods commonly use the least squares (LS) estimation method, in which the unknown transformation parameters of the (floating) search surface are functionally related to the observations of the (fixed) template surface. Here, the stochastic properties of the search surface are usually omitted. This omission is expected to be minor and does not disturb the solution vector significantly. However, the a posteriori covariance matrix will be affected by the neglected uncertainty of the function values of the search surface. This causes a deterioration of the realistic precision estimates. In order to overcome this limitation, we propose a method in which the stochastic properties of both the observations and the parameters are considered under an errors-in-variables (EIV) model. The experiments have been carried out using diverse laser scanning data sets, and the results of the EIV method have been compared with those of the ICP and the conventional LS matching methods.
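    As a point of reference for the conventional approach described above, the following is a minimal Python/NumPy sketch of a point-to-point ICP step: correspondences are found by nearest neighbour and the rigid transformation is estimated with an ordinary least-squares (SVD) update. It is a generic illustration rather than the EIV formulation proposed in the paper, and the function names and brute-force matching are assumptions made only to keep the example self-contained.

```python
import numpy as np

def best_fit_rigid_transform(template, search):
    """Least-squares rigid transform (R, t) mapping `search` onto `template`.

    Both arrays are (N, 3) with row-wise corresponding points; this is the
    closed-form SVD (Kabsch) solution used inside a point-to-point ICP step.
    """
    centroid_t = template.mean(axis=0)
    centroid_s = search.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (search - centroid_s).T @ (template - centroid_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = centroid_t - R @ centroid_s
    return R, t

def icp(template, search, n_iter=30):
    """Very small ICP loop: nearest-neighbour matching plus a rigid update."""
    current = search.copy()
    for _ in range(n_iter):
        # Brute-force nearest neighbours (acceptable for small demo clouds)
        d = np.linalg.norm(current[:, None, :] - template[None, :, :], axis=2)
        matches = template[d.argmin(axis=1)]
        R, t = best_fit_rigid_transform(matches, current)
        current = current @ R.T + t
    return current
```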

    Co-registration of 3D point clouds by using an errors-in-variables model

    Get PDF
    Co-registration of point clouds of partially scanned objects is the first step of the 3D modeling workflow. The aim of co-registration is to merge the overlapping point clouds by estimating the spatial transformation parameters. In the literature, one of the most popular methods is the ICP (Iterative Closest Point) algorithm and its variants. 3D least squares (LS) matching methods exist as well. In most of the co-registration methods, the stochastic properties of the search surfaces are omitted. This omission is expected to be minor and does not disturb the solution vector significantly. However, the a posteriori covariance matrix will be affected by the neglected uncertainty of the function values. This causes a deterioration of the realistic precision estimates. In order to overcome this limitation, we propose a new method in which the stochastic properties of both (template and search) surfaces are considered under an errors-in-variables (EIV) model. The experiments have been carried out using a close-range laser scanning data set, and the results of the conventional and EIV variants of the ICP matching method have been compared.
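    To make the errors-in-variables idea concrete in its simplest setting, the sketch below contrasts ordinary least squares (errors assumed only in the observation vector) with total least squares (errors in both the design matrix and the observations) on a toy linear model, using the classical SVD construction. It is a generic illustration of the EIV principle, not the surface-matching adjustment developed in the paper; the variable names and noise levels are assumptions.

```python
import numpy as np

def tls_fit(A, y):
    """Total-least-squares solution of A x ~ y, allowing errors in A and y.

    Classical SVD construction: augment [A | y] and take the right singular
    vector belonging to the smallest singular value.
    """
    Z = np.hstack([A, y.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1, :]        # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# Toy comparison: noise in both the design matrix and the observations
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(200, 2))
y = A_clean @ x_true + rng.normal(scale=0.05, size=200)
A_noisy = A_clean + rng.normal(scale=0.05, size=A_clean.shape)

x_ls = np.linalg.lstsq(A_noisy, y, rcond=None)[0]    # errors in y only
x_tls = tls_fit(A_noisy, y)                          # errors in A and y
print("LS :", x_ls)
print("TLS:", x_tls)
```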

    Total Least Squares Registration of 3D Surfaces

    No full text
    Co-registration of point clouds of partially scanned objects is the first step of the 3D modeling workflow. The aim of co-registration is to merge the overlapping point clouds by estimating the spatial transformation parameters. In the computer vision and photogrammetry domains, one of the most popular methods is the ICP (Iterative Closest Point) algorithm and its variants. 3D Least Squares (LS) matching methods exist as well (Gruen and Akca, 2005). The co-registration methods commonly use the least squares (LS) estimation method, in which the unknown transformation parameters of the (floating) search surface are functionally related to the observations of the (fixed) template surface. Here, the stochastic properties of the search surface are usually omitted. This omission is expected to be minor and does not disturb the solution vector significantly. However, the a posteriori covariance matrix will be affected by the neglected uncertainty of the function values of the search surface. This causes a deterioration of the realistic precision estimates. In order to overcome this limitation, we propose a method in which the stochastic properties of both the observations and the parameters are considered under an errors-in-variables (EIV) model. The experiments have been carried out using diverse laser scanning data sets, and the results of the EIV method have been compared with those of the ICP and the conventional LS matching methods.

    Semantic Segmentation of High-Resolution Airborne Images with Dual-Stream DeepLabV3+

    No full text
    In geospatial applications such as urban planning and land use management, automatic detection and classification of earth objects are essential and primary tasks. Among the prominent semantic segmentation algorithms, DeepLabV3+ stands out as a state-of-the-art CNN. Although the DeepLabV3+ model is capable of extracting multi-scale contextual information, there is still a need for multi-stream architectures and alternative training strategies that can leverage multi-modal geographic datasets. In this study, a new end-to-end dual-stream architecture for geospatial imagery was developed on the basis of the DeepLabV3+ architecture. As a result, spectral datasets other than RGB improved semantic segmentation accuracy when used as additional channels alongside height information. Furthermore, both the applied data augmentation and the Tversky loss function, which is sensitive to imbalanced data, yielded better overall accuracies. The new dual-stream architecture produced overall semantic segmentation accuracies of 88.87% and 87.39% on the Potsdam and Vaihingen datasets, respectively. Overall, enhancing established semantic segmentation networks has great potential to deliver higher model performance, and the contribution of geospatial data as a second stream alongside RGB was explicitly demonstrated.
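    As a rough illustration of the imbalance-aware loss mentioned above, the sketch below gives one plausible PyTorch formulation of the Tversky loss for multi-class segmentation. Which of the two weights penalizes false positives versus false negatives varies between papers, and the default values here are assumptions, not the settings used in the study.

```python
import torch

def tversky_loss(probs, target_onehot, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss for semantic segmentation.

    probs:         (N, C, H, W) soft-maxed class probabilities
    target_onehot: (N, C, H, W) one-hot ground truth
    alpha weights false positives and beta false negatives in this variant;
    alpha = beta = 0.5 reduces to the Dice loss.
    """
    dims = (0, 2, 3)                                   # sum over batch and pixels
    tp = (probs * target_onehot).sum(dims)
    fp = (probs * (1.0 - target_onehot)).sum(dims)
    fn = ((1.0 - probs) * target_onehot).sum(dims)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky).mean()                      # average over classes

# Usage with raw network logits (hypothetical tensors):
# loss = tversky_loss(torch.softmax(logits, dim=1), one_hot_labels.float())
```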

    Determining Roughness Angle of Limestone Using Optical Laser Scanner

    No full text
    In this study, a limestone rock core specimen with dimensions of 6.94 cm × 4.95 cm was subjected to tensile force by the Brazilian test, and rough surfaces were obtained. Following the Brazilian test, roughness angles were measured with a laser scanner along one side of the rock specimen. For this purpose, a Nextengine 3D Desktop scanner was used. Seventeen profiles were studied along the width of the core at a 0.3 mm interval. Approximately 10,000 points were produced for each profile, some in the "+" and some in the "-" direction along each profile. Maximum and minimum roughness angles were calculated as 65.58° and 1.56 × 10⁻⁵°, respectively. The average roughness angle value of the profiles is 13.87°. The percentages of roughness angles between 13 and 14 degrees were 2.65% and 2.70% for the "-" and "+" directions on the rock surface, respectively. Mathematical analyses of the 17 profiles showed that the roughness profiles can be expressed by 21st- to 30th-degree polynomial equations with a standard deviation of approximately 10⁻⁴ degrees.
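    A minimal sketch of how such profile statistics might be computed is given below: local roughness angles from consecutive height differences and a high-degree polynomial fit of each profile, using NumPy. The synthetic profile, the chosen polynomial degree, and the printed residual statistic are assumptions for illustration only, not the study's data or exact procedure.

```python
import numpy as np

def roughness_angles(x, z):
    """Local roughness angles (degrees) between consecutive profile points.

    x and z are 1-D arrays of along-profile position and surface height;
    the sign of the angle separates the "+" and "-" directions.
    """
    slopes = np.diff(z) / np.diff(x)
    return np.degrees(np.arctan(slopes))

def fit_profile(x, z, degree=25):
    """High-degree polynomial fit of a profile (the study reports 21st- to
    30th-degree fits; 25 is an arbitrary value in that range)."""
    p = np.polynomial.Polynomial.fit(x, z, degree)     # fitted in a scaled domain
    residual_std = np.std(z - p(x))
    return p, residual_std

# Demo on a synthetic profile; the real profiles come from the laser scans.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 49.5, 10_000)                     # ~10 000 points per profile
z = 0.2 * np.sin(0.8 * x) + 0.02 * rng.normal(size=x.size)
angles = roughness_angles(x, z)
print("max / mean |angle| [deg]:", angles.max(), np.abs(angles).mean())
p, sd = fit_profile(x, z)
print("residual standard deviation of the fit:", sd)
```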